[
{
"msg_contents": "I've got a report that is starting to take too long to run. I'm going to\ncreate a lookup table that should speed up the results, but first I've got\nto create the lookup table.\n\nI honestly don't care how long the query takes to run, I just want to run it\nwithout causing a major performance impact on other operations. The query\nseems to take forever even when I limit the results to just 10, so I don't\nknow if I can get good results by splitting the query into groups of queries\n(for example, for a years worth of data do 12 queries, one for each month or\nmaybe 365 queries, one for each day) or if there is a psql equivalent to\n\"nice.\"\n\nI've tried `nice psql` in the past and I don't think that had much impact,\nbut I haven't tried it on this query.\n\nHere is the query (BTW, there will be a corresponding \"max\" version of this\nquery as well):\nINSERT INTO usage_sessions_min (accountid,atime,sessionid)\nselect accountid, min(atime) as atime, sessionid from usage_access \ngroup by accountid,sessionid;\n\natime is a timestamptz, accountis is varchar(12) and sessionid is int.\n\nI've tried to do an explain analyze of this query, but its been running for\nhours already and I don't know when it will finish.\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nView samples of Elite CMS in action by visiting\nhttp://www.elitecms.com/\n\n\n",
"msg_date": "Thu, 24 Mar 2005 13:07:39 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Preventing query from hogging server"
},
{
"msg_contents": "while you weren't looking, Matthew Nuzum wrote:\n\n> select accountid, min(atime) as atime, sessionid from usage_access\n> group by accountid,sessionid;\n\nTry something along the lines of:\n\nselect ua.accountid\n , (select atime\n from usage_access\n where sessionid = ua.sessionid\n and accountid = ua.accountid\n order by atime asc\n limit 1\n ) as atime\n , ua.sessionid\n from usage_access ua\n group by accountid\n , sessionid\n\nmin() and max() currently do table scans, which, on large tables, or\neven moderately sized tables with large numbers of accounts/sessions,\ncan add up. You'll need to replace asc with desc in the subquery for\nthe max() version.\n\nThis form cheats a bit and uses the index to find the highest and\nlowest values, provided you've created the appropriate indices.\n\nThis is, IIRC, in the FAQ.\n\n/rls\n\n-- \n:wq\n",
"msg_date": "Thu, 24 Mar 2005 13:24:12 -0600",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing query from hogging server"
},
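Rosser's rewrite only pays off if the planner can satisfy each correlated subquery from an index. A minimal sketch of the kind of composite index this assumes (the index name is illustrative, not from the thread):

    CREATE INDEX usage_access_acct_sess_atime_idx
        ON usage_access (accountid, sessionid, atime);

The "max" variant can walk the same index in the opposite direction. Whether the planner actually turns each subquery into a single index probe on this PostgreSQL version is worth confirming with EXPLAIN before committing to the rewrite.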
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n> Here is the query (BTW, there will be a corresponding \"max\" version of this\n> query as well):\n> INSERT INTO usage_sessions_min (accountid,atime,sessionid)\n> select accountid, min(atime) as atime, sessionid from usage_access \n> group by accountid,sessionid;\n\nHow many rows in usage_access? How many groups do you expect?\n(Approximate answers are fine.) What PG version is this, and\nwhat's your sort_mem setting?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 14:43:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing query from hogging server "
},
{
"msg_contents": "\n> How many rows in usage_access? How many groups do you expect?\n> (Approximate answers are fine.) What PG version is this, and\n> what's your sort_mem setting?\n> \n> \t\t\tregards, tom lane\n\nI believe there are about 40,000,000 rows, I expect there to be about\n10,000,000 groups. PostgreSQL version is 7.3.2 and the sort_mem is at the\ndefault setting.\n\n(I know that's an old version. We've been testing with 7.4 now and are\nnearly ready to upgrade.)\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nView samples of Elite CMS in action by visiting\nhttp://www.followers.net/portfolio/\n\n",
"msg_date": "Thu, 24 Mar 2005 13:53:37 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing query from hogging server "
},
{
"msg_contents": "> How many rows in usage_access?\n\nOh, I just got my explain analyze:\n QUERY\nPLAN \n----------------------------------------------------------------------------\n--------------------------------------------------------------------------\n Subquery Scan \"*SELECT*\" (cost=9499707.90..9856491.74 rows=3567838\nwidth=28) (actual time=11443537.58..12470835.17 rows=1198141 loops=1)\n -> Aggregate (cost=9499707.90..9856491.74 rows=3567838 width=28)\n(actual time=11443537.56..12466550.25 rows=1198141 loops=1)\n -> Group (cost=9499707.90..9767295.78 rows=35678384 width=28)\n(actual time=11443537.10..12408372.26 rows=35678383 loops=1)\n -> Sort (cost=9499707.90..9588903.86 rows=35678384\nwidth=28) (actual time=11443537.07..12035366.31 rows=35678383 loops=1)\n Sort Key: accountid, sessionid\n -> Seq Scan on usage_access (cost=0.00..1018901.84\nrows=35678384 width=28) (actual time=8.13..416580.35 rows=35678383 loops=1)\n Total runtime: 12625498.84 msec\n(7 rows)\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nView samples of Elite CMS in action by visiting\nhttp://www.followers.net/portfolio/\n\n",
"msg_date": "Thu, 24 Mar 2005 13:55:32 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing query from hogging server "
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n> I believe there are about 40,000,000 rows, I expect there to be about\n> 10,000,000 groups. PostgreSQL version is 7.3.2 and the sort_mem is at the\n> default setting.\n\nOkay. I doubt that the nearby suggestion to convert the min()s to\nindexscans will help at all, given those numbers --- there aren't enough\nrows per group to make it a win.\n\nI think you've just gotta put up with the sorting required to bring the\ngroups together. LIMIT or subdividing the query will not make it\nfaster, because the sort step is the expensive part. You could probably\nimprove matters by increasing sort_mem as much as you can stand ---\nmaybe something like 10M to 100M (instead of the default 1M). Obviously\nyou don't want to make it a big fraction of your available RAM, or it\nwill hurt the concurrent processing, but on modern machines I would\nthink you could give this a few tens of MB without any problem. (Note\nthat you want to just SET sort_mem in this one session, not increase it\nglobally.)\n\nI would strongly suggest doing the min and max calculations together:\n\n\tselect groupid, min(col), max(col) from ...\n\nbecause if you do them in two separate queries 90% of the effort will be\nduplicated.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 15:02:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing query from hogging server "
},
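Putting Tom's two suggestions together, the session might look roughly like this; the 64 MB figure and the combined target table usage_sessions_minmax are illustrative assumptions, not from the thread (sort_mem is specified in kB on this version):

    SET sort_mem = 65536;    -- session-local only, roughly 64 MB

    INSERT INTO usage_sessions_minmax (accountid, sessionid, min_atime, max_atime)
    SELECT accountid, sessionid, min(atime), max(atime)
    FROM usage_access
    GROUP BY accountid, sessionid;

    RESET sort_mem;

This keeps the expensive sort to a single pass over usage_access instead of two.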
{
"msg_contents": "\n> I would strongly suggest doing the min and max calculations together:\n> \n> \tselect groupid, min(col), max(col) from ...\n>\n> because if you do them in two separate queries 90% of the effort will be\n> duplicated.\n>\n>\t\t\tregards, tom lane\n\nThanks. Other than avoiding using too much sort mem, is there anything else\nI can do to ensure this query doesn't starve other processes for resources?\n\nDoing the explain analyze only increases my server load by 1 and seems to\nreadily relinquish CPU time, but previously when I had been running a test\nquery my server load rose to unacceptable levels.\n\nFWIW, the explain was run from psql running on the db server, the test query\nthe other day was run from one of the webservers. Should I run this on the\ndb server to minimize load?\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nView samples of Elite CMS in action by visiting\nhttp://www.followers.net/portfolio/\n\n",
"msg_date": "Thu, 24 Mar 2005 14:13:01 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing query from hogging server "
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n> Thanks. Other than avoiding using too much sort mem, is there anything else\n> I can do to ensure this query doesn't starve other processes for resources?\n\nNot a lot.\n\n> Doing the explain analyze only increases my server load by 1 and seems to\n> readily relinquish CPU time, but previously when I had been running a test\n> query my server load rose to unacceptable levels.\n\nInteresting. EXPLAIN ANALYZE is going to cause a bunch of\ngettimeofday() calls to be inserted ... maybe your kernel takes those as\nprocess preemption points? Seems unlikely, but ...\n\n> FWIW, the explain was run from psql running on the db server, the test query\n> the other day was run from one of the webservers. Should I run this on the\n> db server to minimize load?\n\nSince it's an insert/select, psql isn't participating in the data flow.\nIt's not going to matter where the psql process is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 15:19:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing query from hogging server "
},
{
"msg_contents": "On Thu, Mar 24, 2005 at 01:07:39PM -0600, Matthew Nuzum wrote:\n> I've tried `nice psql` in the past and I don't think that had much impact,\n> but I haven't tried it on this query.\n\nOn linux, nice will only help if the query is CPU-bound. On FreeBSD,\nnice affects I/O scheduling, as well as CPU, so it's a more effective\nmeans of limiting the impact of large queries. I don't know how other\nOS's handle this.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 28 Mar 2005 15:05:59 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing query from hogging server"
}
]
[
{
"msg_contents": "Using information found on the web, I've come up with some configuration \nand tuning parameters for a server/db that we will be implementing. I \nwas wondering if I could generate some feedback as to configuration and \ntuning so that I could compare my estimations with those of others.\n\nHost is AIX 5.1 with 4 cpu's and 4 GB ram. Postgresql will be sharing \nthis machine with other processes. Storage is an EMC storage array. \nThe DB itself is very simple. Two tables, one with 40-45 columns ( \nlargest column will likely contain no more than 32 chars of data ), the \nother with less than 5 columns ( largest column will contain no more \nthan 20 chars data ). Expected transactions will be along the order of \n~600K +- 100K inserts and ~600K +-200K updates per week.\n\nThanks\n",
"msg_date": "Thu, 24 Mar 2005 14:46:41 -0500",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration/Tuning of server/DB"
},
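The documents linked in the reply that follows cover this in depth; for orientation, the handful of postgresql.conf settings they concentrate on look roughly like the sketch below. Every value is only an illustrative starting point for a shared 4 GB machine, and the 8.0 parameter names are assumed (sort_mem/vacuum_mem are the 7.4-and-earlier equivalents of work_mem/maintenance_work_mem):

    shared_buffers = 20000            # 8 kB pages, ~160 MB; leave headroom for the other processes
    work_mem = 8192                   # kB per sort/hash operation
    maintenance_work_mem = 65536      # kB, for VACUUM and CREATE INDEX
    effective_cache_size = 200000     # 8 kB pages, ~1.6 GB of expected OS cache
    checkpoint_segments = 8           # modest write volume (~1.2M inserts/updates per week)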
{
"msg_contents": "Reid,\n\nThere are a few very valuable tuning documents that are part of the \nestablished PostgreSQL-related literature. You don't mention which \nversion of postgres you'll be running, but here are the documents \nyou'll find useful:\n\npostgresql.conf\n7.4: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/ \nannotated_conf_e.html\n8.0: http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\ngeneral tuning\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source � Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Mar 24, 2005, at 1:46 PM, Reid Thompson wrote:\n\n> Using information found on the web, I've come up with some \n> configuration and tuning parameters for a server/db that we will be \n> implementing. I was wondering if I could generate some feedback as to \n> configuration and tuning so that I could compare my estimations with \n> those of others.\n>\n> Host is AIX 5.1 with 4 cpu's and 4 GB ram. Postgresql will be sharing \n> this machine with other processes. Storage is an EMC storage array. \n> The DB itself is very simple. Two tables, one with 40-45 columns ( \n> largest column will likely contain no more than 32 chars of data ), \n> the other with less than 5 columns ( largest column will contain no \n> more than 20 chars data ). Expected transactions will be along the \n> order of ~600K +- 100K inserts and ~600K +-200K updates per week.\n>\n> Thanks\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n\n",
"msg_date": "Mon, 28 Mar 2005 09:44:49 -0600",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration/Tuning of server/DB"
}
]
[
{
"msg_contents": "v8.0.1 on a Sun v20Z running gentoo linux, 1 cpu, 1GB Ram, 1 10k scsi\ndisk\n\nI have a (fairly) newly rebuilt database. In the last month it has\nundergone extensive testing, hence thousands of inserts and deletes in\nthe table in question. After each mass unload/load cycle, I vacuum full\nanalyze verbose.\n\nI tried to build a test case to isolate the issue, but the problem does\nnot manifest itself, so I think I have somehow made postgresql angry. I\ncould drop the whole db and start over, but I am interested in not\nreproducing this issue.\n\nHere is the statement:\n\norfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE\nid_meas_type IN (SELECT * FROM meas_type_ids);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=11.53..42.06 rows=200 width=6) (actual\ntime=1.564..2.840 rows=552 loops=1)\n Hash Cond: (\"outer\".id_meas_type = \"inner\".id_meas_type)\n -> Seq Scan on int_sensor_meas_type (cost=0.00..25.36 rows=636\nwidth=10) (actual time=0.005..0.828 rows=748 loops=1)\n -> Hash (cost=11.03..11.03 rows=200 width=4) (actual\ntime=1.131..1.131 rows=0 loops=1)\n -> HashAggregate (cost=11.03..11.03 rows=200 width=4) (actual\ntime=0.584..0.826 rows=552 loops=1)\n -> Seq Scan on meas_type_ids (cost=0.00..9.42 rows=642\nwidth=4) (actual time=0.002..0.231 rows=552 loops=1)\n Total runtime: 2499616.216 ms\n(7 rows)\n\nYes, that's *40 minutes*. It drives cpu (as viewed in top) to 99%+ for\nthe entire duration of the query, but %mem hangs at 1% or lower.\n\nmeas_type_ids is a temp table with the id's I want to nuke. Here is a\nsimilar query behaving as expected:\n\norfs=# explain analyze DELETE FROM int_station_sensor WHERE id_sensor\nIN (SELECT * FROM sensor_ids);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.18..21.13 rows=272 width=6) (actual\ntime=0.479..0.847 rows=169 loops=1)\n Hash Cond: (\"outer\".id_sensor = \"inner\".id_sensor)\n -> Seq Scan on int_station_sensor (cost=0.00..11.49 rows=549\nwidth=10) (actual time=0.007..0.265 rows=267 loops=1)\n -> Hash (cost=3.68..3.68 rows=200 width=4) (actual\ntime=0.325..0.325 rows=0 loops=1)\n -> HashAggregate (cost=3.68..3.68 rows=200 width=4) (actual\ntime=0.177..0.256 rows=169 loops=1)\n -> Seq Scan on sensor_ids (cost=0.00..3.14 rows=214\nwidth=4) (actual time=0.003..0.057 rows=169 loops=1)\n Total runtime: 1.340 ms\n(7 rows)\n\n\nI have posted my tables, data and test cases here:\nhttp://ccl.cens.nau.edu/~kan4/testing/long-delete\n\n\nWhere do I go from here?\n\n\nThanks in advance,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Thu, 24 Mar 2005 17:10:57 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Karim Nassar <[email protected]> writes:\n> Here is the statement:\n\n> orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE\n> id_meas_type IN (SELECT * FROM meas_type_ids);\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=11.53..42.06 rows=200 width=6) (actual\n> time=1.564..2.840 rows=552 loops=1)\n> ...\n> Total runtime: 2499616.216 ms\n> (7 rows)\n\nNotice that the actual join is taking 2.8 ms. The other ~40 minutes is\nin operations that we cannot see in this plan, but we can surmise are ON\nDELETE triggers.\n\n> Where do I go from here?\n\nLook at what your triggers are doing. My private bet is that you have\nunindexed foreign keys referencing this table, and so each deletion\nforces a seqscan of some other, evidently very large, table(s).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 19:52:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
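A quick way to test Tom's hypothesis is to list every foreign key that points at the table being deleted from and then check each referencing column for an index. A sketch against the system catalogs:

    SELECT conname,
           conrelid::regclass  AS referencing_table,
           confrelid::regclass AS referenced_table
    FROM   pg_constraint
    WHERE  contype = 'f'
      AND  confrelid = 'int_sensor_meas_type'::regclass;

Each referencing_table returned needs an index on its referencing column(s); otherwise the ON DELETE check runs a sequential scan of that table for every row deleted from int_sensor_meas_type.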
{
"msg_contents": "Tom,\n\nI've got a similar problem with deletes taking a very long time. I know\nthat there are lots of foreign keys referencing this table, and other\nforeign keys referencing those tables, etc. I've been curious, is there\na way to find out how long the foreign key checks take for each\ndependent table?\n\n-- Mark Lewis\n\nOn Thu, 2005-03-24 at 16:52, Tom Lane wrote:\n> Karim Nassar <[email protected]> writes:\n> > Here is the statement:\n> \n> > orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE\n> > id_meas_type IN (SELECT * FROM meas_type_ids);\n> > QUERY PLAN \n> > -----------------------------------------------------------------------------------------------------------------------------\n> > Hash Join (cost=11.53..42.06 rows=200 width=6) (actual\n> > time=1.564..2.840 rows=552 loops=1)\n> > ...\n> > Total runtime: 2499616.216 ms\n> > (7 rows)\n> \n> Notice that the actual join is taking 2.8 ms. The other ~40 minutes is\n> in operations that we cannot see in this plan, but we can surmise are ON\n> DELETE triggers.\n> \n> > Where do I go from here?\n> \n> Look at what your triggers are doing. My private bet is that you have\n> unindexed foreign keys referencing this table, and so each deletion\n> forces a seqscan of some other, evidently very large, table(s).\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Thu, 24 Mar 2005 17:23:19 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Thu, 2005-03-24 at 19:52 -0500, Tom Lane wrote:\n> Karim Nassar <[email protected]> writes:\n> > Here is the statement:\n> \n> > orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE\n> > id_meas_type IN (SELECT * FROM meas_type_ids);\n> > QUERY PLAN \n> >\n-----------------------------------------------------------------------------------------------------------------------------\n> > Hash Join (cost=11.53..42.06 rows=200 width=6) (actual\n> > time=1.564..2.840 rows=552 loops=1)\n> > ...\n> > Total runtime: 2499616.216 ms\n> > (7 rows)\n> \n> Notice that the actual join is taking 2.8 ms. The other ~40 minutes\nis\n> in operations that we cannot see in this plan, but we can surmise are\nON\n> DELETE triggers.\n\nThere are no DELETE triggers (that I have created).\n\n> > Where do I go from here?\n> \n> Look at what your triggers are doing. My private bet is that you have\n> unindexed foreign keys referencing this table, and so each deletion\n> forces a seqscan of some other, evidently very large, table(s).\n\nAlmost. I have a large table (6.3 million rows) with a foreign key\nreference to this one (which has 749 rows), however it is indexed. \n\nI deleted the fk, ran the delete, then recreated the foreign key in\nabout 15 seconds. Thanks!\n\nProblem now is: this referencing table I expect to grow to about 110\nmillion rows in the next 2 months, then by 4 million rows per month\nthereafter. I expect that the time for recreating the foreign key will\ngrow linearly with size.\n\nIs this just the kind of thing I need to watch out for? Any other\nsuggestions for dealing with tables of this size? What can I do to my\nindexes to make them mo' betta?\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n\n\n\n",
"msg_date": "Thu, 24 Mar 2005 18:48:24 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
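The drop/delete/re-create cycle Karim describes can be wrapped in a single transaction, using the constraint and column names that appear later in this thread; note that the ALTER TABLE takes an exclusive lock on measurement until COMMIT, so this trades a window without the constraint for blocking concurrent writers:

    BEGIN;

    ALTER TABLE measurement
        DROP CONSTRAINT measurement_id_int_sensor_meas_type_fkey;

    DELETE FROM int_sensor_meas_type
    WHERE id_meas_type IN (SELECT * FROM meas_type_ids);

    ALTER TABLE measurement
        ADD CONSTRAINT measurement_id_int_sensor_meas_type_fkey
        FOREIGN KEY (id_int_sensor_meas_type)
        REFERENCES int_sensor_meas_type (id_int_sensor_meas_type);

    COMMIT;

The re-add step revalidates every existing row in measurement, which is the part that will scale with the table's growth.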
{
"msg_contents": "Karim Nassar <[email protected]> writes:\n>> Look at what your triggers are doing. My private bet is that you have\n>> unindexed foreign keys referencing this table, and so each deletion\n>> forces a seqscan of some other, evidently very large, table(s).\n\n> Almost. I have a large table (6.3 million rows) with a foreign key\n> reference to this one (which has 749 rows), however it is indexed. \n\nIn that case there's a datatype mismatch between the referencing and\nreferenced columns, which prevents the index from being used for the\nFK check.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 20:48:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
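If a mismatch of that kind does turn up, 8.0 can change the referencing column's type in place rather than requiring a dump and reload; the column and target type here are purely illustrative of the repair, since the rest of the thread shows both sides are in fact integer in this case:

    ALTER TABLE measurement
        ALTER COLUMN id_int_sensor_meas_type TYPE integer;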
{
"msg_contents": "Mark Lewis <[email protected]> writes:\n> I've got a similar problem with deletes taking a very long time. I know\n> that there are lots of foreign keys referencing this table, and other\n> foreign keys referencing those tables, etc. I've been curious, is there\n> a way to find out how long the foreign key checks take for each\n> dependent table?\n\nThere is not any easy way at the moment.\n\nHmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show\nthe runtime expended in each trigger when the statement is of a kind\nthat has triggers. We couldn't break down the time *within* the\ntriggers, but even this info would help a lot in terms of finger\npointing ...\n\n\tSeq Scan on ... (nn.nnn ms)\n\tTrigger foo: nn.mmm ms\n\tTrigger bar: nn.mmm ms\n\tTotal time: nn.mmm ms\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Mar 2005 21:32:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Thu, 2005-03-24 at 20:48 -0500, Tom Lane wrote:\n> In that case there's a datatype mismatch between the referencing and\n> referenced columns, which prevents the index from being used for the\n> FK check.\n\nCan I have more words on this? Here is how I created the tables:\n\nCREATE TABLE int_sensor_meas_type( \n id_int_sensor_meas_type SERIAL PRIMARY KEY,\n id_sensor integer NOT NULL REFERENCES sensor,\n id_meas_type integer NOT NULL REFERENCES meas_type UNIQUE);\n\n\nCREATE TABLE measurement (\n id_measurement SERIAL PRIMARY KEY,\n id_int_sensor_meas_type integer NOT NULL REFERENCES int_sensor_meas_type,\n datetime timestamp WITH TIME ZONE NOT NULL,\n value numeric(15,5) NOT NULL,\n created timestamp with time zone NOT NULL DEFAULT now(),\n created_by TEXT NOT NULL REFERENCES public.person(id_person));\n\nCREATE INDEX measurement__id_int_sensor_meas_type_idx ON measurement(id_int_sensor_meas_type);\n\nDo I need to cast the id_int_sensor_meas_type column when creating the\nindex? Both referrer and referenced look like INTEGER to me...\n\nhttp://www.postgresql.org/docs/8.0/interactive/datatype.html#DATATYPE-SERIAL\nsays: \"The type names serial and serial4 are equivalent: both create\ninteger columns\" \n\nTIA,\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Thu, 24 Mar 2005 19:58:40 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Watch your pg_stats_* views before and after the delete and check what \nrelated tables have had lots of seqscans.\n\nChris\n\nMark Lewis wrote:\n> Tom,\n> \n> I've got a similar problem with deletes taking a very long time. I know\n> that there are lots of foreign keys referencing this table, and other\n> foreign keys referencing those tables, etc. I've been curious, is there\n> a way to find out how long the foreign key checks take for each\n> dependent table?\n> \n> -- Mark Lewis\n> \n> On Thu, 2005-03-24 at 16:52, Tom Lane wrote:\n> \n>>Karim Nassar <[email protected]> writes:\n>>\n>>>Here is the statement:\n>>\n>>>orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE\n>>>id_meas_type IN (SELECT * FROM meas_type_ids);\n>>> QUERY PLAN \n>>>-----------------------------------------------------------------------------------------------------------------------------\n>>> Hash Join (cost=11.53..42.06 rows=200 width=6) (actual\n>>>time=1.564..2.840 rows=552 loops=1)\n>>>...\n>>> Total runtime: 2499616.216 ms\n>>>(7 rows)\n>>\n>>Notice that the actual join is taking 2.8 ms. The other ~40 minutes is\n>>in operations that we cannot see in this plan, but we can surmise are ON\n>>DELETE triggers.\n>>\n>>\n>>>Where do I go from here?\n>>\n>>Look at what your triggers are doing. My private bet is that you have\n>>unindexed foreign keys referencing this table, and so each deletion\n>>forces a seqscan of some other, evidently very large, table(s).\n>>\n>>\t\t\tregards, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 7: don't forget to increase your free space map settings\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n",
"msg_date": "Fri, 25 Mar 2005 11:37:07 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
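The views Christopher refers to are the pg_stat_* statistics views; comparing sequential scan counters on the candidate referencing tables before and after a small test delete looks roughly like:

    SELECT relname, seq_scan, seq_tup_read, idx_scan
    FROM   pg_stat_user_tables
    ORDER  BY seq_scan DESC;

A referencing table whose seq_scan count climbs by roughly one per deleted row is the one missing a usable index on its foreign key column.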
{
"msg_contents": "> In that case there's a datatype mismatch between the referencing and\n> referenced columns, which prevents the index from being used for the\n> FK check.\n\nIs creating such a foreign key a WARNING yet?\n\nChris\n",
"msg_date": "Fri, 25 Mar 2005 11:38:03 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "> Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show\n> the runtime expended in each trigger when the statement is of a kind\n> that has triggers. We couldn't break down the time *within* the\n> triggers, but even this info would help a lot in terms of finger\n> pointing ...\n> \n> \tSeq Scan on ... (nn.nnn ms)\n> \tTrigger foo: nn.mmm ms\n> \tTrigger bar: nn.mmm ms\n> \tTotal time: nn.mmm ms\n\nThat would be really cool...\n",
"msg_date": "Fri, 25 Mar 2005 11:38:37 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Mar 24, 2005, at 10:38 PM, Christopher Kings-Lynne wrote:\n\n>> In that case there's a datatype mismatch between the referencing and\n>> referenced columns, which prevents the index from being used for the\n>> FK check.\n>\n> Is creating such a foreign key a WARNING yet?\n>\n\nI recall getting such a warning when importing my schema from a 7.4 to \n8.0 server. I had one table with char and the other with varchar.\n\n",
"msg_date": "Thu, 24 Mar 2005 22:56:34 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Karim,\n\n> Problem now is: this referencing table I expect to grow to about 110\n> million rows in the next 2 months, then by 4 million rows per month\n> thereafter. I expect that the time for recreating the foreign key will\n> grow linearly with size.\n>\n> Is this just the kind of thing I need to watch out for? Any other\n> suggestions for dealing with tables of this size? What can I do to my\n> indexes to make them mo' betta?\n\nHow about getting some decent disk support? A single 10K SCSI disk is a bit \nsub-par for a database with 100's of millions of records. Too bad you didn't \nget a v40z ...\n\nBeyond that, you'll want to do the same thing whenever you purge the \nreferencing table; drop keys, delete, re-create keys. Or think about why it \nis you need to delete batches of records from this FKed table at all.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 24 Mar 2005 21:24:39 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Thu, 2005-03-24 at 21:24 -0800, Josh Berkus wrote:\n> Karim,\n> How about getting some decent disk support? A single 10K SCSI disk is a bit \n> sub-par for a database with 100's of millions of records. Too bad you didn't \n> get a v40z ...\n\nHehe. I have one I am setting up that will be dedicated to postgresql,\nhence my question about a week ago about disk partitioning/striping :-)\n\n\n> Beyond that, you'll want to do the same thing whenever you purge the \n> referencing table; drop keys, delete, re-create keys. Or think about why it \n> is you need to delete batches of records from this FKed table at all.\n\nThe database is for weather data from multiple sources. When adding a\nnew dataset, I have to create/test/delete/recreate the config in the\nFKed table. Users don't have this power, but I need it.\nDrop/delete/recreate is a totally acceptable solution for this scenario.\n\nI guess I was wondering if there is other general tuning advice for such\nlarge table indexes such as increasing statistics, etc. \n\nThanks,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Thu, 24 Mar 2005 22:35:55 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Thu, 24 Mar 2005, Tom Lane wrote:\n\n> Mark Lewis <[email protected]> writes:\n>> I've got a similar problem with deletes taking a very long time. I know\n>> that there are lots of foreign keys referencing this table, and other\n>> foreign keys referencing those tables, etc. I've been curious, is there\n>> a way to find out how long the foreign key checks take for each\n>> dependent table?\n>\n> There is not any easy way at the moment.\n>\n> Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show\n> the runtime expended in each trigger when the statement is of a kind\n> that has triggers. We couldn't break down the time *within* the\n> triggers, but even this info would help a lot in terms of finger\n> pointing ...\n>\n> \tSeq Scan on ... (nn.nnn ms)\n> \tTrigger foo: nn.mmm ms\n> \tTrigger bar: nn.mmm ms\n> \tTotal time: nn.mmm ms\n\nand if you add\n\n Index foo_idx: nn.mm ss\n \tHeap foo_tbl: nn.mm ss\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Fri, 25 Mar 2005 08:38:33 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show\n>> the runtime expended in each trigger when the statement is of a kind\n>> that has triggers.\n\n> Could SPI \"know\" that an explain analyze is being run and add their \n> output and timings to the output?\n\nIf it did, we'd be double-counting the time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 01:15:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> In that case there's a datatype mismatch between the referencing and\n>> referenced columns, which prevents the index from being used for the\n>> FK check.\n\n> Is creating such a foreign key a WARNING yet?\n\nI believe so as of 8.0. It's a bit tricky since 8.0 does allow some\ncross-type cases to be indexed, but IIRC we have a test that understands\nabout that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 01:58:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Fri, 2005-03-25 at 01:58 -0500, Tom Lane wrote:\n> Christopher Kings-Lynne <[email protected]> writes:\n> >> In that case there's a datatype mismatch between the referencing and\n> >> referenced columns, which prevents the index from being used for the\n> >> FK check.\n> \n> > Is creating such a foreign key a WARNING yet?\n> \n> I believe so as of 8.0. It's a bit tricky since 8.0 does allow some\n> cross-type cases to be indexed, but IIRC we have a test that understands\n> about that...\n\nsrc/backend/commands/tablecmds.c, line 3966 in CVSTIP\n/*\n * Check that the found operator is compatible with the PK index,\n * and generate a warning if not, since otherwise costly seqscans\n * will be incurred to check FK validity.\n*/\nif (!op_in_opclass(oprid(o), opclasses[i]))\n ereport(WARNING,\n\t(errmsg(\"foreign key constraint \\\"%s\\\" \"\n\t\t\"will require costly sequential scans\",\n\t\tfkconstraint->constr_name),\n\t errdetail(\"Key columns \\\"%s\\\" and \\\"%s\\\" \"\n\t \t\"are of different types: %s and %s.\",\n\t\t strVal(list_nth(fkconstraint->fk_attrs, i)),\n\t\t strVal(list_nth(fkconstraint->pk_attrs, i)),\n\t\t format_type_be(fktypoid[i]),\n\t\t format_type_be(pktypoid[i]))));\n\nSo, yes to the WARNING. Not sure about the cross-type cases...\n\nKarim: Did this happen? If not, can you drop and re-create and confirm\nthat you get the WARNING? If not, we have problems.\n\nI vote to make this an ERROR in 8.1 - I see little benefit in allowing\nthis situation to continue. If users do create a FK like this, it just\nbecomes another performance problem on list...\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Fri, 25 Mar 2005 15:10:48 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> I vote to make this an ERROR in 8.1 - I see little benefit in allowing\n> this situation to continue.\n\nOther than spec compliance, you mean? SQL99 says\n\n ... The declared type of each referencing column shall be\n comparable to the declared type of the corresponding referenced\n column.\n\nIt doesn't say that it has to be indexable, and most definitely not that\nthere has to be an index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 10:17:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I vote to make this an ERROR in 8.1 - I see little benefit in allowing\n> > this situation to continue.\n> \n> Other than spec compliance, you mean? SQL99 says\n> \n> ... The declared type of each referencing column shall be\n> comparable to the declared type of the corresponding referenced\n> column.\n> \n> It doesn't say that it has to be indexable, and most definitely not that\n> there has to be an index.\n\nspecs at dawn, eh?\n\nWell, SQL:2003 Foundation, p.550 clause 3a) states that the the\n<reference columns> in the referencing table must match a unique\nconstraint on the referenced table, or the PRIMARY KEY if the columns\nare not specified. Either way, the referenced columns are a unique\nconstraint (which makes perfect sense from a logical data perspective).\n\nWe implement unique constraints via an index, so for PostgreSQL the\nclause implies that it must refer to an index. \n\ntouche, Monsieur Lane and Happy Easter :-)\n\nBut even without that, there is little benefit in allowing it...\n\nWARNING -> ERROR, please.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Fri, 25 Mar 2005 16:01:18 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2005, Simon Riggs wrote:\n\n> On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n> > Simon Riggs <[email protected]> writes:\n> > > I vote to make this an ERROR in 8.1 - I see little benefit in allowing\n> > > this situation to continue.\n> >\n> > Other than spec compliance, you mean? SQL99 says\n> >\n> > ... The declared type of each referencing column shall be\n> > comparable to the declared type of the corresponding referenced\n> > column.\n> >\n> > It doesn't say that it has to be indexable, and most definitely not that\n> > there has to be an index.\n>\n> specs at dawn, eh?\n>\n> Well, SQL:2003 Foundation, p.550 clause 3a) states that the the\n> <reference columns> in the referencing table must match a unique\n> constraint on the referenced table, or the PRIMARY KEY if the columns\n> are not specified. Either way, the referenced columns are a unique\n> constraint (which makes perfect sense from a logical data perspective).\n>\n> We implement unique constraints via an index, so for PostgreSQL the\n> clause implies that it must refer to an index.\n\nIMHO, that reference is irrrelevant. Yes, there must be an index due to\nour implementation, however that doesn't imply that the types must be the\nsame, nor even that the index must be usable for the cross table\ncomparison.\n\n",
"msg_date": "Fri, 25 Mar 2005 08:23:19 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Karim,\n\n> I guess I was wondering if there is other general tuning advice for such\n> large table indexes such as increasing statistics, etc.\n\nWell, your index use problem is being explained by Tom, Stephan and Simon; \nbasically your FKed data types are incompatible for index use purposes so the \nsystem *can't* use an index while loading.\n\nIf you're going with the drop/load/recreate option, then I'd suggest \nincreasing work_mem for the duration. Hmmm ... or maintenance_work_mem? \nWhat gets used for FK checks? Simon?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 25 Mar 2005 09:38:28 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
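For the bulk re-add step itself, the 8.0 documentation lists ALTER TABLE ADD FOREIGN KEY among the operations governed by maintenance_work_mem, so a session-local bump before re-creating the constraint is the relevant knob; the value below is illustrative:

    SET maintenance_work_mem = 262144;   -- kB, this session only

    ALTER TABLE measurement
        ADD CONSTRAINT measurement_id_int_sensor_meas_type_fkey
        FOREIGN KEY (id_int_sensor_meas_type)
        REFERENCES int_sensor_meas_type (id_int_sensor_meas_type);

    RESET maintenance_work_mem;

The per-row FK trigger checks fired during an ordinary DELETE are individual probes and are not helped by either memory setting.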
{
"msg_contents": "On Fri, 2005-03-25 at 08:23 -0800, Stephan Szabo wrote:\n> On Fri, 25 Mar 2005, Simon Riggs wrote:\n> \n> > On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n> > > Simon Riggs <[email protected]> writes:\n> > > > I vote to make this an ERROR in 8.1 - I see little benefit in allowing\n> > > > this situation to continue.\n> > >\n> > > Other than spec compliance, you mean? SQL99 says\n> > >\n> > > ... The declared type of each referencing column shall be\n> > > comparable to the declared type of the corresponding referenced\n> > > column.\n> > >\n> > > It doesn't say that it has to be indexable, and most definitely not that\n> > > there has to be an index.\n> >\n> > specs at dawn, eh?\n> >\n> > Well, SQL:2003 Foundation, p.550 clause 3a) states that the the\n> > <reference columns> in the referencing table must match a unique\n> > constraint on the referenced table, or the PRIMARY KEY if the columns\n> > are not specified. Either way, the referenced columns are a unique\n> > constraint (which makes perfect sense from a logical data perspective).\n> >\n> > We implement unique constraints via an index, so for PostgreSQL the\n> > clause implies that it must refer to an index.\n> \n> IMHO, that reference is irrrelevant. \n\nTom had said SQL99 required this; I have pointed out SQL:2003, which\nsupercedes the SQL99 standard, does not require this.\n\nLeading us back to my original point - what is the benefit of continuing\nwith having a WARNING when that leads people into trouble later?\n\n> Yes, there must be an index due to\n> our implementation, however that doesn't imply that the types must be the\n> same\n\nNo, it doesn't imply it, but what benefit do you see from the\ninterpretation that they are allowed to differ? That interpretation\ncurrently leads to many mistakes leading to poor performance. \n\nThere is clear benefit from forcing them to be the same. In logical data\nterms, they *should* be the same. I don't check fruit.apple_grade\nagainst fruit_type.orange_grade. When would I want to make a check of\nthat nature? If there is a reason, thats great, lets keep status quo\nthen.\n\nI respect the effort and thought that has already gone into the\nimplementation; I seek only to offer a very minor improvement based upon\nrecent list issues.\n\n> nor even that the index must be usable for the cross table\n> comparison.\n\nThats a separate discussion, possibly the next one.\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Fri, 25 Mar 2005 18:24:10 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n>>> Other than spec compliance, you mean? SQL99 says\n>>> \n>>> ... The declared type of each referencing column shall be\n>>> comparable to the declared type of the corresponding referenced\n>>> column.\n\n> Tom had said SQL99 required this; I have pointed out SQL:2003, which\n> supercedes the SQL99 standard, does not require this.\n\nYou're reading the wrong part of SQL:2003. 11.8 <referential constraint\ndefinition> syntax rule 9 still has the text I quoted.\n\n> Leading us back to my original point - what is the benefit of continuing\n> with having a WARNING when that leads people into trouble later?\n\nAccepting spec-compliant schemas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 13:47:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Fri, 2005-03-25 at 13:47 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n> >>> Other than spec compliance, you mean? SQL99 says\n> >>> \n> >>> ... The declared type of each referencing column shall be\n> >>> comparable to the declared type of the corresponding referenced\n> >>> column.\n> \n> > Tom had said SQL99 required this; I have pointed out SQL:2003, which\n> > supercedes the SQL99 standard, does not require this.\n> \n> You're reading the wrong part of SQL:2003. 11.8 <referential constraint\n> definition> syntax rule 9 still has the text I quoted.\n\nSo, we have this from SQL:2003 section 11.8 p.550\n- 3a) requires us to have an index\n- 9) requires the data types to be \"comparable\"\n\nIn the name of spec-compliance we wish to accept an interpretation of\nthe word \"comparable\" that means we will accept two datatypes that are\nnot actually the same. \n\nSo we are happy to enforce having the index, but not happy to ensure the\nindex is actually usable for the task?\n\n> > Leading us back to my original point - what is the benefit of continuing\n> > with having a WARNING when that leads people into trouble later?\n> \n> Accepting spec-compliant schemas.\n\nI definitely want this too - as you know I have worked on documenting\ncompliance previously.\n\nIs the word \"comparable\" defined elsewhere in the standard?\n\nCurrently, datatypes with similar type categories are comparable and yet\n(in 8.0) will now use the index. So, we are taking comparable to include\nfairly radically different datatypes?\n\nCould it be that because PostgreSQL has a very highly developed sense of\ndatatype comparison that we might be taking this to extremes? Would any\nother RDBMS consider two different datatypes to be comparable?\n\nPlease consider this. \n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Fri, 25 Mar 2005 20:28:49 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2005, Simon Riggs wrote:\n\n> On Fri, 2005-03-25 at 13:47 -0500, Tom Lane wrote:\n> > Simon Riggs <[email protected]> writes:\n> > > On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:\n> > >>> Other than spec compliance, you mean? SQL99 says\n> > >>>\n> > >>> ... The declared type of each referencing column shall be\n> > >>> comparable to the declared type of the corresponding referenced\n> > >>> column.\n> >\n> > > Tom had said SQL99 required this; I have pointed out SQL:2003, which\n> > > supercedes the SQL99 standard, does not require this.\n> >\n> > You're reading the wrong part of SQL:2003. 11.8 <referential constraint\n> > definition> syntax rule 9 still has the text I quoted.\n>\n> So, we have this from SQL:2003 section 11.8 p.550\n> - 3a) requires us to have an index\n> - 9) requires the data types to be \"comparable\"\n>\n> In the name of spec-compliance we wish to accept an interpretation of\n> the word \"comparable\" that means we will accept two datatypes that are\n> not actually the same.\n>\n> So we are happy to enforce having the index, but not happy to ensure the\n> index is actually usable for the task?\n\nThe indexes \"usability\" only applies to the purpose of guaranteeing\nuniqueness which doesn't depend on the referencing type AFAICS.\n\n> > > Leading us back to my original point - what is the benefit of continuing\n> > > with having a WARNING when that leads people into trouble later?\n> >\n> > Accepting spec-compliant schemas.\n>\n> I definitely want this too - as you know I have worked on documenting\n> compliance previously.\n>\n> Is the word \"comparable\" defined elsewhere in the standard?\n\nYes. And at least in SQL99, there's a bunch of statements in 4.* about\nwhat are comparable.\n\n> Currently, datatypes with similar type categories are comparable and yet\n> (in 8.0) will now use the index. So, we are taking comparable to include\n> fairly radically different datatypes?\n\nNot entirely. I believe a referenced column of int, and a referencing\ncolumn of numeric currently displays that warning, but appears to be\nallowed by the spec (as the numeric types are considered mutually\ncomparable).\n\n> Could it be that because PostgreSQL has a very highly developed sense of\n> datatype comparison that we might be taking this to extremes? Would any\n> other RDBMS consider two different datatypes to be comparable?\n\nWe do have a broader comparable than the spec. However, if we were to\nlimit it to the spec then many of the implicit casts and cross-type\ncomparison operators we have would be invalid as well since the comparison\nbetween those types would have to fail as well unless we treat the\ncomparable used by <comparison predicate> differently.\n",
"msg_date": "Fri, 25 Mar 2005 13:10:51 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Fri, 25 Mar 2005, Simon Riggs wrote:\n>> Could it be that because PostgreSQL has a very highly developed sense of\n>> datatype comparison that we might be taking this to extremes? Would any\n>> other RDBMS consider two different datatypes to be comparable?\n\n> We do have a broader comparable than the spec.\n\nHowever, the set of comparisons that we can presently support *with\nindexes* is narrower than the spec, so rejecting nonindexable cases\nwould be a problem.\n\nIt's worth noting also that the test being discussed checks whether the\nPK index is usable for testing the RI constraint. In the problem that\nstarted this thread, the difficulty is lack of a usable index on the FK\ncolumn, not the PK (because that's the table that has to be searched to\ndo a delete in the PK table). We cannot enforce that there be a usable\nindex on the FK column (since indexes on the FK table may not have been\nbuilt yet when the constraint is declared), and shouldn't anyway because\nthere are reasonable usage patterns where you don't need one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 16:25:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Fri, 2005-03-25 at 15:10 +0000, Simon Riggs wrote:\n> Karim: Did this happen? If not, can you drop and re-create and confirm\n> that you get the WARNING? If not, we have problems.\n\nNo. Nor do I think that I should. SERIAL is shortcut for INTEGER, no? I\nthink there is some other (TBD) problem causing my big seq scan.\n\norfs=# ALTER TABLE measurement DROP CONSTRAINT measurement_id_int_sensor_meas_type_fkey;\nALTER TABLE\norfs=# ALTER TABLE ONLY measurement ADD CONSTRAINT measurement_id_int_sensor_meas_type_fkey\norfs-# FOREIGN KEY (id_int_sensor_meas_type) REFERENCES int_sensor_meas_type(id_int_sensor_meas_type);\nALTER TABLE\norfs=#\n\nThe add constraint statement comes directly from a pg_dump.\n\nFor clarity, the table/indexes were created as such:\n\nCREATE TABLE int_sensor_meas_type( \n id_int_sensor_meas_type SERIAL PRIMARY KEY,\n id_sensor integer NOT NULL REFERENCES sensor,\n id_meas_type integer NOT NULL REFERENCES meas_type UNIQUE);\n\nCREATE TABLE measurement (\n id_measurement SERIAL PRIMARY KEY,\n id_int_sensor_meas_type integer NOT NULL REFERENCES int_sensor_meas_type,\n datetime timestamp WITH TIME ZONE NOT NULL,\n value numeric(15,5) NOT NULL,\n created timestamp with time zone NOT NULL DEFAULT now(),\n created_by TEXT NOT NULL REFERENCES public.person(id_person));\n\nCREATE INDEX measurement__id_int_sensor_meas_type_idx ON measurement(id_int_sensor_meas_type);\n\nRegards,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Fri, 25 Mar 2005 14:47:44 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Fri, 2005-03-25 at 16:25 -0500, Tom Lane wrote:\n> Stephan Szabo <[email protected]> writes:\n> > On Fri, 25 Mar 2005, Simon Riggs wrote:\n> >> Could it be that because PostgreSQL has a very highly developed sense of\n> >> datatype comparison that we might be taking this to extremes? Would any\n> >> other RDBMS consider two different datatypes to be comparable?\n> \n> > We do have a broader comparable than the spec.\n> \n> However, the set of comparisons that we can presently support *with\n> indexes* is narrower than the spec, so rejecting nonindexable cases\n> would be a problem.\n\nOK. Can we have a TODO item then?\n\n* Ensure that all SQL:2003 comparable datatypes are also indexable when\ncompared\n\n...or something like that\n\n> It's worth noting also that the test being discussed checks whether the\n> PK index is usable for testing the RI constraint. In the problem that\n> started this thread, the difficulty is lack of a usable index on the FK\n> column, not the PK (because that's the table that has to be searched to\n> do a delete in the PK table). We cannot enforce that there be a usable\n> index on the FK column (since indexes on the FK table may not have been\n> built yet when the constraint is declared), and shouldn't anyway because\n> there are reasonable usage patterns where you don't need one.\n\nYes, I agree for CASCADE we wouldn't always want an index.\n\nAlright then, time to leave it there.\n\nI want to write up some additional comments for performance tips:\n- Consider defining RI constraints after tables have been loaded\n- Remember to add an index on the referencing table if the constraint is\ndefined as CASCADEing\n\nHave a good Easter, all, wherever you are and whatever you believe in.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Fri, 25 Mar 2005 21:50:04 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "> There is clear benefit from forcing them to be the same. In logical data\n> terms, they *should* be the same. I don't check fruit.apple_grade\n> against fruit_type.orange_grade. When would I want to make a check of\n> that nature? If there is a reason, thats great, lets keep status quo\n> then.\n> \n> I respect the effort and thought that has already gone into the\n> implementation; I seek only to offer a very minor improvement based upon\n> recent list issues.\n\nThe main problem would be people getting errors when upgrading their \ndatabases, or restoring from a backup, say.\n\nChris\n",
"msg_date": "Sat, 26 Mar 2005 14:31:58 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2005, Karim Nassar wrote:\n\n> On Fri, 2005-03-25 at 15:10 +0000, Simon Riggs wrote:\n> > Karim: Did this happen? If not, can you drop and re-create and confirm\n> > that you get the WARNING? If not, we have problems.\n>\n> No. Nor do I think that I should. SERIAL is shortcut for INTEGER, no? I\n> think there is some other (TBD) problem causing my big seq scan.\n>\n> orfs=# ALTER TABLE measurement DROP CONSTRAINT measurement_id_int_sensor_meas_type_fkey;\n> ALTER TABLE\n> orfs=# ALTER TABLE ONLY measurement ADD CONSTRAINT measurement_id_int_sensor_meas_type_fkey\n> orfs-# FOREIGN KEY (id_int_sensor_meas_type) REFERENCES int_sensor_meas_type(id_int_sensor_meas_type);\n> ALTER TABLE\n> orfs=#\n>\n> The add constraint statement comes directly from a pg_dump.\n>\n> For clarity, the table/indexes were created as such:\n>\n> CREATE TABLE int_sensor_meas_type(\n> id_int_sensor_meas_type SERIAL PRIMARY KEY,\n> id_sensor integer NOT NULL REFERENCES sensor,\n> id_meas_type integer NOT NULL REFERENCES meas_type UNIQUE);\n>\n> CREATE TABLE measurement (\n> id_measurement SERIAL PRIMARY KEY,\n> id_int_sensor_meas_type integer NOT NULL REFERENCES int_sensor_meas_type,\n> datetime timestamp WITH TIME ZONE NOT NULL,\n> value numeric(15,5) NOT NULL,\n> created timestamp with time zone NOT NULL DEFAULT now(),\n> created_by TEXT NOT NULL REFERENCES public.person(id_person));\n>\n> CREATE INDEX measurement__id_int_sensor_meas_type_idx ON measurement(id_int_sensor_meas_type);\n\nThat seems like it should be okay, hmm, what does something like:\n\nPREPARE test(int) AS SELECT 1 from measurement where\nid_int_sensor_meas_type = $1 FOR UPDATE;\nEXPLAIN ANALYZE EXECUTE TEST(1);\n\ngive you as the plan?\n",
"msg_date": "Sat, 26 Mar 2005 07:55:39 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Sat, 2005-03-26 at 07:55 -0800, Stephan Szabo wrote:\n> That seems like it should be okay, hmm, what does something like:\n> \n> PREPARE test(int) AS SELECT 1 from measurement where\n> id_int_sensor_meas_type = $1 FOR UPDATE;\n> EXPLAIN ANALYZE EXECUTE TEST(1);\n> \n> give you as the plan?\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6)\n (actual time=11608.402..11608.402 rows=0 loops=1)\n Filter: (id_int_sensor_meas_type = $1)\n Total runtime: 11608.441 ms\n(3 rows)\n\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Sat, 26 Mar 2005 12:36:28 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Sat, 26 Mar 2005, Karim Nassar wrote:\n\n> On Sat, 2005-03-26 at 07:55 -0800, Stephan Szabo wrote:\n> > That seems like it should be okay, hmm, what does something like:\n> >\n> > PREPARE test(int) AS SELECT 1 from measurement where\n> > id_int_sensor_meas_type = $1 FOR UPDATE;\n> > EXPLAIN ANALYZE EXECUTE TEST(1);\n> >\n> > give you as the plan?\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------\n> Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6)\n> (actual time=11608.402..11608.402 rows=0 loops=1)\n> Filter: (id_int_sensor_meas_type = $1)\n> Total runtime: 11608.441 ms\n> (3 rows)\n\nHmm, has measurement been analyzed recently? You might want to see if\nraising the statistics target on measurement.id_int_sensor_meas_type and\nreanalyzing changes the estimated rows down from 500k.\n\n",
"msg_date": "Sat, 26 Mar 2005 15:18:16 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Sat, 2005-03-26 at 15:18 -0800, Stephan Szabo wrote:\n> On Sat, 26 Mar 2005, Karim Nassar wrote:\n> \n> > On Sat, 2005-03-26 at 07:55 -0800, Stephan Szabo wrote:\n> > > That seems like it should be okay, hmm, what does something like:\n> > >\n> > > PREPARE test(int) AS SELECT 1 from measurement where\n> > > id_int_sensor_meas_type = $1 FOR UPDATE;\n> > > EXPLAIN ANALYZE EXECUTE TEST(1);\n> > >\n> > > give you as the plan?\n> >\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------------------------------\n> > Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6)\n> > (actual time=11608.402..11608.402 rows=0 loops=1)\n> > Filter: (id_int_sensor_meas_type = $1)\n> > Total runtime: 11608.441 ms\n> > (3 rows)\n> \n> Hmm, has measurement been analyzed recently? You might want to see if\n> raising the statistics target on measurement.id_int_sensor_meas_type and\n> reanalyzing changes the estimated rows down from 500k.\n\norfs=# ALTER TABLE measurement ALTER COLUMN id_int_sensor_meas_type SET STATISTICS 1000;\nALTER TABLE\norfs=# VACUUM FULL ANALYZE VERBOSE;\n<snip>\nINFO: free space map: 52 relations, 13501 pages stored; 9760 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.\nVACUUM\norfs=# PREPARE test(int) AS SELECT 1 from measurement where\norfs-# id_int_sensor_meas_type = $1 FOR UPDATE;\nPREPARE\norfs=# EXPLAIN ANALYZE EXECUTE TEST(1);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6) (actual time=8948.452..8948.452 rows=0 loops=1)\n Filter: (id_int_sensor_meas_type = $1)\n Total runtime: 8948.494 ms\n(3 rows)\n\norfs=# EXPLAIN ANALYZE EXECUTE TEST(1);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6) (actual time=3956.616..3956.616 rows=0 loops=1)\n Filter: (id_int_sensor_meas_type = $1)\n Total runtime: 3956.662 ms\n(3 rows)\n\n\n\nSome improvement. Even better once it's cached. Row estimate didn't\nchange. Is this the best I can expect? Is there any other optimizations\nI am missing?\n\nTIA,\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Sat, 26 Mar 2005 17:44:47 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Sat, 26 Mar 2005, Karim Nassar wrote:\n\n> On Sat, 2005-03-26 at 15:18 -0800, Stephan Szabo wrote:\n> > On Sat, 26 Mar 2005, Karim Nassar wrote:\n> >\n> > > On Sat, 2005-03-26 at 07:55 -0800, Stephan Szabo wrote:\n> > > > That seems like it should be okay, hmm, what does something like:\n> > > >\n> > > > PREPARE test(int) AS SELECT 1 from measurement where\n> > > > id_int_sensor_meas_type = $1 FOR UPDATE;\n> > > > EXPLAIN ANALYZE EXECUTE TEST(1);\n> > > >\n> > > > give you as the plan?\n> > >\n> > > QUERY PLAN\n> > > -----------------------------------------------------------------------------------------------------------------------\n> > > Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6)\n> > > (actual time=11608.402..11608.402 rows=0 loops=1)\n> > > Filter: (id_int_sensor_meas_type = $1)\n> > > Total runtime: 11608.441 ms\n> > > (3 rows)\n> >\n> > Hmm, has measurement been analyzed recently? You might want to see if\n> > raising the statistics target on measurement.id_int_sensor_meas_type and\n> > reanalyzing changes the estimated rows down from 500k.\n>\n> orfs=# ALTER TABLE measurement ALTER COLUMN id_int_sensor_meas_type SET STATISTICS 1000;\n> ALTER TABLE\n> orfs=# VACUUM FULL ANALYZE VERBOSE;\n> <snip>\n> INFO: free space map: 52 relations, 13501 pages stored; 9760 total pages needed\n> DETAIL: Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.\n> VACUUM\n> orfs=# PREPARE test(int) AS SELECT 1 from measurement where\n> orfs-# id_int_sensor_meas_type = $1 FOR UPDATE;\n> PREPARE\n> orfs=# EXPLAIN ANALYZE EXECUTE TEST(1);\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6) (actual time=8948.452..8948.452 rows=0 loops=1)\n> Filter: (id_int_sensor_meas_type = $1)\n> Total runtime: 8948.494 ms\n> (3 rows)\n>\n> orfs=# EXPLAIN ANALYZE EXECUTE TEST(1);\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Seq Scan on measurement (cost=0.00..164559.16 rows=509478 width=6) (actual time=3956.616..3956.616 rows=0 loops=1)\n> Filter: (id_int_sensor_meas_type = $1)\n> Total runtime: 3956.662 ms\n> (3 rows)\n>\n>\n>\n> Some improvement. Even better once it's cached. Row estimate didn't\n> change. Is this the best I can expect? Is there any other optimizations\n> I am missing?\n\nI'm not sure, really. Running a seq scan for each removed row in the\nreferenced table doesn't seem like a particularly good plan in general\nthough, especially if the average number of rows being referenced isn't\non the order of 500k per value. I don't know what to look at next though.\n\n",
"msg_date": "Sun, 27 Mar 2005 07:05:38 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Sun, 2005-03-27 at 07:05 -0800, Stephan Szabo wrote:\n> On Sat, 26 Mar 2005, Karim Nassar wrote:\n> > Some improvement. Even better once it's cached. Row estimate didn't\n> > change. Is this the best I can expect? Is there any other optimizations\n> > I am missing?\n> \n> I'm not sure, really. Running a seq scan for each removed row in the\n> referenced table doesn't seem like a particularly good plan in general\n> though, especially if the average number of rows being referenced isn't\n> on the order of 500k per value. I don't know what to look at next though.\n> \n\nKarim, please...\n\nrun the EXPLAIN after doing\n\tSET enable_seqscan = off\n\nThanks,\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 28 Mar 2005 11:21:03 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Tom Lane Wrote:\n> Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show\n> the runtime expended in each trigger when the statement is of a kind\n> that has triggers. We couldn't break down the time *within* the\n> triggers, but even this info would help a lot in terms of finger\n> pointing ...\n> \n> \tSeq Scan on ... (nn.nnn ms)\n> \tTrigger foo: nn.mmm ms\n> \tTrigger bar: nn.mmm ms\n> \tTotal time: nn.mmm ms\n\n\nSo I got the latest from CVS on Friday night to see how hard it would be\nto implement this, but it turns out that Tom has already committed the\nimprovement, so I'm in Tom's fan club today. I imported my test dataset\nand was almost immediately able to track down the cause of my\nperformance problem.\n\nThanks!\nMark Lewis\n\n",
"msg_date": "Mon, 28 Mar 2005 09:35:50 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
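For anyone wanting to reproduce this, the usage is simply EXPLAIN ANALYZE on the offending DELETE inside a transaction that is then rolled back, since EXPLAIN ANALYZE really executes the statement. A sketch using the table names from earlier in the thread (only builds that include Tom's change print the per-trigger lines):

BEGIN;
EXPLAIN ANALYZE
    DELETE FROM int_sensor_meas_type
    WHERE id_int_sensor_meas_type = 1;
-- the time spent in each RI trigger (e.g. the FK check against measurement)
-- is reported after the plan itself
ROLLBACK;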
{
"msg_contents": "On Fri, 2005-03-25 at 09:38 -0800, Josh Berkus wrote:\n> > I guess I was wondering if there is other general tuning advice for such\n> > large table indexes such as increasing statistics, etc.\n> \n\n> If you're going with the drop/load/recreate option, then I'd suggest \n> increasing work_mem for the duration. Hmmm ... or maintenance_work_mem? \n> What gets used for FK checks? Simon?\n> \n\nIn 8.0, maintenance_work_mem is used for index creation, vacuum and\ninitial check of FK checks at time of creation. Everything else uses\nwork_mem as the limit.\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Mon, 28 Mar 2005 23:59:11 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
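As a sketch of how that distinction is typically used on 8.0 when loading data first and adding constraints afterwards — the memory value is an illustrative assumption, not a recommendation from this thread:

-- per-session bump; 8.0 takes the value in kB (262144 kB = 256MB)
SET maintenance_work_mem = 262144;

-- index builds use maintenance_work_mem ...
CREATE INDEX measurement__id_int_sensor_meas_type_idx
    ON measurement (id_int_sensor_meas_type);

-- ... and, per the note above, so does the initial check made when the
-- FK constraint is added
ALTER TABLE measurement
    ADD CONSTRAINT measurement_id_int_sensor_meas_type_fkey
    FOREIGN KEY (id_int_sensor_meas_type)
    REFERENCES int_sensor_meas_type (id_int_sensor_meas_type);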
{
"msg_contents": "Mark Lewis wrote:\n> I imported my test dataset\n> and was almost immediately able to track down the cause of my\n> performance problem.\n\nWhy don't you tell us what the problem was :-) ?\n\nRegards\nGaetano Mendola\n\n\n\n\n\n",
"msg_date": "Thu, 29 Sep 2005 15:29:30 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
}
] |
[
{
"msg_contents": "Situation: An 24/7 animal hospital (100 employees) runs their business \non Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to) off a 2-CPU \nXeon 2.8MHz, 4GB of RAM, (3) SCSI disks RAID 0 (zcav value 35MB per \nsec). The databse is 11GB comprised over 100 tables and indexes from 1MB \nto 2GB in size.\n\nI recently told the hospital management team worst-case scenerio they \nneed to get the database on its own drive array since the RAID0 is a \ndisaster wating to happen. I said ideally a new dual AMD server with \n6/7-disk configuration would be ideal for safety and performance, but \nthey don't have $15K. I said a seperate drive array offer the balance \nof safety and performance.\n\nI have been given budget of $7K to accomplish a safer/faster database \nthrough hardware upgrades. The objective is to get a drive array, but I \ncan use the budget any way I see fit to accomplish the goal.\n\nSince I am a dba novice, I did not physically build this server, nor did \nI write the application the hospital runs on, but I have the opportunity \nto make it better, I'd thought I should seek some advice from those who \nhave been down this road before. Suggestions/ideas anyone?\n\nThanks.\n\nSteve Poe\n",
"msg_date": "Fri, 25 Mar 2005 16:12:19 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to improve db performance with $7K?"
},
{
"msg_contents": "Tom,\n\n From what I understand, the vendor used ProIV for development, when \nthey attempted to use 7.4.3, they had ODBC issues and something else I \nhonestly don't know, but I was told that data was not coming through \nproperly. Their somewhat at the mercy of the ProIV people to give them \nthe stamp of approval, then the vendor will tell us what they support.\n\nThanks.\n\nSteve Poe\n\nTom Lane wrote:\n\n>Steve Poe <[email protected]> writes:\n> \n>\n>>Situation: An 24/7 animal hospital (100 employees) runs their business \n>>on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)\n>> \n>>\n>\n>[ itch... ] Surely they could at least move to 7.4.7 without pain.\n>There are serious data-loss bugs known in 7.4.2.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n> \n>\n\n",
"msg_date": "Fri, 25 Mar 2005 17:19:55 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "You could build a dual opteron with 4 GB of ram, 12 10k raptor SATA \ndrives with a battery backed cache for about 7k or less.\n\nOkay. You trust SATA drives? I've been leary of them for a production \ndatabase. Pardon my ignorance, but what is a \"battery backed cache\"? I \nknow the drives have a built-in cache but I don't if that's the same. \nAre the 12 drives internal or an external chasis? Could you point me to \na place that this configuration exist? \n\n>\n> Or if they are not CPU bound just IO bound you could easily just\n> add an external 12 drive array (even if scsi) for less than 7k.\n>\nI don't believe it is CPU bound. At our busiest hour, the CPU is idle \nabout 70% on average down to 30% idle at its heaviest. Context switching \naverages about 4-5K per hour with momentary peaks to 25-30K for a \nminute. Overall disk performance is poor (35mb per sec).\n\nThanks for your input.\n\nSteve Poe\n\n\n\n\n\n",
"msg_date": "Sat, 26 Mar 2005 00:12:59 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Steve Poe <[email protected]> writes:\n> Situation: An 24/7 animal hospital (100 employees) runs their business \n> on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)\n\n[ itch... ] Surely they could at least move to 7.4.7 without pain.\nThere are serious data-loss bugs known in 7.4.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Mar 2005 19:59:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
{
"msg_contents": "You can purchase a whole new dual opteron 740, with 6 gigs of ram, a \ncase to match and 6 74 gig ultra320 sca drives for about $7k\n\nI know because that's what I bought one for 2 weeks ago. Using Tyan's \ndual board.\n\nIf you need some details and are willing to go that route, let me know \nand I'll get you the information.\n\nSincerely,\n\nWill LaShell\n\nSteve Poe wrote:\n\n> Situation: An 24/7 animal hospital (100 employees) runs their \n> business on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to) \n> off a 2-CPU Xeon 2.8MHz, 4GB of RAM, (3) SCSI disks RAID 0 (zcav value \n> 35MB per sec). The databse is 11GB comprised over 100 tables and \n> indexes from 1MB to 2GB in size.\n>\n> I recently told the hospital management team worst-case scenerio they \n> need to get the database on its own drive array since the RAID0 is a \n> disaster wating to happen. I said ideally a new dual AMD server with \n> 6/7-disk configuration would be ideal for safety and performance, but \n> they don't have $15K. I said a seperate drive array offer the balance \n> of safety and performance.\n>\n> I have been given budget of $7K to accomplish a safer/faster database \n> through hardware upgrades. The objective is to get a drive array, but \n> I can use the budget any way I see fit to accomplish the goal.\n>\n> Since I am a dba novice, I did not physically build this server, nor \n> did I write the application the hospital runs on, but I have the \n> opportunity to make it better, I'd thought I should seek some advice \n> from those who have been down this road before. Suggestions/ideas \n> anyone?\n>\n> Thanks.\n>\n> Steve Poe\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n",
"msg_date": "Fri, 25 Mar 2005 18:03:04 -0700",
"msg_from": "Will LaShell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Steve Poe wrote:\n\n> Situation: An 24/7 animal hospital (100 employees) runs their \n> business on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to) \n> off a 2-CPU Xeon 2.8MHz, 4GB of RAM, (3) SCSI disks RAID 0 (zcav value \n> 35MB per sec). The databse is 11GB comprised over 100 tables and \n> indexes from 1MB to 2GB in size.\n>\n> I recently told the hospital management team worst-case scenerio they \n> need to get the database on its own drive array since the RAID0 is a \n> disaster wating to happen. I said ideally a new dual AMD server with \n> 6/7-disk configuration would be ideal for safety and performance, but \n> they don't have $15K. I said a seperate drive array offer the balance \n> of safety and performance.\n>\n> I have been given budget of $7K to accomplish a safer/faster database \n> through hardware upgrades. The objective is to get a drive array, but \n> I can use the budget any way I see fit to accomplish the goal.\n\n\nYou could build a dual opteron with 4 GB of ram, 12 10k raptor SATA \ndrives with a battery backed cache for about 7k or less.\n\nOr if they are not CPU bound just IO bound you could easily just\nadd an external 12 drive array (even if scsi) for less than 7k.\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> Since I am a dba novice, I did not physically build this server, nor \n> did I write the application the hospital runs on, but I have the \n> opportunity to make it better, I'd thought I should seek some advice \n> from those who have been down this road before. Suggestions/ideas \n> anyone?\n>\n> Thanks.\n>\n> Steve Poe\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Fri, 25 Mar 2005 17:13:40 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Hi Steve,\n\n> Okay. You trust SATA drives? I've been leary of them for a production \n> database. Pardon my ignorance, but what is a \"battery backed cache\"? I \n> know the drives have a built-in cache but I don't if that's the same. \n> Are the 12 drives internal or an external chasis? Could you point me to \n> a place that this configuration exist?\n\nGet 12 or 16 x 74GB Western Digital Raptor S-ATA drives, one 3ware \n9500S-12 or two 3ware 9500S-8 raid controllers with a battery backup \nunit (in case of power loss the controller saves unflushed data), a \ndecent tyan board for the existing dual xeon with 2 pci-x slots and a \nmatching 3U case for 12 drives (12 drives internal).\n\nHere in Germany chassis by Chenbro are quite popular, a matching one for \nyour needs would be the chenbro RM312 or RM414 \n(http://61.30.15.60/product/product_preview.php?pid=90 and \nhttp://61.30.15.60/product/product_preview.php?pid=95 respectively).\n\nTake 6 or 10 drives for Raid 10 pgdata, 2-drive Raid 1 for Transaction \nlogs (xlog), 2-drive Raid 1 for OS and Swap, and 2 spare disks.\n\nThat should give you about 250 mb/s reads and 70 mb/s sustained write \nrate with xfs.\n\nRegards,\nBjoern\n",
"msg_date": "Sat, 26 Mar 2005 10:59:15 +0100",
"msg_from": "Bjoern Metzdorf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": ">Steve, can we clarify that you are not currently having any performance \n>issues, you're just worried about failure? Recommendations should be based \n>on whether improving applicaiton speed is a requirement ...\n\nJosh,\n\nThe priorities are: 1)improve safety/failure-prevention, 2) improve \nperformance.\n\nThe owner of the company wants greater performance (and, I concure to \ncertain degree), but the owner's vote is only 1/7 of the managment team. \nAnd, the rest of the management team is not as focused on performance. \nThey all agree in safety/failure-prevention.\n\nSteve\n\n\n\n\n\n\n",
"msg_date": "Sat, 26 Mar 2005 13:04:44 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\n>The Chenbros are nice, but kinda pricey ($800) if Steve doesn't need the \n>machine to be rackable.\n>\n>If your primary goal is redundancy, you may wish to consider the possibility \n>of building a brand-new machine for $7k (you can do a lot of machine for \n>$7000 if it doesn't have to be rackable) and re-configuring the old machine \n>and using it as a replication or PITR backup. This would allow you to \n>configure the new machine with only a moderate amount of hardware redundancy \n>while still having 100% confidence in staying running.\n>\n> \n>\nOur servers are not racked, so a new one does not have to be. *If* it is \npossible, I'd like to replace the main server with a new one. I could \ntweak the new one the way I need it and work with the vendor to make \nsure everything works well. In either case, I'll still need to test how \npositioning of the tables/indexes across a raid10 will perform. I am \nalso waiting onProIV developers feedback. If their ProvIV modules will \nnot run under AMD64, or take advantage of the processor, then I'll stick \nwith the server we have.\n\nSteve Poe\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 26 Mar 2005 13:19:20 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Bjoern, Josh, Steve,\n\n> Get 12 or 16 x 74GB Western Digital Raptor S-ATA drives, one 3ware\n> 9500S-12 or two 3ware 9500S-8 raid controllers with a battery backup\n> unit (in case of power loss the controller saves unflushed data), a\n> decent tyan board for the existing dual xeon with 2 pci-x slots and a\n> matching 3U case for 12 drives (12 drives internal).\n\nBased on both my testing and feedback from one of the WD Raptor engineers, \nRaptors are still only optimal for 90% read applications. This makes them a \ngreat buy for web applications (which are 95% read usually) but a bad choice \nfor OLTP applicaitons which sounds more like what Steve's describing. For \nthose, it would be better to get 6 quality SCSI drives than 12 Raptors.\n\nThe reason for this is that SATA still doesn't do bi-directional traffic very \nwell (simultaneous read and write) and OSes and controllers simply haven't \ncaught up with the drive spec and features. WD hopes that in a year they \nwill be able to offer a Raptor that performs all operations as well as a 10K \nSCSI drive, for 25% less ... but that's in the next generation of drives, \ncontrollers and drivers.\n\nSteve, can we clarify that you are not currently having any performance \nissues, you're just worried about failure? Recommendations should be based \non whether improving applicaiton speed is a requirement ...\n\n> Here in Germany chassis by Chenbro are quite popular, a matching one for\n> your needs would be the chenbro RM312 or RM414\n> (http://61.30.15.60/product/product_preview.php?pid=90 and\n> http://61.30.15.60/product/product_preview.php?pid=95 respectively).\n\nThe Chenbros are nice, but kinda pricey ($800) if Steve doesn't need the \nmachine to be rackable.\n\nIf your primary goal is redundancy, you may wish to consider the possibility \nof building a brand-new machine for $7k (you can do a lot of machine for \n$7000 if it doesn't have to be rackable) and re-configuring the old machine \nand using it as a replication or PITR backup. This would allow you to \nconfigure the new machine with only a moderate amount of hardware redundancy \nwhile still having 100% confidence in staying running.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 26 Mar 2005 12:55:58 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Cott Lang wrote:\n\n>Have you already considered application/database tuning? Adding\n>indexes? shared_buffers large enough? etc. \n>\n>Your database doesn't seem that large for the hardware you've already\n>got. I'd hate to spend $7k and end up back in the same boat. :)\n> \n>\nCott,\n\nI agree with you. Unfortunately, I am not the developer of the \napplication. The vendor uses ProIV which connects via ODBC. The vendor \ncould certain do some tuning and create more indexes where applicable. I \nam encouraging the vendor to take a more active role and we work \ntogether on this.\n\nWith hardware tuning, I am sure we can do better than 35Mb per sec. Also \nmoving the top 3 or 5 tables and indexes to their own slice of a RAID10 \nand moving pg_xlog to its own drive will help too.\n\nSince you asked about tuned settings, here's what we're using:\n\nkernel.shmmax = 1073741824\nshared_buffers = 10000\nsort_mem = 8192\nvacuum_mem = 65536\neffective_cache_size = 65536\n\n\nSteve Poe\n\n\n\n\n\n",
"msg_date": "Mon, 28 Mar 2005 17:36:46 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Have you already considered application/database tuning? Adding\nindexes? shared_buffers large enough? etc. \n\nYour database doesn't seem that large for the hardware you've already\ngot. I'd hate to spend $7k and end up back in the same boat. :)\n\n\nOn Sat, 2005-03-26 at 13:04 +0000, Steve Poe wrote:\n> >Steve, can we clarify that you are not currently having any performance \n> >issues, you're just worried about failure? Recommendations should be based \n> >on whether improving applicaiton speed is a requirement ...\n> \n> Josh,\n> \n> The priorities are: 1)improve safety/failure-prevention, 2) improve \n> performance.\n> \n> The owner of the company wants greater performance (and, I concure to \n> certain degree), but the owner's vote is only 1/7 of the managment team. \n> And, the rest of the management team is not as focused on performance. \n> They all agree in safety/failure-prevention.\n> \n> Steve\n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Mon, 28 Mar 2005 15:43:14 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\n> With hardware tuning, I am sure we can do better than 35Mb per sec. Also\n\n\tWTF ?\n\n\tMy Laptop does 19 MB/s (reading <10 KB files, reiser4) !\n\n\tA recent desktop 7200rpm IDE drive\n# hdparm -t /dev/hdc1\n/dev/hdc1:\n Timing buffered disk reads: 148 MB in 3.02 seconds = 49.01 MB/sec\n\n# ll \"DragonBall 001.avi\"\n-r--r--r-- 1 peufeu users 218M mar 9 20:07 DragonBall 001.avi\n\n# time cat \"DragonBall 001.avi\" >/dev/null\nreal 0m4.162s\nuser 0m0.020s\nsys 0m0.510s\n\n(the file was not in the cache)\n\t=> about 52 MB/s (reiser3.6)\n\n\tSo, you have a problem with your hardware...\n",
"msg_date": "Tue, 29 Mar 2005 11:48:34 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Yeah, 35Mb per sec is slow for a raid controller, the 3ware mirrored is \nabout 50Mb/sec, and striped is about 100\n\nDave\n\nPFC wrote:\n\n>\n>> With hardware tuning, I am sure we can do better than 35Mb per sec. Also\n>\n>\n> WTF ?\n>\n> My Laptop does 19 MB/s (reading <10 KB files, reiser4) !\n>\n> A recent desktop 7200rpm IDE drive\n> # hdparm -t /dev/hdc1\n> /dev/hdc1:\n> Timing buffered disk reads: 148 MB in 3.02 seconds = 49.01 MB/sec\n>\n> # ll \"DragonBall 001.avi\"\n> -r--r--r-- 1 peufeu users 218M mar 9 20:07 DragonBall \n> 001.avi\n>\n> # time cat \"DragonBall 001.avi\" >/dev/null\n> real 0m4.162s\n> user 0m0.020s\n> sys 0m0.510s\n>\n> (the file was not in the cache)\n> => about 52 MB/s (reiser3.6)\n>\n> So, you have a problem with your hardware...\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n>\n",
"msg_date": "Tue, 29 Mar 2005 07:15:04 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Yeah, 35Mb per sec is slow for a raid controller, the 3ware mirrored is\nabout 50Mb/sec, and striped is about 100\n\nDave\n\nPFC wrote:\n\n>\n>> With hardware tuning, I am sure we can do better than 35Mb per sec. Also\n>\n>\n> WTF ?\n>\n> My Laptop does 19 MB/s (reading <10 KB files, reiser4) !\n>\n> A recent desktop 7200rpm IDE drive\n> # hdparm -t /dev/hdc1\n> /dev/hdc1:\n> Timing buffered disk reads: 148 MB in 3.02 seconds = 49.01 MB/sec\n>\n> # ll \"DragonBall 001.avi\"\n> -r--r--r-- 1 peufeu users 218M mar 9 20:07 DragonBall \n> 001.avi\n>\n> # time cat \"DragonBall 001.avi\" >/dev/null\n> real 0m4.162s\n> user 0m0.020s\n> sys 0m0.510s\n>\n> (the file was not in the cache)\n> => about 52 MB/s (reiser3.6)\n>\n> So, you have a problem with your hardware...\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n>\n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n",
"msg_date": "Tue, 29 Mar 2005 07:17:05 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Mon, 2005-03-28 at 17:36 +0000, Steve Poe wrote:\n\n> I agree with you. Unfortunately, I am not the developer of the \n> application. The vendor uses ProIV which connects via ODBC. The vendor \n> could certain do some tuning and create more indexes where applicable. I \n> am encouraging the vendor to take a more active role and we work \n> together on this.\n\nI've done a lot browsing through pg_stat_activity, looking for queries\nthat either hang around for a while or show up very often, and using\nexplain to find out if they can use some assistance.\n\nYou may also find that a dump and restore with a reconfiguration to\nmirrored drives speeds you up a lot - just from the dump and restore.\n\n> With hardware tuning, I am sure we can do better than 35Mb per sec. Also \n> moving the top 3 or 5 tables and indexes to their own slice of a RAID10 \n> and moving pg_xlog to its own drive will help too.\n\nIf your database activity involves a lot of random i/o, 35Mb per second\nwouldn't be too bad.\n\nWhile conventional wisdom is that pg_xlog on its own drives (I know you\nmeant plural :) ) is a big boost, in my particular case I could never\nget a a measurable boost that way. Obviously, YMMV.\n\n\n\n",
"msg_date": "Tue, 29 Mar 2005 07:52:54 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
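A minimal sketch of that kind of browsing, assuming an 8.0-era pg_stat_activity (stats_command_string must be on for current_query to be populated); the 30-second threshold is an arbitrary choice:

SELECT datname, procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>'
   AND now() - query_start > interval '30 seconds'
 ORDER BY query_start;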
{
"msg_contents": "\nDave Cramer <[email protected]> writes:\n\n> PFC wrote:\n> >\n> > My Laptop does 19 MB/s (reading <10 KB files, reiser4) !\n>\n> Yeah, 35Mb per sec is slow for a raid controller, the 3ware mirrored is\n> about 50Mb/sec, and striped is about 100\n\nWell you're comparing apples and oranges here. A modern 7200rpm drive should\nbe capable of doing 40-50MB/s depending on the location of the data on the\ndisk. \n\nBut that's only doing sequential access of data using something like dd and\nwithout other processes intervening and causing seeks. In practice it seems a\nbusy databases see random_page_costs of about 4 which for a drive with 10ms\nseek time translates to only about 3.2MB/s.\n\nI think the first order of business is getting pg_xlog onto its own device.\nThat alone should remove a lot of the seeking. If it's an ext3 device I would\nalso consider moving the journal to a dedicated drive as well. (or if they're\nscsi drives or you're sure the raid controller is safe from write caching then\njust switch file systems to something that doesn't journal data.)\n\n\n-- \ngreg\n\n",
"msg_date": "29 Mar 2005 13:11:04 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Thanks for everyone's feedback on to best improve our Postgresql \ndatabase for the animal hospital. I re-read the PostgreSQL 8.0 \nPerformance Checklist just to keep focused.\n\nWe purchased (2) 4 x 146GB 10,000rpm SCSI U320 SCA drive arrays ($2600) \nand (1) Sun W2100z dual AMD64 workstation with 4GB RAM ($2500). We did \nnot need a rack-mount server, so I though Sun's workstation would do \nfine. I'll double the RAM. Hopefully, this should out-perform our dual \n2.8 Xeon with 4GB of RAM.\n\nNow, we need to purchase a good U320 RAID card now. Any suggestions for \nthose which run well under Linux?\n\nThese two drive arrays main purpose is for our database. For those \nmessed with drive arrays before, how would you slice-up the drive array? \nWill database performance be effected how our RAID10 is configured? Any \nsuggestions?\n\nThanks.\n\nSteve Poe\n\n\n\n",
"msg_date": "Fri, 01 Apr 2005 02:01:01 +0000",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "I'd use two of your drives to create a mirrored partition where pg_xlog \nresides separate from the actual data.\n\nRAID 10 is probably appropriate for the remaining drives.\n\nFortunately, you're not using Dell, so you don't have to worry about \nthe Perc3/Di RAID controller, which is not so compatible with Linux...\n\n-tfo\n\n --\n Thomas F. O'Connell\n Co-Founder, Information Architect\n Sitening, LLC\n http://www.sitening.com/\n 110 30th Avenue North, Suite 6\n Nashville, TN 37203-6320\n 615-260-0005\n\nOn Mar 31, 2005, at 9:01 PM, Steve Poe wrote:\n\n> Thanks for everyone's feedback on to best improve our Postgresql \n> database for the animal hospital. I re-read the PostgreSQL 8.0 \n> Performance Checklist just to keep focused.\n>\n> We purchased (2) 4 x 146GB 10,000rpm SCSI U320 SCA drive arrays \n> ($2600) and (1) Sun W2100z dual AMD64 workstation with 4GB RAM \n> ($2500). We did not need a rack-mount server, so I though Sun's \n> workstation would do fine. I'll double the RAM. Hopefully, this should \n> out-perform our dual 2.8 Xeon with 4GB of RAM.\n>\n> Now, we need to purchase a good U320 RAID card now. Any suggestions \n> for those which run well under Linux?\n>\n> These two drive arrays main purpose is for our database. For those \n> messed with drive arrays before, how would you slice-up the drive \n> array? Will database performance be effected how our RAID10 is \n> configured? Any suggestions?\n>\n> Thanks.\n>\n> Steve Poe\n\n",
"msg_date": "Fri, 1 Apr 2005 03:09:50 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "\nOn Mar 31, 2005, at 9:01 PM, Steve Poe wrote:\n\n> Now, we need to purchase a good U320 RAID card now. Any suggestions \n> for those which run well under Linux?\n\nNot sure if it works with linux, but under FreeBSD 5, the LSI MegaRAID \ncards are well supported. You should be able to pick up a 320-2X with \n128Mb battery backed cache for about $1k. Wicked fast... I'm suprized \nyou didn't go for the 15k RPM drives for a small extra cost.\n\n",
"msg_date": "Fri, 1 Apr 2005 16:23:13 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "Vivek Khera wrote:\n\n>\n> On Mar 31, 2005, at 9:01 PM, Steve Poe wrote:\n>\n>> Now, we need to purchase a good U320 RAID card now. Any suggestions \n>> for those which run well under Linux?\n>\n>\n> Not sure if it works with linux, but under FreeBSD 5, the LSI MegaRAID \n> cards are well supported. You should be able to pick up a 320-2X with \n> 128Mb battery backed cache for about $1k. Wicked fast... I'm suprized \n> you didn't go for the 15k RPM drives for a small extra cost.\n\n\n\nWow, okay, so I'm not sure where everyone's email went, but I got \nover a weeks worth of list emails at once. \n\nSeveral of you have sent me requests on where we purchased our systems \nat. Compsource was the vendor, www.c-source.com or \nwww.compsource.com. The sales rep we have is Steve Taylor or you \ncan talk to the sales manager Tom. I've bought hardware from them \nfor the last 2 years and I've been very pleased. I'm sorry wasn't able \nto respond sooner.\n\n\nSteve, The LSI MegaRAID cards are where its at. I've had -great- luck \nwith them over the years. There were a few weird problems with a series \nawhile back where the linux driver needed tweaked by the developers \nalong with a new bios update. The 320 series is just as Vivek said, \nwicked fast. Very strong cards. Be sure though when you order it to \nspecificy the battery backup either with it, or make sure you buy the \nright one for it. There are a couple of options with battery cache on \nthe cards that can trip you up.\n\nGood luck on your systems! Now that I've got my email problems \nresolved I'm definitely more than help to give any information you all \nneed.\n",
"msg_date": "Sat, 02 Apr 2005 09:21:36 -0700",
"msg_from": "Will LaShell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "To be honest, I've yet to run across a SCSI configuration that can\ntouch the 3ware SATA controllers. I have yet to see one top 80MB/sec,\nlet alone 180MB/sec read or write, which is why we moved _away_ from\nSCSI. I've seen Compaq, Dell and LSI controllers all do pathetically\nbadly on RAID 1, RAID 5 and RAID 10.\n\n35MB/sec for a three drive RAID 0 is not bad, it's appalling. The\nhardware manufacturer should be publicly embarassed for this kind of\nspeed. A single U320 10k drive can do close to 70MB/sec sustained.\n\nIf someone can offer benchmarks to the contrary (particularly in\nlinux), I would be greatly interested.\n\nAlex Turner\nnetEconomist\n\nOn Mar 29, 2005 8:17 AM, Dave Cramer <[email protected]> wrote:\n> Yeah, 35Mb per sec is slow for a raid controller, the 3ware mirrored is\n> about 50Mb/sec, and striped is about 100\n> \n> Dave\n> \n> PFC wrote:\n> \n> >\n> >> With hardware tuning, I am sure we can do better than 35Mb per sec. Also\n> >\n> >\n> > WTF ?\n> >\n> > My Laptop does 19 MB/s (reading <10 KB files, reiser4) !\n> >\n> > A recent desktop 7200rpm IDE drive\n> > # hdparm -t /dev/hdc1\n> > /dev/hdc1:\n> > Timing buffered disk reads: 148 MB in 3.02 seconds = 49.01 MB/sec\n> >\n> > # ll \"DragonBall 001.avi\"\n> > -r--r--r-- 1 peufeu users 218M mar 9 20:07 DragonBall\n> > 001.avi\n> >\n> > # time cat \"DragonBall 001.avi\" >/dev/null\n> > real 0m4.162s\n> > user 0m0.020s\n> > sys 0m0.510s\n> >\n> > (the file was not in the cache)\n> > => about 52 MB/s (reiser3.6)\n> >\n> > So, you have a problem with your hardware...\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 7: don't forget to increase your free space map settings\n> >\n> >\n> \n> --\n> Dave Cramer\n> http://www.postgresintl.com\n> 519 939 0336\n> ICQ#14675561\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n",
"msg_date": "Mon, 4 Apr 2005 09:43:52 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\n\nAlex Turner wrote:\n\n>To be honest, I've yet to run across a SCSI configuration that can\n>touch the 3ware SATA controllers. I have yet to see one top 80MB/sec,\n>let alone 180MB/sec read or write, which is why we moved _away_ from\n>SCSI. I've seen Compaq, Dell and LSI controllers all do pathetically\n>badly on RAID 1, RAID 5 and RAID 10.\n> \n>\nAlex,\n\nHow does the 3ware controller do in heavy writes back to the database? \nIt may have been Josh, but someone said that SATA does well with reads \nbut not writes. Would not equal amount of SCSI drives outperform SATA? \nI don't want to start a \"whose better\" war, I am just trying to learn \nhere. It would seem the more drives you could place in a RAID \nconfiguration, the performance would increase.\n\nSteve Poe\n\n\n",
"msg_date": "Mon, 04 Apr 2005 07:39:20 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "I'm no drive expert, but it seems to me that our write performance is\nexcellent. I think what most are concerned about is OLTP where you\nare doing heavy write _and_ heavy read performance at the same time.\n\nOur system is mostly read during the day, but we do a full system\nupdate everynight that is all writes, and it's very fast compared to\nthe smaller SCSI system we moved off of. Nearly a 6x spead\nimprovement, as fast as 900 rows/sec with a 48 byte record, one row\nper transaction.\n\nI don't know enough about how SATA works to really comment on it's\nperformance as a protocol compared with SCSI. If anyone has a usefull\nlink on that, it would be greatly appreciated.\n\nMore drives will give more throughput/sec, but not necesarily more\ntransactions/sec. For that you will need more RAM on the controler,\nand defaintely a BBU to keep your data safe.\n\nAlex Turner\nnetEconomist\n\nOn Apr 4, 2005 10:39 AM, Steve Poe <[email protected]> wrote:\n> \n> \n> Alex Turner wrote:\n> \n> >To be honest, I've yet to run across a SCSI configuration that can\n> >touch the 3ware SATA controllers. I have yet to see one top 80MB/sec,\n> >let alone 180MB/sec read or write, which is why we moved _away_ from\n> >SCSI. I've seen Compaq, Dell and LSI controllers all do pathetically\n> >badly on RAID 1, RAID 5 and RAID 10.\n> >\n> >\n> Alex,\n> \n> How does the 3ware controller do in heavy writes back to the database?\n> It may have been Josh, but someone said that SATA does well with reads\n> but not writes. Would not equal amount of SCSI drives outperform SATA?\n> I don't want to start a \"whose better\" war, I am just trying to learn\n> here. It would seem the more drives you could place in a RAID\n> configuration, the performance would increase.\n> \n> Steve Poe\n> \n>\n",
"msg_date": "Mon, 4 Apr 2005 15:12:20 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\nOn Apr 4, 2005, at 3:12 PM, Alex Turner wrote:\n\n> Our system is mostly read during the day, but we do a full system\n> update everynight that is all writes, and it's very fast compared to\n> the smaller SCSI system we moved off of. Nearly a 6x spead\n> improvement, as fast as 900 rows/sec with a 48 byte record, one row\n> per transaction.\n>\n\nWell, if you're not heavily multitasking, the advantage of SCSI is lost \non you.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n",
"msg_date": "Mon, 4 Apr 2005 15:23:33 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "I'm doing some research on SATA vs SCSI right now, but to be honest\nI'm not turning up much at the protocol level. Alot of stupid\nbenchmarks comparing 10k Raptor drives against Top of the line 15k\ndrives, where usnurprsingly the SCSI drives win but of course cost 4\ntimes as much. Although even in some, SATA wins, or draws. I'm\ntrying to find something more apples to apples. 10k to 10k.\n\nAlex Turner\nnetEconomist\n\n\n\nOn Apr 4, 2005 3:23 PM, Vivek Khera <[email protected]> wrote:\n> \n> On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:\n> \n> > Our system is mostly read during the day, but we do a full system\n> > update everynight that is all writes, and it's very fast compared to\n> > the smaller SCSI system we moved off of. Nearly a 6x spead\n> > improvement, as fast as 900 rows/sec with a 48 byte record, one row\n> > per transaction.\n> >\n> \n> Well, if you're not heavily multitasking, the advantage of SCSI is lost\n> on you.\n> \n> Vivek Khera, Ph.D.\n> +1-301-869-4449 x806\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Mon, 4 Apr 2005 15:33:35 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Thomas F.O'Connell wrote:\n> I'd use two of your drives to create a mirrored partition where pg_xlog \n> resides separate from the actual data.\n> \n> RAID 10 is probably appropriate for the remaining drives.\n> \n> Fortunately, you're not using Dell, so you don't have to worry about \n> the Perc3/Di RAID controller, which is not so compatible with\n> Linux...\n\nHmm...I have to wonder how true this is these days.\n\nMy company has a Dell 2500 with a Perc3/Di running Debian Linux, with\nthe 2.6.10 kernel. The controller seems to work reasonably well,\nthough I wouldn't doubt that it's slower than a different one might\nbe. But so far we haven't had any reliability issues with it.\n\nNow, the performance is pretty bad considering the setup -- a RAID 5\nwith five 73.6 gig SCSI disks (10K RPM, I believe). Reads through the\nfilesystem come through at about 65 megabytes/sec, writes about 35\nmegabytes/sec (at least, so says \"bonnie -s 8192\"). This is on a\nsystem with a single 3 GHz Xeon and 1 gigabyte of memory. I'd expect\nmuch better read performance from what is essentially a stripe of 4\nfast SCSI disks.\n\n\nWhile compatibility hasn't really been an issue, at least as far as\nthe basics go, I still agree with your general sentiment -- stay away\nfrom the Dells, at least if they have the Perc3/Di controller. You'll\nprobably get much better performance out of something else.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Tue, 5 Apr 2005 21:44:56 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "Alex Turner wrote:\n> I'm no drive expert, but it seems to me that our write performance is\n> excellent. I think what most are concerned about is OLTP where you\n> are doing heavy write _and_ heavy read performance at the same time.\n> \n> Our system is mostly read during the day, but we do a full system\n> update everynight that is all writes, and it's very fast compared to\n> the smaller SCSI system we moved off of. Nearly a 6x spead\n> improvement, as fast as 900 rows/sec with a 48 byte record, one row\n> per transaction.\n\nI've started with SATA in a multi-read/multi-write environment. While it \nran pretty good with 1 thread writing, the addition of a 2nd thread \n(whether reading or writing) would cause exponential slowdowns.\n\nI suffered through this for a week and then switched to SCSI. Single \nthreaded performance was pretty similar but with the advanced command \nqueueing SCSI has, I was able to do multiple reads/writes simultaneously \nwith only a small performance hit for each thread.\n\nPerhaps having a SATA caching raid controller might help this situation. \nI don't know. It's pretty hard justifying buying a $$$ 3ware controller \njust to test it when you could spend the same money on SCSI and have a \nguarantee it'll work good under multi-IO scenarios.\n",
"msg_date": "Wed, 06 Apr 2005 00:30:44 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Tue, Apr 05, 2005 at 09:44:56PM -0700, Kevin Brown wrote:\n> Now, the performance is pretty bad considering the setup -- a RAID 5\n> with five 73.6 gig SCSI disks (10K RPM, I believe). Reads through the\n> filesystem come through at about 65 megabytes/sec, writes about 35\n> megabytes/sec (at least, so says \"bonnie -s 8192\"). This is on a\n> system with a single 3 GHz Xeon and 1 gigabyte of memory. I'd expect\n> much better read performance from what is essentially a stripe of 4\n> fast SCSI disks.\n\nData point here: We have a Linux software RAID quite close to the setup you\ndescribe, with an onboard Adaptec controller and four 146GB 10000rpm disks,\nand we get about 65MB/sec sustained when writing to an ext3 filesystem\n(actually, when wgetting a file off the gigabit LAN :-) ). I haven't tested\nreading, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Apr 2005 13:28:17 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "\n> and we get about 65MB/sec sustained when writing to an ext3 filesystem\n> (actually, when wgetting a file off the gigabit LAN :-) ). I haven't\n\n\tWell, unless you have PCI 64 bits, the \"standard\" PCI does 133 MB/s which \nis then split exactly in two times 66.5 MB/s for 1) reading from the PCI \nnetwork card and 2) writing to the PCI harddisk controller. No wonder you \nget this figure, you're able to saturate your PCI bus, but it does not \ntell you a thing on the performance of your disk or network card... Note \nthat the server which serves the file is limited in the same way unless \nthe file is in cache (RAM) or it's PCI64. So...\n\n\n> tested\n> reading, though.\n>\n> /* Steinar */\n\n\n",
"msg_date": "Wed, 06 Apr 2005 15:26:33 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "On Wed, Apr 06, 2005 at 03:26:33PM +0200, PFC wrote:\n> \tWell, unless you have PCI 64 bits, the \"standard\" PCI does 133 MB/s \n> \twhich is then split exactly in two times 66.5 MB/s for 1) reading from the \n> PCI network card and 2) writing to the PCI harddisk controller. No wonder \n> you get this figure, you're able to saturate your PCI bus, but it does not \n> tell you a thing on the performance of your disk or network card... Note \n> that the server which serves the file is limited in the same way unless \n> the file is in cache (RAM) or it's PCI64. So...\n\nThis is PCI-X.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Apr 2005 15:33:48 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "It's hardly the same money, the drives are twice as much.\n\nIt's all about the controller baby with any kind of dive. A bad SCSI\ncontroller will give sucky performance too, believe me. We had a\nCompaq Smart Array 5304, and it's performance was _very_ sub par.\n\nIf someone has a simple benchmark test database to run, I would be\nhappy to run it on our hardware here.\n\nAlex Turner\n\nOn Apr 6, 2005 3:30 AM, William Yu <[email protected]> wrote:\n> Alex Turner wrote:\n> > I'm no drive expert, but it seems to me that our write performance is\n> > excellent. I think what most are concerned about is OLTP where you\n> > are doing heavy write _and_ heavy read performance at the same time.\n> >\n> > Our system is mostly read during the day, but we do a full system\n> > update everynight that is all writes, and it's very fast compared to\n> > the smaller SCSI system we moved off of. Nearly a 6x spead\n> > improvement, as fast as 900 rows/sec with a 48 byte record, one row\n> > per transaction.\n> \n> I've started with SATA in a multi-read/multi-write environment. While it\n> ran pretty good with 1 thread writing, the addition of a 2nd thread\n> (whether reading or writing) would cause exponential slowdowns.\n> \n> I suffered through this for a week and then switched to SCSI. Single\n> threaded performance was pretty similar but with the advanced command\n> queueing SCSI has, I was able to do multiple reads/writes simultaneously\n> with only a small performance hit for each thread.\n> \n> Perhaps having a SATA caching raid controller might help this situation.\n> I don't know. It's pretty hard justifying buying a $$$ 3ware controller\n> just to test it when you could spend the same money on SCSI and have a\n> guarantee it'll work good under multi-IO scenarios.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Wed, 6 Apr 2005 11:35:10 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "It's the same money if you factor in the 3ware controller. Even without \na caching controller, SCSI works good in multi-threaded IO (not \nwithstanding crappy shit from Dell or Compaq). You can get such cards \nfrom LSI for $75. And of course, many server MBs come with LSI \ncontrollers built-in. Our older 32-bit production servers all use Linux \nsoftware RAID w/ SCSI and there's no issues when multiple \nusers/processes hit the DB.\n\n*Maybe* a 3ware controller w/ onboard cache + battery backup might do \nmuch better for multi-threaded IO than just plain-jane SATA. \nUnfortunately, I have not been able to find anything online that can \nconfirm or deny this. Hence, the choice is spend $$$ on the 3ware \ncontroller and hope it meets your needs -- or spend $$$ on SCSI drives \nand be sure.\n\nNow if you want to run such tests, we'd all be delighted with to see the \nresults so we have another option for building servers.\n\n\nAlex Turner wrote:\n> It's hardly the same money, the drives are twice as much.\n> \n> It's all about the controller baby with any kind of dive. A bad SCSI\n> controller will give sucky performance too, believe me. We had a\n> Compaq Smart Array 5304, and it's performance was _very_ sub par.\n> \n> If someone has a simple benchmark test database to run, I would be\n> happy to run it on our hardware here.\n> \n> Alex Turner\n> \n> On Apr 6, 2005 3:30 AM, William Yu <[email protected]> wrote:\n> \n>>Alex Turner wrote:\n>>\n>>>I'm no drive expert, but it seems to me that our write performance is\n>>>excellent. I think what most are concerned about is OLTP where you\n>>>are doing heavy write _and_ heavy read performance at the same time.\n>>>\n>>>Our system is mostly read during the day, but we do a full system\n>>>update everynight that is all writes, and it's very fast compared to\n>>>the smaller SCSI system we moved off of. Nearly a 6x spead\n>>>improvement, as fast as 900 rows/sec with a 48 byte record, one row\n>>>per transaction.\n>>\n>>I've started with SATA in a multi-read/multi-write environment. While it\n>>ran pretty good with 1 thread writing, the addition of a 2nd thread\n>>(whether reading or writing) would cause exponential slowdowns.\n>>\n>>I suffered through this for a week and then switched to SCSI. Single\n>>threaded performance was pretty similar but with the advanced command\n>>queueing SCSI has, I was able to do multiple reads/writes simultaneously\n>>with only a small performance hit for each thread.\n>>\n>>Perhaps having a SATA caching raid controller might help this situation.\n>>I don't know. It's pretty hard justifying buying a $$$ 3ware controller\n>>just to test it when you could spend the same money on SCSI and have a\n>>guarantee it'll work good under multi-IO scenarios.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 8: explain analyze is your friend\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n",
"msg_date": "Wed, 06 Apr 2005 13:01:35 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Well - unfortuantely software RAID isn't appropriate for everyone, and\nsome of us need a hardware RAID controller. The LSI Megaraid 320-2\ncard is almost exactly the same price as the 3ware 9500S-12 card\n(although I will conceed that a 320-2 card can handle at most 2x14\ndevices compare with the 12 on the 9500S).\n\nIf someone can come up with a test, I will be happy to run it and see\nhow it goes. I would be _very_ interested in the results having just\nspent $7k on a new DB server!!\n\nI have also seen really bad performance out of SATA. It was with\neither an on-board controller, or a cheap RAID controller from\nHighPoint. As soon as I put in a decent controller, things went much\nbetter. I think it's unfair to base your opinion of SATA from a test\nthat had a poor controler.\n\nI know I'm not the only one here running SATA RAID and being very\nsatisfied with the results.\n\nThanks,\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 4:01 PM, William Yu <[email protected]> wrote:\n> It's the same money if you factor in the 3ware controller. Even without\n> a caching controller, SCSI works good in multi-threaded IO (not\n> withstanding crappy shit from Dell or Compaq). You can get such cards\n> from LSI for $75. And of course, many server MBs come with LSI\n> controllers built-in. Our older 32-bit production servers all use Linux\n> software RAID w/ SCSI and there's no issues when multiple\n> users/processes hit the DB.\n> \n> *Maybe* a 3ware controller w/ onboard cache + battery backup might do\n> much better for multi-threaded IO than just plain-jane SATA.\n> Unfortunately, I have not been able to find anything online that can\n> confirm or deny this. Hence, the choice is spend $$$ on the 3ware\n> controller and hope it meets your needs -- or spend $$$ on SCSI drives\n> and be sure.\n> \n> Now if you want to run such tests, we'd all be delighted with to see the\n> results so we have another option for building servers.\n> \n> \n> Alex Turner wrote:\n> > It's hardly the same money, the drives are twice as much.\n> >\n> > It's all about the controller baby with any kind of dive. A bad SCSI\n> > controller will give sucky performance too, believe me. We had a\n> > Compaq Smart Array 5304, and it's performance was _very_ sub par.\n> >\n> > If someone has a simple benchmark test database to run, I would be\n> > happy to run it on our hardware here.\n> >\n> > Alex Turner\n> >\n> > On Apr 6, 2005 3:30 AM, William Yu <[email protected]> wrote:\n> >\n> >>Alex Turner wrote:\n> >>\n> >>>I'm no drive expert, but it seems to me that our write performance is\n> >>>excellent. I think what most are concerned about is OLTP where you\n> >>>are doing heavy write _and_ heavy read performance at the same time.\n> >>>\n> >>>Our system is mostly read during the day, but we do a full system\n> >>>update everynight that is all writes, and it's very fast compared to\n> >>>the smaller SCSI system we moved off of. Nearly a 6x spead\n> >>>improvement, as fast as 900 rows/sec with a 48 byte record, one row\n> >>>per transaction.\n> >>\n> >>I've started with SATA in a multi-read/multi-write environment. While it\n> >>ran pretty good with 1 thread writing, the addition of a 2nd thread\n> >>(whether reading or writing) would cause exponential slowdowns.\n> >>\n> >>I suffered through this for a week and then switched to SCSI. 
Single\n> >>threaded performance was pretty similar but with the advanced command\n> >>queueing SCSI has, I was able to do multiple reads/writes simultaneously\n> >>with only a small performance hit for each thread.\n> >>\n> >>Perhaps having a SATA caching raid controller might help this situation.\n> >>I don't know. It's pretty hard justifying buying a $$$ 3ware controller\n> >>just to test it when you could spend the same money on SCSI and have a\n> >>guarantee it'll work good under multi-IO scenarios.\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 8: explain analyze is your friend\n> >>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Wed, 6 Apr 2005 18:12:06 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
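The repeated "someone come up with a test" requests above suggest a crude but concrete starting point. The sketch below is only an illustration, not anyone's agreed benchmark: it assumes Python with the psycopg2 driver, a throwaway database named benchdb, and an invented table name, and it measures the one-row-per-transaction commit rate that the earlier "900 rows/sec with a 48 byte record" figure refers to.

    # Rough single-row-commit throughput sketch (all names here are assumptions:
    # psycopg2 driver, database "benchdb", table "io_bench").
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=benchdb")
    cur = conn.cursor()
    cur.execute("CREATE TABLE io_bench (id serial PRIMARY KEY, payload char(48))")
    conn.commit()

    rows = 5000
    start = time.time()
    for i in range(rows):
        cur.execute("INSERT INTO io_bench (payload) VALUES (%s)", ("x" * 48,))
        conn.commit()   # one row per transaction, so every iteration waits on the disk
    elapsed = time.time() - start

    print("%d single-row commits in %.1f s -> %.0f rows/sec" % (rows, elapsed, rows / elapsed))
    cur.close()
    conn.close()

Because every commit forces the WAL to disk, this mostly measures synchronous write latency rather than raw bandwidth, which is exactly the dimension the SCSI-vs-SATA argument is about.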
{
"msg_contents": "Sorry if I'm pointing out the obvious here, but it seems worth\nmentioning. AFAIK all 3ware controllers are setup so that each SATA\ndrive gets it's own SATA bus. My understanding is that by and large,\nSATA still suffers from a general inability to have multiple outstanding\ncommands on the bus at once, unlike SCSI. Therefore, to get good\nperformance out of SATA you need to have a seperate bus for each drive.\nTheoretically, it shouldn't really matter that it's SATA over ATA, other\nthan I certainly wouldn't want to try and cram 8 ATA cables into a\nmachine...\n\nIncidentally, when we were investigating storage options at a previous\njob we talked to someone who deals with RS/6000 storage. He had a bunch\nof info about their serial controller protocol (which I can't think of\nthe name of) vs SCSI. SCSI had a lot more overhead, so you could end up\nsaturating even a 160MB SCSI bus with only 2 or 3 drives.\n\nPeople are finally realizing how important bandwidth has become in\nmodern machines. Memory bandwidth is why RS/6000 was (and maybe still\nis) cleaning Sun's clock, and it's why the Opteron blows Itaniums out of\nthe water. Likewise it's why SCSI is so much better than IDE (unless you\njust give each drive it's own dedicated bandwidth).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 6 Apr 2005 17:41:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "I guess I'm setting myself up here, and I'm really not being ignorant,\nbut can someone explain exactly how is SCSI is supposed to better than\nSATA?\n\nBoth systems use drives with platters. Each drive can physically only\nread one thing at a time.\n\nSATA gives each drive it's own channel, but you have to share in SCSI.\n A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\nSCSI can only do 320MB/sec across the entire array.\n\nWhat am I missing here?\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 5:41 PM, Jim C. Nasby <[email protected]> wrote:\n> Sorry if I'm pointing out the obvious here, but it seems worth\n> mentioning. AFAIK all 3ware controllers are setup so that each SATA\n> drive gets it's own SATA bus. My understanding is that by and large,\n> SATA still suffers from a general inability to have multiple outstanding\n> commands on the bus at once, unlike SCSI. Therefore, to get good\n> performance out of SATA you need to have a seperate bus for each drive.\n> Theoretically, it shouldn't really matter that it's SATA over ATA, other\n> than I certainly wouldn't want to try and cram 8 ATA cables into a\n> machine...\n> \n> Incidentally, when we were investigating storage options at a previous\n> job we talked to someone who deals with RS/6000 storage. He had a bunch\n> of info about their serial controller protocol (which I can't think of\n> the name of) vs SCSI. SCSI had a lot more overhead, so you could end up\n> saturating even a 160MB SCSI bus with only 2 or 3 drives.\n> \n> People are finally realizing how important bandwidth has become in\n> modern machines. Memory bandwidth is why RS/6000 was (and maybe still\n> is) cleaning Sun's clock, and it's why the Opteron blows Itaniums out of\n> the water. Likewise it's why SCSI is so much better than IDE (unless you\n> just give each drive it's own dedicated bandwidth).\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n>\n",
"msg_date": "Wed, 6 Apr 2005 19:32:50 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Ok - so I found this fairly good online review of various SATA cards\nout there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.\n\nhttp://www.tweakers.net/reviews/557/\n\nVery interesting stuff.\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 7:32 PM, Alex Turner <[email protected]> wrote:\n> I guess I'm setting myself up here, and I'm really not being ignorant,\n> but can someone explain exactly how is SCSI is supposed to better than\n> SATA?\n> \n> Both systems use drives with platters. Each drive can physically only\n> read one thing at a time.\n> \n> SATA gives each drive it's own channel, but you have to share in SCSI.\n> A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> SCSI can only do 320MB/sec across the entire array.\n> \n> What am I missing here?\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 6, 2005 5:41 PM, Jim C. Nasby <[email protected]> wrote:\n> > Sorry if I'm pointing out the obvious here, but it seems worth\n> > mentioning. AFAIK all 3ware controllers are setup so that each SATA\n> > drive gets it's own SATA bus. My understanding is that by and large,\n> > SATA still suffers from a general inability to have multiple outstanding\n> > commands on the bus at once, unlike SCSI. Therefore, to get good\n> > performance out of SATA you need to have a seperate bus for each drive.\n> > Theoretically, it shouldn't really matter that it's SATA over ATA, other\n> > than I certainly wouldn't want to try and cram 8 ATA cables into a\n> > machine...\n> >\n> > Incidentally, when we were investigating storage options at a previous\n> > job we talked to someone who deals with RS/6000 storage. He had a bunch\n> > of info about their serial controller protocol (which I can't think of\n> > the name of) vs SCSI. SCSI had a lot more overhead, so you could end up\n> > saturating even a 160MB SCSI bus with only 2 or 3 drives.\n> >\n> > People are finally realizing how important bandwidth has become in\n> > modern machines. Memory bandwidth is why RS/6000 was (and maybe still\n> > is) cleaning Sun's clock, and it's why the Opteron blows Itaniums out of\n> > the water. Likewise it's why SCSI is so much better than IDE (unless you\n> > just give each drive it's own dedicated bandwidth).\n> > --\n> > Jim C. Nasby, Database Consultant [email protected]\n> > Give your computer some brain candy! www.distributed.net Team #1828\n> >\n> > Windows: \"Where do you want to go today?\"\n> > Linux: \"Where do you want to go tomorrow?\"\n> > FreeBSD: \"Are you guys coming, or what?\"\n> >\n>\n",
"msg_date": "Wed, 6 Apr 2005 20:12:14 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Ok - I take it back - I'm reading through this now, and realising that\nthe reviews are pretty clueless in several places...\n\n\nOn Apr 6, 2005 8:12 PM, Alex Turner <[email protected]> wrote:\n> Ok - so I found this fairly good online review of various SATA cards\n> out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.\n> \n> http://www.tweakers.net/reviews/557/\n> \n> Very interesting stuff.\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 6, 2005 7:32 PM, Alex Turner <[email protected]> wrote:\n> > I guess I'm setting myself up here, and I'm really not being ignorant,\n> > but can someone explain exactly how is SCSI is supposed to better than\n> > SATA?\n> >\n> > Both systems use drives with platters. Each drive can physically only\n> > read one thing at a time.\n> >\n> > SATA gives each drive it's own channel, but you have to share in SCSI.\n> > A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> > SCSI can only do 320MB/sec across the entire array.\n> >\n> > What am I missing here?\n> >\n> > Alex Turner\n> > netEconomist\n> >\n> > On Apr 6, 2005 5:41 PM, Jim C. Nasby <[email protected]> wrote:\n> > > Sorry if I'm pointing out the obvious here, but it seems worth\n> > > mentioning. AFAIK all 3ware controllers are setup so that each SATA\n> > > drive gets it's own SATA bus. My understanding is that by and large,\n> > > SATA still suffers from a general inability to have multiple outstanding\n> > > commands on the bus at once, unlike SCSI. Therefore, to get good\n> > > performance out of SATA you need to have a seperate bus for each drive.\n> > > Theoretically, it shouldn't really matter that it's SATA over ATA, other\n> > > than I certainly wouldn't want to try and cram 8 ATA cables into a\n> > > machine...\n> > >\n> > > Incidentally, when we were investigating storage options at a previous\n> > > job we talked to someone who deals with RS/6000 storage. He had a bunch\n> > > of info about their serial controller protocol (which I can't think of\n> > > the name of) vs SCSI. SCSI had a lot more overhead, so you could end up\n> > > saturating even a 160MB SCSI bus with only 2 or 3 drives.\n> > >\n> > > People are finally realizing how important bandwidth has become in\n> > > modern machines. Memory bandwidth is why RS/6000 was (and maybe still\n> > > is) cleaning Sun's clock, and it's why the Opteron blows Itaniums out of\n> > > the water. Likewise it's why SCSI is so much better than IDE (unless you\n> > > just give each drive it's own dedicated bandwidth).\n> > > --\n> > > Jim C. Nasby, Database Consultant [email protected]\n> > > Give your computer some brain candy! www.distributed.net Team #1828\n> > >\n> > > Windows: \"Where do you want to go today?\"\n> > > Linux: \"Where do you want to go tomorrow?\"\n> > > FreeBSD: \"Are you guys coming, or what?\"\n> > >\n> >\n>\n",
"msg_date": "Wed, 6 Apr 2005 20:23:59 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> SATA gives each drive it's own channel, but you have to share in SCSI.\n> A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> SCSI can only do 320MB/sec across the entire array.\n\nSCSI controllers often have separate channels for each device too.\n\nIn any case the issue with the IDE protocol is that fundamentally you can only\nhave a single command pending. SCSI can have many commands pending. This is\nespecially important for a database like postgres that may be busy committing\none transaction while another is trying to read. Having several commands\nqueued on the drive gives it a chance to execute any that are \"on the way\" to\nthe committing transaction.\n\nHowever I'm under the impression that 3ware has largely solved this problem.\nAlso, if you save a few dollars and can afford one additional drive that\nadditional drive may improve your array speed enough to overcome that\ninefficiency.\n\n-- \ngreg\n\n",
"msg_date": "06 Apr 2005 23:00:54 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Yeah - the more reading I'm doing - the more I'm finding out.\n\nAlledgelly the Western Digial Raptor drives implement a version of\nATA-4 Tagged Queing which allows reordering of commands. Some\ncontrollers support this. The 3ware docs say that the controller\nsupport both reordering on the controller and to the drive. *shrug*\n\nThis of course is all supposed to go away with SATA II which as NCQ,\nNative Command Queueing. Of course the 3ware controllers don't\nsupport SATA II, but a few other do, and I'm sure 3ware will come out\nwith a controller that does.\n\nAlex Turner\nnetEconomist\n\nOn 06 Apr 2005 23:00:54 -0400, Greg Stark <[email protected]> wrote:\n> \n> Alex Turner <[email protected]> writes:\n> \n> > SATA gives each drive it's own channel, but you have to share in SCSI.\n> > A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> > SCSI can only do 320MB/sec across the entire array.\n> \n> SCSI controllers often have separate channels for each device too.\n> \n> In any case the issue with the IDE protocol is that fundamentally you can only\n> have a single command pending. SCSI can have many commands pending. This is\n> especially important for a database like postgres that may be busy committing\n> one transaction while another is trying to read. Having several commands\n> queued on the drive gives it a chance to execute any that are \"on the way\" to\n> the committing transaction.\n> \n> However I'm under the impression that 3ware has largely solved this problem.\n> Also, if you save a few dollars and can afford one additional drive that\n> additional drive may improve your array speed enough to overcome that\n> inefficiency.\n> \n> --\n> greg\n> \n>\n",
"msg_date": "Wed, 6 Apr 2005 22:06:47 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> In any case the issue with the IDE protocol is that fundamentally you\n> can only have a single command pending. SCSI can have many commands\n> pending.\n\nThat's the bottom line: the SCSI protocol was designed (twenty years ago!)\nto allow the drive to do physical I/O scheduling, because the CPU can\nissue multiple commands before the drive has to report completion of the\nfirst one. IDE isn't designed to do that. I understand that the latest\nrevisions to the IDE/ATA specs allow the drive to do this sort of thing,\nbut support for it is far from widespread.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 00:14:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
{
"msg_contents": "Things might've changed somewhat over the past year, but this is from \n_the_ Linux guy at Dell...\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source — Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\n\nDate: Mon, 26 Apr 2004 14:15:02 -0500\nFrom: Matt Domsch <[email protected]>\nTo: [email protected]\nSubject: PERC3/Di failure workaround hypothesis\n\n\n--uXxzq0nDebZQVNAZ\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\nContent-Transfer-Encoding: quoted-printable\n\nOn Mon, Apr 26, 2004 at 11:10:36AM -0500, Sellek, Greg wrote:\n > Short of ordering a Perc4 for every 2650 that I want to upgrade to RH\n > ES, is there anything else I can do to get around the Perc3/Di\n > problem?\n\nOur working hypothesis for a workaround is to do as follows:\n\nIn afacli, set:\n\nRead Cache: enabled\nWrite Cache: enabled when protected\n\nThen unplug the ROMB battery. A reboot is not necessary. The firmware \nwill immediately drop into Write-Through Cache mode, which in our \ntesting has not exhibited the problem. Setting the write cache to \ndisabled in afacli doesn't seem to help - you've got to unplug the \nbattery with it in the above settings.\n\nWe are continuing to search for the root cause to the problem, and will \nupdate the list when we can.\n\nThanks,\nMatt\n\n--\nMatt Domsch\nSr. Software Engineer, Lead Engineer\nDell Linux Solutions linux.dell.com & www.dell.com/linux\nLinux on Dell mailing lists @ http://lists.us.dell.com\n\nOn Apr 5, 2005, at 11:44 PM, Kevin Brown wrote:\n\n> Thomas F.O'Connell wrote:\n>> I'd use two of your drives to create a mirrored partition where \n>> pg_xlog\n>> resides separate from the actual data.\n>>\n>> RAID 10 is probably appropriate for the remaining drives.\n>>\n>> Fortunately, you're not using Dell, so you don't have to worry about\n>> the Perc3/Di RAID controller, which is not so compatible with\n>> Linux...\n>\n> Hmm...I have to wonder how true this is these days.\n>\n> My company has a Dell 2500 with a Perc3/Di running Debian Linux, with\n> the 2.6.10 kernel. The controller seems to work reasonably well,\n> though I wouldn't doubt that it's slower than a different one might\n> be. But so far we haven't had any reliability issues with it.\n>\n> Now, the performance is pretty bad considering the setup -- a RAID 5\n> with five 73.6 gig SCSI disks (10K RPM, I believe). Reads through the\n> filesystem come through at about 65 megabytes/sec, writes about 35\n> megabytes/sec (at least, so says \"bonnie -s 8192\"). This is on a\n> system with a single 3 GHz Xeon and 1 gigabyte of memory. I'd expect\n> much better read performance from what is essentially a stripe of 4\n> fast SCSI disks.\n>\n>\n> While compatibility hasn't really been an issue, at least as far as\n> the basics go, I still agree with your general sentiment -- stay away\n> from the Dells, at least if they have the Perc3/Di controller. You'll\n> probably get much better performance out of something else.\n>\n>\n> -- \n> Kevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Wed, 6 Apr 2005 23:40:26 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Follow-Up: How to improve db performance with $7K?"
},
{
"msg_contents": "You asked for it! ;-)\n\nIf you want cheap, get SATA. If you want fast under\n*load* conditions, get SCSI. Everything else at this\ntime is marketing hype, either intentional or learned.\nIgnoring dollars, expect to see SCSI beat SATA by 40%.\n\n * * * What I tell you three times is true * * *\n\nAlso, compare the warranty you get with any SATA\ndrive with any SCSI drive. Yes, you still have some\nchange leftover to buy more SATA drives when they\nfail, but... it fundamentally comes down to some\nactual implementation and not what is printed on\nthe cardboard box. Disk systems are bound by the\nrules of queueing theory. You can hit the sales rep\nover the head with your queueing theory book.\n\nUltra320 SCSI is king of the hill for high concurrency\ndatabases. If you're only streaming or serving files,\nsave some money and get a bunch of SATA drives.\nBut if you're reading/writing all over the disk, the\nsimple first-come-first-serve SATA heuristic will\nhose your performance under load conditions.\n\nNext year, they will *try* bring out some SATA cards\nthat improve on first-come-first-serve, but they ain't\nhere now. There are a lot of rigged performance tests\nout there... Maybe by the time they fix the queueing\nproblems, serial Attached SCSI (a/k/a SAS) will be out.\nLooks like Ultra320 is the end of the line for parallel\nSCSI, as Ultra640 SCSI (a/k/a SPI-5) is dead in the\nwater.\n\nUltra320 SCSI.\nUltra320 SCSI.\nUltra320 SCSI.\n\nSerial Attached SCSI.\nSerial Attached SCSI.\nSerial Attached SCSI.\n\nFor future trends, see:\nhttp://www.incits.org/archive/2003/in031163/in031163.htm\n\n douglas\n\np.s. For extra credit, try comparing SATA and SCSI drives\nwhen they're 90% full.\n\nOn Apr 6, 2005, at 8:32 PM, Alex Turner wrote:\n\n> I guess I'm setting myself up here, and I'm really not being ignorant,\n> but can someone explain exactly how is SCSI is supposed to better than\n> SATA?\n>\n> Both systems use drives with platters. Each drive can physically only\n> read one thing at a time.\n>\n> SATA gives each drive it's own channel, but you have to share in SCSI.\n> A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> SCSI can only do 320MB/sec across the entire array.\n>\n> What am I missing here?\n>\n> Alex Turner\n> netEconomist\n\n",
"msg_date": "Thu, 7 Apr 2005 00:58:33 -0400",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
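The "rules of queueing theory" claim above can be made less hand-wavy with a toy M/M/1 calculation: for a single server handling random arrivals first-come-first-served, average response time is R = S / (1 - U), where S is the service time and U the utilization. The numbers below are invented purely to show the shape of the curve; they are not measurements of any particular drive.

    # Toy M/M/1 queue: response time blows up as a FIFO disk approaches saturation.
    # The 8 ms service time and the arrival rates are assumptions for illustration.
    service_time = 0.008                      # seconds per random I/O (assumed)

    for arrivals_per_sec in (25, 50, 75, 100, 115):
        utilization = arrivals_per_sec * service_time
        if utilization >= 1.0:
            print("%3d req/s: saturated (utilization %.2f)" % (arrivals_per_sec, utilization))
            continue
        response_ms = 1000.0 * service_time / (1.0 - utilization)
        print("%3d req/s: utilization %.2f, avg response %6.1f ms"
              % (arrivals_per_sec, utilization, response_ms))

At 25 requests/second the average response is about 10 ms; at 115 requests/second the same drive is past 90% busy and responses are around 100 ms, which is the "under load" behavior being argued about.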
{
"msg_contents": "A good one page discussion on the future of SCSI and SATA can\nbe found in the latest CHIPS (The Department of the Navy Information\nTechnology Magazine, formerly CHIPS AHOY) in an article by\nPatrick G. Koehler and Lt. Cmdr. Stan Bush.\n\nClick below if you don't mind being logged visiting Space and Naval\nWarfare Systems Center Charleston:\n\n http://www.chips.navy.mil/archives/05_Jan/web_pages/scuzzy.htm\n\n",
"msg_date": "Thu, 7 Apr 2005 05:55:59 -0400",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Another simple question: Why is SCSI more expensive? After the\neleventy-millionth controller is made, it seems like SCSI and SATA are\nusing a controller board and a spinning disk. Is somebody still making\nmoney by licensing SCSI technology?\n\nRick\n\[email protected] wrote on 04/06/2005 11:58:33 PM:\n\n> You asked for it! ;-)\n>\n> If you want cheap, get SATA. If you want fast under\n> *load* conditions, get SCSI. Everything else at this\n> time is marketing hype, either intentional or learned.\n> Ignoring dollars, expect to see SCSI beat SATA by 40%.\n>\n> * * * What I tell you three times is true * * *\n>\n> Also, compare the warranty you get with any SATA\n> drive with any SCSI drive. Yes, you still have some\n> change leftover to buy more SATA drives when they\n> fail, but... it fundamentally comes down to some\n> actual implementation and not what is printed on\n> the cardboard box. Disk systems are bound by the\n> rules of queueing theory. You can hit the sales rep\n> over the head with your queueing theory book.\n>\n> Ultra320 SCSI is king of the hill for high concurrency\n> databases. If you're only streaming or serving files,\n> save some money and get a bunch of SATA drives.\n> But if you're reading/writing all over the disk, the\n> simple first-come-first-serve SATA heuristic will\n> hose your performance under load conditions.\n>\n> Next year, they will *try* bring out some SATA cards\n> that improve on first-come-first-serve, but they ain't\n> here now. There are a lot of rigged performance tests\n> out there... Maybe by the time they fix the queueing\n> problems, serial Attached SCSI (a/k/a SAS) will be out.\n> Looks like Ultra320 is the end of the line for parallel\n> SCSI, as Ultra640 SCSI (a/k/a SPI-5) is dead in the\n> water.\n>\n> Ultra320 SCSI.\n> Ultra320 SCSI.\n> Ultra320 SCSI.\n>\n> Serial Attached SCSI.\n> Serial Attached SCSI.\n> Serial Attached SCSI.\n>\n> For future trends, see:\n> http://www.incits.org/archive/2003/in031163/in031163.htm\n>\n> douglas\n>\n> p.s. For extra credit, try comparing SATA and SCSI drives\n> when they're 90% full.\n>\n> On Apr 6, 2005, at 8:32 PM, Alex Turner wrote:\n>\n> > I guess I'm setting myself up here, and I'm really not being ignorant,\n> > but can someone explain exactly how is SCSI is supposed to better than\n> > SATA?\n> >\n> > Both systems use drives with platters. Each drive can physically only\n> > read one thing at a time.\n> >\n> > SATA gives each drive it's own channel, but you have to share in SCSI.\n> > A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> > SCSI can only do 320MB/sec across the entire array.\n> >\n> > What am I missing here?\n> >\n> > Alex Turner\n> > netEconomist\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\nyour\n> joining column's datatypes do not match\n\n",
"msg_date": "Thu, 7 Apr 2005 10:37:33 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Based on the reading I'm doing, and somebody please correct me if I'm\nwrong, it seems that SCSI drives contain an on disk controller that\nhas to process the tagged queue. SATA-I doesn't have this. This\nadditional controller, is basicaly an on board computer that figures\nout the best order in which to process commands. I believe you are\nalso paying for the increased tolerance that generates a better speed.\n If you compare an 80Gig 7200RPM IDE drive to a WD Raptor 76G 10k RPM\nto a Seagate 10k.6 drive to a Seagate Cheatah 15k drive, each one\nrepresents a step up in parts and technology, thereby generating a\ncost increase (at least thats what the manufactures tell us). I know\nif you ever held a 15k drive in your hand, you can notice a\nconsiderable weight difference between it and a 7200RPM IDE drive.\n\nAlex Turner\nnetEconomist\n\nOn Apr 7, 2005 11:37 AM, [email protected]\n<[email protected]> wrote:\n> Another simple question: Why is SCSI more expensive? After the\n> eleventy-millionth controller is made, it seems like SCSI and SATA are\n> using a controller board and a spinning disk. Is somebody still making\n> money by licensing SCSI technology?\n> \n> Rick\n> \n> [email protected] wrote on 04/06/2005 11:58:33 PM:\n> \n> > You asked for it! ;-)\n> >\n> > If you want cheap, get SATA. If you want fast under\n> > *load* conditions, get SCSI. Everything else at this\n> > time is marketing hype, either intentional or learned.\n> > Ignoring dollars, expect to see SCSI beat SATA by 40%.\n> >\n> > * * * What I tell you three times is true * * *\n> >\n> > Also, compare the warranty you get with any SATA\n> > drive with any SCSI drive. Yes, you still have some\n> > change leftover to buy more SATA drives when they\n> > fail, but... it fundamentally comes down to some\n> > actual implementation and not what is printed on\n> > the cardboard box. Disk systems are bound by the\n> > rules of queueing theory. You can hit the sales rep\n> > over the head with your queueing theory book.\n> >\n> > Ultra320 SCSI is king of the hill for high concurrency\n> > databases. If you're only streaming or serving files,\n> > save some money and get a bunch of SATA drives.\n> > But if you're reading/writing all over the disk, the\n> > simple first-come-first-serve SATA heuristic will\n> > hose your performance under load conditions.\n> >\n> > Next year, they will *try* bring out some SATA cards\n> > that improve on first-come-first-serve, but they ain't\n> > here now. There are a lot of rigged performance tests\n> > out there... Maybe by the time they fix the queueing\n> > problems, serial Attached SCSI (a/k/a SAS) will be out.\n> > Looks like Ultra320 is the end of the line for parallel\n> > SCSI, as Ultra640 SCSI (a/k/a SPI-5) is dead in the\n> > water.\n> >\n> > Ultra320 SCSI.\n> > Ultra320 SCSI.\n> > Ultra320 SCSI.\n> >\n> > Serial Attached SCSI.\n> > Serial Attached SCSI.\n> > Serial Attached SCSI.\n> >\n> > For future trends, see:\n> > http://www.incits.org/archive/2003/in031163/in031163.htm\n> >\n> > douglas\n> >\n> > p.s. For extra credit, try comparing SATA and SCSI drives\n> > when they're 90% full.\n> >\n> > On Apr 6, 2005, at 8:32 PM, Alex Turner wrote:\n> >\n> > > I guess I'm setting myself up here, and I'm really not being ignorant,\n> > > but can someone explain exactly how is SCSI is supposed to better than\n> > > SATA?\n> > >\n> > > Both systems use drives with platters. 
Each drive can physically only\n> > > read one thing at a time.\n> > >\n> > > SATA gives each drive it's own channel, but you have to share in SCSI.\n> > > A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive, but\n> > > SCSI can only do 320MB/sec across the entire array.\n> > >\n> > > What am I missing here?\n> > >\n> > > Alex Turner\n> > > netEconomist\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if\n> your\n> > joining column's datatypes do not match\n> \n>\n",
"msg_date": "Thu, 7 Apr 2005 11:46:31 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Yep, that's it, as well as increased quality control. I found this from\r\nSeagate:\r\n\r\nhttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\r\n\r\nWith this quote (note that ES stands for Enterprise System and PS stands\r\nfor Personal System):\r\n\r\nThere is significantly more silicon on ES products. The following\r\ncomparison comes from a study done in 2000:\r\n· the ES ASIC gate count is more than 2x a PS drive,\r\n· the embedded SRAM space for program code is 2x,\r\n· the permanent flash memory for program code is 2x,\r\n· data SRAM and cache SRAM space is more than 10x.\r\nThe complexity of the SCSI/FC interface compared to the\r\nIDE/ATA interface shows up here due in part to the more\r\ncomplex system architectures in which ES drives find themselves.\r\nES interfaces support multiple initiators or hosts. The\r\ndrive must keep track of separate sets of information for each\r\nhost to which it is attached, e.g., maintaining the processor\r\npointer sets for multiple initiators and tagged commands.\r\nThe capability of SCSI/FC to efficiently process commands\r\nand tasks in parallel has also resulted in a higher overhead\r\n“kernel” structure for the firmware. All of these complexities\r\nand an overall richer command set result in the need for a\r\nmore expensive PCB to carry the electronics.\r\n\r\nRick\r\n\r\nAlex Turner <[email protected]> wrote on 04/07/2005 10:46:31 AM:\r\n\r\n> Based on the reading I'm doing, and somebody please correct me if I'm\r\n> wrong, it seems that SCSI drives contain an on disk controller that\r\n> has to process the tagged queue. SATA-I doesn't have this. This\r\n> additional controller, is basicaly an on board computer that figures\r\n> out the best order in which to process commands. I believe you are\r\n> also paying for the increased tolerance that generates a better speed.\r\n> If you compare an 80Gig 7200RPM IDE drive to a WD Raptor 76G 10k RPM\r\n> to a Seagate 10k.6 drive to a Seagate Cheatah 15k drive, each one\r\n> represents a step up in parts and technology, thereby generating a\r\n> cost increase (at least thats what the manufactures tell us). I know\r\n> if you ever held a 15k drive in your hand, you can notice a\r\n> considerable weight difference between it and a 7200RPM IDE drive.\r\n>\r\n> Alex Turner\r\n> netEconomist\r\n>\r\n> On Apr 7, 2005 11:37 AM, [email protected]\r\n> <[email protected]> wrote:\r\n> > Another simple question: Why is SCSI more expensive? After the\r\n> > eleventy-millionth controller is made, it seems like SCSI and SATA are\r\n> > using a controller board and a spinning disk. Is somebody still making\r\n> > money by licensing SCSI technology?\r\n> >\r\n> > Rick\r\n> >\r\n> > [email protected] wrote on 04/06/2005 11:58:33 PM:\r\n> >\r\n> > > You asked for it! ;-)\r\n> > >\r\n> > > If you want cheap, get SATA. If you want fast under\r\n> > > *load* conditions, get SCSI. Everything else at this\r\n> > > time is marketing hype, either intentional or learned.\r\n> > > Ignoring dollars, expect to see SCSI beat SATA by 40%.\r\n> > >\r\n> > > * * * What I tell you three times is true * * *\r\n> > >\r\n> > > Also, compare the warranty you get with any SATA\r\n> > > drive with any SCSI drive. Yes, you still have some\r\n> > > change leftover to buy more SATA drives when they\r\n> > > fail, but... it fundamentally comes down to some\r\n> > > actual implementation and not what is printed on\r\n> > > the cardboard box. 
Disk systems are bound by the\r\n> > > rules of queueing theory. You can hit the sales rep\r\n> > > over the head with your queueing theory book.\r\n> > >\r\n> > > Ultra320 SCSI is king of the hill for high concurrency\r\n> > > databases. If you're only streaming or serving files,\r\n> > > save some money and get a bunch of SATA drives.\r\n> > > But if you're reading/writing all over the disk, the\r\n> > > simple first-come-first-serve SATA heuristic will\r\n> > > hose your performance under load conditions.\r\n> > >\r\n> > > Next year, they will *try* bring out some SATA cards\r\n> > > that improve on first-come-first-serve, but they ain't\r\n> > > here now. There are a lot of rigged performance tests\r\n> > > out there... Maybe by the time they fix the queueing\r\n> > > problems, serial Attached SCSI (a/k/a SAS) will be out.\r\n> > > Looks like Ultra320 is the end of the line for parallel\r\n> > > SCSI, as Ultra640 SCSI (a/k/a SPI-5) is dead in the\r\n> > > water.\r\n> > >\r\n> > > Ultra320 SCSI.\r\n> > > Ultra320 SCSI.\r\n> > > Ultra320 SCSI.\r\n> > >\r\n> > > Serial Attached SCSI.\r\n> > > Serial Attached SCSI.\r\n> > > Serial Attached SCSI.\r\n> > >\r\n> > > For future trends, see:\r\n> > > http://www.incits.org/archive/2003/in031163/in031163.htm\r\n> > >\r\n> > > douglas\r\n> > >\r\n> > > p.s. For extra credit, try comparing SATA and SCSI drives\r\n> > > when they're 90% full.\r\n> > >\r\n> > > On Apr 6, 2005, at 8:32 PM, Alex Turner wrote:\r\n> > >\r\n> > > > I guess I'm setting myself up here, and I'm really not being\r\nignorant,\r\n> > > > but can someone explain exactly how is SCSI is supposed to better\r\nthan\r\n> > > > SATA?\r\n> > > >\r\n> > > > Both systems use drives with platters. Each drive can physically\r\nonly\r\n> > > > read one thing at a time.\r\n> > > >\r\n> > > > SATA gives each drive it's own channel, but you have to share in\r\nSCSI.\r\n> > > > A SATA controller typicaly can do 3Gb/sec (384MB/sec) per drive,\r\nbut\r\n> > > > SCSI can only do 320MB/sec across the entire array.\r\n> > > >\r\n> > > > What am I missing here?\r\n> > > >\r\n> > > > Alex Turner\r\n> > > > netEconomist\r\n> > >\r\n> > >\r\n> > > ---------------------------(end of\r\nbroadcast)---------------------------\r\n> > > TIP 9: the planner will ignore your desire to choose an index scan if\r\n> > your\r\n> > > joining column's datatypes do not match\r\n> >\r\n> >",
"msg_date": "Thu, 7 Apr 2005 11:28:29 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > In any case the issue with the IDE protocol is that fundamentally you\n> > can only have a single command pending. SCSI can have many commands\n> > pending.\n> \n> That's the bottom line: the SCSI protocol was designed (twenty years ago!)\n> to allow the drive to do physical I/O scheduling, because the CPU can\n> issue multiple commands before the drive has to report completion of the\n> first one. IDE isn't designed to do that. I understand that the latest\n> revisions to the IDE/ATA specs allow the drive to do this sort of thing,\n> but support for it is far from widespread.\n\nMy question is: why does this (physical I/O scheduling) seem to matter\nso much?\n\nBefore you flame me for asking a terribly idiotic question, let me\nprovide some context.\n\nThe operating system maintains a (sometimes large) buffer cache, with\neach buffer being mapped to a \"physical\" (which in the case of RAID is\nreally a virtual) location on the disk. When the kernel needs to\nflush the cache (e.g., during a sync(), or when it needs to free up\nsome pages), it doesn't write the pages in memory address order, it\nwrites them in *device* address order. And it, too, maintains a queue\nof disk write requests.\n\nNow, unless some of the blocks on the disk are remapped behind the\nscenes such that an ordered list of blocks in the kernel translates to\nan out of order list on the target disk (which should be rare, since\nsuch remapping usually happens only when the target block is bad), how\ncan the fact that the disk controller doesn't do tagged queuing\n*possibly* make any real difference unless the kernel's disk\nscheduling algorithm is suboptimal? In fact, if the kernel's\nscheduling algorithm is close to optimal, wouldn't the disk queuing\nmechanism *reduce* the overall efficiency of disk writes? After all,\nthe kernel's queue is likely to be much larger than the disk\ncontroller's, and the kernel has knowledge of things like the\nfilesystem layout that the disk controller and disks do not have. If\nthe controller is only able to execute a subset of the write commands\nthat the kernel has in its queue, at the very least the controller may\nend up leaving the head(s) in a suboptimal position relative to the\nnext set of commands that it hasn't received yet, unless it simply\nwrites the blocks in the order it receives it, right (admittedly, this\nis somewhat trivially dealt with by having the controller exclude the\nfirst and last blocks in the request from its internal sort).\n\n\nI can see how you might configure the RAID controller so that the\nkernel's scheduling algorithm will screw things up horribly. For\ninstance, if the controller has several RAID volumes configured in\nsuch a way that the volumes share spindles, the kernel isn't likely to\nknow about that (since each volume appears as its own device), so\nwrites to multiple volumes can cause head movement where the kernel\nmight be treating the volumes as completely independent. But that\njust means that you can't be dumb about how you configure your RAID\nsetup.\n\n\nSo what gives? Given the above, why is SCSI so much more efficient\nthan plain, dumb SATA? And why wouldn't you be much better off with a\nset of dumb controllers in conjunction with (kernel-level) software\nRAID?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Wed, 13 Apr 2005 22:56:55 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
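One way to make the kernel-side argument concrete is to compare total head travel for a burst of queued requests served in arrival order versus the same requests sorted into an elevator-style sweep by block address. This is only a toy model under the simplistic assumption that block number tracks physical position, which Tom's point about modern drive geometry says is only approximately true; all the numbers are made up.

    import random

    def head_travel(order, start=0):
        # Total "distance" the head moves servicing requests in the given order.
        pos, total = start, 0
        for block in order:
            total += abs(block - pos)
            pos = block
        return total

    random.seed(1)
    pending = [random.randrange(0, 1000000) for _ in range(32)]  # queued block addresses (invented)

    fifo = head_travel(pending)              # serve strictly in arrival order
    elevator = head_travel(sorted(pending))  # one ascending sweep over the same requests

    print("FIFO head travel:     %d blocks" % fifo)
    print("Elevator head travel: %d blocks" % elevator)
    print("Reordering cuts travel by %.0f%%" % (100.0 * (fifo - elevator) / fifo))

The same arithmetic applies whether the sorting happens in the kernel's elevator or in the drive's tagged queue; the open question in this thread is how much extra the drive can recover by knowing its real geometry and by reordering while a seek is already in flight.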
{
"msg_contents": "\nKevin Brown <[email protected]> writes:\n\n> My question is: why does this (physical I/O scheduling) seem to matter\n> so much?\n> \n> Before you flame me for asking a terribly idiotic question, let me\n> provide some context.\n> \n> The operating system maintains a (sometimes large) buffer cache, with\n> each buffer being mapped to a \"physical\" (which in the case of RAID is\n> really a virtual) location on the disk. When the kernel needs to\n> flush the cache (e.g., during a sync(), or when it needs to free up\n> some pages), it doesn't write the pages in memory address order, it\n> writes them in *device* address order. And it, too, maintains a queue\n> of disk write requests.\n\nI think you're being misled by analyzing the write case.\n\nConsider the read case. When a user process requests a block and that read\nmakes its way down to the driver level, the driver can't just put it aside and\nwait until it's convenient. It has to go ahead and issue the read right away.\n\nIn the 10ms or so that it takes to seek to perform that read *nothing* gets\ndone. If the driver receives more read or write requests it just has to sit on\nthem and wait. 10ms is a lifetime for a computer. In that time dozens of other\nprocesses could have been scheduled and issued reads of their own.\n\nIf any of those requests would have lied on the intervening tracks the drive\nmissed a chance to execute them. Worse, it actually has to backtrack to get to\nthem meaning another long seek.\n\nThe same thing would happen if you had lots of processes issuing lots of small\nfsynced writes all over the place. Postgres doesn't really do that though. It\nsort of does with the WAL logs, but that shouldn't cause a lot of seeking.\nPerhaps it would mean that having your WAL share a spindle with other parts of\nthe OS would have a bigger penalty on IDE drives than on SCSI drives though?\n\n-- \ngreg\n\n",
"msg_date": "14 Apr 2005 02:37:15 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Greg Stark wrote:\n\n\n> I think you're being misled by analyzing the write case.\n> \n> Consider the read case. When a user process requests a block and\n> that read makes its way down to the driver level, the driver can't\n> just put it aside and wait until it's convenient. It has to go ahead\n> and issue the read right away.\n\nWell, strictly speaking it doesn't *have* to. It could delay for a\ncouple of milliseconds to see if other requests come in, and then\nissue the read if none do. If there are already other requests being\nfulfilled, then it'll schedule the request in question just like the\nrest.\n\n> In the 10ms or so that it takes to seek to perform that read\n> *nothing* gets done. If the driver receives more read or write\n> requests it just has to sit on them and wait. 10ms is a lifetime for\n> a computer. In that time dozens of other processes could have been\n> scheduled and issued reads of their own.\n\nThis is true, but now you're talking about a situation where the\nsystem goes from an essentially idle state to one of furious\nactivity. In other words, it's a corner case that I strongly suspect\nisn't typical in situations where SCSI has historically made a big\ndifference.\n\nOnce the first request has been fulfilled, the driver can now schedule\nthe rest of the queued-up requests in disk-layout order.\n\n\nI really don't see how this is any different between a system that has\ntagged queueing to the disks and one that doesn't. The only\ndifference is where the queueing happens. In the case of SCSI, the\nqueueing happens on the disks (or at least on the controller). In the\ncase of SATA, the queueing happens in the kernel.\n\nI suppose the tagged queueing setup could begin the head movement and,\nif another request comes in that requests a block on a cylinder\nbetween where the head currently is and where it's going, go ahead and\nread the block in question. But is that *really* what happens in a\ntagged queueing system? It's the only major advantage I can see it\nhaving.\n\n\n> The same thing would happen if you had lots of processes issuing\n> lots of small fsynced writes all over the place. Postgres doesn't\n> really do that though. It sort of does with the WAL logs, but that\n> shouldn't cause a lot of seeking. Perhaps it would mean that having\n> your WAL share a spindle with other parts of the OS would have a\n> bigger penalty on IDE drives than on SCSI drives though?\n\nPerhaps.\n\nBut I rather doubt that has to be a huge penalty, if any. When a\nprocess issues an fsync (or even a sync), the kernel doesn't *have* to\ndrop everything it's doing and get to work on it immediately. It\ncould easily gather a few more requests, bundle them up, and then\nissue them. If there's a lot of disk activity, it's probably smart to\ndo just that. All fsync and sync require is that the caller block\nuntil the data hits the disk (from the point of view of the kernel).\nThe specification doesn't require that the kernel act on the calls\nimmediately or write only the blocks referred to by the call in\nquestion.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 14 Apr 2005 01:36:08 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> I really don't see how this is any different between a system that has\n> tagged queueing to the disks and one that doesn't. The only\n> difference is where the queueing happens. In the case of SCSI, the\n> queueing happens on the disks (or at least on the controller). In the\n> case of SATA, the queueing happens in the kernel.\n\nThat's basically what it comes down to: SCSI lets the disk drive itself\ndo the low-level I/O scheduling whereas the ATA spec prevents the drive\nfrom doing so (unless it cheats, ie, caches writes). Also, in SCSI it's\npossible for the drive to rearrange reads as well as writes --- which\nAFAICS is just not possible in ATA. (Maybe in the newest spec...)\n\nThe reason this is so much more of a win than it was when ATA was\ndesigned is that in modern drives the kernel has very little clue about\nthe physical geometry of the disk. Variable-size tracks, bad-block\nsparing, and stuff like that make for a very hard-to-predict mapping\nfrom linear sector addresses to actual disk locations. Combine that\nwith the fact that the drive controller can be much smarter than it was\ntwenty years ago, and you can see that the case for doing I/O scheduling\nin the kernel and not in the drive is pretty weak.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 10:44:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
{
"msg_contents": "while you weren't looking, Kevin Brown wrote:\n\n[reordering bursty reads]\n\n> In other words, it's a corner case that I strongly suspect\n> isn't typical in situations where SCSI has historically made a big\n> difference.\n\n[...]\n\n> But I rather doubt that has to be a huge penalty, if any. When a\n> process issues an fsync (or even a sync), the kernel doesn't *have* to\n> drop everything it's doing and get to work on it immediately. It\n> could easily gather a few more requests, bundle them up, and then\n> issue them.\n\nTo make sure I'm following you here, are you or are you not suggesting\nthat the kernel could sit on -all- IO requests for some small handful\nof ms before actually performing any IO to address what you \"strongly\nsuspect\" is a \"corner case\"?\n\n/rls\n\n-- \n:wq\n",
"msg_date": "Thu, 14 Apr 2005 09:48:58 -0500",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On 4/14/05, Tom Lane <[email protected]> wrote:\n> \n> That's basically what it comes down to: SCSI lets the disk drive itself\n> do the low-level I/O scheduling whereas the ATA spec prevents the drive\n> from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's\n> possible for the drive to rearrange reads as well as writes --- which\n> AFAICS is just not possible in ATA. (Maybe in the newest spec...)\n> \n> The reason this is so much more of a win than it was when ATA was\n> designed is that in modern drives the kernel has very little clue about\n> the physical geometry of the disk. Variable-size tracks, bad-block\n> sparing, and stuff like that make for a very hard-to-predict mapping\n> from linear sector addresses to actual disk locations. Combine that\n> with the fact that the drive controller can be much smarter than it was\n> twenty years ago, and you can see that the case for doing I/O scheduling\n> in the kernel and not in the drive is pretty weak.\n> \n> \n\nSo if you all were going to choose between two hard drives where:\ndrive A has capacity C and spins at 15K rpms, and\ndrive B has capacity 2 x C and spins at 10K rpms and\nall other features are the same, the price is the same and C is enough\ndisk space which would you choose?\n\nI've noticed that on IDE drives, as the capacity increases the data\ndensity increases and there is a pereceived (I've not measured it)\nperformance increase.\n\nWould the increased data density of the higher capacity drive be of\ngreater benefit than the faster spindle speed of drive A?\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Thu, 14 Apr 2005 10:51:46 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
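A rough way to weigh Matthew's question is to split a random I/O into seek plus rotational latency, and sequential work into transfer rate. Average rotational latency is half a revolution, 0.5 * 60 / RPM. The seek times and sequential rates below are invented ballpark figures, not specs for any particular drive; they only show which term each choice improves.

    # Back-of-the-envelope comparison of the two hypothetical drives in the question.
    drives = {
        "A: 15K RPM, capacity C":   {"rpm": 15000, "avg_seek_ms": 3.8, "seq_mb_s": 75},
        "B: 10K RPM, capacity 2xC": {"rpm": 10000, "avg_seek_ms": 4.7, "seq_mb_s": 85},
    }

    for name, d in drives.items():
        rot_ms = 0.5 * 60.0 / d["rpm"] * 1000         # average rotational latency
        io_ms = d["avg_seek_ms"] + rot_ms             # one random I/O, ignoring transfer time
        print("%-27s rot %.1f ms, ~%.1f ms/random I/O (~%.0f IOPS), %d MB/s sequential"
              % (name, rot_ms, io_ms, 1000.0 / io_ms, d["seq_mb_s"]))

With these made-up numbers the 15K drive wins on random I/O (roughly 170 vs 130 IOPS) while the denser drive wins modestly on sequential transfer, so the answer mostly depends on whether the workload is seek-bound or streaming.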
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n\n> Greg Stark wrote:\n> \n> \n> > I think you're being misled by analyzing the write case.\n> > \n> > Consider the read case. When a user process requests a block and\n> > that read makes its way down to the driver level, the driver can't\n> > just put it aside and wait until it's convenient. It has to go ahead\n> > and issue the read right away.\n> \n> Well, strictly speaking it doesn't *have* to. It could delay for a\n> couple of milliseconds to see if other requests come in, and then\n> issue the read if none do. If there are already other requests being\n> fulfilled, then it'll schedule the request in question just like the\n> rest.\n\nBut then the cure is worse than the disease. You're basically describing\nexactly what does happen anyways, only you're delaying more requests than\nnecessary. That intervening time isn't really idle, it's filled with all the\nrequests that were delayed during the previous large seek...\n\n> Once the first request has been fulfilled, the driver can now schedule\n> the rest of the queued-up requests in disk-layout order.\n> \n> I really don't see how this is any different between a system that has\n> tagged queueing to the disks and one that doesn't. The only\n> difference is where the queueing happens. \n\nAnd *when* it happens. Instead of being able to issue requests while a large\nseek is happening and having some of them satisfied they have to wait until\nthat seek is finished and get acted on during the next large seek.\n\nIf my theory is correct then I would expect bandwidth to be essentially\nequivalent but the latency on SATA drives to be increased by about 50% of the\naverage seek time. Ie, while a busy SCSI drive can satisfy most requests in\nabout 10ms a busy SATA drive would satisfy most requests in 15ms. (add to that\nthat 10k RPM and 15kRPM SCSI drives have even lower seek times and no such\nIDE/SATA drives exist...)\n\nIn reality higher latency feeds into a system feedback loop causing your\napplication to run slower causing bandwidth demands to be lower as well. It's\noften hard to distinguish root causes from symptoms when optimizing complex\nsystems.\n\n-- \ngreg\n\n",
"msg_date": "14 Apr 2005 14:03:31 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
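Greg's 10 ms vs 15 ms estimate is easy to turn into per-drive throughput, which is often the number a busy database cares about. The figures below simply restate his assumed latencies arithmetically; they are not measurements.

    # If a busy drive satisfies a typical random request in `latency_ms`,
    # it can sustain roughly 1000 / latency_ms such requests per second.
    assumed = (("reordering drive (assumed 10 ms)", 10.0),
               ("FIFO-only drive (assumed 15 ms)",  15.0))

    for label, latency_ms in assumed:
        print("%-32s ~%.0f random I/Os per second" % (label, 1000.0 / latency_ms))

    # Across, say, an 8-drive array the gap compounds:
    print("8 drives: ~%.0f vs ~%.0f IOPS" % (8 * 1000.0 / 10.0, 8 * 1000.0 / 15.0))

That works out to roughly 100 vs 67 requests per second per spindle, consistent with Greg's point that the latency penalty feeds back into how much load the application can generate in the first place.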
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > I really don't see how this is any different between a system that has\n> > tagged queueing to the disks and one that doesn't. The only\n> > difference is where the queueing happens. In the case of SCSI, the\n> > queueing happens on the disks (or at least on the controller). In the\n> > case of SATA, the queueing happens in the kernel.\n> \n> That's basically what it comes down to: SCSI lets the disk drive itself\n> do the low-level I/O scheduling whereas the ATA spec prevents the drive\n> from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's\n> possible for the drive to rearrange reads as well as writes --- which\n> AFAICS is just not possible in ATA. (Maybe in the newest spec...)\n> \n> The reason this is so much more of a win than it was when ATA was\n> designed is that in modern drives the kernel has very little clue about\n> the physical geometry of the disk. Variable-size tracks, bad-block\n> sparing, and stuff like that make for a very hard-to-predict mapping\n> from linear sector addresses to actual disk locations. \n\nYeah, but it's not clear to me, at least, that this is a first-order\nconsideration. A second-order consideration, sure, I'll grant that.\n\nWhat I mean is that when it comes to scheduling disk activity,\nknowledge of the specific physical geometry of the disk isn't really\nimportant. What's important is whether or not the disk conforms to a\ncertain set of expectations. Namely, that the general organization is\nsuch that addressing the blocks in block number order guarantees\nmaximum throughput.\n\nNow, bad block remapping destroys that guarantee, but unless you've\ngot a LOT of bad blocks, it shouldn't destroy your performance, right?\n\n> Combine that with the fact that the drive controller can be much\n> smarter than it was twenty years ago, and you can see that the case\n> for doing I/O scheduling in the kernel and not in the drive is\n> pretty weak.\n\nWell, I certainly grant that allowing the controller to do the I/O\nscheduling is faster than having the kernel do it, as long as it can\nhandle insertion of new requests into the list while it's in the\nmiddle of executing a request. The most obvious case is when the head\nis in motion and the new request can be satisfied by reading from the\nmedia between where the head is at the time of the new request and\nwhere the head is being moved to.\n\nMy argument is that a sufficiently smart kernel scheduler *should*\nyield performance results that are reasonably close to what you can\nget with that feature. Perhaps not quite as good, but reasonably\nclose. It shouldn't be an orders-of-magnitude type difference.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 14 Apr 2005 19:03:37 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "3ware claim that their 'software' implemented command queueing\nperforms at 95% effectiveness compared to the hardware queueing on a\nSCSI drive, so I would say that they agree with you.\n\nI'm still learning, but as I read it, the bits are split across the\nplatters and there is only 'one' head, but happens to be reading from\nmultiple platters. The 'further' in linear distance the data is from\nthe current position, the longer it's going to take to get there. \nThis seems to be true based on a document that was circulated. A hard\ndrive takes considerable amount of time to 'find' a track on the\nplatter compared to the rotational speed, which would agree with the\nfact that you can read 70MB/sec, but it takes up to 13ms to seek.\n\nthe ATA protocol is just how the HBA communicates with the drive,\nthere is no reason why the HBA can't reschedule reads and writes just\nthe like SCSI drive would do natively, and this is what infact 3ware\nclaims. I get the feeling based on my own historical experience that\ngeneraly drives don't just have a bunch of bad blocks. This all leads\nme to believe that you can predict with pretty good accuracy how\nexpensive it is to retrieve a given block knowing it's linear\nincrement.\n\nAlex Turner\nnetEconomist\n\nOn 4/14/05, Kevin Brown <[email protected]> wrote:\n> Tom Lane wrote:\n> > Kevin Brown <[email protected]> writes:\n> > > I really don't see how this is any different between a system that has\n> > > tagged queueing to the disks and one that doesn't. The only\n> > > difference is where the queueing happens. In the case of SCSI, the\n> > > queueing happens on the disks (or at least on the controller). In the\n> > > case of SATA, the queueing happens in the kernel.\n> >\n> > That's basically what it comes down to: SCSI lets the disk drive itself\n> > do the low-level I/O scheduling whereas the ATA spec prevents the drive\n> > from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's\n> > possible for the drive to rearrange reads as well as writes --- which\n> > AFAICS is just not possible in ATA. (Maybe in the newest spec...)\n> >\n> > The reason this is so much more of a win than it was when ATA was\n> > designed is that in modern drives the kernel has very little clue about\n> > the physical geometry of the disk. Variable-size tracks, bad-block\n> > sparing, and stuff like that make for a very hard-to-predict mapping\n> > from linear sector addresses to actual disk locations.\n> \n> Yeah, but it's not clear to me, at least, that this is a first-order\n> consideration. A second-order consideration, sure, I'll grant that.\n> \n> What I mean is that when it comes to scheduling disk activity,\n> knowledge of the specific physical geometry of the disk isn't really\n> important. What's important is whether or not the disk conforms to a\n> certain set of expectations. 
Namely, that the general organization is\n> such that addressing the blocks in block number order guarantees\n> maximum throughput.\n> \n> Now, bad block remapping destroys that guarantee, but unless you've\n> got a LOT of bad blocks, it shouldn't destroy your performance, right?\n> \n> > Combine that with the fact that the drive controller can be much\n> > smarter than it was twenty years ago, and you can see that the case\n> > for doing I/O scheduling in the kernel and not in the drive is\n> > pretty weak.\n> \n> Well, I certainly grant that allowing the controller to do the I/O\n> scheduling is faster than having the kernel do it, as long as it can\n> handle insertion of new requests into the list while it's in the\n> middle of executing a request. The most obvious case is when the head\n> is in motion and the new request can be satisfied by reading from the\n> media between where the head is at the time of the new request and\n> where the head is being moved to.\n> \n> My argument is that a sufficiently smart kernel scheduler *should*\n> yield performance results that are reasonably close to what you can\n> get with that feature. Perhaps not quite as good, but reasonably\n> close. It shouldn't be an orders-of-magnitude type difference.\n> \n> --\n> Kevin Brown [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Thu, 14 Apr 2005 22:24:22 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> The reason this is so much more of a win than it was when ATA was\n>> designed is that in modern drives the kernel has very little clue about\n>> the physical geometry of the disk. Variable-size tracks, bad-block\n>> sparing, and stuff like that make for a very hard-to-predict mapping\n>> from linear sector addresses to actual disk locations. \n\n> What I mean is that when it comes to scheduling disk activity,\n> knowledge of the specific physical geometry of the disk isn't really\n> important.\n\nOh?\n\nYes, you can probably assume that blocks with far-apart numbers are\ngoing to require a big seek, and you might even be right in supposing\nthat a block with an intermediate number should be read on the way.\nBut you have no hope at all of making the right decisions at a more\nlocal level --- say, reading various sectors within the same cylinder\nin an optimal fashion. You don't know where the track boundaries are,\nso you can't schedule in a way that minimizes rotational latency.\nYou're best off to throw all the requests at the drive together and\nlet the drive sort it out.\n\nThis is not to say that there's not a place for a kernel-side scheduler\ntoo. The drive will probably have a fairly limited number of slots in\nits command queue. The optimal thing is for those slots to be filled\nwith requests that are in the same area of the disk. So you can still\nget some mileage out of an elevator algorithm that works on logical\nblock numbers to give the drive requests for nearby block numbers at the\nsame time. But there's also a lot of use in letting the drive do its\nown low-level scheduling.\n\n> My argument is that a sufficiently smart kernel scheduler *should*\n> yield performance results that are reasonably close to what you can\n> get with that feature. Perhaps not quite as good, but reasonably\n> close. It shouldn't be an orders-of-magnitude type difference.\n\nThat might be the case with respect to decisions about long seeks,\nbut not with respect to rotational latency. The kernel simply hasn't\ngot the information.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 22:41:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
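To make the elevator idea in Tom's message concrete, here is a minimal C sketch of a SCAN-style scheduler that sorts pending requests by logical block number and sweeps outward from the current head position, handing the drive one queue-full at a time. The request struct, the queue depth, and the submit() callback are assumptions made purely for illustration; this is not code from any real kernel.

    /* SCAN ("elevator") sketch: order requests by logical block number,
     * service those ahead of the current head position first, then wrap
     * around for the remainder. */
    #include <stdlib.h>

    #define QUEUE_SLOTS 32              /* assumed drive command-queue depth */

    struct io_request {
        long  block;                    /* logical block number */
        void *buf;
    };

    static int by_block(const void *a, const void *b)
    {
        long x = ((const struct io_request *) a)->block;
        long y = ((const struct io_request *) b)->block;
        return (x > y) - (x < y);
    }

    void elevator_dispatch(struct io_request *reqs, size_t n, long head_block,
                           void (*submit)(struct io_request *))
    {
        size_t i, issued = 0;

        qsort(reqs, n, sizeof(*reqs), by_block);

        /* sweep upward from the current head position ... */
        for (i = 0; i < n && issued < QUEUE_SLOTS; i++) {
            if (reqs[i].block >= head_block) {
                submit(&reqs[i]);
                issued++;
            }
        }
        /* ... then pick up the requests behind the head on the next sweep */
        for (i = 0; i < n && issued < QUEUE_SLOTS; i++) {
            if (reqs[i].block < head_block) {
                submit(&reqs[i]);
                issued++;
            }
        }
    }

As Tom notes, ordering by block number only helps with the long seeks; rotational ordering within a cylinder still has to be left to the drive.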
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Tom Lane wrote:\n> >> The reason this is so much more of a win than it was when ATA was\n> >> designed is that in modern drives the kernel has very little clue about\n> >> the physical geometry of the disk. Variable-size tracks, bad-block\n> >> sparing, and stuff like that make for a very hard-to-predict mapping\n> >> from linear sector addresses to actual disk locations. \n> \n> > What I mean is that when it comes to scheduling disk activity,\n> > knowledge of the specific physical geometry of the disk isn't really\n> > important.\n> \n> Oh?\n> \n> Yes, you can probably assume that blocks with far-apart numbers are\n> going to require a big seek, and you might even be right in supposing\n> that a block with an intermediate number should be read on the way.\n> But you have no hope at all of making the right decisions at a more\n> local level --- say, reading various sectors within the same cylinder\n> in an optimal fashion. You don't know where the track boundaries are,\n> so you can't schedule in a way that minimizes rotational latency.\n\nThis is true, but has to be examined in the context of the workload.\n\nIf the workload is a sequential read, for instance, then the question\nbecomes whether or not giving the controller a set of sequential\nblocks (in block ID order) will get you maximum read throughput.\nGiven that the manufacturers all attempt to generate the biggest read\nthroughput numbers, I think it's reasonable to assume that (a) the\nsectors are ordered within a cylinder such that reading block x + 1\nimmediately after block x will incur the smallest possible amount of\ndelay if requested quickly enough, and (b) the same holds true when\nblock x + 1 is on the next cylinder.\n\nIn the case of pure random reads, you'll end up having to wait an\naverage of half of a rotation before beginning the read. Where SCSI\nbuys you something here is when you have sequential chunks of reads\nthat are randomly distributed. The SCSI drive can determine which\nblock in the set to start with first. But for that to really be a big\nwin, the chunks themselves would have to span more than half a track\nat least, else you'd have a greater than half a track gap in the\nmiddle of your two sorted sector lists for that track (a really\nwell-engineered SCSI disk could take advantage of the fact that there\nare multiple platters and fill the \"gap\" with reads from a different\nplatter).\n\n\nAdmittedly, this can be quite a big win. With an average rotational\nlatency of 4 milliseconds on a 7200 RPM disk, being able to begin the\nread at the earliest possible moment will shave at most 25% off the\ntotal average random-access latency, if the average seek time is 12\nmilliseconds.\n\n> That might be the case with respect to decisions about long seeks,\n> but not with respect to rotational latency. The kernel simply hasn't\n> got the information.\n\nTrue, but that should reduce the total latency by something like 17%\n(on average). Not trivial, to be sure, but not an order of magnitude,\neither.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 14 Apr 2005 22:03:36 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> In the case of pure random reads, you'll end up having to wait an\n> average of half of a rotation before beginning the read.\n\nYou're assuming the conclusion. The above is true if the disk is handed\none request at a time by a kernel that doesn't have any low-level timing\ninformation. If there are multiple random requests on the same track,\nthe drive has an opportunity to do better than that --- if it's got all\nthe requests in hand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 01:28:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
{
"msg_contents": "\n\n> My argument is that a sufficiently smart kernel scheduler *should*\n> yield performance results that are reasonably close to what you can\n> get with that feature. Perhaps not quite as good, but reasonably\n> close. It shouldn't be an orders-of-magnitude type difference.\n\n\tAnd a controller card (or drive) has a lot less RAM to use as a cache / \nqueue for reordering stuff than the OS has, potentially the OS can us most \nof the available RAM, which can be gigabytes on a big server, whereas in \nthe drive there are at most a few tens of megabytes...\n\n\tHowever all this is a bit looking at the problem through the wrong end. \nThe OS should provide a multi-read call for the applications to pass a \nlist of blocks they'll need, then reorder them and read them the fastest \npossible way, clustering them with similar requests from other threads.\n\n\tRight now when a thread/process issues a read() it will block until the \nblock is delivered to this thread. The OS does not know if this thread \nwill then need the next block (which can be had very cheaply if you know \nahead of time you'll need it) or not. Thus it must make guesses, read \nahead (sometimes), etc...\n",
"msg_date": "Fri, 15 Apr 2005 12:07:43 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\n\n> platter compared to the rotational speed, which would agree with the\n> fact that you can read 70MB/sec, but it takes up to 13ms to seek.\n\n\tActually :\n\t- the head has to be moved\n\tthis time depends on the distance, for instance moving from a cylinder to \nthe next is very fast (it needs to, to get good throughput)\n\t- then you have to wait for the disk to spin until the information you \nwant comes in front of the head... statistically you have to wait a half \nrotation. And this does not depend on the distance between the cylinders, \nit depends on the position of the data in the cylinder.\n\tThe more RPMs you have, the less you wait, which is why faster RPMs \ndrives have faster seek (they must also have faster actuators to move the \nhead)...\n",
"msg_date": "Fri, 15 Apr 2005 12:13:59 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
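PFC's breakdown above (a seek, then on average half a rotation) is easy to put into numbers. The helper below is only a back-of-the-envelope calculator, not a measurement of any particular drive; the seek figures passed to it are round numbers assumed for illustration.

    /* Rough expected random-access time: average seek plus half a rotation. */
    #include <stdio.h>

    static double access_ms(double avg_seek_ms, double rpm)
    {
        double rotation_ms = 60000.0 / rpm;         /* one full revolution */
        return avg_seek_ms + rotation_ms / 2.0;     /* transfer time ignored */
    }

    int main(void)
    {
        printf("7200 rpm, 9 ms seek: %.1f ms\n", access_ms(9.0, 7200.0));
        printf("10k rpm,  5 ms seek: %.1f ms\n", access_ms(5.0, 10000.0));
        printf("15k rpm,  4 ms seek: %.1f ms\n", access_ms(4.0, 15000.0));
        return 0;
    }

This is why higher-RPM drives win on random I/O even when their sequential transfer rates look similar.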
{
"msg_contents": "PFC wrote:\n\n>\n>\n>> My argument is that a sufficiently smart kernel scheduler *should*\n>> yield performance results that are reasonably close to what you can\n>> get with that feature. Perhaps not quite as good, but reasonably\n>> close. It shouldn't be an orders-of-magnitude type difference.\n>\n>\n> And a controller card (or drive) has a lot less RAM to use as a \n> cache / queue for reordering stuff than the OS has, potentially the \n> OS can us most of the available RAM, which can be gigabytes on a big \n> server, whereas in the drive there are at most a few tens of \n> megabytes...\n>\n> However all this is a bit looking at the problem through the wrong \n> end. The OS should provide a multi-read call for the applications to \n> pass a list of blocks they'll need, then reorder them and read them \n> the fastest possible way, clustering them with similar requests from \n> other threads.\n>\n> Right now when a thread/process issues a read() it will block \n> until the block is delivered to this thread. The OS does not know if \n> this thread will then need the next block (which can be had very \n> cheaply if you know ahead of time you'll need it) or not. Thus it \n> must make guesses, read ahead (sometimes), etc...\n\nAll true. Which is why high performance computing folks use \naio_read()/aio_write() and load up the kernel with all the requests they \nexpect to make. \n\nThe kernels that I'm familiar with will do read ahead on files based on \nsome heuristics: when you read the first byte of a file the OS will \ntypically load up several pages of the file (depending on file size, \netc). If you continue doing read() calls without a seek() on the file \ndescriptor the kernel will get the hint that you're doing a sequential \nread and continue caching up the pages ahead of time, usually using the \npages you just read to hold the new data so that one isn't bloating out \nmemory with data that won't be needed again. Throw in a seek() and the \namount of read ahead caching may be reduced.\n\n\nOne point that is being missed in all this discussion is that the file \nsystem also imposes some constraints on how IO's can be done. For \nexample, simply doing a write(fd, buf, 100000000) doesn't emit a stream \nof sequential blocks to the drives. Some file systems (UFS was one) \nwould force portions of large files into other cylinder groups so that \nsmall files could be located near the inode data, thus avoiding/reducing \nthe size of seeks. Similarly, extents need to be allocated and the \nbitmaps recording this data usually need synchronous updates, which will \nrequire some seeks, etc. Not to mention the need to update inode data, \netc. Anyway, my point is that the allocation policies of the file \nsystem can confuse the situation.\n\nAlso, the seek times one sees reported are an average. One really needs \nto look at the track-to-track seek time and also the \"full stoke\" seek \ntimes. It takes a *long* time to move the heads across the whole \nplatter. I've seen people partition drives to only use small regions of \nthe drives to avoid long seeks and to better use the increased number of \nbits going under the head in one rotation. A 15K drive doesn't need to \nhave a faster seek time than a 10K drive because the rotational speed is \nhigher. The average seek time might be faster just because the 15K \ndrives are smaller with fewer number of cylinders. \n\n-- Alan\n",
"msg_date": "Fri, 15 Apr 2005 09:11:48 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
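The "multi-read call" PFC asks for and the aio_read()/aio_write() interface Alan mentions come together in POSIX list I/O: the application hands the kernel a whole batch of reads in one call and lets the kernel (and the drive) order them. Below is a minimal sketch; the file name and offsets are placeholders, most error handling is omitted, and on Linux it needs -lrt.

    /* Hand the kernel a batch of reads in one call with POSIX AIO list I/O. */
    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NREQ   3
    #define BLKSZ  8192

    int main(void)
    {
        int fd = open("/some/data/file", O_RDONLY);   /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        off_t offsets[NREQ] = { 0, 40 * BLKSZ, 10 * BLKSZ };  /* deliberately out of order */
        struct aiocb cbs[NREQ];
        struct aiocb *list[NREQ];

        for (int i = 0; i < NREQ; i++) {
            memset(&cbs[i], 0, sizeof(cbs[i]));
            cbs[i].aio_fildes = fd;
            cbs[i].aio_buf = malloc(BLKSZ);
            cbs[i].aio_nbytes = BLKSZ;
            cbs[i].aio_offset = offsets[i];
            cbs[i].aio_lio_opcode = LIO_READ;
            list[i] = &cbs[i];
        }

        /* The kernel/drive is now free to reorder these however it likes. */
        if (lio_listio(LIO_WAIT, list, NREQ, NULL) != 0)
            perror("lio_listio");

        for (int i = 0; i < NREQ; i++)
            printf("offset %ld: %zd bytes\n", (long) cbs[i].aio_offset,
                   aio_return(&cbs[i]));
        return 0;
    }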
{
"msg_contents": "On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:\n\n> Now, bad block remapping destroys that guarantee, but unless you've\n> got a LOT of bad blocks, it shouldn't destroy your performance, right?\n>\n\nALL disks have bad blocks, even when you receive them. you honestly \nthink that these large disks made today (18+ GB is the smallest now) \nthat there are no defects on the surfaces?\n\n/me remembers trying to cram an old donated 5MB (yes M) disk into an \nold 8088 Zenith PC in college...\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Fri, 15 Apr 2005 11:43:40 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Vivek Khera wrote:\n> \n> On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:\n> \n>> Now, bad block remapping destroys that guarantee, but unless you've\n>> got a LOT of bad blocks, it shouldn't destroy your performance, right?\n>>\n> \n> ALL disks have bad blocks, even when you receive them. you honestly \n> think that these large disks made today (18+ GB is the smallest now) \n> that there are no defects on the surfaces?\n\nThat is correct. It is just that the HD makers will mark the bad blocks\nso that the OS knows not to use them. You can also run the bad blocks\ncommand to try and find new bad blocks.\n\nOver time hard drives get bad blocks. It doesn't always mean you have to \nreplace the drive but it does mean you need to maintain it and usually\nat least backup, low level (if scsi) and mark bad blocks. Then restore.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> /me remembers trying to cram an old donated 5MB (yes M) disk into an old \n> 8088 Zenith PC in college...\n> \n> Vivek Khera, Ph.D.\n> +1-301-869-4449 x806\n> \n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n",
"msg_date": "Fri, 15 Apr 2005 08:58:47 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Apr 15, 2005, at 11:58 AM, Joshua D. Drake wrote:\n\n>> ALL disks have bad blocks, even when you receive them. you honestly \n>> think that these large disks made today (18+ GB is the smallest now) \n>> that there are no defects on the surfaces?\n>\n> That is correct. It is just that the HD makers will mark the bad blocks\n> so that the OS knows not to use them. You can also run the bad blocks\n> command to try and find new bad blocks.\n>\n\nmy point was that you cannot assume an linear correlation between block \nnumber and physical location, since the bad blocks will be mapped all \nover the place.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Fri, 15 Apr 2005 12:10:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Yes, you can probably assume that blocks with far-apart numbers are\n> going to require a big seek, and you might even be right in supposing\n> that a block with an intermediate number should be read on the way.\n> But you have no hope at all of making the right decisions at a more\n> local level --- say, reading various sectors within the same cylinder\n> in an optimal fashion. You don't know where the track boundaries are,\n> so you can't schedule in a way that minimizes rotational latency.\n> You're best off to throw all the requests at the drive together and\n> let the drive sort it out.\n\nConsider for example three reads, one at the beginning of the disk, one at the\nvery end, and one in the middle. If the three are performed in the logical\norder (assuming the head starts at the beginning), then the drive has to seek,\nsay, 4ms to get to the middle and 4ms to get to the end.\n\nBut if the middle block requires a full rotation to reach it from when the\nhead arrives that adds another 8ms of rotational delay (assuming a 7200RPM\ndrive).\n\nWhereas the drive could have seeked over to the last block, then seeked back\nin 8ms and gotten there just in time to perform the read for free.\n\n\nI'm not entirely convinced this explains all of the SCSI drives' superior\nperformance though. The above is about a worst-case scenario. should really\nonly have a small effect, and it's not like the drive firmware can really\nschedule things perfectly either.\n\n\nI think most of the difference is that the drive manufacturers just don't\npackage their high end drives with ATA interfaces. So there are no 10k RPM ATA\ndrives and no 15k RPM ATA drives. I think WD is making fast SATA drives but\nmost of the manufacturers aren't even doing that.\n\n-- \ngreg\n\n",
"msg_date": "15 Apr 2005 14:01:41 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
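Greg's three-read example can be checked with a few lines of arithmetic, using his own round numbers (4 ms per short seek, 8 ms per rotation on a 7200 RPM drive) and ignoring transfer time. As he says, this is the worst case, not typical behaviour.

    /* Back-of-the-envelope comparison for the three-read example above. */
    #include <stdio.h>

    int main(void)
    {
        double seek = 4.0, rotation = 8.0;      /* ms, per the example above */

        /* logical order: seek to the middle, miss the sector and wait a
         * full turn, then seek to the end */
        double in_order = seek + rotation + seek;

        /* reordered: go to the end first, then seek back to the middle,
         * arriving roughly as the wanted sector comes around */
        double reordered = (2.0 * seek > rotation) ? 2.0 * seek : rotation;

        printf("logical order: %.0f ms after the first read\n", in_order);
        printf("reordered:     %.0f ms after the first read\n", reordered);
        return 0;
    }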
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > In the case of pure random reads, you'll end up having to wait an\n> > average of half of a rotation before beginning the read.\n> \n> You're assuming the conclusion. The above is true if the disk is handed\n> one request at a time by a kernel that doesn't have any low-level timing\n> information. If there are multiple random requests on the same track,\n> the drive has an opportunity to do better than that --- if it's got all\n> the requests in hand.\n\nTrue, but see below. Actually, I suspect what matters is if they're\non the same cylinder (which may be what you're talking about here).\nAnd in the above, I was assuming randomly distributed single-sector\nreads. In that situation, we can't generically know what the\nprobability that more than one will appear on the same cylinder\nwithout knowing something about the drive geometry.\n\n\nThat said, most modern drives have tens of thousands of cylinders (the\nSeagate ST380011a, an 80 gigabyte drive, has 94,600 tracks per inch\naccording to its datasheet), but much, much smaller queue lengths\n(tens of entries, hundreds at most, I'd expect. Hard data on this\nwould be appreciated). For purely random reads, the probability that\ntwo or more requests in the queue happen to be in the same cylinder is\ngoing to be quite small.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Fri, 15 Apr 2005 17:53:50 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Vivek Khera wrote:\n> \n> On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:\n> \n> >Now, bad block remapping destroys that guarantee, but unless you've\n> >got a LOT of bad blocks, it shouldn't destroy your performance, right?\n> >\n> \n> ALL disks have bad blocks, even when you receive them. you honestly \n> think that these large disks made today (18+ GB is the smallest now) \n> that there are no defects on the surfaces?\n\nOh, I'm not at all arguing that you won't have bad blocks. My\nargument is that the probability of any given block read or write\noperation actually dealing with a remapped block is going to be\nrelatively small, unless the fraction of bad blocks to total blocks is\nlarge (in which case you basically have a bad disk). And so the\nability to account for remapped blocks shouldn't itself represent a\nhuge improvement in overall throughput.\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Fri, 15 Apr 2005 17:58:25 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Rosser Schwarz wrote:\n> while you weren't looking, Kevin Brown wrote:\n> \n> [reordering bursty reads]\n> \n> > In other words, it's a corner case that I strongly suspect\n> > isn't typical in situations where SCSI has historically made a big\n> > difference.\n> \n> [...]\n> \n> > But I rather doubt that has to be a huge penalty, if any. When a\n> > process issues an fsync (or even a sync), the kernel doesn't *have* to\n> > drop everything it's doing and get to work on it immediately. It\n> > could easily gather a few more requests, bundle them up, and then\n> > issue them.\n> \n> To make sure I'm following you here, are you or are you not suggesting\n> that the kernel could sit on -all- IO requests for some small handful\n> of ms before actually performing any IO to address what you \"strongly\n> suspect\" is a \"corner case\"?\n\nThe kernel *can* do so. Whether or not it's a good idea depends on\nthe activity in the system. You'd only consider doing this if you\ndidn't already have a relatively large backlog of I/O requests to\nhandle. You wouldn't do this for every I/O request.\n\nConsider this: I/O operations to a block device are so slow compared\nwith the speed of other (non I/O) operations on the system that the\nsystem can easily wait for, say, a hundredth of the typical latency on\nthe target device before issuing requests to it and not have any real\nnegative impact on the system's I/O throughput. A process running on\nmy test system, a 3 GHz Xeon, can issue a million read system calls\nper second (I've measured it. I can post the rather trivial source\ncode if you're interested). That's the full round trip of issuing the\nsystem call and having the kernel return back. That means that in the\nspan of a millisecond, the system could receive 1000 requests if the\nsystem were busy enough. If the average latency for a random read\nfrom the disk (including head movement and everything) is 10\nmilliseconds, and we decide to delay the issuance of the first I/O\nrequest for a tenth of a millisecond (a hundredth of the latency),\nthen the system might receive 100 additional I/O requests, which it\ncould then put into the queue and sort by block address before issuing\nthe read request. As long as the system knows what the last block\nthat was requested from that physical device was, it can order the\nrequests properly and then begin issuing them. Since the latency on\nthe target device is so high, this is likely to be a rather big win\nfor overall throughput.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Fri, 15 Apr 2005 18:33:31 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
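Kevin doesn't post the "rather trivial source code" he mentions, so the following is only a guess at what such a measurement might look like: time a tight loop of one-byte read() calls against /dev/zero, so no real disk I/O happens and only the syscall round trip is measured.

    /* Measure raw read() round-trip rate (syscall path only, no disk I/O). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define CALLS 1000000

    int main(void)
    {
        char c;
        struct timeval t0, t1;
        int fd = open("/dev/zero", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        gettimeofday(&t0, NULL);
        for (int i = 0; i < CALLS; i++)
            read(fd, &c, 1);                    /* one byte, always satisfied in-kernel */
        gettimeofday(&t1, NULL);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d read() calls in %.3f s (%.0f calls/s)\n",
               CALLS, secs, CALLS / secs);
        close(fd);
        return 0;
    }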
{
"msg_contents": "Kevin Brown wrote:\n> Greg Stark wrote:\n> \n> \n> > I think you're being misled by analyzing the write case.\n> > \n> > Consider the read case. When a user process requests a block and\n> > that read makes its way down to the driver level, the driver can't\n> > just put it aside and wait until it's convenient. It has to go ahead\n> > and issue the read right away.\n> \n> Well, strictly speaking it doesn't *have* to. It could delay for a\n> couple of milliseconds to see if other requests come in, and then\n> issue the read if none do. If there are already other requests being\n> fulfilled, then it'll schedule the request in question just like the\n> rest.\n\nThe idea with SCSI or any command queuing is that you don't have to wait\nfor another request to come in --- you can send the request as it\narrives, then if another shows up, you send that too, and the drive\noptimizes the grouping at a later time, knowing what the drive is doing,\nrather queueing in the kernel.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 18 Apr 2005 16:58:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Does it really matter at which end of the cable the queueing is done\n(Assuming both ends know as much about drive geometry etc..)?\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, Bruce Momjian <[email protected]> wrote:\n> Kevin Brown wrote:\n> > Greg Stark wrote:\n> >\n> >\n> > > I think you're being misled by analyzing the write case.\n> > >\n> > > Consider the read case. When a user process requests a block and\n> > > that read makes its way down to the driver level, the driver can't\n> > > just put it aside and wait until it's convenient. It has to go ahead\n> > > and issue the read right away.\n> >\n> > Well, strictly speaking it doesn't *have* to. It could delay for a\n> > couple of milliseconds to see if other requests come in, and then\n> > issue the read if none do. If there are already other requests being\n> > fulfilled, then it'll schedule the request in question just like the\n> > rest.\n> \n> The idea with SCSI or any command queuing is that you don't have to wait\n> for another request to come in --- you can send the request as it\n> arrives, then if another shows up, you send that too, and the drive\n> optimizes the grouping at a later time, knowing what the drive is doing,\n> rather queueing in the kernel.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n",
"msg_date": "Mon, 18 Apr 2005 18:49:44 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 06:49:44PM -0400, Alex Turner wrote:\n> Does it really matter at which end of the cable the queueing is done\n> (Assuming both ends know as much about drive geometry etc..)?\n\nThat is a pretty strong assumption, isn't it? Also you seem to be\nassuming that the controller<->disk protocol (some internal, unknown to\nmere mortals, mechanism) is equally powerful than the host<->controller\n(SATA, SCSI, etc).\n\nI'm lost whether this thread is about what is possible with current,\nin-market technology, or about what could in theory be possible [if you\nwere to design \"open source\" disk controllers and disks.]\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"La fuerza no est� en los medios f�sicos\nsino que reside en una voluntad indomable\" (Gandhi)\n",
"msg_date": "Mon, 18 Apr 2005 18:56:21 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Alex Turner wrote:\n> Does it really matter at which end of the cable the queueing is done\n> (Assuming both ends know as much about drive geometry etc..)?\n\nGood question. If the SCSI system was moving the head from track 1 to\n10, and a request then came in for track 5, could the system make the\nhead stop at track 5 on its way to track 10? That is something that\nonly the controller could do. However, I have no idea if SCSI does\nthat.\n\nThe only part I am pretty sure about is that real-world experience shows\nSCSI is better for a mixed I/O environment. Not sure why, exactly, but\nthe command queueing obviously helps, and I am not sure what else does.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 18 Apr 2005 21:45:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Thu, Apr 14, 2005 at 10:51:46AM -0500, Matthew Nuzum wrote:\n> So if you all were going to choose between two hard drives where:\n> drive A has capacity C and spins at 15K rpms, and\n> drive B has capacity 2 x C and spins at 10K rpms and\n> all other features are the same, the price is the same and C is enough\n> disk space which would you choose?\n> \n> I've noticed that on IDE drives, as the capacity increases the data\n> density increases and there is a pereceived (I've not measured it)\n> performance increase.\n> \n> Would the increased data density of the higher capacity drive be of\n> greater benefit than the faster spindle speed of drive A?\n\nThe increased data density will help transfer speed off the platter, but\nthat's it. It won't help rotational latency.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 18:08:36 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "1. Buy for empty PCI-X Slot - 1 or dual channel SCSI-320\nhardware RAID controller, like MegaRAID SCSI 320-2X\n(don't forget check driver for your OS)\nplus battery backup \nplus (optional) expand RAM to Maximum 256MB - approx $1K\n2. Buy new MAXTOR drives - Atlas 15K II (4x36.7GB) - approx 4x$400.\n3. SCSI 320 Cable set.\n4. Old drives (2) use for OS (optional DB log) files in RAID1 mode,\npossible over one channel of MegaRAID.\n5. New drives (4+) in RAID10 mode for DB\n6. Start tuning Postres + OS: more shared RAM etc.\n\nBest regards,\n Alexander Kirpa\n",
"msg_date": "Sat, 26 Mar 2005 05:21:25 +0200",
"msg_from": "\"Alexander Kirpa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a web app using PostgreSQL which indexes, searches and \nstreams/downloads online movies.\nI think I have a problem with NFS and RAID, it is not strictly \nPostgreSQL but closely linked and I know\nmany people on this list are experienced with this technology. Apologies \nif it is off topic.\nSometimes it is hard to not be the Developer, Database and System \nAdministrator all rolled into one.\n\nI have a FreeBSD box with 1TB disk space RAID 5, 800GB is used.\nThis is mount via NFS onto Debian Linux running Apache/PHP/PostgreSQL.\n\nI have a script which loads the directory structure etc. into the database.\nAs users surf the site web pages are generated by selecting from the \ndatabase as per a standard web app.\nThe server is on a 100mbit link and has reached up to 80mbits/s in the \npast not using NFS or RAID.\n\nThe problem is when users start to stream/download the content the load \naverages go through the roof.\nSometimes as high as 300.\n\nI can only see mostly Apache processes running, up to 2000 is the max. \nlimit.\nEven after 200 Apache connections the load avg. is over 10.\n\nCould it be that using RAID 5 and NFS is causing the high load avg. on \nthe Linux web servers?\nI have a machine with RAID 0 but not ready for a day or so.\n\nI will soon need to move the databases onto the NFS partition and am \nconcerned it will increase my problem.\n\nAny advise much appreciated.\nThank you.\nRegards,\nRudi\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 29 Mar 2005 01:34:38 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "NFS RAID 0/5"
}
] |
[
{
"msg_contents": "On Mon, 28 Mar 2005, Karim A Nassar wrote:\n\n> On Mon, 28 Mar 2005, Simon Riggs wrote:\n> > run the EXPLAIN after doing\n> > \tSET enable_seqscan = off\n>\n> The results I previously supplied were searching for a non-existent\n> value, so I have provided output for both cases.\n>\n> ***\n> *** Searching for non-existent value\n> ***\n>\n> orfs=# PREPARE test2(int) AS SELECT 1 from measurement where\n> orfs-# id_int_sensor_meas_type = $1 FOR UPDATE;\n> PREPARE\n> orfs=# EXPLAIN ANALYZE EXECUTE TEST2(1);\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Seq Scan on measurement\n> (cost=0.00..164559.16 rows=509478 width=6)\n> (actual time=6421.849..6421.849 rows=0 loops=1)\n> Filter: (id_int_sensor_meas_type = $1)\n> Total runtime: 6421.917 ms\n> (3 rows)\n>\n> orfs=# SET enable_seqscan = off;\n\nI think you have to prepare with enable_seqscan=off, because it effects\nhow the query is planned and prepared.\n\n",
"msg_date": "Mon, 28 Mar 2005 08:27:36 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Mon, 28 Mar 2005, Stephan Szabo wrote:\n> > On Mon, 28 Mar 2005, Simon Riggs wrote:\n> > > run the EXPLAIN after doing\n> > > \tSET enable_seqscan = off\n\n...\n\n> I think you have to prepare with enable_seqscan=off, because it\n> effects how the query is planned and prepared.\n\norfs=# SET enable_seqscan = off;\nSET\norfs=# PREPARE test2(int) AS SELECT 1 from measurement where\norfs-# id_int_sensor_meas_type = $1 FOR UPDATE;\nPREPARE\norfs=# EXPLAIN ANALYZE EXECUTE TEST2(1); -- non-existent\n\nQUERY PLAN \n-------------------------------------------------------------------------\n Index Scan using measurement__id_int_sensor_meas_type_idx on measurement\n (cost=0.00..883881.49 rows=509478 width=6) \n (actual time=29.207..29.207 rows=0 loops=1)\n Index Cond: (id_int_sensor_meas_type = $1)\n Total runtime: 29.277 ms\n(3 rows)\n\norfs=# EXPLAIN ANALYZE EXECUTE TEST2(197); -- existing value\n\nQUERY PLAN \n-------------------------------------------------------------------------\n Index Scan using measurement__id_int_sensor_meas_type_idx on measurement\n (cost=0.00..883881.49 rows=509478 width=6) \n (actual time=12.903..37478.167 rows=509478 loops=1)\n Index Cond: (id_int_sensor_meas_type = $1)\n Total runtime: 38113.338 ms\n(3 rows)\n\n--\nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n \n\n\n",
"msg_date": "Mon, 28 Mar 2005 09:37:01 -0700 (MST)",
"msg_from": "Karim A Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Mon, 2005-03-28 at 09:37 -0700, Karim A Nassar wrote:\n> On Mon, 28 Mar 2005, Stephan Szabo wrote:\n> > > On Mon, 28 Mar 2005, Simon Riggs wrote:\n> > > > run the EXPLAIN after doing\n> > > > \tSET enable_seqscan = off\n> \n> ...\n> \n> > I think you have to prepare with enable_seqscan=off, because it\n> > effects how the query is planned and prepared.\n> \n> orfs=# SET enable_seqscan = off;\n> SET\n> orfs=# PREPARE test2(int) AS SELECT 1 from measurement where\n> orfs-# id_int_sensor_meas_type = $1 FOR UPDATE;\n> PREPARE\n> orfs=# EXPLAIN ANALYZE EXECUTE TEST2(1); -- non-existent\n> \n> QUERY PLAN \n> -------------------------------------------------------------------------\n> Index Scan using measurement__id_int_sensor_meas_type_idx on measurement\n> (cost=0.00..883881.49 rows=509478 width=6) \n> (actual time=29.207..29.207 rows=0 loops=1)\n> Index Cond: (id_int_sensor_meas_type = $1)\n> Total runtime: 29.277 ms\n> (3 rows)\n> \n> orfs=# EXPLAIN ANALYZE EXECUTE TEST2(197); -- existing value\n> \n> QUERY PLAN \n> -------------------------------------------------------------------------\n> Index Scan using measurement__id_int_sensor_meas_type_idx on measurement\n> (cost=0.00..883881.49 rows=509478 width=6) \n> (actual time=12.903..37478.167 rows=509478 loops=1)\n> Index Cond: (id_int_sensor_meas_type = $1)\n> Total runtime: 38113.338 ms\n> (3 rows)\n> \n\n\"That process starts upon the supposition that when you have eliminated\nall which is impossible, then whatever remains, however improbable, must\nbe the truth.\" - Sherlock Holmes\n\nWell, based upon the evidence so far, the Optimizer got it right:\n\nNormal\nSeqScan, value=1\telapsed= 6.4s\tcost=164559\nSeqScan, value=197\telapsed=28.1s\tcost=164559\n\nSeqScan=off\nIndexScan, value=1\telapsed= 29ms\tcost=883881\nIndexScan, value=197\telapsed=38.1s\tcost=883881\n\nWith SeqScan=off the index is used, proving that it has been correctly\ndefined for use in queries.\n\nThe FK CASCADE delete onto measurement will only be triggered by the\ndeletion of a real row, so the actual value will be the time taken. This\nis longer than a SeqScan, so the Optimizer is correct.\n\nMy guess is that Measurement has a greatly non-uniform distribution of\nvalues and that 197 is one of the main values. Other values exist in the\nlookup table, but are very infrequently occurring in the larger table.\n\nKarim,\nPlease do:\n\nselect id_int_sensor_meas_type, count(*)\nfrom measurement\ngroup by id_int_sensor_meas_type\norder by count(*) desc;\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Mon, 28 Mar 2005 20:25:54 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "> Well, based upon the evidence so far, the Optimizer got it right:\n\nAgreed. So, this means that the answer to my original question is \"that\ndelete gonna take a long time\"?\n\nSeems that there is still something wrong. From what I can tell from\neveryones questions, the FK constraint on measurement is causing multiple\nseq scans for each value deleted from int_sensor_meas_type. However, when\ndeleting a single value, the FK check should use the index, so my ~190\ndeletes *should* be fast, no?\n\n> IndexScan, value=1\telapsed= 29ms\tcost=883881\n\n190 * 29ms is much less than 40 minutes. What am I missing here?\n\n\n> Karim,\n> Please do:\n>\n> select id_int_sensor_meas_type, count(*)\n> from measurement\n> group by id_int_sensor_meas_type\n> order by count(*) desc;\n\nid_int_sensor_meas_type | count \n-------------------------+--------\n 31 | 509478\n 30 | 509478\n 206 | 509478\n 205 | 509478\n 204 | 509478\n 40 | 509478\n 39 | 509478\n 197 | 509478\n 35 | 509478\n 34 | 509478\n 33 | 509478\n 32 | 509478\n 41 | 509477\n\nThis sample dataset has 13 measurements from a weather station over 3\nyears, hence the even distribution.\n\n\nContinued thanks,\n--\nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n\n\n\n",
"msg_date": "Mon, 28 Mar 2005 13:03:12 -0700 (MST)",
"msg_from": "Karim A Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Mon, 2005-03-28 at 13:03 -0700, Karim A Nassar wrote:\n> > Well, based upon the evidence so far, the Optimizer got it right:\n> \n> Agreed. So, this means that the answer to my original question is \"that\n> delete gonna take a long time\"?\n> \n> Seems that there is still something wrong. From what I can tell from\n> everyones questions, the FK constraint on measurement is causing multiple\n> seq scans for each value deleted from int_sensor_meas_type. However, when\n> deleting a single value, the FK check should use the index, so my ~190\n> deletes *should* be fast, no?\n\nNo.\n\n> > IndexScan, value=1\telapsed= 29ms\tcost=883881\n> \n> 190 * 29ms is much less than 40 minutes. What am I missing here?\n\nIt all depends upon your data.\n\nThere are *no* values in *your* table that take 29ms to delete...\n\n> > Karim,\n> > Please do:\n> >\n> > select id_int_sensor_meas_type, count(*)\n> > from measurement\n> > group by id_int_sensor_meas_type\n> > order by count(*) desc;\n> \n> id_int_sensor_meas_type | count \n> -------------------------+--------\n> 31 | 509478\n> 30 | 509478\n> 206 | 509478\n> 205 | 509478\n> 204 | 509478\n> 40 | 509478\n> 39 | 509478\n> 197 | 509478\n> 35 | 509478\n> 34 | 509478\n> 33 | 509478\n> 32 | 509478\n> 41 | 509477\n> \n> This sample dataset has 13 measurements from a weather station over 3\n> years, hence the even distribution.\n\nEach value has 1/13th of the table, which is too many rows per value to\nmake an IndexScan an efficient way of deleting rows from the table.\n\nThats it.\n\nIf you have more values when measurement is bigger, the delete will\neventually switch plans (if you reconnect) and use the index. But not\nyet.\n\nThere's a few ways to (re)design around it, but the distribution of your\ndata is not *currently* conducive to the using an index.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 28 Mar 2005 22:07:25 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "> Each value has 1/13th of the table, which is too many rows per value to\n> make an IndexScan an efficient way of deleting rows from the table.\n\nBut, the original question was that the delete that was taking a long time\nwas on a different table. I tried to delete 150 rows from a table with 750\nrows, which is FK referenced from this large table. If I understand\ncorrectly, Tom suggested that the length of time was due to a sequential\nscan being done on the large table for each value being deleted from the\nsmall one.\n\n(I have no formal training in database administration nor database theory,\nso please excuse me if I am being dumb.)\n\nFor this FK check, there only need be one referring id to invalidate the\ndelete. ISTM that for any delete with a FK reference, the index could\nalways be used to search for a single value in the referring table\n(excepting very small tables). Why then must a sequential scan be\nperformed in this case, and/or in general? \n\n--\nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n\n",
"msg_date": "Tue, 29 Mar 2005 01:48:48 -0700 (MST)",
"msg_from": "Karim A Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:\n> > Each value has 1/13th of the table, which is too many rows per value to\n> > make an IndexScan an efficient way of deleting rows from the table.\n> \n> But, the original question was that the delete that was taking a long time\n> was on a different table. I tried to delete 150 rows from a table with 750\n> rows, which is FK referenced from this large table. If I understand\n> correctly, Tom suggested that the length of time was due to a sequential\n> scan being done on the large table for each value being deleted from the\n> small one.\n\n> For this FK check, there only need be one referring id to invalidate the\n> delete. ISTM that for any delete with a FK reference, the index could\n> always be used to search for a single value in the referring table\n> (excepting very small tables). Why then must a sequential scan be\n> performed in this case, and/or in general? \n\nMy understanding was that you were doing a DELETE on the smaller table\nand that this was doing a DELETE on the measurement table because you\nhad the FK defined as ON DELETE CASCADE. You are right - only a single\nrow is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE\naction of CASCADE then you will want to touch all rows referenced, so a\nSeqScan is a perfectly valid consequence of such actions.\nI think now that you are using the default action, rather than\nspecifically requesting CASCADE?\n\nStephan, Tom: \nThe SQL generated for RI checking by the RI triggers currently applies a\nlimit at execution time, not at prepare time. i.e. there is no LIMIT\nclause in the SQL. \n\nWe know whether the check will be limit 1 or limit 0 at prepare time, so\nwhy not add a LIMIT clause to the SQL so it changes the plan, not just\nthe number of rows returned when the check query executes?\n(I note that PREPARE does allow you to supply a LIMIT 1 clause).\n\nThat is *ought* to have some effect on the plan used by the RI check\nqueries. In costsize.c:cost_index we would have tuples_fetched==1 and it\nwould be hard (but not impossible) for the index cost to ever be more\nthan the cost of a SeqScan. \n\n...but, I see no way for OidFunctionCall8 to ever return an answer of\n\"always just 1 row, no matter how big the relation\"...so tuples_fetched\nis always proportional to the size of the relation. Are unique indexes\ntreated just as very-low-selectivity indexes? - they're a very similar\nsituation in terms of forcing an absolute, not relative, number of rows\nreturned.\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Tue, 29 Mar 2005 12:29:36 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
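For reference, the check query the RI trigger prepares for the NO ACTION/RESTRICT case has roughly the form shown below (compare Karim's PREPARE earlier in the thread); Simon's proposal amounts to appending LIMIT 1 to it at prepare time. The snippet is only an illustration built around Karim's table and column names, not the actual ri_triggers.c code.

    /* Illustration of the check query with and without the proposed LIMIT. */
    #include <stdio.h>

    int main(void)
    {
        const char *fk_check =
            "SELECT 1 FROM ONLY measurement x "
            "WHERE id_int_sensor_meas_type = $1 FOR UPDATE OF x";

        char proposed[256];
        /* proposed: tell the planner only one matching row is needed */
        snprintf(proposed, sizeof(proposed), "%s LIMIT 1", fk_check);

        printf("current : %s\n", fk_check);
        printf("proposed: %s\n", proposed);
        return 0;
    }

As Stephan and Tom point out in the following messages, the catch is how FOR UPDATE and LIMIT interact, which is why the limit is currently applied at execution time rather than in the query text.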
{
"msg_contents": "On Tue, 29 Mar 2005, Simon Riggs wrote:\n\n> On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:\n> > > Each value has 1/13th of the table, which is too many rows per value to\n> > > make an IndexScan an efficient way of deleting rows from the table.\n> >\n> > But, the original question was that the delete that was taking a long time\n> > was on a different table. I tried to delete 150 rows from a table with 750\n> > rows, which is FK referenced from this large table. If I understand\n> > correctly, Tom suggested that the length of time was due to a sequential\n> > scan being done on the large table for each value being deleted from the\n> > small one.\n>\n> > For this FK check, there only need be one referring id to invalidate the\n> > delete. ISTM that for any delete with a FK reference, the index could\n> > always be used to search for a single value in the referring table\n> > (excepting very small tables). Why then must a sequential scan be\n> > performed in this case, and/or in general?\n>\n> My understanding was that you were doing a DELETE on the smaller table\n> and that this was doing a DELETE on the measurement table because you\n> had the FK defined as ON DELETE CASCADE. You are right - only a single\n> row is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE\n> action of CASCADE then you will want to touch all rows referenced, so a\n> SeqScan is a perfectly valid consequence of such actions.\n> I think now that you are using the default action, rather than\n> specifically requesting CASCADE?\n>\n> Stephan, Tom:\n> The SQL generated for RI checking by the RI triggers currently applies a\n> limit at execution time, not at prepare time. i.e. there is no LIMIT\n> clause in the SQL.\n>\n> We know whether the check will be limit 1 or limit 0 at prepare time, so\n> why not add a LIMIT clause to the SQL so it changes the plan, not just\n> the number of rows returned when the check query executes?\n\nBecause IIRC, FOR UPDATE and LIMIT at least historically didn't play\nnicely together, so you could sometimes get a result where if the first\nrow was locked, the FOR UPDATE would wait on it, but if it was deleted by\nthe other transaction you could get 0 rows back in the trigger.\n\n",
"msg_date": "Tue, 29 Mar 2005 05:50:42 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Tue, 29 Mar 2005, Stephan Szabo wrote:\n\n> On Tue, 29 Mar 2005, Simon Riggs wrote:\n>\n> > On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:\n> > > > Each value has 1/13th of the table, which is too many rows per value to\n> > > > make an IndexScan an efficient way of deleting rows from the table.\n> > >\n> > > But, the original question was that the delete that was taking a long time\n> > > was on a different table. I tried to delete 150 rows from a table with 750\n> > > rows, which is FK referenced from this large table. If I understand\n> > > correctly, Tom suggested that the length of time was due to a sequential\n> > > scan being done on the large table for each value being deleted from the\n> > > small one.\n> >\n> > > For this FK check, there only need be one referring id to invalidate the\n> > > delete. ISTM that for any delete with a FK reference, the index could\n> > > always be used to search for a single value in the referring table\n> > > (excepting very small tables). Why then must a sequential scan be\n> > > performed in this case, and/or in general?\n> >\n> > My understanding was that you were doing a DELETE on the smaller table\n> > and that this was doing a DELETE on the measurement table because you\n> > had the FK defined as ON DELETE CASCADE. You are right - only a single\n> > row is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE\n> > action of CASCADE then you will want to touch all rows referenced, so a\n> > SeqScan is a perfectly valid consequence of such actions.\n> > I think now that you are using the default action, rather than\n> > specifically requesting CASCADE?\n> >\n> > Stephan, Tom:\n> > The SQL generated for RI checking by the RI triggers currently applies a\n> > limit at execution time, not at prepare time. i.e. there is no LIMIT\n> > clause in the SQL.\n> >\n> > We know whether the check will be limit 1 or limit 0 at prepare time, so\n> > why not add a LIMIT clause to the SQL so it changes the plan, not just\n> > the number of rows returned when the check query executes?\n>\n> Because IIRC, FOR UPDATE and LIMIT at least historically didn't play\n> nicely together, so you could sometimes get a result where if the first\n> row was locked, the FOR UPDATE would wait on it, but if it was deleted by\n> the other transaction you could get 0 rows back in the trigger.\n\nIf there were some way to pass a \"limit\" into SPI_prepare that was treated\nsimilarly to a LIMIT clause for planning purposes but didn't actually\nchange the output plan to only return that number of rows, we could use\nthat.\n",
"msg_date": "Tue, 29 Mar 2005 06:20:25 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Tue, 29 Mar 2005, Simon Riggs wrote:\n>> The SQL generated for RI checking by the RI triggers currently applies a\n>> limit at execution time, not at prepare time. i.e. there is no LIMIT\n>> clause in the SQL.\n>> \n>> We know whether the check will be limit 1 or limit 0 at prepare time, so\n>> why not add a LIMIT clause to the SQL so it changes the plan, not just\n>> the number of rows returned when the check query executes?\n\n> Because IIRC, FOR UPDATE and LIMIT at least historically didn't play\n> nicely together, so you could sometimes get a result where if the first\n> row was locked, the FOR UPDATE would wait on it, but if it was deleted by\n> the other transaction you could get 0 rows back in the trigger.\n\nYeah, this is still true. It would probably be a good idea to change it\nbut I haven't looked into exactly what would be involved. The basic\nproblem is that the FOR UPDATE filter needs to execute before LIMIT\ninstead of after, so presumably the FOR UPDATE shenanigans in execMain.c\nwould need to be pushed into a separate plan node that could go\nunderneath the LIMIT node.\n\nOriginally this would have led to even more broken behavior --- locks\ntaken on rows that weren't returned --- because the original coding of\nthe LIMIT node tended to pull one more row from the lower plan than it\nwould actually return. But we fixed that.\n\nI think having such a node might allow us to support FOR UPDATE in\nsubqueries, as well, but I haven't looked at the details. (Whether that\nis a good idea is another question --- the problem of pulling rows that\naren't nominally necessary, and thereby locking them, would apply in\nspades.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Mar 2005 09:29:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> ...but, I see no way for OidFunctionCall8 to ever return an answer of\n> \"always just 1 row, no matter how big the relation\"...so tuples_fetched\n> is always proportional to the size of the relation. Are unique indexes\n> treated just as very-low-selectivity indexes?\n\nYeah. It is not the job of amcostestimate to estimate the number of\nrows, only the index access cost. (IIRC there is someplace in the\nplanner that explicitly considers unique indexes as a part of developing\nselectivity estimates ... but it's not that part.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Mar 2005 09:40:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> If there were some way to pass a \"limit\" into SPI_prepare that was treated\n> similarly to a LIMIT clause for planning purposes but didn't actually\n> change the output plan to only return that number of rows, we could use\n> that.\n\nHmm ... the planner does have the ability to do that sort of thing (we\nuse it for cursors). SPI_prepare doesn't expose the capability.\nPerhaps adding a SPI_prepare variant that does expose it would be the\nquickest route to a solution.\n\nI get a headache every time I look at the RI triggers ;-). Do they\nalways know at the time of preparing a plan which way it will be used?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Mar 2005 09:56:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Tue, 2005-03-29 at 05:50 -0800, Stephan Szabo wrote:\n> On Tue, 29 Mar 2005, Simon Riggs wrote:\n> \n> > On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:\n> > > > Each value has 1/13th of the table, which is too many rows per value to\n> > > > make an IndexScan an efficient way of deleting rows from the table.\n> > >\n> > > But, the original question was that the delete that was taking a long time\n> > > was on a different table. I tried to delete 150 rows from a table with 750\n> > > rows, which is FK referenced from this large table. If I understand\n> > > correctly, Tom suggested that the length of time was due to a sequential\n> > > scan being done on the large table for each value being deleted from the\n> > > small one.\n> >\n> > > For this FK check, there only need be one referring id to invalidate the\n> > > delete. ISTM that for any delete with a FK reference, the index could\n> > > always be used to search for a single value in the referring table\n> > > (excepting very small tables). Why then must a sequential scan be\n> > > performed in this case, and/or in general?\n> >\n> > My understanding was that you were doing a DELETE on the smaller table\n> > and that this was doing a DELETE on the measurement table because you\n> > had the FK defined as ON DELETE CASCADE. You are right - only a single\n> > row is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE\n> > action of CASCADE then you will want to touch all rows referenced, so a\n> > SeqScan is a perfectly valid consequence of such actions.\n> > I think now that you are using the default action, rather than\n> > specifically requesting CASCADE?\n> >\n> > Stephan, Tom:\n> > The SQL generated for RI checking by the RI triggers currently applies a\n> > limit at execution time, not at prepare time. i.e. there is no LIMIT\n> > clause in the SQL.\n> >\n> > We know whether the check will be limit 1 or limit 0 at prepare time, so\n> > why not add a LIMIT clause to the SQL so it changes the plan, not just\n> > the number of rows returned when the check query executes?\n> \n> Because IIRC, FOR UPDATE and LIMIT at least historically didn't play\n> nicely together, so you could sometimes get a result where if the first\n> row was locked, the FOR UPDATE would wait on it, but if it was deleted by\n> the other transaction you could get 0 rows back in the trigger.\n> \n\nWell, sorry to ask more...\n\n...but surely we only need FOR UPDATE clause if we are performing a\nCASCADE action? whereas we only want the LIMIT 1 clause if we are NOT\nperforming a CASCADE action? That way the two clauses are mutually\nexclusive and the problem you outline should never (need to) occur.\n\nThe current code doesn't seem to vary the check query according to the\nrequested FK action...\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 29 Mar 2005 16:20:02 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Tue, 2005-03-29 at 09:56 -0500, Tom Lane wrote:\n> Stephan Szabo <[email protected]> writes:\n> > If there were some way to pass a \"limit\" into SPI_prepare that was treated\n> > similarly to a LIMIT clause for planning purposes but didn't actually\n> > change the output plan to only return that number of rows, we could use\n> > that.\n> \n> Hmm ... the planner does have the ability to do that sort of thing (we\n> use it for cursors). SPI_prepare doesn't expose the capability.\n> Perhaps adding a SPI_prepare variant that does expose it would be the\n> quickest route to a solution.\n> \n> I get a headache every time I look at the RI triggers ;-). Do they\n> always know at the time of preparing a plan which way it will be used?\n\nIf action is NO ACTION or RESTRICT then \n\twe need to SELECT at most 1 row that matches the criteria\n\twhich means we can use LIMIT 1\n\nIf action is CASCADE, SET NULL, SET DEFAULT then\n\twe need to UPDATE or DELETE all rows that match the criteria\n\twhich means we musnt use LIMIT and need to use FOR UPDATE\n\nWe know that at CONSTRAINT creation time, which always occurs before\nplan preparation time.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 29 Mar 2005 16:24:40 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
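As a sketch of the two families of statements Simon is distinguishing (names hypothetical, and simplified compared with what the RI triggers really emit):

    -- NO ACTION / RESTRICT: a pure existence check, so one matching row is enough
    SELECT 1 FROM ONLY fk_table WHERE ref_id = $1;

    -- ON DELETE CASCADE: every referencing row has to go
    DELETE FROM ONLY fk_table WHERE ref_id = $1;

    -- ON DELETE SET NULL / SET DEFAULT: every referencing row is rewritten
    UPDATE ONLY fk_table SET ref_id = NULL    WHERE ref_id = $1;
    UPDATE ONLY fk_table SET ref_id = DEFAULT WHERE ref_id = $1;

Only the first form could take a LIMIT; as Tom notes in the next message, the UPDATE/DELETE forms never use FOR UPDATE at all.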
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> If action is NO ACTION or RESTRICT then \n> \twe need to SELECT at most 1 row that matches the criteria\n> \twhich means we can use LIMIT 1\n\n> If action is CASCADE, SET NULL, SET DEFAULT then\n> \twe need to UPDATE or DELETE all rows that match the criteria\n> \twhich means we musnt use LIMIT and need to use FOR UPDATE\n\nHuh? UPDATE/DELETE don't use FOR UPDATE. I think you have failed\nto break down the cases sufficiently. In particular it matters which\nside of the RI constraint you are working from ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Mar 2005 10:31:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Tue, 2005-03-29 at 10:31 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > If action is NO ACTION or RESTRICT then \n> > \twe need to SELECT at most 1 row that matches the criteria\n> > \twhich means we can use LIMIT 1\n> \n> > If action is CASCADE, SET NULL, SET DEFAULT then\n> > \twe need to UPDATE or DELETE all rows that match the criteria\n> > \twhich means we musnt use LIMIT and need to use FOR UPDATE\n> \n> Huh? UPDATE/DELETE don't use FOR UPDATE. I think you have failed\n> to break down the cases sufficiently. In particular it matters which\n> side of the RI constraint you are working from ...\n\nOK... too quick, sorry. I'll hand over to Stephan for a better and more\nexhaustive explanation/analysis... but AFAICS we *can* always know the\ncorrect formulation of the query prepare time, whether or not we do\ncurrently.\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Tue, 29 Mar 2005 17:01:27 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "\nOn Tue, 29 Mar 2005, Simon Riggs wrote:\n\n> On Tue, 2005-03-29 at 10:31 -0500, Tom Lane wrote:\n> > Simon Riggs <[email protected]> writes:\n> > > If action is NO ACTION or RESTRICT then\n> > > \twe need to SELECT at most 1 row that matches the criteria\n> > > \twhich means we can use LIMIT 1\n> >\n> > > If action is CASCADE, SET NULL, SET DEFAULT then\n> > > \twe need to UPDATE or DELETE all rows that match the criteria\n> > > \twhich means we musnt use LIMIT and need to use FOR UPDATE\n> >\n> > Huh? UPDATE/DELETE don't use FOR UPDATE. I think you have failed\n> > to break down the cases sufficiently. In particular it matters which\n> > side of the RI constraint you are working from ...\n>\n> OK... too quick, sorry. I'll hand over to Stephan for a better and more\n> exhaustive explanation/analysis... but AFAICS we *can* always know the\n> correct formulation of the query prepare time, whether or not we do\n> currently.\n\nWe currently use FOR UPDATE on the NO ACTION check, because otherwise we\nmight get back a row that's already marked for deletion by a concurrent\ntransaction. I think that's intended to wait and succeed, not fail.\n\n",
"msg_date": "Tue, 29 Mar 2005 08:33:20 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Tue, 29 Mar 2005, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > If there were some way to pass a \"limit\" into SPI_prepare that was treated\n> > similarly to a LIMIT clause for planning purposes but didn't actually\n> > change the output plan to only return that number of rows, we could use\n> > that.\n>\n> Hmm ... the planner does have the ability to do that sort of thing (we\n> use it for cursors). SPI_prepare doesn't expose the capability.\n> Perhaps adding a SPI_prepare variant that does expose it would be the\n> quickest route to a solution.\n>\n> I get a headache every time I look at the RI triggers ;-). Do they\n\nMe too, honestly.\n\n> always know at the time of preparing a plan which way it will be used?\n\nI believe so. I think each saved plan pretty much lives for a single\ntrigger type/argument set and is basically used in only one place.\n",
"msg_date": "Tue, 29 Mar 2005 08:38:28 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Tue, 2005-03-29 at 09:40 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > ...but, I see no way for OidFunctionCall8 to ever return an answer of\n> > \"always just 1 row, no matter how big the relation\"...so tuples_fetched\n> > is always proportional to the size of the relation. Are unique indexes\n> > treated just as very-low-selectivity indexes?\n> \n> Yeah. It is not the job of amcostestimate to estimate the number of\n> rows, only the index access cost. (IIRC there is someplace in the\n> planner that explicitly considers unique indexes as a part of developing\n> selectivity estimates ... but it's not that part.)\n\nWell, I mention this because costsize.c:cost_index *does* calculate the\nnumber of rows returned. If unique indexes are handled elsewhere then\nthis would not cause problems for them...but for LIMIT queries..?\n\ncost_index gets the selectivity then multiplies that by number of tuples\nin the relation to calc tuples_fetched, so it can use that in the\nMackert & Lohman formula. There's no consideration of the query limits.\n\nThat implies to me that LIMIT queries are not considered correctly in\nthe M&L formula and thus we are more likely to calculate a too-high cost\nfor using an index in those circumstances....and thus more likely to\nSeqScan for medium sized relations?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 29 Mar 2005 18:17:45 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> That implies to me that LIMIT queries are not considered correctly in\n> the M&L formula and thus we are more likely to calculate a too-high cost\n> for using an index in those circumstances....and thus more likely to\n> SeqScan for medium sized relations?\n\nYou misunderstand how LIMIT is handled. The plan structure is\n\n\tLIMIT ...\n\t\tregular plan ...\n\nand so the strategy is to plan and cost the regular plan as though it\nwould be carried out in full, and then take an appropriate fraction\nof that at the LIMIT stage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Mar 2005 12:31:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time "
},
{
"msg_contents": "On Tue, 2005-03-29 at 12:31 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > That implies to me that LIMIT queries are not considered correctly in\n> > the M&L formula and thus we are more likely to calculate a too-high cost\n> > for using an index in those circumstances....and thus more likely to\n> > SeqScan for medium sized relations?\n> \n> You misunderstand how LIMIT is handled. \n\nHuh? Well, not this time. (Though my error rate is admittedly high.)\n\n> The plan structure is\n> \n> \tLIMIT ...\n> \t\tregular plan ...\n> \n> and so the strategy is to plan and cost the regular plan as though it\n> would be carried out in full, and then take an appropriate fraction\n> of that at the LIMIT stage.\n\nTo cost it as if it would be carried out in full and then not execute in\nfull is the same thing as saying it overestimates the actual execution\ncost. Which can lead to selection of SeqScan plan when the IndexScan\nwould have been cheaper, all things considered.\n\n...it could work like this\n\n\tLIMIT ....\n\t\tregular plan (plan chosen knowing that LIMIT follows)\n\nso that the LIMIT would be considered in the M&L formula.\n\nNot that I am driven by how other systems work, but both DB2 and Oracle\nallow this form of optimization.\n\nThere's not a huge benefit in sending LIMIT 1 through on the FK check\nqueries unless they'd be taken into account in the planning.\n\nAnyway, I'm not saying I know how to do this yet/ever, just to say it is\npossible to use the information available to better effect.\n\nThis looks like a TODO item to me? Thoughts?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 29 Mar 2005 19:53:26 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
},
{
"msg_contents": "On Tue, Mar 29, 2005 at 01:48:48 -0700,\n Karim A Nassar <[email protected]> wrote:\n> \n> For this FK check, there only need be one referring id to invalidate the\n> delete. ISTM that for any delete with a FK reference, the index could\n> always be used to search for a single value in the referring table\n> (excepting very small tables). Why then must a sequential scan be\n> performed in this case, and/or in general? \n\nFirst the index needs to exist. It isn't created automatically because not\neveryone wants such an index. Second, you need to have analyzed the\nreferencing table so that the planner will know it is big enough that\nusing an indexed search is worthwhile. The planner is getting better\nabout dealing with size changes without reanalyzing, but it seems there\nare still some gotchas in 8.0.\n",
"msg_date": "Sun, 3 Apr 2005 07:18:20 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete query takes exorbitant amount of time"
}
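To make Bruno's two points concrete, the usual remedy for the original complaint in this thread is an index on the referencing column of the big table plus fresh statistics. A sketch, with a hypothetical column name:

    -- PostgreSQL does not create this index automatically when the FK is declared:
    CREATE INDEX measurement_fk_idx ON measurement (fk_col);

    -- Give the planner up-to-date row counts so it prefers the index scan:
    ANALYZE measurement;

With both in place, each row deleted from the small referenced table costs one index probe into measurement instead of a sequential scan.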
] |
[
{
"msg_contents": "\n\n\n\nPg: 7.4.5\nRH 7.3\nRaid 0+1 (200G 15k RPM)\nQuad Xeon\n8G ram\n\n95% Read-only\n5% - read-write\n\nI'm experiencing extreme load issues on my machine anytime I have more than\n40 users connected to the database. The majority of the users appear to be\nin an idle state according TOP, but if more than3 or more queries are ran\nthe system slows to a crawl. The queries don't appear to the root cause\nbecause they run fine when the load drops. I also doing routine vacuuming\non the tables.\n\nIs there some place I need to start looking for the issues bogging down the\nserver?\n\n\nHere are some of my settings. I can provide more as needed:\n\n\ncat /proc/sys/kernel/shmmax\n175013888\n\nmax_connections = 100\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 2000 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 12288 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 3000000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 500 # min 100, ~50 bytes each\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 32 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 1800 # range 30-3600, in seconds\n\n\n# - Planner Cost Constants -\n\neffective_cache_size = 262144 # typically 8KB each\n#effective_cache_size = 625000 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n\nPatrick Hatcher\n\n\n",
"msg_date": "Mon, 28 Mar 2005 10:20:54 -0800",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sluggish server performance"
},
{
"msg_contents": "Hi,\n\nAt 20:20 28/03/2005, Patrick Hatcher wrote:\n>I'm experiencing extreme load issues on my machine anytime I have more than\n>40 users connected to the database. The majority of the users appear to be\n>in an idle state according TOP, but if more than3 or more queries are ran\n>the system slows to a crawl. The queries don't appear to the root cause\n>because they run fine when the load drops. I also doing routine vacuuming\n>on the tables.\n>\n>Is there some place I need to start looking for the issues bogging down the\n>server?\n\nCheck that your queries use optimal plans, which usually (but not always) \nmeans they should use indexes rather than sequential scans. You can check \nfor this by using EXPLAIN <query> or EXPLAIN ANALYZE <query>. You can also \ncheck the pg_stat_* and pg_statio_* tables to get a feel of what kind of \naccesses are done. You also might want to find out if your system is \nlimited by IO or by the CPU. Most probably the former.\n\nYou can also check the \"performance tips\" section of the manual.\n\nAlso you shared_buffers setting seems to be pretty low given your \nconfiguration.\n\nHope that helps,\n\nJacques.\n\n\n",
"msg_date": "Mon, 28 Mar 2005 20:39:21 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sluggish server performance"
},
{
"msg_contents": "On Mon, 2005-03-28 at 10:20 -0800, Patrick Hatcher wrote:\n> \n> \n> \n> Pg: 7.4.5\n> RH 7.3\n> Raid 0+1 (200G 15k RPM)\n> Quad Xeon\n> 8G ram\n> \n> 95% Read-only\n> 5% - read-write\n> \n> I'm experiencing extreme load issues on my machine anytime I have more than\n> 40 users connected to the database. The majority of the users appear to be\n> in an idle state according TOP, but if more than3 or more queries are ran\n> the system slows to a crawl. The queries don't appear to the root cause\n> because they run fine when the load drops. I also doing routine vacuuming\n> on the tables.\n> \n> Is there some place I need to start looking for the issues bogging down the\n> server?\n\n\nWell your shared buffers seems a little low but beyond that you may have\na couple of queries that run fine until you get into a highly concurrent\nsituation.\n\nI would turn on statement, duration and pid logging. See if there is\na query that takes say 400ms, if that query needs to be executed before\na bunch of other queries then you will get immediately slow down in a\nhighly concurrent environment.\n\nAlso I didn't see your statistics target listed... What level is that\nat?\n\nLastly you may be able to get away with a lower random_page_cost.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n> Here are some of my settings. I can provide more as needed:\n> \n> \n> cat /proc/sys/kernel/shmmax\n> 175013888\n> \n> max_connections = 100\n> \n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> \n> # - Memory -\n> \n> shared_buffers = 2000 # min 16, at least max_connections*2, 8KB\n> each\n> sort_mem = 12288 # min 64, size in KB\n> #vacuum_mem = 8192 # min 1024, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 3000000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 500 # min 100, ~50 bytes each\n> \n> \n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> \n> # - Settings -\n> \n> #fsync = true # turns forced synchronization on or off\n> #wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n> open_datasync\n> wal_buffers = 32 # min 4, 8KB each\n> \n> # - Checkpoints -\n> \n> checkpoint_segments = 50 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 1800 # range 30-3600, in seconds\n> \n> \n> # - Planner Cost Constants -\n> \n> effective_cache_size = 262144 # typically 8KB each\n> #effective_cache_size = 625000 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n> \n> \n> Patrick Hatcher\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n",
"msg_date": "Mon, 28 Mar 2005 10:39:48 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sluggish server performance"
}
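On the statistics-target question, a quick way to check and raise it (table and column names are placeholders):

    SHOW default_statistics_target;   -- 10 is the shipped default in this era

    -- Raise the target for a heavily-filtered column, then re-analyze:
    ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 100;
    ANALYZE some_table;

Setting default_statistics_target in postgresql.conf instead applies the larger sample to every column at the next ANALYZE.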
] |
[
{
"msg_contents": "I'm using a Postgres table as the data source for a JTable \nin a Java app. As a first approximation, I'm implementing\nAbstractTableModel.getValueAt() like so:\n\n public Object getValueAt(int row, int col)\n {\n try\n {\n rs_.absolute(row + 1);\n return rs_.getObject(col + 1);\n }\n catch (Exception e)\n {\n ...\n }\n return null;\n }\n\nWhere rs_ is a RecordSet object. What I'm wondering is\nwhether it's better to call absolute() or relative() or\nnext()/previous(). If absolute() is the slowest call,\nthen I can cache the last row fetched and move relative\nto that.\n\nMy suspicion is that next()/previous() is much faster\nthan absolute() when the record to be fetched is very near\nthe last record fetched. I haven't actually tried it, but\nI'd like some insight if others can already answer this\nquestion based on knowledge of the server side and/or the\nJDBC driver.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 28 Mar 2005 16:58:55 -0600",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "JDBC best practice"
},
{
"msg_contents": "\n\nOn Mon, 28 Mar 2005, Dave Held wrote:\n\n> I'm using a Postgres table as the data source for a JTable in a Java\n> app. Where rs_ is a RecordSet object. What I'm wondering is whether\n> it's better to call absolute() or relative() or next()/previous(). If\n> absolute() is the slowest call, then I can cache the last row fetched\n> and move relative to that.\n> \n> My suspicion is that next()/previous() is much faster than absolute()\n> when the record to be fetched is very near the last record fetched. I\n> haven't actually tried it, but I'd like some insight if others can\n> already answer this question based on knowledge of the server side\n> and/or the JDBC driver.\n\nThere are two types of ResultSets that can be returned by the JDBC driver. \nOne is backed by a cursor and can only be used for TYPE_FORWARD_ONLY\nResultSets so it is not really applicable to you. The other method\nretrieves all results at once and stashes them in a Vector. This makes\nnext, absolute, and relative positioning all equal cost.\n\nKris Jurka\n",
"msg_date": "Mon, 28 Mar 2005 21:52:36 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC best practice"
}
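For reference, the cursor-backed fetching Kris describes corresponds at the SQL level to roughly the following; the driver issues the equivalent for you, so this is only an illustration of the mechanism, with a made-up query:

    BEGIN;
    DECLARE jtable_cur CURSOR FOR
        SELECT id, label FROM some_table ORDER BY id;
    FETCH FORWARD 50 FROM jtable_cur;   -- one batch of rows at a time
    FETCH FORWARD 50 FROM jtable_cur;
    CLOSE jtable_cur;
    COMMIT;

The scrollable result set used here instead materializes the whole result on the client, which is why next(), relative() and absolute() end up costing the same once the query has run.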
] |
[
{
"msg_contents": "I have used coalesce function for null fields but coalesce is too slow.\nI need fast alternative for coalesce\n\nAL� �EL�K \n\n\n",
"msg_date": "Tue, 29 Mar 2005 14:21:13 +0300",
"msg_from": "\"AL��� ���EL���K\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "coalesce alternative"
},
{
"msg_contents": "On Tue, Mar 29, 2005 at 14:21:13 +0300,\n AL� �EL�K <[email protected]> wrote:\n> I have used coalesce function for null fields but coalesce is too slow.\n> I need fast alternative for coalesce\n\nIt is unlikely that coalesce is your problem. People might be able to provide\nsome help if you provide EXPLAIN ANALYZE output and the actual query for your\nslow query.\n",
"msg_date": "Sun, 3 Apr 2005 08:04:11 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: coalesce alternative"
}
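For anyone unsure what is being requested: run the slow statement under EXPLAIN ANALYZE and post the complete output along with the query. The statement below is only a placeholder to show the form:

    EXPLAIN ANALYZE
    SELECT id, COALESCE(discount, 0) AS discount
      FROM orders
     WHERE customer_id = 42;

COALESCE is a cheap conditional expression; the plan output is what shows where the time actually goes (sequential scans, misestimated row counts, and so on).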
] |
[
{
"msg_contents": "Hi everybody...\n\nI'm new hear, and I will try to explain my problem, and maybe I can\nget a help...\n\nI'm writing a software for 3 years, and this software has the position\nGPS from vehicles, and other informations.\n\nMy problem starts when I had to store all the data about the vehicles,\nabout 1 or 2 months.\n\nActually I had a table called DADO_LIDO, that I write all information\nand the primary key is DATA (GPS DAY+HOUR) and the VEHICLE IDENTIFY.\n\nEach vehicle trasmit 1 position by 30 seconds, so I have something\nlike 2000 rows per vehicle/day. I already has 2 clients one with 4000\nvehicles, and the other with 500 vehicles.\n\nMy application was made in delphi using ZEOS that's permit me testing\nin mysql and postgres.\n\nI allready has the two databases.\n\nBut now the problem starts when I has to select data from this\nvehicles about the history ( I store only 2 months ) something like 40\nor 50 millions of data about 500 vehicles.\n\nUsing the keys VEHICLE_ID and GPS_TIME, the perfomance is very low...\n\nI need some ideas for a better perfomance in this table\n\nusing selects by \nPERIOD / VEHICLE\nPERIOD / VEHICLES\nPERIOD / VEHICLE / ( a bit test in 3 integer columns using logical operators )\n\nThanks for any help\n\nVinicius Marques De Bernardi\n",
"msg_date": "Tue, 29 Mar 2005 15:33:24 -0300",
"msg_from": "Vinicius Bernardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Million of rows"
},
{
"msg_contents": "On Tue, Mar 29, 2005 at 03:33:24PM -0300, Vinicius Bernardi wrote:\n>\n> But now the problem starts when I has to select data from this\n> vehicles about the history ( I store only 2 months ) something like 40\n> or 50 millions of data about 500 vehicles.\n> \n> Using the keys VEHICLE_ID and GPS_TIME, the perfomance is very low...\n\nPlease post an example query and the EXPLAIN ANALYZE output. The\ntable definition might be useful too.\n\n> I need some ideas for a better perfomance in this table\n\nDo you have indexes where you need them? Do you cluster on any of\nthe indexes? Do you VACUUM and ANALYZE the database regularly?\nHave you investigated whether you need to increase the statistics\non any columns? Have you tuned postgresql.conf? What version of\nPostgreSQL are you using?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Tue, 29 Mar 2005 12:08:15 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Million of rows"
},
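Pending the EXPLAIN ANALYZE output Michael asks for, the usual starting point for a history table queried by vehicle and period is a composite index matching that access pattern, kept analyzed. A sketch (column names are guesses based on the keys mentioned above):

    CREATE INDEX dado_lido_veic_time_idx ON dado_lido (vehicle_id, gps_time);

    -- The period-per-vehicle query such an index serves with a range scan:
    SELECT *
      FROM dado_lido
     WHERE vehicle_id = 1234
       AND gps_time >= '2005-02-01'
       AND gps_time <  '2005-03-01';

    VACUUM ANALYZE dado_lido;

Clustering the table on that index can also pay off when most reads are one vehicle over a time range, at the price of an exclusive lock while CLUSTER runs.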
{
"msg_contents": "At now I have this system runing in a mysql, all the examples I have\nare in mysql, but the biggest client that will start now, we will use\nPostgreSQL, so I need a way to do those questions in postgres...\nIdeas like TABLESPACES or anothe things...\n\nJust looking for start ideas...\n\nThanks\n\nVinicius Marques De Bernardi\n\n\nOn Tue, 29 Mar 2005 12:08:15 -0700, Michael Fuhr <[email protected]> wrote:\n> On Tue, Mar 29, 2005 at 03:33:24PM -0300, Vinicius Bernardi wrote:\n> >\n> > But now the problem starts when I has to select data from this\n> > vehicles about the history ( I store only 2 months ) something like 40\n> > or 50 millions of data about 500 vehicles.\n> >\n> > Using the keys VEHICLE_ID and GPS_TIME, the perfomance is very low...\n> \n> Please post an example query and the EXPLAIN ANALYZE output. The\n> table definition might be useful too.\n> \n> > I need some ideas for a better perfomance in this table\n> \n> Do you have indexes where you need them? Do you cluster on any of\n> the indexes? Do you VACUUM and ANALYZE the database regularly?\n> Have you investigated whether you need to increase the statistics\n> on any columns? Have you tuned postgresql.conf? What version of\n> PostgreSQL are you using?\n> \n> --\n> Michael Fuhr\n> http://www.fuhr.org/~mfuhr/\n>\n",
"msg_date": "Tue, 29 Mar 2005 18:10:44 -0300",
"msg_from": "Vinicius Bernardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Million of rows"
}
] |
[
{
"msg_contents": "I recently pg_dumpall'd my DB from a (used for testing) v20z install of\npostgresql 8.0.1, and restored it to my production (but not yet in\nservice) v40z running the same version. The test DB has had multiple\nmillions of rows created/dropped during testing. The results of VACUUM\nVERBOSE are giving me pause:\n\n* v40z (hardly used after restore):\n\norfs=# vacuum full analyze verbose;\n<snip>\nINFO: free space map: 114 relations, 84 pages stored; 1824 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.\nVACUUM\n\n\n* v20z (after having undergone extensive tweaking, deletes, and inserts):\n\norfs=# vacuum full analyze verbose;\n<snip>\nINFO: free space map: 53 relations, 13502 pages stored; 9776 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.\nVACUUM\n\n1) Should I be concerned about the total pages needed? ISTM that using\nthat many more pages can't help but degrade relative performance on my\ntesting machine.\n\n2) How is it that my FSM has different numbers of relations?\n\n3) Are either of these affects normal for an oft-used (or not) DB?\n\nFWIW:\n v40z v20z\nmaintenance_work_mem 262144 16384 \nshared_buffers 30000 1000\n\nThanks,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n",
"msg_date": "Tue, 29 Mar 2005 17:52:58 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM on duplicate DB gives FSM and total pages discrepancies"
},
{
"msg_contents": "On Tue, Mar 29, 2005 at 05:52:58PM -0700, Karim Nassar wrote:\n> I recently pg_dumpall'd my DB from a (used for testing) v20z install of\n> postgresql 8.0.1, and restored it to my production (but not yet in\n> service) v40z running the same version. The test DB has had multiple\n> millions of rows created/dropped during testing. The results of VACUUM\n> VERBOSE are giving me pause:\n\nThe FSM only stores pages that have some free space. If the database\nhas only been through a restore, then probably there aren't many of\nthose. After you start playing with the data some more pages will need\nregistering. So the behavior you are seeing is expected.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"We are who we choose to be\", sang the goldfinch\nwhen the sun is high (Sandman)\n",
"msg_date": "Wed, 30 Mar 2005 11:19:03 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM on duplicate DB gives FSM and total pages discrepancies"
}
] |
[
{
"msg_contents": "Hi\n\nI am looking for some references to literature. While we have used \nPostgreSQL in the past for a fair number of smaller projects, we are \nnow starting to use it on a larger scale and hence need to get into \nissues of performance optimisation and tuning. While I am OK with using \nthe EXPLAIN features, I am getting quite insecure when facing things \nlike the options in the postgresql.conf file. For example reading the \nman page on fsync option, it tells me to \"read the detailed \ndocumentation before using this!\" I then read the Admin guide where I \nget told that the benefits of this feature are issue of debate, leaving \nme with little help as to how to make up my mind on this issue. So I \nturn to this mailing list, but starting reading in the archive realise \nthat compared to the knowledge standard here, I am as wise as a baby.\n\nI have read most of Bruce Momjian's book on PostgreSQL (Can I update my \n2001 edition somehow? ;-)\nI have Sams' PostgreSQL Developer's Handbook (which is OK too), but \noffers little to nothing on operational issues.\nI have read most of the admin (and user) guide, but it does not help me \nreally understand the issues:\n> CPU_INDEX_TUPLE_COST (floating point) Sets the query optimizer’s \n> estimate of the cost of processing each index tuple during an index \n> scan. This is measured as a fraction of the cost of a sequential page \n> fetch.\nNo idea what this means! (And should I feel bad for it?)\n\nI am an application programmer with a little UNIX know-how.\n\nWhat books or sources are out there that I can buy/download and that I \nshould read to get to grips with the more advanced issues of running \nPostgreSQL?\n\nMore on what we do (for those interested):\nWe use PostgreSQL mainly with its PostGIS extension as the database \nbackend for Zope-based applications. Adding PostGIS features is what \nhas started to cause noticeable increase in the server load.\nWe're using the GIS enabled system on this platform:\nPostgreSQL 7.3.4\nPostGIS 0.8\nZope 2.7.5\nPython 2.3.5\n(Database-based functions are written in PL/PGSQL, not python!!)\n\non a 2-CPU (450MHz Intel P3) Compaq box (some Proliant flavour)\nWith a SCSI 4-disk RAID system (mirrored and striped)\nSunOS 5.8 (Which I think is Solaris 8)\n\nThe server is administrated by my host (co-located). We cannot easily \nupgrade to a newer version of Solaris, because we could not find a \ndriver for the disk controller used in this server. (And our host did \nnot manage to write/patch one up.)\n\nAs a business, we are creating and operating on-line communities, (for \nan example go to http://www.theguidlife.net) not only from a technical \npoint of view, but also supporting the communities in producing \ncontent.\n\nBTW. If you are a SQL/python programmer in (or near) Lanarkshire, \nScotland, we have a vacancy. ;-)\n\nCheers\n\nMarc\n\n--\nMarc Burgauer\n\nSharedbase Ltd\nhttp://www.sharedbase.com\nCreating and supporting on-line communities\n",
"msg_date": "Wed, 30 Mar 2005 12:07:29 +0100",
"msg_from": "Marc Burgauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reading recommendations"
},
{
"msg_contents": "On Wed, Mar 30, 2005 at 12:07:29PM +0100, Marc Burgauer wrote:\n> \n> What books or sources are out there that I can buy/download and that I \n> should read to get to grips with the more advanced issues of running \n> PostgreSQL?\n\nSee the Power PostgreSQL Performance & Configuration documents:\n\nhttp://www.powerpostgresql.com/Docs/\n\n> BTW. If you are a SQL/python programmer in (or near) Lanarkshire, \n> Scotland, we have a vacancy. ;-)\n\nAllow telecommute from across the pond and I might be interested :-)\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Wed, 30 Mar 2005 08:58:21 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "\n\[email protected] wrote on 03/30/2005 10:58:21 AM:\n\n>\n> Allow telecommute from across the pond and I might be interested :-)\n\nPlease post phone bills to this list.\n\n>\n> --\n> Michael Fuhr\n> http://www.fuhr.org/~mfuhr/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 30 Mar 2005 11:27:07 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
}
] |
[
{
"msg_contents": "\nI'm getting weird results for one of my queries. The actual time of this\nindex scan doesn't make any sense:\n\n-> Index Scan using dok_dok_fk_i on dokumendid a (cost=0.00..566.24\nrows=184 width=8) (actual time=0.170..420806.563 rows=1 loops=1) \n\ndok_dok_fk_i is index on dokumendid(dok_dok_id). Currently it contains\nmostly NULLs:\n\npos1=# select dok_dok_id, count(1) from dokumendid group by dok_dok_id;\n dok_dok_id | count\n------------+-------\n | 11423\n 8034 | 76\n(2 rows)\n\nIf I drop the index, seq scan + sort is used instead and everything is\nfast again.\n\nThe PostgreSQL version:\n\npos1=# select version();\n version\n------------------------------------------------------------------------\n------------------------------\n PostgreSQL 7.4.5 on i386-pc-linux-gnu, compiled by GCC i386-linux-gcc\n(GCC) 3.3.4 (Debian 1:3.3.4-9)\n(1 row)\n\nThe full EXPLAIN ANALYZE output:\n\npos1=# explain analyze select * from v_inventuuri_vahed_kaubagrupiti;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------------------------------\n Subquery Scan v_inventuuri_vahed_kaubagrupiti (cost=50896.04..50896.61\nrows=46 width=128) (actual time=437007.670..437007.817 rows=45 loops=1)\n -> Sort (cost=50896.04..50896.15 rows=46 width=42) (actual\ntime=437007.664..437007.692 rows=45 loops=1)\n Sort Key: (COALESCE(sum(ir.summa_kmta), 0::numeric))::raha\n -> HashAggregate (cost=50893.85..50894.77 rows=46 width=42)\n(actual time=437007.229..437007.488 rows=45 loops=1)\n -> Hash Join (cost=5533.44..50807.93 rows=5728\nwidth=42) (actual time=436226.533..436877.499 rows=16271 loops=1)\n Hash Cond: (\"outer\".kau_kau_id = \"inner\".kau_id)\n -> Merge Right Join (cost=4759.52..49858.92\nrows=15696 width=26) (actual time=436117.333..436600.653 rows=16271\nloops=1)\n Merge Cond: ((\"outer\".dok_dok_id =\n\"inner\".dok_id) AND (\"outer\".kau_kau_id = \"inner\".kau_kau_id))\n -> Index Scan using dor_dok_kau_i on\ndokumentide_read ar (cost=0.00..42789.44 rows=480962 width=19) (actual\ntime=0.023..7873.117 rows=205879 loops=1)\n -> Sort (cost=4759.52..4798.76 rows=15696\nwidth=19) (actual time=428381.719..428392.204 rows=16271 loops=1)\n Sort Key: a.dok_id, ir.kau_kau_id\n -> Merge Left Join\n(cost=0.00..3665.65 rows=15696 width=19) (actual time=0.245..428279.595\nrows=16258 loops=1)\n Merge Cond: (\"outer\".dok_id =\n\"inner\".dok_dok_id)\n -> Nested Loop\n(cost=0.00..3620.23 rows=15696 width=19) (actual time=0.063..7243.529\nrows=16258 loops=1)\n -> Index Scan using dok_pk\non dokumendid i (cost=0.00..3.73 rows=1 width=4) (actual\ntime=0.030..0.035 rows=1 loops=1)\n Index Cond: (dok_id =\n8034)\n Filter: (tyyp =\n'IN'::bpchar)\n -> Index Scan using\ndor_dok_fk_i on dokumentide_read ir (cost=0.00..3459.55 rows=15696\nwidth=19) (actual time=0.023..7150.257 rows=16258 loops=1)\n Index Cond: (8034 =\ndok_dok_id)\n -> Index Scan using dok_dok_fk_i\non dokumendid a (cost=0.00..566.24 rows=184 width=8) (actual\ntime=0.170..420806.563 rows=1 loops=1)\n Filter: (tyyp =\n'IA'::bpchar)\n -> Hash (cost=757.71..757.71 rows=6487 width=24)\n(actual time=109.178..109.178 rows=0 loops=1)\n -> Hash Join (cost=15.56..757.71 rows=6487\nwidth=24) (actual time=1.787..85.554 rows=17752 loops=1)\n Hash Cond: (\"outer\".kag_kag_id =\n\"inner\".a_kag_id)\n -> Seq Scan on kaubad k\n(cost=0.00..588.52 rows=17752 width=8) (actual time=0.005..30.952\nrows=17752 loops=1)\n -> Hash 
(cost=15.35..15.35 rows=83\nwidth=24) (actual time=1.770..1.770 rows=0 loops=1)\n -> Hash Join (cost=5.39..15.35\nrows=83 width=24) (actual time=0.276..1.491 rows=227 loops=1)\n Hash Cond:\n(\"outer\".y_kag_id = \"inner\".kag_id)\n -> Seq Scan on\nkaubagruppide_kaubagrupid gg (cost=0.00..7.09 rows=409 width=8) (actual\ntime=0.004..0.405 rows=409 loops=1)\n -> Hash (cost=5.27..5.27\nrows=46 width=20) (actual time=0.259..0.259 rows=0 loops=1)\n -> Seq Scan on\nkaubagrupid g (cost=0.00..5.27 rows=46 width=20) (actual\ntime=0.010..0.206 rows=46 loops=1)\n Filter:\n(kag_kag_id IS NULL)\n Total runtime: 437011.532 ms\n(33 rows)\n\n Tambet\n",
"msg_date": "Wed, 30 Mar 2005 15:37:18 +0300",
"msg_from": "\"Tambet Matiisen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird index scan"
}
] |
[
{
"msg_contents": "VOIP over BitTorrent? \n\n;-)\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Wednesday, March 30, 2005 11:27 AM\nTo: Michael Fuhr\nCc: Marc Burgauer; [email protected]; [email protected]\nSubject: Re: [PERFORM] Reading recommendations\n\n\n\n\[email protected] wrote on 03/30/2005 10:58:21 AM:\n\n>\n> Allow telecommute from across the pond and I might be interested :-)\n\nPlease post phone bills to this list.\n\n>\n> --\n> Michael Fuhr\n",
"msg_date": "Wed, 30 Mar 2005 16:39:47 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "Mohan, Ross wrote:\n> VOIP over BitTorrent? \n\nNow *that* I want to see. Aught to be at least as interesting\nas the \"TCP/IP over carrier pigeon\" experiment - and more\nchallenging to boot!\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 30 Mar 2005 09:52:13 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "On Wed, Mar 30, 2005 at 16:39:47 -0000,\n \"Mohan, Ross\" <[email protected]> wrote:\n> VOIP over BitTorrent? \n\nPlain VOIP shouldn't be a problem. And if you want to do tricky things\nyou can use Asterisk on both ends. Asterisk is open source (GPL, duel\nlicensed from Digium) and runs on low powered linux boxes. A card that\ntalks to your existing analog phones and your existing phone line\ncosts $200. You don't need special cards if you have IP phones or a headset\nconnected to your computer and don't use your local phone company for\nthe calls.\n",
"msg_date": "Wed, 30 Mar 2005 12:00:39 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "It was very challenging. I worked on the credit window sizing and\nretransmission timer estimation algorithms. We took into account weather\npatterns, size and age of the bird, feeding times, and the average number\nof times a bird circles before determining magnetic north. Interestingly,\npacket size had little effect in the final algorithms.\n\[email protected] wrote on 03/30/2005 11:52:13 AM:\n\n> Mohan, Ross wrote:\n> > VOIP over BitTorrent?\n>\n> Now *that* I want to see. Aught to be at least as interesting\n> as the \"TCP/IP over carrier pigeon\" experiment - and more\n> challenging to boot!\n>\n\nIt was very challenging. I worked on the credit window sizing and\nretransmission timer estimation algorithms. We took into account weather\npatterns, size and age of the bird, feeding times, and the average number\nof times a bird circles before determining magnetic north. Interestingly,\npacket size had little effect in the final algorithms.\n\nI would love to share them with all of you, but they're classified.\n\n>\n> --\n> Steve Wampler -- [email protected]\n> The gods that smiled on your birth are now laughing out loud.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Wed, 30 Mar 2005 15:11:24 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "[email protected] wrote:\n\n>>Mohan, Ross wrote:\n>>\n>>>VOIP over BitTorrent?\n>>\n>>Now *that* I want to see. Aught to be at least as interesting\n>>as the \"TCP/IP over carrier pigeon\" experiment - and more\n>>challenging to boot!\n>>\n> \n> \n> It was very challenging. I worked on the credit window sizing and\n> retransmission timer estimation algorithms. We took into account weather\n> patterns, size and age of the bird, feeding times, and the average number\n> of times a bird circles before determining magnetic north. Interestingly,\n> packet size had little effect in the final algorithms.\n> \n> I would love to share them with all of you, but they're classified.\n\nAh, but VOIPOBT requires many people all saying the same thing at the\nsame time. The synchronization alone (since you need to distribute\nthese people adequately to avoid overloading a trunk line...) is probably\nsufficiently hard to make it interesting. Then there are the problems of\ndifferent accents, dilects, and languages ;)\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 30 Mar 2005 13:58:12 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "\n\nSteve Wampler <[email protected]> wrote on 03/30/2005 03:58:12 PM:\n\n> [email protected] wrote:\n>\n> >>Mohan, Ross wrote:\n> >>\n> >>>VOIP over BitTorrent?\n> >>\n> >>Now *that* I want to see. Aught to be at least as interesting\n> >>as the \"TCP/IP over carrier pigeon\" experiment - and more\n> >>challenging to boot!\n> >>\n> >\n> >\n> > It was very challenging. I worked on the credit window sizing and\n> > retransmission timer estimation algorithms. We took into account\nweather\n> > patterns, size and age of the bird, feeding times, and the average\nnumber\n> > of times a bird circles before determining magnetic north.\nInterestingly,\n> > packet size had little effect in the final algorithms.\n> >\n> > I would love to share them with all of you, but they're classified.\n>\n> Ah, but VOIPOBT requires many people all saying the same thing at the\n> same time. The synchronization alone (since you need to distribute\n> these people adequately to avoid overloading a trunk line...) is probably\n> sufficiently hard to make it interesting. Then there are the problems of\n> different accents, dilects, and languages ;)\n\nInterestingly, we had a follow on contract to investigate routing\noptimization using flooding techniques. Oddly, it was commissioned by a\nconsortium of local car washes. Work stopped when the park service sued us\nfor the cost of cleaning all the statuary, and the company went out of\nbusiness. We were serving \"cornish game hens\" at our frequent dinner\nparties for months.\n\n>\n> --\n> Steve Wampler -- [email protected]\n> The gods that smiled on your birth are now laughing out loud.\n\n",
"msg_date": "Thu, 31 Mar 2005 08:19:15 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "On 2005-03-31 15:19, [email protected] wrote:\n>> >>Now *that* I want to see. Aught to be at least as interesting\n>> >>as the \"TCP/IP over carrier pigeon\" experiment - and more\n>> >>challenging to boot!\n..\n> Interestingly, we had a follow on contract to investigate routing\n> optimization using flooding techniques. Oddly, it was commissioned by a\n> consortium of local car washes. Work stopped when the park service sued us\n> for the cost of cleaning all the statuary, and the company went out of\n> business. We were serving \"cornish game hens\" at our frequent dinner\n> parties for months.\n\nThis method might have been safer (and it works great with Apaches):\nhttp://eagle.auc.ca/~dreid/\n\ncheers\nstefan\n",
"msg_date": "Thu, 31 Mar 2005 15:57:55 +0200",
"msg_from": "Stefan Weiss <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "Stefan Weiss wrote:\n> On 2005-03-31 15:19, [email protected] wrote:\n> \n>>>>>Now *that* I want to see. Aught to be at least as interesting\n>>>>>as the \"TCP/IP over carrier pigeon\" experiment - and more\n>>>>>challenging to boot!\n> \n> ..\n> \n>>Interestingly, we had a follow on contract to investigate routing\n>>optimization using flooding techniques. Oddly, it was commissioned by a\n>>consortium of local car washes. Work stopped when the park service sued us\n>>for the cost of cleaning all the statuary, and the company went out of\n>>business. We were serving \"cornish game hens\" at our frequent dinner\n>>parties for months.\n> \n> \n> This method might have been safer (and it works great with Apaches):\n> http://eagle.auc.ca/~dreid/\n\nAha - VOIPOBD as well as VOIPOBT! What more can one want?\n\nVOIPOCP, I suppose...\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Thu, 31 Mar 2005 08:48:09 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
},
{
"msg_contents": "\n\[email protected] wrote on 03/31/2005 10:48:09 AM:\n\n> Stefan Weiss wrote:\n> > On 2005-03-31 15:19, [email protected] wrote:\n> >\n> >>>>>Now *that* I want to see. Aught to be at least as interesting\n> >>>>>as the \"TCP/IP over carrier pigeon\" experiment - and more\n> >>>>>challenging to boot!\n> >\n> > ..\n> >\n> >>Interestingly, we had a follow on contract to investigate routing\n> >>optimization using flooding techniques. Oddly, it was commissioned by\na\n> >>consortium of local car washes. Work stopped when the park service\nsued us\n> >>for the cost of cleaning all the statuary, and the company went out of\n> >>business. We were serving \"cornish game hens\" at our frequent dinner\n> >>parties for months.\n> >\n> >\n> > This method might have been safer (and it works great with Apaches):\n> > http://eagle.auc.ca/~dreid/\n>\n> Aha - VOIPOBD as well as VOIPOBT! What more can one want?\n>\n> VOIPOCP, I suppose...\n\nStart collecting recipes for small game birds now. We ran out pretty\nquickly. Finally came up with \"Pigeon Helper\" and sold it to homeless\nshelters in New York. Sales were slow until we added a wine sauce.\n\n>\n>\n> --\n> Steve Wampler -- [email protected]\n> The gods that smiled on your birth are now laughing out loud.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Thu, 31 Mar 2005 13:24:30 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reading recommendations"
}
] |
[
{
"msg_contents": "Yea, the upside is that you get better than the 1 byte/hour rate for pigeon-net. \n\nDownside is that simply because you use BiTorrent, the RIAA accuses you of \neverything from CD piracy to shipping pr*n to cyberterrorism, and you spend \nthe next four years in Gitmo, comparing notes with your cellmates in Camp X-Ray, \nand watching pigeons fly overhead. \n\n\n-----Original Message-----\nFrom: Steve Wampler [mailto:[email protected]] \nSent: Wednesday, March 30, 2005 11:52 AM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PERFORM] Reading recommendations\n\n\nMohan, Ross wrote:\n> VOIP over BitTorrent?\n\nNow *that* I want to see. Aught to be at least as interesting as the \"TCP/IP over carrier pigeon\" experiment - and more challenging to boot!\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 30 Mar 2005 16:57:59 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reading recommendations"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am developing a simple set returning function as my first step towards more\ncomplicated processes. I would like to understand the implications of using\nthe dynamic query capability.\n\nI have built two forms of an identically performing function. The first uses\na simple IF-THEN-ELSIF-THEN-ELSE structure to choose which query to run. The\nsecond builds the query dynamically using the FOR-IN-EXECUTE structure and a\nCASE statement.\n\nThe documentation\n(http://www.postgresql.org/docs/8.0/interactive/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING)\nindicates that a dynamic query (EXECUTE) is replanned for every LOOP iteration.\n\n This is like the previous form, except that the source\n SELECT statement is specified as a string expression,\n which is evaluated and replanned on each entry to the\n FOR loop. This allows the programmer to choose the speed\n of a preplanned query or the flexibility of a dynamic\n query, just as with a plain EXECUTE statement.\n\nThat seems like a potential performance problem. I don't understand why the\nquery would be planned for every LOOP iteration when the LOOP is over the\nrecord set.\n\nYour comments are appreciated.\n\nKind Regards,\nKeith\n\n\nCREATE OR REPLACE FUNCTION func_item_list(\"varchar\")\n RETURNS SETOF VARCHAR AS\n$BODY$\n DECLARE\n v_status ALIAS FOR $1;\n r_item_id RECORD;\n BEGIN\n-- Build the record set using the appropriate query.\n IF lower(v_status) = 'active' THEN\n FOR r_item_id IN SELECT tbl_item.id\n FROM tbl_item\n WHERE NOT tbl_item.inactive\n ORDER BY tbl_item.id\n LOOP\n RETURN NEXT r_item_id;\n END LOOP;\n ELSIF lower(v_status) = 'inactive' THEN\n FOR r_item_id IN SELECT tbl_item.id\n FROM tbl_item\n WHERE tbl_item.inactive\n ORDER BY tbl_item.id\n LOOP\n RETURN NEXT r_item_id;\n END LOOP;\n ELSE\n FOR r_item_id IN SELECT tbl_item.id\n FROM tbl_item\n ORDER BY tbl_item.id\n LOOP\n RETURN NEXT r_item_id;\n END LOOP;\n END IF;\n RETURN;\n END;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\nSELECT * FROM func_item_list('Active');\n\n\n\nCREATE OR REPLACE FUNCTION func_item_list(\"varchar\")\n RETURNS SETOF VARCHAR AS\n$BODY$\n DECLARE\n v_status ALIAS FOR $1;\n r_item_id RECORD;\n BEGIN\n-- Build the record set using a dynamically built query.\n FOR r_item_id IN EXECUTE 'SELECT tbl_item.id\n FROM tbl_item' ||\n CASE WHEN lower(v_status) = 'active' THEN\n ' WHERE NOT tbl_item.inactive '\n WHEN lower(v_status) = 'inactive' THEN\n ' WHERE tbl_item.inactive '\n ELSE\n ' '\n END ||\n ' ORDER BY tbl_item.id'\n LOOP\n RETURN NEXT r_item_id;\n END LOOP;\n RETURN;\n END;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\nSELECT * FROM func_item_list('AcTiVe');\n\n",
"msg_date": "Wed, 30 Mar 2005 12:22:55 -0500",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dynamic query perormance"
},
{
"msg_contents": "Keith Worthington wrote:\n\n>Hi All,\n>\n>I am developing a simple set returning function as my first step towards more\n>complicated processes. I would like to understand the implications of using\n>the dynamic query capability.\n>\n>I have built two forms of an identically performing function. The first uses\n>a simple IF-THEN-ELSIF-THEN-ELSE structure to choose which query to run. The\n>second builds the query dynamically using the FOR-IN-EXECUTE structure and a\n>CASE statement.\n>\n>The documentation\n>(http://www.postgresql.org/docs/8.0/interactive/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING)\n>indicates that a dynamic query (EXECUTE) is replanned for every LOOP iteration.\n>\n> This is like the previous form, except that the source\n> SELECT statement is specified as a string expression,\n> which is evaluated and replanned on each entry to the\n> FOR loop. This allows the programmer to choose the speed\n> of a preplanned query or the flexibility of a dynamic\n> query, just as with a plain EXECUTE statement.\n>\n>That seems like a potential performance problem. I don't understand why the\n>query would be planned for every LOOP iteration when the LOOP is over the\n>record set.\n>\n>\n>\nReading the documentation and looking at the example, I don't think\nyou're query will be re-planned for each entry in the loop.\nI think it will be planned each time the FOR loop is started.\nIf you have the EXECUTE *inside* the LOOP, then it would be re-planned\nfor each entry.\n\nAt least that is the case for a normal EXECUTE without any for loop.\nEach time the function is called, the statement is re-planned. Versus\nwithout EXECUTE when the planning is done at function declaration time.\n\nI would guess that the FOR .. IN EXECUTE .. LOOP runs the EXECUTE one\ntime, and generates the results which it then loops over. Because that\nis what FOR .. IN SELECT .. LOOP does (you don't re-evaluate the SELECT\nfor each item in the result set).\n\nOn the other hand, I don't know of any way to test this, unless you have\na query that you know takes a long time to plan, and can compare the\nperformance of FOR IN EXECUTE versus FOR IN SELECT.\nJohn\n=:->\n\n>Your comments are appreciated.\n>\n>Kind Regards,\n>Keith\n>\n>\n>",
"msg_date": "Wed, 30 Mar 2005 11:57:49 -0600",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic query perormance"
},
{
"msg_contents": "\n\n> which is evaluated and replanned on each entry to the\n> FOR loop. This allows the programmer to choose the speed\n\n\tOn each entry is not the same as on each iteration. It would means \"every \ntime the loop is started\"...\n\n\tRegards,\n\tPFC\n",
"msg_date": "Wed, 30 Mar 2005 20:34:28 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic query perormance"
}
] |
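A minimal way to check the planning behaviour discussed in this thread is to time the two variants side by side in psql. This is only a sketch: it assumes the two functions are created under distinct names (func_item_list_static and func_item_list_dynamic are hypothetical), since the original post reuses one name for both.

    -- psql session; \timing prints the elapsed time of each statement
    \timing
    SELECT count(*) FROM func_item_list_static('active');   -- IF/ELSIF version, preplanned
    SELECT count(*) FROM func_item_list_dynamic('active');  -- FOR ... IN EXECUTE version
    -- With FOR ... IN EXECUTE the query string is planned once per entry into the
    -- loop (i.e. once per function call), not once per returned row, so both
    -- timings should stay close regardless of how many rows come back.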
[
{
"msg_contents": "Hi,\n\n1) seems that the table is a view, I am wrong? If this is true, please \ngive a query to that table, and try to guess if there is already a bottleneck there.\n\n2) Add to the query an order by and try to find if it works better.\n\n3) If you drop the index, and no other index exists, it will always use a seqscan or other method to gather the rows. No other index is plausible to be used there? (perhaps order by indexedcolumn may help).\n\nA hint, drop that index, identify a usable index, and set enable_seqscan to off; on your session (or as a global value on the conf file)\n\nBest wishes,\nGuido\n\n> \n> I'm getting weird results for one of my queries. The actual time of this\n> index scan doesn't make any sense:\n> \n> -> Index Scan using dok_dok_fk_i on dokumendid a (cost=0.00..566.24\n> rows=184 width=8) (actual time=0.170..420806.563 rows=1 loops=1) \n> \n> dok_dok_fk_i is index on dokumendid(dok_dok_id). Currently it contains\n> mostly NULLs:\n> \n> pos1=# select dok_dok_id, count(1) from dokumendid group by dok_dok_id;\n> dok_dok_id | count\n> ------------+-------\n> | 11423\n> 8034 | 76\n> (2 rows)\n> \n> If I drop the index, seq scan + sort is used instead and everything is\n> fast again.\n> \n> The PostgreSQL version:\n> \n> pos1=# select version();\n> version\n> ------------------------------------------------------------------------\n> ------------------------------\n> PostgreSQL 7.4.5 on i386-pc-linux-gnu, compiled by GCC i386-linux-gcc\n> (GCC) 3.3.4 (Debian 1:3.3.4-9)\n> (1 row)\n> \n> The full EXPLAIN ANALYZE output:\n> \n> pos1=# explain analyze select * from v_inventuuri_vahed_kaubagrupiti;\n> \n> QUERY PLAN \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------------------------------------------\n> Subquery Scan v_inventuuri_vahed_kaubagrupiti (cost=50896.04..50896.61\n> rows=46 width=128) (actual time=437007.670..437007.817 rows=45 loops=1)\n> -> Sort (cost=50896.04..50896.15 rows=46 width=42) (actual\n> time=437007.664..437007.692 rows=45 loops=1)\n> Sort Key: (COALESCE(sum(ir.summa_kmta), 0::numeric))::raha\n> -> HashAggregate (cost=50893.85..50894.77 rows=46 width=42)\n> (actual time=437007.229..437007.488 rows=45 loops=1)\n> -> Hash Join (cost=5533.44..50807.93 rows=5728\n> width=42) (actual time=436226.533..436877.499 rows=16271 loops=1)\n> Hash Cond: (\"outer\".kau_kau_id = \"inner\".kau_id)\n> -> Merge Right Join (cost=4759.52..49858.92\n> rows=15696 width=26) (actual time=436117.333..436600.653 rows=16271\n> loops=1)\n> Merge Cond: ((\"outer\".dok_dok_id =\n> \"inner\".dok_id) AND (\"outer\".kau_kau_id = \"inner\".kau_kau_id))\n> -> Index Scan using dor_dok_kau_i on\n> dokumentide_read ar (cost=0.00..42789.44 rows=480962 width=19) (actual\n> time=0.023..7873.117 rows=205879 loops=1)\n> -> Sort (cost=4759.52..4798.76 rows=15696\n> width=19) (actual time=428381.719..428392.204 rows=16271 loops=1)\n> Sort Key: a.dok_id, ir.kau_kau_id\n> -> Merge Left Join\n> (cost=0.00..3665.65 rows=15696 width=19) (actual time=0.245..428279.595\n> rows=16258 loops=1)\n> Merge Cond: (\"outer\".dok_id =\n> \"inner\".dok_dok_id)\n> -> Nested Loop\n> (cost=0.00..3620.23 rows=15696 width=19) (actual time=0.063..7243.529\n> rows=16258 loops=1)\n> -> Index Scan using dok_pk\n> on dokumendid i (cost=0.00..3.73 rows=1 width=4) (actual\n> time=0.030..0.035 rows=1 loops=1)\n> Index Cond: (dok_id =\n> 8034)\n> Filter: (tyyp =\n> 'IN'::bpchar)\n> -> Index 
Scan using\n> dor_dok_fk_i on dokumentide_read ir (cost=0.00..3459.55 rows=15696\n> width=19) (actual time=0.023..7150.257 rows=16258 loops=1)\n> Index Cond: (8034 =\n> dok_dok_id)\n> -> Index Scan using dok_dok_fk_i\n> on dokumendid a (cost=0.00..566.24 rows=184 width=8) (actual\n> time=0.170..420806.563 rows=1 loops=1)\n> Filter: (tyyp =\n> 'IA'::bpchar)\n> -> Hash (cost=757.71..757.71 rows=6487 width=24)\n> (actual time=109.178..109.178 rows=0 loops=1)\n> -> Hash Join (cost=15.56..757.71 rows=6487\n> width=24) (actual time=1.787..85.554 rows=17752 loops=1)\n> Hash Cond: (\"outer\".kag_kag_id =\n> \"inner\".a_kag_id)\n> -> Seq Scan on kaubad k\n> (cost=0.00..588.52 rows=17752 width=8) (actual time=0.005..30.952\n> rows=17752 loops=1)\n> -> Hash (cost=15.35..15.35 rows=83\n> width=24) (actual time=1.770..1.770 rows=0 loops=1)\n> -> Hash Join (cost=5.39..15.35\n> rows=83 width=24) (actual time=0.276..1.491 rows=227 loops=1)\n> Hash Cond:\n> (\"outer\".y_kag_id = \"inner\".kag_id)\n> -> Seq Scan on\n> kaubagruppide_kaubagrupid gg (cost=0.00..7.09 rows=409 width=8) (actual\n> time=0.004..0.405 rows=409 loops=1)\n> -> Hash (cost=5.27..5.27\n> rows=46 width=20) (actual time=0.259..0.259 rows=0 loops=1)\n> -> Seq Scan on\n> kaubagrupid g (cost=0.00..5.27 rows=46 width=20) (actual\n> time=0.010..0.206 rows=46 loops=1)\n> Filter:\n> (kag_kag_id IS NULL)\n> Total runtime: 437011.532 ms\n> (33 rows)\n> \n> Tambet\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Wed, 30 Mar 2005 14:42:24 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird index scan"
}
] |
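A low-risk way to follow up on the advice above is to compare the planner's choices with sequential scans disabled for a single transaction before deciding whether to drop dok_dok_fk_i. Sketch only, using the view name from the post:

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- affects this transaction only
    EXPLAIN ANALYZE SELECT * FROM v_inventuuri_vahed_kaubagrupiti;
    ROLLBACK;
    -- Compare against the plan produced with enable_seqscan left on. If the
    -- seq scan + sort plan is the fast one, dropping or rebuilding the
    -- mostly-NULL index dok_dok_fk_i is the simpler fix.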
[
{
"msg_contents": "\nCan anyone please help me make my JOIN find the right index to use?\n\nIt seems strange to me that in the two queries listed below, the \nLEFT OUTER JOIN can find the most efficient index to use, while \nthe unadorned JOIN can not. The result is that my query is \norders of magnitude slower than it seems it should be.\n\n\n\nThe table \"tlid_smaller\" (\\d and explain analyze shown below) is a\nlarge table contining integer IDs just like the fact table of any\ntraditional star-schema warehouse.\n\nThe tables *_lookup are simply tables that map strings to IDs, with\nunique IDs associating strings to the IDs.\n\nThe table \"tlid_smaller\" has an index on (streetname_id, city_id) that\nis extremely efficient at finding the desired row. When I use a \"LEFT\nOUTER JOIN\", the optimizer happily sees that it can use this index.\nThis is shown in the first explain analyze below. However when I\nsimply do a \"JOIN\" the optimizer does not use this index and rather\ndoes a hash join comparing thousands of rows.\n\nNote that the cost estimate using the good index is much better \n(16.94 vs 29209.16 thousands of times better). Any ideas why\nthe non-outer join didn't use it?\n\n\n\n\n\n\nfli=# explain analyze\n select *\n from streetname_lookup as sl\n join city_lookup as cl on (true)\n left outer join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n where str_name='alamo' and city='san antonio' and state='TX'\n;\nfli-# fli-# fli-# fli-# fli-# fli-# QUERY PLAN \\\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..16.94 rows=1 width=74) (actual time=0.115..0.539 rows=78 loops=1)\n -> Nested Loop (cost=0.00..9.03 rows=1 width=42) (actual time=0.077..0.084 rows=1 loops=1)\n -> Index Scan using streetname_lookup__str_name on streetname_lookup sl (cost=0.00..3.01 rows=1 width=19) (actual time=0.042..0.044 rows=1 loops=1)\n Index Cond: (str_name = 'alamo'::text)\n -> Index Scan using city_lookup__name on city_lookup cl (cost=0.00..6.01 rows=1 width=23) (actual time=0.026..0.028 rows=1 loops=1)\n Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n -> Index Scan using tlid_smaller__street_city on tlid_smaller ts (cost=0.00..7.86 rows=3 width=32) (actual time=0.029..0.176 rows=78 loops=1)\n Index Cond: ((\"outer\".geo_streetname_id = ts.geo_streetname_id) AND (\"outer\".geo_city_id = ts.geo_city_id))\n Total runtime: 0.788 ms\n(9 rows)\n\n\nfli=#\nfli=# explain analyze\n select *\n from streetname_lookup as sl\n join city_lookup as cl on (true)\n join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n where str_name='alamo' and city='san antonio' and state='TX'\n;\nfli-# fli-# fli-# fli-# fli-# fli-# QUERY PLAN \\\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=6.01..29209.16 rows=1 width=74) (actual time=9.421..28.154 rows=78 loops=1)\n Hash Cond: (\"outer\".geo_city_id = \"inner\".geo_city_id)\n -> Nested Loop (cost=0.00..29202.88 rows=52 width=51) (actual time=0.064..23.296 rows=4151 loops=1)\n -> Index Scan using streetname_lookup__str_name on streetname_lookup sl (cost=0.00..3.01 rows=1 width=19) (actual time=0.025..0.032 rows=1 loops=1)\n Index Cond: (str_name = 'alamo'::text)\n 
-> Index Scan using tlid_smaller__street_zipint on tlid_smaller ts (cost=0.00..28994.70 rows=16413 width=32) (actual time=0.028..8.153 rows=4151 loops=1)\n Index Cond: (\"outer\".geo_streetname_id = ts.geo_streetname_id)\n -> Hash (cost=6.01..6.01 rows=1 width=23) (actual time=0.073..0.073 rows=0 loops=1)\n -> Index Scan using city_lookup__name on city_lookup cl (cost=0.00..6.01 rows=1 width=23) (actual time=0.065..0.067 rows=1 loops=1)\n Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n Total runtime: 28.367 ms\n(11 rows)\n\nfli=#\n\nfli=#\n\n\n\nfli=# \\d tlid_smaller\n Table \"geo.tlid_smaller\"\n Column | Type | Modifiers\n-------------------+---------+-----------\n tlid | integer |\n geo_streetname_id | integer |\n geo_streettype_id | integer |\n geo_city_id | integer |\n zipint | integer |\n tigerfile | integer |\n low | integer |\n high | integer |\nIndexes:\n \"tlid_smaller__city\" btree (geo_city_id)\n \"tlid_smaller__street_city\" btree (geo_streetname_id, geo_city_id)\n \"tlid_smaller__street_zipint\" btree (geo_streetname_id, zipint)\n \"tlid_smaller__tlid\" btree (tlid)\n\n",
"msg_date": "Wed, 30 Mar 2005 12:37:08 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Left Outer Join much faster than non-outer Join?"
},
{
"msg_contents": "\nSetting join_collapse_limit=1 improves my performance dramatically.\n\nEven on a query with only 3 tables.\n\nThis surprised me, since there are only 3 tables being joined, I would\nhave assumed that the optimizer would have done the exhaustive search\nand not used geqo stuff - and that this exhaustive search would have\nfound the good plan.\n\nAny reason it didn't? Explain analyze results shown below.\n\n\n\nOn Wed, 30 Mar 2005 [email protected] wrote:\n> \n> Can anyone please help me make my JOIN find the right index to use?\n>\n\nfli=# set join_collapse_limit=1;\nSET\nfli=# explain analyze\n select *\n from streetname_lookup as sl\n join city_lookup as cl on (true)\n join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n where str_name='alamo' and city='san antonio' and state='TX'\n;\nfli-# fli-# fli-# fli-# fli-# fli-# QUERY PLAN \\\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..16.94 rows=1 width=74) (actual time=0.116..0.528 rows=78 loops=1)\n -> Nested Loop (cost=0.00..9.03 rows=1 width=42) (actual time=0.079..0.086 rows=1 loops=1)\n -> Index Scan using streetname_lookup__str_name on streetname_lookup sl (cost=0.00..3.01 rows=1 width=19) (actual time=0.042..0.044 rows=1 loops=1)\n Index Cond: (str_name = 'alamo'::text)\n -> Index Scan using city_lookup__name on city_lookup cl (cost=0.00..6.01 rows=1 width=23) (actual time=0.026..0.028 rows=1 loops=1)\n Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n -> Index Scan using tlid_smaller__street_city on tlid_smaller ts (cost=0.00..7.86 rows=3 width=32) (actual time=0.031..0.181 rows=78 loops=1)\n Index Cond: ((\"outer\".geo_streetname_id = ts.geo_streetname_id) AND (\"outer\".geo_city_id = ts.geo_city_id))\n Total runtime: 0.709 ms\n(9 rows)\n\n\n--------[with the default join_collapse_limit]-----------\n> fli=# explain analyze\n> select *\n> from streetname_lookup as sl\n> join city_lookup as cl on (true)\n> join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n> where str_name='alamo' and city='san antonio' and state='TX'\n> ;\n> fli-# fli-# fli-# fli-# fli-# fli-# QUERY PLAN \\\n> \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=6.01..29209.16 rows=1 width=74) (actual time=9.421..28.154 rows=78 loops=1)\n> Hash Cond: (\"outer\".geo_city_id = \"inner\".geo_city_id)\n> -> Nested Loop (cost=0.00..29202.88 rows=52 width=51) (actual time=0.064..23.296 rows=4151 loops=1)\n> -> Index Scan using streetname_lookup__str_name on streetname_lookup sl (cost=0.00..3.01 rows=1 width=19) (actual time=0.025..0.032 rows=1 loops=1)\n> Index Cond: (str_name = 'alamo'::text)\n> -> Index Scan using tlid_smaller__street_zipint on tlid_smaller ts (cost=0.00..28994.70 rows=16413 width=32) (actual time=0.028..8.153 rows=4151 loops=1)\n> Index Cond: (\"outer\".geo_streetname_id = ts.geo_streetname_id)\n> -> Hash (cost=6.01..6.01 rows=1 width=23) (actual time=0.073..0.073 rows=0 loops=1)\n> -> Index Scan using city_lookup__name on city_lookup cl (cost=0.00..6.01 rows=1 width=23) (actual time=0.065..0.067 rows=1 loops=1)\n> Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n> Total runtime: 28.367 ms\n> (11 rows)\n> \n",
"msg_date": "Wed, 30 Mar 2005 12:50:23 -0800 (PST)",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join?"
},
{
"msg_contents": "[email protected] writes:\n> select *\n> from streetname_lookup as sl\n> join city_lookup as cl on (true)\n> left outer join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n> where str_name='alamo' and city='san antonio' and state='TX'\n> ;\n\nThat's a fairly odd query; why don't you have any join condition between\nstreetname_lookup and city_lookup?\n\nThe planner won't consider Cartesian joins unless forced to, which is\nwhy it fails to consider the join order \"((sl join cl) join ts)\" unless\nyou have an outer join in the mix. I think that's generally a good\nheuristic, and am disinclined to remove it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Mar 2005 23:07:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join? "
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] writes:\n> \n>> select *\n>> from streetname_lookup as sl\n>> join city_lookup as cl on (true)\n>> left outer join tlid_smaller as ts on (sl.geo_streetname_id = ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n>> where str_name='alamo' and city='san antonio' and state='TX'\n>>;\n> \n> That's a fairly odd query; \n\nI think it's a very common type of query in data warehousing.\n\nIt's reasonably typical of a traditional star schema where\n\"streetname_lookup\" and \"city_lookup\" are dimension tables\nand \"tlid_smaller\" is the central fact table.\n\n> why don't you have any join condition between\n> streetname_lookup and city_lookup?\n\nThose two tables shared no data. They merely get the \"id\"s\nfor looking things up in the much larger central table.\n\nUnique indexes on the city_lookup and street_lookup make the\ncartesian join harmless (they each return only 1 value); and\nthe huge fact table has a multi-column index that takes both\nof the ids from those lookups.\n\n\nWith the tables I have (shown below), how else could one\nefficiently fetch the data for \"Main St\" \"San Francisco\"?\n\n streetname_lookup\n (for every street name used in the country)\n streetid | name | type\n ----------+--------+------\n 1 | Main | St\n 2 | 1st | St\n\n city_lookup\n (for every city name used in the country)\n cityid | name | state\n --------+---------+------\n 1 | Boston | MA\n 2 | Alameda| CA\n\n\n tlid_smaller\n (containing a record for every city block in the country)\n city_id | street_id | addresses | demographics, etc.\n --------+------------+-----------+----------------------\n 1 | 1 | 100 block | [lots of columns]\n 1 | 1 | 200 block | [lots of columns]\n 1 | 1 | 300 block | [lots of columns]\n 1 | 2 | 100 block | [lots of columns]\n 1 | 2 | 100 block | [lots of columns]\n\n> The planner won't consider Cartesian joins unless forced to, which is\n> why it fails to consider the join order \"((sl join cl) join ts)\" unless\n> you have an outer join in the mix. I think that's generally a good\n> heuristic, and am disinclined to remove it ...\n\nIMHO it's a shame it doesn't even consider it when the estimated\nresults are very small. I think often joins that merely look up\nIDs would be useful to consider for the purpose of making potential\nmulti-column indexes (as shown in the previous email's explain\nanalyze result where the cartesian join was 30X faster than the\nother approach since it could use the multi-column index on the\nvery large table).\n\n Ron\n",
"msg_date": "Wed, 30 Mar 2005 23:04:53 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join?"
},
{
"msg_contents": "Ron Mayer wrote:\n> Tom Lane wrote:\n>> [email protected] writes:\n>>> select *\n>>> from streetname_lookup as sl\n>>> join city_lookup as cl on (true)\n>>> left outer join tlid_smaller as ts on (sl.geo_streetname_id = \n>>> ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n>>> where str_name='alamo' and city='san antonio' and state='TX'\n>>> ;\n>> That's a fairly odd query; \n> \n> \n> I think it's a very common type of query in data warehousing.\n> \n> It's reasonably typical of a traditional star schema where\n> \"streetname_lookup\" and \"city_lookup\" are dimension tables\n> and \"tlid_smaller\" is the central fact table.\n\nAlthough looking again I must admit the query was\nwritten unconventionally. Perhaps those queries are\nremnants dating back to a version when you could\nforce join orders this way?\n\nPerhaps a more common way of writing it would have been:\n\n select * from tlid_smaller\n where geo_streetname_id in (select geo_streetname_id from streetname_lookup where str_name='$str_name')\n and geo_city_id in (select geo_city_id from city_lookup where city='$city' and state='$state');\n\nHowever this query also fails to use the multi-column\nindex on (geo_streetname_id,geo_city_id). Explain\nanalyze shown below.\n\n\nIn cases where I can be sure only one result will come\nfrom each of the lookup queries I guess I can do this:\n\n select * from tlid_smaller\n where geo_streetname_id = (select geo_streetname_id from streetname_lookup where str_name='$str_name')\n and geo_city_id = (select geo_city_id from city_lookup where city='$city' and state='$state');\n\nwhich has the nicest plan of them all (explain analyze\nalso shown below).\n\n > With the tables I have (shown below), how else could one\n > efficiently fetch the data for \"Main St\" \"San Francisco\"?\n\nI guess I just answered that question myself. 
Where possible,\nI'll write my queries this way.\n\n Thanks,\n Ron\n\n\nfli=# fli=# explain analyze select * from tlid_smaller\n where geo_streetname_id in (select geo_streetname_id from streetname_lookup where str_name='alamo')\n and geo_city_id in (select geo_city_id from city_lookup where city='san antonio' and state='TX');\nfli-# fli-# QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=9.03..29209.16 rows=1 width=32) (actual time=76.576..96.605 rows=78 loops=1)\n Hash Cond: (\"outer\".geo_city_id = \"inner\".geo_city_id)\n -> Nested Loop (cost=3.02..29202.88 rows=52 width=32) (actual time=65.877..91.789 rows=4151 loops=1)\n -> HashAggregate (cost=3.02..3.02 rows=1 width=4) (actual time=0.039..0.042 rows=1 loops=1)\n -> Index Scan using streetname_lookup__str_name on streetname_lookup (cost=0.00..3.01 rows=1 width=4) (actual time=0.025..0.028 rows=1 loops=1)\n Index Cond: (str_name = 'alamo'::text)\n -> Index Scan using tlid_smaller__street_zipint on tlid_smaller (cost=0.00..28994.70 rows=16413 width=32) (actual time=65.820..81.309 rows=4151 loops=1)\n Index Cond: (tlid_smaller.geo_streetname_id = \"outer\".geo_streetname_id)\n -> Hash (cost=6.01..6.01 rows=1 width=4) (actual time=0.054..0.054 rows=0 loops=1)\n -> Index Scan using city_lookup__name on city_lookup (cost=0.00..6.01 rows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n Total runtime: 97.577 ms\n(12 rows)\n\nfli=#\n\nfli=# explain analyze select * from tlid_smaller\n where geo_streetname_id = (select geo_streetname_id from streetname_lookup where str_name='alamo')\n and geo_city_id = (select geo_city_id from city_lookup where city='san antonio' and state='TX');\n\nfli-# fli-# QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tlid_smaller__street_city on tlid_smaller (cost=9.02..16.88 rows=3 width=32) (actual time=0.115..0.255 rows=78 loops=1)\n Index Cond: ((geo_streetname_id = $0) AND (geo_city_id = $1))\n InitPlan\n -> Index Scan using streetname_lookup__str_name on streetname_lookup (cost=0.00..3.01 rows=1 width=4) (actual time=0.044..0.047 rows=1 loops=1)\n Index Cond: (str_name = 'alamo'::text)\n -> Index Scan using city_lookup__name on city_lookup (cost=0.00..6.01 rows=1 width=4) (actual time=0.028..0.030 rows=1 loops=1)\n Index Cond: ((city = 'san antonio'::text) AND (state = 'TX'::text))\n Total runtime: 0.474 ms\n(8 rows)\n",
"msg_date": "Thu, 31 Mar 2005 00:15:55 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join?"
},
{
"msg_contents": "> > [email protected] writes:\n> streetname_lookup\n> (for every street name used in the country)\n> streetid | name | type\n> ----------+--------+------\n> 1 | Main | St\n> 2 | 1st | St\n>\nAfa I'm concerned, I would add the column \"city_id\" since 2 different\nstreets in 2 different cities may have the same name.\n\nAmicalement\n\nPatrick\n\n",
"msg_date": "Thu, 31 Mar 2005 10:28:11 +0200",
"msg_from": "\"Patrick Vedrines\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join?"
},
{
"msg_contents": "On Thu, 2005-03-31 at 00:15 -0800, Ron Mayer wrote:\n> Ron Mayer wrote:\n> > Tom Lane wrote:\n> >> [email protected] writes:\n> >>> select *\n> >>> from streetname_lookup as sl\n> >>> join city_lookup as cl on (true)\n> >>> left outer join tlid_smaller as ts on (sl.geo_streetname_id = \n> >>> ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)\n> >>> where str_name='alamo' and city='san antonio' and state='TX'\n> >>> ;\n> >> That's a fairly odd query; \n> > \n> > \n> > I think it's a very common type of query in data warehousing.\n> > \n> > It's reasonably typical of a traditional star schema where\n> > \"streetname_lookup\" and \"city_lookup\" are dimension tables\n> > and \"tlid_smaller\" is the central fact table.\n> \n\nYes, agreed.\n\n> Although looking again I must admit the query was\n> written unconventionally. Perhaps those queries are\n> remnants dating back to a version when you could\n> force join orders this way?\n> \n> Perhaps a more common way of writing it would have been:\n> \n> select * from tlid_smaller\n> where geo_streetname_id in (select geo_streetname_id from streetname_lookup where str_name='$str_name')\n> and geo_city_id in (select geo_city_id from city_lookup where city='$city' and state='$state');\n> \n> However this query also fails to use the multi-column\n> index on (geo_streetname_id,geo_city_id). Explain\n> analyze shown below.\n\n...which is my understanding too.\n\n> In cases where I can be sure only one result will come\n> from each of the lookup queries I guess I can do this:\n> \n> select * from tlid_smaller\n> where geo_streetname_id = (select geo_streetname_id from streetname_lookup where str_name='$str_name')\n> and geo_city_id = (select geo_city_id from city_lookup where city='$city' and state='$state');\n> \n> which has the nicest plan of them all (explain analyze\n> also shown below).\n\nWhich is not the case for the generalised star join.\n\nThe general case query here is:\n\tSELECT (whatever)\n\tFROM FACT, DIMENSION1 D1, DIMENSION2 D2, DIMENSION3 D3etc..\n\tWHERE\n\t\tFACT.dimension1_pk = D1.dimension1_pk\n\tAND\tFACT.dimension2_pk = D2.dimension2_pk\n\tAND \tFACT.dimension3_pk = D3.dimension3_pk\n\tAND\tD1.dimdescription = 'X'\n\tAND\tD2.dimdescription = 'Y'\n\tAND\tD3.dimdescription = 'Z'\n\t...\nwith FACT PK=(dimension1_pk, dimension2_pk, dimension3_pk)\n\nwith a more specific example of\n\tSELECT sum(item_price)\n\tFROM Sales, Store, Item, TTime\n\tWHERE\n\t\tSales.store_pk = Store.store_pk\n\tAND\tStore.region = 'UK'\n\tAND\tSales.item_pk = Item.item_pk\n\tAND\tItem.category = 'Cameras'\n\tAND\tSales.time_pk = TTime.time_pk\n\tAND\tTTime.month = 3\n\tAND\tTTime.year = 2005\n\nA very good plan for solving this, under specific conditions is...\n\tCartesianProduct(Store, Item, TTime) -> Sales.PK\n\nwhich accesses the largest table only once.\n\nAs Tom says, the current optimizer won't go near that plan, for good\nreason, without specifically tweaking collapse limits. 
I know full well\nthat any changes in that direction will need to be strong because that\nexecution plan is very sensitive to even minor changes in data\ndistribution.\n\nThe plan requires some fairly extensive checking to be put into place.\nThe selectivity of requests against the smaller tables needs to be very\nwell known, so that the upper bound estimate of cardinality of the\ncartesian product is feasible AND still low enough to use the index on\nSales.\n\nThis is probably going to need information to be captured on multi-\ncolumn index selectivity, to ensure that last part.\n\nIt is likely that the statistics targets on the dimension tables would\nneed to be high enough to identify MFVs or at least reduce the upper\nbound of selectivity. It also requires the table sizes to be\nexamined, to ensure this type of plan isn't considered pointlessly.\nSome other systems that support this join type turn off checking for it\nby default. We could do the same with enable_starjoin = off.\n\nAnyway, seems like a fair amount of work there... yes?\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Thu, 31 Mar 2005 19:01:13 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Left Outer Join much faster than non-outer Join?"
}
] |
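For readers who want to reproduce the join_collapse_limit workaround from this thread without changing the setting for a whole session or server, it can be scoped to one transaction. Sketch only, with the table names from the thread:

    BEGIN;
    SET LOCAL join_collapse_limit = 1;   -- keep the written join order for this query
    SELECT *
      FROM streetname_lookup AS sl
      JOIN city_lookup AS cl ON (true)
      JOIN tlid_smaller AS ts
           ON (sl.geo_streetname_id = ts.geo_streetname_id
               AND cl.geo_city_id = ts.geo_city_id)
     WHERE str_name = 'alamo' AND city = 'san antonio' AND state = 'TX';
    COMMIT;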
[
{
"msg_contents": "I can see that PG'ers have a wicked sense of humor. \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Steve Wampler\nSent: Wednesday, March 30, 2005 3:58 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Reading recommendations\n\n\[email protected] wrote:\n\n>>Mohan, Ross wrote:\n>>\n>>>VOIP over BitTorrent?\n>>\n>>Now *that* I want to see. Aught to be at least as interesting as the \n>>\"TCP/IP over carrier pigeon\" experiment - and more challenging to \n>>boot!\n>>\n> \n> \n> It was very challenging. I worked on the credit window sizing and \n> retransmission timer estimation algorithms. We took into account \n> weather patterns, size and age of the bird, feeding times, and the \n> average number of times a bird circles before determining magnetic \n> north. Interestingly, packet size had little effect in the final \n> algorithms.\n> \n> I would love to share them with all of you, but they're classified.\n\nAh, but VOIPOBT requires many people all saying the same thing at the same time. The synchronization alone (since you need to distribute these people adequately to avoid overloading a trunk line...) is probably sufficiently hard to make it interesting. Then there are the problems of different accents, dilects, and languages ;)\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Wed, 30 Mar 2005 23:09:59 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reading recommendations"
}
] |
[
{
"msg_contents": "Hardware: relatively modern Intel CPU, OS and database each on its own\nIDE hard-drive (separate IDE cables). Enough memory, i think, but i\ncan't add too much (not beyond 1GB).\nSoftware: Linux-2.6, pgsql-8.0.1\n\nFunction: Essentially a logging server. There are two applications (like\nsyslog) on the same box that are logging to pgsql, each one to its own\ndatabase. There are a few tables in one DB, and exactly one table in the\nother.\nMost of the time, the apps are just doing mindless INSERTs to the DB.\nEvery now and then, an admin performs some SELECTs via a PHP interface.\n\nObjective: Make the DB as fast as possible. Of course i'd like the\nSELECTs to be fast, but the INSERTs take precedence. It's gotta be able\nto swallow as many messages per second as possible given the hardware.\n\nQuestion: What are the pgsql parameters that need to be tweaked? What\nare the guidelines for such a situation?\n\n-- \nFlorin Andrei\n\nhttp://florin.myip.org/\n\n",
"msg_date": "Wed, 30 Mar 2005 17:50:09 -0800",
"msg_from": "Florin Andrei <[email protected]>",
"msg_from_op": true,
"msg_subject": "fine tuning for logging server"
},
{
"msg_contents": "Florin Andrei wrote:\n\n>Hardware: relatively modern Intel CPU, OS and database each on its own\n>IDE hard-drive (separate IDE cables). Enough memory, i think, but i\n>can't add too much (not beyond 1GB).\n>Software: Linux-2.6, pgsql-8.0.1\n>\n>Function: Essentially a logging server. There are two applications (like\n>syslog) on the same box that are logging to pgsql, each one to its own\n>database. There are a few tables in one DB, and exactly one table in the\n>other.\n>Most of the time, the apps are just doing mindless INSERTs to the DB.\n>Every now and then, an admin performs some SELECTs via a PHP interface.\n>\n>Objective: Make the DB as fast as possible. Of course i'd like the\n>SELECTs to be fast, but the INSERTs take precedence. It's gotta be able\n>to swallow as many messages per second as possible given the hardware.\n>\n>Question: What are the pgsql parameters that need to be tweaked? What\n>are the guidelines for such a situation?\n>\n>\n>\nPut pg_xlog onto the same drive as the OS, not the drive with the database.\n\nDo as many inserts per transaction that you can get away with.\n100-1000 is pretty good.\n\nKeep the number of indexes and foreign key references low to keep\nINSERTS fast.\n\nKeep a few indexes around to keep SELECTs reasonable speedy.\n\nIf you are doing lots and lots of logging, need only archival and slow\naccess for old data, but fast access on new data, consider partitioning\nyour table, and then using a view to join them back together.\n\nIf you are only having a couple processing accessing the db at any given\ntime, you can probably increase work_mem and maintenance_work_mem a bit.\nIf you have 1G ram, maybe around 50M for work_mem. But really this is\nonly if you have 1-3 selects going on at a time.\n\nWith 2 disks, and fixed hardware, it's a lot more about configuring your\nschema and the application. If you want more performance, adding more\ndisks is probably the first thing to do.\n\nJohn\n=:->",
"msg_date": "Wed, 30 Mar 2005 19:59:30 -0600",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "On Wed, 2005-03-30 at 17:50 -0800, Florin Andrei wrote:\n\n> Function: Essentially a logging server. There are two applications (like\n> syslog) on the same box that are logging to pgsql, each one to its own\n> database. There are a few tables in one DB, and exactly one table in the\n> other.\n> Most of the time, the apps are just doing mindless INSERTs to the DB.\n> Every now and then, an admin performs some SELECTs via a PHP interface.\n\nFor performance reasons, i was thinking to keep the tables append-only,\nand simply rotate them out every so often (daily?) and delete those\ntables that are too old. Is that a good idea?\n\n-- \nFlorin Andrei\n\nhttp://florin.myip.org/\n\n",
"msg_date": "Wed, 30 Mar 2005 18:02:54 -0800",
"msg_from": "Florin Andrei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "Florin Andrei wrote:\n\n>On Wed, 2005-03-30 at 17:50 -0800, Florin Andrei wrote:\n>\n>\n>\n>>Function: Essentially a logging server. There are two applications (like\n>>syslog) on the same box that are logging to pgsql, each one to its own\n>>database. There are a few tables in one DB, and exactly one table in the\n>>other.\n>>Most of the time, the apps are just doing mindless INSERTs to the DB.\n>>Every now and then, an admin performs some SELECTs via a PHP interface.\n>>\n>>\n>\n>For performance reasons, i was thinking to keep the tables append-only,\n>and simply rotate them out every so often (daily?) and delete those\n>tables that are too old. Is that a good idea?\n>\n>\n>\nIf you aren't doing updates, then I'm pretty sure the data stays packed\npretty well. I don't know that you need daily rotations, but you\ncertainly could consider some sort of rotation schedule.\n\nThe biggest performance improvement, though, is probably to group\ninserts into transactions.\nI had an application (in a different db, but it should be relevant),\nwhere using a transaction changed the time from 6min -> 6 sec.\nIt was just thrashing on all the little inserts that it had to fsync to\ndisk.\n\nHow fast is fast? How many log messages are you expecting? 1/s 100/s 1000/s?\nI think the hardware should be capable of the 10-100 range if things are\nproperly configured. Naturally that depends on all sorts of factors, but\nit should give you an idea.\n\nJohn\n=:->",
"msg_date": "Wed, 30 Mar 2005 20:11:59 -0600",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "On Wed, 2005-03-30 at 19:59 -0600, John Arbash Meinel wrote:\n\n> Put pg_xlog onto the same drive as the OS, not the drive with the database.\n\nI forgot to mention: the OS drive is purposefully made very slow - the\nwrite cache is turned off and the FS is Ext3 with data=journal. Is then\nstill ok to put pg_xlog on it?\n\nThe reason: if the power cord is yanked, the OS _must_ boot back up in\ngood condition. If the DB is corrupted, whatever, nuke it then re-\ninitialize it. But the OS must survive act-of-god events.\n\nNo, there is no uninterruptible power supply. It sucks, but that's how\nit is. I cannot change that.\n\n-- \nFlorin Andrei\n\nhttp://florin.myip.org/\n\n",
"msg_date": "Wed, 30 Mar 2005 18:24:38 -0800",
"msg_from": "Florin Andrei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "On Wed, 2005-03-30 at 20:11 -0600, John Arbash Meinel wrote:\n> Florin Andrei wrote:\n> >\n> >For performance reasons, i was thinking to keep the tables append-only,\n> >and simply rotate them out every so often (daily?) and delete those\n> >tables that are too old. Is that a good idea?\n> >\n> If you aren't doing updates, then I'm pretty sure the data stays packed\n> pretty well. I don't know that you need daily rotations, but you\n> certainly could consider some sort of rotation schedule.\n\n(sorry for re-asking, i'm coming from a mysql mindset and i still have a\nlot to learn about pgsql)\n\nSo, it is indeed a bad idea to delete rows from tables, right? Better\njust rotate to preserve the performance.\n\nDaily rotation may simplify the application logic - then i'll know that\neach table is one day's worth of data.\n\n> The biggest performance improvement, though, is probably to group\n> inserts into transactions.\n\nYes, i know that. I have little control over the apps, though. I'll see\nwhat i can do.\n\n> How fast is fast? How many log messages are you expecting? 1/s 100/s 1000/s?\n\nMore is better. <shrug>\nI guess i'll put it together and give it a spin and see just how far it\ngoes.\n\nI actually have some controls over the data that's being sent (in some\nplaces i can limit the number of events/second), so that might save me\nright there.\n\n-- \nFlorin Andrei\n\nhttp://florin.myip.org/\n\n",
"msg_date": "Wed, 30 Mar 2005 18:30:12 -0800",
"msg_from": "Florin Andrei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "Florin Andrei wrote:\n\n>On Wed, 2005-03-30 at 19:59 -0600, John Arbash Meinel wrote:\n>\n>\n>\n>>Put pg_xlog onto the same drive as the OS, not the drive with the database.\n>>\n>>\n>\n>I forgot to mention: the OS drive is purposefully made very slow - the\n>write cache is turned off and the FS is Ext3 with data=journal. Is then\n>still ok to put pg_xlog on it?\n>\n>The reason: if the power cord is yanked, the OS _must_ boot back up in\n>good condition. If the DB is corrupted, whatever, nuke it then re-\n>initialize it. But the OS must survive act-of-god events.\n>\n>No, there is no uninterruptible power supply. It sucks, but that's how\n>it is. I cannot change that.\n>\n>\n>\nYou don't want write cache for pg_xlog either. And you could always\ncreate a second partition that used reiserfs, or something like that.\n\nIf you have to survine \"act-of-god\" you probably should consider making\nthe system into a RAID1 instead of 2 separate drives (software RAID\nshould be fine).\n\n'Cause a much worse act-of-god is having a drive crash. No matter what\nyou do in software, a failed platter will prevent you from booting. RAID\n1 at least means 2 drives have to die.\n\nIf you need insert speed, and can't do custom transactions at the\napplication side, you could try creating a RAM disk for the insert\ntable, and then create a cron job that bulk pulls it out of that table\nand inserts it into the rest of the system. That should let you get a\nsuper-fast insert speed, and the bulk copies should stay reasonably fast.\n\nJust realize that if your cron job stops running, your machine will\nslowly eat up all of it's ram, and really not play nice. I think adding\nan extra hard-drive is probably the best way to boost performance and\nreliability, but if you have a $0 budget, this is a possibility.\n\nJohn\n=:->",
"msg_date": "Wed, 30 Mar 2005 20:34:39 -0600",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "Florin Andrei wrote:\n\n>On Wed, 2005-03-30 at 20:11 -0600, John Arbash Meinel wrote:\n>\n>\n>>Florin Andrei wrote:\n>>\n>>\n>>>For performance reasons, i was thinking to keep the tables append-only,\n>>>and simply rotate them out every so often (daily?) and delete those\n>>>tables that are too old. Is that a good idea?\n>>>\n>>>\n>>>\n>>If you aren't doing updates, then I'm pretty sure the data stays packed\n>>pretty well. I don't know that you need daily rotations, but you\n>>certainly could consider some sort of rotation schedule.\n>>\n>>\n>\n>(sorry for re-asking, i'm coming from a mysql mindset and i still have a\n>lot to learn about pgsql)\n>\n>So, it is indeed a bad idea to delete rows from tables, right? Better\n>just rotate to preserve the performance.\n>\n>\nThe only problems are if you get a lot of old tuples in places you don't\nwant them. If you are always appending new values that are increasing,\nand you are deleting from the other side, I think vacuum will do a fine\njob at cleaning up. It's deleting/updating every 3rd entry that starts\nto cause holes (though probably vacuum still does a pretty good job).\n\n>Daily rotation may simplify the application logic - then i'll know that\n>each table is one day's worth of data.\n>\n>\n>\nI don't think it is necessary, but if you like it, go for it. I would\ntend to think that you would want a \"today\" table, and a \"everything\nelse\" table, as it simplifies your queries, and lets you have foreign\nkeys (though if you are from mysql, you may not be used to using them.)\n\n>>The biggest performance improvement, though, is probably to group\n>>inserts into transactions.\n>>\n>>\n>\n>Yes, i know that. I have little control over the apps, though. I'll see\n>what i can do.\n>\n>\nYou could always add a layer inbetween. Or look at my mention of a fast\ntemp table, with a periodic cron job to pull in the new data. You can\nrun cron as fast as 1/min which might be just right depending on your needs.\nIt also means that you could ignore foreign keys and indexes on the temp\ntable, and only evaluate them on the main table.\n\n>\n>\n>>How fast is fast? How many log messages are you expecting? 1/s 100/s 1000/s?\n>>\n>>\n>\n>More is better. <shrug>\n>I guess i'll put it together and give it a spin and see just how far it\n>goes.\n>\n>I actually have some controls over the data that's being sent (in some\n>places i can limit the number of events/second), so that might save me\n>right there.\n>\n>\n>\nGood luck. And remember, tuning your queries can be just as important.\n(Though if you are doing append only inserts, there probably isn't much\nthat you can do).\n\nIf all you are doing is append only logging, the fastest thing is\nprobably just a flat file. You could have something that comes along\nlater to move it into the database. It doesn't really sound like you are\nusing any features a database provides. (normalization, foreign keys,\nindexes, etc.)\n\nJohn\n=:->",
"msg_date": "Wed, 30 Mar 2005 20:41:43 -0600",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "\n> The reason: if the power cord is yanked, the OS _must_ boot back up in\n> good condition. If the DB is corrupted, whatever, nuke it then re-\n> initialize it. But the OS must survive act-of-god events.\n\n\tWell, in that case :\n\t- Use reiserfs3 for your disks\n\t- Use MySQL with MyISAM tables\n",
"msg_date": "Thu, 31 Mar 2005 11:01:01 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
},
{
"msg_contents": "On Wed, Mar 30, 2005 at 08:41:43PM -0600, John Arbash Meinel wrote:\n> If all you are doing is append only logging, the fastest thing is\n> probably just a flat file. You could have something that comes along\n> later to move it into the database. It doesn't really sound like you are\n> using any features a database provides. (normalization, foreign keys,\n> indexes, etc.)\n\nHere's two ideas that I don't think have been mentioned yet: Use copy\nto bulk load the data instead of individual imports. And if you get\ndesperate, you can run pg with fsync=false since you don't seem to\ncare about re-initializing your whole database in the case of\nunexpected interruption. \n\n -Mike\n",
"msg_date": "Thu, 31 Mar 2005 10:01:37 -0500",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fine tuning for logging server"
}
] |
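The two cheapest wins suggested in this thread, batching inserts into explicit transactions and using COPY for bulk loads, look roughly like this. The table, columns and file path are made up for illustration:

    -- Batching: one transaction per few hundred log lines instead of one per line
    BEGIN;
    INSERT INTO log_events (logged_at, source, message) VALUES (now(), 'app1', '...');
    -- ... repeat for the rest of the batch ...
    COMMIT;

    -- Bulk load a batch file written by the logger; COPY is much faster than
    -- row-by-row INSERTs (CSV mode is available from 8.0 on)
    COPY log_events (logged_at, source, message)
        FROM '/var/spool/applog/batch.csv' WITH CSV;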
[
{
"msg_contents": "I've tested several keyword count from 2 millions record book\ndescription table that indexed with tseach2 indexing.\nThe result is always slow for first query attempt.\n\nThis my sample query:\n-- test one phrase --\nSELECT count(*) from table1 \nWHEREsearchvector @@ to_tsquery('default' ,'david') limit 100\n:: returns 16824 records match.\n:: take 49618.341 ms (1st attempt)\n:: take 504.229 ms (2nd attempt)\n\n-- test two phrase --\nSELECT count(*) from table1\nWHERE searchvector @@ to_tsquery('default' ,'martha&stewart') limit 100\n:: returns 155 records match.\n:: take 686.669 ms (1st attempt)\n:: take 40.282 ms (2nd attempt)\n\nI use ordinary aggregate function count(*), Is there other way to count faster?\n",
"msg_date": "Thu, 31 Mar 2005 12:52:48 -0600",
"msg_from": "Yudie Pg <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to speed up word count with tsearch2?"
}
] |
[
{
"msg_contents": "Hi all,\n\n I have a table with a little over 200,000 columns in it that I need \nto update using a regular expression. I admit that though I am not a \nbeginner and postgres, I am also far from an expert. :p\n\n I tried to create an Index that would optimize the UPDATE but I may \nhave made an error in how I built it. Here is the table structure, the \nindex I tried to create and an 'EXPLAIN ANALYZE' of the UPDATE (though I \n am still just now learning how to use 'EXPLAIN').\n\ntle-bu=> \\d file_info_3\n Table \"public.file_info_3\"\n Column | Type | Modifiers\n-----------------+----------------------+-----------------------------------------\n file_group_name | text | not null\n file_group_uid | bigint | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | text | not null\n file_size | bigint | not null\n file_type | character varying(2) | not null default \n'f'::character varying\n file_user_name | text | not null\n file_user_uid | bigint | not null\n file_backup | boolean | not null default true\n file_display | boolean | not null default false\n file_restore | boolean | not null default false\nIndexes:\n \"file_info_3_display_idx\" btree (file_type, file_parent_dir, file_name)\n\n Here is the EXPLAIN:\n\ntle-bu=> EXPLAIN ANALYZE UPDATE file_info_3 SET file_backup='f' WHERE \nfile_parent_dir~'^/home' OR (file_parent_dir='/' AND file_name='home');\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Seq Scan on file_info_3 (cost=0.00..7770.00 rows=1006 width=206) \n(actual time=1050.813..5648.462 rows=67835 loops=1)\n Filter: ((file_parent_dir ~ '^/home'::text) OR ((file_parent_dir = \n'/'::text) AND (file_name = 'home'::text)))\n Total runtime: 68498.898 ms\n(3 rows)\n\n I thought that it would have used the index because 'file_parent_dir' \nand 'file_name' are in the index but is I am reading the \"EXPLAIN\" \noutput right it isn't but is instead doing a sequencial scan. If that is \nthe case, how would I best built the index? Should I have just used the \n'file_parent_dir' and 'file_name'?\n\n Thanks all!!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n",
"msg_date": "Thu, 31 Mar 2005 14:04:01 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very long time to execute and Update, suggestions?"
},
{
"msg_contents": "Philip Hallstrom wrote:\n> I'm not sure about this which is why I'm replying off list, but your \n> index is on file_type, file_parent_dir, and file_name and you're query \n> is on file_parent_dir and file_name.\n> \n> I seem to remember reading that that the index will only get used if the \n> columns in the where clause \"match up\" \"in order\".\n> \n> That is um... if you have an index on columns a and b and a where clause \n> of \"b = 1\" it woin't use the index since the index \"looks like\"\n> \n> a, b\n> a, b\n> a, b\n> etc...\n> \n> Does that make any sense? Not sure if that's right or not, but easy \n> enough to remove the \"file_type\" from your index and try it.\n> \n> post back to the list if that's it.\n> \n> -philip\n\nThanks for the reply!\n\n I have played around a little more and have created a few different \ntest Indexes and it looks like it is the regex that is causing it to do \nthe sequential scan. If I remove the regex and create a \n'file_parent_dir', 'file_name' index it will use it. If I create an \nIndex just for 'file_parent_dir' and change my UPDATE to just look for \nthe regex '... WHERE file_parent_dir~'^/<dir>'...' it will still do the \nsequential scan anyway.\n\n So I need to either find an Index that will work with regexes or \nre-write my code to update each subdirectory separately and use simpler \nUPDATE statement for each.\n\n Thanks again!\n\nMadison\n\nPS - I cc'ed the list to follow up on what I found out so far. (Hi list!)\n\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n",
"msg_date": "Thu, 31 Mar 2005 14:51:51 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long time to execute and Update, suggestions?"
},
{
"msg_contents": "\n> So I need to either find an Index that will work with regexes or \n> re-write my code to update each subdirectory separately and use simpler \n> UPDATE statement for each.\n\n\tWhy don't you use a LTREE type to model your directory tree ? It's been \ndesigned specifically for this purpose and has indexed regular expression \nsearch.\n\nhttp://www.sai.msu.su/~megera/postgres/gist/ltree/\nhttp://www.sai.msu.su/~megera/postgres/gist/\n",
"msg_date": "Thu, 31 Mar 2005 23:28:03 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long time to execute and Update, suggestions?"
}
] |
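Besides the ltree suggestion above, a left-anchored pattern such as '^/home' can usually be served by an ordinary btree index if it is built with the text_pattern_ops operator class (or if the database runs in the C locale). A hedged sketch against the table from the post; whether the OR'd form of the UPDATE picks the index up depends on the planner and version, so each arm is worth testing separately:

    -- Lets the planner turn file_parent_dir ~ '^/home' into an index range scan
    CREATE INDEX file_info_3_parent_dir_idx
        ON file_info_3 (file_parent_dir text_pattern_ops);
    ANALYZE file_info_3;
    EXPLAIN ANALYZE
    UPDATE file_info_3 SET file_backup = 'f'
     WHERE file_parent_dir ~ '^/home'
        OR (file_parent_dir = '/' AND file_name = 'home');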
[
{
"msg_contents": "Hi,\n\n We are migrating to a new server with more memory and also from \npostgres 7.4 to postgres 8.0.1 version.\n\nHere are my settings on the current 7.4 version:\nOS : RedHat 9\nCPUS: 2 hyperthreaded\nMemory: 4gig\nshared_buffers: 65536\nsort_mem: 16384\nvacuum_mem: 32768\nwal_buffers: 64\neffective_cache_size: 393216\ncheckpoint_segments: 3\ncheckpoint_timeout: 300\ncheckpoint_warning: 30\n\nThese are settings which I am planning on the new machine with 8.0.1 \nversion:\nOS: Fedora Core 2\nCPUs: 2 hyperthreaded\nMemory: 8 gig\nshared_buffers: 131072\nwork_mem: 32768\nmaintanence_work_mem: 65536\nwal_buffers: 64\neffective_cache_size: 786432\ncheckpoint_segments: 8\ncheckpoint_timeout: 600\ncheckpoint_warning: 30\n\n The current settings on my 7.4 version gives me very good \nperformance, so I basically doubled the settings since i will be having \nthe double the memory in the new machine. What my concern is about the \neffective_cache_settings , according the docs its recommends me to set \nmax to about 2/3 of the total memory and I went little over on top of \nit, is that ok ? I went little over on my current 7.4 system too, and \nits giving me very good performance so I used the same calculation for \nmy new system too.\n Also, can anyone guide me with the ideal settings for \nvacuum_cost_delay, vacuum_cost_page_hit, vacuum_cost_page_miss, \nvacuum_cost_page_dirty, vacuum_cost_limit, background_delay, \nbgwriter_percent, bgwriter_maxpages settings. I am not sure what \nsettings should I make to these parameters , are there any ideal \nsettings for these parameters in a OLTP environment ?\n\nThanks!\nPallav\n\n",
"msg_date": "Thu, 31 Mar 2005 14:07:18 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql.conf setting recommendations for 8.0.1"
}
] |
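No reply to this post appears in the archive. As a non-authoritative starting point only: the cost-based vacuum and background-writer settings asked about do exist in 8.0 (the delay parameter is spelled bgwriter_delay), and a postgresql.conf sketch with illustrative values, not tuned recommendations, looks like this:

    # Cost-based vacuum delay: throttle VACUUM so it does not swamp OLTP I/O
    vacuum_cost_delay = 10             # milliseconds; 0 leaves throttling disabled
    vacuum_cost_limit = 200
    # Background writer: trickle dirty buffers to disk between checkpoints
    bgwriter_delay = 200               # milliseconds between rounds
    bgwriter_percent = 1               # at most this % of dirty buffers written per round
    bgwriter_maxpages = 100            # upper bound on buffers written per round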
[
{
"msg_contents": "(It is the 2nd posting, maybe the 1st one didn't goes thru)\nI've tested several keyword count from 2 millions record book\ndescription table that indexed with tseach2 indexing.\nThe result is always slow for first query attempt.\n\nThis my sample query:\n-- test one phrase --\nSELECT count(*) from table1\nWHEREsearchvector @@ to_tsquery('default' ,'david') limit 100\n:: returns 16824 records match.\n:: take 49618.341 ms (1st attempt)\n:: take 504.229 ms (2nd attempt)\n\n-- test two phrase --\nSELECT count(*) from table1\nWHERE searchvector @@ to_tsquery('default' ,'martha&stewart') limit 100\n:: returns 155 records match.\n:: take 686.669 ms (1st attempt)\n:: take 40.282 ms (2nd attempt)\n\nI use ordinary aggregate function count(*), Is there other way to count faster?\n",
"msg_date": "Thu, 31 Mar 2005 13:58:30 -0600",
"msg_from": "Yudie Pg <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to speed up word count in tsearch2?"
},
{
"msg_contents": "Yudie,\n\n> (It is the 2nd posting, maybe the 1st one didn't goes thru)\n> I've tested several keyword count from 2 millions record book\n> description table that indexed with tseach2 indexing.\n> The result is always slow for first query attempt.\n\nYes, this is because your tsearch2 index is getting pushed out of RAM. When \nthe index is cached it's very, very fast but takes a long time to get loaded \nfrom disk.\n\nYou need to look at what else is using RAM on that machine. And maybe buy \nmore.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 31 Mar 2005 12:50:45 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to speed up word count in tsearch2?"
},
{
"msg_contents": "> You need to look at what else is using RAM on that machine. And maybe buy\n> more.\n\nOuch.. I had that feeling also. then how can I know how much memory\nneeded for certain amount words? and why counting uncommon words are\nfaster than common one?\n",
"msg_date": "Thu, 31 Mar 2005 22:03:32 -0600",
"msg_from": "Yudie Pg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to speed up word count in tsearch2?"
},
{
"msg_contents": "On Apr 1, 2005 4:03 AM, Yudie Pg <[email protected]> wrote:\n> > You need to look at what else is using RAM on that machine. And maybe buy\n> > more.\n> \n> Ouch.. I had that feeling also. then how can I know how much memory\n> needed for certain amount words? and why counting uncommon words are\n> faster than common one?\n\nBecause the index is a tree. You fall of the end of a branch faster\nwith uncommon words. Plus the executor goes back to the table for\nfewer real rows with uncommon words.\n\nIt sounds like you may just need a faster disk subsystem. That would\nshrink the time for the first query on any particular set of words,\nand it would make everything else faster as a nice side effect. What\ndoes your disk layout look like now?\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Fri, 1 Apr 2005 12:17:56 +0000",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to speed up word count in tsearch2?"
}
] |
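Two hedged checks that follow from the replies above. The LIMIT 100 in the original queries has no effect, since count(*) returns a single row; and comparing a cold run with a repeated run, together with a rough look at the index size, shows whether the first-attempt slowness really is the tsearch2 index being read from disk. The index name below is a placeholder:

    -- Run twice: compare the first (cold) timing with the second (cached) timing
    EXPLAIN ANALYZE
    SELECT count(*) FROM table1
     WHERE searchvector @@ to_tsquery('default', 'david');

    -- Approximate size of the tsearch2 index that has to fit in RAM
    -- (relpages are 8 kB each; replace the placeholder index name)
    SELECT relname, relpages, relpages * 8 AS approx_kb
      FROM pg_class
     WHERE relname = 'table1_searchvector_idx';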
[
{
"msg_contents": "\nAnybody a solution for the next problem:\n\npeople can subscribe to a service for 1 or more days (upto a max. of 365).\n\nSo in the database is stored: first_date and last_date\n\nTo select which people are subscribed for a certain date (e.g. today) we use\na select like\n\nselect ....... where first_date <= today and last_date >= today\n\nWhatever index we create system always does a sequential scan (which I can\nunderstand).\n\nHas someone a smarter solution?\n\nAll suggestions will be welcomed.\n\n\n",
"msg_date": "Fri, 1 Apr 2005 12:05:44 +0200",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "date - range"
},
{
"msg_contents": "On Fri, Apr 01, 2005 at 12:05:44PM +0200, H.J. Sanders wrote:\n> \n> people can subscribe to a service for 1 or more days (upto a max. of 365).\n> \n> So in the database is stored: first_date and last_date\n> \n> To select which people are subscribed for a certain date (e.g. today) we use\n> a select like\n> \n> select ....... where first_date <= today and last_date >= today\n> \n> Whatever index we create system always does a sequential scan (which I can\n> understand).\n\nCould you show the table and index definitions and the EXPLAIN\nANALYZE output of two queries, one with enable_seqscan set to \"on\"\nand one with it set to \"off\"? The planner might think that a\nsequential scan would be faster than an index scan, and EXPLAIN\nANALYZE should tell us if that guess is correct.\n\nWhat version of PostgreSQL are you using?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Fri, 1 Apr 2005 16:24:01 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date - range"
},
{
"msg_contents": "Quoting \"H.J. Sanders\" <[email protected]>:\n\n> \n> Anybody a solution for the next problem:\n> people can subscribe to a service for 1 or more days (upto a max. of 365).\n> So in the database is stored: first_date and last_date\n> To select which people are subscribed for a certain date (e.g. today) we use\n> a select like\n> \n> select ....... where first_date <= today and last_date >= today\n> \n> Whatever index we create system always does a sequential scan (which I can\n> understand). Has someone a smarter solution?\n\nYep, standard SQL problem. The answer is sort of a hand-rolled GIST index.\n\nTo save typing, I'm going to pretend all your dates are stored as integers.\nIn reality, you'll probably be writing views with lots of EXTRACT(EPOCH...)'s in\nthem, to achieve the same result.\n\nSuppose you have table People(id, first_date, last_date, ...)\nEach such range \"fits\" in some larger fixed range of 1,2,4, ... days\nthat starts and ends on a fixed (epoch) date multiple of 1,2,4,...\nFor example, if your range were days (1040..1080), then that fits in the\n64-wide range (1024...1088]. You calculate the start and width of the range that\njust fits, and store that in People, too. Now, you index on (start,width).\n\nNow, when you want to query for a given \"today\", you have to try for\nall possible widths in People. Fortunately, that's darn few!\nThe ranges up to a decade (!) will still mean only 16 different widths.\nA max range of one year (<512 days) means only 9 widths.\nYou can do this with a tiny static table. \n\nThen: the query:\n\nSELECT People.* FROM People \nJOIN Widths\nON People.start = today - today % Widths.width\nAND People.width = Widths.width\n\nThough this may look gross, it makes an index work where no normal BTree index\nwould. I've used it for some really nasty data conversions of 100M-row tables. \n\nYour first name wouldn't be \"Harlan\", would it? :-)\n-- \"Dreams come true, not free.\"\n\n",
"msg_date": "Fri, 1 Apr 2005 21:59:44 -0800",
"msg_from": "Mischa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date - range"
},
{
"msg_contents": "Quoting Mischa <[email protected]>:\n\n[deleted]\n> SELECT People.* FROM People \n> JOIN Widths\n> ON People.start = today - today % Widths.width\n> AND People.width = Widths.width\n\nYikes! I hit the SEND button one ohnosecend too fast.\n\n(1) You still ALSO have to test:\n... AND today between first_date and last_date\n\n(2) On some SQL engines, it makes a different to how the engine can re-order the\nnested loops, if you make the index (width,start) instead of (start,width).\nHaven't tried on PG8 yet.\n-- \n\"Dreams come true, not free.\"\n\n",
"msg_date": "Fri, 1 Apr 2005 22:25:19 -0800",
"msg_from": "Mischa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date - range"
},
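To make the bucketing scheme above concrete, here is a minimal sketch, not taken from the original posts: the table and column names are invented, dates are integer day numbers as in Mischa's simplification, and his follow-up correction (the extra BETWEEN test) is folded in.

-- bucket widths: powers of two up to the longest possible subscription
CREATE TABLE widths (width integer PRIMARY KEY);
INSERT INTO widths
SELECT (2 ^ g)::integer FROM generate_series(0, 9) AS g;  -- 1, 2, 4, ... 512

CREATE TABLE people (
    id           serial PRIMARY KEY,
    first_date   integer NOT NULL,   -- epoch day numbers
    last_date    integer NOT NULL,
    bucket_width integer,            -- width of the smallest aligned bucket covering the range
    bucket_start integer             -- start of that bucket
);

-- fill in the bucket columns: the smallest power-of-two width for which
-- first_date and last_date land in the same aligned bucket
UPDATE people
SET bucket_width = b.width,
    bucket_start = first_date - first_date % b.width
FROM (SELECT pp.id, min(w.width) AS width
      FROM people pp, widths w
      WHERE pp.first_date - pp.first_date % w.width
          = pp.last_date  - pp.last_date  % w.width
      GROUP BY pp.id) b
WHERE people.id = b.id;

CREATE INDEX people_bucket_idx ON people (bucket_start, bucket_width);

-- lookup for one day (1070 stands in for today's day number)
SELECT p.*
FROM people p
JOIN widths w ON p.bucket_width = w.width
             AND p.bucket_start = 1070 - 1070 % w.width
WHERE 1070 BETWEEN p.first_date AND p.last_date;

For any given day, each width admits exactly one candidate bucket, so the join turns the open-ended two-sided range test into a handful of exact probes on the (bucket_start, bucket_width) index.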
{
"msg_contents": "On Fri, Apr 01, 2005 at 09:59:44PM -0800, Mischa wrote:\n> > \n> > select ....... where first_date <= today and last_date >= today\n> > \n> > Whatever index we create system always does a sequential scan (which I can\n> > understand). Has someone a smarter solution?\n> \n> Yep, standard SQL problem. The answer is sort of a hand-rolled GIST index.\n\nThat might not be necessary in this case.\n\nCREATE TABLE foo (\n id serial PRIMARY KEY,\n first_date date NOT NULL,\n last_date date NOT NULL,\n CONSTRAINT check_date CHECK (last_date >= first_date)\n);\n\n/* populate table */\n\nCREATE INDEX foo_date_idx ON foo (first_date, last_date);\nANALYZE foo;\n\nEXPLAIN SELECT * FROM foo\nWHERE first_date <= current_date AND last_date >= current_date;\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Index Scan using foo_date_idx on foo (cost=0.01..15.55 rows=97 width=12)\n Index Cond: ((first_date <= ('now'::text)::date) AND (last_date >= ('now'::text)::date))\n(2 rows)\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Sat, 2 Apr 2005 00:01:31 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date - range"
},
{
"msg_contents": "On Sat, Apr 02, 2005 at 00:01:31 -0700,\n Michael Fuhr <[email protected]> wrote:\n> On Fri, Apr 01, 2005 at 09:59:44PM -0800, Mischa wrote:\n> > > \n> > > select ....... where first_date <= today and last_date >= today\n> > > \n> > > Whatever index we create system always does a sequential scan (which I can\n> > > understand). Has someone a smarter solution?\n> > \n> > Yep, standard SQL problem. The answer is sort of a hand-rolled GIST index.\n> \n> That might not be necessary in this case.\n\nEven though you get an index scan, I don't think it is going to be\nvery fast as there are going to be a lot of entries with first_date\n<= current_date. If the requests are almost always for the current date,\nthen switching the order of columns in the index will help, since there\nwill probably be few orders for future service, so that the current\ndate being <= the last_date will be a much better indicator of whether\nthey have service. If requests are made well into the past then this\napproach will have the same problem as checking first_date first.\nHe will probably get faster lookups using rtree or gist indexes as\nhe really is checking for containment.\n\n> \n> CREATE TABLE foo (\n> id serial PRIMARY KEY,\n> first_date date NOT NULL,\n> last_date date NOT NULL,\n> CONSTRAINT check_date CHECK (last_date >= first_date)\n> );\n> \n> /* populate table */\n> \n> CREATE INDEX foo_date_idx ON foo (first_date, last_date);\n> ANALYZE foo;\n> \n> EXPLAIN SELECT * FROM foo\n> WHERE first_date <= current_date AND last_date >= current_date;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------\n> Index Scan using foo_date_idx on foo (cost=0.01..15.55 rows=97 width=12)\n> Index Cond: ((first_date <= ('now'::text)::date) AND (last_date >= ('now'::text)::date))\n> (2 rows)\n",
"msg_date": "Sat, 2 Apr 2005 08:14:17 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date - range"
}
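For readers on much newer releases than this thread: the containment framing Bruno mentions is now directly indexable with a built-in range type and GiST (range types arrived in PostgreSQL 9.2, long after this discussion). A sketch against the same foo table:

CREATE INDEX foo_period_gist_idx ON foo
    USING gist (daterange(first_date, last_date, '[]'));

SELECT *
FROM foo
WHERE daterange(first_date, last_date, '[]') @> current_date;

Because the query repeats the exact expression used in the index, the planner can answer the containment test from the GiST index even when a large fraction of rows satisfy only one of the two date bounds.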
] |
[
{
"msg_contents": "\n Hello,\n\n What would be reasonable settings for quite heavily used\nbut not large database?\n\n Dabatase is under 1G in size and fits into server cache (server\nhas 2GB of memeory). Two of most used tables are ~100k rows each\nbut they get up to 50inserts/updates/deletes per second.\n\n How to tweak fsm (?) and pg_auovacuum settings for such case?\nWhat I do not like about one table is \"unused item pointers\" number.\n Now I use max_fsm_relations=1000 and max_fsm_pages=200000.\npg_autovacuum ran with default settings.\n\n Thanks,\n\n Mindaugas\n\n# VACUUM VERBOSE msq;\nINFO: vacuuming \"msq\"\nINFO: index \"msq_next\" now contains 74983 row versions in 537 pages\nDETAIL: 75963 index row versions were removed.\n123 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.13u sec elapsed 2.00 sec.\nINFO: index \"msq_recipient_idx\" now contains 75014 row versions in 740\npages\nDETAIL: 75963 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.16u sec elapsed 0.17 sec.\nINFO: index \"msq_id_pk\" now contains 75065 row versions in 396 pages\nDETAIL: 75963 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.15u sec elapsed 0.17 sec.\nINFO: \"msq\": removed 75963 row versions in 6118 pages\nDETAIL: CPU 0.62s/0.56u sec elapsed 17.02 sec.\nINFO: \"msq\": found 75963 removable, 74553 nonremovable row versions in\n49386 pages\nDETAIL: 1221 dead row versions cannot be removed yet.\nThere were 1634616 unused item pointers.\n0 pages are entirely empty.\nCPU 1.36s/1.24u sec elapsed 33.23 sec.\n\n",
"msg_date": "Fri, 1 Apr 2005 17:54:44 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning PostgreSQL"
}
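No reply to this appears in the archive. As a hedged starting point for the fsm part of the question only (not tuned values), the catalogs can bound how large max_fsm_pages could ever usefully be for this database:

-- total heap, index and toast pages in the current database; max_fsm_pages
-- only needs to cover pages that will ever hold reusable free space, so the
-- heavily-updated tables (like the one vacuumed above) dominate in practice
SELECT sum(relpages) AS total_pages
FROM pg_class
WHERE relkind IN ('r', 'i', 't');

On the 7.4/8.0-era releases this thread is about, a database-wide VACUUM VERBOSE also finishes with a free space map summary; if the "total pages needed" figure it reports is above max_fsm_pages, the current setting of 200000 is too small for the described update rate.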
] |
[
{
"msg_contents": "I have a query in my application that takes an unreasonable amount of time\nto complete (>4.5 hours execution time). After reviewing the EXPLAIN and\nEXPLAIN ANALYZE output for that and similar queries, my colleagues and I\ndetermined that turning off the enable_nestloop option might help - we\nnoticed dramatic speed improvements for that specific query after doing so\n(<2 minutes execution time). I was warned not to mess with the enable_XXX\noptions in a production environment, but does anyone see any problem with\nturning off the enable_nestloop option right before executing my query and\nturning it back on afterwards?\n\n \n\nBjorn Peterson\n\nSoftware Engineer\n\nPearson School Technologies\n\nBloomington, MN\n\n(952) 681-3384\n\n \n\n\n**************************************************************************** \nThis email may contain confidential material. \nIf you were not an intended recipient, \nPlease notify the sender and delete all copies. \nWe may monitor email to and from our network. \n****************************************************************************\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI have a query in my application that takes an unreasonable\namount of time to complete (>4.5 hours execution time). After\nreviewing the EXPLAIN and EXPLAIN ANALYZE output for that and similar queries,\nmy colleagues and I determined that turning off the enable_nestloop option\nmight help – we noticed dramatic speed improvements for that specific query\nafter doing so (<2 minutes execution time). I was warned not to mess\nwith the enable_XXX options in a production environment, but does anyone see\nany problem with turning off the enable_nestloop option right before executing\nmy query and turning it back on afterwards?\n \nBjorn Peterson\nSoftware Engineer\nPearson School Technologies\nBloomington, MN\n(952) 681-3384\n \n\n**************************************************************************** \n\nThis email may contain confidential material. If you were not \nan intended recipient, Please notify the sender and delete all copies. \nWe may monitor email to and from our network.\n ***************************************************************************",
"msg_date": "Fri, 1 Apr 2005 10:04:08 -0600 ",
"msg_from": "\"Peterson, Bjorn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "enable_XXX options"
},
{
"msg_contents": "\"Peterson, Bjorn\" <[email protected]> writes:\n> I have a query in my application that takes an unreasonable amount of time\n> to complete (>4.5 hours execution time). After reviewing the EXPLAIN and\n> EXPLAIN ANALYZE output for that and similar queries, my colleagues and I\n> determined that turning off the enable_nestloop option might help - we\n> noticed dramatic speed improvements for that specific query after doing so\n> (<2 minutes execution time). I was warned not to mess with the enable_XXX\n> options in a production environment, but does anyone see any problem with\n> turning off the enable_nestloop option right before executing my query and\n> turning it back on afterwards?\n\nThat's what it's there for ... but it would be useful to look into why\nthe planner gets it so wrong without that hint. Could we see EXPLAIN\nANALYZE both ways?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 11:36:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: enable_XXX options "
}
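For reference, a sketch of scoping the override to just the one report query, so a forgotten reset cannot affect other work on the same connection (the query itself is elided):

BEGIN;
SET LOCAL enable_nestloop = off;  -- reverts automatically at COMMIT or ROLLBACK
-- run the slow report query here
COMMIT;

A plain SET ... / RESET enable_nestloop pair works as well, but SET LOCAL fails safe if the query errors out before the reset is reached.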
] |
[
{
"msg_contents": "Hi All,\n\nI have a trigger function that copies data from an input table to a table in\nthe actual data model. The data model table has a trigger after update on it.\n Is the first trigger fired after the copy terminates or after each insert?\nIs the second trigger fired after the first trigger is complete or once for\nevery iteration of the loop in the first trigger? I only want these triggers\nto fire after the previous action is complete. That is what I thought I was\ngetting when I chose the FOR EACH STATEMENT attribute. Here are excerpts from\nthe various programs that are running. Your thoughts are appreciated.\n\n From a bash shell COPY is used to put data in the input table.\n cat ${v_load_dir}/${v_filename}.ld | \\\n psql --echo-all \\\n --dbname ${DB} \\\n --username dbuser \\\n --command \\\n \"COPY tbl_status\n FROM stdin\n WITH DELIMITER AS ','\n NULL AS '';\"\n\nThe input table has an AFTER-INSERT-STATEMENT trigger.\n CREATE TRIGGER tgr_xfr_status\n AFTER INSERT\n ON tbl_status\n FOR EACH STATEMENT\n EXECUTE PROCEDURE tf_xfr_status();\n\nThe input table trigger uses a LOOP to process each newly inserted record.\n FOR rcrd_order IN SELECT...\n LOOP\n-- Now update the information in the detail table.\n UPDATE tbl_detail\n SET closed = rcrd_order.closed\n WHERE tbl_detail.number = rcrd_order.so_number;\n END LOOP;\n\nThe data model table has an AFTER-UPDATE-STATEMENT trigger.\n CREATE TRIGGER tgr_update_allocated\n AFTER UPDATE\n ON tbl_detail\n FOR EACH STATEMENT\n EXECUTE PROCEDURE tf_update_allocated();\n\nKind Regards,\nKeith\n",
"msg_date": "Fri, 1 Apr 2005 11:20:25 -0500",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Triggers with FOR EACH STATEMENT"
}
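No answer to this appears in the archive. One way to settle it empirically is a throwaway counting trigger; this is a sketch with invented names, and it assumes plpgsql is installed, which it evidently is given the existing trigger functions.

CREATE TABLE trigger_fire_log (
    fired_at timestamptz DEFAULT now(),
    which    text
);

CREATE OR REPLACE FUNCTION tf_log_fire() RETURNS trigger AS '
BEGIN
    -- TG_NAME records which trigger fired
    INSERT INTO trigger_fire_log (which) VALUES (TG_NAME);
    RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH STATEMENT triggers
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tgr_count_status_insert
    AFTER INSERT ON tbl_status
    FOR EACH STATEMENT
    EXECUTE PROCEDURE tf_log_fire();

CREATE TRIGGER tgr_count_detail_update
    AFTER UPDATE ON tbl_detail
    FOR EACH STATEMENT
    EXECUTE PROCEDURE tf_log_fire();

-- after one COPY into tbl_status:
SELECT which, count(*) FROM trigger_fire_log GROUP BY which;

One part can be stated without testing: each UPDATE issued inside the loop of tf_xfr_status is its own statement, so a FOR EACH STATEMENT trigger on tbl_detail fires once per loop iteration, not once after the whole loop. How the COPY itself is counted is exactly what the harness above will show.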
] |
[
{
"msg_contents": "-----Original Message-----\n>From: Tom Lane [mailto:[email protected]] \n>Sent: Friday, April 01, 2005 10:37 AM\n>To: Peterson, Bjorn\n>Cc: [email protected]\n>Subject: Re: [PERFORM] enable_XXX options \n>\n>\"Peterson, Bjorn\" <[email protected]> writes:\n>> I have a query in my application that takes an unreasonable amount of\ntime\n>> to complete (>4.5 hours execution time). After reviewing the EXPLAIN\nand\n>> EXPLAIN ANALYZE output for that and similar queries, my colleagues and I\n>> determined that turning off the enable_nestloop option might help - we\n>> noticed dramatic speed improvements for that specific query after doing\nso\n>> (<2 minutes execution time). I was warned not to mess with the\nenable_XXX\n>> options in a production environment, but does anyone see any problem with\n>> turning off the enable_nestloop option right before executing my query\nand\n>> turning it back on afterwards?\n>\n>That's what it's there for ... but it would be useful to look into why\n>the planner gets it so wrong without that hint. Could we see EXPLAIN\n>ANALYZE both ways?\n>\n>\t\t\tregards, tom lane\n>\n\n\nBelow is my query and the output of EXPLAIN - I was not able to run EXPLAIN\nANALYZE, as the query never completes unless we turn enable_nestloop off:\n\nSELECT t.term_id AS term_id, a.user_id AS user_id, a.time_slot AS course_id,\na.attendance_status AS status, SUM(CASE WHEN a.attendance_date>=t.start_date\nTHEN 1 ELSE 0 END) AS cur_total, COUNT(a.attendance_date) AS ytd_total FROM\n\"Attendance\" a, \"Terms\" t, \"Terms\" ytd, \"CoursesUsers\" cu, \"Courses\" c,\n\"CoursesOffered\" co, \"Schools\" s WHERE a.attendance_type=1 AND\na.attendance_status IN(3,4,1,2) AND a.attendance_date>=ytd.start_date AND\na.attendance_date<=t.end_date AND a.attendance_date<=now() AND\na.user_id=cu.user_id AND a.time_slot=cu.course_id AND\ncu.course_id=c.course_id AND co.course_offered_id=c.course_offered_id AND\nco.school_id=s.school_id AND s.district_id=2 AND ytd.term_id=t.top_term_id\nAND c.course_id IN\n(221,395,244,394,366,370,400,11,373,369,392,406,398,381,391,393,403,376,220,\n846,440,935,910,431,428,904,905,222,201,453,913,1794,408,901,856,424,443,175\n0,452,461,462,471,463,911,489,821,916,501,223) GROUP BY a.user_id,\na.time_slot, t.term_id, a.attendance_status ORDER BY a.user_id, a.time_slot,\nt.term_id, a.attendance_status\n\nThe Attendance table is the largest (about 2 million records), Terms has\nabout 50 records, CoursesUsers has about 30,000 records, Courses has about\n2000 records, CoursesOffered has about 1000 records, and Schools has 3\nrecords. The purpose of this query is to retrieve the number of absences\nfor each student/course/term combination - we need separate totals for\nyear-to-date (from the start of the school year), and for absences only\nwithin the current term. Every field referenced in the WHERE clause has an\nappropriate single or multi-column index.\n\nWe are using the standard PostgreSQL JDBC driver and the only parameter\nbeing set in this query is the district_id (s.district_id=2). We are\nrunning Postgres 8.0.1 on a Windows 2000 server. 
\n\n\nWith enable_nestloop on (default):\n\nQUERY PLAN\nGroupAggregate (cost=4674.63..4677.13 rows=100 width=22)\n -> Sort (cost=4674.63..4674.88 rows=100 width=22)\n Sort Key: a.user_id\n -> Nested Loop (cost=276.69..4671.30 rows=100 width=22)\n Join Filter: ((\"outer\".attendance_date <= \"inner\".end_date)\nAND (\"outer\".attendance_date >= \"inner\".start_date))\n -> Hash Join (cost=273.30..4649.92 rows=20 width=14)\n Hash Cond: (\"outer\".school_id = \"inner\".school_id)\n -> Nested Loop (cost=272.26..4648.50 rows=25 width=18)\n -> Hash Join (cost=272.26..986.69 rows=836\nwidth=16)\n Hash Cond: (\"outer\".course_offered_id =\n\"inner\".course_offered_id)\n -> Hash Join (cost=246.81..948.70 rows=836\nwidth=16)\n Hash Cond: (\"outer\".course_id =\n\"inner\".course_id)\n -> Seq Scan on \"CoursesUsers\" cu\n(cost=0.00..545.02 rows=29702 width=8)\n -> Hash (cost=246.68..246.68 rows=49\nwidth=8)\n -> Seq Scan on \"Courses\" c\n(cost=0.00..246.68 rows=49 width=8)\n Filter: ((course_id = 221)\nOR (course_id = 395) OR (course_id = 244) OR (course_id = 394) OR (course_id\n= 366) OR (course_id = 370) OR (course_id = 400) OR (course_id = 11) OR\n(course_id = 373) OR (course_i (..)\n -> Hash (cost=23.36..23.36 rows=836\nwidth=8)\n -> Seq Scan on \"CoursesOffered\" co\n(cost=0.00..23.36 rows=836 width=8)\n -> Index Scan using \"Attendance_pkey\" on\n\"Attendance\" a (cost=0.00..4.37 rows=1 width=14)\n Index Cond: ((a.attendance_date <= now())\nAND (a.attendance_type = 1) AND (\"outer\".course_id = a.time_slot) AND\n(a.user_id = \"outer\".user_id))\n Filter: ((attendance_status = 3) OR\n(attendance_status = 4) OR (attendance_status = 1) OR (attendance_status =\n2))\n -> Hash (cost=1.04..1.04 rows=3 width=4)\n -> Seq Scan on \"Schools\" s (cost=0.00..1.04\nrows=3 width=4)\n Filter: (district_id = 2)\n -> Materialize (cost=3.39..3.75 rows=36 width=16)\n -> Hash Join (cost=1.45..3.35 rows=36 width=16)\n Hash Cond: (\"outer\".top_term_id = \"inner\".term_id)\n -> Seq Scan on \"Terms\" t (cost=0.00..1.36\nrows=36 width=16)\n -> Hash (cost=1.36..1.36 rows=36 width=8)\n -> Seq Scan on \"Terms\" ytd\n(cost=0.00..1.36 rows=36 width=8)\n\n-------------------------------------------------------------------------\n\nAfter turning enable_nestloop off:\n\nQUERY PLAN\nGroupAggregate (cost=100078595.13..100078597.63 rows=100 width=22)\n -> Sort (cost=100078595.13..100078595.38 rows=100 width=22)\n Sort Key: a.user_id\n -> Nested Loop (cost=100078571.91..100078591.81 rows=100 width=22)\n Join Filter: ((\"inner\".attendance_date <= \"outer\".end_date)\nAND (\"inner\".attendance_date >= \"outer\".start_date))\n -> Hash Join (cost=1.45..3.35 rows=36 width=16)\n Hash Cond: (\"outer\".top_term_id = \"inner\".term_id)\n -> Seq Scan on \"Terms\" t (cost=0.00..1.36 rows=36\nwidth=16)\n -> Hash (cost=1.36..1.36 rows=36 width=8)\n -> Seq Scan on \"Terms\" ytd (cost=0.00..1.36\nrows=36 width=8)\n -> Materialize (cost=78570.46..78570.66 rows=20 width=14)\n -> Hash Join (cost=991.91..78570.44 rows=20 width=14)\n Hash Cond: (\"outer\".school_id = \"inner\".school_id)\n -> Hash Join (cost=990.87..78569.02 rows=25\nwidth=18)\n Hash Cond: ((\"outer\".time_slot =\n\"inner\".course_id) AND (\"outer\".user_id = \"inner\".user_id))\n -> Seq Scan on \"Attendance\" a\n(cost=0.00..75599.26 rows=79148 width=14)\n Filter: ((attendance_type = 1) AND\n((attendance_status = 3) OR (attendance_status = 4) OR (attendance_status =\n1) OR (attendance_status = 2)) AND (attendance_date <= now()))\n -> Hash (cost=986.69..986.69 rows=836\nwidth=16)\n -> 
Hash Join (cost=272.26..986.69\nrows=836 width=16)\n Hash Cond:\n(\"outer\".course_offered_id = \"inner\".course_offered_id)\n -> Hash Join\n(cost=246.81..948.70 rows=836 width=16)\n Hash Cond:\n(\"outer\".course_id = \"inner\".course_id)\n -> Seq Scan on\n\"CoursesUsers\" cu (cost=0.00..545.02 rows=29702 width=8)\n -> Hash\n(cost=246.68..246.68 rows=49 width=8)\n -> Seq Scan on\n\"Courses\" c (cost=0.00..246.68 rows=49 width=8)\n Filter:\n((course_id = 221) OR (course_id = 395) OR (course_id = 244) OR (course_id =\n394) OR (course_id = 366) OR (course_id = 370) OR (course_id = 400) OR\n(course_id = 11) OR (course_id = 373) (..)\n -> Hash (cost=23.36..23.36\nrows=836 width=8)\n -> Seq Scan on\n\"CoursesOffered\" co (cost=0.00..23.36 rows=836 width=8)\n -> Hash (cost=1.04..1.04 rows=3 width=4)\n -> Seq Scan on \"Schools\" s\n(cost=0.00..1.04 rows=3 width=4)\n Filter: (district_id = 2)\n\n\n\n\n**************************************************************************** \nThis email may contain confidential material. \nIf you were not an intended recipient, \nPlease notify the sender and delete all copies. \nWe may monitor email to and from our network. \n****************************************************************************\n",
"msg_date": "Fri, 1 Apr 2005 09:50:58 -0700 ",
"msg_from": "\"Peterson, Bjorn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: enable_XXX options "
},
{
"msg_contents": "\"Peterson, Bjorn\" <[email protected]> writes:\n>> That's what it's there for ... but it would be useful to look into why\n>> the planner gets it so wrong without that hint. Could we see EXPLAIN\n>> ANALYZE both ways?\n\n> Below is my query and the output of EXPLAIN - I was not able to run EXPLAIN\n> ANALYZE, as the query never completes unless we turn enable_nestloop off:\n\nWell, when the point is to find out why the planner's estimates don't\nmatch reality, it's difficult to learn anything by looking only at the\nestimates and not at reality.\n\nGiven what you say about the table sizes, the planner's preferred plan\nlooks somewhat reasonable. I think the weak spot is the assumption that\nthis index check will be fast:\n\n> -> Index Scan using \"Attendance_pkey\" on\n> \"Attendance\" a (cost=0.00..4.37 rows=1 width=14)\n> Index Cond: ((a.attendance_date <= now())\n> AND (a.attendance_type = 1) AND (\"outer\".course_id = a.time_slot) AND\n> (a.user_id = \"outer\".user_id))\n\nand the reason this seems like a weak spot is that the plan implies that\nyou made attendance_date be the first column in the index. At least for\nthis query, it'd be far better for attendance_date to be the last\ncolumn, so that the info for any one user_id is bunched together in the\nindex. For that matter I'd bet that attendance_type shouldn't be the\nhighest part of the key either --- either course_id or user_id should\nprobably be the leading key, depending on what sorts of queries you do.\nIt wouldn't matter for this query, but you should look to see if you\nhave other queries that select on only one of the two.\n\nIf you have both equalities and inequalities in an index condition, you\nalways want the equalities to be on the higher-order keys. Otherwise\nthe scan will involve wasted scanning over index entries that match\nonly some of the conditions. (Think about the ordering of a multicolumn\nindex to see why this is so.) In this particular case I think the thing\nwill be scanning almost the whole index every time :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 12:27:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: enable_XXX options "
}
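A sketch of the index ordering Tom describes, using the column names from the query above; the index name is invented and, as he notes, the best leading column depends on the application's other queries:

-- equality-tested columns first, the range-tested column last
CREATE INDEX attendance_user_slot_type_date_idx
    ON "Attendance" (user_id, time_slot, attendance_type, attendance_date);

With attendance_date last, all entries for one user/course/type combination sit together in the index, so the date conditions only trim a short contiguous run instead of forcing a scan over most of the index.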
] |
[
{
"msg_contents": "\nJust curious, but does anyone have an idea of what we are capable of? I \nrealize that size of record would affect things, as well as hardware, but \nif anyone has some ideas on max, with 'record size', that would be \nappreciated ...\n\nThanks ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n",
"msg_date": "Fri, 1 Apr 2005 14:06:03 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sustained inserts per sec ... ?"
},
{
"msg_contents": "> Just curious, but does anyone have an idea of what we are capable of? I \n> realize that size of record would affect things, as well as hardware, but \n> if anyone has some ideas on max, with 'record size', that would be \n> appreciated ...\n\nWell, I just did an insert of 27,500 records with 9 fields, averaging \naround 118 bytes per record, each insert statement coming from a separate \nSQL statement fed to psql, and it took a bit over 4 minutes, or about \n106 inserts per second.\n\nThat seems consistent with what I get when I do a restore of a dump\nfile that has insert statement instead of COPY.\n\nThe hardware is a Dell dual Xeon system, the disks are mirrored SATA\ndrives with write buffering turned off. \n--\nMike Nolan\n\n\n",
"msg_date": "Fri, 1 Apr 2005 13:27:14 -0600 (CST)",
"msg_from": "Mike Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 1, 2005 1:06 PM, Marc G. Fournier <[email protected]> wrote:\n> \n> Just curious, but does anyone have an idea of what we are capable of? I\n> realize that size of record would affect things, as well as hardware, but\n> if anyone has some ideas on max, with 'record size', that would be\n> appreciated ...\n\nOn a AMD64/3000, 1Gb RAM, 2 SATA drives (1 for log, 1 for data), and\ninserting using batches of 500-1000 rows, and also using the COPY\nsyntax, I have seen an interesting thing. There are 5 indexes\ninvolved here, BTW. This is Linux 2.6 running on an XFS file system\n(ext3 was even worse for me).\n\nI can start at about 4,000 rows/second, but at about 1M rows, it\nplummets, and by 4M it's taking 6-15 seconds to insert 1000 rows. \nThat's only about 15 rows/second, which is quite pathetic. The\nproblem seems to be related to my indexes, since I have to keep them\nonline (the system in continually querying, as well).\n\nThis was an application originally written for MySQL/MYISAM, and it's\nlooking like PostgreSQL can't hold up for it, simply because it's \"too\nmuch database\" if that makes sense. The same box, running the MySQL\nimplementation (which uses no transactions) runs around 800-1000\nrows/second systained.\n\nJust a point of reference. I'm trying to collect some data so that I\ncan provide some charts of the degredation, hoping to find the point\nwhere it dies and thereby find the point where it needs attention.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Fri, 1 Apr 2005 14:38:36 -0500",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> I can start at about 4,000 rows/second, but at about 1M rows, it\n> plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows. \n> That's only about 15 rows/second, which is quite pathetic. The\n> problem seems to be related to my indexes, since I have to keep them\n> online (the system in continually querying, as well).\n\nI doubt it has anything to do with your indexes. I'm wondering about\nforeign key checks, myself. What FKs exist on these tables? Is the\n\"start\" condition zero rows in every table? Which PG version exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 15:42:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "On Apr 1, 2005 3:42 PM, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > I can start at about 4,000 rows/second, but at about 1M rows, it\n> > plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.\n> > That's only about 15 rows/second, which is quite pathetic. The\n> > problem seems to be related to my indexes, since I have to keep them\n> > online (the system in continually querying, as well).\n> \n> I doubt it has anything to do with your indexes. I'm wondering about\n> foreign key checks, myself. What FKs exist on these tables? Is the\n> \"start\" condition zero rows in every table? Which PG version exactly?\n\nSure, I'm going to post something on my web site when I get some\nnumbers that will make this more valuable. To answer your question:\n\n1. No foreign keys (long story, but they just don't exist for this one table)\n2. Start condition is zero. I'm using multiple inherited tables to\ndeal with the data \"partitioning\" since eventual size is billions of\nrows. Each \"partition\" currently has 10M rows in it as a goal.\n3. Version 8.0.2, however I started this experiment with 8.0.0.\n4. fsync=off\n\nWhat seems to happen is it slams into a \"wall\" of some sort, the\nsystem goes into disk write frenzy (wait=90% CPU), and eventually\nrecovers and starts running for a while at a more normal speed. What\nI need though, is to not have that wall happen. It is easier for me\nto accept a constant degredation of 5%, rather than a 99% degredation\nfor short periods, as it can cause cascade problems in the system.\n\nMy goal is to gather some numbers, and post code + schema + analysis.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Fri, 1 Apr 2005 15:46:40 -0500",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "\n> What seems to happen is it slams into a \"wall\" of some sort, the\n> system goes into disk write frenzy (wait=90% CPU), and eventually\n> recovers and starts running for a while at a more normal speed. What\n> I need though, is to not have that wall happen. It is easier for me\n> to accept a constant degredation of 5%, rather than a 99% degredation\n> for short periods, as it can cause cascade problems in the system.\n\nCould this possibly be a checkpoint happening?\n\nAlso how many checkpoint segments do you have?\n\n\n> \n> My goal is to gather some numbers, and post code + schema + analysis.\n> \n> Chris\n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n",
"msg_date": "Fri, 01 Apr 2005 12:53:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> What seems to happen is it slams into a \"wall\" of some sort, the\n> system goes into disk write frenzy (wait=90% CPU), and eventually\n> recovers and starts running for a while at a more normal speed.\n\nCheckpoint maybe? If so, tweaking the bgwriter parameters might\nimprove matters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 15:58:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "On Apr 1, 2005 3:53 PM, Joshua D. Drake <[email protected]> wrote:\n> \n> > What seems to happen is it slams into a \"wall\" of some sort, the\n> > system goes into disk write frenzy (wait=90% CPU), and eventually\n> > recovers and starts running for a while at a more normal speed. What\n> > I need though, is to not have that wall happen. It is easier for me\n> > to accept a constant degredation of 5%, rather than a 99% degredation\n> > for short periods, as it can cause cascade problems in the system.\n> \n> Could this possibly be a checkpoint happening?\n> \n> Also how many checkpoint segments do you have?\n\nChanges to the postgresql.conf file from \"default\":\n\n maintenance_work_mem = 131072\n fsync = false\n checkpoint_segments = 32\n\nI set the checkpoint segments up until it no longer complained about\nthem rolling over. That was the best \"advice\" I could find online. \nThe maintenance_work_mem I upped to deal with indexes being updated\nconstantly. And finally, since I'm willing to risk some loss, I\nturned fsync off, since the system is on a UPS (or will be in\nproduction) and carefully monitored.\n\nI did actually wonder about the checkpoint_segments being an issue,\nsince it seems to me the more of them you have, the more you'd have to\ndeal with when checkpointing, and so you might actually want to turn\nthat number down to create a \"smoother\" behavior.\n\nUnfortunately, the alot advice for 'loading data' doesn't apply when\nyou have a constant stream of load, rather than just sporadic. Any\nadvice is more than appreciated.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Fri, 1 Apr 2005 15:59:52 -0500",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
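A hedged way to test the checkpoint theory on the 8.0 series the poster is running (which predates log_checkpoints) is to let checkpoint_warning flag segment-driven checkpoints in the server log and line their timestamps up against the stalls. The values below are illustrative, not recommendations:

# postgresql.conf
checkpoint_segments = 32
checkpoint_timeout  = 300    # default, in seconds
checkpoint_warning  = 300    # log any segment-driven checkpoint arriving
                             # within 300s of the previous one

If the logged checkpoints coincide with the write frenzies, the checkpoint and bgwriter settings are the place to work; if they do not, the cause lies elsewhere.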
{
"msg_contents": "Mike Nolan <[email protected]> writes:\n> Well, I just did an insert of 27,500 records with 9 fields, averaging \n> around 118 bytes per record, each insert statement coming from a separate \n> SQL statement fed to psql, and it took a bit over 4 minutes, or about \n> 106 inserts per second.\n\nIs that with a separate transaction for each insert command? I can get\nsignificantly higher rates on my devel machine if the inserts are\nbundled into transactions of reasonable length.\n\nWith fsync on, you can't expect to get more than about one commit per\ndisk rotation (with a single inserting process), so with say a 7200RPM\ndrive (120 revs/sec) the above is a pretty good fraction of the\ntheoretical limit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 16:03:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
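For reference, the back-of-the-envelope behind that remark, using Mike's figures from earlier in the thread and assuming a 7200 RPM drive with one commit per rotation:

7200 rev/min / 60 = 120 rev/sec                 -> ceiling of roughly 120 commits/sec for one process
27,500 rows in "a bit over 4 minutes" (~260 s)  -> roughly 106 commits/sec, close to 90% of that ceiling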
{
"msg_contents": "1250/sec with record size average is 26 bytes\n800/sec with record size average is 48 bytes. \n250/sec with record size average is 618 bytes.\n\nData from pg_stats and our own job monitoring\n\nSystem has four partitions, two raid 1s, a four disk RAID 10 and a six\ndisk RAID 10.\npg_xlog is on four disk RAID 10, database is on RAID 10.\n\nData is very spread out because database turnover time is very high,\nso our performance is about double this with a fresh DB. (the data\nhalf life is probably measurable in days or weeks).\n\nAlex Turner\nnetEconomist\n\nOn Apr 1, 2005 1:06 PM, Marc G. Fournier <[email protected]> wrote:\n> \n> Just curious, but does anyone have an idea of what we are capable of? I\n> realize that size of record would affect things, as well as hardware, but\n> if anyone has some ideas on max, with 'record size', that would be\n> appreciated ...\n> \n> Thanks ...\n> \n> ----\n> Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)\n> Email: [email protected] Yahoo!: yscrappy ICQ: 7615664\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Fri, 1 Apr 2005 16:17:19 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Oh - this is with a seperate transaction per command.\n\nfsync is on.\n\nAlex Turner\nnetEconomist\n\nOn Apr 1, 2005 4:17 PM, Alex Turner <[email protected]> wrote:\n> 1250/sec with record size average is 26 bytes\n> 800/sec with record size average is 48 bytes.\n> 250/sec with record size average is 618 bytes.\n> \n> Data from pg_stats and our own job monitoring\n> \n> System has four partitions, two raid 1s, a four disk RAID 10 and a six\n> disk RAID 10.\n> pg_xlog is on four disk RAID 10, database is on RAID 10.\n> \n> Data is very spread out because database turnover time is very high,\n> so our performance is about double this with a fresh DB. (the data\n> half life is probably measurable in days or weeks).\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 1, 2005 1:06 PM, Marc G. Fournier <[email protected]> wrote:\n> >\n> > Just curious, but does anyone have an idea of what we are capable of? I\n> > realize that size of record would affect things, as well as hardware, but\n> > if anyone has some ideas on max, with 'record size', that would be\n> > appreciated ...\n> >\n> > Thanks ...\n> >\n> > ----\n> > Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)\n> > Email: [email protected] Yahoo!: yscrappy ICQ: 7615664\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> >\n>\n",
"msg_date": "Fri, 1 Apr 2005 16:20:46 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Alex Turner <[email protected]> writes:\n> On Apr 1, 2005 4:17 PM, Alex Turner <[email protected]> wrote:\n>> 1250/sec with record size average is 26 bytes\n>> 800/sec with record size average is 48 bytes.\n>> 250/sec with record size average is 618 bytes.\n\n> Oh - this is with a seperate transaction per command.\n> fsync is on.\n\n[ raised eyebrow... ] What kind of disk hardware is that exactly, and\ndoes it have write cache enabled? It's hard to believe those numbers\nif not.\n\nWrite caching is fine if it's done in a battery-backed cache, which you\ncan get in the higher-end hardware RAID controllers. Otherwise you're\ngoing to have problems whenever the power goes away unexpectedly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Apr 2005 17:03:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "> > Well, I just did an insert of 27,500 records with 9 fields, averaging \n> > around 118 bytes per record, each insert statement coming from a separate \n> > SQL statement fed to psql, and it took a bit over 4 minutes, or about \n> > 106 inserts per second.\n> \n> Is that with a separate transaction for each insert command? I can get\n> significantly higher rates on my devel machine if the inserts are\n> bundled into transactions of reasonable length.\n\nThat's with autocommit on. If I do it as a single transaction block,\nit takes about 6.5 seconds, which is about 4200 transactions/second.\n--\nMike Nolan\n",
"msg_date": "Fri, 1 Apr 2005 16:36:47 -0600 (CST)",
"msg_from": "Mike Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 1, 2005 3:59 PM, Christopher Petrilli <[email protected]> wrote:\n> On Apr 1, 2005 3:53 PM, Joshua D. Drake <[email protected]> wrote:\n> >\n> > > What seems to happen is it slams into a \"wall\" of some sort, the\n> > > system goes into disk write frenzy (wait=90% CPU), and eventually\n> > > recovers and starts running for a while at a more normal speed. What\n> > > I need though, is to not have that wall happen. It is easier for me\n> > > to accept a constant degredation of 5%, rather than a 99% degredation\n> > > for short periods, as it can cause cascade problems in the system.\n> >\n> > Could this possibly be a checkpoint happening?\n> >\n> > Also how many checkpoint segments do you have?\n> \n> Changes to the postgresql.conf file from \"default\":\n> \n> maintenance_work_mem = 131072\n> fsync = false\n> checkpoint_segments = 32\n\nI've now had a chance to run a couple more tests, and here's two\ngraphs of the time required to insert (via COPY from a file) 500\nrecords at a time:\n\nhttp://www.amber.org/~petrilli/diagrams/pgsql_copy500.png\nhttp://www.amber.org/~petrilli/diagrams/pgsql_copy500_bgwriter.png\n\nThe first is with the above changes, the second contains two\nadditional modificiations to the configuration:\n\n bgwriter_percent = 25\n bgwriter_maxpages = 1000 \n\nTo my, likely faulty, intuition, it would seem that there is a backup\nhappening in the moving of data from the WAL to the final resting\nplace, and that by increasing these I could pull that forward. As you\ncan see from the charts, that doesn't seem to have any major impact. \nThe point, in the rough middle, is where the program begins inserting\ninto a new table (inherited). The X axis is the \"total\" number of rows\ninserted. The table has:\n\n * 21 columns (nothing too strange)\n * No OIDS\n * 5 indexes, including the primary key on a string\n\nThey are created by creating a main table, then doing:\n\n CREATE TABLE foo001 INHERITS (foos);\n\nAnd then recreating all the indexes.\n\nThoughts? Any advice would be more than appreciated. \n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 09:48:47 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Yup, Battery backed, cache enabled. 6 drive RAID 10, and 4 drive RAID\n10, and 2xRAID 1.\n\nIt's a 3ware 9500S-8MI - not bad for $450 plus BBU.\n\nAlex Turner\nnetEconomist\n\nOn Apr 1, 2005 6:03 PM, Tom Lane <[email protected]> wrote:\n> Alex Turner <[email protected]> writes:\n> > On Apr 1, 2005 4:17 PM, Alex Turner <[email protected]> wrote:\n> >> 1250/sec with record size average is 26 bytes\n> >> 800/sec with record size average is 48 bytes.\n> >> 250/sec with record size average is 618 bytes.\n> \n> > Oh - this is with a seperate transaction per command.\n> > fsync is on.\n> \n> [ raised eyebrow... ] What kind of disk hardware is that exactly, and\n> does it have write cache enabled? It's hard to believe those numbers\n> if not.\n> \n> Write caching is fine if it's done in a battery-backed cache, which you\n> can get in the higher-end hardware RAID controllers. Otherwise you're\n> going to have problems whenever the power goes away unexpectedly.\n> \n> regards, tom lane\n>\n",
"msg_date": "Mon, 4 Apr 2005 10:47:34 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> The table has:\n> * 21 columns (nothing too strange)\n> * No OIDS\n> * 5 indexes, including the primary key on a string\n\nCould we see the *exact* SQL definitions of the table and indexes?\nAlso some sample data would be interesting. I'm wondering for example\nabout the incidence of duplicate index keys.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 11:52:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "On Apr 4, 2005 11:52 AM, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > The table has:\n> > * 21 columns (nothing too strange)\n> > * No OIDS\n> > * 5 indexes, including the primary key on a string\n> \n> Could we see the *exact* SQL definitions of the table and indexes?\n> Also some sample data would be interesting. I'm wondering for example\n> about the incidence of duplicate index keys.\n\nOf course, this is a bit cleansed, since it's an internal project, but\nonly the column names are changed:\n\nCREATE TABLE foos (\n foo_id \tVARCHAR(32),\n s_ts \tINTEGER NOT NULL,\n c_ts \t \tINTEGER NOT NULL,\n bar_id \tINTEGER NOT NULL,\n proto INTEGER NOT NULL,\n src_ip INT8 NOT NULL,\n dst_ip INT8 NOT NULL,\n src_port INTEGER,\n dst_port INTEGER,\n nated INTEGER NOT NULL,\n src_nat_ip INT8,\n dst_nat_ip INT8,\n src_nat_port INTEGER,\n dst_nat_port INTEGER,\n foo_class \tINTEGER NOT NULL,\n foo_type \tINTEGER NOT NULL,\n src_bar \tINTEGER NOT NULL,\n dst_bar \tINTEGER NOT NULL,\n user_name VARCHAR(255),\n info TEXT\n) WITHOUT OIDS;\nALTER TABLE foos ADD CONSTRAINT foos_foo_id_pk UNIQUE (foo_id);\nCREATE INDEX foos_c_ts_idx ON foos(conduit_ts);\nCREATE INDEX foos_src_ip_idx ON foos(src_ip);\nCREATE INDEX foos_dst_ip_idx ON foos(dst_ip);\nCREATE INDEX foos_foo_class_idx ON foos(foo_class);\nCREATE INDEX foos_foo_type_idx ON foos(foo_type);\n\n\nCREATE TABLE foos001 ( ) INHERITS (foos) WITHOUT OIDS;\nALTER TABLE foos001 ADD CONSTRAINT foos001_foo_id_pk UNIQUE (foo_id);\nCREATE INDEX foos001_c_ts_idx ON foos001(conduit_ts);\nCREATE INDEX foos001_src_ip_idx ON foos001(src_ip);\nCREATE INDEX foos001_dst_ip_idx ON foos001(dst_ip);\nCREATE INDEX foos001_foo_class_idx ON foos001(foo_class);\nCREATE INDEX foos001_foo_type_idx ON foos001(foo_type);\n\nThat continues on, but you get the idea...\n\nSo, as you asked about data content, specifically regarding indices,\nhere's what the \"simulator\" creates:\n\nfoo_id - 32 character UID (generated by the UUID function in mxTools,\nwhich looks like '00beef19420053c64f3f01aeb0b4a2a5', and varies in the\nupper components more than the lower.\n\n*_ts - UNIX epoch timestamps, sequential. There's a long story behind\nnot using DATETIME format, but if that's the big issue, it can be\ndealt with.\n\n*_ip - Randomly selected 32-bit integers from a pre-generated list\ncontaining about 500 different numbers ranging from 3232235500 to\n3232236031. This is unfortunately, not too atypical from the \"real\nworld\".\n\n*_class - Randomly selected 1-100 (again, not atypical, although\nnormal distribution would be less random)\n\n*_type - Randomly selected 1-10000 (not atypical, and more random than\nin real world)\n\nHopefully this helps? \n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 12:09:35 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On Apr 4, 2005 11:52 AM, Tom Lane <[email protected]> wrote:\n>> Could we see the *exact* SQL definitions of the table and indexes?\n\n> Of course, this is a bit cleansed, since it's an internal project, but\n> only the column names are changed:\n\nThanks. No smoking gun in sight there. But out of curiosity, can you\ndo a test run with *no* indexes on the table, just to see if it behaves\nany differently? Basically I was wondering if index overhead might be\npart of the problem.\n\nAlso, the X-axis on your graphs seems to be total number of rows\ninserted ... can you relate that to elapsed real time for us?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 12:23:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "On Apr 4, 2005 12:23 PM, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > On Apr 4, 2005 11:52 AM, Tom Lane <[email protected]> wrote:\n> >> Could we see the *exact* SQL definitions of the table and indexes?\n> \n> > Of course, this is a bit cleansed, since it's an internal project, but\n> > only the column names are changed:\n> \n> Thanks. No smoking gun in sight there. But out of curiosity, can you\n> do a test run with *no* indexes on the table, just to see if it behaves\n> any differently? Basically I was wondering if index overhead might be\n> part of the problem.\n\nRunning now, but it'll take a while since I have a 3/4 second pause\nafter each COPY to better reflect \"real world\" ... the application\ndoes 1 COPY per second, or whenever it hits 1000 entries. This seemed\nto be a sane way to deal with it, and not burden the system with\nneedless index balancing, etc.\n\n> Also, the X-axis on your graphs seems to be total number of rows\n> inserted ... can you relate that to elapsed real time for us?\n\nSure, like I said, there's a 3/4 second sleep between each COPY,\nregardless of how long it took (which well, isn't quite right, but\nclose enough for this test). I've created a PNG with the X axies\nreflecting \"elapsed time\":\n\nhttp://www.amber.org/~petrilli/diagrams/pgsql_copyperf_timeline.png\n\nIn addition, I've put up the raw data I used:\n\nhttp://www.amber.org/~petrilli/diagrams/results_timeline.txt\n\nThe columns are rowcount, elapsed time, instance time.\nHopefully this might help some? This machine has nothing else running\non it other than the normal stripped down background processes (like\nsshd).\n\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 12:51:17 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:\n> The point, in the rough middle, is where the program begins inserting\n> into a new table (inherited). The X axis is the \"total\" number of rows\n> inserted.\n\nand you also mention the same data plotted with elapsed time:\nhttp://www.amber.org/~petrilli/diagrams/pgsql_copyperf_timeline.png\n\nYour graphs look identical to others I've seen, so I think we're\ntouching on something wider than your specific situation. The big\ndifference is that things seem to go back to high performance when you\nswitch to a new inherited table.\n\nI'm very interested in the graphs of elapsed time for COPY 500 rows\nagainst rows inserted. The simplistic inference from those graphs are\nthat if you only inserted 5 million rows into each table, rather than 10\nmillion rows then everything would be much quicker. I hope this doesn't\nwork, but could you try that to see if it works? I'd like to rule out a\nfunction of \"number of rows\" as an issue, or focus in on it depending\nupon the results.\n\nQ: Please can you confirm that the discontinuity on the graph at around\n5000 elapsed seconds matches EXACTLY with the switch from one table to\nanother? That is an important point.\n\nQ: How many data files are there for these relations? Wouldn't be two,\nby any chance, when we have 10 million rows in them?\n\nQ: What is the average row length?\nAbout 150-160 bytes?\n\nThanks,\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 04 Apr 2005 20:46:54 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 4, 2005 3:46 PM, Simon Riggs <[email protected]> wrote:\n> On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:\n> > The point, in the rough middle, is where the program begins inserting\n> > into a new table (inherited). The X axis is the \"total\" number of rows\n> > inserted.\n> \n> and you also mention the same data plotted with elapsed time:\n> http://www.amber.org/~petrilli/diagrams/pgsql_copyperf_timeline.png\n> \n> Your graphs look identical to others I've seen, so I think we're\n> touching on something wider than your specific situation. The big\n> difference is that things seem to go back to high performance when you\n> switch to a new inherited table.\n\nThis is correct.\n \n> I'm very interested in the graphs of elapsed time for COPY 500 rows\n> against rows inserted. The simplistic inference from those graphs are\n> that if you only inserted 5 million rows into each table, rather than 10\n> million rows then everything would be much quicker. I hope this doesn't\n> work, but could you try that to see if it works? I'd like to rule out a\n> function of \"number of rows\" as an issue, or focus in on it depending\n> upon the results.\n> \n> Q: Please can you confirm that the discontinuity on the graph at around\n> 5000 elapsed seconds matches EXACTLY with the switch from one table to\n> another? That is an important point.\n\nWell, the change over happens at 51593.395205 seconds :-) Here's two\nlines from the results with row count and time added:\n\n10000000\t51584.9818912\t8.41331386566\n10000500\t51593.395205\t0.416964054108\n\nNote that 10M is when it swaps. I see no reason to interpret it\ndifferently, so it seems to be totally based around switching tables\n(and thereby indices).\n\n> Q: How many data files are there for these relations? Wouldn't be two,\n> by any chance, when we have 10 million rows in them?\n\nI allow PostgreSQL to manage all the data files itself, so here's the\ndefault tablespace:\n\ntotal 48\ndrwx------ 2 pgsql pgsql 4096 Jan 26 20:59 1\ndrwx------ 2 pgsql pgsql 4096 Dec 17 19:15 17229\ndrwx------ 2 pgsql pgsql 4096 Feb 16 17:55 26385357\ndrwx------ 2 pgsql pgsql 4096 Mar 24 23:56 26425059\ndrwx------ 2 pgsql pgsql 8192 Mar 28 11:31 26459063\ndrwx------ 2 pgsql pgsql 8192 Mar 31 23:54 26475755\ndrwx------ 2 pgsql pgsql 4096 Apr 4 15:07 26488263\n[root@bigbird base]# du\n16624 ./26425059\n5028 ./26385357\n5660 ./26459063\n4636 ./17229\n6796 ./26475755\n4780 ./1\n1862428 ./26488263\n1905952 .\n\n> Q: What is the average row length?\n> About 150-160 bytes?\n\nRaw data is around 150bytes, after insertion, I'd need to do some\nother calculations.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 15:56:58 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Mon, 2005-04-04 at 15:56 -0400, Christopher Petrilli wrote:\n> On Apr 4, 2005 3:46 PM, Simon Riggs <[email protected]> wrote:\n> > On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:\n> > > The point, in the rough middle, is where the program begins inserting\n> > > into a new table (inherited). The X axis is the \"total\" number of rows\n> > > inserted.\n> > \n> > and you also mention the same data plotted with elapsed time:\n> > http://www.amber.org/~petrilli/diagrams/pgsql_copyperf_timeline.png\n> > \n> > Your graphs look identical to others I've seen, so I think we're\n> > touching on something wider than your specific situation. The big\n> > difference is that things seem to go back to high performance when you\n> > switch to a new inherited table.\n> \n> This is correct.\n> \n> > I'm very interested in the graphs of elapsed time for COPY 500 rows\n> > against rows inserted. The simplistic inference from those graphs are\n> > that if you only inserted 5 million rows into each table, rather than 10\n> > million rows then everything would be much quicker. I hope this doesn't\n> > work, but could you try that to see if it works? I'd like to rule out a\n> > function of \"number of rows\" as an issue, or focus in on it depending\n> > upon the results.\n\nAny chance of running a multiple load of 4 million rows per table,\nleaving the test running for at least 3 tables worth (12+ M rows)?\n\n> > \n> > Q: Please can you confirm that the discontinuity on the graph at around\n> > 5000 elapsed seconds matches EXACTLY with the switch from one table to\n> > another? That is an important point.\n> \n> Well, the change over happens at 51593.395205 seconds :-) Here's two\n> lines from the results with row count and time added:\n> \n> 10000000\t51584.9818912\t8.41331386566\n> 10000500\t51593.395205\t0.416964054108\n> \n> Note that 10M is when it swaps. I see no reason to interpret it\n> differently, so it seems to be totally based around switching tables\n> (and thereby indices).\n\nOK, but do you have some other external knowledge that it is definitely\nhappening at that time? Your argument above seems slightly circular to\nme.\n\nThis is really important because we need to know whether it ties in with\nthat event, or some other. \n\nHave you run this for more than 2 files, say 3 or more?\n\nYou COMMIT after each 500 rows?\n\n> > Q: How many data files are there for these relations? Wouldn't be two,\n> > by any chance, when we have 10 million rows in them?\n> \n> I allow PostgreSQL to manage all the data files itself, so here's the\n> default tablespace:\n> \n> total 48\n> drwx------ 2 pgsql pgsql 4096 Jan 26 20:59 1\n> drwx------ 2 pgsql pgsql 4096 Dec 17 19:15 17229\n> drwx------ 2 pgsql pgsql 4096 Feb 16 17:55 26385357\n> drwx------ 2 pgsql pgsql 4096 Mar 24 23:56 26425059\n> drwx------ 2 pgsql pgsql 8192 Mar 28 11:31 26459063\n> drwx------ 2 pgsql pgsql 8192 Mar 31 23:54 26475755\n> drwx------ 2 pgsql pgsql 4096 Apr 4 15:07 26488263\n> [root@bigbird base]# du\n> 16624 ./26425059\n> 5028 ./26385357\n> 5660 ./26459063\n> 4636 ./17229\n> 6796 ./26475755\n> 4780 ./1\n> 1862428 ./26488263\n> 1905952 .\n\nOK. 
Please...\ncd $PGDATA/base/26488263\nls -l\n\nI'm looking for the number of files associated with each inherited table\n(heap).\n\n> > Q: What is the average row length?\n> > About 150-160 bytes?\n> \n> Raw data is around 150bytes, after insertion, I'd need to do some\n> other calculations.\n\nBy my calculations, you should have just 2 data files per 10M rows for\nthe main table. The performance degradation seems to coincide with the\npoint where we move to inserting into the second of the two files.\n\nI'm not looking for explanations yet, just examining coincidences and\ntrying to predict the behaviour based upon conjectures.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 04 Apr 2005 21:11:19 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
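The file-count question above can also be answered from inside the database: pg_class records each relation's size in 8 kB pages, and the storage manager starts a new 1 GB segment file (<relfilenode>.1, .2, ...) every 131072 pages. A rough sketch; relpages is only refreshed by VACUUM/ANALYZE, and the 'events%' naming pattern is an assumption taken from the loader shown later in this thread:

SELECT relname,
       relpages,
       relpages::bigint * 8192   AS approx_bytes,
       ceil(relpages / 131072.0) AS approx_segment_files
FROM pg_class
WHERE relname LIKE 'events%'
ORDER BY relname;

Anything reporting more than one segment file has crossed the 1 GB boundary that Simon's back-of-the-envelope calculation above is probing for.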
{
"msg_contents": "On Apr 4, 2005 4:11 PM, Simon Riggs <[email protected]> wrote:\n> > > I'm very interested in the graphs of elapsed time for COPY 500 rows\n> > > against rows inserted. The simplistic inference from those graphs are\n> > > that if you only inserted 5 million rows into each table, rather than 10\n> > > million rows then everything would be much quicker. I hope this doesn't\n> > > work, but could you try that to see if it works? I'd like to rule out a\n> > > function of \"number of rows\" as an issue, or focus in on it depending\n> > > upon the results.\n> \n> Any chance of running a multiple load of 4 million rows per table,\n> leaving the test running for at least 3 tables worth (12+ M rows)?\n\nAs soon as I get done running a test without indexes :-) \n\n> > > Q: Please can you confirm that the discontinuity on the graph at around\n> > > 5000 elapsed seconds matches EXACTLY with the switch from one table to\n> > > another? That is an important point.\n> >\n> > Well, the change over happens at 51593.395205 seconds :-) Here's two\n> > lines from the results with row count and time added:\n> >\n> > 10000000 51584.9818912 8.41331386566\n> > 10000500 51593.395205 0.416964054108\n> >\n> > Note that 10M is when it swaps. I see no reason to interpret it\n> > differently, so it seems to be totally based around switching tables\n> > (and thereby indices).\n> \n> OK, but do you have some other external knowledge that it is definitely\n> happening at that time? Your argument above seems slightly circular to\n> me.\n\nMy program *SPECIFICALLY* counts to 10M then switches the COPY statement.\n\n> This is really important because we need to know whether it ties in with\n> that event, or some other.\n\nUnless basic integer math is failing, it's definately happening at 10M rows.\n\n> Have you run this for more than 2 files, say 3 or more?\n\nYou mean, 3 or more tables? I'm not sure which type of files you are\nreffering to here.\n\n> You COMMIT after each 500 rows?\n\nThis is done using COPY syntax, not INSERT syntax. So I suppose \"yes\"\nI do. The file that is being used for COPY is kept on a ramdisk.\n\n> OK. 
Please...\n> cd $PGDATA/base/26488263\n> ls -l\n\n[root@bigbird base]# cd 26488263/\n[root@bigbird 26488263]# ls -l\ntotal 2003740\n-rw------- 1 pgsql pgsql 49152 Apr 4 12:26 1247\n-rw------- 1 pgsql pgsql 245760 Apr 4 12:27 1249\n-rw------- 1 pgsql pgsql 573440 Apr 4 12:24 1255\n-rw------- 1 pgsql pgsql 57344 Apr 4 14:44 1259\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16384\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 16386\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 16388\n-rw------- 1 pgsql pgsql 24576 Apr 4 12:29 16390\n-rw------- 1 pgsql pgsql 106496 Apr 4 12:24 16392\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16394\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16396\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16398\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16400\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16402\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16404\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16406\n-rw------- 1 pgsql pgsql 212992 Apr 4 14:44 16408\n-rw------- 1 pgsql pgsql 49152 Apr 4 12:24 16410\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16412\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16414\n-rw------- 1 pgsql pgsql 114688 Apr 4 12:24 16416\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16418\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16672\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16674\n-rw------- 1 pgsql pgsql 237568 Apr 4 12:26 16676\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16678\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16679\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16680\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16681\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16682\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16683\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16684\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16685\n-rw------- 1 pgsql pgsql 245760 Apr 4 12:26 16686\n-rw------- 1 pgsql pgsql 73728 Apr 4 12:26 16687\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16688\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16689\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16690\n-rw------- 1 pgsql pgsql 65536 Apr 4 12:26 16691\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16692\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16693\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16694\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16695\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16696\n-rw------- 1 pgsql pgsql 32768 Apr 4 12:24 16697\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16698\n-rw------- 1 pgsql pgsql 163840 Apr 4 12:26 16701\n-rw------- 1 pgsql pgsql 196608 Apr 4 12:26 16702\n-rw------- 1 pgsql pgsql 73728 Apr 4 12:24 16703\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16706\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16707\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16708\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16709\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16710\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16711\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16712\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16713\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16714\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16715\n-rw------- 1 pgsql pgsql 32768 Apr 4 12:24 16716\n-rw------- 1 pgsql pgsql 106496 Apr 4 12:24 16717\n-rw------- 1 pgsql pgsql 106496 Apr 4 12:24 16718\n-rw------- 1 pgsql pgsql 1212416 Apr 4 12:24 16719\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16720\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16721\n-rw------- 1 pgsql pgsql 40960 Apr 4 14:44 16724\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16727\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16728\n-rw------- 1 pgsql pgsql 16384 Apr 4 
12:24 16729\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16730\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:26 16731\n-rw------- 1 pgsql pgsql 49152 Apr 4 12:26 16732\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16735\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16737\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16738\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16740\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16744\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16746\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16750\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16752\n-rw------- 1 pgsql pgsql 122880 Apr 4 12:24 16753\n-rw------- 1 pgsql pgsql 16384 Apr 4 12:24 16755\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 16759\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 16761\n-rw------- 1 pgsql pgsql 40960 Apr 4 12:24 17158\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17160\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17162\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17163\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17165\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17167\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17168\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17170\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17172\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17173\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17175\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17177\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17178\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17180\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17182\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17183\n-rw------- 1 pgsql pgsql 0 Apr 4 12:24 17185\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:24 17187\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488264\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488266\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488268\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488269\n-rw------- 1 pgsql pgsql 1073741824 Apr 4 15:07 26488271\n-rw------- 1 pgsql pgsql 407527424 Apr 4 16:17 26488271.1\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488273\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488275\n-rw------- 1 pgsql pgsql 565067776 Apr 4 16:17 26488276\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488278\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488280\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488282\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488283\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488285\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488287\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488289\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488290\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488292\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488294\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488296\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488297\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488299\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488301\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488303\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488304\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488306\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488308\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488310\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488311\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488313\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488315\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488317\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488318\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488320\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488322\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488324\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488325\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488327\n-rw------- 1 
pgsql pgsql 0 Apr 4 12:26 26488329\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488331\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488332\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488334\n-rw------- 1 pgsql pgsql 0 Apr 4 12:26 26488336\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488338\n-rw------- 1 pgsql pgsql 8192 Apr 4 12:26 26488339\n-rw------- 1 pgsql pgsql 60045 Apr 4 12:24 pg_internal.init\n-rw------- 1 pgsql pgsql 4 Apr 4 12:24 PG_VERSION\n\nHopefully this helps.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 16:18:42 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "\n\n> This is done using COPY syntax, not INSERT syntax. So I suppose \"yes\"\n> I do. The file that is being used for COPY is kept on a ramdisk.\n\n\tCOPY or psql \\copy ?\n\tIf you wanna be sure you commit after each COPY, launch a psql in a shell \nand check if the inserted rows are visible (watching SELECT count(*) grow \nwill do)\n",
"msg_date": "Mon, 04 Apr 2005 22:53:21 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 4, 2005 4:53 PM, PFC <[email protected]> wrote:\n> > This is done using COPY syntax, not INSERT syntax. So I suppose \"yes\"\n> > I do. The file that is being used for COPY is kept on a ramdisk.\n> \n> COPY or psql \\copy ?\n> If you wanna be sure you commit after each COPY, launch a psql in a shell\n> and check if the inserted rows are visible (watching SELECT count(*) grow\n> will do)\n\nThe script is Python, using pyexpect (a'la expect) and does this, exactly:\n\npsql = pexpect.spawn('/usr/local/pgsql/bin/psql -d bench2 ')\n[ ...]\nstart = time.time()\npsql.expect_exact('bench2=#')\npsql.sendline(\"COPY events%03i FROM '/mnt/tmpfs/loadfile';\" % (tablenum+1))\nresults.write('%s\\n' % (time.time() - start))\nresults.flush()\n\nThere's other code, but it's all related to building the loadfile.\nNote that I'm specifically including the time it takes to get the\nprompt back in the timing (but it does slip 1 loop, which isn't\nrelevent).\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 16:56:26 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
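For what it's worth, the prompt-scraping in the loader above isn't strictly needed to get per-batch timings: psql can report elapsed time itself. A minimal psql sketch using the same table and load-file names as the script (note that \copy streams the file through the client, whereas the script's server-side COPY has the backend read the file directly):

\timing
\copy events001 from '/mnt/tmpfs/loadfile'

With \timing on, psql prints the elapsed time after each statement, which keeps the measurement free of the expect/prompt round-trip the script deliberately includes.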
{
"msg_contents": "On Mon, 2005-04-04 at 16:18 -0400, Christopher Petrilli wrote:\n> On Apr 4, 2005 4:11 PM, Simon Riggs <[email protected]> wrote:\n> > > > I'm very interested in the graphs of elapsed time for COPY 500 rows\n> > > > against rows inserted. The simplistic inference from those graphs are\n> > > > that if you only inserted 5 million rows into each table, rather than 10\n> > > > million rows then everything would be much quicker. I hope this doesn't\n> > > > work, but could you try that to see if it works? I'd like to rule out a\n> > > > function of \"number of rows\" as an issue, or focus in on it depending\n> > > > upon the results.\n> > \n> > Any chance of running a multiple load of 4 million rows per table,\n> > leaving the test running for at least 3 tables worth (12+ M rows)?\n> \n> As soon as I get done running a test without indexes :-) \n> \n> > > > Q: Please can you confirm that the discontinuity on the graph at around\n> > > > 5000 elapsed seconds matches EXACTLY with the switch from one table to\n> > > > another? That is an important point.\n> > >\n> > > Well, the change over happens at 51593.395205 seconds :-) Here's two\n> > > lines from the results with row count and time added:\n> > >\n> > > 10000000 51584.9818912 8.41331386566\n> > > 10000500 51593.395205 0.416964054108\n> > >\n> My program *SPECIFICALLY* counts to 10M then switches the COPY statement.\n\n> > OK. Please...\n> > cd $PGDATA/base/26488263\n> > ls -l\n> \n> [root@bigbird base]# cd 26488263/\n> [root@bigbird 26488263]# ls -l\n> total 2003740\n\n> -rw------- 1 pgsql pgsql 1073741824 Apr 4 15:07 26488271\n> -rw------- 1 pgsql pgsql 407527424 Apr 4 16:17 26488271.1\n\nCan you do:\nselect relname from pg_class where relfilenode = 26488271\nand confirm that the name is the table you've been loading...\n\nCouldn't see all your indexes... are they still there?\n\nThanks,\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Mon, 04 Apr 2005 21:58:23 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 4, 2005 4:58 PM, Simon Riggs <[email protected]> wrote:\n> Can you do:\n> select relname from pg_class where relfilenode = 26488271\n> and confirm that the name is the table you've been loading...\n\nIt is.\n \n> Couldn't see all your indexes... are they still there?\n\nNope, I'm running a second run without the auxilary indices. I only\nhave the primary key index. So far, a quick scan with the eye says\nthat it's behaving \"better\", but beginning to have issues again. I'll\npost results as soon as they are done.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 17:03:56 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Mon, 2005-04-04 at 17:03 -0400, Christopher Petrilli wrote:\n> On Apr 4, 2005 4:58 PM, Simon Riggs <[email protected]> wrote:\n> > Can you do:\n> > select relname from pg_class where relfilenode = 26488271\n> > and confirm that the name is the table you've been loading...\n> \n> It is.\n> \n> > Couldn't see all your indexes... are they still there?\n> \n> Nope, I'm running a second run without the auxilary indices. I only\n> have the primary key index. So far, a quick scan with the eye says\n> that it's behaving \"better\", but beginning to have issues again. I'll\n> post results as soon as they are done.\n\nHmmm....\n\nBefore I start to tunnel-vision on a particular coincidence...\n\nHow much memory have you got on the system?\nHow much of that have you allocated to various tasks?\nWhat else is happening on your system?\nTell us more about disk set-up and other hardware related things.\nDisk cache...disk speed...seek times....etc\n\nBest Regards, Simon Riggs\n\n\n\n\n\n",
"msg_date": "Mon, 04 Apr 2005 23:44:00 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "If I'm getting the point of this thread correctly, have a huge amount\nof data in one table degrades INSERT/COPY performance even with just a\nPKEY index. If that's about the size of it, read on. If not, ignore\nme because I missed something.\n\nOn Apr 4, 2005 10:44 PM, Simon Riggs <[email protected]> wrote:\n> Before I start to tunnel-vision on a particular coincidence...\n> \n\nDon't worry too much about tunnel vision. I see the same thing every\nday with multi-million row tables. The bigger the table gets (with\nonly a pkey index) the slower the inserts go. If I start over\n(truncate, drop/create table), or if I point the initial load at a new\ntable, everything gets speedy. I've always figured it was a function\nof table size and learned to live with it...\n\n> How much memory have you got on the system?\n\nOn mine, 16G\n\n> How much of that have you allocated to various tasks?\n\nshared buffers: 15000\n\n> What else is happening on your system?\n\nNothing on mine.\n\n> Tell us more about disk set-up and other hardware related things.\n\n6-disk RAID10 on a Compaq SmartArray 6404 with 256M BB cache, WAL on\n2-disk mirror on built in SmartArray5 controller.\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Mon, 4 Apr 2005 23:28:25 +0000",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 4, 2005 6:44 PM, Simon Riggs <[email protected]> wrote:\n> Before I start to tunnel-vision on a particular coincidence...\n> \n> How much memory have you got on the system?\n\nNow, 2Gb, but most of it is free in this situation. Earlier, I posted\nsome of the settings related to work mem.\n\n> How much of that have you allocated to various tasks?\n\nDo you mean inside PostgreSQL?\n\n> What else is happening on your system?\n\nsshd, that's it :-)\n\n> Tell us more about disk set-up and other hardware related things.\n> Disk cache...disk speed...seek times....etc\n\nSure, here's the system configuration:\n\n* AMD64/3000\n* 2GB RAM (was 1GB, has made no difference)\n* 1 x 120GB SATA drive (w/WAL), 7200RPM Seagate\n* 1 x 160GB SATA drive (main), 7200RPM Seagate\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 21:31:38 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On Apr 4, 2005 12:23 PM, Tom Lane <[email protected]> wrote:\n>> do a test run with *no* indexes on the table, just to see if it behaves\n>> any differently? Basically I was wondering if index overhead might be\n>> part of the problem.\n\n> http://www.amber.org/~petrilli/diagrams/pgsql_copy500_pkonly.png\n\n> I appologize, I forgot to kill the PK, but as you can see, the curve\n> flattened out a lot. It still begins to increase in what seems like\n> the same place. You can find the results themselves at:\n\nYeah, this confirms the thought that the indexes are the source of\nthe issue. (Which is what I'd expect, because a bare INSERT ought to be\nan approximately constant-time operation. But it's good to verify.)\n\nNow some amount of slowdown is to be expected as the indexes get larger,\nsince it ought to take roughly O(log N) time to insert a new entry in an\nindex of size N. The weird thing about your curves is the very sudden\njump in the insert times.\n\nWhat I think might be happening is that the \"working set\" of pages\ntouched during index inserts is gradually growing, and at some point it\nexceeds shared_buffers, and at that point performance goes in the toilet\nbecause we are suddenly doing lots of reads to pull in index pages that\nfell out of the shared buffer area.\n\nIt would be interesting to watch the output of iostat or vmstat during\nthis test run. If I'm correct about this, the I/O load should be\nbasically all writes during the initial part of the test, and then\nsuddenly develop a significant and increasing fraction of reads at the\npoint where the slowdown occurs.\n\nThe indicated fix of course is to increase shared_buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 22:36:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
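One quick way to test this working-set theory from the catalogs is to compare the size of the indexes on the active child table against shared_buffers; both figures are counts of 8 kB pages/buffers in this release, and relpages is only refreshed by VACUUM/ANALYZE, so run ANALYZE first. A sketch, with the 'events001' table name taken from the loader used in this thread:

SELECT i.relname                         AS index_name,
       i.relpages                        AS index_pages,
       current_setting('shared_buffers') AS shared_buffers
FROM pg_class t
JOIN pg_index x ON x.indrelid = t.oid
JOIN pg_class i ON i.oid = x.indexrelid
WHERE t.relname = 'events001';

If the combined index_pages climbs well past shared_buffers around the point where the COPY times jump, that is consistent with the read-traffic explanation above.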
{
"msg_contents": "On Apr 4, 2005 10:36 PM, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > On Apr 4, 2005 12:23 PM, Tom Lane <[email protected]> wrote:\n> >> do a test run with *no* indexes on the table, just to see if it behaves\n> >> any differently? Basically I was wondering if index overhead might be\n> >> part of the problem.\n> \n> > http://www.amber.org/~petrilli/diagrams/pgsql_copy500_pkonly.png\n> \n> > I appologize, I forgot to kill the PK, but as you can see, the curve\n> > flattened out a lot. It still begins to increase in what seems like\n> > the same place. You can find the results themselves at:\n> \n> Yeah, this confirms the thought that the indexes are the source of\n> the issue. (Which is what I'd expect, because a bare INSERT ought to be\n> an approximately constant-time operation. But it's good to verify.)\n\nThis seemsed to be my original idea, but I wanted to eliminate\neverything else as much as possible. I was also concerned that I might\nbe hitting a bad case in the trees. I had to change some UID\ngeneration code to better hash, so...\n \n> Now some amount of slowdown is to be expected as the indexes get larger,\n> since it ought to take roughly O(log N) time to insert a new entry in an\n> index of size N. The weird thing about your curves is the very sudden\n> jump in the insert times.\n\nRight, I expected O(log N) behavior myself, and it seems to behave\nthat way, if you look at the first section (although there's some\ninteresting patterns that are visible if you exclude data outside the\n90th percentile in the first section, that seems to coincide with some\nwrite activity.\n\n> It would be interesting to watch the output of iostat or vmstat during\n> this test run. If I'm correct about this, the I/O load should be\n> basically all writes during the initial part of the test, and then\n> suddenly develop a significant and increasing fraction of reads at the\n> point where the slowdown occurs.\n\nWell, I can track this on a run, if it would be useful, but I think\nyou're right as it matches what I saw from looking at iostat at those\npoints.\n\n> The indicated fix of course is to increase shared_buffers.\n\nAny idea where it should be set?\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 22:54:57 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On Apr 4, 2005 10:36 PM, Tom Lane <[email protected]> wrote:\n>> The indicated fix of course is to increase shared_buffers.\n\n> Any idea where it should be set?\n\nNot really. An upper bound would be the total size of the finished\nindexes for one 10M-row table, but one would suppose that that's\noverkill. The leaf pages shouldn't have to stay in RAM to have\nreasonable behavior --- the killer case is when upper-level tree\npages drop out. Or that's what I'd expect anyway.\n\nYou could probably drop the inter-insert sleep for testing purposes,\nif you want to experiment with several shared_buffers values quickly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 23:35:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> What I think might be happening is that the \"working set\" of pages\n> touched during index inserts is gradually growing, and at some point it\n> exceeds shared_buffers, and at that point performance goes in the toilet\n> because we are suddenly doing lots of reads to pull in index pages that\n> fell out of the shared buffer area.\n\nAll this is happening within a single transaction too, right? So there hasn't\nbeen an fsync the entire time. It's entirely up to the kernel when to decide\nto start writing data. \n\nIt's possible it's just buffering all the writes in memory until the amount of\nfree buffers drops below some threshold then it suddenly starts writing out\nbuffers. \n\n> It would be interesting to watch the output of iostat or vmstat during\n> this test run. If I'm correct about this, the I/O load should be\n> basically all writes during the initial part of the test, and then\n> suddenly develop a significant and increasing fraction of reads at the\n> point where the slowdown occurs.\n\nI think he's right, if you see a reasonable write volume before the\nperformance drop followed by a sudden increase in read volume (and decrease of\nwrite volume proportionate to the drop in performance) then it's just shared\nbuffers becoming a bottleneck.\n\nIf there's hardly any write volume before, then a sudden increase in write\nvolume despite a drop in performance then I might be right. In which case you\nmight want to look into tools to tune your kernel vm system.\n\n\n-- \ngreg\n\n",
"msg_date": "04 Apr 2005 23:45:47 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On 04 Apr 2005 23:45:47 -0400, Greg Stark <[email protected]> wrote:\n> \n> Tom Lane <[email protected]> writes:\n> \n> > What I think might be happening is that the \"working set\" of pages\n> > touched during index inserts is gradually growing, and at some point it\n> > exceeds shared_buffers, and at that point performance goes in the toilet\n> > because we are suddenly doing lots of reads to pull in index pages that\n> > fell out of the shared buffer area.\n> \n> All this is happening within a single transaction too, right? So there hasn't\n> been an fsync the entire time. It's entirely up to the kernel when to decide\n> to start writing data.\n\nThis was my concern, and in fact moving from ext3 -> XFS has helped\nsubstantially in this regard. This is all happening inside COPY\nstatements, so there's effectively a commit every 500 rows. I could\nenlarge this, but I didn't notice a huge increase in performance when\ndoing tests on smaller bits.\n\nAlso, you are correct, I am running without fsync, although I could\nchange that if you thought it would \"smooth\" the performance. The\nissue is less absolute performance than something more deterministic. \nGoing from 0.05 seconds for a 500 row COPY to 26 seconds really messes\nwith the system.\n\nOne thing that was mentioned early on, and I hope people remember, is\nthat I am running autovacuum in the background, but the timing of it\nseems to have little to do with the system's problems, at least the\ndebug output doesn't conincide with performance loss.\n\n> It's possible it's just buffering all the writes in memory until the amount of\n> free buffers drops below some threshold then it suddenly starts writing out\n> buffers.\n\nThat was happening with ext3, actually, or at least to the best of my knowledge.\n\n> > It would be interesting to watch the output of iostat or vmstat during\n> > this test run. If I'm correct about this, the I/O load should be\n> > basically all writes during the initial part of the test, and then\n> > suddenly develop a significant and increasing fraction of reads at the\n> > point where the slowdown occurs.\n> \n> I think he's right, if you see a reasonable write volume before the\n> performance drop followed by a sudden increase in read volume (and decrease of\n> write volume proportionate to the drop in performance) then it's just shared\n> buffers becoming a bottleneck.\n\nI've set shared_buffers to 16000 (from the original 1000) and am\nrunning now, without the pauses. We'll see what it looks like, but so\nfar it seems to be running faster. How much and how it degrades will\nbe an interesting view.\n \n> If there's hardly any write volume before, then a sudden increase in write\n> volume despite a drop in performance then I might be right. In which case you\n> might want to look into tools to tune your kernel vm system.\n\nHere's a quick snapshot of iostat:\n\nLinux 2.6.9-1.667 (bigbird.amber.org) 04/04/2005\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.05 0.01 0.63 13.15 85.17\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nhda 0.00 0.00 0.00 3616 0\nsda 23.15 68.09 748.89 246884021 2715312654\nsdb 19.08 37.65 773.03 136515457 2802814036\n\nThe first 3 columns have been identical (or nearly so) the whole time,\nwhich tells me the system is pegged in its performance on IO. This is\nnot surprising.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 4 Apr 2005 23:55:00 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> All this is happening within a single transaction too, right? So there hasn't\n> been an fsync the entire time. It's entirely up to the kernel when to decide\n> to start writing data. \n\nNo ... there's a commit every 500 records. However, I think Chris said\nhe was running with fsync off; so you're right that the kernel is at\nliberty to write stuff to disk when it feels like. It could be that\nthose outlier points are transactions that occurred in the middle of\nperiodic syncer-driven mass writes. Maybe fsync off is\ncounterproductive for this situation?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 23:57:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ? "
},
{
"msg_contents": "On Apr 4, 2005 11:57 PM, Tom Lane <[email protected]> wrote:\n> Greg Stark <[email protected]> writes:\n> > All this is happening within a single transaction too, right? So there hasn't\n> > been an fsync the entire time. It's entirely up to the kernel when to decide\n> > to start writing data.\n> \n> No ... there's a commit every 500 records. However, I think Chris said\n> he was running with fsync off; so you're right that the kernel is at\n> liberty to write stuff to disk when it feels like. It could be that\n> those outlier points are transactions that occurred in the middle of\n> periodic syncer-driven mass writes. Maybe fsync off is\n> counterproductive for this situation?\n\nLooking at preliminary results from running with shared_buffers at\n16000, it seems this may be correct. Performance was flatter for a\nBIT longer, but slammed right into the wall and started hitting the\n3-30 second range per COPY. I've restarted the run, with fsync turned\non (fdatasync), and we'll see.\n\nMy fear is that it's some bizarre situation interacting with both\nissues, and one that might not be solvable. Does anyone else have\nmuch experience with this sort of sustained COPY?\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Tue, 5 Apr 2005 00:16:27 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
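For reference, the knobs being varied across these runs live in postgresql.conf; the values below are simply the ones mentioned in the thread (shared_buffers raised from 1000 to 16000, fsync re-enabled with fdatasync as the sync method), not recommendations, and changing shared_buffers requires a server restart:

shared_buffers = 16000        # 8 kB buffers; was 1000 in the earlier runs
fsync = true                  # was off for the earlier runs
wal_sync_method = fdatasync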
{
"msg_contents": "On Tue, Apr 05, 2005 at 12:16:27AM -0400, Christopher Petrilli wrote:\n> My fear is that it's some bizarre situation interacting with both\n> issues, and one that might not be solvable. Does anyone else have\n> much experience with this sort of sustained COPY?\n\nYou might ask the guy who just posted to -admin about a database that's\ndoing 340M inserts a day in 300M transactions...\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 5 Apr 2005 00:03:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 5, 2005 12:16 AM, Christopher Petrilli <[email protected]> wrote:\n> Looking at preliminary results from running with shared_buffers at\n> 16000, it seems this may be correct. Performance was flatter for a\n> BIT longer, but slammed right into the wall and started hitting the\n> 3-30 second range per COPY. I've restarted the run, with fsync turned\n> on (fdatasync), and we'll see.\n> \n> My fear is that it's some bizarre situation interacting with both\n> issues, and one that might not be solvable. Does anyone else have\n> much experience with this sort of sustained COPY?\n\nWell, here's the results:\n\nhttp://www.amber.org/~petrilli/diagrams/pgsql_copy500_comparison.png\n\nThe red is the run with shared_buffers turned up, but fsync off.\nThe blue is the run with shared_buffers turned up, but fsync on.\n\nNote that it hits the wall sooner. Unfortunately, my brain is fried,\nand not sure what that means!\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Tue, 5 Apr 2005 10:27:10 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Mon, 2005-04-04 at 22:36 -0400, Tom Lane wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > On Apr 4, 2005 12:23 PM, Tom Lane <[email protected]> wrote:\n> >> do a test run with *no* indexes on the table, just to see if it behaves\n> >> any differently? Basically I was wondering if index overhead might be\n> >> part of the problem.\n> \n> > http://www.amber.org/~petrilli/diagrams/pgsql_copy500_pkonly.png\n> \n> > I appologize, I forgot to kill the PK, but as you can see, the curve\n> > flattened out a lot. It still begins to increase in what seems like\n> > the same place. You can find the results themselves at:\n>\n> Yeah, this confirms the thought that the indexes are the source of\n> the issue. (Which is what I'd expect, because a bare INSERT ought to be\n> an approximately constant-time operation. But it's good to verify.)\n\nYup, indexes are the best explanation so far - block extension needs\nsome work, but I doubted that it was the source of this effect.\n\n> Now some amount of slowdown is to be expected as the indexes get larger,\n> since it ought to take roughly O(log N) time to insert a new entry in an\n> index of size N. The weird thing about your curves is the very sudden\n> jump in the insert times.\n\nWell, ISTM that the curve is far from unique. Mark's OSDL tests show\nthem too. What was wierd, for me, was that it \"resets\" when you move to\na new table. The index theory does accurately explain that.\n\nPerhaps the jump is not so sudden? Do I see a first small step up at\nabout 4.5M rows, then another much bigger one at 7.5M (which looks like\nthe only one at first glance)?\n\n> What I think might be happening is that the \"working set\" of pages\n> touched during index inserts is gradually growing, and at some point it\n> exceeds shared_buffers, and at that point performance goes in the toilet\n> because we are suddenly doing lots of reads to pull in index pages that\n> fell out of the shared buffer area.\n\nSo this does seem to be the best explanation and it seems a good one.\n\nIt's also an excellent advert for table and index partitioning, and some\ndamning evidence against global indexes on partitioned tables (though\nthey may still be better than the alternative...)\n\n> The indicated fix of course is to increase shared_buffers.\n\nSplitting your tables at 4M, not 10M would work even better.\n\n..\n\nAnyway, where most of this started was with Christopher's comments:\n\nOn Fri, 2005-04-01 at 14:38 -0500, Christopher Petrilli wrote: \n> This was an application originally written for MySQL/MYISAM, and it's\n> looking like PostgreSQL can't hold up for it, simply because it's \"too\n> much database\" if that makes sense. The same box, running the MySQL\n> implementation (which uses no transactions) runs around 800-1000\n> rows/second systained.\n\nB-trees aren't unique to PostgreSQL; the explanation developed here\nwould work equally well for any database system that used tree-based\nindexes. Do we still think that MySQL can do this when PostgreSQL\ncannot? How?\n\nDo we have performance test results showing the same application load\nwithout the degradation? We don't need to look at the source code to\nmeasure MySQL performance...\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Tue, 05 Apr 2005 20:48:29 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
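The partitioning being discussed is the inherited-table scheme the loader already uses; a minimal SQL sketch of that layout follows. Column names are placeholders, and note that constraint-exclusion planning does not exist in 8.0, so queries either go through the parent and touch every child, or the application targets the right child directly, as the COPY loader in this thread does:

CREATE TABLE events (
    event_id   bigint,
    logged_at  timestamptz,
    payload    text
);

CREATE TABLE events001 () INHERITS (events);
CREATE INDEX events001_logged_at_idx ON events001 (logged_at);

-- the loader then switches targets every N rows:
COPY events001 FROM '/mnt/tmpfs/loadfile';

Splitting at 4M rather than 10M rows just means rotating to the next child sooner, so each child's indexes stay small enough to fit in the buffer cache.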
{
"msg_contents": "On Apr 5, 2005 3:48 PM, Simon Riggs <[email protected]> wrote:\n> > Now some amount of slowdown is to be expected as the indexes get larger,\n> > since it ought to take roughly O(log N) time to insert a new entry in an\n> > index of size N. The weird thing about your curves is the very sudden\n> > jump in the insert times.\n> \n> Well, ISTM that the curve is far from unique. Mark's OSDL tests show\n> them too. What was wierd, for me, was that it \"resets\" when you move to\n> a new table. The index theory does accurately explain that.\n> \n> Perhaps the jump is not so sudden? Do I see a first small step up at\n> about 4.5M rows, then another much bigger one at 7.5M (which looks like\n> the only one at first glance)?\n> \n> > What I think might be happening is that the \"working set\" of pages\n> > touched during index inserts is gradually growing, and at some point it\n> > exceeds shared_buffers, and at that point performance goes in the toilet\n> > because we are suddenly doing lots of reads to pull in index pages that\n> > fell out of the shared buffer area.\n> \n> So this does seem to be the best explanation and it seems a good one.\n> \n> It's also an excellent advert for table and index partitioning, and some\n> damning evidence against global indexes on partitioned tables (though\n> they may still be better than the alternative...)\n> \n> > The indicated fix of course is to increase shared_buffers.\n> \n> Splitting your tables at 4M, not 10M would work even better.\n\nUnfortunately, given we are talking about billions of rows\npotentially, I'm concerned about that many tables when it comes to\nquery time. I assume this will kick in the genetic optimizer?\n\n\n> Anyway, where most of this started was with Christopher's comments:\n> \n> On Fri, 2005-04-01 at 14:38 -0500, Christopher Petrilli wrote:\n> > This was an application originally written for MySQL/MYISAM, and it's\n> > looking like PostgreSQL can't hold up for it, simply because it's \"too\n> > much database\" if that makes sense. The same box, running the MySQL\n> > implementation (which uses no transactions) runs around 800-1000\n> > rows/second systained.\n> \n> B-trees aren't unique to PostgreSQL; the explanation developed here\n> would work equally well for any database system that used tree-based\n> indexes. Do we still think that MySQL can do this when PostgreSQL\n> cannot? How?\n\nThere are customers in production using MySQL with 10M rows/table, and\nI have no evidence of this behavior. I do not have the test jig for\nMySQL, but I can create one, which is what I will do. Note that they\nare using MyISAM files, so there is no ACID behavior. Also, I have\nseen troubling corruption issues that I've never been able to\nconcretely identify.\n\nAbove all, I've been impressed that PostgreSQL, even when it hits this\nwall, never corrupts anything.\n\n > Do we have performance test results showing the same application load\n> without the degradation? We don't need to look at the source code to\n> measure MySQL performance...\n\nI will see what I can do in the next few days to create a similar\nlittle test for MySQL.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Tue, 5 Apr 2005 16:05:48 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Apr 5, 2005 3:48 PM, Simon Riggs <[email protected]> wrote:\n> B-trees aren't unique to PostgreSQL; the explanation developed here\n> would work equally well for any database system that used tree-based\n> indexes. Do we still think that MySQL can do this when PostgreSQL\n> cannot? How?\n> \n> Do we have performance test results showing the same application load\n> without the degradation? We don't need to look at the source code to\n> measure MySQL performance...\n\nhttp://www.amber.org/~petrilli/diagrams/comparison_mysql_pgsql.png\n\nThat chart shows MySQL (using INSERT against MyISAM tables) and\nPostgreSQL (using COPY) running with the exact same code otherwise.\nNote that MySQL does hit a bit of a wall, but nothing as drastic as\nPostgreSQL and actually maintains something \"more flat\". The red and\nblue dashed lines are the 95th percentile point.\n\nMy suspicion is that what we're seeing is WAL issues, not particularly\nindex issues. The indices just fill up the WAL faster because there's\nmore data. This is a wag basically, but it would seem to explain the\ndifference. In both cases, the indices were identical. Five on each.\n\nOne interesting thing... PostgreSQL starts out a good bit faster, but\nlooses in the end.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Tue, 5 Apr 2005 18:55:42 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Tue, 2005-04-05 at 18:55 -0400, Christopher Petrilli wrote:\n> On Apr 5, 2005 3:48 PM, Simon Riggs <[email protected]> wrote:\n> > B-trees aren't unique to PostgreSQL; the explanation developed here\n> > would work equally well for any database system that used tree-based\n> > indexes. Do we still think that MySQL can do this when PostgreSQL\n> > cannot? How?\n> > \n> > Do we have performance test results showing the same application load\n> > without the degradation? We don't need to look at the source code to\n> > measure MySQL performance...\n> \n> http://www.amber.org/~petrilli/diagrams/comparison_mysql_pgsql.png\n> \n> That chart shows MySQL (using INSERT against MyISAM tables) and\n> PostgreSQL (using COPY) running with the exact same code otherwise.\n> Note that MySQL does hit a bit of a wall, but nothing as drastic as\n> PostgreSQL and actually maintains something \"more flat\". The red and\n> blue dashed lines are the 95th percentile point.\n\nInteresting comparison. Any chance of separating the graphs as well, I'm\ninterested in the detail on both graphs.\n\nCould you estimate the apparent periodicity on the PostgreSQL graphs?\n\n> My suspicion is that what we're seeing is WAL issues, not particularly\n> index issues. The indices just fill up the WAL faster because there's\n> more data. This is a wag basically, but it would seem to explain the\n> difference. In both cases, the indices were identical. Five on each.\n\nLet's test the shared_buffers theory.\n\nWould you mind loading only 5M rows per table, but load the same amount\nof data overall? That should keep us within the comparable zone overall.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 06 Apr 2005 09:09:58 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
},
{
"msg_contents": "On Tue, 2005-04-05 at 16:05 -0400, Christopher Petrilli wrote:\n> On Apr 5, 2005 3:48 PM, Simon Riggs <[email protected]> wrote:\n> > > The indicated fix of course is to increase shared_buffers.\n> > \n> > Splitting your tables at 4M, not 10M would work even better.\n> \n> Unfortunately, given we are talking about billions of rows\n> potentially, I'm concerned about that many tables when it comes to\n> query time. I assume this will kick in the genetic optimizer?\n\nNo, it won't start using the genetic optimizer.\n\nYou could just buy more RAM and keep table size the same.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 06 Apr 2005 09:14:19 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sustained inserts per sec ... ?"
}
] |
[
{
"msg_contents": "Hello!\n\nI posted a similar question to this one about a month ago; but, for some\nreason, it never seemed to be broadcast eventhough it ended up in the\narchives. So, since I'm still struggling with this, I thought I'd\nrepost...\n\nI'm trying to optimize a query and the EXPLAIN ANALYZE (see link below)\nshows that some hash join row estimates are wrong by a factor of 2-3,\nand upwards of 7-8. There is a corresponding mis-estimation of the\namount of time taken for these steps. The database is vacuum analyzed\nnightly by a cron job. How would I go about tightening up these\nerrors? I suspect that I need to SET STATISTIC on some columns, but\nhow can I tell which columns?\n\nAny help would be appreciated.\n\nWinXP (dual Xeon 1.2GB RAM) PgSQL 8.0.1\nExplain Analyze: <http://www.indeq.com/EA.txt>\nView Definition: <http://www.indeq.com/VGAUA.txt>\n\nThe largest table contains about 10,000 rows. All tables have indexes\non their foreign keys.\n\nThanks!\nMark\n\n\n",
"msg_date": "Sun, 3 Apr 2005 22:08:03 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Correcting Hash Join Estimates"
},
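On the narrow question of SET STATISTICS: the columns to target are the join and filter columns whose estimated row counts diverge most from the actual counts in the EXPLAIN ANALYZE output. The mechanics are simple, though (as the reply below notes) better estimates will not necessarily change the plan much. A sketch in which the table and column names are placeholders and the view name anticipates the one given later in the thread; the default per-column statistics target in 8.0 is 10:

ALTER TABLE award ALTER COLUMN person_id SET STATISTICS 200;
ANALYZE award;

EXPLAIN ANALYZE SELECT * FROM view_get_all_user_award2 WHERE person_id = 1;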
{
"msg_contents": "[email protected] writes:\n> I'm trying to optimize a query and the EXPLAIN ANALYZE (see link below)\n> shows that some hash join row estimates are wrong by a factor of 2-3,\n> and upwards of 7-8.\n\nI doubt that improving those estimates would lead to markedly better\nresults. You need to think about improving the view design instead.\nWhat context is this view used in --- do you just do \"select * from\nview_get_all_user_award2\", or are there conditions added to it, or\nperhaps it gets joined with other things? Do you really need the\nDISTINCT constraint? Do you really need the ORDER BY? Can you\nsimplify the WHERE clause at all?\n\nHalf a second sounds pretty decent to me for a ten-way join with a WHERE\nclause as unstructured as that. If you really need it to execute in way\nless time, you're probably going to have to rethink your data\nrepresentation to make the query simpler.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 01:54:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correcting Hash Join Estimates "
},
{
"msg_contents": "\nOn Apr 4, 2005, at 12:54 AM, Tom Lane wrote:\n\n> [email protected] writes:\n>> I'm trying to optimize a query and the EXPLAIN ANALYZE (see link \n>> below)\n>> shows that some hash join row estimates are wrong by a factor of 2-3,\n>> and upwards of 7-8.\n>\n> I doubt that improving those estimates would lead to markedly better\n> results. You need to think about improving the view design instead.\n> What context is this view used in --- do you just do \"select * from\n> view_get_all_user_award2\", or are there conditions added to it, or\n> perhaps it gets joined with other things?\n\nYes. I forgot to show how the query is executed...\n\nselect * from view_get_all_user_award2 where person_id = 1;\n\n\n> Do you really need the\n> DISTINCT constraint?\n\nYes.\n\n> Do you really need the ORDER BY?\n\nThe customer wants an initial ordering in the displayed data.\n\n> Can you\n> simplify the WHERE clause at all?\n>\n\nI originally had a bunch of LEFT JOINs. After reading Tow's \"SQL \nTuning\", I was hoping to steer the planner into a more \"optimal\" plan \nby using a large where clause instead and doing the joins there (I \nthink they're called implicit joins). I was able to shave a couple of \nhundred milliseconds off the execution time by doing this.\n\n> Half a second sounds pretty decent to me for a ten-way join with a \n> WHERE\n> clause as unstructured as that. If you really need it to execute in \n> way\n> less time, you're probably going to have to rethink your data\n> representation to make the query simpler.\n>\n\nUnfortunately, I'm not sure I can restructure the data. I did consider \nmaterialized views. However, they couldn't be lazy and that seemed \nlike a lot of extra work for the backend for very little improvement.\n\nIf this sounds like decent performance to you... I guess I can just \ntell the complainers that it's as good as it's going to get (barring a \nmajor hardware upgrade...).\n\nThanks!\nMark\n\n",
"msg_date": "Mon, 4 Apr 2005 01:15:38 -0500",
"msg_from": "Mark Lubratt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correcting Hash Join Estimates "
}
] |
[
{
"msg_contents": "hi all.\n\nWe are designing a quite big application that requires a high-performance \ndatabase backend. \nThe rates we need to obtain are at least 5000 inserts per second and 15 \nselects per second for one connection. There should only be 3 or 4 \nsimultaneous connections. \nI think our main concern is to deal with the constant flow of data coming \nfrom the inserts that must be available for selection as fast as possible. \n(kind of real time access ...)\n\nAs a consequence, the database should rapidly increase up to more than one \nhundred gigs. We still have to determine how and when we shoud backup old \ndata to prevent the application from a performance drop. We intend to \ndevelop some kind of real-time partionning on our main table keep the \nflows up.\n\nAt first, we were planning to use SQL Server as it has features that in my \nopinion could help us a lot :\n - replication \n - clustering\n\nRecently we started to study Postgresql as a solution for our project :\n - it also has replication \n - Postgis module can handle geographic datatypes (which would \nfacilitate our developments)\n - We do have a strong knowledge on Postgresql administration (we \nuse it for production processes)\n - it is free (!) and we could save money for hardware purchase.\n\nIs SQL server clustering a real asset ? How reliable are Postgresql \nreplication tools ? Should I trust Postgresql performance for this kind \nof needs ?\n\nMy question is a bit fuzzy but any advices are most welcome... \nhardware,tuning or design tips as well :))\n\nThanks a lot.\n\nBenjamin. \n\n\nhi all.\n\nWe are designing a quite big application that requires a high-performance database backend. \nThe rates we need to obtain are at least 5000 inserts per second and 15 selects per second for one connection. There should only be 3 or 4 simultaneous connections. \nI think our main concern is to deal with the constant flow of data coming from the inserts that must be available for selection as fast as possible. (kind of real time access ...)\n\nAs a consequence, the database should rapidly increase up to more than one hundred gigs. We still have to determine how and when we shoud backup old data to prevent the application from a performance drop. We intend to develop some kind of real-time partionning on our main table keep the flows up.\n\nAt first, we were planning to use SQL Server as it has features that in my opinion could help us a lot :\n - replication \n - clustering\n\nRecently we started to study Postgresql as a solution for our project :\n - it also has replication \n - Postgis module can handle geographic datatypes (which would facilitate our developments)\n - We do have a strong knowledge on Postgresql administration (we use it for production processes)\n - it is free (!) and we could save money for hardware purchase.\n\nIs SQL server clustering a real asset ? How reliable are Postgresql replication tools ? Should I trust Postgresql performance for this kind of needs ?\n\nMy question is a bit fuzzy but any advices are most welcome... hardware,tuning or design tips as well :))\n\nThanks a lot.\n\nBenjamin.",
"msg_date": "Mon, 4 Apr 2005 10:02:22 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Postgresql vs SQLserver for this application ?"
},
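On the raw insert rate: 5000 rows/second is usually approached with batched COPY rather than row-at-a-time INSERTs, since each standalone INSERT pays its own transaction and sync overhead. A minimal sketch with a made-up table shaped like the flow described above (timestamped readings); the data lines are tab-separated and the load ends with a backslash-period line:

CREATE TABLE measurements (
    recorded_at  timestamptz,
    sensor_id    integer,
    reading      double precision
);

COPY measurements (recorded_at, sensor_id, reading) FROM STDIN;
2005-04-04 10:00:00+02	1	42.0
2005-04-04 10:00:01+02	2	17.5
\.

Batching a few hundred to a few thousand rows per COPY (or per transaction) is typically what keeps this kind of constant inflow sustainable, and it combines naturally with the time-based partitioning planned for the main table.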
{
"msg_contents": "\nWell, quite honestly, if you need this performance (5000 ins / sec) and \nfeatures (clustering, replication) - you should be looking at DB2 or Oracle.\n\nThat is not to say that PG can not do the job, or that its not a great \ndatabase, but the reason that DB2 and Oracle are still in wide use is \nbecause they answer the exact question you asked.\n\n-Barry\n\[email protected] wrote:\n> \n> hi all.\n> \n> We are designing a quite big application that requires a \n> high-performance database backend.\n> The rates we need to obtain are at least 5000 inserts per second and 15 \n> selects per second for one connection. There should only be 3 or 4 \n> simultaneous connections.\n> I think our main concern is to deal with the constant flow of data \n> coming from the inserts that must be available for selection as fast as \n> possible. (kind of real time access ...)\n> \n> As a consequence, the database should rapidly increase up to more than \n> one hundred gigs. We still have to determine how and when we shoud \n> backup old data to prevent the application from a performance drop. We \n> intend to develop some kind of real-time partionning on our main table \n> keep the flows up.\n> \n> At first, we were planning to use SQL Server as it has features that in \n> my opinion could help us a lot :\n> - replication\n> - clustering\n> \n> Recently we started to study Postgresql as a solution for our project :\n> - it also has replication\n> - Postgis module can handle geographic datatypes (which would \n> facilitate our developments)\n> - We do have a strong knowledge on Postgresql administration (we \n> use it for production processes)\n> - it is free (!) and we could save money for hardware purchase.\n> \n> Is SQL server clustering a real asset ? How reliable are Postgresql \n> replication tools ? Should I trust Postgresql performance for this kind \n> of needs ?\n> \n> My question is a bit fuzzy but any advices are most welcome... \n> hardware,tuning or design tips as well :))\n> \n> Thanks a lot.\n> \n> Benjamin.\n> \n",
"msg_date": "Sat, 09 Apr 2005 08:19:12 -0500",
"msg_from": "BarryS <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql vs SQLserver for this application ?"
}
] |
[
{
"msg_contents": "\nHi,\n\nI have just upgraded our db from 7.4.2 to 8.0.1 and we are doing some \ntesting. For some reason, we have discovered that our application performs \nmuch slower on 8.0.1.\n\nMy initial reaction was to turn on log_min_duration_statement to see what's \nhappening. However, log_min_duration_statement does not work for JDBC \nclients in 8.0.1.\n\nAs a result, I modified log_statement to all. Without my application doing \nanything, I see statements below being executed non-stop. Who is triggering \nthese statemetns? Is this normal? What am I doing wrong?\n\nI am using Fedora Core 1 - Kernel: 2.4.22-1.2174.nptl\n\nPlease help. Thanks.\n\n\n\n\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a 
ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\nIKE '%nextval(%'\n\n\n",
"msg_date": "Mon, 04 Apr 2005 10:21:16 +0000",
"msg_from": "\"anon permutation\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.0.1 much slower than 7.4.2?"
},
{
"msg_contents": "\nI would ask this on the jdbc mailling list. They might know.\n\n---------------------------------------------------------------------------\n\nanon permutation wrote:\n> \n> Hi,\n> \n> I have just upgraded our db from 7.4.2 to 8.0.1 and we are doing some \n> testing. For some reason, we have discovered that our application performs \n> much slower on 8.0.1.\n> \n> My initial reaction was to turn on log_min_duration_statement to see what's \n> happening. However, log_min_duration_statement does not work for JDBC \n> clients in 8.0.1.\n> \n> As a result, I modified log_statement to all. Without my application doing \n> anything, I see statements below being executed non-stop. Who is triggering \n> these statemetns? Is this normal? What am I doing wrong?\n> \n> I am using Fedora Core 1 - Kernel: 2.4.22-1.2174.nptl\n> \n> Please help. Thanks.\n> \n> \n> \n> \n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and 
a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM \n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM \n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND \n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc L\n> IKE '%nextval(%'\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 14 Apr 2005 19:03:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.0.1 much slower than 7.4.2?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have just upgraded our db from 7.4.2 to 8.0.1 and we are doing some\ntesting. For some reasons, we have discovered that our application\nperforms much slower on 8.0.1.\n\nMy initial reaction was to turn on log_min_duration_statement to see\nwhat's happening. However, log_min_duration_statement does not work\nfor JDBC clients in 8.0.1.\n\nAs a result, I modified log_statement to all. Without my application\ndoing anything, I see statements below being executed non-stop. Who\nis triggering these statemetns? Is this normal? What am I doing\nwrong?\n\nI am using Fedora Core 1 - Kernel: 2.4.22-1.2174.nptl\n\nPlease help. Thanks.\n\n\n\n\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a 
ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n",
"msg_date": "Mon, 4 Apr 2005 18:32:04 +0800",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "8.0.1 performance question."
},
{
"msg_contents": "<[email protected]> writes:\n> As a result, I modified log_statement to all. Without my application\n> doing anything, I see statements below being executed non-stop. Who\n> is triggering these statemetns? Is this normal? What am I doing\n> wrong?\n\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n> (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\n> L\n> IKE '%nextval(%'\n\nBetter ask about that on pgsql-jdbc. I suppose this is the trace of the\nJDBC driver trying to find out column metadata ... but if it's failing\nto cache the information that's a pretty serious performance hit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Apr 2005 11:49:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.0.1 performance question. "
},
{
"msg_contents": "Hi,\n\nI have just upgraded our db from 7.4.2 to 8.0.1 and we are doing some\ntesting. For some reasons, we have discovered that our application\nperforms much slower on 8.0.1.\n\nMy initial reaction was to turn on log_min_duration_statement to see\nwhat's happening. However, log_min_duration_statement does not work\nfor JDBC clients in 8.0.1.\n\nAs a result, I modified log_statement to all. Without my application\ndoing anything, I see statements below being executed non-stop. Who\nis triggering these statemetns? Is this normal? What am I doing\nwrong?\n\nI am using Fedora Core 1 - Kernel: 2.4.22-1.2174.nptl\n\nPlease help. Thanks.\n\nPS. I sent this email to the performance list and Tom asked me to\ncheck with this list. Therefore, here I am.\n\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST 
PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\npg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\npg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n(a.attrelid=c.oid\n) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\na.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\nL\nIKE '%nextval(%'\n\n\n---------- Forwarded message ----------\nFrom: Tom Lane <[email protected]>\nDate: Apr 4, 2005 11:49 PM\nSubject: Re: [PERFORM] 8.0.1 performance question.\nTo: [email protected]\nCc: [email protected]\n\n\n<[email protected]> writes:\n> As a result, I modified log_statement to all. Without my application\n> doing anything, I see statements below being executed non-stop. Who\n> is triggering these statemetns? Is this normal? What am I doing\n> wrong?\n\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n> (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\n> L\n> IKE '%nextval(%'\n\nBetter ask about that on pgsql-jdbc. I suppose this is the trace of the\nJDBC driver trying to find out column metadata ... but if it's failing\nto cache the information that's a pretty serious performance hit.\n\n regards, tom lane\n",
"msg_date": "Tue, 5 Apr 2005 00:00:39 +0800",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "8.0.1 performance question."
},
{
"msg_contents": "\n\nOn Tue, 5 Apr 2005 [email protected] wrote:\n\n> I see statements below being executed non-stop. Who is triggering these\n> statemetns? Is this normal? What am I doing wrong?\n> \n> \n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\n> pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\n> pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n> (a.attrelid=c.oid\n> ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\n> a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\n> L\n> IKE '%nextval(%'\n\nThese are the results of ResultSetMetaData.isNullable() and \nisAutoIncrement(), which your code is apparently calling. The results of \nthese calls are cached on a per ResultSet data. We have discussed \ncaching them at a higher level, but couldn't find a way to know when to \nflush that cache.\n\nKris Jurka\n",
"msg_date": "Mon, 4 Apr 2005 11:15:40 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.0.1 performance question."
},
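A minimal sketch of the JDBC pattern that produces the catalog queries seen in the log above. The connection settings, table, and column names are placeholders; the point is only that each per-column metadata call on the 8.0 driver is answered by querying pg_catalog.pg_attribute / pg_attrdef, which is exactly what shows up under log_statement = all.

```java
import java.sql.*;

public class MetadataCalls {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        // Placeholder connection settings.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        Statement st = conn.createStatement();

        // Any query works; the table and column names here are made up.
        ResultSet rs = st.executeQuery("SELECT id, note FROM sample_table");
        ResultSetMetaData md = rs.getMetaData();

        // On the 8.0 driver, each of these is answered by a catalog query --
        // the SELECT attnotnull / SELECT def.adsrc statements in the log.
        System.out.println("col 1 auto-increment? " + md.isAutoIncrement(1));
        System.out.println("col 2 nullable?       " + md.isNullable(2));

        rs.close();
        st.close();
        conn.close();
    }
}
```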
{
"msg_contents": "Thank you for the quick response. To help me debug what's happening,\ncan you tell me what's the difference between the 7.4 and 8.0 jdbc\ndrivers in this regard? Is this something that is newly introduced in\n8.0? Or is this something that has always been happening?\n\nThanks.\n\n\n\nOn Apr 5, 2005 12:15 AM, Kris Jurka <[email protected]> wrote:\n> \n> \n> On Tue, 5 Apr 2005 [email protected] wrote:\n> \n> > I see statements below being executed non-stop. Who is triggering these\n> > statemetns? Is this normal? What am I doing wrong?\n> >\n> >\n> > 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT attnotnull FROM\n> > pg_catalog.pg_attribute WHERE attrelid = $1 AND attnum = $2\n> > 2005-04-04 18:05:00 CST PARSELOG: statement: SELECT def.adsrc FROM\n> > pg_catalog.pg_class c JOIN pg_catalog.pg_attribute a ON\n> > (a.attrelid=c.oid\n> > ) LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND\n> > a.attnum = def.adnum) WHERE c.oid = $1 and a.attnum = $2 AND def.adsrc\n> > L\n> > IKE '%nextval(%'\n> \n> These are the results of ResultSetMetaData.isNullable() and\n> isAutoIncrement(), which your code is apparently calling. The results of\n> these calls are cached on a per ResultSet data. We have discussed\n> caching them at a higher level, but couldn't find a way to know when to\n> flush that cache.\n> \n> Kris Jurka\n>\n",
"msg_date": "Tue, 5 Apr 2005 00:44:26 +0800",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.0.1 performance question."
},
{
"msg_contents": "\n\nOn Tue, 5 Apr 2005 [email protected] wrote:\n\n> Thank you for the quick response. To help me debug what's happening,\n> can you tell me what's the difference between the 7.4 and 8.0 jdbc\n> drivers in this regard? Is this something that is newly introduced in\n> 8.0? Or is this something that has always been happening?\n> \n\n8.0 is the first driver version to take advantage of the V3 protocol's \nability to return the base tables and columns of a ResultSet. \nPreviously isNullable was hardcoded to always return \ncolumnNullableUnknown and isAutoIncrement always returned false.\n\nI guess the question is why are you calling these methods if they didn't \nwork previously?\n\nKris Jurka\n",
"msg_date": "Mon, 4 Apr 2005 12:05:17 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.0.1 performance question."
}
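If the application genuinely needs those answers for every statement it runs, one workaround is an application-side cache keyed by query text: the driver (per the note above) only caches per ResultSet, and a driver-level cache has no safe flush point. The class below is only a sketch of that idea, with the obvious caveat that a schema change would make the cached answers stale.

```java
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

/** Remembers isNullable()/isAutoIncrement() per (query, column) so the
 *  underlying catalog lookups run once per distinct statement rather than
 *  once per execution. Illustrative only; a schema change (ALTER TABLE)
 *  would leave stale entries, which is the flush problem mentioned above. */
public class ColumnInfoCache {
    private final Map<String, int[]> nullable = new HashMap<String, int[]>();
    private final Map<String, boolean[]> autoInc = new HashMap<String, boolean[]>();

    public int isNullable(String sql, ResultSetMetaData md, int col)
            throws SQLException {
        int[] cached = nullable.get(sql);
        if (cached == null) {
            cached = new int[md.getColumnCount() + 1];
            for (int i = 1; i <= md.getColumnCount(); i++) {
                cached[i] = md.isNullable(i);   // catalog query happens here
            }
            nullable.put(sql, cached);
        }
        return cached[col];
    }

    public boolean isAutoIncrement(String sql, ResultSetMetaData md, int col)
            throws SQLException {
        boolean[] cached = autoInc.get(sql);
        if (cached == null) {
            cached = new boolean[md.getColumnCount() + 1];
            for (int i = 1; i <= md.getColumnCount(); i++) {
                cached[i] = md.isAutoIncrement(i);
            }
            autoInc.put(sql, cached);
        }
        return cached[col];
    }
}
```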
] |
[
{
"msg_contents": "\n\n\n\nToday while trying to do a bulk COPY of data into a table, the process\naborted with the following error message:\n\nERROR: end-of-copy marker corrupt\nCONTEXT: COPY tbl_logged_event, line 178519: \"606447014,1492,2005-02-24\n03:16:14,2005-02-23 20:27:48,win_applog,,error,adsmclientservice,nt\nauthor...\"\n\nGoogling the error, we found reference to the '\\.' (backslash-period) being\nan \"end-of-copy marker\". Unfortunately, our data contains the\nbackslash-period character sequence. Is there any know fix or workaround\nfor this condition?\n\nWe're using Postgresql 7.3.9 and also running tests on an 8.0.1 system.\n\nThanks in advance,\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n",
"msg_date": "Mon, 4 Apr 2005 19:40:30 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bulk COPY end of copy delimiter"
},
{
"msg_contents": "Hi,\n\nOn Mon, 4 Apr 2005, Steven Rosenstein wrote:\n\n>\n>\n>\n>\n> Today while trying to do a bulk COPY of data into a table, the process\n> aborted with the following error message:\n>\n> ERROR: end-of-copy marker corrupt\n> CONTEXT: COPY tbl_logged_event, line 178519: \"606447014,1492,2005-02-24\n> 03:16:14,2005-02-23 20:27:48,win_applog,,error,adsmclientservice,nt\n> author...\"\n>\n> Googling the error, we found reference to the '\\.' (backslash-period) being\n> an \"end-of-copy marker\". Unfortunately, our data contains the\n> backslash-period character sequence. Is there any know fix or workaround\n> for this condition?\n\nAny sequence \\. in COPY input data should be escaped as \\\\. If this data\nwas generated by pg_dump then its a problem, but I haven't seen any other\nreports of this. Can I assume that you've generated the data for bulk load\nyourself? If so, there is discussion of escaping characters here:\nhttp://www.postgresql.org/docs/8.0/static/sql-copy.html.\n\nGavin\n",
"msg_date": "Tue, 5 Apr 2005 10:00:15 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk COPY end of copy delimiter"
},
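A rough sketch of the escaping Gavin describes, for data generated outside pg_dump. It assumes COPY text format with a comma delimiter, as in the report; doubling backslashes means a literal backslash-period in the data can no longer be mistaken for the end-of-copy marker. The sample field value in main() is invented.

```java
/** Escape one field for PostgreSQL COPY text format: double backslashes,
 *  and escape the delimiter, newline, and carriage return. NULLs (written
 *  as \N) are not handled here. Sketch only. */
public class CopyEscape {
    public static String escapeField(String field, char delimiter) {
        StringBuilder out = new StringBuilder(field.length());
        for (int i = 0; i < field.length(); i++) {
            char c = field.charAt(i);
            if (c == '\\') {
                out.append("\\\\");        // \. in the data becomes \\.
            } else if (c == delimiter) {
                out.append('\\').append(c);
            } else if (c == '\n') {
                out.append("\\n");
            } else if (c == '\r') {
                out.append("\\r");
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Made-up value containing backslash-period sequences.
        System.out.println(escapeField("hklm\\.software\\.adsm", ','));
    }
}
```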
{
"msg_contents": "\n\n\n\nYour assumption is correct. The data was generated out of a DB2 database,\nand uses commas as field delimiters.\n\nThank you for the workaround,\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Gavin Sherry \n <[email protected] \n u> To \n Sent by: Steven Rosenstein/New \n pgsql-performance York/IBM@IBMUS \n -owner@postgresql cc \n .org [email protected] \n Subject \n Re: [PERFORM] Bulk COPY end of copy \n 04/04/2005 08:00 delimiter \n PM \n \n \n \n \n \n\n\n\n\nHi,\n\nOn Mon, 4 Apr 2005, Steven Rosenstein wrote:\n\n>\n>\n>\n>\n> Today while trying to do a bulk COPY of data into a table, the process\n> aborted with the following error message:\n>\n> ERROR: end-of-copy marker corrupt\n> CONTEXT: COPY tbl_logged_event, line 178519: \"606447014,1492,2005-02-24\n> 03:16:14,2005-02-23 20:27:48,win_applog,,error,adsmclientservice,nt\n> author...\"\n>\n> Googling the error, we found reference to the '\\.' (backslash-period)\nbeing\n> an \"end-of-copy marker\". Unfortunately, our data contains the\n> backslash-period character sequence. Is there any know fix or workaround\n> for this condition?\n\nAny sequence \\. in COPY input data should be escaped as \\\\. If this data\nwas generated by pg_dump then its a problem, but I haven't seen any other\nreports of this. Can I assume that you've generated the data for bulk load\nyourself? If so, there is discussion of escaping characters here:\nhttp://www.postgresql.org/docs/8.0/static/sql-copy.html.\n\nGavin\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n",
"msg_date": "Mon, 4 Apr 2005 23:05:13 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk COPY end of copy delimiter"
}
] |
[
{
"msg_contents": "Maybe better for -hackers, but here it goes anyway...\n\nHas anyone looked at compressing WAL's before writing to disk? On a\nsystem generating a lot of WAL it seems there might be some gains to be\nhad WAL data could be compressed before going to disk, since today's\nmachines are generally more I/O bound than CPU bound. And unlike the\nbase tables, you generally don't need to read the WAL, so you don't\nreally need to worry about not being able to quickly scan through the\ndata without decompressing it.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 4 Apr 2005 23:04:57 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compressing WAL"
},
{
"msg_contents": "\n\"\"Jim C. Nasby\"\" <[email protected]> writes\n> Has anyone looked at compressing WAL's before writing to disk? On a\n> system generating a lot of WAL it seems there might be some gains to be\n> had WAL data could be compressed before going to disk, since today's\n> machines are generally more I/O bound than CPU bound. And unlike the\n> base tables, you generally don't need to read the WAL, so you don't\n> really need to worry about not being able to quickly scan through the\n> data without decompressing it.\n> -- \n\nThe problem is where you put the compression code? If you put it inside\nXLogInsert lock or XLogWrite lock, which will hold the lock too long? Or\nanywhere else?\n\nRegards,\nQingqing\n\n\n\n",
"msg_date": "Fri, 8 Apr 2005 13:36:40 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compressing WAL"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> Maybe better for -hackers, but here it goes anyway...\n> \n> Has anyone looked at compressing WAL's before writing to disk? On a\n> system generating a lot of WAL it seems there might be some gains to be\n> had WAL data could be compressed before going to disk, since today's\n> machines are generally more I/O bound than CPU bound. And unlike the\n> base tables, you generally don't need to read the WAL, so you don't\n> really need to worry about not being able to quickly scan through the\n> data without decompressing it.\n\nI have never heard anyone talk about it, but it seems useful. I think\ncompressing the page images written on first page modification since\ncheckpoint would be a big win.\n\nIs this a TODO?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 10 Apr 2005 21:12:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compressing WAL"
},
{
"msg_contents": "On Sun, Apr 10, 2005 at 09:12:41PM -0400, Bruce Momjian wrote:\n> I have never heard anyone talk about it, but it seems useful. I think\n> compressing the page images written on first page modification since\n> checkpoint would be a big win.\n\nCould you clarify that? Maybe I'm being naive, but it seems like you\ncould just put a compression routine between the log writer and the\nfilesystem.\n\n> Is this a TODO?\n\nISTM it's at least worth hacking something together and doing some\nperformance testing...\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 13 Apr 2005 17:27:20 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compressing WAL"
},
{
"msg_contents": "On Sun, 2005-04-10 at 21:12 -0400, Bruce Momjian wrote:\n> Jim C. Nasby wrote:\n> > Maybe better for -hackers, but here it goes anyway...\n> > \n> > Has anyone looked at compressing WAL's before writing to disk? On a\n> > system generating a lot of WAL it seems there might be some gains to be\n> > had WAL data could be compressed before going to disk, since today's\n> > machines are generally more I/O bound than CPU bound. And unlike the\n> > base tables, you generally don't need to read the WAL, so you don't\n> > really need to worry about not being able to quickly scan through the\n> > data without decompressing it.\n> \n> I have never heard anyone talk about it, but it seems useful. I think\n> compressing the page images written on first page modification since\n> checkpoint would be a big win.\n\nWell it was discussed 2-3 years ago as part of the PITR preamble. You\nmay be surprised to read that over...\n\nA summary of thoughts to date on this are:\n\nxlog.c XLogInsert places backup blocks into the wal buffers before\ninsertion, so is the right place to do this. It would be possible to do\nthis before any LWlocks are taken, so would not not necessarily impair\nscalability.\n\nCurrently XLogInsert is a severe CPU bottleneck around the CRC\ncalculation, as identified recently by Tom. Digging further, the code\nused seems to cause processor stalls on Intel CPUs, possibly responsible\nfor much of the CPU time. Discussions to move to a 32-bit CRC would also\nbe effected by this because of the byte-by-byte nature of the algorithm,\nwhatever the length of the generating polynomial. PostgreSQL's CRC\nalgorithm is the fastest BSD code available. Until improvement is made\nthere, I would not investigate compression further. Some input from\nhardware tuning specialists is required...\n\nThe current LZW compression code uses a 4096 byte lookback size, so that\nwould need to be modified to extend across a whole block. An\nalternative, suggested originally by Tom and rediscovered by me because\nI just don't read everybody's fine words in history, is to simply take\nout the freespace in the middle of every heap block that consists of\nzeros.\n\nAny solution in this area must take into account the variability of the\nsize of freespace in database blocks. Some databases have mostly full\nblocks, others vary. There would also be considerable variation in\ncompressability of blocks, especially since some blocks (e.g. TOAST) are\nlikely to already be compressed. There'd need to be some testing done to\nsee exactly the point where the costs of compression produce realisable\nbenefits.\n\nSo any solution must be able to cope with both compressed blocks and\nnon-compressed blocks. My current thinking is that this could be\nachieved by using the spare fourth bit of the BkpBlocks portion of the\nXLog structure, so that either all included BkpBlocks are compressed or\nnone of them are, and hope that allows benefit to shine through. Not\nthought about heap/index issues.\n\nIt is possible that an XLogWriter process could be used to assist in the\nCRC and compression calculations also, an a similar process used to\nassist decompression for recovery, in time.\n\nI regret I do not currently have time to pursue further.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 14 Apr 2005 00:33:42 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compressing WAL"
},
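As a standalone illustration of the compressed-or-not-per-block idea discussed above: the real change would live in C inside XLogInsert, so this is only a concept sketch, using a generic deflate routine and a one-byte flag standing in for the spare BkpBlocks bit. It compresses a page image only when compression actually wins, otherwise it stores the raw block.

```java
import java.util.zip.Deflater;

public class BlockCompressSketch {
    static final int BLCKSZ = 8192;  // typical PostgreSQL block size

    /** Returns {1, compressed bytes...} when compression shrinks the page,
     *  otherwise {0, raw bytes...}. Recovery would check the flag byte to
     *  decide whether to inflate before restoring the block. */
    public static byte[] maybeCompress(byte[] page) {
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(page);
        d.finish();
        byte[] buf = new byte[BLCKSZ];          // if it doesn't fit, it didn't shrink
        int n = d.deflate(buf);
        boolean win = d.finished() && n < page.length;
        d.end();

        byte[] out = new byte[1 + (win ? n : page.length)];
        out[0] = (byte) (win ? 1 : 0);
        System.arraycopy(win ? buf : page, 0, out, 1, out.length - 1);
        return out;
    }
}
```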
{
"msg_contents": "\nAdded to TODO:\n\n\t* Compress WAL entries [wal]\n\nI have also added this email to TODO.detail.\n\n---------------------------------------------------------------------------\n\nSimon Riggs wrote:\n> On Sun, 2005-04-10 at 21:12 -0400, Bruce Momjian wrote:\n> > Jim C. Nasby wrote:\n> > > Maybe better for -hackers, but here it goes anyway...\n> > > \n> > > Has anyone looked at compressing WAL's before writing to disk? On a\n> > > system generating a lot of WAL it seems there might be some gains to be\n> > > had WAL data could be compressed before going to disk, since today's\n> > > machines are generally more I/O bound than CPU bound. And unlike the\n> > > base tables, you generally don't need to read the WAL, so you don't\n> > > really need to worry about not being able to quickly scan through the\n> > > data without decompressing it.\n> > \n> > I have never heard anyone talk about it, but it seems useful. I think\n> > compressing the page images written on first page modification since\n> > checkpoint would be a big win.\n> \n> Well it was discussed 2-3 years ago as part of the PITR preamble. You\n> may be surprised to read that over...\n> \n> A summary of thoughts to date on this are:\n> \n> xlog.c XLogInsert places backup blocks into the wal buffers before\n> insertion, so is the right place to do this. It would be possible to do\n> this before any LWlocks are taken, so would not not necessarily impair\n> scalability.\n> \n> Currently XLogInsert is a severe CPU bottleneck around the CRC\n> calculation, as identified recently by Tom. Digging further, the code\n> used seems to cause processor stalls on Intel CPUs, possibly responsible\n> for much of the CPU time. Discussions to move to a 32-bit CRC would also\n> be effected by this because of the byte-by-byte nature of the algorithm,\n> whatever the length of the generating polynomial. PostgreSQL's CRC\n> algorithm is the fastest BSD code available. Until improvement is made\n> there, I would not investigate compression further. Some input from\n> hardware tuning specialists is required...\n> \n> The current LZW compression code uses a 4096 byte lookback size, so that\n> would need to be modified to extend across a whole block. An\n> alternative, suggested originally by Tom and rediscovered by me because\n> I just don't read everybody's fine words in history, is to simply take\n> out the freespace in the middle of every heap block that consists of\n> zeros.\n> \n> Any solution in this area must take into account the variability of the\n> size of freespace in database blocks. Some databases have mostly full\n> blocks, others vary. There would also be considerable variation in\n> compressability of blocks, especially since some blocks (e.g. TOAST) are\n> likely to already be compressed. There'd need to be some testing done to\n> see exactly the point where the costs of compression produce realisable\n> benefits.\n> \n> So any solution must be able to cope with both compressed blocks and\n> non-compressed blocks. My current thinking is that this could be\n> achieved by using the spare fourth bit of the BkpBlocks portion of the\n> XLog structure, so that either all included BkpBlocks are compressed or\n> none of them are, and hope that allows benefit to shine through. 
Not\n> thought about heap/index issues.\n> \n> It is possible that an XLogWriter process could be used to assist in the\n> CRC and compression calculations also, an a similar process used to\n> assist decompression for recovery, in time.\n> \n> I regret I do not currently have time to pursue further.\n> \n> Best Regards, Simon Riggs\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 18 Apr 2005 14:31:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compressing WAL"
}
] |
[
{
"msg_contents": "Unfortunately. \n\nBut we are in the the process to choose Postgresql with pgcluster. I'm \ncurrently running some tests (performance, stability...) \nSave the money on the license fees, you get it for your hardware ;-)\n\nI still welcome any advices or comments and I'll let you know how the \nproject is going on.\n\nBenjamin.\n\n\n\n\n\"Mohan, Ross\" <[email protected]>\n05/04/2005 20:48\n\n \n Pour : <[email protected]>\n cc : \n Objet : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n\n\nYou never got answers on this? Apologies, I don't have one, but'd be \ncurious to hear about any you did get....\n \nthx\n \nRoss\n-----Original Message-----\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of [email protected]\nSent: Monday, April 04, 2005 4:02 AM\nTo: [email protected]\nSubject: [PERFORM] Postgresql vs SQLserver for this application ?\n\n\nhi all. \n\nWe are designing a quite big application that requires a high-performance \ndatabase backend. \nThe rates we need to obtain are at least 5000 inserts per second and 15 \nselects per second for one connection. There should only be 3 or 4 \nsimultaneous connections. \nI think our main concern is to deal with the constant flow of data coming \nfrom the inserts that must be available for selection as fast as possible. \n(kind of real time access ...) \n\nAs a consequence, the database should rapidly increase up to more than one \nhundred gigs. We still have to determine how and when we shoud backup old \ndata to prevent the application from a performance drop. We intend to \ndevelop some kind of real-time partionning on our main table keep the \nflows up. \n\nAt first, we were planning to use SQL Server as it has features that in my \nopinion could help us a lot : \n - replication \n - clustering \n\nRecently we started to study Postgresql as a solution for our project : \n - it also has replication \n - Postgis module can handle geographic datatypes (which would \nfacilitate our developments) \n - We do have a strong knowledge on Postgresql administration (we \nuse it for production processes) \n - it is free (!) and we could save money for hardware purchase. \n\nIs SQL server clustering a real asset ? How reliable are Postgresql \nreplication tools ? Should I trust Postgresql performance for this kind \nof needs ? \n\nMy question is a bit fuzzy but any advices are most welcome... \nhardware,tuning or design tips as well :)) \n\nThanks a lot. \n\nBenjamin. \n\n\n\nUnfortunately. \n\nBut we are in the the process to choose Postgresql with pgcluster. I'm currently running some tests (performance, stability...) \nSave the money on the license fees, you get it for your hardware ;-)\n\nI still welcome any advices or comments and I'll let you know how the project is going on.\n\nBenjamin.\n\n\n\n\n\n\n\"Mohan, Ross\" <[email protected]>\n05/04/2005 20:48\n\n \n Pour : <[email protected]>\n cc : \n Objet : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n\n\nYou never got answers on this? Apologies, I don't have one, but'd be curious to hear about any you did get....\n \nthx\n \nRoss\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Monday, April 04, 2005 4:02 AM\nTo: [email protected]\nSubject: [PERFORM] Postgresql vs SQLserver for this application ?\n\n\nhi all. \n\nWe are designing a quite big application that requires a high-performance database backend. 
\nThe rates we need to obtain are at least 5000 inserts per second and 15 selects per second for one connection. There should only be 3 or 4 simultaneous connections. \nI think our main concern is to deal with the constant flow of data coming from the inserts that must be available for selection as fast as possible. (kind of real time access ...) \n\nAs a consequence, the database should rapidly increase up to more than one hundred gigs. We still have to determine how and when we shoud backup old data to prevent the application from a performance drop. We intend to develop some kind of real-time partionning on our main table keep the flows up. \n\nAt first, we were planning to use SQL Server as it has features that in my opinion could help us a lot : \n - replication \n - clustering \n\nRecently we started to study Postgresql as a solution for our project : \n - it also has replication \n - Postgis module can handle geographic datatypes (which would facilitate our developments) \n - We do have a strong knowledge on Postgresql administration (we use it for production processes) \n - it is free (!) and we could save money for hardware purchase. \n\nIs SQL server clustering a real asset ? How reliable are Postgresql replication tools ? Should I trust Postgresql performance for this kind of needs ? \n\nMy question is a bit fuzzy but any advices are most welcome... hardware,tuning or design tips as well :)) \n\nThanks a lot. \n\nBenjamin.",
"msg_date": "Wed, 6 Apr 2005 09:17:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "RE : RE: Postgresql vs SQLserver for this application ?"
},
{
"msg_contents": "I think everyone was scared off by the 5000 inserts per second number.\n\nI've never seen even Oracle do this on a top end Dell system with\ncopious SCSI attached storage.\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> \n> Unfortunately. \n> \n> But we are in the the process to choose Postgresql with pgcluster. I'm\n> currently running some tests (performance, stability...) \n> Save the money on the license fees, you get it for your hardware ;-) \n> \n> I still welcome any advices or comments and I'll let you know how the\n> project is going on. \n> \n> Benjamin. \n> \n> \n> \n> \"Mohan, Ross\" <[email protected]> \n> \n> 05/04/2005 20:48 \n> \n> Pour : <[email protected]> \n> cc : \n> Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> application ? \n> \n> \n> You never got answers on this? Apologies, I don't have one, but'd be curious\n> to hear about any you did get.... \n> \n> thx \n> \n> Ross \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf\n> Of [email protected]\n> Sent: Monday, April 04, 2005 4:02 AM\n> To: [email protected]\n> Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> \n> \n> hi all. \n> \n> We are designing a quite big application that requires a high-performance\n> database backend. \n> The rates we need to obtain are at least 5000 inserts per second and 15\n> selects per second for one connection. There should only be 3 or 4\n> simultaneous connections. \n> I think our main concern is to deal with the constant flow of data coming\n> from the inserts that must be available for selection as fast as possible.\n> (kind of real time access ...) \n> \n> As a consequence, the database should rapidly increase up to more than one\n> hundred gigs. We still have to determine how and when we shoud backup old\n> data to prevent the application from a performance drop. We intend to\n> develop some kind of real-time partionning on our main table keep the flows\n> up. \n> \n> At first, we were planning to use SQL Server as it has features that in my\n> opinion could help us a lot : \n> - replication \n> - clustering \n> \n> Recently we started to study Postgresql as a solution for our project : \n> - it also has replication \n> - Postgis module can handle geographic datatypes (which would\n> facilitate our developments) \n> - We do have a strong knowledge on Postgresql administration (we use\n> it for production processes) \n> - it is free (!) and we could save money for hardware purchase. \n> \n> Is SQL server clustering a real asset ? How reliable are Postgresql\n> replication tools ? Should I trust Postgresql performance for this kind of\n> needs ? \n> \n> My question is a bit fuzzy but any advices are most welcome...\n> hardware,tuning or design tips as well :)) \n> \n> Thanks a lot. \n> \n> Benjamin. \n> \n> \n>\n",
"msg_date": "Wed, 6 Apr 2005 11:37:30 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for this application ?"
},
{
"msg_contents": "This thread seems to be focusing in on COPY efficiency,\nI'd like to ask something I got no answer to, a few months ago.\n\nUsing COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\nI accidentally strung together several \\n-terminated input lines,\nand sent them to the server with a single \"putline\".\n\nTo my (happy) surprise, I ended up with exactly that number of rows\nin the target table.\n\nIs this a bug? Is this fundamental to the protocol?\n\nSince it hasn't been documented (but then, \"endcopy\" isn't documented),\nI've been shy of investing in perf testing such mass copy calls.\nBut, if it DOES work, it should be reducing the number of network \nroundtrips.\n\nSo. Is it a feechur? Worth stress-testing? Could be VERY cool.\n\n-- \n\"Dreams come true, not free.\"\n\n",
"msg_date": "Wed, 6 Apr 2005 11:46:39 -0700",
"msg_from": "Mischa <[email protected]>",
"msg_from_op": false,
"msg_subject": "COPY Hacks (WAS: RE: Postgresql vs SQLserver for this application ?)"
},
{
"msg_contents": "Mischa <[email protected]> writes:\n> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> I accidentally strung together several \\n-terminated input lines,\n> and sent them to the server with a single \"putline\".\n\n> To my (happy) surprise, I ended up with exactly that number of rows\n> in the target table.\n\n> Is this a bug?\n\nNo, it's the way it's supposed to work. \"putline\" really just sends a\nstream of data ... there's no semantic significance to the number of\nputline calls you use to send the stream, only to the contents of the\nstream. (By the same token, it's unlikely that deliberately aggregating\nsuch calls would be much of a win.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 16:27:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY Hacks (WAS: RE: Postgresql vs SQLserver for this application\n\t?)"
},
{
"msg_contents": "> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> I accidentally strung together several \\n-terminated input lines,\n> and sent them to the server with a single \"putline\".\n> \n> To my (happy) surprise, I ended up with exactly that number of rows\n> in the target table.\n> \n> Is this a bug? Is this fundamental to the protocol?\n> \n> Since it hasn't been documented (but then, \"endcopy\" isn't documented),\n> I've been shy of investing in perf testing such mass copy calls.\n> But, if it DOES work, it should be reducing the number of network \n> roundtrips.\n\nI think it's documented in the libpq docs...\n\nChris\n",
"msg_date": "Thu, 07 Apr 2005 10:04:26 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY Hacks (WAS: RE: Postgresql vs SQLserver for this"
},
{
"msg_contents": "In article <[email protected]>,\nMischa <[email protected]> writes:\n\n> This thread seems to be focusing in on COPY efficiency,\n> I'd like to ask something I got no answer to, a few months ago.\n\n> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> I accidentally strung together several \\n-terminated input lines,\n> and sent them to the server with a single \"putline\".\n\n> To my (happy) surprise, I ended up with exactly that number of rows\n> in the target table.\n\n> Is this a bug? Is this fundamental to the protocol?\n\n> Since it hasn't been documented (but then, \"endcopy\" isn't documented),\n> I've been shy of investing in perf testing such mass copy calls.\n> But, if it DOES work, it should be reducing the number of network \n> roundtrips.\n\n> So. Is it a feechur? Worth stress-testing? Could be VERY cool.\n\nUsing COPY from DBD::Pg _is_ documented - presumed you use DBD::Pg\nversion 1.41 released just today.\n\n",
"msg_date": "07 Apr 2005 14:21:53 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY Hacks (WAS: RE: Postgresql vs SQLserver for this application\n\t?)"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> I accidentally strung together several \\n-terminated input lines,\n> and sent them to the server with a single \"putline\".\n...\n> So. Is it a feechur? Worth stress-testing? Could be VERY cool.\n\nAs explained elsewhere, not really a feature, more of a side-effect.\nKeep in mind, however, that any network round-trip time saved has to\nbe balanced against some additional overhead of constructing the\ncombined strings in Perl before sending them over. Most times COPY\nis used to parse a newline-separated file anyway. If you have a slow\nnetwork connection to the database, it *might* be a win, but my\nlimited testing shows that it is not an advantage for a \"normal\"\nconnection: I added 1 million rows via COPY using the normal way\n(1 million pg_putline calls), via pg_putline of 1000 rows at a\ntime, and via 10,000 rows at a time. They all ran in 22 seconds,\nwith no statistical difference between them. (This was the \"real\" time,\nthe system time was actually much lower for the combined calls).\n\nIt can't hurt to test things out on your particular system and see\nif it makes a real difference: it certainly does no harm as long as\nyou make sure the string you send always *end* in a newline.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200504072201\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCVeZrvJuQZxSWSsgRAoP+AJ9jTNetePMwKv9rdyu6Lz+BjSiDOQCguoSU\nie9TaeIxUuvd5fhjFueacvM=\n=1hWn\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Fri, 8 Apr 2005 02:03:18 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY Hacks (WAS: RE: Postgresql vs SQLserver for this application\n\t?)"
},
{
"msg_contents": "\nQuoting Greg Sabino Mullane <[email protected]>:\n\n> > Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> > I accidentally strung together several \\n-terminated input lines,\n> > and sent them to the server with a single \"putline\".\n> ...\n> > So. Is it a feechur? Worth stress-testing? Could be VERY cool.\n> \n> As explained elsewhere, not really a feature, more of a side-effect.\n> Keep in mind, however, that any network round-trip time saved has to\n> be balanced against some additional overhead of constructing the\n> combined strings in Perl before sending them over. Most times COPY\n> is used to parse a newline-separated file anyway. If you have a slow\n> network connection to the database, it *might* be a win, but my\n> limited testing shows that it is not an advantage for a \"normal\"\n> connection: I added 1 million rows via COPY using the normal way\n> (1 million pg_putline calls), via pg_putline of 1000 rows at a\n> time, and via 10,000 rows at a time. They all ran in 22 seconds,\n> with no statistical difference between them. (This was the \"real\" time,\n> the system time was actually much lower for the combined calls).\n> \n> It can't hurt to test things out on your particular system and see\n> if it makes a real difference: it certainly does no harm as long as\n> you make sure the string you send always *end* in a newline.\n\nMany thanks for digging into it.\n\nFor the app I'm working with, the time delay between rows being posted \nis /just/ enough to exceed the TCP Nagle delay, so every row goes across\nin its own packet :-( Reducing the number of network roundtrips \nby a factor of 40 is enough to cut elapsed time in half.\nThe cost of join(\"\",@FortyRows), which produces a 1-4K string, is what's\nnegligible in this case.\n\n-- \n\"Dreams come true, not free\" -- S.Sondheim, ITW\n\n",
"msg_date": "Thu, 7 Apr 2005 21:53:22 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: multi-line copy (was: Re: COPY Hacks)"
}
] |
[
{
"msg_contents": "hi,\n\n I like to know whether Indexed View supported in psql 7.1.3.?\n\nIs there any performance analysis tool for psql.?\n\nPlease! update me for the same.\n\nregards,\nstp.\n",
"msg_date": "Wed, 6 Apr 2005 18:01:35 +0530 (IST)",
"msg_from": "\"S.Thanga Prakash\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is Indexed View Supported in psql 7.1.3??"
},
{
"msg_contents": "> I like to know whether Indexed View supported in psql 7.1.3.?\n\nNo...\n\n> Is there any performance analysis tool for psql.?\n\nNo, we keep telling you to upgrade to newer PostgreSQL. Then you can \nuse EXPLAIN ANALYZE.\n\nChris\n",
"msg_date": "Mon, 11 Apr 2005 13:03:09 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is Indexed View Supported in psql 7.1.3??"
},
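PostgreSQL 7.1.x (and 8.0, for that matter) has no indexed or materialized views; the usual workaround is to materialize the view's result into an ordinary table by hand and index that. A minimal sketch, with invented table and column names:

```sql
-- Materialize the view's result into a real table, then index it:
CREATE TABLE order_totals AS
    SELECT customer_id, sum(amount) AS total_amount
    FROM orders
    GROUP BY customer_id;

CREATE INDEX order_totals_customer_idx ON order_totals (customer_id);

-- Refresh periodically, or keep it current with triggers on the base table:
BEGIN;
DELETE FROM order_totals;
INSERT INTO order_totals
    SELECT customer_id, sum(amount)
    FROM orders
    GROUP BY customer_id;
COMMIT;
```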
{
"msg_contents": "stp,\n\nI cannot help you with the first point, but as far as performance \nanalysis, I share with you what I've been using.\n\n1) pgbench -- which comes with PostgreSQL\n2) OSDB (http://osdb.sourceforge.net/)\n3) pg_autotune (http://pgfoundry.org/projects/pgautotune/)\n4) PQA (http://pgfoundry.org/projects/pqa/)\n\nYou did not mention how your database is being used/going to be used. If \nits already in production, use PQA, but I personally have not \nimplemented yet since seemed to be to take a performance hit of 15-25% \nwhen running it. Your mileage may vary.\n\nI use pgbench for quick tests and OSDB for more disk thrash testing.\n\nI am new to this; maybe someone else may be able to speak from more \nexperience.\n\nRegards.\n\nSteve Poe\n\n\n\n",
"msg_date": "Sun, 10 Apr 2005 22:35:08 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is Indexed View Supported in psql 7.1.3??"
}
] |
[
{
"msg_contents": "I wish I had a Dell system and run case to show you Alex, but I don't...\nhowever...using Oracle's \"direct path\" feature, it's pretty straightforward. \n\nWe've done 110,000 rows per second into index-less tables on a big system\n(IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. Sustained\nfor almost 9 minutes. )\n\nYes, this is an exception, but oracle directpath/InsertAppend/BulkLoad\nfeature enabled us to migrate a 4 TB database...really quickly. \n\nNow...if you ask me \"can this work without Power5 and Hitachi SAN?\"\nmy answer is..you give me a top end Dell and SCSI III on 15K disks\nand I'll likely easily match it, yea.\n\nI'd love to see PG get into this range..i am a big fan of PG (just a\nrank newbie) but I gotta think the underlying code to do this has\nto be not-too-complex.....\n\nBest, \n\nRoss\n\n\n\n-----Original Message-----\nFrom: Alex Turner [mailto:[email protected]] \nSent: Wednesday, April 06, 2005 11:38 AM\nTo: [email protected]\nCc: [email protected]; Mohan, Ross\nSubject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n\n\nI think everyone was scared off by the 5000 inserts per second number.\n\nI've never seen even Oracle do this on a top end Dell system with copious SCSI attached storage.\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> \n> Unfortunately.\n> \n> But we are in the the process to choose Postgresql with pgcluster. I'm \n> currently running some tests (performance, stability...) Save the \n> money on the license fees, you get it for your hardware ;-)\n> \n> I still welcome any advices or comments and I'll let you know how the \n> project is going on.\n> \n> Benjamin.\n> \n> \n> \n> \"Mohan, Ross\" <[email protected]>\n> \n> 05/04/2005 20:48\n> \n> Pour : <[email protected]> \n> cc : \n> Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> application ?\n> \n> \n> You never got answers on this? Apologies, I don't have one, but'd be \n> curious to hear about any you did get....\n> \n> thx\n> \n> Ross\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf\n> Of [email protected]\n> Sent: Monday, April 04, 2005 4:02 AM\n> To: [email protected]\n> Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> \n> \n> hi all.\n> \n> We are designing a quite big application that requires a \n> high-performance database backend. The rates we need to obtain are at \n> least 5000 inserts per second and 15 selects per second for one \n> connection. There should only be 3 or 4 simultaneous connections.\n> I think our main concern is to deal with the constant flow of data coming\n> from the inserts that must be available for selection as fast as possible.\n> (kind of real time access ...) \n> \n> As a consequence, the database should rapidly increase up to more \n> than one hundred gigs. We still have to determine how and when we \n> shoud backup old data to prevent the application from a performance \n> drop. 
We intend to develop some kind of real-time partionning on our \n> main table keep the flows up.\n> \n> At first, we were planning to use SQL Server as it has features that \n> in my opinion could help us a lot :\n> - replication \n> - clustering\n> \n> Recently we started to study Postgresql as a solution for our project : \n> - it also has replication \n> - Postgis module can handle geographic datatypes (which would \n> facilitate our developments)\n> - We do have a strong knowledge on Postgresql administration \n> (we use it for production processes)\n> - it is free (!) and we could save money for hardware \n> purchase.\n> \n> Is SQL server clustering a real asset ? How reliable are Postgresql \n> replication tools ? Should I trust Postgresql performance for this \n> kind of needs ?\n> \n> My question is a bit fuzzy but any advices are most welcome... \n> hardware,tuning or design tips as well :))\n> \n> Thanks a lot.\n> \n> Benjamin.\n> \n> \n>\n",
"msg_date": "Wed, 6 Apr 2005 16:12:47 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for this application ?"
},
{
"msg_contents": "Mohan, Ross wrote:\n> I wish I had a Dell system and run case to show you Alex, but I don't...\n> however...using Oracle's \"direct path\" feature, it's pretty straightforward. \n> \n> We've done 110,000 rows per second into index-less tables on a big system\n> (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. Sustained\n> for almost 9 minutes. )\n> \n> Yes, this is an exception, but oracle directpath/InsertAppend/BulkLoad\n> feature enabled us to migrate a 4 TB database...really quickly. \n\nHow close to this is PG's COPY? I get surprisingly good results using\nCOPY with jdbc on smallish systems (now if that patch would make into\nthe mainstream PG jdbc support!) I think COPY has a bit more overhead\nthan what a Bulkload feature may have, but I suspect it's not that\nmuch more.\n\n> Now...if you ask me \"can this work without Power5 and Hitachi SAN?\"\n> my answer is..you give me a top end Dell and SCSI III on 15K disks\n> and I'll likely easily match it, yea.\n> \n> I'd love to see PG get into this range..i am a big fan of PG (just a\n> rank newbie) but I gotta think the underlying code to do this has\n> to be not-too-complex.....\n\nIt may not be that far off if you can use COPY instead of INSERT.\nBut comparing Bulkload to INSERT is a bit apples<->orangish.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 06 Apr 2005 09:38:51 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for this application"
},
{
"msg_contents": "On Wed, 2005-04-06 at 16:12 +0000, Mohan, Ross wrote:\n> I wish I had a Dell system and run case to show you Alex, but I don't...\n> however...using Oracle's \"direct path\" feature, it's pretty straightforward. \n> \n> We've done 110,000 rows per second into index-less tables on a big system\n> (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. Sustained\n> for almost 9 minutes. )\n\nJust for kicks I did a local test on a desktop machine (single CPU,\nsingle IDE drive) using COPY from STDIN for a set of integers in via a\nsingle transaction, no indexes.\n\n1572864 tuples were loaded in 13715.613ms, which is approx 115k rows per\nsecond.\n\nOkay, no checkpoints and I didn't cross an index boundary, but I also\nhaven't tuned the config file beyond bumping up the buffers.\n\nLets try again with more data this time.\n\n31Million tuples were loaded in approx 279 seconds, or approx 112k rows\nper second.\n\n> I'd love to see PG get into this range..i am a big fan of PG (just a\n> rank newbie) but I gotta think the underlying code to do this has\n> to be not-too-complex.....\n\nI'd say we're there.\n\n> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]] \n> Sent: Wednesday, April 06, 2005 11:38 AM\n> To: [email protected]\n> Cc: [email protected]; Mohan, Ross\n> Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n> \n> \n> I think everyone was scared off by the 5000 inserts per second number.\n> \n> I've never seen even Oracle do this on a top end Dell system with copious SCSI attached storage.\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> > \n> > Unfortunately.\n> > \n> > But we are in the the process to choose Postgresql with pgcluster. I'm \n> > currently running some tests (performance, stability...) Save the \n> > money on the license fees, you get it for your hardware ;-)\n> > \n> > I still welcome any advices or comments and I'll let you know how the \n> > project is going on.\n> > \n> > Benjamin.\n> > \n> > \n> > \n> > \"Mohan, Ross\" <[email protected]>\n> > \n> > 05/04/2005 20:48\n> > \n> > Pour : <[email protected]> \n> > cc : \n> > Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> > application ?\n> > \n> > \n> > You never got answers on this? Apologies, I don't have one, but'd be \n> > curious to hear about any you did get....\n> > \n> > thx\n> > \n> > Ross\n> > \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf\n> > Of [email protected]\n> > Sent: Monday, April 04, 2005 4:02 AM\n> > To: [email protected]\n> > Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> > \n> > \n> > hi all.\n> > \n> > We are designing a quite big application that requires a \n> > high-performance database backend. The rates we need to obtain are at \n> > least 5000 inserts per second and 15 selects per second for one \n> > connection. There should only be 3 or 4 simultaneous connections.\n> > I think our main concern is to deal with the constant flow of data coming\n> > from the inserts that must be available for selection as fast as possible.\n> > (kind of real time access ...) \n> > \n> > As a consequence, the database should rapidly increase up to more \n> > than one hundred gigs. We still have to determine how and when we \n> > shoud backup old data to prevent the application from a performance \n> > drop. 
We intend to develop some kind of real-time partionning on our \n> > main table keep the flows up.\n> > \n> > At first, we were planning to use SQL Server as it has features that \n> > in my opinion could help us a lot :\n> > - replication \n> > - clustering\n> > \n> > Recently we started to study Postgresql as a solution for our project : \n> > - it also has replication \n> > - Postgis module can handle geographic datatypes (which would \n> > facilitate our developments)\n> > - We do have a strong knowledge on Postgresql administration \n> > (we use it for production processes)\n> > - it is free (!) and we could save money for hardware \n> > purchase.\n> > \n> > Is SQL server clustering a real asset ? How reliable are Postgresql \n> > replication tools ? Should I trust Postgresql performance for this \n> > kind of needs ?\n> > \n> > My question is a bit fuzzy but any advices are most welcome... \n> > hardware,tuning or design tips as well :))\n> > \n> > Thanks a lot.\n> > \n> > Benjamin.\n> > \n> > \n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n-- \n\n",
"msg_date": "Wed, 06 Apr 2005 12:40:47 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for this"
}
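For reference, the shape of the test Rod describes is roughly this (the table name and data file path are invented; COPY ... FROM STDIN fed by a client program amounts to the same thing):

```sql
-- One transaction, COPY into an index-free table, index built afterwards.
CREATE TABLE bulk_test (id integer);

BEGIN;
COPY bulk_test FROM '/tmp/bulk_test.dat';   -- newline-separated values
COMMIT;

-- Creating the index after the load keeps the load itself fast:
CREATE INDEX bulk_test_id_idx ON bulk_test (id);
```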
] |
[
{
"msg_contents": "How close to this is PG's COPY? I get surprisingly good results using COPY with jdbc on smallish systems (now if that patch would make into the mainstream PG jdbc support!) I think COPY has a bit more overhead than what a Bulkload feature may have, but I suspect it's not that much more.\n\n|| Steve, I do not know. But am reading the docs now, and should figure it out. Ask\n me later if you remember. Oracle's \"direct path\" is a way of just slamming blocks\n filled with rows into the table, above the high water mark. It sidesteps freelist\n management and all manner of intrablock issues. There is a \"payback\", but the benefits\n far far outweigh the costs. \n\n> Now...if you ask me \"can this work without Power5 and Hitachi SAN?\" my \n> answer is..you give me a top end Dell and SCSI III on 15K disks and \n> I'll likely easily match it, yea.\n> \n> I'd love to see PG get into this range..i am a big fan of PG (just a \n> rank newbie) but I gotta think the underlying code to do this has to \n> be not-too-complex.....\n\nIt may not be that far off if you can use COPY instead of INSERT. But comparing Bulkload to INSERT is a bit apples<->orangish.\n\n|| Oh! I see! I had no idea I was doing that! Thanks for pointing it out clearly to me. Yea, I would\n say a full transactional INSERT of 5K rows/sec into an indexed-table is a near-mythology without significant\n caveats (parallelized, deferred buffering, etc.) \n\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 6 Apr 2005 16:43:33 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for this application ?"
}
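To make the apples-and-oranges point concrete, the two load styles being compared look roughly like this (table name is invented):

```sql
CREATE TABLE load_demo (id integer);

-- Transactional row-at-a-time INSERTs: one statement per row, with
-- per-statement overhead even inside a single transaction.
BEGIN;
INSERT INTO load_demo VALUES (1);
INSERT INTO load_demo VALUES (2);
-- ... one INSERT per row ...
COMMIT;

-- COPY (the rough analogue of a bulk/direct-path load): the same rows
-- stream through a single command.
COPY load_demo FROM '/tmp/load_demo.dat';
```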
] |
[
{
"msg_contents": "<snip good stuff...>\n\n31Million tuples were loaded in approx 279 seconds, or approx 112k rows per second.\n\n> I'd love to see PG get into this range..i am a big fan of PG (just a \n> rank newbie) but I gotta think the underlying code to do this has to \n> be not-too-complex.....\n\nI'd say we're there.\n\n\n|| <CLAPPING!!> Yes! PG is there, assuredly! So VERY cool! I made a newbie\n error of conflating COPY with INSERT. I don't know if I could get\n oracle to do much more than about 500-1500 rows/sec...PG is quite impressive.\n\n Makes one wonder why corporations positively insist on giving oracle\n $$$$ yearly. <shrug>\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Wednesday, April 06, 2005 12:41 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for thisapplication ?\n\n\nOn Wed, 2005-04-06 at 16:12 +0000, Mohan, Ross wrote:\n> I wish I had a Dell system and run case to show you Alex, but I \n> don't... however...using Oracle's \"direct path\" feature, it's pretty \n> straightforward.\n> \n> We've done 110,000 rows per second into index-less tables on a big \n> system (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a \n> second. Sustained for almost 9 minutes. )\n\nJust for kicks I did a local test on a desktop machine (single CPU, single IDE drive) using COPY from STDIN for a set of integers in via a single transaction, no indexes.\n\n1572864 tuples were loaded in 13715.613ms, which is approx 115k rows per second.\n\nOkay, no checkpoints and I didn't cross an index boundary, but I also haven't tuned the config file beyond bumping up the buffers.\n\nLets try again with more data this time.\n\n31Million tuples were loaded in approx 279 seconds, or approx 112k rows per second.\n\n> I'd love to see PG get into this range..i am a big fan of PG (just a \n> rank newbie) but I gotta think the underlying code to do this has to \n> be not-too-complex.....\n\nI'd say we're there.\n\n> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Wednesday, April 06, 2005 11:38 AM\n> To: [email protected]\n> Cc: [email protected]; Mohan, Ross\n> Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n> \n> \n> I think everyone was scared off by the 5000 inserts per second number.\n> \n> I've never seen even Oracle do this on a top end Dell system with \n> copious SCSI attached storage.\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> > \n> > Unfortunately.\n> > \n> > But we are in the the process to choose Postgresql with pgcluster. \n> > I'm\n> > currently running some tests (performance, stability...) Save the \n> > money on the license fees, you get it for your hardware ;-)\n> > \n> > I still welcome any advices or comments and I'll let you know how \n> > the\n> > project is going on.\n> > \n> > Benjamin.\n> > \n> > \n> > \n> > \"Mohan, Ross\" <[email protected]>\n> > \n> > 05/04/2005 20:48\n> > \n> > Pour : <[email protected]> \n> > cc : \n> > Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> > application ?\n> > \n> > \n> > You never got answers on this? 
Apologies, I don't have one, but'd be\n> > curious to hear about any you did get....\n> > \n> > thx\n> > \n> > Ross\n> > \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of \n> > [email protected]\n> > Sent: Monday, April 04, 2005 4:02 AM\n> > To: [email protected]\n> > Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> > \n> > \n> > hi all.\n> > \n> > We are designing a quite big application that requires a\n> > high-performance database backend. The rates we need to obtain are at \n> > least 5000 inserts per second and 15 selects per second for one \n> > connection. There should only be 3 or 4 simultaneous connections.\n> > I think our main concern is to deal with the constant flow of data coming\n> > from the inserts that must be available for selection as fast as possible.\n> > (kind of real time access ...) \n> > \n> > As a consequence, the database should rapidly increase up to more\n> > than one hundred gigs. We still have to determine how and when we \n> > shoud backup old data to prevent the application from a performance \n> > drop. We intend to develop some kind of real-time partionning on our \n> > main table keep the flows up.\n> > \n> > At first, we were planning to use SQL Server as it has features \n> > that\n> > in my opinion could help us a lot :\n> > - replication \n> > - clustering\n> > \n> > Recently we started to study Postgresql as a solution for our project : \n> > - it also has replication \n> > - Postgis module can handle geographic datatypes (which \n> > would\n> > facilitate our developments)\n> > - We do have a strong knowledge on Postgresql administration \n> > (we use it for production processes)\n> > - it is free (!) and we could save money for hardware \n> > purchase.\n> > \n> > Is SQL server clustering a real asset ? How reliable are Postgresql\n> > replication tools ? Should I trust Postgresql performance for this \n> > kind of needs ?\n> > \n> > My question is a bit fuzzy but any advices are most welcome...\n> > hardware,tuning or design tips as well :))\n> > \n> > Thanks a lot.\n> > \n> > Benjamin.\n> > \n> > \n> >\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n-- \n\n",
"msg_date": "Wed, 6 Apr 2005 16:47:13 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for thisapplication ?"
},
{
"msg_contents": "I guess I was thinking more in the range of 5000 transaction/sec, less\nso 5000 rows on bulk import...\n\nAlex\n\nOn Apr 6, 2005 12:47 PM, Mohan, Ross <[email protected]> wrote:\n> <snip good stuff...>\n> \n> 31Million tuples were loaded in approx 279 seconds, or approx 112k rows per second.\n> \n> > I'd love to see PG get into this range..i am a big fan of PG (just a\n> > rank newbie) but I gotta think the underlying code to do this has to\n> > be not-too-complex.....\n> \n> I'd say we're there.\n> \n> || <CLAPPING!!> Yes! PG is there, assuredly! So VERY cool! I made a newbie\n> error of conflating COPY with INSERT. I don't know if I could get\n> oracle to do much more than about 500-1500 rows/sec...PG is quite impressive.\n> \n> Makes one wonder why corporations positively insist on giving oracle\n> $$$$ yearly. <shrug>\n> \n> -----Original Message-----\n> From: Rod Taylor [mailto:[email protected]]\n> Sent: Wednesday, April 06, 2005 12:41 PM\n> To: Mohan, Ross\n> Cc: [email protected]\n> Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for thisapplication ?\n> \n> On Wed, 2005-04-06 at 16:12 +0000, Mohan, Ross wrote:\n> > I wish I had a Dell system and run case to show you Alex, but I\n> > don't... however...using Oracle's \"direct path\" feature, it's pretty\n> > straightforward.\n> >\n> > We've done 110,000 rows per second into index-less tables on a big\n> > system (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a\n> > second. Sustained for almost 9 minutes. )\n> \n> Just for kicks I did a local test on a desktop machine (single CPU, single IDE drive) using COPY from STDIN for a set of integers in via a single transaction, no indexes.\n> \n> 1572864 tuples were loaded in 13715.613ms, which is approx 115k rows per second.\n> \n> Okay, no checkpoints and I didn't cross an index boundary, but I also haven't tuned the config file beyond bumping up the buffers.\n> \n> Lets try again with more data this time.\n> \n> 31Million tuples were loaded in approx 279 seconds, or approx 112k rows per second.\n> \n> > I'd love to see PG get into this range..i am a big fan of PG (just a\n> > rank newbie) but I gotta think the underlying code to do this has to\n> > be not-too-complex.....\n> \n> I'd say we're there.\n> \n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Wednesday, April 06, 2005 11:38 AM\n> > To: [email protected]\n> > Cc: [email protected]; Mohan, Ross\n> > Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this application ?\n> >\n> >\n> > I think everyone was scared off by the 5000 inserts per second number.\n> >\n> > I've never seen even Oracle do this on a top end Dell system with\n> > copious SCSI attached storage.\n> >\n> > Alex Turner\n> > netEconomist\n> >\n> > On Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> > >\n> > > Unfortunately.\n> > >\n> > > But we are in the the process to choose Postgresql with pgcluster.\n> > > I'm\n> > > currently running some tests (performance, stability...) 
Save the\n> > > money on the license fees, you get it for your hardware ;-)\n> > >\n> > > I still welcome any advices or comments and I'll let you know how\n> > > the\n> > > project is going on.\n> > >\n> > > Benjamin.\n> > >\n> > >\n> > >\n> > > \"Mohan, Ross\" <[email protected]>\n> > >\n> > > 05/04/2005 20:48\n> > >\n> > > Pour : <[email protected]>\n> > > cc :\n> > > Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> > > application ?\n> > >\n> > >\n> > > You never got answers on this? Apologies, I don't have one, but'd be\n> > > curious to hear about any you did get....\n> > >\n> > > thx\n> > >\n> > > Ross\n> > >\n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]] On Behalf Of\n> > > [email protected]\n> > > Sent: Monday, April 04, 2005 4:02 AM\n> > > To: [email protected]\n> > > Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> > >\n> > >\n> > > hi all.\n> > >\n> > > We are designing a quite big application that requires a\n> > > high-performance database backend. The rates we need to obtain are at\n> > > least 5000 inserts per second and 15 selects per second for one\n> > > connection. There should only be 3 or 4 simultaneous connections.\n> > > I think our main concern is to deal with the constant flow of data coming\n> > > from the inserts that must be available for selection as fast as possible.\n> > > (kind of real time access ...)\n> > >\n> > > As a consequence, the database should rapidly increase up to more\n> > > than one hundred gigs. We still have to determine how and when we\n> > > shoud backup old data to prevent the application from a performance\n> > > drop. We intend to develop some kind of real-time partionning on our\n> > > main table keep the flows up.\n> > >\n> > > At first, we were planning to use SQL Server as it has features\n> > > that\n> > > in my opinion could help us a lot :\n> > > - replication\n> > > - clustering\n> > >\n> > > Recently we started to study Postgresql as a solution for our project :\n> > > - it also has replication\n> > > - Postgis module can handle geographic datatypes (which\n> > > would\n> > > facilitate our developments)\n> > > - We do have a strong knowledge on Postgresql administration\n> > > (we use it for production processes)\n> > > - it is free (!) and we could save money for hardware\n> > > purchase.\n> > >\n> > > Is SQL server clustering a real asset ? How reliable are Postgresql\n> > > replication tools ? Should I trust Postgresql performance for this\n> > > kind of needs ?\n> > >\n> > > My question is a bit fuzzy but any advices are most welcome...\n> > > hardware,tuning or design tips as well :))\n> > >\n> > > Thanks a lot.\n> > >\n> > > Benjamin.\n> > >\n> > >\n> > >\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> --\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n",
"msg_date": "Wed, 6 Apr 2005 14:18:21 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : RE: Postgresql vs SQLserver for thisapplication ?"
}
] |
[
{
"msg_contents": "Hi list,\n\nI noticed on a forum a query taking a surprisingly large amount of time \nin MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \nbetter. To my surprise PostgreSQL was ten times worse on the same \nmachine! And I don't understand why.\n\nI don't really need this query to be fast since I don't use it, but the \nrange-thing is not really an uncommon query I suppose. So I'm wondering \nwhy it is so slow and this may point to a wrong plan being chosen or \ngenerated.\n\nHere are table definitions:\n\n Table \"public.postcodes\"\n Column | Type | Modifiers\n-------------+---------------+-----------\n postcode_id | smallint | not null\n range_from | smallint |\n range_till | smallint |\nIndexes:\n \"postcodes_pkey\" PRIMARY KEY, btree (postcode_id)\n \"range\" UNIQUE, btree (range_from, range_till)\n\n Table \"public.data_main\"\n Column | Type | Modifiers\n--------+----------+-----------\n userid | integer | not null\n range | smallint |\nIndexes:\n \"data_main_pkey\" PRIMARY KEY, btree (userid)\n\nAnd here's the query I ran:\n\nSELECT COUNT(*) FROM\ndata_main AS dm,\npostcodes AS p\nWHERE dm.range BETWEEN p.range_from AND p.range_till\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=332586.85..332586.85 rows=1 width=0) (actual \ntime=22712.038..22712.039 rows=1 loops=1)\n -> Nested Loop (cost=3.76..328945.96 rows=1456356 width=0) (actual \ntime=0.054..22600.826 rows=82688 loops=1)\n Join Filter: ((\"outer\".range >= \"inner\".range_from) AND \n(\"outer\".range <= \"inner\".range_till))\n -> Seq Scan on data_main dm (cost=0.00..1262.20 rows=81920 \nwidth=2) (actual time=0.020..136.930 rows=81920 loops=1)\n -> Materialize (cost=3.76..5.36 rows=160 width=4) (actual \ntime=0.001..0.099 rows=160 loops=81920)\n -> Seq Scan on postcodes p (cost=0.00..3.60 rows=160 \nwidth=4) (actual time=0.010..0.396 rows=160 loops=1)\n Total runtime: 22712.211 ms\n\n\nWhen I do something completely bogus, which will result in coupling the \ndata per record from data_main on one record from postcodes, it still \nnot very fast but acceptable:\n\nSELECT COUNT(*) FROM\ndata_main AS dm,\npostcodes AS p\nWHERE dm.range / 10 = p.postcode_id\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=10076.98..10076.98 rows=1 width=0) (actual \ntime=1456.016..1456.017 rows=1 loops=1)\n -> Merge Join (cost=8636.81..9913.13 rows=65537 width=0) (actual \ntime=1058.105..1358.571 rows=81920 loops=1)\n Merge Cond: (\"outer\".postcode_id = \"inner\".\"?column2?\")\n -> Index Scan using postcodes_pkey on postcodes p \n(cost=0.00..5.76 rows=160 width=2) (actual time=0.034..0.507 rows=160 \nloops=1)\n -> Sort (cost=8636.81..8841.61 rows=81920 width=2) (actual \ntime=1057.698..1169.879 rows=81920 loops=1)\n Sort Key: (dm.range / 10)\n -> Seq Scan on data_main dm (cost=0.00..1262.20 \nrows=81920 width=2) (actual time=0.020..238.886 rows=81920 loops=1)\n Total runtime: 1461.156 ms\n\n\nDoing something similarily bogus, but with less results is much faster, \neven though it should have basically the same plan:\n\nSELECT COUNT(*) FROM\ndata_main AS dm,\npostcodes AS p\nWHERE dm.range = p.postcode_id\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate 
(cost=2138.63..2138.63 rows=1 width=0) (actual \ntime=180.667..180.668 rows=1 loops=1)\n -> Hash Join (cost=4.00..2087.02 rows=20642 width=0) (actual \ntime=180.645..180.645 rows=0 loops=1)\n Hash Cond: (\"outer\".range = \"inner\".postcode_id)\n -> Seq Scan on data_main dm (cost=0.00..1262.20 rows=81920 \nwidth=2) (actual time=0.005..105.548 rows=81920 loops=1)\n -> Hash (cost=3.60..3.60 rows=160 width=2) (actual \ntime=0.592..0.592 rows=0 loops=1)\n -> Seq Scan on postcodes p (cost=0.00..3.60 rows=160 \nwidth=2) (actual time=0.025..0.349 rows=160 loops=1)\n Total runtime: 180.807 ms\n(7 rows)\n\nIf you like to toy around with the datasets on your heavily optimized \npostgresql-installs, let me know. The data is just generated for \ntesting-purposes and I'd happily send a copy to anyone interested.\n\nBest regards,\n\nArjen van der Meijden\n",
"msg_date": "Wed, 06 Apr 2005 18:52:35 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plan for relatively simple query seems to be very inefficient"
},
{
"msg_contents": "On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:\n> Hi list,\n> \n> I noticed on a forum a query taking a surprisingly large amount of time \n> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \n> better. To my surprise PostgreSQL was ten times worse on the same \n> machine! And I don't understand why.\n> \n> I don't really need this query to be fast since I don't use it, but the \n> range-thing is not really an uncommon query I suppose. So I'm wondering \n> why it is so slow and this may point to a wrong plan being chosen or \n> generated.\n\nThat's the wrong index type for fast range queries. You really need\nsomething like GiST or rtree for that. I do something similar in\nproduction and queries are down at the millisecond level with the\nright index.\n\n\nCheers,\n Steve\n \n> Here are table definitions:\n> \n> Table \"public.postcodes\"\n> Column | Type | Modifiers\n> -------------+---------------+-----------\n> postcode_id | smallint | not null\n> range_from | smallint |\n> range_till | smallint |\n> Indexes:\n> \"postcodes_pkey\" PRIMARY KEY, btree (postcode_id)\n> \"range\" UNIQUE, btree (range_from, range_till)\n> \n> Table \"public.data_main\"\n> Column | Type | Modifiers\n> --------+----------+-----------\n> userid | integer | not null\n> range | smallint |\n> Indexes:\n> \"data_main_pkey\" PRIMARY KEY, btree (userid)\n> \n> And here's the query I ran:\n> \n> SELECT COUNT(*) FROM\n> data_main AS dm,\n> postcodes AS p\n> WHERE dm.range BETWEEN p.range_from AND p.range_till\n",
"msg_date": "Wed, 6 Apr 2005 10:04:12 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient"
},
{
"msg_contents": "On 6-4-2005 19:04, Steve Atkins wrote:\n> On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:\n> \n>>Hi list,\n>>\n>>I noticed on a forum a query taking a surprisingly large amount of time \n>>in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \n>>better. To my surprise PostgreSQL was ten times worse on the same \n>>machine! And I don't understand why.\n>>\n>>I don't really need this query to be fast since I don't use it, but the \n>>range-thing is not really an uncommon query I suppose. So I'm wondering \n>>why it is so slow and this may point to a wrong plan being chosen or \n>>generated.\n> \n> \n> That's the wrong index type for fast range queries. You really need\n> something like GiST or rtree for that. I do something similar in\n> production and queries are down at the millisecond level with the\n> right index.\n\nThat may be, but since that table is only two pages the index would \nprobably not be used even if it was rtree or GiST?\nBtw, \"access method \"rtree\" does not support multicolumn indexes\", I'd \nneed another way of storing it as well? Plus it doesn't support < and > \nso the query should be changed for the way ranges are checked.\n\nI'm not sure if the dataset is really suitable for other range checks. \nIt is a linear set of postal codes grouped by their number (range_from \nto range_till) into regions and the query basically joins the region to \neach records of a user table. Of course one could use lines on the \nx-axis and define the postal-code of a specific user as a point on one \nof those lines...\n\nBut nonetheless, /this/ query should be \"not that slow\" either, right?\n\nArjen\n",
"msg_date": "Wed, 06 Apr 2005 19:40:29 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient"
},
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> I noticed on a forum a query taking a surprisingly large amount of time \n> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \n> better. To my surprise PostgreSQL was ten times worse on the same \n> machine! And I don't understand why.\n\nWrong index ... what you probably could use here is an index on\ndata_main.range, so that the query could run with postcodes as the\nouter side. I get such a plan by default with empty tables:\n\n Aggregate (cost=99177.80..99177.80 rows=1 width=0)\n -> Nested Loop (cost=0.00..98021.80 rows=462400 width=0)\n -> Seq Scan on postcodes p (cost=0.00..30.40 rows=2040 width=4)\n -> Index Scan using rangei on data_main dm (cost=0.00..44.63 rows=227 width=2)\n Index Cond: ((dm.range >= \"outer\".range_from) AND (dm.range <= \"outer\".range_till))\n\nbut I'm not sure if the planner would prefer it with the tables loaded\nup. (It might not be the right thing anyway ... but seems worth\ntrying.)\n\nGiven the relatively small size of the postcodes table, and the fact\nthat each data_main row seems to join to about one postcodes row,\nit's possible that what the planner did for you was actually the\noptimal thing anyhow. I'm not sure that any range-capable index would\nbe faster than just scanning through 160 entries in memory ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 13:42:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient "
},
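The index Tom is suggesting, spelled out against Arjen's schema (the index name is arbitrary; his plan above shows it as "rangei"):

```sql
-- A btree index on the probed column, so the join can run with postcodes
-- as the outer side and an index scan on data_main as the inner side:
CREATE INDEX dm_range ON data_main (range);
ANALYZE data_main;

-- Same query as before, to compare plans with the index available:
EXPLAIN ANALYZE
SELECT COUNT(*) FROM data_main AS dm, postcodes AS p
WHERE dm.range BETWEEN p.range_from AND p.range_till;
```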
{
"msg_contents": "On 6-4-2005 19:42, Tom Lane wrote:\n> Arjen van der Meijden <[email protected]> writes:\n> \n>>I noticed on a forum a query taking a surprisingly large amount of time \n>>in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \n>>better. To my surprise PostgreSQL was ten times worse on the same \n>>machine! And I don't understand why.\n> \n> \n> Wrong index ... what you probably could use here is an index on\n> data_main.range, so that the query could run with postcodes as the\n> outer side. I get such a plan by default with empty tables:\n> \n> Aggregate (cost=99177.80..99177.80 rows=1 width=0)\n> -> Nested Loop (cost=0.00..98021.80 rows=462400 width=0)\n> -> Seq Scan on postcodes p (cost=0.00..30.40 rows=2040 width=4)\n> -> Index Scan using rangei on data_main dm (cost=0.00..44.63 rows=227 width=2)\n> Index Cond: ((dm.range >= \"outer\".range_from) AND (dm.range <= \"outer\".range_till))\n> \n> but I'm not sure if the planner would prefer it with the tables loaded\n> up. (It might not be the right thing anyway ... but seems worth\n> trying.)\n\nNo it didn't prefer it.\n\n> Given the relatively small size of the postcodes table, and the fact\n> that each data_main row seems to join to about one postcodes row,\n> it's possible that what the planner did for you was actually the\n> optimal thing anyhow. I'm not sure that any range-capable index would\n> be faster than just scanning through 160 entries in memory ...\n> \n> \t\t\tregards, tom lane\n\nYep, there is only one or in corner cases two postcode-ranges per \npostcode. Actually it should be only one, but my generated data is not \nperfect.\nBut the sequential scan per record is not really what surprises me, \nespecially since the postcode table is only two pages of data, I didn't \nreally expect otherwise.\nIt is the fact that it takes 22 seconds that surprises me. Especially \nsince the two other examples on the same data which consider about the \nsame amount of records per table/record only take 1.4 and 0.18 seconds.\n\nBest regards,\n\nArjen\n",
"msg_date": "Wed, 06 Apr 2005 20:00:09 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient"
},
{
"msg_contents": "Quoting Arjen van der Meijden <[email protected]>:\n\n> Hi list,\n> \n> I noticed on a forum a query taking a surprisingly large amount of time \n> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much \n> better. To my surprise PostgreSQL was ten times worse on the same \n> machine! And I don't understand why.\n> \n> I don't really need this query to be fast since I don't use it, but the \n> range-thing is not really an uncommon query I suppose. So I'm wondering \n> why it is so slow and this may point to a wrong plan being chosen or \n> generated.\n> \n> Here are table definitions:\n> \n> Table \"public.postcodes\"\n> Column | Type | Modifiers\n> -------------+---------------+-----------\n> postcode_id | smallint | not null\n> range_from | smallint |\n> range_till | smallint |\n> Indexes:\n> \"postcodes_pkey\" PRIMARY KEY, btree (postcode_id)\n> \"range\" UNIQUE, btree (range_from, range_till)\n> \n> Table \"public.data_main\"\n> Column | Type | Modifiers\n> --------+----------+-----------\n> userid | integer | not null\n> range | smallint |\n> Indexes:\n> \"data_main_pkey\" PRIMARY KEY, btree (userid)\n> \n> And here's the query I ran:\n> \n> SELECT COUNT(*) FROM\n> data_main AS dm,\n> postcodes AS p\n> WHERE dm.range BETWEEN p.range_from AND p.range_till\n\nI just posted an answer to this (via webcafe webmail; can't recall which\npg-list), that might interest you.\n\nBTree indexes as they stand (multi-column, ...) answer what most people need for\nqueries. Unfortunately, out-of-the-box, they have no good way of handling range\nqueries. To compensate, you can use a small amount of kinky SQL. This is in the\nsame line as the tricks used to implement hierarchic queries in relational SQL.\n\n[1] Create a table \"widths\"(wid int) of powers of 2, up to what will just cover\nmax(range_till-range_from). Since your \"range\" column is a smallint, this table\ncan have no more than 15 rows. You can get as fussy as you want about keeping\nthis table to a minimum.\n\n[2] Change postcodes:\n ALTER TABLE postcodes \n ADD wid INT USING 2 ^ CEIL(LOG(range_from - range_till,2));\n ALTER TABLE postcodes\n ADD start INT USING range_from - (range_from % wid);\n CREATE INDEX postcodes_wid_start_index ON (wid, start);\n ANALYZE postcodes;\n\n[4] Write your query as:\n SELECT COUNT(*)\n FROM data_main AS dm\n CROSS JOIN widths -- yes, CROSS JOIN. For once, it HELPS performance.\n JOIN postcodes AS p\n ON dm.wid = widths.wid AND dm.start = p.range - p.range % widths.wid\n WHERE dm.range BETWEEN p.range_from AND p.range_till\n\nThis uses BTREE exact-match to make a tight restriction on which rows to check.\nYMMV, but this has worked even for multi-M table joins.\n\n-- \n\"Dreams come true, not free.\"\n\n",
"msg_date": "Wed, 6 Apr 2005 11:35:53 -0700",
"msg_from": "Mischa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient"
},
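An untested variant of Mischa's sketch that also copes with ranges straddling a bucket boundary: store each range at the smallest width at which both endpoints land in the same bucket, and probe every width at query time. Names follow his post; the exact BETWEEN filter still does the final check.

```sql
-- Power-of-two bucket widths; 2^0 .. 2^15 is enough for smallint ranges.
CREATE TABLE widths (wid int PRIMARY KEY);
INSERT INTO widths SELECT (2 ^ s)::int FROM generate_series(0, 15) AS s;

-- Tag each postcode range with the smallest width at which it fits
-- entirely inside one bucket, and with that bucket's start.
ALTER TABLE postcodes ADD COLUMN wid int;
ALTER TABLE postcodes ADD COLUMN start int;

UPDATE postcodes
   SET wid = (SELECT min(w.wid)
                FROM widths w
               WHERE postcodes.range_from - postcodes.range_from % w.wid
                   = postcodes.range_till - postcodes.range_till % w.wid);
UPDATE postcodes SET start = range_from - range_from % wid;

CREATE INDEX postcodes_wid_start ON postcodes (wid, start);
ANALYZE postcodes;

-- Probe every width; for a given postcode row only its stored (wid, start)
-- pair can match, and the BETWEEN filter removes any false positives.
SELECT COUNT(*)
  FROM data_main dm
 CROSS JOIN widths w
  JOIN postcodes p
    ON p.wid = w.wid
   AND p.start = dm.range - dm.range % w.wid
 WHERE dm.range BETWEEN p.range_from AND p.range_till;
```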
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> On 6-4-2005 19:42, Tom Lane wrote:\n>> Wrong index ... what you probably could use here is an index on\n>> data_main.range, so that the query could run with postcodes as the\n>> outer side. I get such a plan by default with empty tables:\n>> but I'm not sure if the planner would prefer it with the tables loaded\n>> up. (It might not be the right thing anyway ... but seems worth\n>> trying.)\n\n> No it didn't prefer it.\n\nPlanner error ... because it doesn't have any good way to estimate the\nnumber of matching rows, it thinks that way is a bit more expensive than\ndata_main as the outside, but in reality it seems a good deal cheaper:\n\n\narjen=# set enable_seqscan TO 1;\nSET\narjen=# explain analyze\narjen-# SELECT COUNT(*) FROM data_main AS dm, postcodes AS p WHERE dm.range BETWEEN p.range_from AND p.range_till;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=332586.85..332586.85 rows=1 width=0) (actual time=143999.678..143999.683 rows=1 loops=1)\n -> Nested Loop (cost=3.76..328945.96 rows=1456356 width=0) (actual time=0.211..143549.461 rows=82688 loops=1)\n Join Filter: ((\"outer\".range >= \"inner\".range_from) AND (\"outer\".range <= \"inner\".range_till))\n -> Seq Scan on data_main dm (cost=0.00..1262.20 rows=81920 width=2) (actual time=0.059..663.065 rows=81920 loops=1)\n -> Materialize (cost=3.76..5.36 rows=160 width=4) (actual time=0.004..0.695 rows=160 loops=81920)\n -> Seq Scan on postcodes p (cost=0.00..3.60 rows=160 width=4) (actual time=0.028..1.589 rows=160 loops=1)\n Total runtime: 144000.415 ms\n(7 rows)\n\narjen=# set enable_seqscan TO 0;\nSET\narjen=# explain analyze\narjen-# SELECT COUNT(*) FROM data_main AS dm, postcodes AS p WHERE dm.range BETWEEN p.range_from AND p.range_till;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=100336307.18..100336307.18 rows=1 width=0) (actual time=2367.097..2367.102 rows=1 loops=1)\n -> Nested Loop (cost=100000000.00..100332666.28 rows=1456356 width=0) (actual time=0.279..1918.890 rows=82688 loops=1)\n -> Seq Scan on postcodes p (cost=100000000.00..100000003.60 rows=160 width=4) (actual time=0.060..1.381 rows=160 loops=1)\n -> Index Scan using dm_range on data_main dm (cost=0.00..1942.60 rows=9103 width=2) (actual time=0.034..7.511 rows=517 loops=160)\n Index Cond: ((dm.range >= \"outer\".range_from) AND (dm.range <= \"outer\".range_till))\n Total runtime: 2368.056 ms\n(6 rows)\n\n(this machine is slower than yours, plus I have profiling enabled still...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 16:51:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient "
},
{
"msg_contents": "I wrote:\n> Arjen van der Meijden <[email protected]> writes:\n>> SELECT COUNT(*) FROM\n>> data_main AS dm,\n>> postcodes AS p\n>> WHERE dm.range BETWEEN p.range_from AND p.range_till\n\n> Planner error ... because it doesn't have any good way to estimate the\n> number of matching rows, it thinks that way is a bit more expensive than\n> data_main as the outside, but in reality it seems a good deal cheaper:\n\nBTW, it would get the right answer if it had recognized the WHERE clause\nas a range restriction --- it still doesn't know exactly what fraction\nof rows will match, but its default estimate is a great deal tighter for\n\"WHERE x > something AND x < somethingelse\" than it is for two unrelated\ninequality constraints. Enough tighter that it would have gone for the\ncorrect plan.\n\nThe problem is that it doesn't recognize the WHERE as a range constraint\non dm.range. I thought for a moment that this might be a\nrecently-introduced bug, but actually the code is operating as designed:\nclauselist_selectivity says\n\n * See if it looks like a restriction clause with a pseudoconstant\n * on one side. (Anything more complicated than that might not\n * behave in the simple way we are expecting.)\n\n\"Pseudoconstant\" in this context means \"a constant, parameter symbol, or\nnon-volatile functions of these\" ... so comparisons against values from\nanother table don't qualify. It seems like we're missing a bet though.\n\nCan anyone suggest a more general rule? Do we need for example to\nconsider whether the relation membership is the same in two clauses\nthat might be opposite sides of a range restriction? It seems like\n\n\ta.x > b.y AND a.x < b.z\n\nprobably can be treated as a range restriction on a.x for this purpose,\nbut I'm much less sure that the same is true of\n\n\ta.x > b.y AND a.x < c.z\n\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 18:09:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
},
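The difference in default estimates is visible from EXPLAIN alone, reusing Arjen's tables (the literal bounds below are arbitrary):

```sql
-- Both bounds are pseudoconstants, so this is recognized as a range
-- restriction on dm.range and gets a fairly tight default estimate:
EXPLAIN SELECT * FROM data_main AS dm
WHERE dm.range > 100 AND dm.range < 200;

-- Here the bounds come from another table, so the two inequalities are
-- estimated independently and the join is thought to be much larger
-- than it really is:
EXPLAIN SELECT * FROM data_main AS dm, postcodes AS p
WHERE dm.range > p.range_from AND dm.range < p.range_till;
```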
{
"msg_contents": "On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:\n> Can anyone suggest a more general rule? Do we need for example to\n> consider whether the relation membership is the same in two clauses\n> that might be opposite sides of a range restriction? It seems like\n> \n> \ta.x > b.y AND a.x < b.z\n\nIn a case like this, you could actually look at the data in b and see\nwhat the average range size is. If you wanted to get really fancy, the\noptimizer could decide how best to access a based on each row of b.\n\n> probably can be treated as a range restriction on a.x for this purpose,\n> but I'm much less sure that the same is true of\n> \n> \ta.x > b.y AND a.x < c.z\n\nWell, this could end up being much trickier, since who knows how b and c\nare related. Though thinking about it, although I threw out the\nrow-by-row analysis idea to be glib, that would actually work in this\ncase; you could take a look at what b and c look like each time 'through\nthe loop'.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 6 Apr 2005 17:25:36 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Recognizing range constraints (was Re: Plan for\n\trelatively simple query seems to be very inefficient)"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:\n>> Can anyone suggest a more general rule? Do we need for example to\n>> consider whether the relation membership is the same in two clauses\n>> that might be opposite sides of a range restriction? It seems like\n>> \n>> a.x > b.y AND a.x < b.z\n\n> In a case like this, you could actually look at the data in b and see\n> what the average range size is.\n\nNot with the current statistics --- you'd need some kind of cross-column\nstatistics involving both y and z. (That is, I doubt it would be\nhelpful to estimate the average range width by taking the difference of\nindependently-calculated mean values of y and z ...) But yeah, in\nprinciple it would be possible to make a non-default estimate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 18:35:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Recognizing range constraints (was Re: Plan for\n\trelatively simple query seems to be very inefficient)"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n>\n>>On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:\n>>\n>>>Can anyone suggest a more general rule? Do we need for example to\n>>>consider whether the relation membership is the same in two clauses\n>>>that might be opposite sides of a range restriction? It seems like\n>>>\n>>>a.x > b.y AND a.x < b.z\n>\n>\n>>In a case like this, you could actually look at the data in b and see\n>>what the average range size is.\n>\n>\n> Not with the current statistics --- you'd need some kind of cross-column\n> statistics involving both y and z. (That is, I doubt it would be\n> helpful to estimate the average range width by taking the difference of\n> independently-calculated mean values of y and z ...) But yeah, in\n> principle it would be possible to make a non-default estimate.\n>\n> \t\t\tregards, tom lane\n\nActually, I think he was saying do a nested loop, and for each item in\nthe nested loop, re-evaluate if an index or a sequential scan is more\nefficient.\n\nI don't think postgres re-plans once it has started, though you could\ntest this in a plpgsql function.\n\nJohn\n=:->",
"msg_date": "Wed, 06 Apr 2005 17:54:07 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Recognizing range constraints (was Re: Plan"
},
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> Actually, I think he was saying do a nested loop, and for each item in\n> the nested loop, re-evaluate if an index or a sequential scan is more\n> efficient.\n\n> I don't think postgres re-plans once it has started, though you could\n> test this in a plpgsql function.\n\nIt doesn't, and in any case that's a microscopic view of the issue.\nThe entire shape of the plan might change depending on what we think\nthe selectivity is --- much more than could be handled by switching\nscan types at the bottom level.\n\nAlso, I anticipate that bitmap-driven index scans will change things\nconsiderably here. The range of usefulness of pure seqscans will\ndrop drastically...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 19:06:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Recognizing range constraints (was Re: Plan for\n\trelatively simple query seems to be very inefficient)"
},
{
"msg_contents": "On Wed, 2005-04-06 at 18:09 -0400, Tom Lane wrote:\n> I wrote:\n> > Arjen van der Meijden <[email protected]> writes:\n> >> SELECT COUNT(*) FROM\n> >> data_main AS dm,\n> >> postcodes AS p\n> >> WHERE dm.range BETWEEN p.range_from AND p.range_till\n> \n> > Planner error ... because it doesn't have any good way to estimate the\n> > number of matching rows, it thinks that way is a bit more expensive than\n> > data_main as the outside, but in reality it seems a good deal cheaper:\n> \n> BTW, it would get the right answer if it had recognized the WHERE clause\n> as a range restriction --- it still doesn't know exactly what fraction\n> of rows will match, but its default estimate is a great deal tighter for\n> \"WHERE x > something AND x < somethingelse\" than it is for two unrelated\n> inequality constraints. Enough tighter that it would have gone for the\n> correct plan.\n> \n> The problem is that it doesn't recognize the WHERE as a range constraint\n> on dm.range. \n\n> Can anyone suggest a more general rule? Do we need for example to\n> consider whether the relation membership is the same in two clauses\n> that might be opposite sides of a range restriction? It seems like\n> \n> \ta.x > b.y AND a.x < b.z\n\nNot sure we need a more general rule. There's only three ways to view\nthis pair of clauses:\ni) its a range constraint i.e. BETWEEN\nii) its the complement of that i.e. NOT BETWEEN\niii) its a mistake, but we're not allowed to take that path\n\nArjen's query and your generalisation of it above is a common type of\nquery - using a lookup of a reference data table with begin/end\neffective dates. It would be very useful if this was supported.\n\n> probably can be treated as a range restriction on a.x for this purpose,\n> but I'm much less sure that the same is true of\n> \n> \ta.x > b.y AND a.x < c.z\n\nI can't think of a query that would use such a construct, and might even\nconclude that it was very poorly normalised model. I would suggest that\nthis is much less common in practical use. \n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 07 Apr 2005 00:24:46 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Can anyone suggest a more general rule?\n\n> I think it makes sense to guess that a smaller fraction of the rows will\n> be returned when a column value is bounded above and below than if it\n> is only bounded on one side, even if the bounds aren't fixed. You can\n> certainly be wrong.\n\nYeah, the whole thing is only a heuristic anyway. I've been coming\naround to the view that relation membership shouldn't matter, because\nof cases like\n\n\tWHERE a.x > b.y AND a.x < 42\n\nwhich surely should be taken as a range constraint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 10:20:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
},
{
"msg_contents": "On Wed, Apr 06, 2005 at 18:09:37 -0400,\n Tom Lane <[email protected]> wrote:\n> \n> Can anyone suggest a more general rule? Do we need for example to\n> consider whether the relation membership is the same in two clauses\n> that might be opposite sides of a range restriction? It seems like\n> \n> \ta.x > b.y AND a.x < b.z\n> \n> probably can be treated as a range restriction on a.x for this purpose,\n> but I'm much less sure that the same is true of\n> \n> \ta.x > b.y AND a.x < c.z\n> \n> Thoughts?\n\nI think it makes sense to guess that a smaller fraction of the rows will\nbe returned when a column value is bounded above and below than if it\nis only bounded on one side, even if the bounds aren't fixed. You can\ncertainly be wrong. The difference between this and the normal case is that\ncolumn statistics aren't normally going to be that useful.\n\nIf date/time ranges are the common use for this construct, it might be better\nto create date and/or time range types that use rtree or gist indexes.\n",
"msg_date": "Thu, 7 Apr 2005 09:31:20 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
},
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Yeah, the whole thing is only a heuristic anyway. I've been coming\n> around to the view that relation membership shouldn't matter, because\n> of cases like\n> \n> \tWHERE a.x > b.y AND a.x < 42\n> \n> which surely should be taken as a range constraint.\n\nOut of curiosity, will the planner induce \"b.y < 42\" out of this?\n\n-- \n\"Dreams come true, not free.\"\n\n",
"msg_date": "Thu, 7 Apr 2005 14:26:38 -0700",
"msg_from": "Mischa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
},
{
"msg_contents": "On Wed, Apr 06, 2005 at 06:35:10PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:\n> >> Can anyone suggest a more general rule? Do we need for example to\n> >> consider whether the relation membership is the same in two clauses\n> >> that might be opposite sides of a range restriction? It seems like\n> >> \n> >> a.x > b.y AND a.x < b.z\n> \n> > In a case like this, you could actually look at the data in b and see\n> > what the average range size is.\n> \n> Not with the current statistics --- you'd need some kind of cross-column\n> statistics involving both y and z. (That is, I doubt it would be\n> helpful to estimate the average range width by taking the difference of\n> independently-calculated mean values of y and z ...) But yeah, in\n> principle it would be possible to make a non-default estimate.\n\nActually, it might be possible to take a SWAG at it using the histogram\nand correlation stats.\n\nYou know... since getting universally useful cross-platform stats seems\nto be pretty pie-in-the-sky, would it be possible to generate more\ncomplex stats on the fly from a sampling of a table? If you're looking\nat a fairly sizeable table ISTM it would be worth sampling the rows on\n10 or 20 random pages to see what you get. In this case, you'd want to\nknow the average difference between two fields. Other queries might want\nsomething different.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 7 Apr 2005 16:40:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Recognizing range constraints (was Re: Plan for\n\trelatively simple query seems to be very inefficient)"
},
{
"msg_contents": "Mischa <[email protected]> writes:\n> Quoting Tom Lane <[email protected]>:\n>> WHERE a.x > b.y AND a.x < 42\n\n> Out of curiosity, will the planner induce \"b.y < 42\" out of this?\n\nNo. There's some smarts about transitive equality, but none about\ntransitive inequalities. Offhand I'm not sure if it'd be useful to add\nsuch. The transitive-equality code pulls its weight because you so\noften have situations like\n\n\tcreate view v as select a.x, ... from a join b on (a.x = b.y);\n\n\tselect * from v where x = 42;\n\nbut I'm less able to think of common use-cases for transitive\ninequality ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 19:58:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
},
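Tom's transitive-equality example, filled out just enough to be runnable; the table definitions are illustrative only.

CREATE TABLE a (x int, payload text);
CREATE TABLE b (y int, other text);

CREATE VIEW v AS
    SELECT a.x, a.payload, b.other
      FROM a JOIN b ON (a.x = b.y);

-- The equality is propagated: "x = 42" constrains both a.x and b.y,
-- so either side of the join can use an index on its column.
SELECT * FROM v WHERE x = 42;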
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Mischa <[email protected]> writes:\n> > Quoting Tom Lane <[email protected]>:\n> >> WHERE a.x > b.y AND a.x < 42\n> \n> > Out of curiosity, will the planner induce \"b.y < 42\" out of this?\n> \n> No. There's some smarts about transitive equality, but none about\n> transitive inequalities. Offhand I'm not sure if it'd be useful to add\n> such. The transitive-equality code pulls its weight [...]\n> but I'm less able to think of common use-cases for transitive\n> inequality ...\n\nThanks. My apologies for not just going and looking at the code first.\n\nEquality-transitives: yes, worth their weight in gold.\nInequality-transitivies: I see in OLAP queries (usually ranges), or in queries\nagainst big UNION ALL views, where const false inequalities are the norm.\n\"a.x > b.y and a.x < c.z\" comes up in OLAP, too, usually inside an EXISTS(...),\nwhere you are doing something analogous to finding a path.\n\n\n\n",
"msg_date": "Thu, 7 Apr 2005 17:30:11 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Recognizing range constraints (was Re: Plan for relatively simple\n\tquery seems to be very inefficient)"
}
] |
[
{
"msg_contents": "On our production server, I can insert 5000 tuples in 2100 ms. \n\nSingle Xeon 2.6 Ghz\n2 Gigs ram\n3ware RAID 5 SATA drives array, 3 drives only :-((\nPG 8.0 - fsync off \n\nI do think inserting 5000 tuples in a second (i.e 5000 insert \ntransactions, no bulk load) can be reached with well a configured SCSI \nRAID 10 array.\n\nAnyway it was a MISTAKE in my former description of the project : (sorry \nfor this)\n\n - we need 5000 inserts per MINUTE\n\nMy question remain :\n\n Is pgcluster worth giving a try and can it be trusted for in a \nproduction environnement ?\n Will it be possible to get a sort of real-time application ?\n\n\nThanks for all your comments.\nBenjamin.\n\n\n\n\n \n\n\n\n\nRod Taylor <[email protected]>\nEnvoyé par : [email protected]\n06/04/2005 18:40\n\n \n Pour : \"Mohan, Ross\" <[email protected]>\n cc : [email protected]\n Objet : Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this\n\n\nOn Wed, 2005-04-06 at 16:12 +0000, Mohan, Ross wrote:\n> I wish I had a Dell system and run case to show you Alex, but I don't...\n> however...using Oracle's \"direct path\" feature, it's pretty \nstraightforward. \n> \n> We've done 110,000 rows per second into index-less tables on a big \nsystem\n> (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. \nSustained\n> for almost 9 minutes. )\n\nJust for kicks I did a local test on a desktop machine (single CPU,\nsingle IDE drive) using COPY from STDIN for a set of integers in via a\nsingle transaction, no indexes.\n\n1572864 tuples were loaded in 13715.613ms, which is approx 115k rows per\nsecond.\n\nOkay, no checkpoints and I didn't cross an index boundary, but I also\nhaven't tuned the config file beyond bumping up the buffers.\n\nLets try again with more data this time.\n\n31Million tuples were loaded in approx 279 seconds, or approx 112k rows\nper second.\n\n> I'd love to see PG get into this range..i am a big fan of PG (just a\n> rank newbie) but I gotta think the underlying code to do this has\n> to be not-too-complex.....\n\nI'd say we're there.\n\n> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]] \n> Sent: Wednesday, April 06, 2005 11:38 AM\n> To: [email protected]\n> Cc: [email protected]; Mohan, Ross\n> Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this \napplication ?\n> \n> \n> I think everyone was scared off by the 5000 inserts per second number.\n> \n> I've never seen even Oracle do this on a top end Dell system with \ncopious SCSI attached storage.\n> \n> Alex Turner\n> netEconomist\n> \n> On Apr 6, 2005 3:17 AM, [email protected] <[email protected]> wrote:\n> > \n> > Unfortunately.\n> > \n> > But we are in the the process to choose Postgresql with pgcluster. I'm \n\n> > currently running some tests (performance, stability...) Save the \n> > money on the license fees, you get it for your hardware ;-)\n> > \n> > I still welcome any advices or comments and I'll let you know how the \n> > project is going on.\n> > \n> > Benjamin.\n> > \n> > \n> > \n> > \"Mohan, Ross\" <[email protected]>\n> > \n> > 05/04/2005 20:48\n> > \n> > Pour : <[email protected]> \n> > cc : \n> > Objet : RE: [PERFORM] Postgresql vs SQLserver for this\n> > application ?\n> > \n> > \n> > You never got answers on this? 
Apologies, I don't have one, but'd be \n> > curious to hear about any you did get....\n> > \n> > thx\n> > \n> > Ross\n> > \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf\n> > Of [email protected]\n> > Sent: Monday, April 04, 2005 4:02 AM\n> > To: [email protected]\n> > Subject: [PERFORM] Postgresql vs SQLserver for this application ?\n> > \n> > \n> > hi all.\n> > \n> > We are designing a quite big application that requires a \n> > high-performance database backend. The rates we need to obtain are at \n\n> > least 5000 inserts per second and 15 selects per second for one \n> > connection. There should only be 3 or 4 simultaneous connections.\n> > I think our main concern is to deal with the constant flow of data \ncoming\n> > from the inserts that must be available for selection as fast as \npossible.\n> > (kind of real time access ...) \n> > \n> > As a consequence, the database should rapidly increase up to more \n> > than one hundred gigs. We still have to determine how and when we \n> > shoud backup old data to prevent the application from a performance \n> > drop. We intend to develop some kind of real-time partionning on our \n> > main table keep the flows up.\n> > \n> > At first, we were planning to use SQL Server as it has features that \n> > in my opinion could help us a lot :\n> > - replication \n> > - clustering\n> > \n> > Recently we started to study Postgresql as a solution for our project \n: \n> > - it also has replication \n> > - Postgis module can handle geographic datatypes (which would \n> > facilitate our developments)\n> > - We do have a strong knowledge on Postgresql administration \n> > (we use it for production processes)\n> > - it is free (!) and we could save money for hardware \n> > purchase.\n> > \n> > Is SQL server clustering a real asset ? How reliable are Postgresql \n> > replication tools ? Should I trust Postgresql performance for this \n> > kind of needs ?\n> > \n> > My question is a bit fuzzy but any advices are most welcome... \n> > hardware,tuning or design tips as well :))\n> > \n> > Thanks a lot.\n> > \n> > Benjamin.\n> > \n> > \n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n-- \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match",
"msg_date": "Wed, 6 Apr 2005 19:08:46 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "=?iso-8859-1?Q?R=E9f=2E_=3A_Re=3A_RE_=3A_RE=3A__Postgresql_vs?=\n\tSQLserver for this"
},
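A rough sketch of the kind of load test Rod describes in the quoted message; his exact script is not shown in the thread, so the table name and row count here are arbitrary, and generate_series() assumes PostgreSQL 8.0 or later.

-- Minimal COPY-style throughput check from psql:
CREATE TABLE load_test (id int);

\timing
COPY load_test FROM STDIN;
-- ... newline-terminated data rows, ended by \. ...

-- Or an entirely server-side figure for rows per second:
INSERT INTO load_test SELECT g FROM generate_series(1, 1000000) AS g;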
{
"msg_contents": "On Wed, 2005-04-06 at 19:08 +0200, [email protected] wrote:\n> \n> On our production server, I can insert 5000 tuples in 2100 ms. \n> \n> Single Xeon 2.6 Ghz \n> 2 Gigs ram \n> 3ware RAID 5 SATA drives array, 3 drives only :-(( \n> PG 8.0 - fsync off \n> \n> I do think inserting 5000 tuples in a second (i.e 5000 insert\n> transactions, no bulk load) can be reached with well a configured SCSI\n> RAID 10 array. \n\nYeah, I think that can be done provided there is more than one worker.\nMy limit seems to be about 1000 transactions per second each with a\nsingle insert for a single process (round trip time down the Fibre\nChannel is large) but running 4 simultaneously only drops throughput to\nabout 900 per process (total of 2400 transactions per second) and the\nmachine still seemed to have lots of oomph to spare.\n\nAlso worth noting is that this test was performed on a machine which as\na noise floor receives about 200 queries per second, which it was\nserving during the test.\n\n> Is pgcluster worth giving a try and can it be trusted for in a\n> production environnement ? \n> Will it be possible to get a sort of real-time application ? \n\n>From the design of pgcluster it looks like it adds in a significant\namount of additional communication so expect your throughput for a\nsingle process to drop through the floor.\n\n-- \n\n",
"msg_date": "Wed, 06 Apr 2005 13:18:29 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?ISO-8859-1?Q?R=E9f=2E?= : Re: RE : RE: Postgresql"
},
{
"msg_contents": "On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:\n> Yeah, I think that can be done provided there is more than one worker.\n> My limit seems to be about 1000 transactions per second each with a\n> single insert for a single process (round trip time down the Fibre\n> Channel is large) but running 4 simultaneously only drops throughput to\n> about 900 per process (total of 2400 transactions per second) and the\n> machine still seemed to have lots of oomph to spare.\n\nErm, have I missed something here? 900 * 4 = 2400?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Apr 2005 19:42:54 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?Q?R=C3=A9f?="
},
{
"msg_contents": "On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:\n> On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:\n> > Yeah, I think that can be done provided there is more than one worker.\n> > My limit seems to be about 1000 transactions per second each with a\n> > single insert for a single process (round trip time down the Fibre\n> > Channel is large) but running 4 simultaneously only drops throughput to\n> > about 900 per process (total of 2400 transactions per second) and the\n> > machine still seemed to have lots of oomph to spare.\n> \n> Erm, have I missed something here? 900 * 4 = 2400?\n\nNope. You've not missed anything.\n\nIf I ran 10 processes and the requirement would be met.\n-- \n\n",
"msg_date": "Wed, 06 Apr 2005 14:23:10 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?ISO-8859-1?Q?R=E9f?="
},
{
"msg_contents": "I think his point was that 9 * 4 != 2400\n\nAlex Turner\nnetEconomist\n\nOn Apr 6, 2005 2:23 PM, Rod Taylor <[email protected]> wrote:\n> On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:\n> > On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:\n> > > Yeah, I think that can be done provided there is more than one worker.\n> > > My limit seems to be about 1000 transactions per second each with a\n> > > single insert for a single process (round trip time down the Fibre\n> > > Channel is large) but running 4 simultaneously only drops throughput to\n> > > about 900 per process (total of 2400 transactions per second) and the\n> > > machine still seemed to have lots of oomph to spare.\n> >\n> > Erm, have I missed something here? 900 * 4 = 2400?\n> \n> Nope. You've not missed anything.\n> \n> If I ran 10 processes and the requirement would be met.\n> --\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Wed, 6 Apr 2005 14:40:29 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?ISO-8859-1?Q?Re:__R=E9f?="
},
{
"msg_contents": "On Wed, 2005-04-06 at 14:40 -0400, Alex Turner wrote:\n> I think his point was that 9 * 4 != 2400\n\nOh.. heh.. I didn't even notice that.\n\nCan I pretend I did it in my head using HEX math and that it wasn't a\nmistake?\n\n> On Apr 6, 2005 2:23 PM, Rod Taylor <[email protected]> wrote:\n> > On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:\n> > > On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:\n> > > > Yeah, I think that can be done provided there is more than one worker.\n> > > > My limit seems to be about 1000 transactions per second each with a\n> > > > single insert for a single process (round trip time down the Fibre\n> > > > Channel is large) but running 4 simultaneously only drops throughput to\n> > > > about 900 per process (total of 2400 transactions per second) and the\n> > > > machine still seemed to have lots of oomph to spare.\n> > >\n> > > Erm, have I missed something here? 900 * 4 = 2400?\n> > \n> > Nope. You've not missed anything.\n> > \n> > If I ran 10 processes and the requirement would be met.\n> > --\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n-- \n\n",
"msg_date": "Wed, 06 Apr 2005 15:01:52 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?ISO-8859-1?Q?R=E9f?="
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Arjen van der Meijden \n> [mailto:[email protected]]\n> Sent: Wednesday, April 06, 2005 11:53 AM\n> To: performance pgsql\n> Subject: [PERFORM] Plan for relatively simple query seems to be very\n> inefficient\n> \n> [...]\n> SELECT COUNT(*) FROM\n> data_main AS dm,\n> postcodes AS p\n> WHERE dm.range BETWEEN p.range_from AND p.range_till\n> [...]\n> Aggregate (cost=332586.85..332586.85 rows=1 width=0) (actual \n> time=22712.038..22712.039 rows=1 loops=1)\n> -> Nested Loop (cost=3.76..328945.96 rows=1456356 \n> width=0) (actual \n> time=0.054..22600.826 rows=82688 loops=1)\n\nI'm still a noob at reading EXPLAIN ANALYZE, but it seems to me\nthat your statistics are throwing off the planner here. It \nestimates 1.4M and gets 82K, so it's off by a factor of about 20. \nHave you considered doing a VACUUM or upping your statistics?\n\n> [...]\n> When I do something completely bogus, which will result in \n> coupling the data per record from data_main on one record from\n> postcodes, it still not very fast but acceptable:\n> [...]\n> Aggregate (cost=10076.98..10076.98 rows=1 width=0) (actual \n> time=1456.016..1456.017 rows=1 loops=1)\n> -> Merge Join (cost=8636.81..9913.13 rows=65537 \n> width=0) (actual \n> time=1058.105..1358.571 rows=81920 loops=1)\n\nLooks like Merge Join is faster than the Nested Loop for this\nquery. If you notice, the row counts are a lot closer to the\nestimates, too. This is probably a \"good\" plan.\n\n> [...]\n> Doing something similarily bogus, but with less results is \n> much faster, even though it should have basically the same\n> plan:\n> \n> SELECT COUNT(*) FROM\n> data_main AS dm,\n> postcodes AS p\n> WHERE dm.range = p.postcode_id\n> [...]\n> Aggregate (cost=2138.63..2138.63 rows=1 width=0) (actual \n> time=180.667..180.668 rows=1 loops=1)\n> -> Hash Join (cost=4.00..2087.02 rows=20642 width=0) (actual \n> time=180.645..180.645 rows=0 loops=1)\n\nThis one I don't understand at all. Clearly, the Hash Join is\nthe way to go, but the estimates are way off (which probably \nexplains why this plan isn't chosen in the first place).\n\n> Hash Cond: (\"outer\".range = \"inner\".postcode_id)\n> -> Seq Scan on data_main dm (cost=0.00..1262.20 \n> rows=81920 \n> width=2) (actual time=0.005..105.548 rows=81920 loops=1)\n> -> Hash (cost=3.60..3.60 rows=160 width=2) (actual \n> time=0.592..0.592 rows=0 loops=1)\n> -> Seq Scan on postcodes p (cost=0.00..3.60 \n> rows=160 \n> width=2) (actual time=0.025..0.349 rows=160 loops=1)\n> Total runtime: 180.807 ms\n> (7 rows)\n> [...]\n\nMy completely amateur guess is that the planner is able to use\nMerge Join and Hash Join on your contrived queries because you\nare only trying to join one field to a single value (i.e.:\noperator=). But the BETWEEN clause is what forces the Nested\nLoop. You can see that here:\n\n -> Seq Scan on postcodes p (cost=0.00..3.60 rows=160 \nwidth=4) (actual time=0.010..0.396 rows=160 loops=1)\nvs. here:\n\n -> Index Scan using postcodes_pkey on postcodes p \n(cost=0.00..5.76 rows=160 width=2) (actual time=0.034..0.507 rows=160 \nloops=1)\n\nSo the first query forces a SeqScan on postcodes, while the\nsecond can do an IndexScan.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Wed, 6 Apr 2005 12:45:31 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient"
},
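What "doing a VACUUM or upping your statistics" looks like in practice; the statistics target of 100 is an arbitrary example (the default at the time was 10), not a tested recommendation.

ALTER TABLE data_main ALTER COLUMN range SET STATISTICS 100;
ANALYZE data_main;
ANALYZE postcodes;

-- Then re-check how close the estimates are to the actual row counts:
EXPLAIN ANALYZE
SELECT COUNT(*)
  FROM data_main AS dm, postcodes AS p
 WHERE dm.range BETWEEN p.range_from AND p.range_till;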
{
"msg_contents": "\"Dave Held\" <[email protected]> writes:\n> My completely amateur guess is that the planner is able to use\n> Merge Join and Hash Join on your contrived queries because you\n> are only trying to join one field to a single value (i.e.:\n> operator=). But the BETWEEN clause is what forces the Nested\n> Loop. You can see that here:\n\nYeah --- both merge and hash join are only usable for equality joins.\n(Thinking about it, it seems possible that mergejoin could be extended\nto work for range joins, but we're certainly far from being able to\ndo that today.) So the basic alternatives the planner has are nestloops\nwith either postcode on the outside, or data_main on the outside. The\npostcode-on-the-outside case would be plausible with an index on\ndata_main.range, but Arjen didn't have one. The data_main-on-the-outside\ncase could only use an index if the index was range-query-capable, which\na 2-column btree index isn't. Given the small size of the postcodes\ntable it's not real clear that an index probe would be much of a win\nanyway over a simple sequential scan.\n\nComparing the nestloop case to the hash case does make one think that\nthere's an awful lot of overhead somewhere, though. Two int2\ncomparisons ought not take very long :-(. Arjen, are you interested\nin getting a gprof profile of what the backend is doing in the nestloop\n-with-materialize plan? Or if you don't want to mess with it, please\nsend me the data off-list and I'll run a profile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 14:09:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient "
},
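For completeness, the index Tom says was missing; building it makes the postcodes-on-the-outside nestloop plan possible, though whether the planner then actually chooses it still depends on the row estimates discussed above.

CREATE INDEX data_main_range_idx ON data_main (range);
ANALYZE data_main;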
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> On 6-4-2005 20:09, Tom Lane wrote:\n>> Comparing the nestloop case to the hash case does make one think that\n>> there's an awful lot of overhead somewhere, though. Two int2\n>> comparisons ought not take very long :-(. Arjen, are you interested\n>> in getting a gprof profile of what the backend is doing in the nestloop\n>> -with-materialize plan? Or if you don't want to mess with it, please\n>> send me the data off-list and I'll run a profile.\n\n> Here you go, both are full pg_dump-dumps with create-data (including the \n> index on data_main.range).\n\nWell, indeed int2ge and int2le are pretty far down the list, but the\nstuff that's near the top has already been beat on pretty heavily :-(.\nI'm not sure there is a lot we can do about this short of a wholesale\nredesign of the way we do expression evaluation.\n\nFlat profile:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 36.14 21.30 21.30 _mcount\n 7.62 25.79 4.49 13412606 0.00 0.00 ExecMakeFunctionResultNoSets\n 5.46 29.01 3.22 26825216 0.00 0.00 slot_getattr\n 4.19 31.48 2.47 26825216 0.00 0.00 ExecEvalVar\n 3.87 33.76 2.28 13189120 0.00 0.00 ExecMaterial\n 3.38 35.75 1.99 13494688 0.00 0.00 slot_deform_tuple\n 3.38 37.74 1.99 noshlibs\n 3.12 39.58 1.84 13353893 0.00 0.00 ExecProcNode\n 2.99 41.34 1.76 13107201 0.00 0.00 ExecQual\n 2.90 43.05 1.71 13271974 0.00 0.00 AllocSetReset\n 2.72 44.65 1.60 ExecEvalVar\n 2.43 46.08 1.43 $$dyncall\n 2.24 47.40 1.32 13271972 0.00 0.00 MemoryContextReset\n 2.24 48.72 1.32 13188960 0.00 0.00 tuplestore_gettuple\n 2.12 49.97 1.25 13189441 0.00 0.00 ExecStoreTuple\n 1.80 51.03 1.06 82689 0.01 0.06 ExecNestLoop\n 1.70 52.03 1.00 13354235 0.00 0.00 ExecClearTuple\n 1.63 52.99 0.96 13412761 0.00 0.00 check_stack_depth\n 1.58 53.92 0.93 AllocSetReset\n 1.29 54.68 0.76 int2ge\n 1.20 55.39 0.71 ExecMakeFunctionResultNoSets\n 1.14 56.06 0.67 13107200 0.00 0.00 int2ge\n 1.05 56.68 0.62 ExecEvalCoerceToDomain\n 1.04 57.29 0.61 13189120 0.00 0.00 tuplestore_ateof\n 0.64 57.67 0.38 13271972 0.00 0.00 MemoryContextResetChildren\n 0.41 57.91 0.24 readtup_heap\n 0.36 58.12 0.21 log_disconnections\n 0.24 58.26 0.14 BlessTupleDesc\n 0.19 58.37 0.11 ExecCountSlotsMaterial\n 0.14 58.45 0.08 MemoryContextAllocZeroAligned\n 0.12 58.52 0.07 ExecProcNode\n 0.10 58.58 0.06 int42div\n 0.08 58.63 0.05 AllocSetStats\n 0.05 58.66 0.03 166022 0.00 0.00 LockBuffer\n 0.05 58.69 0.03 82688 0.00 0.00 advance_transition_function\n 0.05 58.72 0.03 82080 0.00 0.00 HeapTupleSatisfiesSnapshot\n 0.05 58.75 0.03 ExecInitNestLoop\n 0.03 58.77 0.02 SeqNext\n 0.02 58.78 0.01 305408 0.00 0.00 int2le\n 0.02 58.79 0.01 84231 0.00 0.00 LWLockAcquire\n 0.02 58.80 0.01 82849 0.00 0.00 ExecProject\n 0.02 58.81 0.01 82848 0.00 0.00 ExecVariableList\n 0.02 58.82 0.01 82844 0.00 0.00 ResourceOwnerEnlargeBuffers\n 0.02 58.83 0.01 82844 0.00 0.00 ResourceOwnerRememberBuffer\n 0.02 58.84 0.01 82813 0.00 0.00 ReleaseAndReadBuffer\n 0.02 58.85 0.01 82688 0.00 0.00 ExecEvalConst\n 0.02 58.86 0.01 82688 0.00 0.00 ExecEvalExprSwitchContext\n 0.02 58.87 0.01 82688 0.00 0.00 advance_aggregates\n 0.02 58.88 0.01 82084 0.00 0.00 heapgettup\n 0.02 58.89 0.01 81920 0.00 0.00 ExecMaterialReScan\n 0.02 58.90 0.01 81920 0.00 0.00 ExecReScan\n 0.02 58.91 0.01 19 0.53 0.53 downcase_truncate_identifier\n 0.02 58.92 0.01 10 1.00 1.00 AllocateFile\n 0.02 58.93 0.01 1 10.00 70.59 agg_retrieve_direct\n[ nothing else shows as 
having any sample hits ]\n\n_mcount is profiler overhead, in case you were wondering; ignore it and\nmentally scale all the other percentages up by 20% or so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Apr 2005 15:54:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan for relatively simple query seems to be very inefficient "
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Mischa [mailto:[email protected]]\n> Sent: Wednesday, April 06, 2005 1:47 PM\n> To: [email protected]\n> Subject: COPY Hacks (WAS: RE: [PERFORM] Postgresql vs \n> SQLserver for this\n> application ?)\n> \n> [...]\n> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,\n> I accidentally strung together several \\n-terminated input lines,\n> and sent them to the server with a single \"putline\".\n> \n> To my (happy) surprise, I ended up with exactly that number of\n> rows in the target table.\n> \n> Is this a bug? Is this fundamental to the protocol?\n\nJust guessing without looking at the code, I assume that the\nserver doesn't care if you send your data in lines, words, or\nmassive blocks. It simply looks for the newline terminator to\ndetermine end-of-block. The reason putline works nicely is\nprobably that it terminates your rows with a newline character.\nBut as you noticed, you can do that yourself. I would say that\nit's intrinsic to the way I/O is typically done. It may very\nwell be that the function that implements COPY never sees when\nexactly you make a function call from Perl, but only sees a\nbuffer getting filled up with data that it needs to parse. From\nthat perspective, it's easy to see why you simply need to \nproperly terminate your rows to get the expected behavior.\nConsider COPYing from a file...odds are it doesn't read data\nfrom the file exactly 1 row at a time, but rather some block-\nsize multiple at a time. The only way COPY could work correctly\nis if it ignored the size of data sent to it and only parsed on\n\\n boundaries.\n\n> Since it hasn't been documented (but then, \"endcopy\" isn't \n> documented), I've been shy of investing in perf testing such\n> mass copy calls. But, if it DOES work, it should be reducing\n> the number of network roundtrips.\n> [...]\n\nFeel free to use your technique. I would be *extremely* \nsurprised if there were a reason it shouldn't work.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Wed, 6 Apr 2005 15:06:39 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY Hacks (WAS: RE: Postgresql vs SQLserver for this application\n\t?)"
}
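A server-side illustration of the behaviour being described: COPY's text format splits rows on newlines and columns on the delimiter, so it makes no difference how many rows arrive per client call. The table and data are hypothetical, and the delimiter is spelled out only to keep the example unambiguous in plain text.

CREATE TABLE copy_target (a int, b text);

COPY copy_target FROM STDIN WITH DELIMITER '|';
1|one
2|two
3|three
\.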
] |
[
{
"msg_contents": "Hi!!!\n\nWe are running PostgreSQL server version 7.4.6 on RedHat 9 (Shrike) on \nsingle Pentium 4 (2.66 GHz) box with SCSI disc and 512 MB RAM.\nOur database contains several tables (small size) and one special table \nwith ~1000000 records (it contains log entries from system activity).We \ndecided that its time to do a little clean-up and it's still running \n(for about 12 hours) and it seems that it won't stop :((\n\nHere schema of largest table:\n Table \"public.activities\"\n Column | Type | Modifiers\n-------------------+-----------------------------+-----------\n act_id | bigint | not null\n act_type | character varying(32) | not null\n act_activity_date | timestamp without time zone | not null\n act_synch_date | timestamp without time zone |\n act_state | character varying(32) |\n act_mcn_id | bigint |\n act_mcn_alarm | character varying(16) |\n act_cmd_id | bigint |\n act_ctr_id | bigint |\n act_emp_id | bigint |\n act_parent_id | bigint |\n act_rpt_id | bigint |\nIndexes:\n \"activities_pkey\" primary key, btree (act_id)\n \"activities_act_cmd_id\" btree (act_cmd_id)\n \"activities_act_ctr_id\" btree (act_ctr_id)\n \"activities_act_state_idx\" btree (act_state)\n \"activities_act_type_idx\" btree (act_type)\nForeign-key constraints:\n \"fk7a1b3bed494acc46\" FOREIGN KEY (act_ctr_id) REFERENCES \ncontrollers(ctr_id)\n \"fk7a1b3bed4c50f03f\" FOREIGN KEY (act_emp_id) REFERENCES \nemployees(emp_id)\n \"fk7a1b3bed48e1ca8d\" FOREIGN KEY (act_cmd_id) REFERENCES \ncommands(cmd_id)\n \"fk7a1b3bed5969e16f\" FOREIGN KEY (act_mcn_id) REFERENCES \nmachines(mcn_id)\n \"fk7a1b3bedf3fd6e40\" FOREIGN KEY (act_parent_id) REFERENCES \nactivities(act_id)\n \"fk7a1b3bed62ac0851\" FOREIGN KEY (act_rpt_id) REFERENCES\n\nand our killer delete:\n\nmrt-vend2-jpalka=# explain delete from activities where \nact_type='controller-activity' and act_ctr_id in (select ctr_id from \ncontrollers where ctr_opr_id in (1,2));\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------\n Merge IN Join (cost=9.87..17834.97 rows=84933 width=6)\n Merge Cond: (\"outer\".act_ctr_id = \"inner\".ctr_id)\n -> Index Scan using activities_act_ctr_id on activities \n(cost=0.00..34087.59 rows=402627 width=14)\n Filter: ((act_type)::text = 'controller-activity'::text)\n -> Sort (cost=9.87..10.09 rows=89 width=8)\n Sort Key: controllers.ctr_id\n -> Seq Scan on controllers (cost=0.00..6.99 rows=89 width=8)\n Filter: ((ctr_opr_id = 1) OR (ctr_opr_id = 2))\n(8 rows)\nreports(rpt_id)\n\nTable controllers contains about 200 records.Is it problem with large \nnumber of foreign keys in activities table?\n\nCan you help me?\n\nThanks,\nJaroslaw Palka\n",
"msg_date": "Wed, 06 Apr 2005 23:05:05 +0200",
"msg_from": "=?UTF-8?B?SmFyb3PFgmF3IFBhxYJrYQ==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Never ending delete story"
},
{
"msg_contents": "=?UTF-8?B?SmFyb3PFgmF3IFBhxYJrYQ==?= <[email protected]> writes:\n> We are running PostgreSQL server version 7.4.6 on RedHat 9 (Shrike) on \n> single Pentium 4 (2.66 GHz) box with SCSI disc and 512 MB RAM.\n> Our database contains several tables (small size) and one special table \n> with ~1000000 records (it contains log entries from system activity).We \n> decided that its time to do a little clean-up and it's still running \n> (for about 12 hours) and it seems that it won't stop :((\n\nDo you have any foreign keys linking *to* (not from) this table?\nIf so, they probably need indexes on the far end. Also check for\ndatatype discrepancies.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Apr 2005 02:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Never ending delete story "
}
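Concretely, the only foreign key pointing *at* activities that is visible in the schema above is the self-reference on act_parent_id, and that column has no index, so each deleted row forces a scan of activities to check for referencing children. A sketch of the fix (the index name is arbitrary):

CREATE INDEX activities_act_parent_id ON activities (act_parent_id);
-- The datatypes already match here (both bigint); for the other constraints
-- the same check applies to any table that references activities, controllers,
-- etc.: index the referencing column and keep its type identical to the key.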
] |
[
{
"msg_contents": "I wanted to see if I could squeeze any more performance out of a C set \nreturning function I wrote. As such, I looked to a profiler. Is it \npossible to get profile information on the function I wrote? I've got \npostmaster and my function compiled with profiling support, and can find \nthe gmon.out files... can I actually look at the call tree that occurs \nwhen my function is being executed or will I be limited to viewing calls \nto functions in the postmaster binary?\n\n-Adam\n\n",
"msg_date": "Wed, 06 Apr 2005 16:22:59 -0700",
"msg_from": "Adam Palmblad <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tweaking a C Function I wrote"
},
{
"msg_contents": "Adam Palmblad wrote:\n> can I actually look at the call tree that occurs when my function is\n> being executed or will I be limited to viewing calls to functions in\n> the postmaster binary?\n\nYou're the one with the gprof data, you tell us :)\n\nIt wouldn't surprise me if gprof didn't get profiling data for dlopen'ed \nshared libraries (I haven't checked), but I think both oprofile and \ncallgrind should be able to.\n\n(If you do decide to use gprof and you're on Linux, be sure to compile \nPostgres with CFLAGS=\"-DLINUX_PROFILE\", to get valid profiling data.)\n\n-Neil\n",
"msg_date": "Thu, 07 Apr 2005 09:42:30 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tweaking a C Function I wrote"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> It wouldn't surprise me if gprof didn't get profiling data for dlopen'ed \n> shared libraries (I haven't checked), but I think both oprofile and \n> callgrind should be able to.\n\nNone of the platforms I use are very good at this :-(. Consider\nbuilding a special backend binary with the functions of interest\nstatically linked into it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 00:03:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tweaking a C Function I wrote "
}
] |
[
{
"msg_contents": "hi,\n\n I am using psql 7.1.3\n\nI didn't find option analyse in explain command..\n\nhow to get time taken by SQL procedure/query?\n\nregards,\nstp..\n",
"msg_date": "Thu, 7 Apr 2005 12:51:06 +0530 (IST)",
"msg_from": "\"S.Thanga Prakash\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "help on explain analyse in psql 7.1.3 (linux)"
},
{
"msg_contents": "> I didn't find option analyse in explain command..\n> \n> how to get time taken by SQL procedure/query?\n\nExplain analyze was added in 7.2 - you really need to upgrade...\n\nYou can use \\timing in psql to get an approximation...\n\nChris\n",
"msg_date": "Thu, 07 Apr 2005 15:49:24 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help on explain analyse in psql 7.1.3 (linux)"
},
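Once both the server and psql are 7.2 or later, the two options mentioned look like this; the table name is just a placeholder.

\timing
EXPLAIN ANALYZE SELECT count(*) FROM some_table;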
{
"msg_contents": "hi,\n\n\tthanks for immediate response..\n\nregards,\nstp..\n\nOn Thu, 7 Apr 2005, Christopher Kings-Lynne wrote:\n\n> > I didn't find option analyse in explain command..\n> > \n> > how to get time taken by SQL procedure/query?\n> \n> Explain analyze was added in 7.2 - you really need to upgrade...\n> \n> You can use \\timing in psql to get an approximation...\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n",
"msg_date": "Thu, 7 Apr 2005 14:15:19 +0530 (IST)",
"msg_from": "\"S.Thanga Prakash\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: help on explain analyse in psql 7.1.3 (linux)"
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> I didn't find option analyse in explain command..\n>> \n>> how to get time taken by SQL procedure/query?\n\n> Explain analyze was added in 7.2 - you really need to upgrade...\n\n> You can use \\timing in psql to get an approximation...\n\n7.1 psql hasn't got \\timing either ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 11:43:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help on explain analyse in psql 7.1.3 (linux) "
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nChristopher Kings-Lynne wrote:\n> Explain analyze was added in 7.2 - you really need to upgrade...\n>\n> You can use \\timing in psql to get an approximation...\n\nActually, \\timing was not added until 7.2 either! So, the\noriginal poster really, really needs to upgrade... :)\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200504072129\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCVd69vJuQZxSWSsgRAvRHAJ9T1uxfWEnHSNI/+iiiHiJ2I1IGUgCggMYb\ntjDwzfseK3aDAKHI5Ko1S/Q=\n=AvKY\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Fri, 8 Apr 2005 01:30:10 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help on explain analyse in psql 7.1.3 (linux)"
}
] |
[
{
"msg_contents": "hi,\n\n\tI am using psql 7.1.3\n\nI didn't find option analyse in explain command..\n\nhow to get time taken by SQL procedure/query?\n\nregards,\nstp..\n\n",
"msg_date": "Thu, 7 Apr 2005 12:51:47 +0530 (IST)",
"msg_from": "\"S.Thanga Prakash\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "help on explain analyse"
},
{
"msg_contents": "S.Thanga Prakash wrote:\n\n>hi,\n>\n>\tI am using psql 7.1.3\n>\n>I didn't find option analyse in explain command..\n>\n>how to get time taken by SQL procedure/query?\n>\n>regards,\n>stp..\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\nI don't believe it was added until 7.2. It is highly recommended that \nyou upgrade. Performance and stability have both been improved \ntremendously between 7.1 and 8.0.\n\nJohn\n=:->",
"msg_date": "Sun, 10 Apr 2005 23:57:51 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help on explain analyse"
}
] |
[
{
"msg_contents": "hi,\n\n how to find the time taken by an query/stored procedure?\n\nI am using psql 7.1.3 in linux 7.2\n\nhow to execute 'explain analyse' in the psql? Is it supported at 7.1.3 ?\n\n\nlooking forward for replies..\nregards,\nstp.\n",
"msg_date": "Thu, 7 Apr 2005 13:14:51 +0530 (IST)",
"msg_from": "\"S.Thanga Prakash\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "help on time calculation"
},
{
"msg_contents": "> how to find the time taken by an query/stored procedure?\n\nIn psql, use \\timing for an approximate time.\n\n> I am using psql 7.1.3 in linux 7.2\n> \n> how to execute 'explain analyse' in the psql? Is it supported at 7.1.3 ?\n\nExplain analyze is NOT supported in PostgreSQL 7.1. You really should \nupgrade your PostgreSQL to version 8.0.\n\nChris\n",
"msg_date": "Thu, 07 Apr 2005 16:18:32 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help on time calculation"
}
] |
[
{
"msg_contents": "Hi All,\n\nThanks to all on the NOVICE list that gave me help I now have a query running\nthat returns the results I am after. :-) Now of course I want it to run\nfaster. Currently it clocks in at ~160ms. I have checked over the indexes\nand I belive that the tables are indexed properly. The largest table,\ntbl_item, only has 2000 rows. Is it possible to reduce the time of this query\nfurther? I have included the output of EXPLAIN ANALYZE below the query. \nUnfortunately I am still struggling trying to learn how to interpret the\noutput. TIA\n\n SELECT tbl_item.id AS item_id,\n tbl_item.item_type,\n tbl_item.inactive AS item_inactive,\n tbl_item.description AS item_description,\n CASE WHEN tbl_item.item_class=0 THEN 'Non-Stock'\n WHEN tbl_item.item_class=1 THEN 'Stock'\n WHEN tbl_item.item_class=2 THEN 'Description'\n WHEN tbl_item.item_class=3 THEN 'Assembly'\n WHEN tbl_item.item_class=4 THEN 'Service'\n WHEN tbl_item.item_class=5 THEN 'Labor'\n WHEN tbl_item.item_class=6 THEN 'Activity'\n WHEN tbl_item.item_class=7 THEN 'Charge'\n ELSE 'Unrecognized'\n END AS item_class,\n tbl_item.sales_gl_account AS acct_sales_gl_nmbr,\n sales_desc.description AS acct_sales_gl_name,\n tbl_item.inventory_gl_account AS acct_inv_gl_nmbr,\n inv_desc.description AS acct_inv_gl_name,\n tbl_item.cogs_gl_account AS acct_cogs_gl_nmbr,\n cogs_desc.description AS acct_cogs_gl_name,\n CASE WHEN tbl_item.costing_method=0 THEN 'Average'\n WHEN tbl_item.costing_method=1 THEN 'FIFO'\n WHEN tbl_item.costing_method=2 THEN 'LIFO'\n ELSE 'Unrecognized'\n END AS acct_cost_method,\n tbl_mesh.mesh_size,\n tbl_mesh.unit_of_measure AS mesh_uom,\n tbl_mesh.mesh_type,\n tbl_item.purchase_description,\n tbl_item.last_unit_cost AS purchase_unit_cost,\n tbl_item.purchase_uom AS purchase_uom,\n tbl_item.reorder_point AS purchase_point,\n tbl_item.reorder_quantity AS purchase_quantity,\n tbl_item.sales_description,\n tbl_item.last_unit_cost/peachtree.tbl_item.ptos_uom_factor AS\n sales_unit_cost,\n tbl_item.unit_of_measure AS sales_uom,\n tbl_item.weight AS sales_weight,\n tbl_current.last_count\n + tbl_current.received\n - tbl_current.shipped AS inv_on_hand,\n\ttbl_current.allocated AS inv_committed,\n\ttbl_current.last_count\n + tbl_current.received\n - tbl_current.shipped\n - tbl_current.allocated AS inv_available,\n\ttbl_current.on_order AS inv_on_order\n FROM tbl_item\n LEFT JOIN tbl_mesh\n ON ( tbl_item.id = tbl_mesh.item_id )\n JOIN tbl_gl_account AS sales_desc\n ON ( tbl_item.sales_gl_account = sales_desc.account_id )\n JOIN tbl_gl_account AS inv_desc\n ON ( tbl_item.inventory_gl_account = inv_desc.account_id )\n JOIN tbl_gl_account AS cogs_desc\n ON ( tbl_item.cogs_gl_account = cogs_desc.account_id )\n LEFT JOIN tbl_current\n ON ( tbl_item.id = tbl_current.item_id )\n ORDER BY tbl_item.id;\n\n\nSort (cost=5749.75..5758.98 rows=3691 width=333) (actual\ntime=154.923..156.070 rows=1906 loops=1)\n Sort Key: tbl_item.id\n -> Hash Left Join (cost=2542.56..5194.32 rows=3691 width=333) (actual\ntime=30.475..146.074 rows=1906 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".item_id)::text)\n -> Hash Join (cost=15.85..366.14 rows=3691 width=313) (actual\ntime=2.292..82.281 rows=1906 loops=1)\n Hash Cond: ((\"outer\".sales_gl_account)::text =\n(\"inner\".account_id)::text)\n -> Hash Join (cost=11.18..305.81 rows=3749 width=290) (actual\ntime=1.632..61.052 rows=1906 loops=1)\n Hash Cond: ((\"outer\".cogs_gl_account)::text =\n(\"inner\".account_id)::text)\n -> Hash Join (cost=6.50..244.60 rows=3808 
width=267)\n(actual time=1.034..40.873 rows=1906 loops=1)\n Hash Cond: ((\"outer\".inventory_gl_account)::text =\n(\"inner\".account_id)::text)\n -> Hash Left Join (cost=1.82..182.50 rows=3868\nwidth=244) (actual time=0.407..20.878 rows=1936 loops=1)\n Hash Cond: ((\"outer\".id)::text =\n(\"inner\".item_id)::text)\n -> Seq Scan on tbl_item (cost=0.00..160.68\nrows=3868 width=224) (actual time=0.131..5.022 rows=1936 loops=1)\n -> Hash (cost=1.66..1.66 rows=66 width=34)\n(actual time=0.236..0.236 rows=0 loops=1)\n -> Seq Scan on tbl_mesh \n(cost=0.00..1.66 rows=66 width=34) (actual time=0.031..0.149 rows=66 loops=1)\n -> Hash (cost=4.14..4.14 rows=214 width=32)\n(actual time=0.573..0.573 rows=0 loops=1)\n -> Seq Scan on tbl_gl_account inv_desc \n(cost=0.00..4.14 rows=214 width=32) (actual time=0.005..0.317 rows=214 loops=1)\n -> Hash (cost=4.14..4.14 rows=214 width=32) (actual\ntime=0.556..0.556 rows=0 loops=1)\n -> Seq Scan on tbl_gl_account cogs_desc \n(cost=0.00..4.14 rows=214 width=32) (actual time=0.005..0.294 rows=214 loops=1)\n -> Hash (cost=4.14..4.14 rows=214 width=32) (actual\ntime=0.603..0.603 rows=0 loops=1)\n -> Seq Scan on tbl_gl_account sales_desc \n(cost=0.00..4.14 rows=214 width=32) (actual time=0.031..0.343 rows=214 loops=1)\n -> Hash (cost=1775.57..1775.57 rows=76457 width=31) (actual\ntime=26.114..26.114 rows=0 loops=1)\n -> Seq Scan on tbl_current (cost=0.00..1775.57 rows=76457\nwidth=31) (actual time=22.870..25.024 rows=605 loops=1)\nTotal runtime: 158.053 ms\n\n\nKind Regards,\nKeith\n",
"msg_date": "Thu, 7 Apr 2005 10:17:22 -0400",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "4 way JOIN using aliases"
},
{
"msg_contents": "Keith,\n\n> Thanks to all on the NOVICE list that gave me help I now have a query\n> running that returns the results I am after. :-) Now of course I want it\n> to run faster. Currently it clocks in at ~160ms. I have checked over the\n> indexes and I belive that the tables are indexed properly. The largest\n> table, tbl_item, only has 2000 rows. Is it possible to reduce the time of\n> this query further? \n\nProbably not, no. For a 7-way join including 2 LEFT JOINs on the \nunrestricted contents of all tables, 160ms is pretty darned good. If these \ntables were large, you'd be looking at a much longer estimation time. The \nonly real way to speed it up would be to find a way to eliminate the left \njoins. Also, PostgreSQL 8.0 might optimize this query a little better.\n\nThe only thing I can see to tweak is that the estimate on the number of rows \nin tbl_item is wrong; probably you need to ANALYZE tbl_item. But I doubt \nthat will make a difference in execution time.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 10 Apr 2005 18:41:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 4 way JOIN using aliases"
},
{
"msg_contents": "Keith Worthington wrote:\n> -> Seq Scan on tbl_current (cost=0.00..1775.57 rows=76457\n> width=31) (actual time=22.870..25.024 rows=605 loops=1)\n\nThis rowcount is way off -- have you run ANALYZE recently?\n\n-Neil\n",
"msg_date": "Mon, 11 Apr 2005 11:55:06 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 4 way JOIN using aliases"
},
{
"msg_contents": "Neil Conway wrote:\n> Keith Worthington wrote:\n> \n>> -> Seq Scan on tbl_current (cost=0.00..1775.57 rows=76457\n>> width=31) (actual time=22.870..25.024 rows=605 loops=1)\n> \n> \n> This rowcount is way off -- have you run ANALYZE recently?\n> \n> -Neil\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\nNeil,\n\nI run vacuumdb with the analyze option every morning via a cron job. In \nmy ignorance I do not know if that is the same thing.\n\n-- \nKind Regards,\nKeith\n",
"msg_date": "Mon, 11 Apr 2005 20:43:46 -0400",
"msg_from": "Keith Worthington <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 4 way JOIN using aliases"
}
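For what it's worth, vacuumdb's analyze option issues VACUUM ANALYZE, which gathers the same statistics as a plain ANALYZE, so the nightly job should be equivalent. To refresh just the tables whose estimates look stale and then re-check the plan:

ANALYZE tbl_item;
ANALYZE tbl_current;
-- then re-run EXPLAIN ANALYZE on the report query and compare the estimated
-- row counts against the actual ones.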
] |
[
{
"msg_contents": "Running this explain on windows box, but production on linux both 8.0.1\n\nThe MSSQL is beating me out for some reason on this query.\n\nThe linux box is much more powerful, I may have to increase the cache, but I\nam pretty sure its not an issue yet.\n\nIt has 8 gig internal memory any recommendation on the cache size to use?\n\n \n\nexplain analyze select * from viwassoclist where clientnum = 'SAKS'\n\n \n\n\"Merge Join (cost=59871.79..60855.42 rows=7934 width=112) (actual\ntime=46906.000..48217.000 rows=159959 loops=1)\"\n\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\n\" -> Sort (cost=393.76..394.61 rows=338 width=48) (actual\ntime=62.000..62.000 rows=441 loops=1)\"\n\n\" Sort Key: l.locationid\"\n\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..379.56 rows=338 width=48) (actual time=15.000..62.000 rows=441\nloops=1)\"\n\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\n\" -> Sort (cost=59478.03..59909.58 rows=172618 width=75) (actual\ntime=46844.000..46985.000 rows=159960 loops=1)\"\n\n\" Sort Key: a.locationid\"\n\n\" -> Merge Right Join (cost=0.00..39739.84 rows=172618 width=75)\n(actual time=250.000..43657.000 rows=176431 loops=1)\"\n\n\" Merge Cond: (((\"outer\".clientnum)::text =\n(\"inner\".clientnum)::text) AND (\"outer\".id = \"inner\".jobtitleid))\"\n\n\" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt\n(cost=0.00..194.63 rows=6391 width=37) (actual time=32.000..313.000\nrows=5689 loops=1)\"\n\n\" Filter: (1 = presentationid)\"\n\n\" -> Index Scan using ix_tblassoc_jobtitleid on tblassociate a\n(cost=0.00..38218.08 rows=172618 width=53) (actual time=31.000..41876.000\nrows=176431 loops=1)\"\n\n\" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n\n\"Total runtime: 48500.000 ms\"\n\n \n\nCREATE OR REPLACE VIEW viwassoclist AS \n\n SELECT a.clientnum, a.associateid, a.associatenum, a.lastname, a.firstname,\njt.value AS jobtitle, l.name AS \"location\", l.locationid AS mainlocationid,\nl.divisionid, l.regionid, l.districtid, (a.lastname::text || ', '::text) ||\na.firstname::text AS assocname, a.isactive, a.isdeleted\n\n FROM tblassociate a\n\n LEFT JOIN tbljobtitle jt ON a.jobtitleid = jt.id AND jt.clientnum::text =\na.clientnum::text AND 1 = jt.presentationid\n\n JOIN tbllocation l ON a.locationid = l.locationid AND l.clientnum::text =\na.clientnum::text;\n\n \n\n \n\nCREATE TABLE tblassociate\n\n(\n\n clientnum varchar(16) NOT NULL,\n\n associateid int4 NOT NULL,\n\n associatenum varchar(10),\n\n firstname varchar(50),\n\n middleinit varchar(5),\n\n lastname varchar(50),\n\n ssn varchar(18),\n\n dob timestamp,\n\n address varchar(100),\n\n city varchar(50),\n\n state varchar(50),\n\n country varchar(50),\n\n zip varchar(10),\n\n homephone varchar(14),\n\n cellphone varchar(14),\n\n pager varchar(14),\n\n associateaccount varchar(50),\n\n doh timestamp,\n\n dot timestamp,\n\n rehiredate timestamp,\n\n lastdayworked timestamp,\n\n staffexecid int4,\n\n jobtitleid int4,\n\n locationid int4,\n\n deptid int4,\n\n positionnum int4,\n\n worktypeid int4,\n\n sexid int4,\n\n maritalstatusid int4,\n\n ethnicityid int4,\n\n weight float8,\n\n heightfeet int4,\n\n heightinches int4,\n\n haircolorid int4,\n\n eyecolorid int4,\n\n isonalarmlist bool NOT NULL DEFAULT false,\n\n isactive bool NOT NULL DEFAULT true,\n\n ismanager bool NOT NULL DEFAULT false,\n\n issecurity bool NOT NULL DEFAULT false,\n\n createdbyid int4,\n\n isdeleted bool NOT NULL DEFAULT false,\n\n militarybranchid int4,\n\n militarystatusid int4,\n\n 
 patrontypeid int4,\n\n identificationtypeid int4,\n\n workaddress varchar(200),\n\n testtypeid int4,\n\n testscore int4,\n\n pin int4,\n\n county varchar(50),\n\n CONSTRAINT pk_tblassociate PRIMARY KEY (clientnum, associateid),\n\n CONSTRAINT ix_tblassociate UNIQUE (clientnum, associatenum)\n\n)\n\n \n\nCREATE TABLE tbljobtitle\n\n(\n\n clientnum varchar(16) NOT NULL,\n\n id int4 NOT NULL,\n\n value varchar(50),\n\n code varchar(16),\n\n isdeleted bool DEFAULT false,\n\n presentationid int4 NOT NULL DEFAULT 1,\n\n CONSTRAINT pk_tbljobtitle PRIMARY KEY (clientnum, id, presentationid)\n\n)\n\n \n\nCREATE TABLE tbllocation\n\n(\n\n clientnum varchar(16) NOT NULL,\n\n locationid int4 NOT NULL,\n\n districtid int4 NOT NULL,\n\n regionid int4 NOT NULL,\n\n divisionid int4 NOT NULL,\n\n locationnum varchar(8),\n\n name varchar(50),\n\n clientlocnum varchar(50),\n\n address varchar(100),\n\n address2 varchar(100),\n\n city varchar(50),\n\n state varchar(2) NOT NULL DEFAULT 'zz'::character varying,\n\n zip varchar(10),\n\n countryid int4,\n\n phone varchar(15),\n\n fax varchar(15),\n\n payname varchar(40),\n\n contact char(36),\n\n active bool NOT NULL DEFAULT true,\n\n coiprogram text,\n\n coilimit text,\n\n coiuser varchar(255),\n\n coidatetime varchar(32),\n\n ec_note_field varchar(1050),\n\n locationtypeid int4,\n\n open_time timestamp,\n\n close_time timestamp,\n\n insurance_loc_id varchar(50),\n\n lpregionid int4,\n\n sic int4,\n\n CONSTRAINT pk_tbllocation PRIMARY KEY (clientnum, locationid),\n\n CONSTRAINT ix_tbllocation_1 UNIQUE (clientnum, locationnum, name),\n\n CONSTRAINT ix_tbllocation_unique_number UNIQUE (clientnum, divisionid,\nregionid, districtid, locationnum)\n\n)\n\n \n\nJoel Fradkin",
"msg_date": "Thu, 07 Apr 2005 11:13:57 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any way to speed this up?"
},
{
"msg_contents": "On Thu, 07 Apr 2005 11:13:57 -0400, Joel Fradkin wrote\n[snip]\n> \" -> Sort (cost=393.76..394.61 rows=338 width=48) (actual\n> time=62.000..62.000 rows=441 loops=1)\"\n> \n> \" Sort Key: l.locationid\"\n> \n> \" -> Index Scan using ix_location on tbllocation l\n> \n> (cost=0.00..379.56 rows=338 width=48) (actual time=15.000..62.000 rows=441\n> loops=1)\"\n> \n> \" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n> \n> \" -> Sort (cost=59478.03..59909.58 rows=172618 width=75) (actual\n> time=46844.000..46985.000 rows=159960 loops=1)\"\n> \n> \" Sort Key: a.locationid\"\n[snip]\n> \n> CREATE TABLE tblassociate\n[snip]\n> \n> CONSTRAINT pk_tblassociate PRIMARY KEY (clientnum, associateid),\n> \n> CONSTRAINT ix_tblassociate UNIQUE (clientnum, associatenum)\n> \n[snip]\n> \n> Joel Fradkin\n\nJoel,\n\nI am REALLY new at this and struggling to understand EXPLAIN ANALYZE output\nbut for what it is worth it looks like the sort on a.locationid is taking up a\nlot of the time. I do not see an index on that column. I would suggest\nindexing tblassociate.locationid and seeing if that helps.\n\nKind Regards,\nKeith\n",
"msg_date": "Thu, 7 Apr 2005 11:27:03 -0400",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "Joel Fradkin wrote:\n\n> Running this explain on windows box, but production on linux both 8.0.1\n>\n> The MSSQL is beating me out for some reason on this query.\n>\n> The linux box is much more powerful, I may have to increase the cache,\n> but I am pretty sure its not an issue yet.\n>\n> It has 8 gig internal memory any recommendation on the cache size to use?\n>\n>\n>\n> explain analyze select * from viwassoclist where clientnum = 'SAKS'\n>\n>\n>\n> \"Merge Join (cost=59871.79..60855.42 rows=7934 width=112) (actual\n> time=46906.000..48217.000 rows=159959 loops=1)\"\n>\nThe first thing I noticed was this. Notice that the estimated rows is\n8k, the actual rows is 160k. Which means the planner is mis-estimating\nthe selectivity of your merge.\n\n> \" -> Sort (cost=59478.03..59909.58 rows=172618 width=75) (actual\n> time=46844.000..46985.000 rows=159960 loops=1)\"\n>\n> \" Sort Key: a.locationid\"\n>\n\nThis sort actually isn't taking very long. It starts at 46800 and runs\nuntil 47000 so it takes < 1 second.\n\n> \" -> Merge Right Join (cost=0.00..39739.84 rows=172618\n> width=75) (actual time=250.000..43657.000 rows=176431 loops=1)\"\n>\n> \" Merge Cond: (((\"outer\".clientnum)::text =\n> (\"inner\".clientnum)::text) AND (\"outer\".id = \"inner\".jobtitleid))\"\n>\n> \" -> Index Scan using ix_tbljobtitle_id on tbljobtitle\n> jt (cost=0.00..194.63 rows=6391 width=37) (actual\n> time=32.000..313.000 rows=5689 loops=1)\"\n>\n> \" Filter: (1 = presentationid)\"\n>\n> \" -> Index Scan using ix_tblassoc_jobtitleid on\n> tblassociate a (cost=0.00..38218.08 rows=172618 width=53) (actual\n> time=31.000..41876.000 rows=176431 loops=1)\"\n>\n> \" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n>\nThis is where the actual expense is. The merge right join starts at 250,\nand runs until 43000. Which seems to be caused primarily by the index\nscan of tblassociate. How many rows are in tblassociate? I'm assuming\nquite a bit, since the planner thinks an index scan is faster than seq\nscan for 170k rows. (If you have > 2M this is probably accurate)\n\nI don't really know how long this should take, but 38s for 172k rows\nseems a little long.\n\nJohn\n=:->",
"msg_date": "Thu, 07 Apr 2005 10:36:54 -0500",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "\"Joel Fradkin\" <[email protected]> writes:\n> Running this explain on windows box, but production on linux both 8.0.1\n\nAre you using any nondefault optimizer settings? The vast bulk of the\ntime is going into the indexscan on tblassociate (almost 42 out of the\n48 seconds), and I'm a bit surprised it didn't choose a seqscan and sort\ninstead. Or even more likely, forget the merge joins altogether and use\nhash joins --- the other tables are plenty small enough to fit in hash\ntables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 12:04:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up? "
},
{
"msg_contents": "John Arbash Meinel <[email protected]> writes:\n>> \" -> Sort (cost=59478.03..59909.58 rows=172618 width=75) (actual\n>> time=46844.000..46985.000 rows=159960 loops=1)\"\n>> \n>> \" Sort Key: a.locationid\"\n>> \n\n> This sort actually isn't taking very long. It starts at 46800 and runs\n> until 47000 so it takes < 1 second.\n\n>> \" -> Merge Right Join (cost=0.00..39739.84 rows=172618\n>> width=75) (actual time=250.000..43657.000 rows=176431 loops=1)\"\n\nYou're not reading it quite right. The first \"actual\" number is the\ntime at which the first result row was delivered, which for a sort is\nafter the completion of (the bulk of) the sorting work. What you\nreally need to look at is the difference between the completion times\nof the node and its immediate input(s). In this case I'd blame the\nsort for 46985.000 - 43657.000 msec.\n\nCome to think of it, though, you should not be putting a whole lot of\ntrust in EXPLAIN ANALYZE numbers taken on Windows, because they are\nbased on gettimeofday which has absolutely awful resolution on that\nplatform. (There's a workaround for this in our CVS, but it's not in\n8.0.*.) I think we can still conclude that the indexscan on\ntblassociate is most of the cost, but I wouldn't venture to say that\nit's exactly such-and-such percent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 12:13:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up? "
},
{
"msg_contents": "shared_buffers = 8000\t\t# min 16, at least max_connections*2, 8KB\neach\nwork_mem = 8192#1024\t\t# min 64, size in KB\nmax_fsm_pages = 30000\t\t# min max_fsm_relations*16, 6 bytes each\neffective_cache_size = 40000 #1000\t# typically 8KB each\nrandom_page_cost = 1.2#4\t\t# units are one sequential page\nfetch cost\n\nThese are the items I changed.\nIn the development box I turned random page cost to .2 because I figured it\nwould all be faster using an index as all my data is at a minimum being\nselected by clientnum.\n\nBut the analyze I sent in is from these settings above on a windows box.\nIf I was running the analyze (pgadmin) on a windows box but connecting to a\nlinux box would the times be accurate or do I have to run the analyze on the\nlinux box for that to happen?\n\nI am a little unclear why I would need an index on associate by location as\nI thought it would be using indexes in location and jobtitle for their\njoins.\nI did not say where locationid = x in my query on the view.\nI have so much to learn about SQL.\nJoel\n\n\n\n",
"msg_date": "Thu, 07 Apr 2005 12:33:46 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "\"Joel Fradkin\" <[email protected]> writes:\n> random_page_cost = 1.2#4\t\t# units are one sequential page\n> fetch cost\n\nThat is almost certainly overoptimistic; it's causing the planner to\nuse indexscans when it shouldn't. Try 2 or 3 or thereabouts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Apr 2005 12:42:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up? "
},
{
"msg_contents": "Joel Fradkin wrote:\n\n>shared_buffers = 8000\t\t# min 16, at least max_connections*2, 8KB\n>each\n>work_mem = 8192#1024\t\t# min 64, size in KB\n>max_fsm_pages = 30000\t\t# min max_fsm_relations*16, 6 bytes each\n>effective_cache_size = 40000 #1000\t# typically 8KB each\n>random_page_cost = 1.2#4\t\t# units are one sequential page\n>fetch cost\n>\n>These are the items I changed.\n>In the development box I turned random page cost to .2 because I figured it\n>would all be faster using an index as all my data is at a minimum being\n>selected by clientnum.\n>\n>\nYou're random page cost is *way* too low. I would probably change this\nto no less that 2.0.\n\n>But the analyze I sent in is from these settings above on a windows box.\n>If I was running the analyze (pgadmin) on a windows box but connecting to a\n>linux box would the times be accurate or do I have to run the analyze on the\n>linux box for that to happen?\n>\n>\n>\nEXPLAIN ANALYZE is done on the server side, so it doesn't matter what\nyou use to connect to it. The \\timing flag occurs on the local side, and\nis thus influenced my network latency (but it only tells you the time\nfor the whole query anyway).\n\n>I am a little unclear why I would need an index on associate by location as\n>I thought it would be using indexes in location and jobtitle for their\n>joins.\n>I did not say where locationid = x in my query on the view.\n>I have so much to learn about SQL.\n>Joel\n>\n>\n> CREATE OR REPLACE VIEW viwassoclist AS\n> SELECT a.clientnum, a.associateid, a.associatenum, a.lastname,\n> a.firstname, jt.value AS jobtitle, l.name AS \"location\", l.locationid\n> AS mainlocationid, l.divisionid, l.regionid, l.districtid,\n> (a.lastname::text || ', '::text) || a.firstname::text AS assocname,\n> a.isactive, a.isdeleted\n> FROM tblassociate a\n> LEFT JOIN tbljobtitle jt ON a.jobtitleid = jt.id AND\n> jt.clientnum::text = a.clientnum::text AND 1 = jt.presentationid\n> JOIN tbllocation l ON a.locationid = l.locationid AND\n> l.clientnum::text = a.clientnum::text;\n\n ^^^^^^^^^^^^^^^^^^^\nThe locationid is defined in your view. This is the part that postgres\nuses to merge all of the different tables together, it doesn't really\nmatter whether you restrict it with a WHERE clause.\n\nTry just setting your random page cost back to something more\nreasonable, and try again.\n\nJohn\n=:->",
"msg_date": "Thu, 07 Apr 2005 11:43:02 -0500",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "Here is the result after putting it back to 4 the original value (I had done\nthat prior to your suggestion of using 2 or 3) to see what might change.\nI also vacummed and thought I saw records deleted in associate, which I\nfound odd as this is a test site and no new records were added or deleted.\n\n\"Merge Join (cost=86788.09..87945.00 rows=10387 width=112) (actual\ntime=19703.000..21154.000 rows=159959 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=1245.50..1246.33 rows=332 width=48) (actual\ntime=62.000..62.000 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..1231.60 rows=332 width=48) (actual time=15.000..62.000 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=85542.59..86042.39 rows=199922 width=75) (actual\ntime=19641.000..19955.000 rows=159960 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Right Join (cost=60850.40..62453.22 rows=199922\nwidth=75) (actual time=13500.000..14734.000 rows=176431 loops=1)\"\n\" Merge Cond: ((\"outer\".id = \"inner\".jobtitleid) AND\n(\"outer\".\"?column4?\" = \"inner\".\"?column10?\"))\"\n\" -> Sort (cost=554.11..570.13 rows=6409 width=37) (actual\ntime=94.000..94.000 rows=6391 loops=1)\"\n\" Sort Key: jt.id, (jt.clientnum)::text\"\n\" -> Seq Scan on tbljobtitle jt (cost=0.00..148.88\nrows=6409 width=37) (actual time=0.000..63.000 rows=6391 loops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Sort (cost=60296.29..60796.09 rows=199922 width=53)\n(actual time=13406.000..13859.000 rows=176431 loops=1)\"\n\" Sort Key: a.jobtitleid, (a.clientnum)::text\"\n\" -> Seq Scan on tblassociate a (cost=0.00..38388.79\nrows=199922 width=53) (actual time=62.000..10589.000 rows=176431 loops=1)\"\n\" Filter: ((clientnum)::text = 'SAKS'::text)\"\n\"Total runtime: 22843.000 ms\"\n\nJoel Fradkin\n \n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, April 07, 2005 11:43 AM\nTo: Joel Fradkin\nCc: 'PostgreSQL Perform'\nSubject: Re: [PERFORM] Any way to speed this up? \n\n\"Joel Fradkin\" <[email protected]> writes:\n> random_page_cost = 1.2#4\t\t# units are one sequential page\n> fetch cost\n\nThat is almost certainly overoptimistic; it's causing the planner to\nuse indexscans when it shouldn't. Try 2 or 3 or thereabouts.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 07 Apr 2005 13:14:33 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "Joel Fradkin wrote:\n\n>Here is the result after putting it back to 4 the original value (I had done\n>that prior to your suggestion of using 2 or 3) to see what might change.\n>I also vacummed and thought I saw records deleted in associate, which I\n>found odd as this is a test site and no new records were added or deleted.\n>\n>\n\nWell, that looks 2x as fast, right?\n\nYou might try\nSET enable_mergejoin TO off;\n\nJust to see if you can force a hash-join and see how long that takes.\nYou might also try increasing work_mem.\nYou can do that just in the current session with\n\nSET work_mem TO ....;\n\nJohn\n=:->",
"msg_date": "Thu, 07 Apr 2005 12:22:37 -0500",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up?"
},
{
"msg_contents": "2 things to point out from this last run:\n\n50% of the time is taken scanning tblassociate \n -> Seq Scan on tblassociate a (cost=0.00..38388.79 rows=199922 width=53) (actual time=62.000..10589.000 rows=176431 loops=1)\n Filter: ((clientnum)::text = 'SAKS'::text)\n\nIf you had an index on clientnum and didn't cast it to text in the view,\nyou might be able to use an indexscan, which could be faster (depends on\nhow big the table actually is).\n\nThis sort is taking about 25% of the time:\n -> Sort (cost=85542.59..86042.39 rows=199922 width=75) (actual time=19641.000..19955.000 rows=159960 loops=1)\"\n Sort Key: a.locationid\n -> Merge Right Join (cost=60850.40..62453.22 rows=199922 width=75) (actual time=13500.000..14734.000 rows=176431 loops=1)\n\nI suspect it shouldn't take 5 seconds to sort 160k rows in memory, and\nthat this sort is spilling to disk. If you increase your working memory\nthe sort might fit entirely in memory. As a quick test, you could set\nworking memory to 80% of system memory and see how that changes the\nspeed. But you wouldn't want to set it that high in production.\n\nOn Thu, Apr 07, 2005 at 01:14:33PM -0400, Joel Fradkin wrote:\n> Here is the result after putting it back to 4 the original value (I had done\n> that prior to your suggestion of using 2 or 3) to see what might change.\n> I also vacummed and thought I saw records deleted in associate, which I\n> found odd as this is a test site and no new records were added or deleted.\n> \n> \"Merge Join (cost=86788.09..87945.00 rows=10387 width=112) (actual\n> time=19703.000..21154.000 rows=159959 loops=1)\"\n> \" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n> \" -> Sort (cost=1245.50..1246.33 rows=332 width=48) (actual\n> time=62.000..62.000 rows=441 loops=1)\"\n> \" Sort Key: l.locationid\"\n> \" -> Index Scan using ix_location on tbllocation l\n> (cost=0.00..1231.60 rows=332 width=48) (actual time=15.000..62.000 rows=441\n> loops=1)\"\n> \" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n> \" -> Sort (cost=85542.59..86042.39 rows=199922 width=75) (actual\n> time=19641.000..19955.000 rows=159960 loops=1)\"\n> \" Sort Key: a.locationid\"\n> \" -> Merge Right Join (cost=60850.40..62453.22 rows=199922\n> width=75) (actual time=13500.000..14734.000 rows=176431 loops=1)\"\n> \" Merge Cond: ((\"outer\".id = \"inner\".jobtitleid) AND\n> (\"outer\".\"?column4?\" = \"inner\".\"?column10?\"))\"\n> \" -> Sort (cost=554.11..570.13 rows=6409 width=37) (actual\n> time=94.000..94.000 rows=6391 loops=1)\"\n> \" Sort Key: jt.id, (jt.clientnum)::text\"\n> \" -> Seq Scan on tbljobtitle jt (cost=0.00..148.88\n> rows=6409 width=37) (actual time=0.000..63.000 rows=6391 loops=1)\"\n> \" Filter: (1 = presentationid)\"\n> \" -> Sort (cost=60296.29..60796.09 rows=199922 width=53)\n> (actual time=13406.000..13859.000 rows=176431 loops=1)\"\n> \" Sort Key: a.jobtitleid, (a.clientnum)::text\"\n> \" -> Seq Scan on tblassociate a (cost=0.00..38388.79\n> rows=199922 width=53) (actual time=62.000..10589.000 rows=176431 loops=1)\"\n> \" Filter: ((clientnum)::text = 'SAKS'::text)\"\n> \"Total runtime: 22843.000 ms\"\n> \n> Joel Fradkin\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Thursday, April 07, 2005 11:43 AM\n> To: Joel Fradkin\n> Cc: 'PostgreSQL Perform'\n> Subject: Re: [PERFORM] Any way to speed this up? 
\n> \n> \"Joel Fradkin\" <[email protected]> writes:\n> > random_page_cost = 1.2#4\t\t# units are one sequential page\n> > fetch cost\n> \n> That is almost certainly overoptimistic; it's causing the planner to\n> use indexscans when it shouldn't. Try 2 or 3 or thereabouts.\n> \n> \t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sat, 9 Apr 2005 10:17:21 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any way to speed this up?"
}
] |
[
{
"msg_contents": "Folks,\n\nI'm wondering if it might be useful to be able to add estimated selectivity to \na function definition for purposes of query estimation. Currently function \nscans automatically return a flat default 1000 estimated rows. It seems \nlike the DBA ought to be able to ALTER FUNCTION and give it a row estimate \nfor planning purposes. \n\nThoughts?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 8 Apr 2005 15:15:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Functionscan estimates"
},
{
"msg_contents": "On Fri, Apr 08, 2005 at 03:15:50PM -0700, Josh Berkus wrote:\n> \n> I'm wondering if it might be useful to be able to add estimated selectivity to \n> a function definition for purposes of query estimation. Currently function \n> scans automatically return a flat default 1000 estimated rows. It seems \n> like the DBA ought to be able to ALTER FUNCTION and give it a row estimate \n> for planning purposes. \n\nAbout a month ago I mentioned that I'd find that useful. In a\nfollowup, Christopher Kings-Lynne brought up the idea of a GUC\nvariable that could give hints about the expected row count.\n\nhttp://archives.postgresql.org/pgsql-hackers/2005-03/msg00146.php\nhttp://archives.postgresql.org/pgsql-hackers/2005-03/msg00153.php\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Fri, 8 Apr 2005 16:38:20 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "On Fri, Apr 08, 2005 at 04:38:20PM -0600, Michael Fuhr wrote:\n> On Fri, Apr 08, 2005 at 03:15:50PM -0700, Josh Berkus wrote:\n> > \n> > I'm wondering if it might be useful to be able to add estimated selectivity to \n> > a function definition for purposes of query estimation. Currently function \n> > scans automatically return a flat default 1000 estimated rows. It seems \n> > like the DBA ought to be able to ALTER FUNCTION and give it a row estimate \n> > for planning purposes. \n> \n> About a month ago I mentioned that I'd find that useful. In a\n> followup, Christopher Kings-Lynne brought up the idea of a GUC\n> variable that could give hints about the expected row count.\n\nThat seems pretty limited ... what happens if the query contains more\nthat one SRF?\n\nMaybe issuing some sort of special call to the function (say, with\nsome boolean in the call info struct) on which it returns planning data;\nthus the planner can call the function itself. The hard part would be\nfiguring out how to do it without breaking backwards compatibility with\nfunctions that don't know how to handle that. (And how to do it in\nplpgsql).\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"La principal caracter�stica humana es la tonter�a\"\n(Augusto Monterroso)\n",
"msg_date": "Fri, 8 Apr 2005 18:45:56 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "Alvaro, Michael,\n\n> > About a month ago I mentioned that I'd find that useful. In a\n> > followup, Christopher Kings-Lynne brought up the idea of a GUC\n> > variable that could give hints about the expected row count.\n>\n> That seems pretty limited ... what happens if the query contains more\n> that one SRF?\n\nYeah, I'd see that as a pretty bad idea too. I don't want to tell the planner \nhow many rows I expect \"all functions\" to return, I want to tell it how many \n*one particular* function will return.\n\n> Maybe issuing some sort of special call to the function (say, with\n> some boolean in the call info struct) on which it returns planning data;\n> thus the planner can call the function itself. The hard part would be\n> figuring out how to do it without breaking backwards compatibility with\n> functions that don't know how to handle that. (And how to do it in\n> plpgsql).\n\nOr in pl/perl, or pl/python, or plsh .... doesn't sound feasable. \n\nMy solution would be a lot simpler, since we could simply populate \npg_proc.proestrows with \"1000\" by default if not changed by the DBA. In an \neven better world, we could tie it to a table, saying that, for example, \nproestrows = my_table*0.02.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 8 Apr 2005 16:04:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "On Fri, Apr 08, 2005 at 04:04:27PM -0700, Josh Berkus wrote:\n\n> My solution would be a lot simpler, since we could simply populate \n> pg_proc.proestrows with \"1000\" by default if not changed by the DBA. In an \n> even better world, we could tie it to a table, saying that, for example, \n> proestrows = my_table*0.02.\n\nThe problem with that approach is that it can't differ depending on the\narguments to the function, so it too seems limited to me.\n\nIdeally an estimator would be able to peek at other table statistics and\ndo some computation with them, just like other nodes are able to.\n\nAnother idea would be have an estimator function (pg_proc.proestimator)\nfor each regular function. The estimator would be a very cheap function\nto be called with the same arguments, and it would return the estimated\nnumber of tuples the other function would return. The default estimator\ncould be \"return 1000\".\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"A wizard is never late, Frodo Baggins, nor is he early.\n He arrives precisely when he means to.\" (Gandalf, en LoTR FoTR)\n",
"msg_date": "Fri, 8 Apr 2005 19:57:31 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "Not too many releases ago, there were several columns in pg_proc that\nwere intended to support estimation of the runtime cost and number of\nresult rows of set-returning functions. I believe in fact that these\nwere the remains of Joe Hellerstein's thesis on expensive-function\nevaluation, and are exactly what he was talking about here:\nhttp://archives.postgresql.org/pgsql-hackers/2002-06/msg00085.php\n\nBut with all due respect to Joe, I think the reason that stuff got\ntrimmed is that it didn't work very well. In most cases it's\n*hard* to write an estimator for a SRF. Let's see you produce\none for dblink() for instance ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Apr 2005 00:00:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates "
},
{
"msg_contents": "\n> My solution would be a lot simpler, since we could simply populate\n> pg_proc.proestrows with \"1000\" by default if not changed by the DBA. In \n> an\n> even better world, we could tie it to a table, saying that, for example,\n> proestrows = my_table*0.02.\n\n\tWhat if the estimated row is a function of a parameter ?\n\tSay a function takes as a parameter :\n\t- a number to use in a LIMIT\n\t- it's a function to generate a certain number of values from a \npredetermined set (like, array -> set returning function)\n\n\tIn all those cases it's no use to have just a fixed number.\n\n\tId suggest two solutions :\n\t- The ideal solution which is impossible to do :\n\tThe function tells the planner about its stats, looking at its parameters\n\n\t- A solution that would be possible to do\n\t pg_proc.proestrows is... the name of another function, defined by the \nuser, which takes the exact same parameters as the set returning function \nwe're talking about, and which returns estimates.\n\n\tFor instance, in pseudo-sql :\n\nCREATE FUNCTION int_array_srf( INTEGER[] ) RETURNS SETOF INTEGER LANGUAGE \nplpgsql AS $$\nBEGIN\n\tFOR _i IN 1..icount($1)\n\t\tRETURN NEXT $1[_i];\n\tEND\nEND\tIn the two cases above, this would give :\n\nCREATE FUNCTION array_srf_estimator( INTEGER[] ) RETURNS INTEGER LANGUAGE \nplpgsql AS $$\nBEGIN\n\tRETURN icount( $1 );\nEND;\n\nALTER FUNCTION array_srf SET ESTIMATOR array_srf_estimator;\n\n\tAnother interesting case would be the famous \"Top 5 by category\" case \nwhere we use a SRF to emulate an index skip scan. Say we have a table \nCategories and a table Users, each User having columns \"categories\" and \n\"score\" and we want the N users with best score in each category :\n\nCREATE FUNCTION top_n_by_category( INTEGER ) RETURN SETOF users%ROWTYPE \nLANGUAGE plpgsql AS $$\nDECLARE\n\t_cat_id\tINTEGER;\n\t_n ALIAS FOR $1;\n\t_user\tusers%ROWTYPE;\nBEGIN\n\tFOR _cat_id IN SELECT category_id FROM categories DO\n\t\tFOR _user IN SELECT * FROM users WHERE category_id = _cat_id ORDER BY \nscore DESC LIMIT _n DO\n\t\t\tRETURN NEXT _user;\n\t\tEND\n\tEND\nEND\n\t\nCREATE FUNCTION top_n_by_category_estimator( INTEGER ) RETURN INTEGER \nLANGUAGE plpgsql AS $$\nBEGIN\n\tRETURN $1 * (the estimated number of rows for the categories table taken \n from the table statistics);\nEND;\n\nALTER FUNCTION top_n_by_category SET ESTIMATOR top_n_by_category_estimator;\n\n\tGot it ?\n\n\tThe array_srf case would be extremely useful as this type of function is \ngenerally used to join against other tables, and having estimates is \nuseful for that.\n\tThe top_n case would be useless if we're just returning the rows from the \nfunction directly, but would be useful if we'll join them to other tables.\n\n\tThis sounds pretty simple, powerful, and versatile.\n\n\tAdditionally, in some cases (most notably the array case) it's easy to \nestimate the statistics on the returned values because they're all in the \narray already, so the mechanism could be extended to have a way of \nreturning a pseudo pg_stats for a Set Returning function.\n\n\tFor instance, say you have a SRF which returns N random rows from a \ntable. It could have an estimator which would return a rowcount of N, and \na statistics estimator which would return the sats rows for the source \ntable, appropriately modified.\n\n\tThis sounds harder to do.\n\n\tWHat do you think ?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 09 Apr 2005 13:25:47 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "\n> But with all due respect to Joe, I think the reason that stuff got\n> trimmed is that it didn't work very well. In most cases it's\n> *hard* to write an estimator for a SRF. Let's see you produce\n> one for dblink() for instance ...\n\n\tGood one...\n\tWell in some cases it'll be impossible, but suppose I have a function \nget_id_for_something() which just grabs an ID using dblink, then I know it \nreturns one row, and pg would be interested in that information too !\n",
"msg_date": "Sat, 09 Apr 2005 13:29:10 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates "
},
{
"msg_contents": "On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote:\n> Not too many releases ago, there were several columns in pg_proc that\n> were intended to support estimation of the runtime cost and number of\n> result rows of set-returning functions. I believe in fact that these\n> were the remains of Joe Hellerstein's thesis on expensive-function\n> evaluation, and are exactly what he was talking about here:\n> http://archives.postgresql.org/pgsql-hackers/2002-06/msg00085.php\n> \n> But with all due respect to Joe, I think the reason that stuff got\n> trimmed is that it didn't work very well. In most cases it's\n> *hard* to write an estimator for a SRF. Let's see you produce\n> one for dblink() for instance ...\n\nActually, if the remote database supported a way to get a rows estimate\nfrom the query passed to db_link, it would be trivial, since you'd just\npass that back.\n\nIn fact, having such a function (estimate_rows_for_sql(text)) would\nprobably be very useful to functions that wanted to support returning a\nrows estimate.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sat, 9 Apr 2005 10:22:57 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote:\n>> But with all due respect to Joe, I think the reason that stuff got\n>> trimmed is that it didn't work very well. In most cases it's\n>> *hard* to write an estimator for a SRF. Let's see you produce\n>> one for dblink() for instance ...\n\n> Actually, if the remote database supported a way to get a rows estimate\n> from the query passed to db_link, it would be trivial, since you'd just\n> pass that back.\n\nThis assumes that (1) you have the complete query argument at the time\nof estimation, and (2) it's OK to contact the remote database and do an\nEXPLAIN at that time. Both of these seem pretty shaky assumptions.\n\nThe larger point is that writing an estimator for an SRF is frequently a\ntask about as difficult as writing the SRF itself, and sometimes much\n*more* difficult due to lack of information. I don't foresee a whole\nlot of use of an estimator hook designed as proposed here. In\nparticular, if the API is such that we can only use the estimator when\nall the function arguments are plan-time constants, it's not going to be\nvery helpful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Apr 2005 11:45:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates "
},
{
"msg_contents": "Tom Lane wrote:\n> Not too many releases ago, there were several columns in pg_proc that\n> were intended to support estimation of the runtime cost and number of\n> result rows of set-returning functions. I believe in fact that these\n> were the remains of Joe Hellerstein's thesis on expensive-function\n> evaluation\n\nFYI, Hellerstein's thesis on xfunc optimization is available here:\n\n ftp://ftp.cs.wisc.edu/pub/tech-reports/reports/1996/tr1304.ps.Z\n\nThere's also a paper on this subject by Hellerstein that was published \nin Transactions on Database Systems:\n\n http://www.cs.berkeley.edu/~jmh/miscpapers/todsxfunc.pdf\n\nI haven't had a chance to digest either one yet, but it might be worth a \nlook.\n\n-Neil\n",
"msg_date": "Sun, 10 Apr 2005 15:25:25 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "Tom Lane wrote:\n> The larger point is that writing an estimator for an SRF is frequently a\n> task about as difficult as writing the SRF itself\n\nTrue, although I think this doesn't necessarily kill the idea. If \nwriting an estimator for a given SRF is too difficult, the user is no \nworse off than they are today. Hopefully there would be a fairly large \nclass of SRFs for which writing an estimator would be relatively simple, \nand result in improved planner behavior.\n\n> I don't foresee a whole lot of use of an estimator hook designed as\n> proposed here. In particular, if the API is such that we can only\n> use the estimator when all the function arguments are plan-time\n> constants, it's not going to be very helpful.\n\nYes :( One approach might be to break the function's domain into pieces \nand have the estimator function calculate the estimated result set size \nfor each piece. So, given a trivial function like:\n\nfoo(int):\n if $1 < 10 then produce 100 rows\n else produce 10000 rows\n\nIf the planner has encoded the distribution of input tuples to the \nfunction as a histogram, it could invoke the SRF's estimator function \nfor the boundary values of each histogram bucket, and use that to get an \nidea of the function's likely result set size at runtime.\n\nAnd yes, the idea as sketched is totally unworkable :) For one thing, \nthe difficulty of doing this grows rapidly as the number of arguments to \nthe function increases. But perhaps there is some variant of this idea \nthat might work...\n\nAnother thought is that the estimator could provide information on the \ncost of evaluating the function, the number of tuples produced by the \nfunction, and even the distribution of those tuples.\n\nBTW, why is this on -performance? It should be on -hackers.\n\n-Neil\n",
"msg_date": "Sun, 10 Apr 2005 15:44:00 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functionscan estimates"
},
{
"msg_contents": "People:\n\n(HACKERS: Please read this entire thread at \nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00179.php \nSorry for crossing this over.)\n\n> > The larger point is that writing an estimator for an SRF is frequently a\n> > task about as difficult as writing the SRF itself\n>\n> True, although I think this doesn't necessarily kill the idea. If\n> writing an estimator for a given SRF is too difficult, the user is no\n> worse off than they are today. Hopefully there would be a fairly large\n> class of SRFs for which writing an estimator would be relatively simple,\n> and result in improved planner behavior.\n\nFor that matter, even supplying an estimate constant would be a vast \nimprovement over current functionality. I would suggest, in fact, that we \nallow the use of either a constant number, or an estimator function, in that \ncolumn. Among other things, this would allow implementing the constant \nnumber right now and the use of an estimating function later, in case we can \ndo the one but not the other for 8.1.\n\nTo be more sophisticated about the estimator function, it could take a subset \nof the main functions arguments, based on $1 numbering, for example:\nCREATE FUNCTION some_func ( INT, TEXT, TEXT, INT, INT ) ...\nALTER FUNCTION some_func WITH ESTIMATOR some_func_est( $4, $5 )\n\nThis would make writing estimators which would work for several functions \neasier. Estimators would be a special type of functions which would take \nany params and RETURN ESTIMATOR, which would be implicitly castable from some \ngeneral numeric type (like INT or FLOAT).\n\n> > I don't foresee a whole lot of use of an estimator hook designed as\n> > proposed here. In particular, if the API is such that we can only\n> > use the estimator when all the function arguments are plan-time\n> > constants, it's not going to be very helpful.\n\nActually, 95% of the time I use SRFs they are accepting constants and not row \nreferences. And I use a lot of SRFs.\n\n>\n> Yes :( One approach might be to break the function's domain into pieces\n> and have the estimator function calculate the estimated result set size\n> for each piece. So, given a trivial function like:\n>\n> foo(int):\n> if $1 < 10 then produce 100 rows\n> else produce 10000 rows\n>\n> If the planner has encoded the distribution of input tuples to the\n> function as a histogram, it could invoke the SRF's estimator function\n> for the boundary values of each histogram bucket, and use that to get an\n> idea of the function's likely result set size at runtime.\n>\n> And yes, the idea as sketched is totally unworkable :) For one thing,\n> the difficulty of doing this grows rapidly as the number of arguments to\n> the function increases. But perhaps there is some variant of this idea\n> that might work...\n>\n> Another thought is that the estimator could provide information on the\n> cost of evaluating the function, the number of tuples produced by the\n> function, and even the distribution of those tuples.\n\nAnother possibility would be to support default values for all estimator \nfunctions and have functions called in row context passed DEFAULT, thus \nleaving it up to the estimator writer to supply median values for context \ncases. Or to simply take the \"first\" values and use those. \n\nWhile any of these possibilites aren't ideal, they are an improvement over the \ncurrent \"flat 1000\" estimate. 
As I said, even the ability to set a \nper-function flat constant estimate would be an improvement.\n\n> BTW, why is this on -performance? It should be on -hackers.\n\n'cause I spend more time reading -performance, and I started the thread. \nCrossed over now.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 10 Apr 2005 18:29:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functionscan estimates"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm just in the middle of performance tunning of our database running\non PostgreSQL, and I've several questions (I've searched the online\ndocs, but without success).\n\n1) When I first use the EXPLAIN ANALYZE command, the time is much\n larger than in case of subsequent invocations of EXPLAIN ANALYZE.\n I suppose the plan prepared during the first invocation is cached\n somewhere, but I'm not sure where and for how long.\n\n I suppose the execution plans are connection specific, but\n I'm not sure whether this holds for the sql queries inside the\n triggers too. I've done some testing but the things are somehow\n more difficult thanks to persistent links (the commands will\n be executed from PHP).\n\n2) Is there some (performance) difference between BEFORE and AFTER\n triggers? I believe there's no measurable difference.\n\n3) Vast majority of SQL commands inside the trigger checks whether there\n exists a row that suits some conditions (same IP, visitor ID etc.)\n Currently I do this by\n\n SELECT INTO tmp id FROM ... JOIN ... WHERE ... LIMIT 1\n IF NOT FOUND THEN\n ....\n END IF;\n\n and so on. I believe this is fast and low-cost solution (compared\n to the COUNT(*) way I've used before), but is there some even better\n (faster) way to check row existence?\n\nThanks\nt.v.\n",
"msg_date": "Sun, 10 Apr 2005 06:36:56 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "performance - triggers, row existence etc."
},
{
"msg_contents": "[email protected] wrote:\n\n>Hello,\n>\n>I'm just in the middle of performance tunning of our database running\n>on PostgreSQL, and I've several questions (I've searched the online\n>docs, but without success).\n>\n>1) When I first use the EXPLAIN ANALYZE command, the time is much\n> larger than in case of subsequent invocations of EXPLAIN ANALYZE.\n> I suppose the plan prepared during the first invocation is cached\n> somewhere, but I'm not sure where and for how long.\n>\n> \n>\nThis is actually true for any command. If you just use \\timing and not \nexplain analyze, you will see that the first time is usually \nsignificantly longer than the rest.\n\nIt's because the tables you are using are being cached in RAM (by the OS \n& by postgres).\nIt's not a planning difference, it's a bulk data cache difference.\n\nWhen and how long is dependent on how much RAM you have, and how much of \nthe database you are using.\n\n> I suppose the execution plans are connection specific, but\n> I'm not sure whether this holds for the sql queries inside the\n> triggers too. I've done some testing but the things are somehow\n> more difficult thanks to persistent links (the commands will\n> be executed from PHP).\n> \n>\nConnection specific????\nIf you were doing PREPARE myquery AS SELECT ...; Then myquery would only \nexist for that connection. And cursors & temp tables are only for the \ngiven connection.\nBut otherwise I don't think the connection matters.\n\n>2) Is there some (performance) difference between BEFORE and AFTER\n> triggers? I believe there's no measurable difference.\n> \n>\nI don't know that there is a performance difference, but there is a \nsemantic one. If you are trying to (potentially) prevent the row from \nbeing inserted you must do that BEFORE, since the row doesn't exist yet. \nIf you are trying to update a foreign key reference to the new object, \nyou must do that AFTER, so that the row exists to reference.\n\n>3) Vast majority of SQL commands inside the trigger checks whether there\n> exists a row that suits some conditions (same IP, visitor ID etc.)\n> Currently I do this by\n>\n> SELECT INTO tmp id FROM ... JOIN ... WHERE ... LIMIT 1\n> IF NOT FOUND THEN\n> ....\n> END IF;\n>\n> and so on. I believe this is fast and low-cost solution (compared\n> to the COUNT(*) way I've used before), but is there some even better\n> (faster) way to check row existence?\n>\n> \n>\nSELECT ... WHERE EXISTS ...;\nI'm not sure what you are trying to do, but this makes a good joined \ncommand.\n\nSELECT what_I_want FROM table WHERE EXISTS (SELECT what_I_need FROM \nothertable);\n\nIn general, though, SELECT WHERE LIMIT 1 is about as fast as you can get.\n\n>Thanks\n>t.v.\n> \n>\nJohn\n=:->",
"msg_date": "Sun, 10 Apr 2005 08:09:46 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance - triggers, row existence etc."
}
] |
[
{
"msg_contents": "My server is crashing on a delete statement.\n\nHere's the error message in the log file:\n\nLOCATION: ShutdownXLOG, xlog.c:3090\nLOG: 00000: database system is shut down\nLOCATION: ShutdownXLOG, xlog.c:3104\nLOG: 00000: database system was shut down at 2005-04-10 21:54:34 CDT\nLOCATION: StartupXLOG, xlog.c:2596\nLOG: 00000: checkpoint record is at C/665D45E0\nLOCATION: StartupXLOG, xlog.c:2628\nLOG: 00000: redo record is at C/665D45E0; undo record is at 0/0; shutdown TRUE\nLOCATION: StartupXLOG, xlog.c:2653\nLOG: 00000: next transaction ID: 109177; next OID: 92547340\nLOCATION: StartupXLOG, xlog.c:2656\nLOG: 00000: database system is ready\nLOCATION: StartupXLOG, xlog.c:2946\nLOG: 00000: recycled transaction log file \"0000000C00000063\"\nLOCATION: MoveOfflineLogs, xlog.c:1656\nLOG: 00000: recycled transaction log file \"0000000C00000064\"\nLOCATION: MoveOfflineLogs, xlog.c:1656\nLOG: 00000: recycled transaction log file \"0000000C00000065\"\nLOCATION: MoveOfflineLogs, xlog.c:1656\nWARNING: 25P01: there is no transaction in progress\nLOCATION: EndTransactionBlock, xact.c:1607\nWARNING: 25P01: there is no transaction in progress\nLOCATION: EndTransactionBlock, xact.c:1607\nERROR: 42601: syntax error at end of input at character 77\nLOCATION: yyerror, scan.l:565\nWARNING: 25P01: there is no transaction in progress\nLOCATION: EndTransactionBlock, xact.c:1607\nERROR: 42601: syntax error at end of input at character 77\nLOCATION: yyerror, scan.l:565\nWARNING: 25P01: there is no transaction in progress\nLOCATION: EndTransactionBlock, xact.c:1607\nWARNING: 25001: there is already a transaction in progress\nLOCATION: BeginTransactionBlock, xact.c:1545\nERROR: 42601: syntax error at end of input at character 77\nLOCATION: yyerror, scan.l:565\nWARNING: 25001: there is already a transaction in progress\nLOCATION: BeginTransactionBlock, xact.c:1545\nERROR: 42601: syntax error at end of input at character 77\nLOCATION: yyerror, scan.l:565\nLOG: 00000: received fast shutdown request\nLOCATION: pmdie, postmaster.c:1736\nLOG: 00000: aborting any active transactions\nLOCATION: pmdie, postmaster.c:1743\nFATAL: 57P01: terminating connection due to administrator command\nLOCATION: ProcessInterrupts, postgres.c:1955\nFATAL: 57P01: terminating connection due to administrator command\nLOCATION: ProcessInterrupts, postgres.c:1955\nFATAL: 57P01: terminating connection due to administrator command\nLOCATION: ProcessInterrupts, postgres.c:1955\nFATAL: 57P01: terminating connection due to administrator command\nLOCATION: ProcessInterrupts, postgres.c:1955\nLOG: 00000: shutting down\nLOCATION: ShutdownXLOG, xlog.c:3090\nLOG: 00000: database system is shut down\nLOCATION: ShutdownXLOG, xlog.c:3104\n\n\nI just turned off SQL command logging, stopped and started the process\nand now this command which worked just fine before is causing the DB\nto crash. I'm running Postgres 7.4.7 on Solaris 9 with PostGIS 0.9.1.\n\nThe data I'm deleting is the parent table with many inherited child tables.\n\nAny ideas?\n\n-Don\n-- \nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\nhttp://www.MailLaunder.com/\nhttp://www.mobilemeridian.com/\n312-560-1574\n",
"msg_date": "Sun, 10 Apr 2005 22:16:30 -0500",
"msg_from": "Don Drake <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server crashing"
},
{
"msg_contents": "Don Drake <[email protected]> writes:\n> My server is crashing on a delete statement.\n> Here's the error message in the log file:\n\n> LOG: 00000: received fast shutdown request\n> LOCATION: pmdie, postmaster.c:1736\n\nThat says that something sent the postmaster a SIGINT signal.\nI think it's highly unlikely that the DELETE statement did it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Apr 2005 02:29:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server crashing "
},
{
"msg_contents": "Well, a vacuum on the entire DB seemed to have cleaned things up.\n\nNo other user was logged into the server, and I certainly did not send\nthe signal.\n\nI did clean up the serverlog file by truncating it ( > serverlog)\nwhile the DB was running, I don't think it liked that since it crashed\nthe DB. I've done this on my Linux server many times and it never\ncomplained. I won't be doing that again.\n\n-Don\n\nOn Apr 11, 2005 1:29 AM, Tom Lane <[email protected]> wrote:\n> Don Drake <[email protected]> writes:\n> > My server is crashing on a delete statement.\n> > Here's the error message in the log file:\n> \n> > LOG: 00000: received fast shutdown request\n> > LOCATION: pmdie, postmaster.c:1736\n> \n> That says that something sent the postmaster a SIGINT signal.\n> I think it's highly unlikely that the DELETE statement did it.\n> \n> regards, tom lane\n> \n\n\n-- \nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\nhttp://www.MailLaunder.com/\nhttp://www.mobilemeridian.com/\n312-560-1574\n",
"msg_date": "Mon, 11 Apr 2005 09:24:18 -0500",
"msg_from": "Don Drake <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server crashing"
},
{
"msg_contents": "by Truncating the serverlog do you mean the data_log (as in my case)\nlog file of the postgresql sever ?. If thats the file you truncated\nthan i think its not a good habit..since you might need it at some\npoint of time for some debugging purpose in production.\n\nYou could use something like assuming there is a dummy file of 0 bytes\nin logs folder..\n\ncp data_log data_log_$current_time\ncat logs/dummy_file>data_log\ngzip data_log_$current_time\nmv data_log_$current_time.gz logs/data_log_$current_time.gz\n\nHope this helps \n\nBest\nGourish Singbal\n\n\nOn Apr 11, 2005 7:54 PM, Don Drake <[email protected]> wrote:\n> Well, a vacuum on the entire DB seemed to have cleaned things up.\n> \n> No other user was logged into the server, and I certainly did not send\n> the signal.\n> \n> I did clean up the serverlog file by truncating it ( > serverlog)\n> while the DB was running, I don't think it liked that since it crashed\n> the DB. I've done this on my Linux server many times and it never\n> complained. I won't be doing that again.\n> \n> -Don\n> \n> On Apr 11, 2005 1:29 AM, Tom Lane <[email protected]> wrote:\n> > Don Drake <[email protected]> writes:\n> > > My server is crashing on a delete statement.\n> > > Here's the error message in the log file:\n> >\n> > > LOG: 00000: received fast shutdown request\n> > > LOCATION: pmdie, postmaster.c:1736\n> >\n> > That says that something sent the postmaster a SIGINT signal.\n> > I think it's highly unlikely that the DELETE statement did it.\n> >\n> > regards, tom lane\n> >\n> \n> --\n> Donald Drake\n> President\n> Drake Consulting\n> http://www.drakeconsult.com/\n> http://www.MailLaunder.com/\n> http://www.mobilemeridian.com/\n> 312-560-1574\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n\n-- \nBest,\nGourish Singbal\n",
"msg_date": "Mon, 11 Apr 2005 20:45:36 +0530",
"msg_from": "Gourish Singbal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server crashing"
}
] |
[
{
"msg_contents": "...\n> \n> 2) Is there some (performance) difference between BEFORE and AFTER\n> triggers? I believe there's no measurable difference.\n> \n\nBEFORE triggers might be faster, because you get a chance to reject the\nrecord before it is inserted into table. Common practice is to put\nvalidity checks into BEFORE triggers and updates of other tables into\nAFTER triggers. See also\nhttp://archives.postgresql.org/pgsql-sql/2005-04/msg00088.php.\n\n> 3) Vast majority of SQL commands inside the trigger checks \n> whether there\n> exists a row that suits some conditions (same IP, visitor ID etc.)\n> Currently I do this by\n> \n> SELECT INTO tmp id FROM ... JOIN ... WHERE ... LIMIT 1\n> IF NOT FOUND THEN\n> ....\n> END IF;\n> \n> and so on. I believe this is fast and low-cost solution (compared\n> to the COUNT(*) way I've used before), but is there some \n> even better\n> (faster) way to check row existence?\n> \n\nYou could save one temporary variable by using PERFORM:\n\nPERFORM 1 FROM ... JOIN ... WHERE ... LIMIT 1;\nIF NOT FOUND THEN\n...\nEND IF;\n\nYou might want to consider, if you need FOR UPDATE in those queries, so\nthat the referenced row maintains it's state until the end of\ntransaction. BTW, foreign keys weren't enough?\n\n Tambet\n",
"msg_date": "Mon, 11 Apr 2005 15:59:56 +0300",
"msg_from": "\"Tambet Matiisen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance - triggers, row existence etc."
}
] |
[
{
"msg_contents": "I am running 8.0.1 on a desktop xp system and a AS4 redhat system.\nThe redhat will be my production server in a week or so and it is returning\nslower the my desk top?\nI understand about the perc cards on the Dell (redhat) but my Dell 2 proc\nbox runs much faster (MSSQL) then my desktop, so I am wondering if I messed\nup Linux or have a postgres config issue. \n\nOn my desktop (1 proc 2 gigs of memor) I get:\n\"Merge Join (cost=7135.56..7296.25 rows=7906 width=228) (actual\ntime=5281.000..6266.000 rows=160593 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=955.78..957.07 rows=514 width=79) (actual\ntime=0.000..0.000 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l \n(cost=0.00..932.64 rows=514 width=79) (actual time=0.000..0.000 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=6179.77..6187.46 rows=3076 width=173) (actual\ntime=5281.000..5424.000 rows=160594 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Left Join (cost=154.41..6001.57 rows=3076 width=173)\n(actual time=94.000..2875.000 rows=177041 loops=1)\"\n\" Merge Cond: (((\"outer\".clientnum)::text =\n\"inner\".\"?column4?\") AND (\"outer\".jobtitleid = \"inner\".id))\"\n\" -> Index Scan using ix_tblassoc_jobtitleid on tblassociate\na (cost=0.00..5831.49 rows=3076 width=134) (actual time=0.000..676.000\nrows=177041 loops=1)\"\n\" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n\" -> Sort (cost=154.41..154.50 rows=34 width=67) (actual\ntime=78.000..204.000 rows=158255 loops=1)\"\n\" Sort Key: (jt.clientnum)::text, jt.id\"\n\" -> Seq Scan on tbljobtitle jt (cost=0.00..153.55\nrows=34 width=67) (actual time=0.000..31.000 rows=6603 loops=1)\"\n\" Filter: (1 = presentationid)\"\n\"Total runtime: 6563.000 ms\"\nOn my production (4 proc, 8 gigs of memory)\n\"Merge Join (cost=69667.87..70713.46 rows=15002 width=113) (actual\ntime=12140.091..12977.841 rows=160593 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=790.03..791.11 rows=433 width=49) (actual\ntime=2.936..3.219 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l \n(cost=0.00..771.06 rows=433 width=49) (actual time=0.062..1.981 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=68877.84..69320.17 rows=176933 width=75) (actual\ntime=12137.081..12305.125 rows=160594 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Right Join (cost=46271.48..48961.53 rows=176933\nwidth=75) (actual time=9096.623..10092.311 rows=177041 loops=1)\"\n\" Merge Cond: (((\"outer\".clientnum)::text =\n\"inner\".\"?column10?\") AND (\"outer\".id = \"inner\".jobtitleid))\"\n\" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt \n(cost=0.00..239.76 rows=6604 width=37) (actual time=0.068..12.157 rows=5690\nloops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Sort (cost=46271.48..46713.81 rows=176933 width=53)\n(actual time=9081.546..9295.495 rows=177041 loops=1)\"\n\" Sort Key: (a.clientnum)::text, a.jobtitleid\"\n\" -> Seq Scan on tblassociate a (cost=0.00..30849.25\nrows=176933 width=53) (actual time=543.931..1674.518 rows=177041 loops=1)\"\n\" Filter: ((clientnum)::text = 'SAKS'::text)\"\n\"Total runtime: 13101.402 ms\"\n \nI am at a bit of a loss as I would have thought my soon to be production box\nshould be blowing away my desktop?\n \nAlso stupid newb question?\nI am a bit confused looking at the 
results of explain analyze.\nI would have thought the explain analyze select * from viwassoclist where\nclientnum ='SAKS'\nWould first limit the result set by clientnum = SAKS is this the bottom\nline?\n\" -> Seq Scan on tblassociate a (cost=0.00..30849.25\nrows=176933 width=53) (actual time=543.931..1674.518 rows=177041 loops=1)\"\n\" Filter: ((clientnum)::text = 'SAKS'::text)\"\nwhich if I understand this (not saying I do) is taking actual\ntime=543.931..1674.518 rows=177041 loops=1\nthis means 1 loop takes between 543 and 1674 milisecs to return 177041 rows?\nAnd the analyzer thought I would take cost=0.00..30849.25?\n \nI am just trying to understand if I can do the sql different to get a faster\nresult.\nI am going to try and eliminate my left outer joins and aggregates on select\nthroughout the app as well as eliminate some unions that exist.\n \n\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n\n",
"msg_date": "Mon, 11 Apr 2005 13:14:32 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there somthing I need to do on my production server?"
},
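A note on reading the numbers Joel asks about. "cost" is expressed in the planner's own abstract units (roughly sequential-page fetches), not milliseconds, so it can only be compared against other estimates, never against the actual times. "actual time=543.931..1674.518" means the node produced its first row after about 544 ms and finished all 177041 rows after about 1674 ms per loop, and each node's actual time already includes the time spent in its child nodes. A minimal sketch that simply re-runs the scan from the plan above:

    EXPLAIN ANALYZE
    SELECT * FROM tblassociate a WHERE a.clientnum = 'SAKS';
    -- Sample node from the plan above:
    --   Seq Scan on tblassociate a (cost=0.00..30849.25 rows=176933 width=53)
    --                              (actual time=543.931..1674.518 rows=177041 loops=1)
    -- cost        : planner estimate, arbitrary units, used only to choose a plan
    -- actual time : time to first row .. time to last row, in ms, per loop
    -- rows/loops  : rows actually returned and how many times the node ran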
{
"msg_contents": "\n\nHere is the config for the AS4 server.\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\". Some settings, such as listen_address, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n# data_directory = 'ConfigDir' # use data in another directory\n#data_directory = '/pgdata/data'\n# hba_file = 'ConfigDir/pg_hba.conf' # the host-based authentication file\n# ident_file = 'ConfigDir/pg_ident.conf' # the IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n# external_pid_file = '(none)' # write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost' # what IP interface(s) to listen on;\n # defaults to localhost, '*' = any\n\nlisten_addresses = '*'\nport = 5432\nmax_connections = 100\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from\nshared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 12288 #5000 min 16, at least max_connections*2, 8KB each\n#work_mem = 1024 # min 64, size in KB\nwork_mem = 16384 # 8192\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000 #30000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1500 #1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based 
Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_pagE_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n#bgwriter_percent = 1 # 0-100% of dirty buffers in each round\n#bgwriter_maxpages = 100 # 0-1000 buffers max per round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\nfsync = true # turns forced synchronization on or off\nwal_sync_method = open_sync# fsync # the default varies across\nplatforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 2048#8 # min 4, 8KB each\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 100 #3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile\nsegment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = false\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 262144 #40000 typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\nrandom_page_cost = 2\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 250#10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr' # Valid values are combinations of stderr,\n # syslog and eventlog, depending on\n # platform.\n\n# This is relevant when logging to stderr:\nredirect_stderr = true # Enable capturing of stderr into log files.\n# These are only relevant if redirect_stderr is true:\nlog_directory = 'pg_log' # Directory where log files are written.\n # May be specified absolute or relative to\nPGDATA\nlog_filename = 'postgresql-%a.log' # Log file name pattern.\n # May include strftime() escapes\nlog_truncate_on_rotation = true # If true, any existing log file of the\n # same name as the new log file will be\ntruncated\n # rather than appended to. But such truncation\n # only occurs on time-driven rotation,\n # not on restarts or size-driven rotation.\n # Default is false, meaning append to existing\n # files in all cases.\nlog_rotation_age = 1440 # Automatic rotation of logfiles will happen\nafter\n # so many minutes. 
0 to disable.\nlog_rotation_size = 0 # Automatic rotation of logfiles will happen\nafter\n # so many kilobytes of log output. 0 to\ndisable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, notice, warning, error\n\n#log_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log,\nfatal,\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error,\npanic(off)\n\n#log_min_duration_statement = -1 # -1 is disabled, in milliseconds.\n\n#silent_mode = false # DO NOT USE without syslog or\nredirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_disconnections = false\n#log_duration = false\n#log_line_prefix = '' # e.g. '<%u%%%d> '\n # %u=user name %d=database name\n # %r=remote host and port\n # %p=PID %t=timestamp %i=command tag\n # %c=session id %l=session line number\n # %s=session start timestamp %x=transaction\nid\n # %q=stop here in non-session processes\n # %%='%'\n#log_statement = 'none' # none, mod, ddl, all\n#log_hostname = false\n\n______________________________\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#default_tablespace = '' # a tablespace name, or '' for default\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~200*max_connections bytes 
each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n#default_with_oids = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 11 Apr 2005 14:29:22 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there somthing I need to do on my production server?"
}
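Several of the settings above differ between the two machines (work_mem, random_page_cost, effective_cache_size, wal_sync_method). The session-settable ones can be tried out without touching postgresql.conf or restarting the postmaster; a sketch, assuming only the view name from the earlier message:

    SHOW work_mem;
    SHOW random_page_cost;
    SET work_mem = 32768;              -- per session, value in KB on 8.0
    SET random_page_cost = 2;
    EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';
    RESET work_mem;                    -- back to the postgresql.conf values
    RESET random_page_cost;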
] |
[
{
"msg_contents": "hi\ni'm not totally sure i should ask on this mailing list - so if you think\ni should better ask someplace else, please let me know.\n\nthe problem i have is that specific queries (inserts and updates) take a\nlong time to run.\n\nof course i do vacuum analyze frequently. i also use explain analyze on\nqueries.\n\nthe problem is that both the inserts and updated operate on\nheavy-tirggered tables.\nand it made me wonder - is there a way to tell how much time of backend\nwas spent on triggers, index updates and so on?\nlike:\ntotal query time: 1 secons\ntrigger a: 0.50 second\ntrigger b: 0.25 second\nindex update: 0.1 second\n\nsomething like this.\n\nis it possible?\nwill it be ever possible?\n\nhubert\n\n-- \nhubert lubaczewski\nNetwork Operations Center\neo Networks Sp. z o.o.\n",
"msg_date": "Tue, 12 Apr 2005 12:46:43 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "profiling postgresql queries?"
},
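Until the per-trigger EXPLAIN ANALYZE output mentioned later in this thread ships in a release, one rough workaround is to time the same statement on a test copy with and without a given trigger, inside a transaction so the trigger is never permanently dropped. Only a sketch for psql; the table, trigger and values are made up:

    \timing
    BEGIN;
    INSERT INTO heavy_table (id, payload) VALUES (1, 'probe');  -- all triggers fire
    ROLLBACK;
    BEGIN;
    DROP TRIGGER trigger_a ON heavy_table;
    INSERT INTO heavy_table (id, payload) VALUES (1, 'probe');  -- same insert minus trigger_a
    ROLLBACK;                                                   -- the rollback restores trigger_a

The difference between the two timings approximates trigger_a's share of the statement.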
{
"msg_contents": "hubert lubaczewski <[email protected]> writes:\n> and it made me wonder - is there a way to tell how much time of backend\n> was spent on triggers, index updates and so on?\n\nIn CVS tip, EXPLAIN ANALYZE will break out the time spent in each\ntrigger. This is not in any released version, but if you're desperate\nyou could load up a play server with your data and test.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Apr 2005 10:10:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling postgresql queries? "
},
{
"msg_contents": "Speaking of triggers...\n\nIs there any plan to speed up plpgsql tiggers? Fairly simple\ncrosstable insert triggers seem to slow my inserts to a crawl.\n\nIs the best thing just to write triggers in C (I really don't want to\nput this stuff in the application logic because it really doesn't\nbelong there).\n\nAlex Turner\nnetEconomist\n\nOn Apr 12, 2005 10:10 AM, Tom Lane <[email protected]> wrote:\n> hubert lubaczewski <[email protected]> writes:\n> > and it made me wonder - is there a way to tell how much time of backend\n> > was spent on triggers, index updates and so on?\n> \n> In CVS tip, EXPLAIN ANALYZE will break out the time spent in each\n> trigger. This is not in any released version, but if you're desperate\n> you could load up a play server with your data and test.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n",
"msg_date": "Tue, 12 Apr 2005 10:18:31 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling postgresql queries?"
},
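For what it's worth, the usual culprit with a simple cross-table insert trigger is not plpgsql itself but an unindexed lookup inside the trigger body, which turns every insert into a sequential scan of the other table. A hypothetical illustration of the pattern (all names invented):

    CREATE OR REPLACE FUNCTION propagate_total() RETURNS trigger AS '
    BEGIN
        UPDATE order_totals
           SET total = total + NEW.amount
         WHERE order_id = NEW.order_id;  -- a seq scan per inserted row unless indexed
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER trg_propagate_total
        AFTER INSERT ON order_lines
        FOR EACH ROW EXECUTE PROCEDURE propagate_total();

    -- The fix is usually an index on the looked-up column, not a rewrite in C:
    CREATE INDEX order_totals_order_id_idx ON order_totals (order_id);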
{
"msg_contents": "On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:\n> \n> the problem is that both the inserts and updated operate on\n> heavy-tirggered tables.\n> and it made me wonder - is there a way to tell how much time of backend\n> was spent on triggers, index updates and so on?\n> like:\n> total query time: 1 secons\n> trigger a: 0.50 second\n> trigger b: 0.25 second\n> index update: 0.1 second\n\nEXPLAIN ANALYZE in 8.1devel (CVS HEAD) prints a few statistics for\ntriggers:\n\nEXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Index Scan using foo_x_idx on foo (cost=0.00..14.44 rows=10 width=22) (actual time=0.184..0.551 rows=7 loops=1)\n Index Cond: (x = 20)\n Trigger row_trig1: time=1.625 calls=7\n Trigger row_trig2: time=1.346 calls=7\n Trigger stmt_trig1: time=1.436 calls=1\n Total runtime: 9.659 ms\n(6 rows)\n\n8.1devel changes frequently (sometimes requiring initdb) and isn't\nsuitable for production, but if the trigger statistics would be\nhelpful then you could set up a test server and load a copy of your\ndatabase into it. Just beware that because it's bleeding edge, it\nmight destroy your data and it might behave differently than released\nversions.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Tue, 12 Apr 2005 08:43:59 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling postgresql queries?"
},
{
"msg_contents": "On Tue, Apr 12, 2005 at 10:18:31AM -0400, Alex Turner wrote:\n> Speaking of triggers...\n> Is there any plan to speed up plpgsql tiggers? Fairly simple\n> crosstable insert triggers seem to slow my inserts to a crawl.\n\nplpgsql is quite fast actually. if some triggers slow inserts too much,\ni guess you should be able to spped them up with some performance review\nof trigger code.\n\ndepesz\n\n-- \nhubert lubaczewski\nNetwork Operations Center\neo Networks Sp. z o.o.",
"msg_date": "Tue, 12 Apr 2005 16:45:57 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: profiling postgresql queries?"
},
{
"msg_contents": "On Tue, Apr 12, 2005 at 08:43:59AM -0600, Michael Fuhr wrote:\n> 8.1devel changes frequently (sometimes requiring initdb) and isn't\n> suitable for production, but if the trigger statistics would be\n> helpful then you could set up a test server and load a copy of your\n> database into it. Just beware that because it's bleeding edge, it\n> might destroy your data and it might behave differently than released\n> versions.\n\ngreat. this is exactly what i need. thanks for hint.\n\ndepesz\n\n-- \nhubert lubaczewski\nNetwork Operations Center\neo Networks Sp. z o.o.",
"msg_date": "Tue, 12 Apr 2005 17:00:19 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: profiling postgresql queries?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Keith Worthington [mailto:[email protected]]\n> Sent: Monday, April 11, 2005 7:44 PM\n> To: Neil Conway\n> Cc: PostgreSQL Perform\n> Subject: Re: [PERFORM] 4 way JOIN using aliases\n> \n> Neil Conway wrote:\n> > Keith Worthington wrote:\n> > \n> >> -> Seq Scan on tbl_current \n> (cost=0.00..1775.57 rows=76457\n> >> width=31) (actual time=22.870..25.024 rows=605 loops=1)\n> > \n> > \n> > This rowcount is way off -- have you run ANALYZE recently?\n> > [...]\n> \n> I run vacuumdb with the analyze option every morning via a \n> cron job. In my ignorance I do not know if that is the same\n> thing.\n\nPass it an --analyze option if you aren't already.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 12 Apr 2005 08:41:55 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 4 way JOIN using aliases"
},
{
"msg_contents": "On Tue, 12 Apr 2005 08:41:55 -0500, Dave Held wrote\n> > -----Original Message-----\n> > From: Keith Worthington [mailto:[email protected]]\n> > Sent: Monday, April 11, 2005 7:44 PM\n> > To: Neil Conway\n> > Cc: PostgreSQL Perform\n> > Subject: Re: [PERFORM] 4 way JOIN using aliases\n> > \n> > Neil Conway wrote:\n> > > Keith Worthington wrote:\n> > > \n> > >> -> Seq Scan on tbl_current \n> > (cost=0.00..1775.57 rows=76457\n> > >> width=31) (actual time=22.870..25.024 rows=605 loops=1)\n> > > \n> > > \n> > > This rowcount is way off -- have you run ANALYZE recently?\n> > > [...]\n> > \n> > I run vacuumdb with the analyze option every morning via a \n> > cron job. In my ignorance I do not know if that is the same\n> > thing.\n> \n> Pass it an --analyze option if you aren't already.\n> \n> __\n> David B. Held\n> \n\nHere is the command I have in the cron file.\n\nvacuumdb --full --analyze --verbose --username dbuser --dbname ${IPA_DB} >>\n${IPA_LOG_DIR}/ipavcmdb.log 2>&1\n\nIf this performs the analyze as I thought it should I do not know why the row\ncount is so badly off.\n\nKind Regards,\nKeith\n",
"msg_date": "Tue, 12 Apr 2005 14:14:26 -0400",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 4 way JOIN using aliases"
}
] |
[
{
"msg_contents": "Hello,\nI am having a bit of trouble updating a single integer column.\nMy table has around 10 columns and 260 000 records.\n\n\nupdate no.records set uid = 2;\n(uid is an integer. It has a btree index)\n\nThis update takes more than 20 minutes to execute. Is this normal? This \nwill be totally unacceptable when my table grows.\nAny ideas?\n\n\n",
"msg_date": "Tue, 12 Apr 2005 16:12:32 +0200",
"msg_from": "Bendik R.Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update"
},
{
"msg_contents": "\"Bendik R.Johansen\" <[email protected]> writes:\n> I am having a bit of trouble updating a single integer column.\n> My table has around 10 columns and 260 000 records.\n\n> update no.records set uid = 2;\n> (uid is an integer. It has a btree index)\n\n> This update takes more than 20 minutes to execute. Is this normal?\n\nTakes about 20 seconds to update a table of that size on my machine...\n\nWhat PG version is this? We used to have some performance issues with\nvery large numbers of equal keys in btree indexes. Does dropping the\nindex make it faster?\n\nAre there foreign keys referencing this table from other tables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Apr 2005 10:35:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update "
},
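Tom's "does dropping the index make it faster?" test can be run without committing to losing anything, since DDL such as DROP INDEX is transactional in PostgreSQL. A sketch for psql; the index name is a guess at whatever btree covers uid:

    \timing
    BEGIN;
    DROP INDEX no.records_uid_idx;    -- hypothetical name for the uid btree
    UPDATE no.records SET uid = 2;
    ROLLBACK;                         -- undoes both the UPDATE and the DROP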
{
"msg_contents": "Hello, thank you for the quick reply.\n\nI am running version 8.0.1\n\nBelow is the schema for the table i will be using. I tried dropping the \nindex, but it did not help.\n\n Table \"no.records\"\n Column | Type | \nModifiers\n-------------+-------------------------- \n+-------------------------------------------------------\n id | integer | not null default \nnextval('\"no\".records_id_seq'::text)\n origid | integer |\n cid | character varying(16) | default ''::character varying\n category | integer[] |\n name | character varying(255) | not null default \n''::character varying\n address | character varying(128) |\n street | character varying(127) |\n postalcode | integer |\n postalsite | character varying(64) |\n email | character varying(64) |\n website | character varying(64) |\n phone | character varying(16) |\n fax | character varying(16) |\n contact | character varying(64) |\n info | text |\n position | point |\n importid | integer |\n exportid | integer |\n created | timestamp with time zone |\n creator | integer |\n updated | timestamp with time zone | default \n('now'::text)::timestamp(6) with time zone\n updater | integer |\n uid | integer |\n relevance | real | not null default 0\n phonetic | text |\n uncertainty | integer | default 99999999\n indexed | boolean | default false\n record | text |\nIndexes:\n \"records_pkey\" PRIMARY KEY, btree (id)\n \"records_category_idx\" gist (category)\n \"records_cid_idx\" btree (cid)\n \"records_uid_idx\" btree (uid)\n\n\nOn Apr 12, 2005, at 16:35, Tom Lane wrote:\n\n> \"Bendik R.Johansen\" <[email protected]> writes:\n>> I am having a bit of trouble updating a single integer column.\n>> My table has around 10 columns and 260 000 records.\n>\n>> update no.records set uid = 2;\n>> (uid is an integer. It has a btree index)\n>\n>> This update takes more than 20 minutes to execute. Is this normal?\n>\n> Takes about 20 seconds to update a table of that size on my machine...\n>\n> What PG version is this? We used to have some performance issues with\n> very large numbers of equal keys in btree indexes. Does dropping the\n> index make it faster?\n>\n> Are there foreign keys referencing this table from other tables?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Tue, 12 Apr 2005 17:03:04 +0200",
"msg_from": "Bendik R.Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update "
},
{
"msg_contents": "\"Bendik R. Johansen\" <[email protected]> writes:\n> Below is the schema for the table i will be using. I tried dropping the \n> index, but it did not help.\n\n> Indexes:\n> \"records_pkey\" PRIMARY KEY, btree (id)\n> \"records_category_idx\" gist (category)\n> \"records_cid_idx\" btree (cid)\n> \"records_uid_idx\" btree (uid)\n\nHmm ... my suspicion would fall first on the GIST index, to tell you the\ntruth. Did you try dropping that one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Apr 2005 11:16:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update "
},
{
"msg_contents": "Yes, I tried dropping it but it did not make a difference.\nCould the table be corrupt or something?\nWell, the important thing is that I now know that this is not typical \nfor PostgreSQL, so I will not have to rethink my whole project.\n\nThanks, so far.\n\n\nOn Apr 12, 2005, at 17:16, Tom Lane wrote:\n\n> \"Bendik R. Johansen\" <[email protected]> writes:\n>> Below is the schema for the table i will be using. I tried dropping \n>> the\n>> index, but it did not help.\n>\n>> Indexes:\n>> \"records_pkey\" PRIMARY KEY, btree (id)\n>> \"records_category_idx\" gist (category)\n>> \"records_cid_idx\" btree (cid)\n>> \"records_uid_idx\" btree (uid)\n>\n> Hmm ... my suspicion would fall first on the GIST index, to tell you \n> the\n> truth. Did you try dropping that one?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Tue, 12 Apr 2005 17:37:42 +0200",
"msg_from": "Bendik R.Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update "
},
{
"msg_contents": "\"Bendik R. Johansen\" <[email protected]> writes:\n> Yes, I tried dropping it but it did not make a difference.\n> Could the table be corrupt or something?\n\nYou didn't directly answer the question about whether there were foreign\nkeys leading to this table. Checking foreign keys could be the problem,\nparticularly if the referencing columns don't have indexes.\n\nAlso, maybe the table is just bloated? What does VACUUM VERBOSE say\nabout it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Apr 2005 11:40:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update "
}
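A sketch of both checks Tom suggests, using only catalogs and commands that exist in 8.0 and the table name from the thread:

    -- Foreign keys in other tables that point at no.records; per Tom's point,
    -- these can make a big UPDATE expensive, especially if the referencing
    -- columns have no index.
    SELECT conname, conrelid::regclass AS referencing_table
      FROM pg_constraint
     WHERE contype = 'f'
       AND confrelid = 'no.records'::regclass;

    -- Bloat check: the verbose output lists pages and removable row versions
    -- for the table and each of its indexes.
    VACUUM VERBOSE no.records;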
] |
[
{
"msg_contents": "I'd like to create a fail-over server in case of a problem. Ideally, it\nwould be synchronized with our main database server, but I don't see any\nmajor problem with having a delay of up to 4 hours between syncs.\n\nMy database is a little shy of 10 Gigs, with much of that data being in an\narchived log table. Every day a batch job is run which adds 100,000 records\nover the course of 3 hours (the batch job does a lot of pre/post\nprocessing).\n\nDoing a restore of the db backup in vmware takes about 3 hours. I suspect a\npowerful server with a better disk setup could do it faster, but I don't\nhave servers like that at my disposal, so I need to assume worst-case of 3-4\nhours is typical.\n\nSo, my question is this: My server currently works great, performance wise.\nI need to add fail-over capability, but I'm afraid that introducing a\nstressful task such as replication will hurt my server's performance. Is\nthere any foundation to my fears? I don't need to replicate the archived log\ndata because I can easily restore that in a separate step from the nightly\nbackup if disaster occurs. Also, my database load is largely selects. My\napplication works great with PostgreSQL 7.3 and 7.4, but I'm currently using\n7.3. \n\nI'm eager to hear your thoughts and experiences,\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nEarn a commission of $100 - $750 by recommending Elite CMS. Visit\nhttp://www.elitecms.com/Contact_Us.partner for details.\n\n\n",
"msg_date": "Tue, 12 Apr 2005 11:25:04 -0500",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance hit for replication"
},
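Before picking a replication tool it can help to measure how much write traffic actually has to move, since the archived log table is going to be restored from the nightly backup rather than replicated. A sketch using the standard statistics views (row-level stats collection has to be enabled for these counters to be populated):

    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
     LIMIT 10;
    -- If the archived log table dominates these counters, leaving it out of
    -- the replication set shrinks the daily volume the standby has to absorb.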
{
"msg_contents": "\n>So, my question is this: My server currently works great, performance wise.\n>I need to add fail-over capability, but I'm afraid that introducing a\n>stressful task such as replication will hurt my server's performance. Is\n>there any foundation to my fears? I don't need to replicate the archived log\n>data because I can easily restore that in a separate step from the nightly\n>backup if disaster occurs. Also, my database load is largely selects. My\n>application works great with PostgreSQL 7.3 and 7.4, but I'm currently using\n>7.3. \n>\n>I'm eager to hear your thoughts and experiences,\n> \n>\nWell with replicator you are going to take a pretty big hit initially \nduring the full\nsync but then you could use batch replication and only replicate every \n2-3 hours.\n\nI am pretty sure Slony has similar capabilities.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n",
"msg_date": "Tue, 12 Apr 2005 09:37:01 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance hit for replication"
},
{
"msg_contents": "On Tuesday 12 April 2005 09:25, Matthew Nuzum wrote:\n> I'd like to create a fail-over server in case of a problem. Ideally, it\n> would be synchronized with our main database server, but I don't see any\n> major problem with having a delay of up to 4 hours between syncs.\n>\n> My database is a little shy of 10 Gigs, with much of that data being in an\n> archived log table. Every day a batch job is run which adds 100,000 records\n> over the course of 3 hours (the batch job does a lot of pre/post\n> processing).\n>\n> Doing a restore of the db backup in vmware takes about 3 hours. I suspect a\n> powerful server with a better disk setup could do it faster, but I don't\n> have servers like that at my disposal, so I need to assume worst-case of\n> 3-4 hours is typical.\n>\n> So, my question is this: My server currently works great, performance wise.\n> I need to add fail-over capability, but I'm afraid that introducing a\n> stressful task such as replication will hurt my server's performance. Is\n> there any foundation to my fears? I don't need to replicate the archived\n> log data because I can easily restore that in a separate step from the\n> nightly backup if disaster occurs. Also, my database load is largely\n> selects. My application works great with PostgreSQL 7.3 and 7.4, but I'm\n> currently using 7.3.\n>\n> I'm eager to hear your thoughts and experiences,\n\nYour application sounds like a perfact candidate for Slony-I \nhttp://www.slony.info . Using Slony-I I see about a 5-7% performance hit in \nterms of the number of insert.update/delete per second i can process.\n\nDepending on your network connection , DML volume, and the power of your \nbackup server, the replica could be as little as 10 seconds behind the \norigin. A failover/switchover could occur in under 60 seconds.\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\n\nhttp://www.wavefire.com\nph: 250.717.0200\nfx: 250.763.1759\n",
"msg_date": "Tue, 12 Apr 2005 09:54:52 -0700",
"msg_from": "Darcy Buskermolen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance hit for replication"
},
{
"msg_contents": "> >I'm eager to hear your thoughts and experiences,\n> >\n> >\n> Well with replicator you are going to take a pretty big hit initially\n> during the full\n> sync but then you could use batch replication and only replicate every\n> 2-3 hours.\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n\nThanks, I'm looking at your product and will contact you off list for more\ndetails soon.\n\nOut of curiosity, does batch mode produce a lighter load? Live updating will\nprovide maximum data security, and I'm most interested in how it affects the\nserver.\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nEarn a commission of $100 - $750 by recommending Elite CMS. Visit\nhttp://www.elitecms.com/Contact_Us.partner for details.\n\n",
"msg_date": "Tue, 12 Apr 2005 11:55:40 -0500",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance hit for replication"
},
{
"msg_contents": "Matthew Nuzum wrote:\n\n>>>I'm eager to hear your thoughts and experiences,\n>>>\n>>>\n>>> \n>>>\n>>Well with replicator you are going to take a pretty big hit initially\n>>during the full\n>>sync but then you could use batch replication and only replicate every\n>>2-3 hours.\n>>\n>>Sincerely,\n>>\n>>Joshua D. Drake\n>>\n>> \n>>\n>\n>Thanks, I'm looking at your product and will contact you off list for more\n>details soon.\n>\n>Out of curiosity, does batch mode produce a lighter load?\n>\nWell more of a burstier load. You could also do live replication but \nreplicator requires\nsome IO which VMWare just ins't that good at :)\n\nSincerely,\n\nJoshua D. Drake\n\n",
"msg_date": "Tue, 12 Apr 2005 10:13:27 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance hit for replication"
},
{
"msg_contents": "[email protected] (\"Joshua D. Drake\") writes:\n>>So, my question is this: My server currently works great,\n>>performance wise. I need to add fail-over capability, but I'm\n>>afraid that introducing a stressful task such as replication will\n>>hurt my server's performance. Is there any foundation to my fears? I\n>>don't need to replicate the archived log data because I can easily\n>>restore that in a separate step from the nightly backup if disaster\n>>occurs. Also, my database load is largely selects. My application\n>>works great with PostgreSQL 7.3 and 7.4, but I'm currently using\n>>7.3.\n>>\n>>I'm eager to hear your thoughts and experiences,\n>>\n> Well with replicator you are going to take a pretty big hit\n> initially during the full sync but then you could use batch\n> replication and only replicate every 2-3 hours.\n>\n> I am pretty sure Slony has similar capabilities.\n\nYes, similar capabilities, similar \"pretty big hit.\"\n\nThere's a downside to \"batch replication\" that some of the data\nstructures grow in size if you have appreciable periods between\nbatches.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/slony.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 12 Apr 2005 16:25:14 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance hit for replication"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Keith Worthington [mailto:[email protected]]\n> Sent: Tuesday, April 12, 2005 1:14 PM\n> To: Dave Held; PostgreSQL Perform\n> Subject: Re: [PERFORM] 4 way JOIN using aliases\n> \n> > > I run vacuumdb with the analyze option every morning via a \n> > > cron job. In my ignorance I do not know if that is the same\n> > > thing.\n> > \n> > Pass it an --analyze option if you aren't already.\n> \n> Here is the command I have in the cron file.\n> \n> vacuumdb --full --analyze --verbose --username dbuser \n> --dbname ${IPA_DB} >>\n> ${IPA_LOG_DIR}/ipavcmdb.log 2>&1\n> \n> If this performs the analyze as I thought it should I do not \n> know why the row\n> count is so badly off.\n\nYou may need to increase the statistics target for the relevant\ncolumns. Look at:\n\nhttp://www.postgresql.org/docs/7.4/static/sql-altertable.html\n\nIn particular, the SET STATISTICS clause.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 12 Apr 2005 13:22:12 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 4 way JOIN using aliases"
}
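A concrete version of the suggestion above, against the table whose row estimate was off; the column name is a guess, so substitute whichever column the planner misestimates:

    ALTER TABLE tbl_current ALTER COLUMN id_item SET STATISTICS 200;
    ANALYZE tbl_current;
    -- Then re-run EXPLAIN ANALYZE and compare the estimated row count
    -- (76457 earlier) against the actual one (605).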
] |
[
{
"msg_contents": "Hi all,\n\nI've just noticed an interesting behaviour with PGSQL. My software is\nmade up of few different modules that interact through PGSQL database.\nAlmost every query they do is an individual transaction and there is a\ngood reason for that. After every query done there is some processing\ndone by those modules and I didn't want to lock the database in a\nsingle transaction while that processing is happening. Now, the\ninteresting behaviour is this. I've ran netstat on the machine where\nmy software is running and I searched for tcp connections to my PGSQL\nserver. What i found was hundreds of lines like this:\n\ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:41631 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41119 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41311 TIME_WAIT \ntcp 0 0 remus.dstc.monash.:8649 remus.dstc.monash:41369 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40479 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39454 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39133 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:41501 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39132 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41308 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:40667 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41179 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39323 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41434 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:40282 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41050 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41177 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39001 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:41305 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:38937 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39128 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40600 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:41624 TIME_WAIT \ntcp 0 0 remus.dstc.monash:43002 remus.dstc.monash:39000 TIME_WAIT \n\nNow could someone explain to me what this really means and what effect\nit might have on the machine (the same machine where I ran this\nquery)? Would there eventually be a shortage of available ports if\nthis kept growing? The reason I am asking this is because one of my\nmodules was raising exception saying that TCP connection could not be\nestablish to a server it needed to connect to. This may sound\nconfusing so I'll try to explain this.\n\nWe have this scenario, there is a PGSQL server (postmaster) which is\nrunning on machine A. Then there is a custom server called DBServer\nwhich is running on machine B. This server accepts connections from a\nclient called an Agent. Agent may ran on any machine out there and it\nwould connect back to DBServer asking for some information. The\ncommunication between these two is in the form of SQL queries. 
When\nagent sends a query to DBServer it passes that query to machine A\npostmaster and then passes back the result of the query to that Agent.\nThe connection problem I mentioned in the paragraph above happens when\nAgent tries to connect to DBServer.\n\nSo the only question I have here is would those lingering socket\nconnections above have any effect on the problem I am having. If not I\nam sorry for bothering you all with this, if yes I would like to know\nwhat I could do to avoid that.\n\nAny help would be appreciated,\nRegards,\nSlavisa\n",
"msg_date": "Wed, 13 Apr 2005 12:29:53 +1000",
"msg_from": "Slavisa Garic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Many connections lingering"
},
{
"msg_contents": "Slavisa Garic <[email protected]> writes:\n> ... Now, the\n> interesting behaviour is this. I've ran netstat on the machine where\n> my software is running and I searched for tcp connections to my PGSQL\n> server. What i found was hundreds of lines like this:\n\n> tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT\n> tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT\n> tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT\n\nThis is a network-level issue: the TCP stack on your machine knows the\nconnection has been closed, but it hasn't seen an acknowledgement of\nthat fact from the other machine, and so it's remembering the connection\nnumber so that it can definitively say \"that connection is closed\" if\nthe other machine asks. I'd guess that either you have a flaky network\nor there's something bogus about the TCP stack on the client machine.\nAn occasional dropped FIN packet is no surprise, but hundreds of 'em\nare suspicious.\n\n> Now could someone explain to me what this really means and what effect\n> it might have on the machine (the same machine where I ran this\n> query)? Would there eventually be a shortage of available ports if\n> this kept growing? The reason I am asking this is because one of my\n> modules was raising exception saying that TCP connection could not be\n> establish to a server it needed to connect to.\n\nThat kinda sounds like \"flaky network\" to me, but I could be wrong.\nIn any case, you'd have better luck asking kernel or network hackers\nabout this than database weenies ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Apr 2005 22:51:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many connections lingering "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Slavisa Garic <[email protected]> writes:\n> > ... Now, the\n> > interesting behaviour is this. I've ran netstat on the machine where\n> > my software is running and I searched for tcp connections to my PGSQL\n> > server. What i found was hundreds of lines like this:\n> \n> > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT\n> > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT\n> > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT\n> \n> This is a network-level issue: the TCP stack on your machine knows the\n> connection has been closed, but it hasn't seen an acknowledgement of\n> that fact from the other machine, and so it's remembering the connection\n> number so that it can definitively say \"that connection is closed\" if\n> the other machine asks. I'd guess that either you have a flaky network\n> or there's something bogus about the TCP stack on the client machine.\n> An occasional dropped FIN packet is no surprise, but hundreds of 'em\n> are suspicious.\n\nNo, what Tom's describing is a different pair of states called FIN_WAIT_1 and\nFIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to\nprevent any delayed packets from earlier in the connection causing problems\nwith a subsequent good connection. Otherwise you could get data from the old\nconnection mixed in the data for later ones.\n\n> > Now could someone explain to me what this really means and what effect\n> > it might have on the machine (the same machine where I ran this\n> > query)? Would there eventually be a shortage of available ports if\n> > this kept growing? The reason I am asking this is because one of my\n> > modules was raising exception saying that TCP connection could not be\n> > establish to a server it needed to connect to.\n\nWhat it does indicate is that each query you're making is probably not just a\nseparate transaction but a separate TCP connection. That's probably not\nnecessary. If you have a single long-lived process you could just keep the TCP\nconnection open and issue a COMMIT after each transaction. That's what I would\nrecommend doing.\n\n\nUnless you have thousands of these TIME_WAIT connections they probably aren't\nactually directly the cause of your failure to establish connections. But yes\nit can happen. \n\nWhat's more likely happening here is that you're stressing the server by\nissuing so many connection attempts that you're triggering some bug, either in\nthe TCP stack or Postgres that is causing some connection attempts to not be\nhandled properly.\n\nI'm skeptical that there's a bug in Postgres since lots of people do in fact\nrun web servers configured to open a new connection for every page. But this\nwouldn't happen to be a Windows server would it? Perhaps the networking code\nin that port doesn't do the right thing in this case? \n\n-- \ngreg\n\n",
"msg_date": "12 Apr 2005 23:27:09 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Many connections lingering"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> This is a network-level issue: the TCP stack on your machine knows the\n>> connection has been closed, but it hasn't seen an acknowledgement of\n>> that fact from the other machine, and so it's remembering the connection\n>> number so that it can definitively say \"that connection is closed\" if\n>> the other machine asks.\n\n> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and\n> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout.\n\nD'oh, obviously it's been too many years since I read Stevens ;-)\n\nSo AFAICS this status report doesn't actually indicate any problem,\nother than massively profligate use of separate connections. Greg's\ncorrect that there's some risk of resource exhaustion at the TCP level,\nbut it's not very likely. I'd be more concerned about the amount of\nresources wasted in starting a separate Postgres backend for each\nconnection. PG backends are fairly heavyweight objects --- if you\nare at all concerned about performance, you want to get a decent number\nof queries done in each connection. Consider using a connection pooler.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Apr 2005 01:01:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Many connections lingering "
},
{
"msg_contents": "Hi Greg,\n\nThis is not a Windows server. Both server and client are the same\nmachine (done for testing purposes) and it is a Fedora RC2 machine.\nThis also happens on debian server and client in which case they were\ntwo separate machines.\n\nThere are thousands (2+) of these waiting around and each one of them\ndissapears after 50ish seconds. I tried psql command line and\nmonitored that connection in netstats. After I did a graceful exit\n(\\quit) the connection changed to TIME_WAIT and it was sitting there\nfor around 50 seconds. I thought I could do what you suggested with\nhaving one connection and making each query a full BEGIN/QUERY/COMMIT\ntransaction but I thought I could avoid that :).\n\nThis is a serious problem for me as there are multiple users using our\nsoftware on our server and I would want to avoid having connections\nopen for a long time. In the scenario mentioned below I haven't\nexplained the magnitute of the communications happening between Agents\nand DBServer. There could possibly be 100 or more Agents per\nexperiment, per user running on remote machines at the same time,\nhence we need short transactions/pgsql connections. Agents need a\nreliable connection because failure to connect could mean a loss of\ncomputation results that were gathered over long periods of time.\n\nThanks for the help by the way :),\nRegards,\nSlavisa\n\nOn 12 Apr 2005 23:27:09 -0400, Greg Stark <[email protected]> wrote:\n> \n> Tom Lane <[email protected]> writes:\n> \n> > Slavisa Garic <[email protected]> writes:\n> > > ... Now, the\n> > > interesting behaviour is this. I've ran netstat on the machine where\n> > > my software is running and I searched for tcp connections to my PGSQL\n> > > server. What i found was hundreds of lines like this:\n> >\n> > > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT\n> > > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT\n> > > tcp 0 0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT\n> >\n> > This is a network-level issue: the TCP stack on your machine knows the\n> > connection has been closed, but it hasn't seen an acknowledgement of\n> > that fact from the other machine, and so it's remembering the connection\n> > number so that it can definitively say \"that connection is closed\" if\n> > the other machine asks. I'd guess that either you have a flaky network\n> > or there's something bogus about the TCP stack on the client machine.\n> > An occasional dropped FIN packet is no surprise, but hundreds of 'em\n> > are suspicious.\n> \n> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and\n> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to\n> prevent any delayed packets from earlier in the connection causing problems\n> with a subsequent good connection. Otherwise you could get data from the old\n> connection mixed in the data for later ones.\n> \n> > > Now could someone explain to me what this really means and what effect\n> > > it might have on the machine (the same machine where I ran this\n> > > query)? Would there eventually be a shortage of available ports if\n> > > this kept growing? The reason I am asking this is because one of my\n> > > modules was raising exception saying that TCP connection could not be\n> > > establish to a server it needed to connect to.\n> \n> What it does indicate is that each query you're making is probably not just a\n> separate transaction but a separate TCP connection. That's probably not\n> necessary. 
If you have a single long-lived process you could just keep the TCP\n> connection open and issue a COMMIT after each transaction. That's what I would\n> recommend doing.\n> \n> Unless you have thousands of these TIME_WAIT connections they probably aren't\n> actually directly the cause of your failure to establish connections. But yes\n> it can happen.\n> \n> What's more likely happening here is that you're stressing the server by\n> issuing so many connection attempts that you're triggering some bug, either in\n> the TCP stack or Postgres that is causing some connection attempts to not be\n> handled properly.\n> \n> I'm skeptical that there's a bug in Postgres since lots of people do in fact\n> run web servers configured to open a new connection for every page. But this\n> wouldn't happen to be a Windows server would it? Perhaps the networking code\n> in that port doesn't do the right thing in this case?\n> \n> --\n> greg\n> \n>\n",
"msg_date": "Wed, 13 Apr 2005 15:09:03 +1000",
"msg_from": "Slavisa Garic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Many connections lingering"
},
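The pattern Greg and Tom are pointing at, sketched in plain SQL: each agent (or the DBServer on its behalf) keeps a single connection open and wraps every unit of work in its own short transaction, so nothing stays locked between queries, yet no new TCP connection and no TIME_WAIT slot is created per query. Table and column names below are invented:

    BEGIN;
    INSERT INTO agent_results (agent_id, payload) VALUES (42, 'partial result');
    COMMIT;

    BEGIN;
    SELECT payload FROM agent_results WHERE agent_id = 42;
    COMMIT;
    -- Both blocks run over the same connection; each COMMIT releases its locks,
    -- so the long-lived connection does not hold the database up between queries.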
{
"msg_contents": "\nOn Apr 13, 2005, at 1:09 AM, Slavisa Garic wrote:\n\n> This is not a Windows server. Both server and client are the same\n> machine (done for testing purposes) and it is a Fedora RC2 machine.\n> This also happens on debian server and client in which case they were\n> two separate machines.\n>\n> There are thousands (2+) of these waiting around and each one of them\n> dissapears after 50ish seconds. I tried psql command line and\n> monitored that connection in netstats. After I did a graceful exit\n> (\\quit) the connection changed to TIME_WAIT and it was sitting there\n> for around 50 seconds. I thought I could do what you suggested with\n> having one connection and making each query a full BEGIN/QUERY/COMMIT\n> transaction but I thought I could avoid that :).\n\n\nIf you do a bit of searching on TIME_WAIT you'll find this is a common \nTCP/IP related problem, but the behavior is within the specs of the \nprotocol. I don't know how to do it on Linux, but you should be able \nto change TIME_WAIT to a shorter value. For the archives, here is a \npointer on changing TIME_WAIT on Windows:\n\nhttp://www.winguides.com/registry/display.php/878/\n\n\nJohn DeSoi, Ph.D.\nhttp://pgedit.com/\nPower Tools for PostgreSQL\n\n",
"msg_date": "Wed, 13 Apr 2005 08:31:13 -0400",
"msg_from": "John DeSoi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Many connections lingering"
},
{
"msg_contents": "If there are potentially hundreds of clients at a time, then you may be\nrunning into the maximum connection limit.\n\nIn postgresql.conf, there is a max_connections setting which IIRC\ndefaults to 100. If you try to open more concurrent connections to the\nbackend than that, you will get a connection refused.\n\nIf your DB is fairly gnarly and your performance needs are minimal it\nshould be safe to increase max_connections. An alternative approach\nwould be to add some kind of database broker program. Instead of each\nagent connecting directly to the database, they could pass their data to\na broker, which could then implement connection pooling.\n\n-- Mark Lewis\n\nOn Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:\n> This is a serious problem for me as there are multiple users using our\n> software on our server and I would want to avoid having connections\n> open for a long time. In the scenario mentioned below I haven't\n> explained the magnitute of the communications happening between Agents\n> and DBServer. There could possibly be 100 or more Agents per\n> experiment, per user running on remote machines at the same time,\n> hence we need short transactions/pgsql connections. Agents need a\n> reliable connection because failure to connect could mean a loss of\n> computation results that were gathered over long periods of time.\n\n\n",
"msg_date": "Wed, 13 Apr 2005 09:42:29 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Many connections lingering"
},
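A quick check on whether the refused connections line up with the limit Mark describes, using only standard commands:

    SHOW max_connections;
    SELECT count(*) AS current_backends FROM pg_stat_activity;
    -- If current_backends is routinely close to max_connections while the
    -- agents are running, either raise max_connections in postgresql.conf
    -- (postmaster restart required) or put a pooling broker in front.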
{
"msg_contents": "Slavisa Garic wrote:\n> This is a serious problem for me as there are multiple users using our\n> software on our server and I would want to avoid having connections\n> open for a long time. In the scenario mentioned below I haven't\n> explained the magnitute of the communications happening between Agents\n> and DBServer. There could possibly be 100 or more Agents per\n> experiment, per user running on remote machines at the same time,\n> hence we need short transactions/pgsql connections. Agents need a\n> reliable connection because failure to connect could mean a loss of\n> computation results that were gathered over long periods of time.\n\nPlenty of others have discussed the technical reasons why you are seeing \nthese connection issues. If you find it difficult to change your way of \nworking, you might find the pgpool connection-pooling project useful:\n http://pgpool.projects.postgresql.org/\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 13 Apr 2005 18:36:55 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Many connections lingering"
},
{
"msg_contents": "I have a performance problem; I'd like any suggestions on where to continue\ninvestigation. \n\nA set of insert-only processes seems to serialize itself. :-(\n\nThe processes appear to be blocked on disk IO, and probably the table drive,\nrather than the pg_xlog drive.\n\nEach process is inserting a block of 10K rows into a table.\nI'm guessing they are \"serialized\" because one process by itself takes 15-20\nsecs; running ten processes in parallel averages 100-150 secs (each), with\nelapsed (wall) time of 150-200 secs. \n\nPolling pg_locks shows each process has (been granted) only the locks you would\nexpect. I RARELY see an Exclusive lock on an index, and then only on one index\nat a time.\n\nA sample from pg_locks:\n\nTABLE/INDEX GRANTED PID MODE\nm_reason t 7340 AccessShare\nmessage t 7340 AccessShare\nmessage t 7340 RowExclusive\npk_message t 7340 AccessShare\ntmp_message t 7340 AccessShare\n(\"m_reason\" is a one-row lookup table; see INSERT cmd below).\n\n--------------------------\nThe query plan is quite reasonable (see below).\n\nOn a side note, this is the first app I've had to deal with that is sweet to\npg_xlog, but hammers the drive bearing the base table (3x the traffic).\n\n\"log_executor_stats\" for a sample insert look reasonable (except the \"elapsed\"!)\n\n! system usage stats:\n! 308.591728 elapsed 3.480000 user 1.270000 system sec\n! [4.000000 user 1.390000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 18212/15 [19002/418] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 9675 read, 8781 written, buffer hit rate = 97.66%\n! Local blocks: 504 read, 64 written, buffer hit rate = 0.00%\n! 
Direct blocks: 0 read, 0 written\n\nSummarized \"ps\" output for the above backend process, sampled every 5 secs,\nshows it is 94% in the 'D' state, 3% in the 'S' state.\n\n================\n== BACKGROUND ==\n================\n\n**SOFTWARE\n- PG 7.4.6, RedHat 8.\n\n----------------------------------\n**HARDWARE\nXeon 2x2 2.4GHz 2GB RAM\n4 x 73GB SCSI; pg_xlog and base on separate drives.\n\n----------------------------------\n**APPLICATION\n\nSix machines post batches of 10K messages to the PG db server.\nMachine #nn generates its ID keys as \"nn00000000001\"::bigint etc.\n\nEach process runs:\n- \"COPY tmp_message FROM STDIN\" loads its own one-use TEMP table.\n- \" INSERT INTO message \n SELECT tmp.* FROM tmp_message AS tmp\n JOIN m_reason ON m_reason.name = tmp.reason\n LEFT JOIN message USING (ID) WHERE message.ID is null\n (check required because crash recovery logic requires idempotent insert)\n \"DROP TABLE tmp_message\" --- call me paranoid, this is 7.4\n\nThe COPY step time is almost constant when #processes varies from 1 to 10.\n\n----------------------------------\n**POSTGRES\npg_autovacuum is running with default parameters.\n\nNon-default GUC values:\ncheckpoint_segments = 512\ndefault_statistics_target = 200\neffective_cache_size = 500000\nlog_min_duration_statement = 1000\nmax_fsm_pages = 1000000\nmax_fsm_relations = 1000\nrandom_page_cost = 1\nshared_buffers = 10000\nsort_mem = 16384\nstats_block_level = true\nstats_command_string = true\nstats_row_level = true\nvacuum_mem = 65536\nwal_buffers = 2000\n\nWal_buffers and checkpoint_segments look outrageous, \nbut were tuned for another process, that posts batches of 10000 6KB rows\nin a single insert.\n----------------------------------\nTABLE/INDEX STATISTICS\n\n----------------------------------\nMACHINE STATISTICS\n\nps gives the backend process as >98% in (D) state, with <1% CPU.\n\nA \"top\" snapshot:\nCPU states: cpu user nice system irq softirq iowait idle\n total 2.0% 0.0% 0.8% 0.0% 0.0% 96.9% 0.0%\n cpu00 2.5% 0.0% 1.9% 0.0% 0.0% 95.4% 0.0%\n cpu01 1.7% 0.0% 0.1% 0.0% 0.3% 97.6% 0.0%\n cpu02 0.5% 0.0% 0.7% 0.0% 0.0% 98.6% 0.0%\n cpu03 3.1% 0.0% 0.5% 0.0% 0.0% 96.2% 0.0%\nMem: 2061552k av, 2041752k used, 19800k free, 0k shrd, 21020k buff\n\niostat reports that the $PGDATA/base drive is being worked but not overworked.\nThe pg_xlog drive is underworked:\n\n KBPS TPS KBPS TPS KBPS TPS KBPS TPS\n12:30 1 2 763 16 31 8 3336 269\n12:40 5 3 1151 22 5 5 2705 320\n ^pg_xlog^ ^base^\n\nThe base drive has run as much as 10MBPS, 5K TPS.\n----------------------------------\nEXPLAIN ANALYZE output:\nThe plan is eminently reasonable. 
But there's no visible relationship\nbetween the top level \"actual time\" and the \"total runtime\":\n\nNested Loop Left Join\n (cost=0.00..31109.64 rows=9980 width=351)\n (actual time=0.289..2357.346 rows=9980 loops=1)\n Filter: (\"inner\".id IS NULL)\n -> Nested Loop\n (cost=0.00..735.56 rows=9980 width=351)\n (actual time=0.092..1917.677 rows=9980 loops=1)\n Join Filter: ((\"outer\".name)::text = (\"inner\".reason)::text)\n -> Seq Scan on m_reason r\n (cost=0.00..1.01 rows=1 width=12)\n (actual time=0.008..0.050 rows=1 loops=1)\n -> Seq Scan on tmp_message t\n (cost=0.00..609.80 rows=9980 width=355)\n (actual time=0.067..1756.617 rows=9980 loops=1)\n -> Index Scan using pk_message on message\n (cost=0.00..3.02 rows=1 width=8)\n (actual time=0.014..0.014 rows=0 loops=9980)\n Index Cond: (\"outer\".id = message.id)\nTotal runtime: 737401.687 ms\n\n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW\n\n",
"msg_date": "Wed, 13 Apr 2005 15:16:54 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Strange serialization problem"
},
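The idempotent batch-load pattern described above (COPY into a one-use temp table, then insert only the rows whose keys are not already present) condenses into the following sketch; the m_reason join is omitted and the names only approximate the poster's schema.

```sql
-- Simplified sketch of the idempotent 10K-row batch load described above.
BEGIN;

CREATE TEMP TABLE tmp_message (LIKE message);   -- the LIKE clause is available from 7.4

COPY tmp_message FROM STDIN;                    -- load the batch

INSERT INTO message
SELECT tmp.*
FROM   tmp_message AS tmp
LEFT JOIN message USING (id)
WHERE  message.id IS NULL;                      -- skip rows already present (crash recovery)

DROP TABLE tmp_message;
COMMIT;
```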
{
"msg_contents": "Hi,\n\nThis looks very interesting. I'll give it a better look and see if the\nperformance penalties pgpool brings are not substantial in which case\nthis program could be very helpful,\n\nThanks for the hint,\nSlavisa\n\nOn 4/14/05, Richard Huxton <[email protected]> wrote:\n> Slavisa Garic wrote:\n> > This is a serious problem for me as there are multiple users using our\n> > software on our server and I would want to avoid having connections\n> > open for a long time. In the scenario mentioned below I haven't\n> > explained the magnitute of the communications happening between Agents\n> > and DBServer. There could possibly be 100 or more Agents per\n> > experiment, per user running on remote machines at the same time,\n> > hence we need short transactions/pgsql connections. Agents need a\n> > reliable connection because failure to connect could mean a loss of\n> > computation results that were gathered over long periods of time.\n> \n> Plenty of others have discussed the technical reasons why you are seeing\n> these connection issues. If you find it difficult to change your way of\n> working, you might find the pgpool connection-pooling project useful:\n> http://pgpool.projects.postgresql.org/\n> \n> HTH\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n",
"msg_date": "Thu, 14 Apr 2005 11:11:51 +1000",
"msg_from": "Slavisa Garic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [NOVICE] Many connections lingering"
},
{
"msg_contents": "HI Mark,\n\nMy DBServer module already serves as a broker. At the moment it opens\na new connection for every incoming Agent connection. I did it this\nway because I wanted to leave synchronisation to PGSQL. I might have\nto modify it a bit and use a shared, single connection for all agents.\nI guess that is not a bad option I just have to ensure that the code\nis not below par :),\n\nAlso thank for the postgresql.conf hint, that limit was pretty low on\nour server so this might help a bit,\n\nRegards,\nSlavisa\n\nOn 4/14/05, Mark Lewis <[email protected]> wrote:\n> If there are potentially hundreds of clients at a time, then you may be\n> running into the maximum connection limit.\n> \n> In postgresql.conf, there is a max_connections setting which IIRC\n> defaults to 100. If you try to open more concurrent connections to the\n> backend than that, you will get a connection refused.\n> \n> If your DB is fairly gnarly and your performance needs are minimal it\n> should be safe to increase max_connections. An alternative approach\n> would be to add some kind of database broker program. Instead of each\n> agent connecting directly to the database, they could pass their data to\n> a broker, which could then implement connection pooling.\n> \n> -- Mark Lewis\n> \n> On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:\n> > This is a serious problem for me as there are multiple users using our\n> > software on our server and I would want to avoid having connections\n> > open for a long time. In the scenario mentioned below I haven't\n> > explained the magnitute of the communications happening between Agents\n> > and DBServer. There could possibly be 100 or more Agents per\n> > experiment, per user running on remote machines at the same time,\n> > hence we need short transactions/pgsql connections. Agents need a\n> > reliable connection because failure to connect could mean a loss of\n> > computation results that were gathered over long periods of time.\n> \n>\n",
"msg_date": "Thu, 14 Apr 2005 11:15:51 +1000",
"msg_from": "Slavisa Garic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Many connections lingering"
}
] |
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Matthew Nuzum\n> Sent: 12 April 2005 17:25\n> To: [email protected]\n> Subject: [PERFORM] performance hit for replication\n> \n> So, my question is this: My server currently works great, \n> performance wise.\n> I need to add fail-over capability, but I'm afraid that introducing a\n> stressful task such as replication will hurt my server's \n> performance. Is\n> there any foundation to my fears? I don't need to replicate \n> the archived log\n> data because I can easily restore that in a separate step \n> from the nightly\n> backup if disaster occurs. Also, my database load is largely \n> selects. My\n> application works great with PostgreSQL 7.3 and 7.4, but I'm \n> currently using\n> 7.3. \n\nIf it's possible to upgrade to 8.0 then perhaps you could make use of\nPITR and continuously ship log files to your standby machine.\n\nhttp://www.postgresql.org/docs/8.0/interactive/backup-online.html\n\nI can't help further with this as I've yet to give it a go myself, but\nothers here may have tried it.\n\nRegards, Dave.\n",
"msg_date": "Wed, 13 Apr 2005 09:02:56 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance hit for replication"
}
] |
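The PITR-based standby that Dave points to is driven mostly by one setting on the primary plus a base backup; a hedged sketch (paths and label are placeholders, and the details follow the 8.0 documentation rather than anything tested in this thread):

```
# postgresql.conf on the primary (8.0+); the archive path is a placeholder
archive_command = 'cp "%p" /mnt/standby_archive/"%f"'

# With archiving active, take a base backup for the standby:
#   SELECT pg_start_backup('nightly');
#   ... copy $PGDATA to the standby machine ...
#   SELECT pg_stop_backup();
# The standby then replays the shipped WAL via restore_command in its recovery.conf.
```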
[
{
"msg_contents": "I must be missing something important, because I am just not seeing why this\nquery is slower on a 4 processor 8 gig machine running redhat AS4.\n\nThe SQL:\nexplain analyze SELECT a.clientnum, a.associateid, a.associatenum,\na.lastname, a.firstname, jt.value AS jobtitle, l.name AS \"location\",\nl.locationid AS mainlocationid, l.divisionid, l.regionid, l.districtid,\n(a.lastname::text || ', '::text) || a.firstname::text AS assocname,\na.isactive, a.isdeleted\n FROM tblassociate a\n left JOIN tbljobtitle jt ON a.jobtitleid = jt.id AND jt.clientnum::text =\na.clientnum::text AND 1 = jt.presentationid\n JOIN tbllocation l ON a.locationid = l.locationid AND l.clientnum::text =\na.clientnum::text\nwhere a.clientnum = 'SAKS'; \n\nMachine 1 my desktop:\n\"Merge Join (cost=74970.51..75975.46 rows=8244 width=113) (actual\ntime=5141.000..6363.000 rows=160593 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=656.22..657.11 rows=354 width=49) (actual\ntime=16.000..16.000 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..641.23 rows=354 width=49) (actual time=0.000..0.000 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=74314.29..74791.06 rows=190710 width=75) (actual\ntime=5125.000..5316.000 rows=160594 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Right Join (cost=0.00..52366.50 rows=190710 width=75)\n(actual time=16.000..1973.000 rows=177041 loops=1)\"\n\" Merge Cond: (((\"outer\".clientnum)::text =\n(\"inner\".clientnum)::text) AND (\"outer\".id = \"inner\".jobtitleid))\"\n\" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt\n(cost=0.00..244.75 rows=6622 width=37) (actual time=0.000..16.000 rows=5690\nloops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Index Scan using ix_tblassoc_jobtitleid on tblassociate a\n(cost=0.00..50523.83 rows=190710 width=53) (actual time=0.000..643.000\nrows=177041 loops=1)\"\n\" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n\"Total runtime: 6719.000 ms\"\n\nTest Linux machine:\n\"Merge Join (cost=48126.04..49173.57 rows=15409 width=113) (actual\ntime=11832.165..12678.025 rows=160593 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=807.64..808.75 rows=443 width=49) (actual\ntime=2.418..2.692 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..788.17 rows=443 width=49) (actual time=0.036..1.677 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=47318.40..47758.44 rows=176015 width=75) (actual\ntime=11829.660..12002.746 rows=160594 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Right Join (cost=24825.80..27512.71 rows=176015\nwidth=75) (actual time=8743.848..9750.775 rows=177041 loops=1)\"\n\" Merge Cond: (((\"outer\".clientnum)::text =\n\"inner\".\"?column10?\") AND (\"outer\".id = \"inner\".jobtitleid))\"\n\" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt\n(cost=0.00..239.76 rows=6604 width=37) (actual time=0.016..11.323 rows=5690\nloops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Sort (cost=24825.80..25265.84 rows=176015 width=53)\n(actual time=8729.320..8945.292 rows=177041 loops=1)\"\n\" Sort Key: (a.clientnum)::text, a.jobtitleid\"\n\" -> Index Scan using ix_associate_clientnum on\ntblassociate a (cost=0.00..9490.20 rows=176015 width=53) (actual\ntime=0.036..1071.867 rows=177041 loops=1)\"\n\" Index Cond: 
((clientnum)::text = 'SAKS'::text)\"\n\"Total runtime: 12802.019 ms\"\n\nI tried to remove the left outer thinking it would speed it up, and it used\na seq search on tblassoc and ran 2 times slower.\n\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n\n",
"msg_date": "Wed, 13 Apr 2005 10:41:08 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "speed of querry?"
},
{
"msg_contents": "Joel Fradkin wrote:\n> I must be missing something important, because I am just not seeing why this\n> query is slower on a 4 processor 8 gig machine running redhat AS4.\n\nWell, the 4 processors aren't going to help with a single query. \nHowever, assuming the configurations for both machines are comparable, \nyou shouldn't be seeing a doubling in query-time.\n\nI have, however, spotted something very strange towards the bottom of \neach explain:\n\n> Machine 1 my desktop:\n\n> \" -> Merge Right Join (cost=0.00..52366.50 rows=190710 width=75)\n> (actual time=16.000..1973.000 rows=177041 loops=1)\"\n> \" Merge Cond: (((\"outer\".clientnum)::text =\n> (\"inner\".clientnum)::text) AND (\"outer\".id = \"inner\".jobtitleid))\"\n\n\n> Test Linux machine:\n\n> \" -> Merge Right Join (cost=24825.80..27512.71 rows=176015\n> width=75) (actual time=8743.848..9750.775 rows=177041 loops=1)\"\n> \" Merge Cond: (((\"outer\".clientnum)::text =\n> \"inner\".\"?column10?\") AND (\"outer\".id = \"inner\".jobtitleid))\"\n\nIn the first, we match outer.clientnum to inner.clientnum, in the second \nit's \"?column10?\" - are you sure the query was identical in each case. \nI'm guessing the unidentified column in query 2 is the reason for the \nsort a couple of lines below it, which seems to take up a large chunk of \ntime.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 13 Apr 2005 18:30:35 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "are you sure the query was identical in each case. \n\nI just ran a second time same results ensuring that the query is the same.\nNot sure why it is doing a column10 thing. Any ideas what to look for?\nBoth data bases are a restore from the same backup file.\n\nOne is running redhat the other XP, I believe both are the same version of\npostgres except for the different platform (8.0.1 I am pretty sure).\n\nI just spent the morning with Dell hoping for some explanation from them.\nThey said I had to have the database on the same type of OS and hardware for\nthem to think the issue was hardware. They are escalating to the software\ngroup.\n\nI did a default Redhat install so it very well may be an issue with my lack\nof knowledge on Linux.\n\nHe did mention by default the Perc4 do cache, so I may need to visit the\ndata center to set the percs to not cache.\n\n--\n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Wed, 13 Apr 2005 14:29:51 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> In the first, we match outer.clientnum to inner.clientnum, in the second \n> it's \"?column10?\" - are you sure the query was identical in each case. \n> I'm guessing the unidentified column in query 2 is the reason for the \n> sort a couple of lines below it, which seems to take up a large chunk of \n> time.\n\nThe \"?column10?\" is because EXPLAIN isn't excessively bright about\nreporting references to outputs of lower plan nodes. (Gotta fix that\nsometime.) The real point here is that the planner thought that a scan\nplus sort would be faster than scanning an index that exactly matched\nthe sort order the Merge Join needed ... and it was wrong :-(\n\nSo this is just the usual sort of question of \"are your stats up to\ndate, maybe you need to increase stats targets, or else play with\nrandom_page_cost, etc\" ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 02:47:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry? "
},
{
"msg_contents": "I have done a vacuum and a vacuum analyze.\nI can try again for kicks, but it is not in production so no new records are\nadded and vacuum analyze is ran after any mods to the indexes.\n\nI am still pursuing Dell on why the monster box is so much slower then the\ndesktop as well.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, April 14, 2005 1:47 AM\nTo: Richard Huxton\nCc: Joel Fradkin; PostgreSQL Perform\nSubject: Re: [PERFORM] speed of querry? \n\nRichard Huxton <[email protected]> writes:\n> In the first, we match outer.clientnum to inner.clientnum, in the second \n> it's \"?column10?\" - are you sure the query was identical in each case. \n> I'm guessing the unidentified column in query 2 is the reason for the \n> sort a couple of lines below it, which seems to take up a large chunk of \n> time.\n\nThe \"?column10?\" is because EXPLAIN isn't excessively bright about\nreporting references to outputs of lower plan nodes. (Gotta fix that\nsometime.) The real point here is that the planner thought that a scan\nplus sort would be faster than scanning an index that exactly matched\nthe sort order the Merge Join needed ... and it was wrong :-(\n\nSo this is just the usual sort of question of \"are your stats up to\ndate, maybe you need to increase stats targets, or else play with\nrandom_page_cost, etc\" ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 14 Apr 2005 09:52:21 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "On 4/14/05, Joel Fradkin <[email protected]> wrote:\n> I have done a vacuum and a vacuum analyze.\n> I can try again for kicks, but it is not in production so no new records are\n> added and vacuum analyze is ran after any mods to the indexes.\n> \n> I am still pursuing Dell on why the monster box is so much slower then the\n> desktop as well.\n\nFirst thing: Do something like:\nALTER TABLE tbljobtitle ALTER COLUMN clientnum SET STATISTICS 50;\nmake it for each column used, make it even higher than 50 for\nmany-values columns.\nTHEN make VACUUM ANALYZE;\n\nThen do a query couple of times (EXPLAIN ANALYZE also :)), then do:\nSET enable_seqscan = off;\nand rerun the query -- if it was significantly faster, you will want to do:\nSET enable_seqscan = on;\nand tweak:\nSET random_page_cost = 2.1;\n...and play with values. When you reach the random_page_cost which\nsuits your data, you will want to put it into postgresql.conf\n\nI am sorry if it is already known to you. :) Also, it is a rather simplistic\napproach to tuning PostgreSQL but it is worth doing. Especially the\nstatistics part. :)\n\n Regards,\n Dawid\n",
"msg_date": "Thu, 14 Apr 2005 16:20:36 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "Josh from commandprompt.com had me alter the config to have\ndefault_statistics_target = 250\n\nIs this somehow related to what your asking me to do?\nI did do an analyze, but have only ran the viw a few times.\n\nJoel Fradkin\n \n-----Original Message-----\nFrom: Dawid Kuroczko [mailto:[email protected]] \nSent: Thursday, April 14, 2005 9:21 AM\nTo: Joel Fradkin\nCc: PostgreSQL Perform\nSubject: Re: [PERFORM] speed of querry?\n\nOn 4/14/05, Joel Fradkin <[email protected]> wrote:\n> I have done a vacuum and a vacuum analyze.\n> I can try again for kicks, but it is not in production so no new records\nare\n> added and vacuum analyze is ran after any mods to the indexes.\n> \n> I am still pursuing Dell on why the monster box is so much slower then the\n> desktop as well.\n\nFirst thing: Do something like:\nALTER TABLE tbljobtitle ALTER COLUMN clientnum SET STATISTICS 50;\nmake it for each column used, make it even higher than 50 for\nmany-values columns.\nTHEN make VACUUM ANALYZE;\n\nThen do a query couple of times (EXPLAIN ANALYZE also :)), then do:\nSET enable_seqscan = off;\nand rerun the query -- if it was significantly faster, you will want to do:\nSET enable_seqscan = on;\nand tweak:\nSET random_page_cost = 2.1;\n...and play with values. When you reach the random_page_cost which\nsuits your data, you will want to put it into postgresql.conf\n\nI am sorry if it is already known to you. :) Also, it is a rather\nsimplistic\napproach to tuning PostgreSQL but it is worth doing. Especially the\nstatistics part. :)\n\n Regards,\n Dawid\n\n",
"msg_date": "Thu, 14 Apr 2005 10:45:48 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "On 4/14/05, Joel Fradkin <[email protected]> wrote:\n> Josh from commandprompt.com had me alter the config to have\n> default_statistics_target = 250\n> \n> Is this somehow related to what your asking me to do?\n> I did do an analyze, but have only ran the viw a few times.\n\nwell, he did suggest the right thing. However this parameter\napplies to newly created tables, so either recreate the tables\nor do the ALTER TABLE I've sent eariler.\n\nBasically it tells postgres how many values should it keep for\nstatistics per column. The config default_statistics_target\nis the default (= used when creating table) and ALTER... is\na way to change it later.\n\nThe more statistics PostgreSQL has means it can better\npredict how much data will be returned -- and this directly\nleads to a choice how to handle the data (order in which\ntables should be read, whether to use index or not, which\nalgorithm use for join, etc.). The more statistics, the better\nPostgreSQL is able to predict. The more statistics, the slower\nplanner is able to do the analysis. So you have to find\na value which will be as much as is needed to accurately\npredict the results but not more! PostgreSQL's default of\n10 is a bit conservative, hence the suggestions to increase\nit. :) [ and so is random_page_cost or some people have\nfound that in their cases it is beneficial to reduce the value,\neven as much as below 2. ]\n\nHope this clairifies things a bit.\n\n Regards,\n Dawid\n",
"msg_date": "Thu, 14 Apr 2005 18:04:02 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "Dawid Kuroczko <[email protected]> writes:\n> Basically it tells postgres how many values should it keep for\n> statistics per column. The config default_statistics_target\n> is the default (= used when creating table) and ALTER... is\n> a way to change it later.\n\nNot quite. default_statistics_target is the value used by ANALYZE for\nany column that hasn't had an explicit ALTER SET STATISTICS done on it.\nSo you can change default_statistics_target and that will affect\nexisting tables.\n\n(It used to work the way you are saying, but that was a few releases\nback...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 12:20:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry? "
},
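Tom's distinction (the default applies at ANALYZE time unless a column has its own target) is easy to verify from the catalogs; a sketch using one of Joel's tables, where attstattarget = -1 means "fall back to default_statistics_target":

```sql
-- Raise the target, re-analyze, and inspect what was stored (sketch).
SET default_statistics_target = 250;    -- session-level here; or set it in postgresql.conf
ANALYZE tblassociate;

-- A per-column override takes precedence over the default:
ALTER TABLE tblassociate ALTER COLUMN clientnum SET STATISTICS 250;
ANALYZE tblassociate;

-- -1 here means "use default_statistics_target when ANALYZE runs"
SELECT attname, attstattarget
FROM   pg_attribute
WHERE  attrelid = 'tblassociate'::regclass AND attnum > 0;

-- What ANALYZE actually kept for the planner:
SELECT attname, n_distinct, most_common_vals
FROM   pg_stats
WHERE  tablename = 'tblassociate';
```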
{
"msg_contents": "I did as described to alter table and did not see any difference in speed.\nI am trying to undo the symbolic link to the data array and set it up on\nraid 5 disks in the machine just to test if there is an issue with the\nconfig of the raid 10 array or a problem with the controller.\n\nI am kinda lame at Linux so not sure I have got it yet still testing.\nStill kind puzzled why it chose tow different option, but one is running\nwindows version of postgres, so maybe that has something to do with it.\n\nThe data bases and configs (as far as page cost) are the same.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, April 14, 2005 11:21 AM\nTo: Dawid Kuroczko\nCc: Joel Fradkin; PERFORM\nSubject: Re: [PERFORM] speed of querry? \n\nDawid Kuroczko <[email protected]> writes:\n> Basically it tells postgres how many values should it keep for\n> statistics per column. The config default_statistics_target\n> is the default (= used when creating table) and ALTER... is\n> a way to change it later.\n\nNot quite. default_statistics_target is the value used by ANALYZE for\nany column that hasn't had an explicit ALTER SET STATISTICS done on it.\nSo you can change default_statistics_target and that will affect\nexisting tables.\n\n(It used to work the way you are saying, but that was a few releases\nback...)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 14 Apr 2005 12:38:58 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "Well so far I have 1.5 hours with commandpromt.com and 8 + hours with Dell\nand have not seen any changes in the speed of my query.\n\nI did move the data base to the raid 5 drives and did see a 1 second\nimprovement from 13 secs to 12 secs (keep in mind it runs in 6 on the\noptiplex).\n\nThe dell guy ran Bonie and found 40meg per sec read/write speed for the\narrays.\n\nHe also installed version 8.0.2 (went fine on AS4 he had to uninstall 8.0.1\nfirst).\n\nHe is going to get a 6650 in his test lab to see what he can fugure out.\nI will say both commandprompt.com and Dell have been very professional and I\nam impressed at the level of support available for Redhat from Dell and\npostgres. As always I still feel this list has been my most useful asset,\nbut I am glad there are folks to call on. I am trying to go live soon and\nneed to get this resolved.\n\nI told the guy from Dell it makes no sense that a windows 2.4 single proc\nwith 750 meg of ram can go faster then a 4 proc (3.ghz) 8 gig machine.\nBoth databases were restored from the same file. Same view etc.\n\nConfig files are set the same except for amount of cached ram, although\nCommandprompt.com had me adjust a few items that should help going into\nproduction, put planning stuff is basicly the same.\n\nThis view returns in 3 secs on MSSQL server on the optiplex (750 meg 2.4\nbox); and 6 secs using postgres on windows and 12-13 secs on the 4 processor\nbox. Needless to say I am very frustrated. Maybe Dell will turn up something\ntesting in their lab. It took a bit of perseverance to get to the right guy\nat Dell (the first guy actually told me to load it all on a like machine and\nif it was very much slower on my original they would pursue it otherwise it\nwas not an issue. I was like the machine cost 30K you going to send me one\nto test that. But seriously I am open to trying anything (loading AS3, using\npostgres 7.4)? The fellow at Dell does not think it is a hardware problem,\nso if it is Linux (could very well be, but he seemed very sharp and did not\ncome up with anything yet) or postgres config (again Josh at\ncommandprompt.com was very sharp) then what do I do now to isolate the\nissue? At least they are loading one in the lab (in theory, I cant send them\nmy database, so who knows what they will test). Dell changed the file system\nto ext2 is that going to bite me in the butt? It did not seem to change the\nspeed of my explain analyze.\n\nJoel Fradkin\n \n\nDawid Kuroczko <[email protected]> writes:\n> Basically it tells postgres how many values should it keep for\n> statistics per column. The config default_statistics_target\n> is the default (= used when creating table) and ALTER... is\n> a way to change it later.\n\nNot quite. default_statistics_target is the value used by ANALYZE for\nany column that hasn't had an explicit ALTER SET STATISTICS done on it.\nSo you can change default_statistics_target and that will affect\nexisting tables.\n\n(It used to work the way you are saying, but that was a few releases\nback...)\n\n\t\t\tregards, tom lane\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Thu, 14 Apr 2005 18:01:19 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
}
] |
[
{
"msg_contents": "Hello!\n\nI have a partial index (btree(col) WHERE col > 0) on table2 ('col' contains alot of NULL-values).\n\nThere's also a foreign key on the column pointing to the primary key of table1 (ON UPDATE CASCADE ON DELETE SET NULL). During update/delete, it seems like it cannot use the partial index to find corresponding rows matching the foreign key (doing a full seqscan instead)? \n\nIs there any special reason for not letting the planner use the partial index when appropriate? \n\n\n\n\\d table1\n Table \"public.table1\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer | not null\n text | text |\nIndexes:\n \"table1_pkey\" primary key, btree (id)\n\n\\d table2\n Table \"public.table2\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer | not null\n col | integer |\n value | integer |\nIndexes:\n \"table2_pkey\" primary key, btree (id)\n\n\n\nCREATE INDEX col_part_key ON table2 USING btree(col) WHERE col > 0;\nANALYZE table2;\nEXPLAIN ANALYZE DELETE FROM table2 WHERE col=1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Index Scan using col_part_key on table2 (cost=0.00..6.01 rows=6 width=6) (actual time=0.592..1.324 rows=8 loops=1)\n Index Cond: (col = 1)\n Total runtime: 4.904 ms\n\n\n\nDelete manually WITHOUT foreign key:\n\n\ntest=> begin work;\nBEGIN\nTime: 0.808 ms\ntest=> explain analyze delete from table1 where id=1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using table1_pkey on table1 (cost=0.00..3.01 rows=2 width=6) (actual time=0.312..0.324 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 0.623 ms\n(3 rows)\n\nTime: 3.912 ms\ntest=> explain analyze delete from table2 where col=1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using col_part_key on table2 (cost=0.00..14.70 rows=36 width=6) (actual time=0.338..0.557 rows=8 loops=1)\n Index Cond: (col = 1)\n Total runtime: 0.881 ms\n(3 rows)\n\nTime: 3.802 ms\ntest=> rollback;\nROLLBACK\n\n\n\n\nDelete WITH foreign key:\n\n\ntest=> ALTER TABLE table2 ADD CONSTRAINT col_fkey FOREIGN KEY (col) REFERENCES table1(id) ON UPDATE CASCADE ON DELETE SET NULL;\nALTER TABLE\nTime: 3783.009 ms\n\ntest=> begin work;\nBEGIN\nTime: 1.509 ms\ntest=> explain analyze delete from table1 where id=1;\nrollback;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using table1_pkey on table1 (cost=0.00..3.01 rows=2 width=6) (actual time=0.769..0.781 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 1.027 ms\n(3 rows)\n\nTime: 3458.585 ms\ntest=> rollback;\nROLLBACK\nTime: 1.506 ms\n\n\n/Nichlas\n",
"msg_date": "Wed, 13 Apr 2005 17:45:46 +0200",
"msg_from": "Nichlas =?iso-8859-1?Q?L=F6fdahl?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign keys and partial indexes"
},
{
"msg_contents": "Nichlas =?iso-8859-1?Q?L=F6fdahl?= <[email protected]> writes:\n> I have a partial index (btree(col) WHERE col > 0) on table2 ('col' contains alot of NULL-values).\n\n> There's also a foreign key on the column pointing to the primary key of table1 (ON UPDATE CASCADE ON DELETE SET NULL). During update/delete, it seems like it cannot use the partial index to find corresponding rows matching the foreign key (doing a full seqscan instead)? \n\n> Is there any special reason for not letting the planner use the partial index when appropriate? \n\nIt doesn't know it's appropriate. There's nothing constraining the FK\nto be positive, after all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Apr 2005 12:05:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign keys and partial indexes "
}
] |
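Given Tom's explanation (the RI trigger's lookup is parameterized, so the planner cannot prove the partial-index predicate), the usual workaround is to give the referencing column an ordinary, non-partial index as well; a sketch against the tables above:

```sql
-- The FK action behind ON DELETE SET NULL runs a plain "WHERE col = $1" lookup.
-- With a parameter instead of a literal, the planner cannot prove col > 0,
-- so the partial index is unusable there; a full index on the column is:
CREATE INDEX table2_col_idx ON table2 (col);

-- The partial index can be kept alongside it for application queries that
-- supply a literal value (e.g. WHERE col = 1), where the predicate is provable.
```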
[
{
"msg_contents": "Hi all,\n\nJust wanted everyone to know what we're pulling CVS HEAD nightly so it\ncan be tested in STP now. Let me know if you have any questions.\n\nTests are not automatically run yet, but I hope to remedy that\nshortly.\n\nFor those not familiar with STP and PLM, here are a couple of links:\n\nSTP\n\thttp://www.osdl.org/stp/\n\nPLM\n\thttp://www.osdl.org/plm-cgi/plm\n\nMark\n",
"msg_date": "Wed, 13 Apr 2005 11:11:41 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": true,
"msg_subject": "PLM pulling from CVS nightly for testing in STP"
},
{
"msg_contents": "Mark,\n\n> Just wanted everyone to know what we're pulling CVS HEAD nightly so it\n> can be tested in STP now. Let me know if you have any questions.\n\nWay cool. How do I find the PLM number? How are you nameing these?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 13 Apr 2005 11:35:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PLM pulling from CVS nightly for testing in STP"
},
{
"msg_contents": "On Wed, Apr 13, 2005 at 11:35:36AM -0700, Josh Berkus wrote:\n> Mark,\n> \n> > Just wanted everyone to know what we're pulling CVS HEAD nightly so it\n> > can be tested in STP now. Let me know if you have any questions.\n> \n> Way cool. How do I find the PLM number? How are you nameing these?\n\nThe naming convention I'm using is postgresql-YYYYMMDD, for example\npostgresql-20050413, for the anonymous cvs export from today (April\n13). I have a cronjob that'll do the export at 1AM PST8PDT.\n\nThe search page for the PLM numbers is here:\n\thttps://www.osdl.org/plm-cgi/plm?module=search\n\nor you can use the stpbot on linuxnet.mit.edu#osdl.\n\nMark\n",
"msg_date": "Wed, 13 Apr 2005 13:41:48 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PLM pulling from CVS nightly for testing in STP"
},
{
"msg_contents": "I have dbt-2 tests automatically running against each pull from CVS\nand have started to automatically compile results here:\n\thttp://developer.osdl.org/markw/postgrescvs/\n\nI did start with a bit of a minimalistic approach, so I'm open for any\ncomments, feedback, etc.\n\nMark\n",
"msg_date": "Tue, 19 Apr 2005 15:13:43 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PLM pulling from CVS nightly for testing in STP"
}
] |
[
{
"msg_contents": "Someone (twanger) sent me here from the IRC channel with the following:\n\nI have a query that normally takes 0.150 seconds, but after an insert \ncan take 14 seconds.\n\nHere's the scenario:\n\nRun this query:\n select *\n from cad_part\n left join smart_part using (cannon_part_id)\n where cad_import_id = 91\n order by cad_part_reference_letter, cad_part_id\n\nThe result is returned in about 150ms.\n\nThen I run my import operation which adds 1 new cad_import row, about 30 \nnew cad_part rows, and about 100 new cad_line rows (which aren't \ninvolved in the above query). In this case, the new cad_import row has a \nPK of cad_import_id = 92.\n\nWhen I run the query again (only the where clause changed):\n select *\n from cad_part\n left join smart_part using (cannon_part_id)\n where cad_import_id = 92\n order by cad_part_reference_letter, cad_part_id\n\nit takes about 14 seconds (and has a different plan).\n\nI can repeat the first query (id=91) and it still executes in 150ms and \nthen repeat the second query and in still takes ~14 seconds. \n\nI've found two things that fix this. First, if I run analyze, the second \nquery will take 150ms.\n\nSecond, if I set enable_nestloop to false the second query will use that \nsame plan that the first does and complete in 150ms.\n\nI've posted a bunch of details on my website including the size of the \ntables (all pretty small), both query plans, and some of the schema.\n\nhttp://tom-mack.com/query_details.html\n\nI also just redid the query without the final order by clause with the \nsame results.\n\nSo I guess my question is, am I doing something wrong? did I miss an \nindex or something? is this a bug (a 100x hit for not running analyze \nseems a little severe)? should I just run \"analyze cad_part\" after my \ninserts to that table?\n\nThanks,\n\n--Tom\n\n\n",
"msg_date": "Wed, 13 Apr 2005 16:06:57 -0500",
"msg_from": "Tom Mack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with slow query (caused by improper nestloop?)"
}
] |
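The fix the poster already found is the conventional one: refresh statistics on the freshly grown table right after the import (or let pg_autovacuum do it), so the planner stops assuming that cad_import_id = 92 matches almost no rows. A sketch:

```sql
-- After the import that adds the new cad_import / cad_part rows:
ANALYZE cad_part;                  -- or, narrower: ANALYZE cad_part (cad_import_id);

EXPLAIN ANALYZE
SELECT *
FROM   cad_part
LEFT JOIN smart_part USING (cannon_part_id)
WHERE  cad_import_id = 92
ORDER BY cad_part_reference_letter, cad_part_id;
```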
[
{
"msg_contents": "Hi,\n\nJust wondering... Is Postgresql able to use data present within indexes \nwithout looking up the table data?\n\nTo be more explicit, let's say I have table with two fields a and b. If I \nhave an index on (a,b) and I do a request like \"SELECT b FROM table WHERE \na=x\", will Postgresql use only the index, or will it need to also read the \ntable page for that (those) row(s)?\n\nThere might be a reason why this is not possible (I don't know if the \nindexes have all necessary transaction ID information?) but otherwise this \ncould possibly provide an interesting performance gain for some operations, \nin particular with some types of joins. Or maybe it already does it.\n\nAny hint welcome!\n\nThanks,\n\nJacques.\n\n\n",
"msg_date": "Thu, 14 Apr 2005 11:45:03 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use of data within indexes"
},
{
"msg_contents": "> To be more explicit, let's say I have table with two fields a and b. If \n> I have an index on (a,b) and I do a request like \"SELECT b FROM table \n> WHERE a=x\", will Postgresql use only the index, or will it need to also \n> read the table page for that (those) row(s)?\n\nIt must read the table because of visibility considerations.\n\n> There might be a reason why this is not possible (I don't know if the \n> indexes have all necessary transaction ID information?) but otherwise \n> this could possibly provide an interesting performance gain for some \n> operations, in particular with some types of joins. Or maybe it already \n> does it.\n\nIt's already been thought of :)\n\nThe 4 or so columns that store visibility information are not in the \nindexes, to do so would require a significant write cost.\n\nChris\n",
"msg_date": "Thu, 14 Apr 2005 22:04:24 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of data within indexes"
}
] |
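Christopher's point is visible directly in a plan: selecting only indexed columns still produces an ordinary Index Scan that fetches heap tuples for visibility (true index-only scans were only added in a much later release, 9.2). A small sketch:

```sql
-- Even with both columns in the index, the executor still visits the heap
-- for every match to check tuple visibility.
CREATE TABLE t (a integer, b integer);
CREATE INDEX t_a_b_idx ON t (a, b);
-- ... load data, then ANALYZE t; ...

EXPLAIN ANALYZE SELECT b FROM t WHERE a = 42;
--  Index Scan using t_a_b_idx on t ...      <- heap pages are read as well
```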
[
{
"msg_contents": "I am new to cross references between tables, and I am trying to \nunderstand how they impact performance. From reading the documentation I \nwas under the impression that deffering foreign keys would yield about \nthe same performance as dropping them before a copy, and adding them \nafter. However, I cannot see this in my test case.\n\nI have a table A with an int column ID that references table B column \nID. Table B has about 150k rows, and has an index on B.ID. When trying \nto copy 1 million rows into A, I get the following \\timings:\n\n1) drop FK, copy (200s), add FK (5s)\n2) add FK defferable initially deffered, copy (I aborted after 30min)\n3) add FK defferable initially deffered, begin, copy (200s), commit (I \naborted after 30min)\n\nHow do I explain why test cases 2 and 3 do not come close to case 1? Am \nI missing something obvious?\n\nSince the database I am working on has many FKs, I would rather not have \nto drop/add them when I am loading large data sets.\n\nIf it would help I can write this out in a reproducable scenario. I am \nusing postgresql 7.4.5 at the moment.\n\nSincerely,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Thu, 14 Apr 2005 13:59:52 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign key slows down copy/insert"
},
{
"msg_contents": "> Since the database I am working on has many FKs, I would rather not have to \n> drop/add them when I am loading large data sets.\n\nYou may want to hunt the archives. IIRCC I saw a couple of posts in the \nrecent months about an update you can do to one of the system tables to disable\nthe key checks and then re-enable them after your done with the import.\n\nSincerely,\n\nJoshua D. Drake\n>\n\n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2005 19:20:25 -0700 (PDT)",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "On Apr 14, 2005, at 7:59 AM, Richard van den Berg wrote:\n\n> How do I explain why test cases 2 and 3 do not come close to case 1? \n> Am I missing something obvious?\n\nthere's cost involved with enforcing the FK: if you're indexes can't be \nused then you're doing a boatload of sequence scans to find and lock \nthe referenced rows in the parent tables.\n\nMake sure you have indexes on your FK columns (on *both* tables), and \nthat the data type on both tables is the same.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Wed, 20 Apr 2005 11:27:22 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
}
] |
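Vivek's checklist (index the referencing column too, and make sure both sides have the same type) looks roughly like the sketch below for the A-references-B layout described at the top of the thread; the names are the poster's placeholders.

```sql
-- Referenced side: b.id is already unique/indexed as the primary key.
-- Referencing side: an index helps the ON UPDATE / ON DELETE actions on b:
CREATE INDEX a_id_idx ON a (id);

-- Both columns should have the same type, or the per-row FK lookups
-- may fall back to sequential scans:
SELECT attrelid::regclass AS tab, attname, format_type(atttypid, atttypmod)
FROM   pg_attribute
WHERE  attrelid IN ('a'::regclass, 'b'::regclass)
  AND  attname = 'id';
```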
[
{
"msg_contents": "I am new to cross references between tables, and I am trying to\nunderstand how they impact performance. From reading the documentation I\nwas under the impression that deffering foreign keys would yield about\nthe same performance as dropping them before a copy, and adding them\nafter. However, I cannot see this in my test case.\n\nI have a table A with an int column ID that references table B column\nID. Table B has about 150k rows, and has an index on B.ID. When trying\nto copy 1 million rows into A, I get the following \\timings:\n\n1) drop FK, copy (200s), add FK (5s)\n2) add FK defferable initially deffered, copy (I aborted after 30min)\n3) add FK defferable initially deffered, begin, copy (200s), commit (I\naborted after 30min)\n\nHow do I explain why test cases 2 and 3 do not come close to case 1? Am\nI missing something obvious?\n\nSince the database I am working on has many FKs, I would rather not have\nto drop/add them when I am loading large data sets.\n\nIf it would help I can write this out in a reproducable scenario. I am\nusing postgresql 7.4.5 at the moment.\n\nSincerely,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n\n",
"msg_date": "Thu, 14 Apr 2005 14:21:25 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign key slows down copy/insert"
},
{
"msg_contents": "> I am new to cross references between tables, and I am trying to\n> understand how they impact performance. From reading the documentation I\n> was under the impression that deffering foreign keys would yield about\n> the same performance as dropping them before a copy, and adding them\n> after. However, I cannot see this in my test case.\n\nEven if you defer them, it just defers the check, doesn't eliminate it...\n\n> I have a table A with an int column ID that references table B column\n> ID. Table B has about 150k rows, and has an index on B.ID. When trying\n> to copy 1 million rows into A, I get the following \\timings:\n> \n> 1) drop FK, copy (200s), add FK (5s)\n> 2) add FK defferable initially deffered, copy (I aborted after 30min)\n> 3) add FK defferable initially deffered, begin, copy (200s), commit (I\n> aborted after 30min)\n> \n> How do I explain why test cases 2 and 3 do not come close to case 1? Am\n> I missing something obvious?\n\nDeferring makes no difference to FK checking speed...\n\n> Since the database I am working on has many FKs, I would rather not have\n> to drop/add them when I am loading large data sets.\n\nWell, that's what people do - even pg_dump will restore data and add the \nforeign key afterward...\n\nChris\n",
"msg_date": "Thu, 14 Apr 2005 22:07:39 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
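The drop-then-re-add cycle Chris refers to (the same thing pg_dump does on restore) can be wrapped around the bulk load in one transaction; a sketch for the A-references-B example, with an illustrative constraint name and file path:

```sql
BEGIN;
ALTER TABLE a DROP CONSTRAINT a_id_fkey;          -- constraint name is illustrative
COPY a FROM '/path/to/batch.copy';                -- the bulk load
ALTER TABLE a ADD CONSTRAINT a_id_fkey
      FOREIGN KEY (id) REFERENCES b (id);         -- re-checks the whole table in one pass
COMMIT;
```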
{
"msg_contents": "Hello Chris,\n\nThanks for your answers.\n\nChristopher Kings-Lynne wrote:\n> Deferring makes no difference to FK checking speed...\n\nBut why then is the speed acceptable if I copy and then manually add the \nFK? Is the check done by the FK so much different from when it is done \nautomatically using an active deffered FK?\n\n> Well, that's what people do - even pg_dump will restore data and add the \n> foreign key afterward...\n\nIf I have to go this route, is there a way of automatically dropping and \nre-adding FKs? I can probably query pg_constraints and drop the \nappropriate ones, but how do I re-add them after the copy/insert?\n\nSincerely,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Thu, 14 Apr 2005 16:26:30 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": ">> Deferring makes no difference to FK checking speed...\n> \n> \n> But why then is the speed acceptable if I copy and then manually add the \n> FK? Is the check done by the FK so much different from when it is done \n> automatically using an active deffered FK?\n\nYeah I think it uses a different query formulation... Actually I only \nassume that deferred fk's don't use that - I guess your experiment \nproves that.\n\n>> Well, that's what people do - even pg_dump will restore data and add \n>> the foreign key afterward...\n> \n> If I have to go this route, is there a way of automatically dropping and \n> re-adding FKs? I can probably query pg_constraints and drop the \n> appropriate ones, but how do I re-add them after the copy/insert?\n\nActually, you can just \"disable\" them if you want to be really dirty :) \n You have to be confident that the data you're inserting does satisfy \nthe FK, however otherwise you can end up with invalid data.\n\nTo see how to do that, try pg_dump with --disable-triggers mode enabled. \n Just do a data-only dump.\n\nChris\n",
"msg_date": "Thu, 14 Apr 2005 22:35:00 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "\nOn Thu, 14 Apr 2005, Richard van den Berg wrote:\n\n> Hello Chris,\n>\n> Thanks for your answers.\n>\n> Christopher Kings-Lynne wrote:\n> > Deferring makes no difference to FK checking speed...\n>\n> But why then is the speed acceptable if I copy and then manually add the\n> FK? Is the check done by the FK so much different from when it is done\n> automatically using an active deffered FK?\n\nYes, because currently the check done by the FK on an insert type activity\nis a per-row inserted check while the check done when adding a FK acts on\nthe entire table in a go which allows better optimization of that case\n(while generally being worse on small number inserts especially on large\ntables). At some point, if we can work out how to do all the semantics\nproperly, it'd probably be possible to replace the insert type check with\na per-statement check which would be somewhere in between. That requires\naccess to the affected rows inside the trigger which I don't believe is\navailable currently.\n",
"msg_date": "Thu, 14 Apr 2005 07:45:33 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> ... At some point, if we can work out how to do all the semantics\n> properly, it'd probably be possible to replace the insert type check with\n> a per-statement check which would be somewhere in between. That requires\n> access to the affected rows inside the trigger which I don't believe is\n> available currently.\n\nNot necessarily. It occurs to me that maybe what we need is \"lossy\nstorage\" of the trigger events. If we could detect that the queue of\npending checks for a particular FK is getting large, we could discard\nthe whole queue and replace it with one entry that says \"run the\nwholesale check again when we are ready to fire triggers\". I'm not\nsure how to detect this efficiently, though --- the trigger manager\ndoesn't presently know anything about FKs being different from\nany other kind of trigger.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 11:05:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>> But why then is the speed acceptable if I copy and then manually add \n>> the FK? Is the check done by the FK so much different from when it is \n>> done automatically using an active deffered FK?\n> \n> Yeah I think it uses a different query formulation... Actually I only \n> assume that deferred fk's don't use that - I guess your experiment \n> proves that.\n\nIn my tests deferred or not deferred makes no difference in speed. I am \nstill quite surprised by how huge the difference is.. this makes FKs \nquite unusable when added a lot of data to a table.\n\n\n> Actually, you can just \"disable\" them if you want to be really dirty :) \n\nThanks for the pointer. I got this from the archives:\n\n------------------------\nupdate pg_class set reltriggers=0 where relname = 'YOUR_TABLE_NAME';\n\nto enable them after you are done, do\n\nupdate pg_class set reltriggers = count(*) from pg_trigger where \npg_class.oid=tgrelid and relname='YOUR_TABLE_NAME';\n------------------------\n\nI assume the re-enabling will cause an error when the copy/insert added \ndata that does not satisfy the FK. In that case I'll indeed end up with \ninvalid data, but at least I will know about it.\n\nThanks,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Thu, 14 Apr 2005 17:08:22 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
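The re-enable statement quoted above is usually circulated in a subselect form (an aggregate directly in the SET list is at best version-dependent); a hedged restatement, superuser-only and specific to the 7.x/8.0-era catalogs:

```sql
-- Disable all triggers (including the FK's RI triggers) on one table:
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'your_table_name';

-- Re-enable by restoring the real trigger count:
UPDATE pg_class
SET    reltriggers = (SELECT count(*) FROM pg_trigger
                      WHERE pg_trigger.tgrelid = pg_class.oid)
WHERE  relname = 'your_table_name';

-- Note: this bypasses the FK check entirely (see the warnings that follow);
-- later releases added ALTER TABLE ... DISABLE/ENABLE TRIGGER ALL for this.
```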
{
"msg_contents": "> Thanks for the pointer. I got this from the archives:\n> \n> ------------------------\n> update pg_class set reltriggers=0 where relname = 'YOUR_TABLE_NAME';\n> \n> to enable them after you are done, do\n> \n> update pg_class set reltriggers = count(*) from pg_trigger where \n> pg_class.oid=tgrelid and relname='YOUR_TABLE_NAME';\n> ------------------------\n> \n> I assume the re-enabling will cause an error when the copy/insert added \n> data that does not satisfy the FK. In that case I'll indeed end up with \n> invalid data, but at least I will know about it.\n\nNo it certainly won't warn you. You have _avoided_ the check entirely. \n That's why I was warning you...\n\nIf you wanted to be really careful, you could:\n\nbeing;\nlock tables for writes...\nturn off triggers\ninsert\ndelete where rows don't match fk constraint\nturn on triggers\ncommit;\n\nChris\n",
"msg_date": "Thu, 14 Apr 2005 23:13:59 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
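The "delete where rows don't match fk constraint" step in Chris's outline amounts to an anti-join against the referenced table; a sketch for the A(id)-references-B(id) example from the start of the thread (NULLs are skipped because the FK does not constrain them):

```sql
-- Report the orphans first:
SELECT a.*
FROM   a LEFT JOIN b ON b.id = a.id
WHERE  a.id IS NOT NULL AND b.id IS NULL;

-- ...or remove them before re-enabling the triggers:
DELETE FROM a
WHERE  a.id IS NOT NULL
  AND  NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id);
```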
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> No it certainly won't warn you. You have _avoided_ the check entirely. \n> That's why I was warning you...\n\nI figured as much when I realized it was just a simple table update. I \nwas thinking more of a DB2 style \"set integrity\" command.\n\n> If you wanted to be really careful, you could:\n\nSo I will be re-checking my own FKs. That's not really what I'd expect \nfrom a FK.\n\nMy problem with this really is that in my database it is hard to predict \nwhich inserts will be huge (and thus need FKs dissabled), so I would \nhave to code it around all inserts. Instead I can code my own integirty \nlogic and avoid using FKs all together.\n\nThanks,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n Have you visited our new DNA Portal?\n-------------------------------------------\n",
"msg_date": "Thu, 14 Apr 2005 17:25:36 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "> My problem with this really is that in my database it is hard to predict \n> which inserts will be huge (and thus need FKs dissabled), so I would \n> have to code it around all inserts. Instead I can code my own integirty \n> logic and avoid using FKs all together.\n\nJust drop the fk and re-add it, until postgres gets more smarts.\n\nChris\n",
"msg_date": "Thu, 14 Apr 2005 23:28:11 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n> No it certainly won't warn you. You have _avoided_ the check entirely. \n> That's why I was warning you...\n\n> If you wanted to be really careful, you could:\n\nProbably the better bet is to drop and re-add the FK constraint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 11:28:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert "
},
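A sketch of the drop/re-add approach Chris and Tom suggest, with illustrative table, column and constraint names; unlike the trigger hack, re-adding the constraint re-validates the whole table in one pass, so bad rows are reported rather than silently kept:

------------------------
ALTER TABLE a DROP CONSTRAINT a_b_id_fkey;

COPY a FROM '/tmp/a.dat';

ALTER TABLE a
  ADD CONSTRAINT a_b_id_fkey FOREIGN KEY (b_id) REFERENCES b (id);
------------------------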
{
"msg_contents": "On Thu, 14 Apr 2005, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > ... At some point, if we can work out how to do all the semantics\n> > properly, it'd probably be possible to replace the insert type check with\n> > a per-statement check which would be somewhere in between. That requires\n> > access to the affected rows inside the trigger which I don't believe is\n> > available currently.\n>\n> Not necessarily. It occurs to me that maybe what we need is \"lossy\n> storage\" of the trigger events. If we could detect that the queue of\n> pending checks for a particular FK is getting large, we could discard\n> the whole queue and replace it with one entry that says \"run the\n> wholesale check again when we are ready to fire triggers\". I'm not\n\nYeah, but there's a potentially large middle ground where neither our\ncurrent plan nor check the entire table is particularly good for that we\nmight be able to handle better. It'd be nice to also fall back to check\nthe entire table for even larger changes.\n",
"msg_date": "Thu, 14 Apr 2005 09:11:41 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert "
},
{
"msg_contents": "\nAbout the foreign key performance:\n\nMaybe foreign key checks could be delayed into the COMMIT phase.\nIn that position, you could check, that there are lots of foreign key \nchecks\nfor each foreign key pending, and do the foreign key check for an area\nor for the whole table, if it is faster.\n\nI have heard, that the database must be in consistent state after COMMIT,\nbut it does not have necessarily to be okay inside a transaction.\n\n1. COMMIT wanted\n2. If there are lots of foreign key checks pending, do either an area \nforeign key check\n(join result must be 0 rows), or a full table join.\n3. If all foreign key checks are okay, complete the COMMIT operation.\n4. If a foreign key check fails, go into the ROLLBACK NEEDED state.\n\nMaybe Tom Lane meant the same.\n\nset option delayed_foreign_keys=true;\nBEGIN;\ninsert 1000 rows.\nCOMMIT;\n\nRegards,\nMarko Ristola\n\nChristopher Kings-Lynne wrote:\n\n>> My problem with this really is that in my database it is hard to \n>> predict which inserts will be huge (and thus need FKs dissabled), so \n>> I would have to code it around all inserts. Instead I can code my own \n>> integirty logic and avoid using FKs all together.\n>\n>\n> Just drop the fk and re-add it, until postgres gets more smarts.\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n",
"msg_date": "Thu, 14 Apr 2005 19:26:41 +0300",
"msg_from": "Marko Ristola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
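The "full table join" check Marko describes can be sketched as a single anti-join that must come back empty for the constraint to hold; this is only an illustration of the idea (the per-row FK triggers in the versions discussed here do not work this way), and the names are invented:

------------------------
-- whole-table FK validation: any row returned is a violation
SELECT a.*
  FROM a
  LEFT JOIN b ON b.id = a.b_id
 WHERE a.b_id IS NOT NULL
   AND b.id IS NULL;
------------------------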
{
"msg_contents": "\n> I have a table A with an int column ID that references table B column\n> ID. Table B has about 150k rows, and has an index on B.ID. When trying\n> to copy 1 million rows into A, I get the following \\timings:\n\n\tYou're using 7.4.5. It's possible that you have a type mismatch in your \nforeign keys which prevents use of the index on B.\n\tFirst of all, be really sure it's THAT foreign key, ie. do your COPY with \nonly ONE foreign key at a time if you have several, and see which one is \nthe killer.\n\n\tThen, supposing it's the column in A which REFERENCE's B(id) :\n\n\tSELECT id FROM A LIMIT 1;\n\t(check type)\n\n\tSELECT id FROM B LIMIT 1;\n\t(check type)\n\n\tEXPLAIN ANALYZE the following :\n\n\tSELECT * FROM B WHERE id = (SELECT id FROM A LIMIT 1);\n\n\tIt should use the index. Does it ?\n\n",
"msg_date": "Thu, 14 Apr 2005 19:22:19 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "PFC wrote:\n> You're using 7.4.5. It's possible that you have a type mismatch in \n> your foreign keys which prevents use of the index on B.\n\nI read about this pothole and made damn sure the types match. (Actually, \nI kinda hoped that was the problem, it would have been an easy fix.)\n\n> First of all, be really sure it's THAT foreign key, ie. do your COPY \n> with only ONE foreign key at a time if you have several, and see which \n> one is the killer.\n\nI took exactly this route, and the first FK I tried already hit the \njackpot. The real table had 4 FKs.\n\n> EXPLAIN ANALYZE the following :\n> \n> SELECT * FROM B WHERE id = (SELECT id FROM A LIMIT 1);\n> \n> It should use the index. Does it ?\n\nIt sure looks like it:\n\nIndex Scan using ix_B on B (cost=0.04..3.06 rows=1 width=329) (actual \ntime=93.824..93.826 rows=1 loops=1)\n Index Cond: (id = $0)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual \ntime=15.128..15.129 rows=1 loops=1)\n -> Seq Scan on A (cost=0.00..47569.70 rows=1135570 \nwidth=4) (actual time=15.121..15.121 rows=1 loops=1)\n Total runtime: 94.109 ms\n\nThe real problem seems to be what Chris and Stephen pointed out: even \nthough the FK check is deferred, it is done on a per-row bases. With 1M \nrows, this just takes forever.\n\nThanks for the help.\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Fri, 15 Apr 2005 10:14:28 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "\n> Index Scan using ix_B on B (cost=0.04..3.06 rows=1 width=329) (actual \n> time=93.824..93.826 rows=1 loops=1)\n> Index Cond: (id = $0)\n> InitPlan\n> -> Limit (cost=0.00..0.04 rows=1 width=4) (actual \n> time=15.128..15.129 rows=1 loops=1)\n> -> Seq Scan on A (cost=0.00..47569.70 rows=1135570 \n> width=4) (actual time=15.121..15.121 rows=1 loops=1)\n> Total runtime: 94.109 ms\n\n\t94 ms for an index scan ?\n\tthis look really slow...\n\n\twas the index in the RAM cache ? does it fit ? is it faster the second \ntime ? If it's still that slow, something somewhere is severely screwed.\n\n\tB has 150K rows you say, so everything about B should fit in RAM, and you \nshould get 0.2 ms for an index scan, not 90 ms !\n\tTry this :\n\n\tLocate the files on disk which are involved in table B (table + indexes) \nlooking at the system catalogs\n\tLook at the size of the files. Is the index severely bloated ? REINDEX ? \nDROP/Recreate the index ?\n\tLoad them into the ram cache (just cat files | wc -b several times until \nit's almost instantaneous)\n\tRetry your query and your COPY\n\n\tI know it's stupid... but it's a lot faster to load an index in the cache \nby plainly reading the file rather than accessing it randomly.\n\t(even though, with this number of rows, it should not be THAT slow !)\n\n",
"msg_date": "Fri, 15 Apr 2005 12:22:52 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
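A hedged sketch of the checks PFC suggests, assuming 7.4/8.0-era catalogs; relation sizes can be read from pg_class, the underlying file lives under $PGDATA/base/<database oid>/<relfilenode>, and all names and paths here are illustrative:

------------------------
-- rough on-disk size of the table and its index (relpages are 8 kB blocks)
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('b', 'ix_b');

-- find the file behind the index, then warm the OS cache from a shell:
SELECT relfilenode FROM pg_class WHERE relname = 'ix_b';
--   cat $PGDATA/base/<db_oid>/<relfilenode> > /dev/null

REINDEX INDEX ix_b;    -- if the index turns out to be bloated
------------------------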
{
"msg_contents": "PFC wrote:\n> 94 ms for an index scan ?\n> this look really slow...\n\nThat seems to be network latency. My psql client is connecting over \nethernet to the database server. Retrying the command gives very \ndifferent values, as low as 20ms. That 94ms was the highest I've seen. \nRunning the same command locally (via a Unix socket) yields 2.5 ms every \ntime.\n\nAm I correct is assuming that the timings are calculated locally by psql \non my client, thus including network latency?\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Fri, 15 Apr 2005 13:22:43 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "> Am I correct is assuming that the timings are calculated locally by psql \n> on my client, thus including network latency?\n\nNo explain analyze is done on the server...\n\nChris\n",
"msg_date": "Fri, 15 Apr 2005 19:36:56 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> No explain analyze is done on the server...\n\nYes, but the psql \\timing is calculated on the client, right? That is \nthe value that PFC was refering to.\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Fri, 15 Apr 2005 14:44:47 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
},
{
"msg_contents": "Richard van den Berg <[email protected]> writes:\n> Christopher Kings-Lynne wrote:\n>> No explain analyze is done on the server...\n\n> Yes, but the psql \\timing is calculated on the client, right? That is \n> the value that PFC was refering to.\n\nYou didn't show us any \\timing. The 94.109 ms figure is all server-side.\n\nAs an example:\n\nregression=# \\timing\nTiming is on.\nregression=# explain analyze select * from tenk1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=244) (actual time=0.050..149.615 rows=10000 loops=1)\n Total runtime: 188.518 ms\n(2 rows)\n\nTime: 210.885 ms\nregression=#\n\nHere, 188.5 is at the server, 210.8 is at the client. The difference is\nnot all network delay, either --- parse/plan overhead is in there too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 09:55:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key slows down copy/insert "
},
{
"msg_contents": "Tom Lane wrote:\n> You didn't show us any \\timing. The 94.109 ms figure is all server-side.\n\nWhoop, my mistake. I had been retesting without the explain, just the \nquery. I re-run the explain analyze a few times, and it only reports \n90ms the first time. After that it reports 2ms even over the network \n(the \\timing on those are about 50ms which includes the network latency).\n\nThanks,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Fri, 15 Apr 2005 16:10:04 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key slows down copy/insert"
}
] |
[
{
"msg_contents": "\nOur vendor is trying to sell us on an Intel SRCS16 SATA raid controller\ninstead of the 3ware one.\n\nPoking around it seems this does come with Linux drivers and there is a\nbattery backup option. So it doesn't seem to be completely insane.\n\nAnyone have any experience with these controllers?\n\nI'm also wondering about whether I'm better off with one of these SATA raid\ncontrollers or just going with SCSI drives.\n\n-- \ngreg\n\n",
"msg_date": "14 Apr 2005 10:54:45 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Greg,\n\nI posted this link under a different thread (the $7k server thread). It is\na very good read on why SCSI is better for servers than ATA. I didn't note\nbias, though it is from a drive manufacturer. YMMV. There is an\ninteresting, though dated appendix on different manufacturers' drive\ncharacteristics.\n\nhttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\nEnjoy,\n\nRick\n\[email protected] wrote on 04/14/2005 09:54:45 AM:\n\n>\n> Our vendor is trying to sell us on an Intel SRCS16 SATA raid controller\n> instead of the 3ware one.\n>\n> Poking around it seems this does come with Linux drivers and there is a\n> battery backup option. So it doesn't seem to be completely insane.\n>\n> Anyone have any experience with these controllers?\n>\n> I'm also wondering about whether I'm better off with one of these SATA\nraid\n> controllers or just going with SCSI drives.\n>\n> --\n> greg\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Thu, 14 Apr 2005 11:22:15 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "I have read a large chunk of this, and I would highly recommend it to\nanyone who has been participating in the drive discussions. It is\nmost informative!!\n\nAlex Turner\nnetEconomist\n\nOn 4/14/05, [email protected] <[email protected]> wrote:\n> Greg,\n> \n> I posted this link under a different thread (the $7k server thread). It is\n> a very good read on why SCSI is better for servers than ATA. I didn't note\n> bias, though it is from a drive manufacturer. YMMV. There is an\n> interesting, though dated appendix on different manufacturers' drive\n> characteristics.\n> \n> http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> \n> Enjoy,\n> \n> Rick\n> \n> [email protected] wrote on 04/14/2005 09:54:45 AM:\n> \n> >\n> > Our vendor is trying to sell us on an Intel SRCS16 SATA raid controller\n> > instead of the 3ware one.\n> >\n> > Poking around it seems this does come with Linux drivers and there is a\n> > battery backup option. So it doesn't seem to be completely insane.\n> >\n> > Anyone have any experience with these controllers?\n> >\n> > I'm also wondering about whether I'm better off with one of these SATA\n> raid\n> > controllers or just going with SCSI drives.\n> >\n> > --\n> > greg\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n",
"msg_date": "Thu, 14 Apr 2005 13:01:30 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "I have put together a little head to head performance of a 15k SCSI,\n10k SCSI 10K SATA w/TCQ, 10K SATA wo/TCQ and 7.2K SATA drive\ncomparison at storage review\n\nhttp://www.storagereview.com/php/benchmark/compare_rtg_2001.php?typeID=10&testbedID=3&osID=4&raidconfigID=1&numDrives=1&devID_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5=248&devCnt=6\n\nIt does illustrate some of the weaknesses of SATA drives, but all in\nall the Raptor drives put on a good show.\n\nAlex Turner\nnetEconomist\n\nOn 4/14/05, Alex Turner <[email protected]> wrote:\n> I have read a large chunk of this, and I would highly recommend it to\n> anyone who has been participating in the drive discussions. It is\n> most informative!!\n> \n> Alex Turner\n> netEconomist\n> \n> On 4/14/05, [email protected] <[email protected]> wrote:\n> > Greg,\n> >\n> > I posted this link under a different thread (the $7k server thread). It is\n> > a very good read on why SCSI is better for servers than ATA. I didn't note\n> > bias, though it is from a drive manufacturer. YMMV. There is an\n> > interesting, though dated appendix on different manufacturers' drive\n> > characteristics.\n> >\n> > http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> >\n> > Enjoy,\n> >\n> > Rick\n> >\n> > [email protected] wrote on 04/14/2005 09:54:45 AM:\n> >\n> > >\n> > > Our vendor is trying to sell us on an Intel SRCS16 SATA raid controller\n> > > instead of the 3ware one.\n> > >\n> > > Poking around it seems this does come with Linux drivers and there is a\n> > > battery backup option. So it doesn't seem to be completely insane.\n> > >\n> > > Anyone have any experience with these controllers?\n> > >\n> > > I'm also wondering about whether I'm better off with one of these SATA\n> > raid\n> > > controllers or just going with SCSI drives.\n> > >\n> > > --\n> > > greg\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 8: explain analyze is your friend\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n> >\n>\n",
"msg_date": "Thu, 14 Apr 2005 13:13:41 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Nice research Alex.\r\n\r\nYour data strongly support the information in the paper. Your SCSI drives\r\nblew away the others in all of the server benchmarks. They're only\r\nmarginally better in desktop use.\r\n\r\nI do find it somewhat amazing that a 15K SCSI 320 drive isn't going to help\r\nme play Unreal Tournament much faster. That's okay. I suck at it anyway.\r\nMy kid has never lost to me. She enjoys seeing daddy as a bloody smear and\r\nbouncing body parts anyway. It promotes togetherness.\r\n\r\nHere's a quote from the paper:\r\n\r\n\"[SCSI] interfaces support multiple initiators or hosts. The\r\ndrive must keep track of separate sets of information for each\r\nhost to which it is attached, e.g., maintaining the processor\r\npointer sets for multiple initiators and tagged commands.\r\nThe capability of SCSI/FC to efficiently process commands\r\nand tasks in parallel has also resulted in a higher overhead\r\n“kernel” structure for the firmware.\"\r\n\r\nHas anyone ever seen a system with multiple hosts or initiators on a SCSI\r\nbus? Seems like it would be a very cool thing in an SMP architecture, but\r\nI've not seen an example implemented.\r\n\r\nRick\r\n\r\nAlex Turner <[email protected]> wrote on 04/14/2005 12:13:41 PM:\r\n\r\n> I have put together a little head to head performance of a 15k SCSI,\r\n> 10k SCSI 10K SATA w/TCQ, 10K SATA wo/TCQ and 7.2K SATA drive\r\n> comparison at storage review\r\n>\r\n> http://www.storagereview.com/php/benchmark/compare_rtg_2001.php?\r\n>\r\ntypeID=10&testbedID=3&osID=4&raidconfigID=1&numDrives=1&devID_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5=248&devCnt=6\r\n\r\n>\r\n> It does illustrate some of the weaknesses of SATA drives, but all in\r\n> all the Raptor drives put on a good show.\r\n>\r\n> Alex Turner\r\n> netEconomist\r\n>\r\n> On 4/14/05, Alex Turner <[email protected]> wrote:\r\n> > I have read a large chunk of this, and I would highly recommend it to\r\n> > anyone who has been participating in the drive discussions. It is\r\n> > most informative!!\r\n> >\r\n> > Alex Turner\r\n> > netEconomist\r\n> >\r\n> > On 4/14/05, [email protected]\r\n> <[email protected]> wrote:\r\n> > > Greg,\r\n> > >\r\n> > > I posted this link under a different thread (the $7k server\r\n> thread). It is\r\n> > > a very good read on why SCSI is better for servers than ATA. I\r\n> didn't note\r\n> > > bias, though it is from a drive manufacturer. YMMV. There is an\r\n> > > interesting, though dated appendix on different manufacturers' drive\r\n> > > characteristics.\r\n> > >\r\n> > > http://www.seagate.\r\n>\r\ncom/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\r\n\r\n> > >\r\n> > > Enjoy,\r\n> > >\r\n> > > Rick\r\n> > >\r\n> > > [email protected] wrote on 04/14/2005 09:54:45\r\nAM:\r\n> > >\r\n> > > >\r\n> > > > Our vendor is trying to sell us on an Intel SRCS16 SATA raid\r\ncontroller\r\n> > > > instead of the 3ware one.\r\n> > > >\r\n> > > > Poking around it seems this does come with Linux drivers and there\r\nis a\r\n> > > > battery backup option. 
So it doesn't seem to be completely insane.\r\n> > > >\r\n> > > > Anyone have any experience with these controllers?\r\n> > > >\r\n> > > > I'm also wondering about whether I'm better off with one of these\r\nSATA\r\n> > > raid\r\n> > > > controllers or just going with SCSI drives.\r\n> > > >\r\n> > > > --\r\n> > > > greg\r\n> > > >\r\n> > > >\r\n> > > > ---------------------------(end of\r\nbroadcast)---------------------------\r\n> > > > TIP 8: explain analyze is your friend\r\n> > >\r\n> > > ---------------------------(end of\r\nbroadcast)---------------------------\r\n> > > TIP 9: the planner will ignore your desire to choose an index scan if\r\nyour\r\n> > > joining column's datatypes do not match\r\n> > >\r\n> >",
"msg_date": "Thu, 14 Apr 2005 12:49:17 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "[email protected] wrote:\n> Greg,\n> \n> I posted this link under a different thread (the $7k server thread). It is\n> a very good read on why SCSI is better for servers than ATA. I didn't note\n> bias, though it is from a drive manufacturer. YMMV. There is an\n> interesting, though dated appendix on different manufacturers' drive\n> characteristics.\n> \n> http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\nI have read this and it is an _excellent_ read about disk drives. The\nbottom line is that the SCSI/IDE distinctions is more of an indicator of\nthe drive, rather than the main feature of the drive. The main feature\nis that certain drives are Enterprise Storage and are designed for high\nreliability and speed, while Personal Server drives are designed for low\ncost. The IDE/SCSI issue is only an indicator of this.\n\nThere are a lot more variabilities between these two types of drives\nthan I knew. I recommend it for anyone who is choosing drives for a\nsystem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 May 2005 22:26:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
}
] |
[
{
"msg_contents": "Imagine a system in \"furious activity\" with two (2) process regularly occuring\n\nProcess One: Looooong read (or write). Takes 20ms to do seek, latency, and \n stream off. Runs over and over. \nProcess Two: Single block read ( or write ). Typical database row access. \n Optimally, could be submillisecond. happens more or less randomly. \n\n\nLet's say process one starts, and then process two. Assume, for sake of this discussion, \nthat P2's block lies w/in P1's swath. (But doesn't have to...)\n\nNow, everytime process two has to wait at LEAST 20ms to complete. In a queue-reordering\nsystem, it could be a lot faster. And me, looking for disk service times on P2, keep\nwondering \"why does a single diskblock read keep taking >20ms?\"\n\n\nSoooo....it doesn't need to be \"a read\" or \"a write\". It doesn't need to be \"furious activity\"\n(two processes is not furious, even for a single user desktop.) This is not a \"corner case\", \nand while it doesn't take into account kernel/drivecache/UBC buffering issues, I think it\nshines a light on why command re-ordering might be useful. <shrug> \n\nYMMV. \n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Kevin Brown\nSent: Thursday, April 14, 2005 4:36 AM\nTo: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nGreg Stark wrote:\n\n\n> I think you're being misled by analyzing the write case.\n> \n> Consider the read case. When a user process requests a block and that \n> read makes its way down to the driver level, the driver can't just put \n> it aside and wait until it's convenient. It has to go ahead and issue \n> the read right away.\n\nWell, strictly speaking it doesn't *have* to. It could delay for a couple of milliseconds to see if other requests come in, and then issue the read if none do. If there are already other requests being fulfilled, then it'll schedule the request in question just like the rest.\n\n> In the 10ms or so that it takes to seek to perform that read\n> *nothing* gets done. If the driver receives more read or write \n> requests it just has to sit on them and wait. 10ms is a lifetime for a \n> computer. In that time dozens of other processes could have been \n> scheduled and issued reads of their own.\n\nThis is true, but now you're talking about a situation where the system goes from an essentially idle state to one of furious activity. In other words, it's a corner case that I strongly suspect isn't typical in situations where SCSI has historically made a big difference.\n\nOnce the first request has been fulfilled, the driver can now schedule the rest of the queued-up requests in disk-layout order.\n\n\nI really don't see how this is any different between a system that has tagged queueing to the disks and one that doesn't. The only difference is where the queueing happens. In the case of SCSI, the queueing happens on the disks (or at least on the controller). In the case of SATA, the queueing happens in the kernel.\n\nI suppose the tagged queueing setup could begin the head movement and, if another request comes in that requests a block on a cylinder between where the head currently is and where it's going, go ahead and read the block in question. But is that *really* what happens in a tagged queueing system? It's the only major advantage I can see it having.\n\n\n> The same thing would happen if you had lots of processes issuing lots \n> of small fsynced writes all over the place. 
Postgres doesn't really do \n> that though. It sort of does with the WAL logs, but that shouldn't \n> cause a lot of seeking. Perhaps it would mean that having your WAL \n> share a spindle with other parts of the OS would have a bigger penalty \n> on IDE drives than on SCSI drives though?\n\nPerhaps.\n\nBut I rather doubt that has to be a huge penalty, if any. When a process issues an fsync (or even a sync), the kernel doesn't *have* to drop everything it's doing and get to work on it immediately. It could easily gather a few more requests, bundle them up, and then issue them. If there's a lot of disk activity, it's probably smart to do just that. All fsync and sync require is that the caller block until the data hits the disk (from the point of view of the kernel). The specification doesn't require that the kernel act on the calls immediately or write only the blocks referred to by the call in question.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 14 Apr 2005 15:02:27 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "sorry, don't remember whether it's SCSI or SATA II, but IIRC\nthe Areca controllers are just stellar for things. \n\nIf you do get SATA for db stuff..especially multiuser...i still\nhaven't seen anything to indicate an across-the-board primacy\nfor SATA over SCSI. I'd go w/SCSI, or if SATA for $$$ reasons, I'd\nbe sure to have many spindles and RAID 10. \n\nmy 0.02. I'm surely not an expert of any kind. \n\n\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Greg Stark\nSent: Thursday, April 14, 2005 10:55 AM\nTo: [email protected]\nSubject: [PERFORM] Intel SRCS16 SATA raid?\n\n\n\nOur vendor is trying to sell us on an Intel SRCS16 SATA raid controller instead of the 3ware one.\n\nPoking around it seems this does come with Linux drivers and there is a battery backup option. So it doesn't seem to be completely insane.\n\nAnyone have any experience with these controllers?\n\nI'm also wondering about whether I'm better off with one of these SATA raid controllers or just going with SCSI drives.\n\n-- \ngreg\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n",
"msg_date": "Thu, 14 Apr 2005 15:08:32 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "\n>\n>\n>Our vendor is trying to sell us on an Intel SRCS16 SATA raid controller instead of the 3ware one.\n> \n>\nWell I have never even heard of it. 3ware is the defacto authority of \nreasonable SATA RAID. If you were to\ngo with a different brand I would go with LSI. The LSI 150-6 is a nice \ncard with a battery backup option as well.\n\nOh and 3ware has BBU for certain models as well.\n\nSincerely,\n\nJoshua D. Drake\n\n",
"msg_date": "Thu, 14 Apr 2005 09:44:18 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Well I have never even heard of it. 3ware is the defacto authority of \n> reasonable SATA RAID. \n\nno! 3ware was rather early in this business, but there are plenty of \n(IMHO, and some other people's opinion) better alternatives available. \n3ware has good Linux drivers, but the performance of their current \ncontrollers isn't that good.\n\nHave a look at this: http://www.tweakers.net/reviews/557/1\n\nespecially the sequential writes with RAID-5 on this page:\n\nhttp://www.tweakers.net/reviews/557/19\n\nWe have been a long-time user of a 3ware 8506 controller (8 disks, \nRAID-5) and have purchased 2 Areca ARC-1120 now since we weren't \nsatisfied with the performance and the 2TB per array limit...\n\n-mjy\n",
"msg_date": "Fri, 15 Apr 2005 13:16:10 +0200",
"msg_from": "Marinos Yannikos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "No offense to that review, but it was really wasn't that good, and\ndrew bad conclusions from the data. I posted it originaly and\nimmediately regretted it.\n\nSee http://www.tweakers.net/reviews/557/18\n\nAmazingly the controller with 1Gig cache manages a write throughput of\n750MB/sec on a single drive.\n\nquote:\n\"Floating high above the crowd, the ARC-1120 has a perfect view on the\nstruggles of the other adapters. \"\n\nIt's because the adapter has 1Gig of RAM, nothing to do with the RAID\narchitecture, it's clearly caching the entire dataset. The drive\ncan't physicaly run that fast. These guys really don't know what they\nare doing.\n\nCuriously:\nhttp://www.tweakers.net/reviews/557/25\n\nThe 3ware does very well as a data drive for MySQL.\n\nThe size of your cache is going to _directly_ affect RAID 5\nperformance. Put a gig of memory in a 3ware 9500S and benchmark it\nagainst the Areca then.\n\nAlso - folks don't run data paritions on RAID 5 because the write\nspeed is too low. When you look at the results for RAID 10, the 3ware\nleads the pack.\n\nSee also:\nhttp://www20.tomshardware.com/storage/20041227/areca-raid6-06.html\n\nI trust toms hardware a little more to set up a good review to be honest.\n\nThe 3ware trounces the Areca in all IO/sec test.\n\nAlex Turner\nnetEconomist\n\nOn 4/15/05, Marinos Yannikos <[email protected]> wrote:\n> Joshua D. Drake wrote:\n> > Well I have never even heard of it. 3ware is the defacto authority of\n> > reasonable SATA RAID.\n> \n> no! 3ware was rather early in this business, but there are plenty of\n> (IMHO, and some other people's opinion) better alternatives available.\n> 3ware has good Linux drivers, but the performance of their current\n> controllers isn't that good.\n> \n> Have a look at this: http://www.tweakers.net/reviews/557/1\n> \n> especially the sequential writes with RAID-5 on this page:\n> \n> http://www.tweakers.net/reviews/557/19\n> \n> We have been a long-time user of a 3ware 8506 controller (8 disks,\n> RAID-5) and have purchased 2 Areca ARC-1120 now since we weren't\n> satisfied with the performance and the 2TB per array limit...\n> \n> -mjy\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Fri, 15 Apr 2005 10:43:47 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Alex Turner wrote:\n> No offense to that review, but it was really wasn't that good, and\n> drew bad conclusions from the data. I posted it originaly and\n> immediately regretted it.\n> \n> See http://www.tweakers.net/reviews/557/18\n> \n> Amazingly the controller with 1Gig cache manages a write throughput of\n> 750MB/sec on a single drive.\n> \n> quote:\n> \"Floating high above the crowd, the ARC-1120 has a perfect view on the\n> struggles of the other adapters. \"\n> \n> It's because the adapter has 1Gig of RAM, nothing to do with the RAID\n> architecture, it's clearly caching the entire dataset. The drive\n> can't physicaly run that fast. These guys really don't know what they\n> are doing.\n\nPerhaps you didn't read the whole page. It says right at the beginning:\n\n\"Because of its simplicity and short test duration, the ATTO Disk \nBenchmark is used a lot for comparing the 'peformance' of hard disks. \nThe tool measures the sequential transfer rate of a partition using a \ntest length of 32MB at most. Because of this small dataset, ATTO is \nunsuitable for measuring media transfer rates of intelligent \nRAID-adapters which are equipped with cache memory. The smart RAID \nadapters will serve the requested data directly from their cache, as a \nresult of which the results have no relationship to the media transfer \nrates of these cards. For this reason ATTO is an ideal tool to test the \ncache transfer rates of intelligent RAID-adapters.\"\n\nTherefore, the results on this page are valid - they're supposed to show \nthe cache/transfer speed, the dataset is 32MB(!) and should fit in the \ncaches of all cards.\n\n> See also:\n> http://www20.tomshardware.com/storage/20041227/areca-raid6-06.html\n> \n> I trust toms hardware a little more to set up a good review to be honest.\n\nI don't, for many (historical) reasons.\n\n> The 3ware trounces the Areca in all IO/sec test.\n\nMaybe, but with no mention of stripe size and other configuration \ndetails, this is somewhat suspicious. I'll be able to offer benchmarks \nfor the 8506-8 vs. the 1120 shortly (1-2 weeks), if you're interested \n(pg_bench, for example, to be a bit more on-topic).\n\nRegards,\n Marinos\n",
"msg_date": "Fri, 15 Apr 2005 18:16:51 +0200",
"msg_from": "Marinos Yannikos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid? (somewhat OT)"
}
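For the pg_bench comparison Marinos mentions, a minimal invocation of the contrib pgbench tool could look like the following; the scale factor and client count are arbitrary choices here, and the numbers are only comparable if both controllers are tested with identical settings:

------------------------
createdb bench
pgbench -i -s 50 bench          # initialize the test tables
pgbench -c 10 -t 1000 bench     # 10 clients, 1000 transactions each
------------------------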
] |
[
{
"msg_contents": "Hi.\n\nOur professor told us the following story: Oracle. A client issued a\nselective delete statement on a big table. After two days he lost\npatience and pulled the plug. Unfortunately while starting up, oracle\nhad to restore all the deleted rows, which took it another two days. He\nreasoned that one better copies all rows that are not to be deleted in\nanother table drops the original table afterwards. (Concurrency, fks,\nindexes are not the question here). So I wondered how this works in\nPostgreSQL. As I understand it, what's going on is the following:\n\n1. The transaction 45 is started. It is recorded as in-progress.\n2. The rows selected in the delete statement are one by one marked as\nto-be-deleted by txn 45. Among them row 27.\n3. If a concurrently running read committed txn 47 wants to update row\n27, it blocks, awaiting whether txn 45 commits or aborts.\n4.1 When txn 45 commits, it is marked as such.\n5.1 txn 47 can continue, but as row 27 was deleted, it is not affected\nby txn 47's update statement.\n4.2 When txn 45 aborts, it is marked as such. This means the same as not\nbeing marked at all.\n5.2 txn 47 continues and updates row 27.\n\nNow if you pull the plug after 2, at startup, pg will go through the\nin-progress txns and mark them as aborted. That's all the recovery in\nthis case. All rows are still there. O(1).\n\nHow does oracle do that? Has all this something to do with mvcc? Why\ndoes it take oracle so long to recover?\n\nThanks\n\nMarkus\n-- \nMarkus Bertheau <[email protected]>",
"msg_date": "Thu, 14 Apr 2005 17:47:23 +0200",
"msg_from": "Markus Bertheau <[email protected]>",
"msg_from_op": true,
"msg_subject": "recovery after long delete"
},
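The row-marking Markus describes can be watched directly through the xmin/xmax system columns; a small hedged illustration (table name invented) — a deleted-but-uncommitted row still sits in the heap with the deleting transaction's id in xmax, which is why crash recovery has nothing to put back:

------------------------
CREATE TABLE t (id int);
INSERT INTO t VALUES (27);

BEGIN;                          -- this plays the role of txn 45
DELETE FROM t WHERE id = 27;
-- from another session the row is still visible, marked by the deleter:
--   SELECT xmin, xmax, id FROM t;
ROLLBACK;                       -- nothing to undo: the mark is simply ignored
------------------------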
{
"msg_contents": "Markus Bertheau <[email protected]> writes:\n> Now if you pull the plug after 2, at startup, pg will go through the\n> in-progress txns and mark them as aborted. That's all the recovery in\n> this case. All rows are still there. O(1).\n\nRight. (Actually it's O(checkpoint interval), because we have to make\nsure that everything we did since the last checkpoint actually got to\ndisk --- but in principle, there's zero recovery effort.)\n\n> How does oracle do that? Has all this something to do with mvcc? Why\n> does it take oracle so long to recover?\n\nOracle doesn't do MVCC the same way we do. They update rows in place\nand put the previous version of a row into an \"undo log\". If the\ntransaction aborts, they have to go back through the undo log and put\nback the previous version of the row. I'm not real clear on how that\napplies to deletions, but I suppose it's the same deal: cost of undoing\na transaction in Oracle is proportional to the number of rows it\nchanged. There's also the little problem that the space available for\nUNDO logs is limited :-(\n\nAs against which, they don't have to VACUUM. So it's a tradeoff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 12:34:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recovery after long delete "
},
{
"msg_contents": "\nMarkus Bertheau <[email protected]> writes:\n\n> How does oracle do that? Has all this something to do with mvcc? Why\n> does it take oracle so long to recover?\n\nPostgres does \"pessimistic MVCC\" where it keeps the old versions where they\nare in the table. Only after it's committed can they be cleaned up and reused.\nSo aborting is a noop but committing requires additional cleanup (which is put\noff until vacuum runs).\n\nOracle does \"optimistic MVCC\" where it assumes most transactions will commit\nand most transactions will be reading mostly committed data. So it immediately\ndoes all the cleanup for the commit. It stores the old version in separate\nstorage spaces called the rollback segment and redo logs. Committing is a noop\n(almost, there are some details, search for \"delayed block cleanout\") whereas\nrolling back requires copying back all that old data from the redo logs back\nto the table.\n\nEngineering is all about tradeoffs.\n\n-- \ngreg\n\n",
"msg_date": "14 Apr 2005 14:11:15 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recovery after long delete"
}
] |
[
{
"msg_contents": "On 4/14/05, Tom Lane <[email protected]> wrote:\n>\n> That's basically what it comes down to: SCSI lets the disk drive itself\n> do the low-level I/O scheduling whereas the ATA spec prevents the drive\n> from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's\n> possible for the drive to rearrange reads as well as writes --- which\n> AFAICS is just not possible in ATA. (Maybe in the newest spec...)\n>\n> The reason this is so much more of a win than it was when ATA was\n> designed is that in modern drives the kernel has very little clue about\n> the physical geometry of the disk. Variable-size tracks, bad-block\n> sparing, and stuff like that make for a very hard-to-predict mapping\n> from linear sector addresses to actual disk locations. Combine that\n> with the fact that the drive controller can be much smarter than it was\n> twenty years ago, and you can see that the case for doing I/O scheduling\n> in the kernel and not in the drive is pretty weak.\n>\n>\n\nSo if you all were going to choose between two hard drives where:\ndrive A has capacity C and spins at 15K rpms, and\ndrive B has capacity 2 x C and spins at 10K rpms and\nall other features are the same, the price is the same and C is enough\ndisk space which would you choose?\n\nI've noticed that on IDE drives, as the capacity increases the data\ndensity increases and there is a pereceived (I've not measured it)\nperformance increase.\n\nWould the increased data density of the higher capacity drive be of\ngreater benefit than the faster spindle speed of drive A?\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of Elite Content Management System\nView samples of Elite CMS in action by visiting\nhttp://www.followers.net/portfolio/\n\n\n\n",
"msg_date": "Thu, 14 Apr 2005 10:55:42 -0500",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n> So if you all were going to choose between two hard drives where:\n> drive A has capacity C and spins at 15K rpms, and\n> drive B has capacity 2 x C and spins at 10K rpms and\n> all other features are the same, the price is the same and C is enough\n> disk space which would you choose?\n\n> I've noticed that on IDE drives, as the capacity increases the data\n> density increases and there is a pereceived (I've not measured it)\n> performance increase.\n\n> Would the increased data density of the higher capacity drive be of\n> greater benefit than the faster spindle speed of drive A?\n\nDepends how they got the 2x capacity increase. If they got it by\nincreased bit density --- same number of tracks, but more sectors\nper track --- then drive B actually has a higher transfer rate,\nbecause in one rotation it can transfer twice as much data as drive A.\nMore tracks per cylinder (ie, more platters) can also be a speed win\nsince you can touch more data before you have to seek to another\ncylinder. Drive B will lose if the 2x capacity was all from adding\ncylinders (unless its seek-time spec is way better than A's ... which\nis unlikely but not impossible, considering the cylinders are probably\ncloser together).\n\nUsually there's some-of-each involved, so it's hard to make any\ndefinite statement without more facts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Apr 2005 12:44:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K? "
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n\n> drive A has capacity C and spins at 15K rpms, and\n> drive B has capacity 2 x C and spins at 10K rpms and\n> all other features are the same, the price is the same and C is enough\n> disk space which would you choose?\n\nIn this case you always choose the 15k RPM drive, at least for Postgres.\nThe 15kRPM reduces the latency which improves performance when fsyncing\ntransaction commits.\n\nThe real question is whether you choose the single 15kRPM drive or additional\ndrives at 10kRPM... Additional spindles would give a much bigger bandwidth\nimprovement but questionable latency improvement.\n\n> Would the increased data density of the higher capacity drive be of\n> greater benefit than the faster spindle speed of drive A?\n\nactually a 2xC capacity drive probably just has twice as many platters which\nmeans it would perform identically to the C capacity drive. If it has denser\nplatters that might improve performance slightly.\n\n\n-- \ngreg\n\n",
"msg_date": "14 Apr 2005 13:55:10 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
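The latency gap Greg refers to is mostly rotational: average rotational latency is half a revolution, so the arithmetic (illustrative only, ignoring seek time and controller overhead) works out to roughly 1 ms per fsync in favour of the 15k drive:

------------------------
-- average rotational latency = half a revolution, in milliseconds
SELECT 60000.0 / 15000 / 2 AS ms_at_15k,   -- 2.0 ms
       60000.0 / 10000 / 2 AS ms_at_10k;   -- 3.0 ms
------------------------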
{
"msg_contents": "\n> The real question is whether you choose the single 15kRPM drive or \n> additional\n> drives at 10kRPM... Additional spindles would give a much bigger\n\n\tAnd the bonus question.\n\tExpensive fast drives as a RAID for everything, or for the same price \nmany more slower drives (even SATA) so you can put the transaction log, \ntables, indexes all on separate physical drives ? Like put one very \nfrequently used table on its own disk ?\n\tFor the same amount of money which one would be more interesting ?\n",
"msg_date": "Thu, 14 Apr 2005 20:42:26 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "I'm not subscribed to performance at this time. I reviewed the\nthread and owe everything I know about this to Wei Hong whose\nbrilliance exceeds all others :) All misinterpretations are\nmine alone.\n\nI have not reviewed hellerstein's papers posted by neil, but I\nwill.\n\nMy understanding of this issue is at a very high user level.\nIn Illustra SRF functions were not necessarily special functions. \nAll functions could have a cost associated with them, set by the writer of\nthe function in order for the planner to reorder function calls.\nThe stonebraker airplane level example was:\n\tselect ... from ... where f(id) = 3 and expensive_image_function(img)\nThe idea, of course is to weight the expensive function so it was\npushed to the end of the execution.\n\nThe only difference I see with SRFs in Postgres is that you may want\nthe cost represented as one row returned and another weighting representing\nthe number of estimated rows. I think this conclusion has already\nbeen drawn.\n\nIt seems to make sense, if the optimizer can use this information, to\ninclude wild and/or educated guesses for the costs of the SRF.\n\nI'm sure I haven't contributed here anything new, but perhaps \nphrased it differently.\n\nCopy me on replies and I'll participate as I can.\n\n--elein\n\nOn Thu, Apr 14, 2005 at 08:36:38AM +0100, Simon Riggs wrote:\n> Elein,\n> \n> Any chance you could join this discussion on PERFORM ?\n> \n> I understand you did time with Illustra. I thought they had solved the\n> optimizer plug-in issue...how did they do it?\n> \n> Best Regards, Simon Riggs\n> \n> \n> -------- Forwarded Message --------\n> From: Tom Lane <[email protected]>\n> To: Alvaro Herrera <[email protected]>\n> Cc: Josh Berkus <[email protected]>, Michael Fuhr <[email protected]>,\n> \n> Subject: Re: [PERFORM] Functionscan estimates\n> Date: Sat, 09 Apr 2005 00:00:56 -0400\n> Not too many releases ago, there were several columns in pg_proc that\n> were intended to support estimation of the runtime cost and number of\n> result rows of set-returning functions. I believe in fact that these\n> were the remains of Joe Hellerstein's thesis on expensive-function\n> evaluation, and are exactly what he was talking about here:\n> http://archives.postgresql.org/pgsql-hackers/2002-06/msg00085.php\n> \n> But with all due respect to Joe, I think the reason that stuff got\n> trimmed is that it didn't work very well. In most cases it's\n> *hard* to write an estimator for a SRF. Let's see you produce\n> one for dblink() for instance ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n",
"msg_date": "Thu, 14 Apr 2005 10:39:03 -0700",
"msg_from": "[email protected] (elein)",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Re: Functionscan estimates]"
},
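For reference, later PostgreSQL releases (8.3 and up, if I recall correctly — well after this thread) let the function author attach exactly this kind of hint to a function; a hedged sketch with an invented set-returning function:

------------------------
CREATE FUNCTION expensive_srf(n int) RETURNS SETOF int
    LANGUAGE sql
    COST 1000     -- estimated per-call cost, in cpu_operator_cost units
    ROWS 500      -- estimated number of result rows, used by the planner
    AS $$ SELECT generate_series(1, $1) $$;
------------------------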
{
"msg_contents": "On Thu, Apr 14, 2005 at 10:39:03AM -0700, elein wrote:\n\n> All functions could have a cost associated with them, set by the writer of\n> the function in order for the planner to reorder function calls.\n> The stonebraker airplane level example was:\n> \tselect ... from ... where f(id) = 3 and expensive_image_function(img)\n> The idea, of course is to weight the expensive function so it was\n> pushed to the end of the execution.\n\nSo there was only a constant cost associated with the function? No\nestimator function, for example?\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n",
"msg_date": "Thu, 14 Apr 2005 14:58:09 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Functionscan estimates]"
},
{
"msg_contents": "Hmmm. My brain is being jostled and I'm confusing illustra-postgres,\ninformix-postgres and postgresql. Some things had functions and\nsome things had constants and I do not remember which products had\nwhat combination. But probably how they are in postgresql, post\nhellerstein, is how I am remembering.\n\nI can find out for sure, given a little time, by querying old contacts.\nIt would be best if I had a clear question to ask, though.\n\n--elein\n\n\nOn Thu, Apr 14, 2005 at 02:58:09PM -0400, Alvaro Herrera wrote:\n> On Thu, Apr 14, 2005 at 10:39:03AM -0700, elein wrote:\n> \n> > All functions could have a cost associated with them, set by the writer of\n> > the function in order for the planner to reorder function calls.\n> > The stonebraker airplane level example was:\n> > \tselect ... from ... where f(id) = 3 and expensive_image_function(img)\n> > The idea, of course is to weight the expensive function so it was\n> > pushed to the end of the execution.\n> \n> So there was only a constant cost associated with the function? No\n> estimator function, for example?\n> \n> -- \n> Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n> \"If you have nothing to say, maybe you need just the right tool to help you\n> not say it.\" (New York Times, about Microsoft PowerPoint)\n> \n",
"msg_date": "Thu, 14 Apr 2005 13:51:43 -0700",
"msg_from": "[email protected] (elein)",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Re: Functionscan estimates]"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Thursday, April 14, 2005 12:14 PM\n> To: [email protected]\n> Cc: Greg Stark; [email protected];\n> [email protected]\n> Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> \n> \n> I have put together a little head to head performance of a 15k SCSI,\n> 10k SCSI 10K SATA w/TCQ, 10K SATA wo/TCQ and 7.2K SATA drive\n> comparison at storage review\n> \n> http://www.storagereview.com/php/benchmark/compare_rtg_2001.ph\n> p?typeID=10&testbedID=3&osID=4&raidconfigID=1&numDrives=1&devI\n> D_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5\n> =248&devCnt=6\n> \n> It does illustrate some of the weaknesses of SATA drives, but all in\n> all the Raptor drives put on a good show.\n> [...]\n\nI think it's a little misleading that your tests show 0ms seek times\nfor some of the write tests. The environmental test also selects a\nmissing data point as the winner. Besides that, it seems to me that\nseek time is one of the most important features for a DB server, which\nmeans that the SCSI drives are the clear winners and the non-WD SATA\ndrives are the embarrassing losers. Transfer rate is import, but\nperhaps less so because DBs tend to read/write small blocks rather\nthan large files. On the server suite, which seems to me to be the\nmost relevant for DBs, the Atlas 15k spanks the other drives by a\nfairly large margin (especially the lesser SATA drives). When you \nignore the \"consumer app\" benchmarks, I wouldn't be so confident in \nsaying that the Raptors \"put on a good show\".\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Thu, 14 Apr 2005 12:43:45 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Just to clarify these are tests from http://www.storagereview.com, not\nmy own. I guess they couldn't get number for those parts. I think\neveryone understands that a 0ms seek time impossible, and indicates a\nmissing data point.\n\nThanks,\n\nAlex Turner\nnetEconomist\n\nOn 4/14/05, Dave Held <[email protected]> wrote:\n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Thursday, April 14, 2005 12:14 PM\n> > To: [email protected]\n> > Cc: Greg Stark; [email protected];\n> > [email protected]\n> > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> >\n> >\n> > I have put together a little head to head performance of a 15k SCSI,\n> > 10k SCSI 10K SATA w/TCQ, 10K SATA wo/TCQ and 7.2K SATA drive\n> > comparison at storage review\n> >\n> > http://www.storagereview.com/php/benchmark/compare_rtg_2001.ph\n> > p?typeID=10&testbedID=3&osID=4&raidconfigID=1&numDrives=1&devI\n> > D_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5\n> > =248&devCnt=6\n> >\n> > It does illustrate some of the weaknesses of SATA drives, but all in\n> > all the Raptor drives put on a good show.\n> > [...]\n> \n> I think it's a little misleading that your tests show 0ms seek times\n> for some of the write tests. The environmental test also selects a\n> missing data point as the winner. Besides that, it seems to me that\n> seek time is one of the most important features for a DB server, which\n> means that the SCSI drives are the clear winners and the non-WD SATA\n> drives are the embarrassing losers. Transfer rate is import, but\n> perhaps less so because DBs tend to read/write small blocks rather\n> than large files. On the server suite, which seems to me to be the\n> most relevant for DBs, the Atlas 15k spanks the other drives by a\n> fairly large margin (especially the lesser SATA drives). When you\n> ignore the \"consumer app\" benchmarks, I wouldn't be so confident in\n> saying that the Raptors \"put on a good show\".\n> \n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Thu, 14 Apr 2005 18:43:18 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Looking at the numbers, the raptor with TCQ enabled was close or beat\nthe Atlas III 10k drive on most benchmarks.\n\nNaturaly a 15k drive is going to be faster in many areas, but it is\nalso much more expensive. It was only 44% better on the server tests\nthan the raptor with TCQ, but it costs nearly 300% more ($538 cdw.com,\n$180 newegg.com). Note also that the 15k drive was the only drive\nthat kept up with the raptor on raw transfer speed, which is going to\nmatter for WAL.\n\nFor those of us on a budget, a quality controller card with lots of\nRAM is going to be our biggest friend because it can cache writes, and\nimprove performance. The 3ware controllers seem to be universally\nbenchmarked as the best SATA RAID 10 controllers where database\nperformance is concerned. Even the crappy tweakers.net review had the\n3ware as the fastest controller for a MySQL data partition in RAID 10.\n\nThe Raptor drives can be had for as little as $180/ea, which is quite\na good price point considering they can keep up with their SCSI 10k\nRPM counterparts on almost all tests with NCQ enabled (Note that 3ware\ncontrollers _don't_ support NCQ, although they claim their HBA based\nqueueing is 95% as good as NCQ on the drive).\n\nAlex Turner\nnetEconomist\n\nOn 4/14/05, Dave Held <[email protected]> wrote:\n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Thursday, April 14, 2005 12:14 PM\n> > To: [email protected]\n> > Cc: Greg Stark; [email protected];\n> > [email protected]\n> > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> >\n> >\n> > I have put together a little head to head performance of a 15k SCSI,\n> > 10k SCSI 10K SATA w/TCQ, 10K SATA wo/TCQ and 7.2K SATA drive\n> > comparison at storage review\n> >\n> > http://www.storagereview.com/php/benchmark/compare_rtg_2001.ph\n> > p?typeID=10&testbedID=3&osID=4&raidconfigID=1&numDrives=1&devI\n> > D_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5\n> > =248&devCnt=6\n> >\n> > It does illustrate some of the weaknesses of SATA drives, but all in\n> > all the Raptor drives put on a good show.\n> > [...]\n> \n> I think it's a little misleading that your tests show 0ms seek times\n> for some of the write tests. The environmental test also selects a\n> missing data point as the winner. Besides that, it seems to me that\n> seek time is one of the most important features for a DB server, which\n> means that the SCSI drives are the clear winners and the non-WD SATA\n> drives are the embarrassing losers. Transfer rate is import, but\n> perhaps less so because DBs tend to read/write small blocks rather\n> than large files. On the server suite, which seems to me to be the\n> most relevant for DBs, the Atlas 15k spanks the other drives by a\n> fairly large margin (especially the lesser SATA drives). When you\n> ignore the \"consumer app\" benchmarks, I wouldn't be so confident in\n> saying that the Raptors \"put on a good show\".\n> \n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Thu, 14 Apr 2005 19:15:24 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
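The price/performance trade-off discussed above is simple arithmetic, so a tiny sketch may help make it concrete. The $538/$180 prices and the 44% figure come from the message; collapsing "44% better on the server tests" into a single performance number is an assumption made purely for illustration.

```python
# Rough price/performance comparison using the figures quoted above.
atlas_15k = {"price": 538.0, "relative_perf": 1.44}   # ~44% faster on the server tests
raptor_10k = {"price": 180.0, "relative_perf": 1.00}  # baseline

for name, d in (("Atlas 15k", atlas_15k), ("Raptor 10k", raptor_10k)):
    print(f"{name}: {d['relative_perf'] / d['price'] * 1000:.2f} perf units per $1000")

print("price ratio:", round(atlas_15k["price"] / raptor_10k["price"], 2))  # ~3x the cost
print("perf  ratio:", atlas_15k["relative_perf"])                          # ~1.44x the speed
```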
{
"msg_contents": "Alex Turner wrote:\n> Looking at the numbers, the raptor with TCQ enabled was close or beat\n> the Atlas III 10k drive on most benchmarks.\n> \n> Naturaly a 15k drive is going to be faster in many areas, but it is\n> also much more expensive. It was only 44% better on the server tests\n> than the raptor with TCQ, but it costs nearly 300% more ($538 cdw.com,\n> $180 newegg.com).\n\nTrue, but that's a one time expense (300%) for a 44% gain ALL the time. \n '44% better' is nothing to sneeze at. I'd easily pay the price for \nthe gain in a large server env.\n\n-- \nUntil later, Geoffrey\n",
"msg_date": "Thu, 14 Apr 2005 21:38:14 -0400",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Greg Stark [mailto:[email protected]]\n> Sent: Thursday, April 14, 2005 12:55 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> \"Matthew Nuzum\" <[email protected]> writes:\n> \n> > drive A has capacity C and spins at 15K rpms, and\n> > drive B has capacity 2 x C and spins at 10K rpms and\n> > all other features are the same, the price is the same and \n> > C is enough disk space which would you choose?\n> \n> In this case you always choose the 15k RPM drive, at least \n> for Postgres. The 15kRPM reduces the latency which improves\n> performance when fsyncing transaction commits.\n\nI think drive B is clearly the best choice. Matt said \"all\nother features are the same\", including price. I take that to\nmean that the seek time and throughput are also identical.\nHowever, I think it's fairly clear that there is no such pair\nof actual devices. If Matt really meant that they have the same\ncache size, interface, etc, then I would agree with you. The\n15k drive is likely to have the better seek time.\n\n> The real question is whether you choose the single 15kRPM \n> drive or additional drives at 10kRPM... Additional spindles\n> would give a much bigger bandwidth improvement but questionable\n> latency improvement.\n\nUnder the assumption that the seek times and throughput are\nrealistic rather than contrived as in the stated example, I would\nsay the 15k drive is the likely winner. It probably has the\nbetter seek time, and it seems that latency is more important\nthan bandwidth for DB apps.\n\n> > Would the increased data density of the higher capacity drive\n> > be of greater benefit than the faster spindle speed of drive\n> > A?\n> \n> actually a 2xC capacity drive probably just has twice as many \n> platters which means it would perform identically to the C\n> capacity drive. If it has denser platters that might improve\n> performance slightly.\n\nWell, according to the paper referenced by Richard, twice as many\nplatters means that it probably has slightly worse seek time\n(because of the increased mass of the actuator/rw-head). Yet\nanother reason why the smaller drive might be preferable. Of\ncourse, the data density is certainly a factor, as you say. But\nsince the drives are within a factor of 2, it seems likely that\nreal drives would have comparable densities.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Thu, 14 Apr 2005 13:21:13 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
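Since the choice above turns on rotational latency versus seek time, a back-of-the-envelope calculation can make the comparison concrete. Only the spindle speeds come from the example; the seek times below are assumed, roughly era-typical values, not figures from the message.

```python
# Rough access-time comparison for the hypothetical drives A (15k) and B (10k).
# Average rotational latency is half a revolution: 500 / (rpm / 60) milliseconds.
drives = {
    "A (15k rpm)": {"rpm": 15000, "avg_seek_ms": 3.8},   # seek time is an assumption
    "B (10k rpm)": {"rpm": 10000, "avg_seek_ms": 4.9},   # seek time is an assumption
}

for name, d in drives.items():
    rot_latency = 500.0 / (d["rpm"] / 60.0)      # ms, average half rotation
    access = d["avg_seek_ms"] + rot_latency      # ms per random access
    print(f"{name}: rotational latency {rot_latency:.2f} ms, "
          f"access {access:.2f} ms, ~{1000.0 / access:.0f} random IOs/sec")
```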
] |
[
{
"msg_contents": "I've been doing some reading up on this, trying to keep up here, \nand have found out that (experts, just yawn and cover your ears)\n\n1) some SATA drives (just type II, I think?) have a \"Phase Zero\"\n implementation of Tagged Command Queueing (the special sauce\n for SCSI).\n2) This SATA \"TCQ\" is called NCQ and I believe it basically\n allows the disk software itself to do the reordering\n (this is called \"simple\" in TCQ terminology) It does not\n yet allow the TCQ \"head of queue\" command, allowing the\n current tagged request to go to head of queue, which is\n a simple way of manifesting a \"high priority\" request.\n\n3) SATA drives are not yet multi-initiator?\n\nLargely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives\nare likely to whomp SATA II drives for a while yet (read: a\nyear or two) in multiuser PostGres applications. \n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Greg Stark\nSent: Thursday, April 14, 2005 2:04 PM\nTo: Kevin Brown\nCc: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nKevin Brown <[email protected]> writes:\n\n> Greg Stark wrote:\n> \n> \n> > I think you're being misled by analyzing the write case.\n> > \n> > Consider the read case. When a user process requests a block and \n> > that read makes its way down to the driver level, the driver can't \n> > just put it aside and wait until it's convenient. It has to go ahead \n> > and issue the read right away.\n> \n> Well, strictly speaking it doesn't *have* to. It could delay for a \n> couple of milliseconds to see if other requests come in, and then \n> issue the read if none do. If there are already other requests being \n> fulfilled, then it'll schedule the request in question just like the \n> rest.\n\nBut then the cure is worse than the disease. You're basically describing exactly what does happen anyways, only you're delaying more requests than necessary. That intervening time isn't really idle, it's filled with all the requests that were delayed during the previous large seek...\n\n> Once the first request has been fulfilled, the driver can now schedule \n> the rest of the queued-up requests in disk-layout order.\n> \n> I really don't see how this is any different between a system that has \n> tagged queueing to the disks and one that doesn't. The only \n> difference is where the queueing happens.\n\nAnd *when* it happens. Instead of being able to issue requests while a large seek is happening and having some of them satisfied they have to wait until that seek is finished and get acted on during the next large seek.\n\nIf my theory is correct then I would expect bandwidth to be essentially equivalent but the latency on SATA drives to be increased by about 50% of the average seek time. Ie, while a busy SCSI drive can satisfy most requests in about 10ms a busy SATA drive would satisfy most requests in 15ms. (add to that that 10k RPM and 15kRPM SCSI drives have even lower seek times and no such IDE/SATA drives exist...)\n\nIn reality higher latency feeds into a system feedback loop causing your application to run slower causing bandwidth demands to be lower as well. It's often hard to distinguish root causes from symptoms when optimizing complex systems.\n\n-- \ngreg\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Thu, 14 Apr 2005 18:30:29 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
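To make the command-queueing point a little more concrete, here is a toy illustration of why letting the drive or controller reorder outstanding requests cuts head travel. It is not NCQ or TCQ itself, just the basic elevator idea with made-up block numbers.

```python
import random

# Approximate the cost of servicing a queue of requests by total head travel
# (sum of absolute differences between consecutive block numbers).
random.seed(42)
queue = [random.randrange(0, 1_000_000) for _ in range(32)]   # made-up LBAs

def head_travel(order, start=0):
    pos, total = start, 0
    for lba in order:
        total += abs(lba - pos)
        pos = lba
    return total

fifo = head_travel(queue)                # serve strictly in arrival order
reordered = head_travel(sorted(queue))   # serve in block order (one sweep)
print(f"FIFO head travel:      {fifo:,}")
print(f"Reordered head travel: {reordered:,} ({fifo / reordered:.1f}x less movement)")
```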
{
"msg_contents": "If SATA drives don't have the ability to replace SCSI for a multi-user\nPostgres apps, but you needed to save on cost (ALWAYS an issue), \ncould/would you implement SATA for your logs (pg_xlog) and keep the rest \non SCSI?\n\nSteve Poe\n\nMohan, Ross wrote:\n\n>I've been doing some reading up on this, trying to keep up here, \n>and have found out that (experts, just yawn and cover your ears)\n>\n>1) some SATA drives (just type II, I think?) have a \"Phase Zero\"\n> implementation of Tagged Command Queueing (the special sauce\n> for SCSI).\n>2) This SATA \"TCQ\" is called NCQ and I believe it basically\n> allows the disk software itself to do the reordering\n> (this is called \"simple\" in TCQ terminology) It does not\n> yet allow the TCQ \"head of queue\" command, allowing the\n> current tagged request to go to head of queue, which is\n> a simple way of manifesting a \"high priority\" request.\n>\n>3) SATA drives are not yet multi-initiator?\n>\n>Largely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives\n>are likely to whomp SATA II drives for a while yet (read: a\n>year or two) in multiuser PostGres applications. \n>\n>\n>\n>-----Original Message-----\n>From: [email protected] [mailto:[email protected]] On Behalf Of Greg Stark\n>Sent: Thursday, April 14, 2005 2:04 PM\n>To: Kevin Brown\n>Cc: [email protected]\n>Subject: Re: [PERFORM] How to improve db performance with $7K?\n>\n>\n>Kevin Brown <[email protected]> writes:\n>\n> \n>\n>>Greg Stark wrote:\n>>\n>>\n>> \n>>\n>>>I think you're being misled by analyzing the write case.\n>>>\n>>>Consider the read case. When a user process requests a block and \n>>>that read makes its way down to the driver level, the driver can't \n>>>just put it aside and wait until it's convenient. It has to go ahead \n>>>and issue the read right away.\n>>> \n>>>\n>>Well, strictly speaking it doesn't *have* to. It could delay for a \n>>couple of milliseconds to see if other requests come in, and then \n>>issue the read if none do. If there are already other requests being \n>>fulfilled, then it'll schedule the request in question just like the \n>>rest.\n>> \n>>\n>\n>But then the cure is worse than the disease. You're basically describing exactly what does happen anyways, only you're delaying more requests than necessary. That intervening time isn't really idle, it's filled with all the requests that were delayed during the previous large seek...\n>\n> \n>\n>>Once the first request has been fulfilled, the driver can now schedule \n>>the rest of the queued-up requests in disk-layout order.\n>>\n>>I really don't see how this is any different between a system that has \n>>tagged queueing to the disks and one that doesn't. The only \n>>difference is where the queueing happens.\n>> \n>>\n>\n>And *when* it happens. Instead of being able to issue requests while a large seek is happening and having some of them satisfied they have to wait until that seek is finished and get acted on during the next large seek.\n>\n>If my theory is correct then I would expect bandwidth to be essentially equivalent but the latency on SATA drives to be increased by about 50% of the average seek time. Ie, while a busy SCSI drive can satisfy most requests in about 10ms a busy SATA drive would satisfy most requests in 15ms. 
(add to that that 10k RPM and 15kRPM SCSI drives have even lower seek times and no such IDE/SATA drives exist...)\n>\n>In reality higher latency feeds into a system feedback loop causing your application to run slower causing bandwidth demands to be lower as well. It's often hard to distinguish root causes from symptoms when optimizing complex systems.\n>\n> \n>\n\n",
"msg_date": "Thu, 14 Apr 2005 11:44:28 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Steve Poe wrote:\n\n> If SATA drives don't have the ability to replace SCSI for a multi-user\n\nI don't think it is a matter of not having the ability. SATA all in all \nis fine as long as\nit is battery backed. It isn't as high performing as SCSI but who says \nit has to be?\n\nThere are plenty of companies running databases on SATA without issue. Would\nI put it on a database that is expecting to have 500 connections at all \ntimes? No.\nThen again, if you have an application with that requirement, you have \nthe money\nto buy a big fat SCSI array.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> Postgres apps, but you needed to save on cost (ALWAYS an issue), \n> could/would you implement SATA for your logs (pg_xlog) and keep the \n> rest on SCSI?\n>\n> Steve Poe\n>\n> Mohan, Ross wrote:\n>\n>> I've been doing some reading up on this, trying to keep up here, and \n>> have found out that (experts, just yawn and cover your ears)\n>>\n>> 1) some SATA drives (just type II, I think?) have a \"Phase Zero\"\n>> implementation of Tagged Command Queueing (the special sauce\n>> for SCSI).\n>> 2) This SATA \"TCQ\" is called NCQ and I believe it basically\n>> allows the disk software itself to do the reordering\n>> (this is called \"simple\" in TCQ terminology) It does not\n>> yet allow the TCQ \"head of queue\" command, allowing the\n>> current tagged request to go to head of queue, which is\n>> a simple way of manifesting a \"high priority\" request.\n>>\n>> 3) SATA drives are not yet multi-initiator?\n>>\n>> Largely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives\n>> are likely to whomp SATA II drives for a while yet (read: a\n>> year or two) in multiuser PostGres applications.\n>>\n>>\n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of Greg Stark\n>> Sent: Thursday, April 14, 2005 2:04 PM\n>> To: Kevin Brown\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] How to improve db performance with $7K?\n>>\n>>\n>> Kevin Brown <[email protected]> writes:\n>>\n>> \n>>\n>>> Greg Stark wrote:\n>>>\n>>>\n>>> \n>>>\n>>>> I think you're being misled by analyzing the write case.\n>>>>\n>>>> Consider the read case. When a user process requests a block and \n>>>> that read makes its way down to the driver level, the driver can't \n>>>> just put it aside and wait until it's convenient. It has to go \n>>>> ahead and issue the read right away.\n>>>> \n>>>\n>>> Well, strictly speaking it doesn't *have* to. It could delay for a \n>>> couple of milliseconds to see if other requests come in, and then \n>>> issue the read if none do. If there are already other requests \n>>> being fulfilled, then it'll schedule the request in question just \n>>> like the rest.\n>>> \n>>\n>>\n>> But then the cure is worse than the disease. You're basically \n>> describing exactly what does happen anyways, only you're delaying \n>> more requests than necessary. That intervening time isn't really \n>> idle, it's filled with all the requests that were delayed during the \n>> previous large seek...\n>>\n>> \n>>\n>>> Once the first request has been fulfilled, the driver can now \n>>> schedule the rest of the queued-up requests in disk-layout order.\n>>>\n>>> I really don't see how this is any different between a system that \n>>> has tagged queueing to the disks and one that doesn't. The only \n>>> difference is where the queueing happens.\n>>> \n>>\n>>\n>> And *when* it happens. 
Instead of being able to issue requests while \n>> a large seek is happening and having some of them satisfied they have \n>> to wait until that seek is finished and get acted on during the next \n>> large seek.\n>>\n>> If my theory is correct then I would expect bandwidth to be \n>> essentially equivalent but the latency on SATA drives to be increased \n>> by about 50% of the average seek time. Ie, while a busy SCSI drive \n>> can satisfy most requests in about 10ms a busy SATA drive would \n>> satisfy most requests in 15ms. (add to that that 10k RPM and 15kRPM \n>> SCSI drives have even lower seek times and no such IDE/SATA drives \n>> exist...)\n>>\n>> In reality higher latency feeds into a system feedback loop causing \n>> your application to run slower causing bandwidth demands to be lower \n>> as well. It's often hard to distinguish root causes from symptoms \n>> when optimizing complex systems.\n>>\n>> \n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n\n\n",
"msg_date": "Thu, 14 Apr 2005 12:16:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Problem with this strategy. You want battery-backed write caching for \nbest performance & safety. (I've tried IDE for WAL before w/ write \ncaching off -- the DB got crippled whenever I had to copy files from/to \nthe drive on the WAL partition -- ended up just moving WAL back on the \nsame SCSI drive as the main DB.) That means in addition to a $$$ SCSI \ncaching controller, you also need a $$$ SATA caching controller. From my \nglance at prices, advanced SATA controllers seem to cost nearly as their \nSCSI counterparts.\n\nThis also looks to be the case for the drives themselves. Sure you can \nget super cheap 7200RPM SATA drives but they absolutely suck for \ndatabase work. Believe me, I gave it a try once -- ugh. The highend WD \n10K Raptors look pretty good though -- the benchmarks @ storagereview \nseem to put these drives at about 90% of SCSI 10Ks for both single-user \nand multi-user. However, they're also priced like SCSIs -- here's what I \nfound @ Mwave (going through pricewatch to find WD740GDs):\n\nSeagate 7200 SATA -- 80GB $59\nWD 10K SATA -- 72GB $182\nSeagate 10K U320 -- 72GB $289\n\nUsing the above prices for a fixed budget for RAID-10, you could get:\n\nSATA 7200 -- 680MB per $1000\nSATA 10K -- 200MB per $1000\nSCSI 10K -- 125MB per $1000\n\nFor a 99% read-only DB that required lots of disk space (say something \nlike Wikipedia or blog host), using consumer level SATA probably is ok. \nFor anything else, I'd consider SATA 10K if (1) I do not need 15K RPM \nand (2) I don't have SCSI intrastructure already.\n\n\nSteve Poe wrote:\n> If SATA drives don't have the ability to replace SCSI for a multi-user\n> Postgres apps, but you needed to save on cost (ALWAYS an issue), \n> could/would you implement SATA for your logs (pg_xlog) and keep the rest \n> on SCSI?\n> \n> Steve Poe\n> \n> Mohan, Ross wrote:\n> \n>> I've been doing some reading up on this, trying to keep up here, and \n>> have found out that (experts, just yawn and cover your ears)\n>>\n>> 1) some SATA drives (just type II, I think?) have a \"Phase Zero\"\n>> implementation of Tagged Command Queueing (the special sauce\n>> for SCSI).\n>> 2) This SATA \"TCQ\" is called NCQ and I believe it basically\n>> allows the disk software itself to do the reordering\n>> (this is called \"simple\" in TCQ terminology) It does not\n>> yet allow the TCQ \"head of queue\" command, allowing the\n>> current tagged request to go to head of queue, which is\n>> a simple way of manifesting a \"high priority\" request.\n>>\n>> 3) SATA drives are not yet multi-initiator?\n>>\n>> Largely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives\n>> are likely to whomp SATA II drives for a while yet (read: a\n>> year or two) in multiuser PostGres applications.\n>>\n>>\n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of Greg Stark\n>> Sent: Thursday, April 14, 2005 2:04 PM\n>> To: Kevin Brown\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] How to improve db performance with $7K?\n>>\n>>\n>> Kevin Brown <[email protected]> writes:\n>>\n>> \n>>\n>>> Greg Stark wrote:\n>>>\n>>>\n>>> \n>>>\n>>>> I think you're being misled by analyzing the write case.\n>>>>\n>>>> Consider the read case. When a user process requests a block and \n>>>> that read makes its way down to the driver level, the driver can't \n>>>> just put it aside and wait until it's convenient. It has to go ahead \n>>>> and issue the read right away.\n>>>> \n>>>\n>>> Well, strictly speaking it doesn't *have* to. 
It could delay for a \n>>> couple of milliseconds to see if other requests come in, and then \n>>> issue the read if none do. If there are already other requests being \n>>> fulfilled, then it'll schedule the request in question just like the \n>>> rest.\n>>> \n>>\n>>\n>> But then the cure is worse than the disease. You're basically \n>> describing exactly what does happen anyways, only you're delaying more \n>> requests than necessary. That intervening time isn't really idle, it's \n>> filled with all the requests that were delayed during the previous \n>> large seek...\n>>\n>> \n>>\n>>> Once the first request has been fulfilled, the driver can now \n>>> schedule the rest of the queued-up requests in disk-layout order.\n>>>\n>>> I really don't see how this is any different between a system that \n>>> has tagged queueing to the disks and one that doesn't. The only \n>>> difference is where the queueing happens.\n>>> \n>>\n>>\n>> And *when* it happens. Instead of being able to issue requests while a \n>> large seek is happening and having some of them satisfied they have to \n>> wait until that seek is finished and get acted on during the next \n>> large seek.\n>>\n>> If my theory is correct then I would expect bandwidth to be \n>> essentially equivalent but the latency on SATA drives to be increased \n>> by about 50% of the average seek time. Ie, while a busy SCSI drive can \n>> satisfy most requests in about 10ms a busy SATA drive would satisfy \n>> most requests in 15ms. (add to that that 10k RPM and 15kRPM SCSI \n>> drives have even lower seek times and no such IDE/SATA drives exist...)\n>>\n>> In reality higher latency feeds into a system feedback loop causing \n>> your application to run slower causing bandwidth demands to be lower \n>> as well. It's often hard to distinguish root causes from symptoms when \n>> optimizing complex systems.\n>>\n>> \n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n",
"msg_date": "Mon, 18 Apr 2005 01:05:09 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
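The capacity-per-$1000 figures above follow mechanically from the quoted street prices (the units are really GB rather than MB). A small sketch, using only the prices and sizes listed in the message and the fact that RAID-10 keeps half the raw space:

```python
# Reproducing the RAID-10 capacity-per-$1000 comparison from the quoted prices.
drives = {
    "SATA 7200 (80GB, $59)":  (80, 59),
    "SATA 10K  (72GB, $182)": (72, 182),
    "SCSI 10K  (72GB, $289)": (72, 289),
}

budget = 1000.0
for name, (size_gb, price) in drives.items():
    n_drives = budget / price                 # how many drives the budget buys
    usable_gb = n_drives * size_gb / 2.0      # RAID-10: mirroring halves usable space
    print(f"{name}: ~{usable_gb:.0f} GB usable per $1000")
```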
{
"msg_contents": "\nWilliam Yu <[email protected]> writes:\n\n> Using the above prices for a fixed budget for RAID-10, you could get:\n> \n> SATA 7200 -- 680MB per $1000\n> SATA 10K -- 200MB per $1000\n> SCSI 10K -- 125MB per $1000\n\nWhat a lot of these analyses miss is that cheaper == faster because cheaper\nmeans you can buy more spindles for the same price. I'm assuming you picked\nequal sized drives to compare so that 200MB/$1000 for SATA is almost twice as\nmany spindles as the 125MB/$1000. That means it would have almost double the\nbandwidth. And the 7200 RPM case would have more than 5x the bandwidth.\n\nWhile 10k RPM drives have lower seek times, and SCSI drives have a natural\nseek time advantage, under load a RAID array with fewer spindles will start\nhitting contention sooner which results into higher latency. If the controller\nworks well the larger SATA arrays above should be able to maintain their\nmediocre latency much better under load than the SCSI array with fewer drives\nwould maintain its low latency response time despite its drives' lower average\nseek time.\n\n-- \ngreg\n\n",
"msg_date": "18 Apr 2005 10:59:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "This is fundamentaly untrue.\n\nA mirror is still a mirror. At most in a RAID 10 you can have two\nsimultaneous seeks. You are always going to be limited by the seek\ntime of your drives. It's a stripe, so you have to read from all\nmembers of the stripe to get data, requiring all drives to seek. \nThere is no advantage to seek time in adding more drives. By adding\nmore drives you can increase throughput, but the max throughput of the\nPCI-X bus isn't that high (I think around 400MB/sec) You can easily\nget this with a six or seven drive RAID 5, or a ten drive RAID 10. At\nthat point you start having to factor in the cost of a bigger chassis\nto hold more drives, which can be big bucks.\n\nAlex Turner\nnetEconomist\n\nOn 18 Apr 2005 10:59:05 -0400, Greg Stark <[email protected]> wrote:\n> \n> William Yu <[email protected]> writes:\n> \n> > Using the above prices for a fixed budget for RAID-10, you could get:\n> >\n> > SATA 7200 -- 680MB per $1000\n> > SATA 10K -- 200MB per $1000\n> > SCSI 10K -- 125MB per $1000\n> \n> What a lot of these analyses miss is that cheaper == faster because cheaper\n> means you can buy more spindles for the same price. I'm assuming you picked\n> equal sized drives to compare so that 200MB/$1000 for SATA is almost twice as\n> many spindles as the 125MB/$1000. That means it would have almost double the\n> bandwidth. And the 7200 RPM case would have more than 5x the bandwidth.\n> \n> While 10k RPM drives have lower seek times, and SCSI drives have a natural\n> seek time advantage, under load a RAID array with fewer spindles will start\n> hitting contention sooner which results into higher latency. If the controller\n> works well the larger SATA arrays above should be able to maintain their\n> mediocre latency much better under load than the SCSI array with fewer drives\n> would maintain its low latency response time despite its drives' lower average\n> seek time.\n> \n> --\n> greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n",
"msg_date": "Mon, 18 Apr 2005 11:17:25 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
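The "~400MB/sec bus" point is easy to sanity-check. The bus figure comes from the message; the per-drive streaming rate below is an assumption for drives of that generation, not something the message states.

```python
import math

# How many drives does it take to saturate a ~400MB/s PCI-X bus with sequential reads?
bus_mb_s = 400.0      # figure from the message
drive_mb_s = 60.0     # assumed sequential throughput per drive

drives_needed = math.ceil(bus_mb_s / drive_mb_s)
print(f"~{drives_needed} drives streaming at {drive_mb_s:.0f} MB/s "
      f"saturate a {bus_mb_s:.0f} MB/s bus")
```

With that assumed per-drive rate, the result lands in the same six-to-seven drive range mentioned above.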
{
"msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> This is fundamentaly untrue.\n> \n> A mirror is still a mirror. At most in a RAID 10 you can have two\n> simultaneous seeks. You are always going to be limited by the seek\n> time of your drives. It's a stripe, so you have to read from all\n> members of the stripe to get data, requiring all drives to seek. \n> There is no advantage to seek time in adding more drives. \n\nAdding drives will not let you get lower response times than the average seek\ntime on your drives*. But it will let you reach that response time more often. \n\nThe actual response time for a random access to a drive is the seek time plus\nthe time waiting for your request to actually be handled. Under heavy load\nthat could be many milliseconds. The more drives you have the fewer requests\neach drive has to handle.\n\nLook at the await and svctime columns of iostat -x.\n\nUnder heavy random access load those columns can show up performance problems\nmore accurately than the bandwidth columns. You could be doing less bandwidth\nbut be having latency issues. While reorganizing data to allow for more\nsequential reads is the normal way to address that, simply adding more\nspindles can be surprisingly effective.\n\n> By adding more drives you can increase throughput, but the max throughput of\n> the PCI-X bus isn't that high (I think around 400MB/sec) You can easily get\n> this with a six or seven drive RAID 5, or a ten drive RAID 10. At that point\n> you start having to factor in the cost of a bigger chassis to hold more\n> drives, which can be big bucks.\n\nYou could use software raid to spread the drives over multiple PCI-X cards.\nBut if 400MB/s isn't enough bandwidth then you're probably in the realm of\n\"enterprise-class\" hardware anyways.\n\n* (Actually even that's possible: you could limit yourself to a portion of the\n drive surface to reduce seek time)\n\n-- \ngreg\n\n",
"msg_date": "18 Apr 2005 11:43:54 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
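The await-versus-svctime point can be illustrated with a deliberately crude queueing model: split the request stream evenly over the spindles and treat each spindle as a single-server (M/M/1-style) queue whose mean service time is its seek plus rotation time. This is a toy approximation with assumed numbers, not a claim about any particular array.

```python
# Toy sketch: average response time when a fixed stream of random reads is
# spread evenly over N spindles, each modelled as an M/M/1 queue.
def avg_response_ms(total_iops, n_drives, service_ms):
    per_drive = total_iops / n_drives          # requests/sec hitting each spindle
    rho = per_drive * service_ms / 1000.0      # utilisation of one spindle
    if rho >= 1.0:
        return float("inf")                    # the spindle cannot keep up
    return service_ms / (1.0 - rho)            # M/M/1 mean response time

workload = 600   # random reads/sec, an assumed load
for n, svc in [(4, 8.0), (8, 8.0), (8, 12.0), (16, 12.0)]:
    print(f"{n:2d} drives @ {svc:4.1f} ms service time: "
          f"{avg_response_ms(workload, n, svc):6.1f} ms average response")
```

Under this toy load, sixteen slower spindles end up with roughly the same average latency as eight faster ones, which is the effect being described.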
{
"msg_contents": "[snip]\n> \n> Adding drives will not let you get lower response times than the average seek\n> time on your drives*. But it will let you reach that response time more often.\n> \n[snip]\n\nI believe your assertion is fundamentaly flawed. Adding more drives\nwill not let you reach that response time more often. All drives are\nrequired to fill every request in all RAID levels (except possibly\n0+1, but that isn't used for enterprise applicaitons). Most requests\nin OLTP require most of the request time to seek, not to read. Only\nin single large block data transfers will you get any benefit from\nadding more drives, which is atypical in most database applications. \nFor most database applications, the only way to increase\ntransactions/sec is to decrease request service time, which is\ngeneraly achieved with better seek times or a better controller card,\nor possibly spreading your database accross multiple tablespaces on\nseperate paritions.\n\nMy assertion therefore is that simply adding more drives to an already\ncompetent* configuration is about as likely to increase your database\neffectiveness as swiss cheese is to make your car run faster.\n\nAlex Turner\nnetEconomist\n\n*Assertion here is that the DBA didn't simply configure all tables and\nxlog on a single 7200 RPM disk, but has seperate physical drives for\nxlog and tablespace at least on 10k drives.\n",
"msg_date": "Mon, 18 Apr 2005 12:56:48 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Alex Turner wrote:\n\n>[snip]\n>\n>\n>>Adding drives will not let you get lower response times than the average seek\n>>time on your drives*. But it will let you reach that response time more often.\n>>\n>>\n>>\n>[snip]\n>\n>I believe your assertion is fundamentaly flawed. Adding more drives\n>will not let you reach that response time more often. All drives are\n>required to fill every request in all RAID levels (except possibly\n>0+1, but that isn't used for enterprise applicaitons).\n>\nActually 0+1 is the recommended configuration for postgres databases\n(both for xlog and for the bulk data), because the write speed of RAID5\nis quite poor.\nHence you base assumption is not correct, and adding drives *does* help.\n\n>Most requests\n>in OLTP require most of the request time to seek, not to read. Only\n>in single large block data transfers will you get any benefit from\n>adding more drives, which is atypical in most database applications.\n>For most database applications, the only way to increase\n>transactions/sec is to decrease request service time, which is\n>generaly achieved with better seek times or a better controller card,\n>or possibly spreading your database accross multiple tablespaces on\n>seperate paritions.\n>\n>\nThis is probably true. However, if you are doing lots of concurrent\nconnections, and things are properly spread across multiple spindles\n(using RAID0+1, or possibly tablespaces across multiple raids).\nThen each seek occurs on a separate drive, which allows them to occur at\nthe same time, rather than sequentially. Having 2 processes competing\nfor seeking on the same drive is going to be worse than having them on\nseparate drives.\nJohn\n=:->",
"msg_date": "Mon, 18 Apr 2005 12:16:14 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Hi,\n\nAt 18:56 18/04/2005, Alex Turner wrote:\n>All drives are required to fill every request in all RAID levels\n\nNo, this is definitely wrong. In many cases, most drives don't actually \nhave the data requested, how could they handle the request?\n\nWhen reading one random sector, only *one* drive out of N is ever used to \nservice any given request, be it RAID 0, 1, 0+1, 1+0 or 5.\n\nWhen writing:\n- in RAID 0, 1 drive\n- in RAID 1, RAID 0+1 or 1+0, 2 drives\n- in RAID 5, you need to read on all drives and write on 2.\n\nOtherwise, what would be the point of RAID 0, 0+1 or 1+0?\n\nJacques.\n\n\n",
"msg_date": "Mon, 18 Apr 2005 19:32:09 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Alex Turner wrote:\n\n>[snip]\n> \n>\n>>Adding drives will not let you get lower response times than the average seek\n>>time on your drives*. But it will let you reach that response time more often.\n>>\n>> \n>>\n>[snip]\n>\n>I believe your assertion is fundamentaly flawed. Adding more drives\n>will not let you reach that response time more often. All drives are\n>required to fill every request in all RAID levels (except possibly\n>0+1, but that isn't used for enterprise applicaitons). Most requests\n>in OLTP require most of the request time to seek, not to read. Only\n>in single large block data transfers will you get any benefit from\n>adding more drives, which is atypical in most database applications. \n>For most database applications, the only way to increase\n>transactions/sec is to decrease request service time, which is\n>generaly achieved with better seek times or a better controller card,\n>or possibly spreading your database accross multiple tablespaces on\n>seperate paritions.\n>\n>My assertion therefore is that simply adding more drives to an already\n>competent* configuration is about as likely to increase your database\n>effectiveness as swiss cheese is to make your car run faster.\n> \n>\n\nConsider the case of a mirrored file system with a mostly read() \nworkload. Typical behavior is to use a round-robin method for issueing \nthe read operations to each mirror in turn, but one can use other \nmethods like a geometric algorithm that will issue the reads to the \ndrive with the head located closest to the desired track. Some \nsystems have many mirrors of the data for exactly this behavior. In \nfact, one can carry this logic to the extreme and have one drive for \nevery cylinder in the mirror, thus removing seek latencies completely. \nIn fact this extreme case would also remove the rotational latency as \nthe cylinder will be in the disks read cache. :-) Of course, writing \ndata would be a bit slow!\n\nI'm not sure I understand your assertion that \"all drives are required \nto fill every request in all RAID levels\". After all, in mirrored \nreads only one mirror needs to read any given block of data, so I don't \nknow what goal is achieved in making other mirrors read the same data.\n\nMy assertion (based on ample personal experience) is that one can \n*always* get improved performance by adding more drives. Just limit the \ndrives to use the first few cylinders so that the average seek time is \ngreatly reduced and concatenate the drives together. One can then build \nthe usual RAID device out of these concatenated metadevices. Yes, one \nis wasting lots of disk space, but that's life. If your goal is \nperformance, then you need to put your money on the table. The \nsystem will be somewhat unreliable because of the device count, \nadditional SCSI buses, etc., but that too is life in the high \nperformance world.\n\n-- Alan\n",
"msg_date": "Mon, 18 Apr 2005 13:34:28 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Hi,\n\nAt 16:59 18/04/2005, Greg Stark wrote:\n\n>William Yu <[email protected]> writes:\n>\n> > Using the above prices for a fixed budget for RAID-10, you could get:\n> >\n> > SATA 7200 -- 680MB per $1000\n> > SATA 10K -- 200MB per $1000\n> > SCSI 10K -- 125MB per $1000\n>\n>What a lot of these analyses miss is that cheaper == faster because cheaper\n>means you can buy more spindles for the same price. I'm assuming you picked\n>equal sized drives to compare so that 200MB/$1000 for SATA is almost twice as\n>many spindles as the 125MB/$1000. That means it would have almost double the\n>bandwidth. And the 7200 RPM case would have more than 5x the bandwidth.\n>\n>While 10k RPM drives have lower seek times, and SCSI drives have a natural\n>seek time advantage, under load a RAID array with fewer spindles will start\n>hitting contention sooner which results into higher latency. If the controller\n>works well the larger SATA arrays above should be able to maintain their\n>mediocre latency much better under load than the SCSI array with fewer drives\n>would maintain its low latency response time despite its drives' lower average\n>seek time.\n\nI would definitely agree. More factors in favor of more cheap drives:\n- cheaper drives (7200 rpm) have larger disks (3.7\" diameter against 2.6 or \n3.3). That means the outer tracks hold more data, and the same amount of \ndata is held on a smaller area, which means less tracks, which means \nreduced seek times. You can roughly count the real average seek time as \n(average seek time over full disk * size of dataset / capacity of disk). \nAnd you actually need to physicall seek less often too.\n\n- more disks means less data per disk, which means the data is further \nconcentrated on outer tracks, which means even lower seek times\n\nAlso, what counts is indeed not so much the time it takes to do one single \nrandom seek, but the number of random seeks you can do per second. Hence, \nmore disks means more seeks per second (if requests are evenly distributed \namong all disks, which a good stripe size should achieve).\n\nNot taking into account TCQ/NCQ or write cache optimizations, the important \nparameter (random seeks per second) can be approximated as:\n\nN * 1000 / (lat + seek * ds / (N * cap))\n\nWhere:\nN is the number of disks\nlat is the average rotational latency in milliseconds (500/(rpm/60))\nseek is the average seek over the full disk in milliseconds\nds is the dataset size\ncap is the capacity of each disk\n\nUsing this formula and a variety of disks, counting only the disks \nthemselves (no enclosures, controllers, rack space, power, maintenance...), \ntrying to maximize the number of seeks/second for a fixed budget (1000 \neuros) with a dataset size of 100 GB makes SATA drives clear winners: you \ncan get more than 4000 seeks/second (with 21 x 80GB disks) where SCSI \ncannot even make it to the 1400 seek/second point (with 8 x 36 GB disks). \nResults can vary quite a lot based on the dataset size, which illustrates \nthe importance of \"staying on the edges\" of the disks. I'll try to make the \nanalysis more complete by counting some of the \"overhead\" (obviously 21 \ndrives has a lot of other implications!), but I believe SATA drives still \nwin in theory.\n\nIt would be interesting to actually compare this to real-world (or \nnearly-real-world) benchmarks to measure the effectiveness of features like \nTCQ/NCQ etc.\n\nJacques.\n \n\n\n",
"msg_date": "Mon, 18 Apr 2005 19:41:49 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
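Since the seeks-per-second formula above is easy to mistype, here it is as a small function, with one example along the lines of the 21-drive SATA versus 8-drive SCSI comparison. The per-drive parameters (spindle speeds, seek times, capacities) are assumptions for illustration and will not reproduce the message's exact numbers.

```python
# Random-seeks-per-second estimate from the formula above:
#   N * 1000 / (lat + seek * ds / (N * cap))
# where lat = 500 / (rpm / 60) is the average rotational latency in ms,
# seek is the average full-disk seek in ms, ds the dataset size, cap the disk size.
def seeks_per_second(n_disks, rpm, full_seek_ms, dataset_gb, disk_gb):
    lat = 500.0 / (rpm / 60.0)
    effective_seek = full_seek_ms * dataset_gb / (n_disks * disk_gb)
    return n_disks * 1000.0 / (lat + effective_seek)

dataset_gb = 100   # as in the example above
configs = [
    ("21 x 80GB SATA 7200", 21, 7200, 8.9, 80),    # assumed drive parameters
    (" 8 x 36GB SCSI 10K ",  8, 10000, 4.9, 36),   # assumed drive parameters
]
for name, n, rpm, seek, cap in configs:
    print(f"{name}: ~{seeks_per_second(n, rpm, seek, dataset_gb, cap):.0f} seeks/sec")
```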
{
"msg_contents": "Alex,\n\nIn the situation of the animal hospital server I oversee, their \napplication is OLTP. Adding hard drives (6-8) does help performance. \nBenchmarks like pgbench and OSDB agree with it, but in reality users \ncould not see noticeable change. However, moving the top 5/10 tables and \nindexes to their own space made a greater impact.\n\nSomeone who reads PostgreSQL 8.0 Performance Checklist is going to see \npoint #1 add more disks is the key. How about adding a subpoint to \nexplaining when more disks isn't enough or applicable? I maybe \ngeneralizing the complexity of tuning an OLTP application, but some \nclarity could help.\n\nSteve Poe\n\n\n\n\n",
"msg_date": "Mon, 18 Apr 2005 10:46:01 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Ok - well - I am partially wrong...\n\nIf you're stripe size is 64Kb, and you are reading 256k worth of data,\nit will be spread across four drives, so you will need to read from\nfour devices to get your 256k of data (RAID 0 or 5 or 10), but if you\nare only reading 64kb of data, I guess you would only need to read\nfrom one disk.\n\nSo my assertion that adding more drives doesn't help is pretty\nwrong... particularly with OLTP because it's always dealing with\nblocks that are smaller that the stripe size.\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, Jacques Caron <[email protected]> wrote:\n> Hi,\n> \n> At 18:56 18/04/2005, Alex Turner wrote:\n> >All drives are required to fill every request in all RAID levels\n> \n> No, this is definitely wrong. In many cases, most drives don't actually\n> have the data requested, how could they handle the request?\n> \n> When reading one random sector, only *one* drive out of N is ever used to\n> service any given request, be it RAID 0, 1, 0+1, 1+0 or 5.\n> \n> When writing:\n> - in RAID 0, 1 drive\n> - in RAID 1, RAID 0+1 or 1+0, 2 drives\n> - in RAID 5, you need to read on all drives and write on 2.\n> \n> Otherwise, what would be the point of RAID 0, 0+1 or 1+0?\n> \n> Jacques.\n> \n>\n",
"msg_date": "Mon, 18 Apr 2005 14:16:15 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
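The 64KB-stripe example above is easy to play with in code. The sketch below maps a byte range onto disk indices for a plain striped (RAID-0-style) layout; the stripe and request sizes are the ones from the message, and the layout itself is a simplifying assumption.

```python
# Which disks does a request touch, given a chunk (stripe unit) size and disk count?
# Simple striped layout: chunk i lives on disk i % n_disks.
def disks_touched(offset, length, stripe_kb, n_disks):
    chunk_bytes = stripe_kb * 1024
    first_chunk = offset // chunk_bytes
    last_chunk = (offset + length - 1) // chunk_bytes
    return sorted({chunk % n_disks for chunk in range(first_chunk, last_chunk + 1)})

print(disks_touched(0, 256 * 1024, 64, 4))   # 256KB read over 64KB chunks -> [0, 1, 2, 3]
print(disks_touched(0, 64 * 1024, 64, 4))    # 64KB read -> [0]
print(disks_touched(0, 8 * 1024, 64, 4))     # typical 8KB database block -> [0]
```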
{
"msg_contents": "Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at\nleast I would never recommend 1+0 for anything).\n\nRAID 10 and RAID 0+1 are _quite_ different. One gives you very good\nredundancy, the other is only slightly better than RAID 5, but\noperates faster in degraded mode (single drive).\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, John A Meinel <[email protected]> wrote:\n> Alex Turner wrote:\n> \n> >[snip]\n> >\n> >\n> >>Adding drives will not let you get lower response times than the average seek\n> >>time on your drives*. But it will let you reach that response time more often.\n> >>\n> >>\n> >>\n> >[snip]\n> >\n> >I believe your assertion is fundamentaly flawed. Adding more drives\n> >will not let you reach that response time more often. All drives are\n> >required to fill every request in all RAID levels (except possibly\n> >0+1, but that isn't used for enterprise applicaitons).\n> >\n> Actually 0+1 is the recommended configuration for postgres databases\n> (both for xlog and for the bulk data), because the write speed of RAID5\n> is quite poor.\n> Hence you base assumption is not correct, and adding drives *does* help.\n> \n> >Most requests\n> >in OLTP require most of the request time to seek, not to read. Only\n> >in single large block data transfers will you get any benefit from\n> >adding more drives, which is atypical in most database applications.\n> >For most database applications, the only way to increase\n> >transactions/sec is to decrease request service time, which is\n> >generaly achieved with better seek times or a better controller card,\n> >or possibly spreading your database accross multiple tablespaces on\n> >seperate paritions.\n> >\n> >\n> This is probably true. However, if you are doing lots of concurrent\n> connections, and things are properly spread across multiple spindles\n> (using RAID0+1, or possibly tablespaces across multiple raids).\n> Then each seek occurs on a separate drive, which allows them to occur at\n> the same time, rather than sequentially. Having 2 processes competing\n> for seeking on the same drive is going to be worse than having them on\n> separate drives.\n> John\n> =:->\n> \n> \n>\n",
"msg_date": "Mon, 18 Apr 2005 14:18:21 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "I think the add more disks thing is really from the point of view that\none disk isn't enough ever. You should really have at least four\ndrives configured into two RAID 1s. Most DBAs will know this, but\nmost average Joes won't.\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, Steve Poe <[email protected]> wrote:\n> Alex,\n> \n> In the situation of the animal hospital server I oversee, their\n> application is OLTP. Adding hard drives (6-8) does help performance.\n> Benchmarks like pgbench and OSDB agree with it, but in reality users\n> could not see noticeable change. However, moving the top 5/10 tables and\n> indexes to their own space made a greater impact.\n> \n> Someone who reads PostgreSQL 8.0 Performance Checklist is going to see\n> point #1 add more disks is the key. How about adding a subpoint to\n> explaining when more disks isn't enough or applicable? I maybe\n> generalizing the complexity of tuning an OLTP application, but some\n> clarity could help.\n> \n> Steve Poe\n> \n>\n",
"msg_date": "Mon, 18 Apr 2005 14:19:36 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "So I wonder if one could take this stripe size thing further and say\nthat a larger stripe size is more likely to result in requests getting\nserved parallized across disks which would lead to increased\nperformance?\n\nAgain, thanks to all people on this list, I know that I have learnt a\n_hell_ of alot since subscribing.\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, Alex Turner <[email protected]> wrote:\n> Ok - well - I am partially wrong...\n> \n> If you're stripe size is 64Kb, and you are reading 256k worth of data,\n> it will be spread across four drives, so you will need to read from\n> four devices to get your 256k of data (RAID 0 or 5 or 10), but if you\n> are only reading 64kb of data, I guess you would only need to read\n> from one disk.\n> \n> So my assertion that adding more drives doesn't help is pretty\n> wrong... particularly with OLTP because it's always dealing with\n> blocks that are smaller that the stripe size.\n> \n> Alex Turner\n> netEconomist\n> \n> On 4/18/05, Jacques Caron <[email protected]> wrote:\n> > Hi,\n> >\n> > At 18:56 18/04/2005, Alex Turner wrote:\n> > >All drives are required to fill every request in all RAID levels\n> >\n> > No, this is definitely wrong. In many cases, most drives don't actually\n> > have the data requested, how could they handle the request?\n> >\n> > When reading one random sector, only *one* drive out of N is ever used to\n> > service any given request, be it RAID 0, 1, 0+1, 1+0 or 5.\n> >\n> > When writing:\n> > - in RAID 0, 1 drive\n> > - in RAID 1, RAID 0+1 or 1+0, 2 drives\n> > - in RAID 5, you need to read on all drives and write on 2.\n> >\n> > Otherwise, what would be the point of RAID 0, 0+1 or 1+0?\n> >\n> > Jacques.\n> >\n> >\n>\n",
"msg_date": "Mon, 18 Apr 2005 14:21:04 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\nJacques Caron <[email protected]> writes:\n\n> When writing:\n> - in RAID 0, 1 drive\n> - in RAID 1, RAID 0+1 or 1+0, 2 drives\n> - in RAID 5, you need to read on all drives and write on 2.\n\nActually RAID 5 only really needs to read from two drives. The existing parity\nblock and the block you're replacing. It just xors the old block, the new\nblock, and the existing parity block to generate the new parity block.\n\n-- \ngreg\n\n",
"msg_date": "18 Apr 2005 14:24:14 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
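The parity update described above can be checked in a few lines: xor-ing the old data block, the new data block and the old parity gives exactly the parity you would get by recomputing it across all data blocks. A minimal sketch with made-up 4-byte "blocks":

```python
# RAID-5 small write: new_parity = old_parity XOR old_block XOR new_block
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]  # 3 data disks
parity = b"\x00" * 4
for block in data:
    parity = xor_blocks(parity, block)        # full-stripe parity

new_block = b"\xde\xad\xbe\xef"               # overwrite the second data block
updated_parity = xor_blocks(xor_blocks(parity, data[1]), new_block)  # 2 reads, 2 writes

data[1] = new_block
recomputed = b"\x00" * 4
for block in data:
    recomputed = xor_blocks(recomputed, block)

assert updated_parity == recomputed
print("old_parity ^ old_block ^ new_block matches the recomputed parity")
```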
{
"msg_contents": "Hi,\n\nAt 20:16 18/04/2005, Alex Turner wrote:\n>So my assertion that adding more drives doesn't help is pretty\n>wrong... particularly with OLTP because it's always dealing with\n>blocks that are smaller that the stripe size.\n\nWhen doing random seeks (which is what a database needs most of the time), \nthe number of disks helps improve the number of seeks per second (which is \nthe bottleneck in this case). When doing sequential reads, the number of \ndisks helps improve total throughput (which is the bottleneck in that case).\n\nIn short: in always helps :-)\n\nJacques.\n\n\n",
"msg_date": "Mon, 18 Apr 2005 20:24:25 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Alex Turner wrote:\n> Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at\n> least I would never recommend 1+0 for anything).\n\nUhmm I was under the impression that 1+0 was RAID 10 and that 0+1 is NOT\nRAID 10.\n\nRef: http://www.acnc.com/raid.html\n\nSincerely,\n\nJoshua D. Drake\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Mon, 18 Apr 2005 11:27:58 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Hi,\n\nAt 20:21 18/04/2005, Alex Turner wrote:\n>So I wonder if one could take this stripe size thing further and say\n>that a larger stripe size is more likely to result in requests getting\n>served parallized across disks which would lead to increased\n>performance?\n\nActually, it would be pretty much the opposite. The smaller the stripe \nsize, the more evenly distributed data is, and the more disks can be used \nto serve requests. If your stripe size is too large, many random accesses \nwithin one single file (whose size is smaller than the stripe size/number \nof disks) may all end up on the same disk, rather than being split across \nmultiple disks (the extreme case being stripe size = total size of all \ndisks, which means concatenation). If all accesses had the same cost (i.e. \nno seek time, only transfer time), the ideal would be to have a stripe size \nequal to the number of disks.\n\nBut below a certain size, you're going to use multiple disks to serve one \nsingle request which would not have taken much more time from a single disk \n(reading even a large number of consecutive blocks within one cylinder does \nnot take much more time than reading a single block), so you would add \nunnecessary seeks on a disk that could have served another request in the \nmeantime. You should definitely not go below the filesystem block size or \nthe database block size.\n\nThere is a interesting discussion of the optimal stripe size in the vinum \nmanpage on FreeBSD:\n\nhttp://www.freebsd.org/cgi/man.cgi?query=vinum&apropos=0&sektion=0&manpath=FreeBSD+5.3-RELEASE+and+Ports&format=html\n\n(look for \"Performance considerations\", towards the end -- note however \nthat some of the calculations are not entirely correct).\n\nBasically it says the optimal stripe size is somewhere between 256KB and \n4MB, preferably an odd number, and that some hardware RAID controllers \ndon't like big stripe sizes. YMMV, as always.\n\nJacques.\n\n\n",
"msg_date": "Mon, 18 Apr 2005 20:43:45 +0200",
"msg_from": "Jacques Caron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Mistype.. I meant 0+1 in the second instance :(\n\n\nOn 4/18/05, Joshua D. Drake <[email protected]> wrote:\n> Alex Turner wrote:\n> > Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at\n> > least I would never recommend 1+0 for anything).\n> \n> Uhmm I was under the impression that 1+0 was RAID 10 and that 0+1 is NOT\n> RAID 10.\n> \n> Ref: http://www.acnc.com/raid.html\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> \n>\n",
"msg_date": "Mon, 18 Apr 2005 14:50:46 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On 4/18/05, Jacques Caron <[email protected]> wrote:\n> Hi,\n> \n> At 20:21 18/04/2005, Alex Turner wrote:\n> >So I wonder if one could take this stripe size thing further and say\n> >that a larger stripe size is more likely to result in requests getting\n> >served parallized across disks which would lead to increased\n> >performance?\n> \n> Actually, it would be pretty much the opposite. The smaller the stripe\n> size, the more evenly distributed data is, and the more disks can be used\n> to serve requests. If your stripe size is too large, many random accesses\n> within one single file (whose size is smaller than the stripe size/number\n> of disks) may all end up on the same disk, rather than being split across\n> multiple disks (the extreme case being stripe size = total size of all\n> disks, which means concatenation). If all accesses had the same cost (i.e.\n> no seek time, only transfer time), the ideal would be to have a stripe size\n> equal to the number of disks.\n> \n[snip]\n\nAhh yes - but the critical distinction is this:\nThe smaller the stripe size, the more disks will be used to serve _a_\nrequest - which is bad for OLTP because you want fewer disks per\nrequest so that you can have more requests per second because the cost\nis mostly seek. If more than one disk has to seek to serve a single\nrequest, you are preventing that disk from serving a second request at\nthe same time.\n\nTo have more throughput in MB/sec, you want a smaller stripe size so\nthat you have more disks serving a single request allowing you to\nmultiple by effective drives to get total bandwidth.\n\nBecause OLTP is made up of small reads and writes to a small number of\ndifferent files, I would guess that you want those files split up\nacross your RAID, but not so much that a single small read or write\noperation would traverse more than one disk. That would infer that\nyour optimal stripe size is somewhere on the right side of the bell\ncurve that represents your database read and write block count\ndistribution. If on average the dbwritter never flushes less than 1MB\nto disk at a time, then I guess your best stripe size would be 1MB,\nbut that seems very large to me.\n\nSo I think therefore that I may be contending the exact opposite of\nwhat you are postulating!\n\nAlex Turner\nnetEconomist\n",
"msg_date": "Mon, 18 Apr 2005 15:26:31 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Oooops, I revived the never-ending $7K thread. :)\n\nWell part of my message is to first relook at the idea that SATA is \ncheap but slow. Most people look at SATA from the view of consumer-level \ndrives, no NCQ/TCQ -- basically these drives are IDEs that can connect \nto SATA cables. But if you then look at the server-level SATAs from WD, \nyou see performance close to server-level 10K SCSIs and pricing also close.\n\nStarting with the idea of using 20 consumer-level SATA drives versus 4 \n10K SCSIs, the main problem of course is the lack of advanced queueing \nin these drives. I'm sure there's some threshold where the number of \ndrives advantage exceeds the disadvantage of no queueing -- what that \nis, I don't have a clue.\n\nNow if you stuffed a ton of memory onto a SATA caching controller and \nthese controllers did the queue management instead of the drives, that \nwould eliminate most of the performance issues.\n\nThen you're just left with the management issues. Getting those 20 \ndrives stuffed in a big case and keeping a close eye on the drives since \ndrive failure will be a much bigger deal.\n\n\n\nGreg Stark wrote:\n> William Yu <[email protected]> writes:\n> \n> \n>>Using the above prices for a fixed budget for RAID-10, you could get:\n>>\n>>SATA 7200 -- 680MB per $1000\n>>SATA 10K -- 200MB per $1000\n>>SCSI 10K -- 125MB per $1000\n> \n> \n> What a lot of these analyses miss is that cheaper == faster because cheaper\n> means you can buy more spindles for the same price. I'm assuming you picked\n> equal sized drives to compare so that 200MB/$1000 for SATA is almost twice as\n> many spindles as the 125MB/$1000. That means it would have almost double the\n> bandwidth. And the 7200 RPM case would have more than 5x the bandwidth.\n> \n> While 10k RPM drives have lower seek times, and SCSI drives have a natural\n> seek time advantage, under load a RAID array with fewer spindles will start\n> hitting contention sooner which results into higher latency. If the controller\n> works well the larger SATA arrays above should be able to maintain their\n> mediocre latency much better under load than the SCSI array with fewer drives\n> would maintain its low latency response time despite its drives' lower average\n> seek time.\n> \n",
"msg_date": "Mon, 18 Apr 2005 13:37:11 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 07:41:49PM +0200, Jacques Caron wrote:\n> It would be interesting to actually compare this to real-world (or \n> nearly-real-world) benchmarks to measure the effectiveness of features like \n> TCQ/NCQ etc.\n\nI was just thinking that it would be very interesting to benchmark\ndifferent RAID configurations using dbt2. I don't know if this is\nsomething that the lab is setup for or capable of, though.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:03:39 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Mohan, Ross [mailto:[email protected]]\n> Sent: Thursday, April 14, 2005 1:30 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> Greg Stark wrote:\n> > \n> > Kevin Brown <[email protected]> writes:\n> > \n> > > Greg Stark wrote:\n> > > \n> > > > I think you're being misled by analyzing the write case.\n> > > > \n> > > > Consider the read case. When a user process requests a block\n> > > > and that read makes its way down to the driver level, the \n> > > > driver can't just put it aside and wait until it's convenient.\n> > > > It has to go ahead and issue the read right away.\n> > > \n> > > Well, strictly speaking it doesn't *have* to. It could delay\n> > > for a couple of milliseconds to see if other requests come in,\n> > > and then issue the read if none do. If there are already other \n> > > requests being fulfilled, then it'll schedule the request in\n> > > question just like the rest.\n> >\n> > But then the cure is worse than the disease. You're basically \n> > describing exactly what does happen anyways, only you're \n> > delaying more requests than necessary. That intervening time \n> > isn't really idle, it's filled with all the requests that \n> > were delayed during the previous large seek...\n> > [...]\n> \n> [...]\n> 1) some SATA drives (just type II, I think?) have a \"Phase Zero\"\n> implementation of Tagged Command Queueing (the special sauce\n> for SCSI).\n> [...]\n> Largely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives\n> are likely to whomp SATA II drives for a while yet (read: a\n> year or two) in multiuser PostGres applications. \n\nI would say it depends on the OS. What Kevin is describing sounds\njust like the Anticipatory I/O Scheduler in Linux 2.6:\n\nhttp://www.linuxjournal.com/article/6931\n\nFor certain application contexts, it looks like a big win. Not\nentirely sure if Postgres is one of them, though. If SCSI beats\nSATA, it sounds like it will be mostly due to better seek times.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Thu, 14 Apr 2005 13:46:12 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Joel Fradkin [mailto:[email protected]]\n> Sent: Thursday, April 14, 2005 11:39 AM\n> To: 'Tom Lane'; 'Dawid Kuroczko'\n> Cc: 'PERFORM'\n> Subject: Re: [PERFORM] speed of querry?\n> \n> \n> I did as described to alter table and did not see any \n> difference in speed. I am trying to undo the symbolic\n> link to the data array and set it up on raid 5 disks in\n> the machine just to test if there is an issue with the\n> config of the raid 10 array or a problem with the controller.\n> \n> I am kinda lame at Linux so not sure I have got it yet still\n> testing. Still kind puzzled why it chose tow different option,\n> but one is running windows version of postgres, so maybe that\n> has something to do with it.\n\nThat sounds like a plausible explanation. However, it could\nsimply be that the statistics gathered on each box are\nsufficiently different to cause different plans.\n\n> The data bases and configs (as far as page cost) are the same.\n\nDid you do as Dawid suggested?\n\n> [...]\n> Then do a query couple of times (EXPLAIN ANALYZE also :)), then\n> do:\n> SET enable_seqscan = off;\n> and rerun the query -- if it was significantly faster, you will\n> want to do:\n> SET enable_seqscan = on;\n> and tweak:\n> SET random_page_cost = 2.1;\n> ...and play with values. When you reach the random_page_cost\n> which suits your data, you will want to put it into\n> postgresql.conf\n> [...]\n\nThis is above and beyond toying with the column statistics. You\nare basically telling the planner to use an index. Try this,\nand post the EXPLAIN ANALYZE for the seqscan = off case on the\nslow box if it doesn't speed things up for you.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Thu, 14 Apr 2005 17:17:22 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
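For readers following along, here is roughly what the suggested experiment looks like in a psql session. This is only a sketch: the view name and filter value are taken from Joel's later posts in this thread, and the random_page_cost value is just an illustration, not a recommendation.

-- Baseline plan and timing:
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';

-- Temporarily forbid sequential scans to see whether an index plan is faster:
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';

-- Put the planner back to normal, then lower random_page_cost step by step
-- until the better plan wins on its own:
SET enable_seqscan = on;
SET random_page_cost = 2.1;
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';

-- Once a value works across representative queries, set it in postgresql.conf.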
{
"msg_contents": "It is still slower on the Linux box. (included is explain with SET\nenable_seqscan = off;\nexplain analyze select * from viwassoclist where clientnum ='SAKS') See\nbelow.\n\nI did a few other tests (changing drive arrays helped by 1 second was slower\non my raid 10 on the powervault).\n\nPulling just raw data is much faster on the Linux box.\n\"Seq Scan on tblresponse_line (cost=100000000.00..100089717.78 rows=4032078\nwidth=67) (actual time=0.028..4600.431 rows=4032078 loops=1)\"\n\"Total runtime: 6809.399 ms\"\nWindows box\n\"Seq Scan on tblresponse_line (cost=0.00..93203.68 rows=4031968 width=67)\n(actual time=16.000..11316.000 rows=4031968 loops=1)\"\n\"Total runtime: 16672.000 ms\"\n\nI am going to reload the data bases, just to see what I get.\nI am thinking I may have to flatten the files for postgres (eliminate joins\nof any kind for reporting etc). Might make a good deal more data, but I\nthink from the app's point of view it is a good idea anyway, just not sure\nhow to handle editing.\n\nJoel Fradkin\n \n\"Merge Join (cost=49697.60..50744.71 rows=14987 width=113) (actual\ntime=11301.160..12171.072 rows=160593 loops=1)\"\n\" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Sort (cost=788.81..789.89 rows=432 width=49) (actual\ntime=3.318..3.603 rows=441 loops=1)\"\n\" Sort Key: l.locationid\"\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..769.90 rows=432 width=49) (actual time=0.145..2.283 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\" -> Sort (cost=48908.79..49352.17 rows=177352 width=75) (actual\ntime=11297.774..11463.780 rows=160594 loops=1)\"\n\" Sort Key: a.locationid\"\n\" -> Merge Right Join (cost=26247.95..28942.93 rows=177352\nwidth=75) (actual time=8357.010..9335.362 rows=177041 loops=1)\"\n\" Merge Cond: (((\"outer\".clientnum)::text =\n\"inner\".\"?column10?\") AND (\"outer\".id = \"inner\".jobtitleid))\"\n\" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt\n(cost=0.00..243.76 rows=6604 width=37) (actual time=0.122..12.049 rows=5690\nloops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Sort (cost=26247.95..26691.33 rows=177352 width=53)\n(actual time=8342.271..8554.943 rows=177041 loops=1)\"\n\" Sort Key: (a.clientnum)::text, a.jobtitleid\"\n\" -> Index Scan using ix_associate_clientnum on\ntblassociate a (cost=0.00..10786.17 rows=177352 width=53) (actual\ntime=0.166..1126.052 rows=177041 loops=1)\"\n\" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n\"Total runtime: 12287.502 ms\"\n\n\nThis is above and beyond toying with the column statistics. You\nare basically telling the planner to use an index. Try this,\nand post the EXPLAIN ANALYZE for the seqscan = off case on the\nslow box if it doesn't speed things up for you.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 15 Apr 2005 09:12:25 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
},
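Joel's idea of flattening the data for reporting can be sketched directly in SQL. Only the view name comes from this thread; the table name and refresh policy below are made up for illustration.

-- Materialize the reporting view into a plain, join-free table (hypothetical name):
CREATE TABLE rpt_assoclist AS SELECT * FROM viwassoclist;
CREATE INDEX rpt_assoclist_clientnum ON rpt_assoclist (clientnum);
ANALYZE rpt_assoclist;

-- Periodic refresh (e.g. nightly from cron), trading some staleness for fast reads:
TRUNCATE rpt_assoclist;
INSERT INTO rpt_assoclist SELECT * FROM viwassoclist;
ANALYZE rpt_assoclist;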
{
"msg_contents": "\"Joel Fradkin\" <[email protected]> writes:\n> \"Merge Join (cost=49697.60..50744.71 rows=14987 width=113) (actual\n> time=11301.160..12171.072 rows=160593 loops=1)\"\n> \" Merge Cond: (\"outer\".locationid = \"inner\".locationid)\"\n> \" -> Sort (cost=788.81..789.89 rows=432 width=49) (actual\n> time=3.318..3.603 rows=441 loops=1)\"\n> \" Sort Key: l.locationid\"\n> \" -> Index Scan using ix_location on tbllocation l\n> (cost=0.00..769.90 rows=432 width=49) (actual time=0.145..2.283 rows=441\n> loops=1)\"\n> \" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n> \" -> Sort (cost=48908.79..49352.17 rows=177352 width=75) (actual\n> time=11297.774..11463.780 rows=160594 loops=1)\"\n> \" Sort Key: a.locationid\"\n> \" -> Merge Right Join (cost=26247.95..28942.93 rows=177352\n> width=75) (actual time=8357.010..9335.362 rows=177041 loops=1)\"\n> \" Merge Cond: (((\"outer\".clientnum)::text =\n> \"inner\".\"?column10?\") AND (\"outer\".id = \"inner\".jobtitleid))\"\n> \" -> Index Scan using ix_tbljobtitle_id on tbljobtitle jt\n> (cost=0.00..243.76 rows=6604 width=37) (actual time=0.122..12.049 rows=5690\n> loops=1)\"\n> \" Filter: (1 = presentationid)\"\n> \" -> Sort (cost=26247.95..26691.33 rows=177352 width=53)\n> (actual time=8342.271..8554.943 rows=177041 loops=1)\"\n> \" Sort Key: (a.clientnum)::text, a.jobtitleid\"\n> \" -> Index Scan using ix_associate_clientnum on\n> tblassociate a (cost=0.00..10786.17 rows=177352 width=53) (actual\n> time=0.166..1126.052 rows=177041 loops=1)\"\n> \" Index Cond: ((clientnum)::text = 'SAKS'::text)\"\n> \"Total runtime: 12287.502 ms\"\n\nIt strikes me as odd that the thing isn't considering hash joins for\nat least some of these steps. Can you force it to (by setting\nenable_mergejoin off)? If not, what are the datatypes of the join\ncolumns exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 10:06:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry? "
},
{
"msg_contents": "\n\nJoel Fradkin\n \nTurning off merg joins seems to of done it but what do I need to do so I am\nnot telling the system explicitly not to use them, I must be missing some\nsetting?\n\nOn linux box.\n\nexplain analyze select * from viwassoclist where clientnum ='SAKS'\n\n\"Hash Join (cost=988.25..292835.36 rows=15773 width=113) (actual\ntime=23.514..3024.064 rows=160593 loops=1)\"\n\" Hash Cond: (\"outer\".locationid = \"inner\".locationid)\"\n\" -> Hash Left Join (cost=185.57..226218.77 rows=177236 width=75) (actual\ntime=21.147..2221.098 rows=177041 loops=1)\"\n\" Hash Cond: ((\"outer\".jobtitleid = \"inner\".id) AND\n((\"outer\".clientnum)::text = (\"inner\".clientnum)::text))\"\n\" -> Seq Scan on tblassociate a (cost=0.00..30851.25 rows=177236\nwidth=53) (actual time=0.390..1095.385 rows=177041 loops=1)\"\n\" Filter: ((clientnum)::text = 'SAKS'::text)\"\n\" -> Hash (cost=152.55..152.55 rows=6604 width=37) (actual\ntime=20.609..20.609 rows=0 loops=1)\"\n\" -> Seq Scan on tbljobtitle jt (cost=0.00..152.55 rows=6604\nwidth=37) (actual time=0.033..12.319 rows=6603 loops=1)\"\n\" Filter: (1 = presentationid)\"\n\" -> Hash (cost=801.54..801.54 rows=454 width=49) (actual\ntime=2.196..2.196 rows=0 loops=1)\"\n\" -> Index Scan using ix_location on tbllocation l\n(cost=0.00..801.54 rows=454 width=49) (actual time=0.111..1.755 rows=441\nloops=1)\"\n\" Index Cond: ('SAKS'::text = (clientnum)::text)\"\n\"Total runtime: 3120.366 ms\"\n\nhere are the table defs and view if that helps. I posted the config a while\nback, but can do it again if you need to see it.\n\nCREATE OR REPLACE VIEW viwassoclist AS \n SELECT a.clientnum, a.associateid, a.associatenum, a.lastname, a.firstname,\njt.value AS jobtitle, l.name AS \"location\", l.locationid AS mainlocationid,\nl.divisionid, l.regionid, l.districtid, (a.lastname::text || ', '::text) ||\na.firstname::text AS assocname, a.isactive, a.isdeleted\n FROM tblassociate a\n LEFT JOIN tbljobtitle jt ON a.jobtitleid = jt.id AND jt.clientnum::text =\na.clientnum::text AND 1 = jt.presentationid\n JOIN tbllocation l ON a.locationid = l.locationid AND l.clientnum::text =\na.clientnum::text;\n\nCREATE TABLE tblassociate\n(\n clientnum varchar(16) NOT NULL,\n associateid int4 NOT NULL,\n associatenum varchar(10),\n firstname varchar(50),\n middleinit varchar(5),\n lastname varchar(50),\n ssn varchar(18),\n dob timestamp,\n address varchar(100),\n city varchar(50),\n state varchar(50),\n country varchar(50),\n zip varchar(10),\n homephone varchar(14),\n cellphone varchar(14),\n pager varchar(14),\n associateaccount varchar(50),\n doh timestamp,\n dot timestamp,\n rehiredate timestamp,\n lastdayworked timestamp,\n staffexecid int4,\n jobtitleid int4,\n locationid int4,\n deptid int4,\n positionnum int4,\n worktypeid int4,\n sexid int4,\n maritalstatusid int4,\n ethnicityid int4,\n weight float8,\n heightfeet int4,\n heightinches int4,\n haircolorid int4,\n eyecolorid int4,\n isonalarmlist bool NOT NULL DEFAULT false,\n isactive bool NOT NULL DEFAULT true,\n ismanager bool NOT NULL DEFAULT false,\n issecurity bool NOT NULL DEFAULT false,\n createdbyid int4,\n isdeleted bool NOT NULL DEFAULT false,\n militarybranchid int4,\n militarystatusid int4,\n patrontypeid int4,\n identificationtypeid int4,\n workaddress varchar(200),\n testtypeid int4,\n testscore int4,\n pin int4,\n county varchar(50),\n CONSTRAINT pk_tblassociate PRIMARY KEY (clientnum, associateid),\n CONSTRAINT ix_tblassociate UNIQUE (clientnum, associatenum)\n)\nCREATE TABLE tbljobtitle\n(\n 
clientnum varchar(16) NOT NULL,\n id int4 NOT NULL,\n value varchar(50),\n code varchar(16),\n isdeleted bool DEFAULT false,\n presentationid int4 NOT NULL DEFAULT 1,\n CONSTRAINT pk_tbljobtitle PRIMARY KEY (clientnum, id, presentationid)\n)\nCREATE TABLE tbllocation\n(\n clientnum varchar(16) NOT NULL,\n locationid int4 NOT NULL,\n districtid int4 NOT NULL,\n regionid int4 NOT NULL,\n divisionid int4 NOT NULL,\n locationnum varchar(8),\n name varchar(50),\n clientlocnum varchar(50),\n address varchar(100),\n address2 varchar(100),\n city varchar(50),\n state varchar(2) NOT NULL DEFAULT 'zz'::character varying,\n zip varchar(10),\n countryid int4,\n phone varchar(15),\n fax varchar(15),\n payname varchar(40),\n contact char(36),\n active bool NOT NULL DEFAULT true,\n coiprogram text,\n coilimit text,\n coiuser varchar(255),\n coidatetime varchar(32),\n ec_note_field varchar(1050),\n locationtypeid int4,\n open_time timestamp,\n close_time timestamp,\n insurance_loc_id varchar(50),\n lpregionid int4,\n sic int4,\n CONSTRAINT pk_tbllocation PRIMARY KEY (clientnum, locationid),\n CONSTRAINT ix_tbllocation_1 UNIQUE (clientnum, locationnum, name),\n CONSTRAINT ix_tbllocation_unique_number UNIQUE (clientnum, divisionid,\nregionid, districtid, locationnum)\n)\n\nIt strikes me as odd that the thing isn't considering hash joins for\nat least some of these steps. Can you force it to (by setting\nenable_mergejoin off)? If not, what are the datatypes of the join\ncolumns exactly?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Apr 2005 14:15:15 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "\"Joel Fradkin\" <[email protected]> writes:\n> Turning off merg joins seems to of done it but what do I need to do so I am\n> not telling the system explicitly not to use them, I must be missing some\n> setting?\n\n> \" -> Hash Left Join (cost=185.57..226218.77 rows=177236 width=75) (actual\n> time=21.147..2221.098 rows=177041 loops=1)\"\n> \" Hash Cond: ((\"outer\".jobtitleid = \"inner\".id) AND\n> ((\"outer\".clientnum)::text = (\"inner\".clientnum)::text))\"\n\nIt's overestimating the cost of this join for some reason ... and I\nthink I see why. It's not accounting for the combined effect of the\ntwo hash clauses, only for the \"better\" one. What are the statistics\nfor tbljobtitle.id and tbljobtitle.clientnum --- how many distinct\nvalues of each, and are the distributions skewed to a few popular values?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 18:38:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry? "
}
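Not part of the original exchange, but the usual way to answer Tom's question about distinct values and skew is to look at pg_stats, and to raise the statistics target if the estimates look off:

-- What the planner currently believes about these columns:
SELECT attname, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE tablename = 'tbljobtitle'
   AND attname IN ('id', 'clientnum');

-- Or count directly if the statistics look stale:
SELECT count(DISTINCT id) AS distinct_ids,
       count(DISTINCT clientnum) AS distinct_clients
  FROM tbljobtitle;

-- If the estimates are badly off, a larger sample may help:
ALTER TABLE tbljobtitle ALTER COLUMN clientnum SET STATISTICS 100;
ANALYZE tbljobtitle;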
] |
[
{
"msg_contents": "Hi,\ni am thinking about swiching to plperl as it seems to me much more \nflexible and easier to create functions.\n\nwhat is the recommended PL for postgres? or which one is most widely \nused / most popular?\nis there a performance difference between plpgsql and plperl ?\n\nporting to other systems is not a real issue as all my servers have perl \ninstalled.\n\nThanks for any advice\n\nAlex\n\n\n",
"msg_date": "Fri, 15 Apr 2005 21:49:07 +1000",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "plperl vs plpgsql"
},
{
"msg_contents": "After takin a swig o' Arrakan spice grog, [email protected] (Alex) belched out:\n> i am thinking about swiching to plperl as it seems to me much more\n> flexible and easier to create functions.\n>\n> what is the recommended PL for postgres? or which one is most widely\n> used / most popular?\n> is there a performance difference between plpgsql and plperl ?\n\nIf what you're trying to do is \"munge text,\" pl/perl will be a whole\nlot more suitable than pl/pgsql because it has a rich set of text\nmungeing tools and string functions which pl/pgsql lacks.\n\nIf you intend to do a lot of work involving reading unmunged tuples\nfrom this table and that, pl/pgsql provides a much more natural\nsyntax, and will probably be a bit faster as the query processor may\neven be able to expand some of the actions, rather than needing to\ntreat Perl code as an \"opaque blob.\"\n\nI would definitely be inclined to use the more natural language for\nthe given task...\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\nhttp://linuxdatabases.info/info/internet.html\n\"If you want to talk with some experts about something, go to the bar\nwhere they hang out, buy a round of beers, and they'll surely talk\nyour ear off, leaving you wiser than before.\n\nIf you, a stranger, show up at the bar, walk up to the table, and ask\nthem to fax you a position paper, they'll tell you to call their\noffice in the morning and ask for a rate sheet.\" -- Miguel Cruz\n",
"msg_date": "Fri, 15 Apr 2005 08:57:33 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plperl vs plpgsql"
},
{
"msg_contents": "Is there a performance difference between the two?\nwhich of the PL is most widely used. One problem i have with the plpgsql \nis that the quoting is really a pain.\n\n\n\nChristopher Browne wrote:\n\n>After takin a swig o' Arrakan spice grog, [email protected] (Alex) belched out:\n> \n>\n>>i am thinking about swiching to plperl as it seems to me much more\n>>flexible and easier to create functions.\n>>\n>>what is the recommended PL for postgres? or which one is most widely\n>>used / most popular?\n>>is there a performance difference between plpgsql and plperl ?\n>> \n>>\n>\n>If what you're trying to do is \"munge text,\" pl/perl will be a whole\n>lot more suitable than pl/pgsql because it has a rich set of text\n>mungeing tools and string functions which pl/pgsql lacks.\n>\n>If you intend to do a lot of work involving reading unmunged tuples\n>from this table and that, pl/pgsql provides a much more natural\n>syntax, and will probably be a bit faster as the query processor may\n>even be able to expand some of the actions, rather than needing to\n>treat Perl code as an \"opaque blob.\"\n>\n>I would definitely be inclined to use the more natural language for\n>the given task...\n> \n>\n\n\n",
"msg_date": "Sun, 17 Apr 2005 22:56:47 +1000",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: plperl vs plpgsql"
},
{
"msg_contents": "On 2005-04-17 14:56, Alex wrote:\n> Is there a performance difference between the two?\n\nAs Christopher already pointed out, it depends on what you want to do.\nIf you're doing some complex string processing, it will be easier (and\nin some cases) faster to do in plperl, if you're mainly dealing with\nsets, plpgsql will be better suited.\n\n> which of the PL is most widely used.\n\nplpgsql.\n\n> One problem i have with the plpgsql\n> is that the quoting is really a pain.\n\nIn current versions of PostgreSQL you can use $$ quoting, which should\nmake your life easier:\nhttp://www.postgresql.org/docs/8.0/static/plpgsql-structure.html\nhttp://www.postgresql.org/docs/8.0/static/plperl.html\n\n\nHTH,\nstefan\n",
"msg_date": "Sun, 17 Apr 2005 15:32:37 +0200",
"msg_from": "Stefan Weiss <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plperl vs plpgsql"
},
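To make the quoting point concrete, here is the same trivial function written both ways. This is just a sketch: the function names are made up, and $$ quoting needs an 8.0 server with plpgsql and plperl installed.

CREATE FUNCTION greet_plpgsql(text) RETURNS text AS $$
BEGIN
    -- set-oriented/database work reads naturally in plpgsql
    RETURN 'Hello, ' || $1 || '!';
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION greet_plperl(text) RETURNS text AS $$
    # string munging is where Perl shines
    my ($name) = @_;
    return "Hello, $name!";
$$ LANGUAGE plperl;

SELECT greet_plpgsql('world'), greet_plperl('world');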
{
"msg_contents": "On Sun, 17 Apr 2005, Alex wrote:\n\n> Is there a performance difference between the two?\n\nHello,\n\nIt depends on what you are using it for. My experience is that for some\nreason plPGSQL is faster when looping but other than that they should\nbe very similar.\n\n\n> which of the PL is most widely used. One problem i have with the plpgsql is \n> that the quoting is really a pain.\n\nplpgsql but I believe that will change in a short period of time.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n\n",
"msg_date": "Sun, 17 Apr 2005 06:51:38 -0700 (PDT)",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plperl vs plpgsql"
},
{
"msg_contents": "After a long battle with technology, [email protected] (Alex), an earthling, wrote:\n> Christopher Browne wrote:\n>>After takin a swig o' Arrakan spice grog, [email protected] (Alex) belched out:\n>>>i am thinking about swiching to plperl as it seems to me much more\n>>>flexible and easier to create functions.\n>>>\n>>>what is the recommended PL for postgres? or which one is most widely\n>>>used / most popular?\n>>>is there a performance difference between plpgsql and plperl ?\n>>>\n>>>\n>>\n>>If what you're trying to do is \"munge text,\" pl/perl will be a whole\n>>lot more suitable than pl/pgsql because it has a rich set of text\n>>mungeing tools and string functions which pl/pgsql lacks.\n>>\n>>If you intend to do a lot of work involving reading unmunged tuples\n>>from this table and that, pl/pgsql provides a much more natural\n>>syntax, and will probably be a bit faster as the query processor may\n>>even be able to expand some of the actions, rather than needing to\n>>treat Perl code as an \"opaque blob.\"\n>>\n>>I would definitely be inclined to use the more natural language for\n>>the given task...\n\n> Is there a performance difference between the two?\n> which of the PL is most widely used. One problem i have with the\n> plpgsql is that the quoting is really a pain.\n\nYou seem to be inclined to play the mistaken game of \"Which language\nis the fastest?\" which encourages myopic use of bad benchmarks.\n\nIn 8.0, quoting in pl/pgsql is less of a pain, as you can use $$ as\nthe begin/end indicators.\n\nPerformance will always depend on what you're doing.\n\n- If you doing heavy amounts of \"text munging,\" Perl has highly\n optimized library routines that you're likely to be taking\n advantage of which will likely be way faster than any pl/pgsql\n equivalent.\n\n- If you are writing \"set operations,\" operating on table data,\n the fact that pl/pgsql won't need to 'context switch' between\n language mode and 'accessing data from the database' mode will\n probably make it a bit quicker than pl/Perl.\n\n- If you need some sort of \"ultimate fastness,\" then you might look to\n writing in a language that compiles to assembler so that your loops\n will run as quick and tight as possible, which would encourage\n writing stored procedures in C. Alas, this is _way_ harder to debug\n and deploy, and errors could pretty readily destroy your database\n instance if they were sufficiently awful.\n\npl/pgsql is almost certainly the most widely used procedural language,\nif you're into \"popularity contests.\"\n\nI would be very much inclined to start with whichever language makes\nit the easiest to write and maintain the algorithms you plan to write.\nI would only move to another language if the initial choice proved to\n_systematically_ be a conspicuous bottleneck.\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://linuxdatabases.info/info/linuxdistributions.html\n\"One of the most dangerous things in the universe is an ignorant\npeople with real grievances. That is nowhere near as dangerous,\nhowever, as an informed and intelligent society with grievances. The\ndamage that vengeful intelligence can wreak, you cannot even imagine.\"\n-- Miles Teg, Heretics of Dune\n",
"msg_date": "Sun, 17 Apr 2005 19:17:18 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plperl vs plpgsql"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Thursday, April 14, 2005 6:15 PM\n> To: Dave Held\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> \n> Looking at the numbers, the raptor with TCQ enabled was close or\n> beat the Atlas III 10k drive on most benchmarks.\n\nAnd I would be willing to bet that the Atlas 10k is not using the\nsame generation of technology as the Raptors.\n\n> Naturaly a 15k drive is going to be faster in many areas, but it\n> is also much more expensive. It was only 44% better on the server\n> tests than the raptor with TCQ, but it costs nearly 300% more ($538\n> cdw.com, $180 newegg.com).\n\nState that in terms of cars. Would you be willing to pay 300% more\nfor a car that is 44% faster than your competitor's? Of course you\nwould, because we all recognize that the cost of speed/performance\ndoes not scale linearly. Naturally, you buy the best speed that you\ncan afford, but when it comes to hard drives, the only major feature\nwhose price tends to scale anywhere close to linearly is capacity.\n\n> Note also that the 15k drive was the only drive that kept up with\n> the raptor on raw transfer speed, which is going to matter for WAL.\n\nSo get a Raptor for your WAL partition. ;)\n\n> [...]\n> The Raptor drives can be had for as little as $180/ea, which is\n> quite a good price point considering they can keep up with their\n> SCSI 10k RPM counterparts on almost all tests with NCQ enabled\n> (Note that 3ware controllers _don't_ support NCQ, although they\n> claim their HBA based queueing is 95% as good as NCQ on the drive).\n\nJust keep in mind the points made by the Seagate article. You're\nbuying much more than just performance for that $500+. You're also\nbuying vibrational tolerance, high MTBF, better internal \nenvironmental controls, and a pretty significant margin on seek time,\nwhich is probably your most important feature for disks storing tables.\nAn interesting test would be to stick several drives in a cabinet and\ngraph how performance is affected at the different price points/\ntechnologies/number of drives.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Fri, 15 Apr 2005 08:40:13 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "Dave wrote \"An interesting test would be to stick several drives in a\ncabinet and\ngraph how performance is affected at the different price points/\ntechnologies/number of drives.\"\n\n From the discussion on the $7k server thread, it seems the RAID controller\nwould\nbe an important data point also. And RAID level. And application\nload/kind.\n\nHmmm. I just talked myself out of it. Seems like I'd end up with\nsomething\nakin to those database benchmarks we all love to hate.\n\nRick\n\[email protected] wrote on 04/15/2005 08:40:13 AM:\n\n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Thursday, April 14, 2005 6:15 PM\n> > To: Dave Held\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> >\n> > Looking at the numbers, the raptor with TCQ enabled was close or\n> > beat the Atlas III 10k drive on most benchmarks.\n>\n> And I would be willing to bet that the Atlas 10k is not using the\n> same generation of technology as the Raptors.\n>\n> > Naturaly a 15k drive is going to be faster in many areas, but it\n> > is also much more expensive. It was only 44% better on the server\n> > tests than the raptor with TCQ, but it costs nearly 300% more ($538\n> > cdw.com, $180 newegg.com).\n>\n> State that in terms of cars. Would you be willing to pay 300% more\n> for a car that is 44% faster than your competitor's? Of course you\n> would, because we all recognize that the cost of speed/performance\n> does not scale linearly. Naturally, you buy the best speed that you\n> can afford, but when it comes to hard drives, the only major feature\n> whose price tends to scale anywhere close to linearly is capacity.\n>\n> > Note also that the 15k drive was the only drive that kept up with\n> > the raptor on raw transfer speed, which is going to matter for WAL.\n>\n> So get a Raptor for your WAL partition. ;)\n>\n> > [...]\n> > The Raptor drives can be had for as little as $180/ea, which is\n> > quite a good price point considering they can keep up with their\n> > SCSI 10k RPM counterparts on almost all tests with NCQ enabled\n> > (Note that 3ware controllers _don't_ support NCQ, although they\n> > claim their HBA based queueing is 95% as good as NCQ on the drive).\n>\n> Just keep in mind the points made by the Seagate article. You're\n> buying much more than just performance for that $500+. You're also\n> buying vibrational tolerance, high MTBF, better internal\n> environmental controls, and a pretty significant margin on seek time,\n> which is probably your most important feature for disks storing tables.\n> An interesting test would be to stick several drives in a cabinet and\n> graph how performance is affected at the different price points/\n> technologies/number of drives.\n>\n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 15 Apr 2005 09:21:02 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "The original thread was how much can I get for $7k\n\nYou can't fit a 15k RPM SCSI solution into $7K ;) Some of us are on a budget!\n\n10k RPM SATA drives give acceptable performance at a good price, thats\nreally the point here.\n\nI have never really argued that SATA is going to match SCSI\nperformance on multidrive arrays for IO/sec. But it's all about the\nbenjamins baby. If I told my boss we need $25k for a database\nmachine, he'd tell me that was impossible, and I have $5k to do it. \nIf I tell him $7k - he will swallow that. We don't _need_ the amazing\nperformance of a 15k RPM drive config. Our biggest hit is reads, so\nwe can buy 3xSATA machines and load balance. It's all about the\napplication, and buying what is appropriate. I don't buy a Corvette\nif all I need is a malibu.\n\nAlex Turner\nnetEconomist\n\nOn 4/15/05, Dave Held <[email protected]> wrote:\n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Thursday, April 14, 2005 6:15 PM\n> > To: Dave Held\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> >\n> > Looking at the numbers, the raptor with TCQ enabled was close or\n> > beat the Atlas III 10k drive on most benchmarks.\n> \n> And I would be willing to bet that the Atlas 10k is not using the\n> same generation of technology as the Raptors.\n> \n> > Naturaly a 15k drive is going to be faster in many areas, but it\n> > is also much more expensive. It was only 44% better on the server\n> > tests than the raptor with TCQ, but it costs nearly 300% more ($538\n> > cdw.com, $180 newegg.com).\n> \n> State that in terms of cars. Would you be willing to pay 300% more\n> for a car that is 44% faster than your competitor's? Of course you\n> would, because we all recognize that the cost of speed/performance\n> does not scale linearly. Naturally, you buy the best speed that you\n> can afford, but when it comes to hard drives, the only major feature\n> whose price tends to scale anywhere close to linearly is capacity.\n> \n> > Note also that the 15k drive was the only drive that kept up with\n> > the raptor on raw transfer speed, which is going to matter for WAL.\n> \n> So get a Raptor for your WAL partition. ;)\n> \n> > [...]\n> > The Raptor drives can be had for as little as $180/ea, which is\n> > quite a good price point considering they can keep up with their\n> > SCSI 10k RPM counterparts on almost all tests with NCQ enabled\n> > (Note that 3ware controllers _don't_ support NCQ, although they\n> > claim their HBA based queueing is 95% as good as NCQ on the drive).\n> \n> Just keep in mind the points made by the Seagate article. You're\n> buying much more than just performance for that $500+. You're also\n> buying vibrational tolerance, high MTBF, better internal\n> environmental controls, and a pretty significant margin on seek time,\n> which is probably your most important feature for disks storing tables.\n> An interesting test would be to stick several drives in a cabinet and\n> graph how performance is affected at the different price points/\n> technologies/number of drives.\n> \n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. 
East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Fri, 15 Apr 2005 11:01:56 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "This is a different thread that the $7k server thread.\nGreg Stark started it and wrote:\n \n \"I'm also wondering about whether I'm better off with one of these \n SATA raid \n controllers or just going with SCSI drives.\" \n \n \n\nRick\n\[email protected] wrote on 04/15/2005 10:01:56 AM:\n\n> The original thread was how much can I get for $7k\n>\n> You can't fit a 15k RPM SCSI solution into $7K ;) Some of us are ona\nbudget!\n>\n> 10k RPM SATA drives give acceptable performance at a good price, thats\n> really the point here.\n>\n> I have never really argued that SATA is going to match SCSI\n> performance on multidrive arrays for IO/sec. But it's all about the\n> benjamins baby. If I told my boss we need $25k for a database\n> machine, he'd tell me that was impossible, and I have $5k to do it.\n> If I tell him $7k - he will swallow that. We don't _need_ the amazing\n> performance of a 15k RPM drive config. Our biggest hit is reads, so\n> we can buy 3xSATA machines and load balance. It's all about the\n> application, and buying what is appropriate. I don't buy a Corvette\n> if all I need is a malibu.\n>\n> Alex Turner\n> netEconomist\n>\n> On 4/15/05, Dave Held <[email protected]> wrote:\n> > > -----Original Message-----\n> > > From: Alex Turner [mailto:[email protected]]\n> > > Sent: Thursday, April 14, 2005 6:15 PM\n> > > To: Dave Held\n> > > Cc: [email protected]\n> > > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> > >\n> > > Looking at the numbers, the raptor with TCQ enabled was close or\n> > > beat the Atlas III 10k drive on most benchmarks.\n> >\n> > And I would be willing to bet that the Atlas 10k is not using the\n> > same generation of technology as the Raptors.\n> >\n> > > Naturaly a 15k drive is going to be faster in many areas, but it\n> > > is also much more expensive. It was only 44% better on the server\n> > > tests than the raptor with TCQ, but it costs nearly 300% more ($538\n> > > cdw.com, $180 newegg.com).\n> >\n> > State that in terms of cars. Would you be willing to pay 300% more\n> > for a car that is 44% faster than your competitor's? Of course you\n> > would, because we all recognize that the cost of speed/performance\n> > does not scale linearly. Naturally, you buy the best speed that you\n> > can afford, but when it comes to hard drives, the only major feature\n> > whose price tends to scale anywhere close to linearly is capacity.\n> >\n> > > Note also that the 15k drive was the only drive that kept up with\n> > > the raptor on raw transfer speed, which is going to matter for WAL.\n> >\n> > So get a Raptor for your WAL partition. ;)\n> >\n> > > [...]\n> > > The Raptor drives can be had for as little as $180/ea, which is\n> > > quite a good price point considering they can keep up with their\n> > > SCSI 10k RPM counterparts on almost all tests with NCQ enabled\n> > > (Note that 3ware controllers _don't_ support NCQ, although they\n> > > claim their HBA based queueing is 95% as good as NCQ on the drive).\n> >\n> > Just keep in mind the points made by the Seagate article. You're\n> > buying much more than just performance for that $500+. 
You're also\n> > buying vibrational tolerance, high MTBF, better internal\n> > environmental controls, and a pretty significant margin on seek time,\n> > which is probably your most important feature for disks storing tables.\n> > An interesting test would be to stick several drives in a cabinet and\n> > graph how performance is affected at the different price points/\n> > technologies/number of drives.\n> >\n> > __\n> > David B. Held\n> > Software Engineer/Array Services Group\n> > 200 14th Ave. East, Sartell, MN 56377\n> > 320.534.3637 320.253.7800 800.752.8129\n> >\n> > ---------------------------(end of\nbroadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Fri, 15 Apr 2005 10:20:20 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "On Apr 15, 2005, at 11:01 AM, Alex Turner wrote:\n\n> You can't fit a 15k RPM SCSI solution into $7K ;) Some of us are on a \n> budget!\n>\n\nI just bought a pair of Dual Opteron, 4GB RAM, LSI 320-2X RAID dual \nchannel with 8 36GB 15kRPM seagate drives. Each one of these boxes set \nme back just over $7k, including onsite warrantee.\n\nThey totally blow away the Dell Dual XEON with external 14 disk RAID \n(also 15kRPM drives, manufacturer unknown) which also has 4GB RAM and a \nDell PERC 3/DC controller, the whole of which set me back over $15k.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Fri, 15 Apr 2005 11:52:47 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
},
{
"msg_contents": "I stand corrected!\n\nMaybe I should re-evaluate our own config!\n\nAlex T\n\n(The dell PERC controllers do pretty much suck on linux)\n\nOn 4/15/05, Vivek Khera <[email protected]> wrote:\n> \n> On Apr 15, 2005, at 11:01 AM, Alex Turner wrote:\n> \n> > You can't fit a 15k RPM SCSI solution into $7K ;) Some of us are on a\n> > budget!\n> >\n> \n> I just bought a pair of Dual Opteron, 4GB RAM, LSI 320-2X RAID dual\n> channel with 8 36GB 15kRPM seagate drives. Each one of these boxes set\n> me back just over $7k, including onsite warrantee.\n> \n> They totally blow away the Dell Dual XEON with external 14 disk RAID\n> (also 15kRPM drives, manufacturer unknown) which also has 4GB RAM and a\n> Dell PERC 3/DC controller, the whole of which set me back over $15k.\n> \n> Vivek Khera, Ph.D.\n> +1-301-869-4449 x806\n> \n> \n>\n",
"msg_date": "Fri, 15 Apr 2005 12:07:58 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Friday, April 15, 2005 9:44 AM\n> To: Marinos Yannikos\n> Cc: Joshua D. Drake; Mohan, Ross; [email protected]\n> Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> \n> No offense to that review, but it was really wasn't that good,\n> and drew bad conclusions from the data. I posted it originaly\n> and immediately regretted it.\n\nI didn't read the whole thing, but it didn't seem that bad to me.\n\n> See http://www.tweakers.net/reviews/557/18\n> \n> Amazingly the controller with 1Gig cache manages a write throughput\n> of 750MB/sec on a single drive.\n> \n> quote:\n> \"Floating high above the crowd, the ARC-1120 has a perfect view on\n> the struggles of the other adapters. \"\n> \n> It's because the adapter has 1Gig of RAM, nothing to do with the RAID\n> architecture, it's clearly caching the entire dataset. The drive\n> can't physicaly run that fast. \n\nAnd that's pretty much exactly what the article says. Even before the\npart you quoted. Not sure what the problem is there.\n\n> These guys really don't know what they are doing.\n\nThey weren't pretending that the drive array was serving up data at\nthat rate directly from the physical media. They clearly indicated\nthat they were testing controller cache speed with the small test.\n\n> Curiously:\n> http://www.tweakers.net/reviews/557/25\n> \n> The 3ware does very well as a data drive for MySQL.\n> [...]\n\nIf you take a close look, they pretty much outright say that the Areca\ncontroller does very poorly on the random accesses typical of DB work.\nThey also specifically mention that the 3ware still dominates the\ncompetition in this area.\n\nDave\n\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Fri, 15 Apr 2005 10:09:45 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel SRCS16 SATA raid?"
}
] |
[
{
"msg_contents": "\nSorry to blend threads, but in my kinda longish, somewhat thankless, \nessentially anonymous, and quite average career as a dba, I have \nfound that the 7K would be best spent on a definitive end-to-end\n\"application critical path\" test (pretty easy to instrument apps\nand lash on test harnesses these days). \n\n\nIf it's \"the disk subsystem\", then by all means, spend the 7K there. \n\nIf the \"7K$\" is for \"hardware only\", then disk is always a good choice. For\na really small shop, maybe it's an upgrade to a dual CPU opteron MOBO, eg. \ndunno.\n\nIf, however, in the far-more-likely case that the application code\nor system/business process is the throttle point, it'd be a great\nuse of money to have a test report showing that to the \"higher ups\". \nThat's where the best scalability bang-for-buck can be made. \n\n\n- Ross\n\np.s. having said this, and as already been noted \"7K\" ain't\n going to buy that much....maybe the ability to go RAID 10?\n \np.p.s Why don't we start a PGSQL-7K listserv, to handle this EPIC thread? :-)\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Friday, April 15, 2005 11:20 AM\nTo: Alex Turner\nCc: Dave Held; [email protected]; [email protected]\nSubject: Re: [PERFORM] Intel SRCS16 SATA raid?\n\n\nThis is a different thread that the $7k server thread.\nGreg Stark started it and wrote:\n \n \"I'm also wondering about whether I'm better off with one of these \n SATA raid \n controllers or just going with SCSI drives.\" \n \n \n\nRick\n\[email protected] wrote on 04/15/2005 10:01:56 AM:\n\n> The original thread was how much can I get for $7k\n>\n> You can't fit a 15k RPM SCSI solution into $7K ;) Some of us are ona\nbudget!\n>\n> 10k RPM SATA drives give acceptable performance at a good price, thats \n> really the point here.\n>\n> I have never really argued that SATA is going to match SCSI \n> performance on multidrive arrays for IO/sec. But it's all about the \n> benjamins baby. If I told my boss we need $25k for a database \n> machine, he'd tell me that was impossible, and I have $5k to do it. If \n> I tell him $7k - he will swallow that. We don't _need_ the amazing \n> performance of a 15k RPM drive config. Our biggest hit is reads, so \n> we can buy 3xSATA machines and load balance. It's all about the \n> application, and buying what is appropriate. I don't buy a Corvette \n> if all I need is a malibu.\n>\n> Alex Turner\n> netEconomist\n>\n> On 4/15/05, Dave Held <[email protected]> wrote:\n> > > -----Original Message-----\n> > > From: Alex Turner [mailto:[email protected]]\n> > > Sent: Thursday, April 14, 2005 6:15 PM\n> > > To: Dave Held\n> > > Cc: [email protected]\n> > > Subject: Re: [PERFORM] Intel SRCS16 SATA raid?\n> > >\n> > > Looking at the numbers, the raptor with TCQ enabled was close or \n> > > beat the Atlas III 10k drive on most benchmarks.\n> >\n> > And I would be willing to bet that the Atlas 10k is not using the \n> > same generation of technology as the Raptors.\n> >\n> > > Naturaly a 15k drive is going to be faster in many areas, but it \n> > > is also much more expensive. It was only 44% better on the server \n> > > tests than the raptor with TCQ, but it costs nearly 300% more \n> > > ($538 cdw.com, $180 newegg.com).\n> >\n> > State that in terms of cars. Would you be willing to pay 300% more \n> > for a car that is 44% faster than your competitor's? 
Of course you \n> > would, because we all recognize that the cost of speed/performance \n> > does not scale linearly. Naturally, you buy the best speed that you \n> > can afford, but when it comes to hard drives, the only major feature \n> > whose price tends to scale anywhere close to linearly is capacity.\n> >\n> > > Note also that the 15k drive was the only drive that kept up with \n> > > the raptor on raw transfer speed, which is going to matter for \n> > > WAL.\n> >\n> > So get a Raptor for your WAL partition. ;)\n> >\n> > > [...]\n> > > The Raptor drives can be had for as little as $180/ea, which is \n> > > quite a good price point considering they can keep up with their \n> > > SCSI 10k RPM counterparts on almost all tests with NCQ enabled \n> > > (Note that 3ware controllers _don't_ support NCQ, although they \n> > > claim their HBA based queueing is 95% as good as NCQ on the \n> > > drive).\n> >\n> > Just keep in mind the points made by the Seagate article. You're \n> > buying much more than just performance for that $500+. You're also \n> > buying vibrational tolerance, high MTBF, better internal \n> > environmental controls, and a pretty significant margin on seek \n> > time, which is probably your most important feature for disks \n> > storing tables. An interesting test would be to stick several drives \n> > in a cabinet and graph how performance is affected at the different \n> > price points/ technologies/number of drives.\n> >\n> > __\n> > David B. Held\n> > Software Engineer/Array Services Group\n> > 200 14th Ave. East, Sartell, MN 56377\n> > 320.534.3637 320.253.7800 800.752.8129\n> >\n> > ---------------------------(end of\nbroadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n",
"msg_date": "Fri, 15 Apr 2005 16:17:46 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How to Improve\n\tw/7K$?"
},
{
"msg_contents": "Ross,\n\nI agree with you, but I' am the lowly intergrator/analyst, I have to solve the problem without all\nthe authority (sounds like a project manager). I originally started this thread since I had the $7k budget.\n\nI am not a dba/developer. but I play one on t.v., so I can only assume that throwing money\nat the application code means one understand what the bottleneck in the code and what it takes to fix it.\n\nIn this situation, the code is hidden by the vendor that connects to the database. So, besides persisent requests of the vendor to improve the area of the application, the balance of tuning lies with the hardware. The answer is *both* hardware and application code. Finding the right balance is key. Your mileage may vary.\n\nSteve Poe\n\n \n\n\n>If, however, in the far-more-likely case that the application code\n>or system/business process is the throttle point, it'd be a great\n>use of money to have a test report showing that to the \"higher ups\". \n>That's where the best scalability bang-for-buck can be made. \n\n\n- Ross\n\n>p.s. having said this, and as already been noted \"7K\" ain't\n> going to buy that much....maybe the ability to go RAID 10?\n \n>p.p.s Why don't we start a PGSQL-7K listserv, to handle this EPIC thread? :-) \n\n\n",
"msg_date": "Fri, 15 Apr 2005 11:11:40 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How"
}
] |
[
{
"msg_contents": "Greg, et al. \n\nI never found any evidence of a \"stop and get an intermediate request\"\nfunctionality in the TCQ protocol. \n\nIIRC, what is there is\n\n1) Ordered\n2) Head First\n3) Simple\n\nimplemented as choices. *VERY* roughly, that'd be like\n(1) disk subsystem satisfies requests as submitted, (2) let's\nthe \"this\" request be put at the very head of the per se disk\nqueue after the currently-running disk request is complete, and\n(3) is \"let the per se disk and it's software reorder the requests\non-hand as per it's onboard software\". (N.B. in the last, it's\nthe DISK not the controller making those decisions). (N.B. too, that\nthis last is essentially what NCQ (cf. TCQ) is doing )\n\nI know we've been batting around a hypothetical case of SCSI\nwhere it \"stops and gets smth. on the way\", but I can find\nno proof (yet) that this is done, pro forma, by SCSI drives. \n\nIn other words, SCSI is a necessary, but not sufficient cause\nfor intermediate reading. \n\nFWIW\n\n- Ross\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Greg Stark\nSent: Friday, April 15, 2005 2:02 PM\nTo: Tom Lane\nCc: Kevin Brown; [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nTom Lane <[email protected]> writes:\n\n> Yes, you can probably assume that blocks with far-apart numbers are \n> going to require a big seek, and you might even be right in supposing \n> that a block with an intermediate number should be read on the way. \n> But you have no hope at all of making the right decisions at a more \n> local level --- say, reading various sectors within the same cylinder \n> in an optimal fashion. You don't know where the track boundaries are, \n> so you can't schedule in a way that minimizes rotational latency. \n> You're best off to throw all the requests at the drive together and \n> let the drive sort it out.\n\nConsider for example three reads, one at the beginning of the disk, one at the very end, and one in the middle. If the three are performed in the logical order (assuming the head starts at the beginning), then the drive has to seek, say, 4ms to get to the middle and 4ms to get to the end.\n\nBut if the middle block requires a full rotation to reach it from when the head arrives that adds another 8ms of rotational delay (assuming a 7200RPM drive).\n\nWhereas the drive could have seeked over to the last block, then seeked back in 8ms and gotten there just in time to perform the read for free.\n\n\nI'm not entirely convinced this explains all of the SCSI drives' superior performance though. The above is about a worst-case scenario. should really only have a small effect, and it's not like the drive firmware can really schedule things perfectly either.\n\n\nI think most of the difference is that the drive manufacturers just don't package their high end drives with ATA interfaces. So there are no 10k RPM ATA drives and no 15k RPM ATA drives. I think WD is making fast SATA drives but most of the manufacturers aren't even doing that.\n\n-- \ngreg\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n",
"msg_date": "Fri, 15 Apr 2005 18:26:57 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "I think there are many people who feel that $7,000 is a good budget for a\ndatabase server, me being one.\n\n * I agree with the threads that more disks are better.\n * I also agree that SCSI is better, but can be hard to justify if your\nbudget is tight, and I have great certainty that 2x SATA drives on a good\ncontroller is better than x SCSI drives for many work loads.\n * I also feel that good database design and proper maintenance can be one\nof the single biggest performance enhancers available. This can be labor\nintensive, however, and sometimes throwing more hardware at a problem is\ncheaper than restructuring a db.\n\nEither way, having a good hardware platform is an excellent place to start,\nas much of your tuning will depend on certain aspects of your hardware.\n\nSo if you need a db server, and you have $7k to spend, I'd say spend it.\n>From this list, I've gathered that I/O and RAM are your two most important\ninvestments.\n\nOnce you get that figured out, you can still do some performance tuning on\nyour new server using the excellent advice from this mailing list.\n\nBy the way, for all those who make this list work, I've rarely found such a\nthorough, helpful and considerate group of people as these on the\nperformance list.\n\n-- \nMatthew Nuzum <[email protected]>\nwww.followers.net - Makers of \"Elite Content Management System\"\nView samples of Elite CMS in action by visiting\nhttp://www.followers.net/portfolio/\n\n\n\n",
"msg_date": "Fri, 15 Apr 2005 15:43:47 -0500",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How"
},
{
"msg_contents": "On Fri, 2005-04-15 at 15:43 -0500, Matthew Nuzum wrote:\n> I think there are many people who feel that $7,000 is a good budget for a\n> database server, me being one.\n\nThe budget for a database server is usually some %age of the value of\nthe data within the database or the value of it's availability. Big\nbudget hardware (well, going from $7k to $100k) often brings more\nredundancy and reliability improvement than performance improvement.\n\nIf you're going to lose $100k in business because the database was\nunavailable for 12 hours, then kick $75k into the hardware and call a\nprofit of $25k over 3 years (hardware lifetime is 3 years, catastrophic\nfailure happens once every 3 or so years...).\n\nDitto for backup systems. If the company depends on the data in the\ndatabase for it's survival, where bankruptcy or worse would happen as a\nresult of complete dataloss, then it would be a good idea to invest a\nsignificant amount of the companies revenue into making damn sure that\ndoesn't happen. Call it an insurance policy.\n\n\nPerformance for me dictates which hardware is purchased and\nconfiguration is used within $BUDGET, but $BUDGET itself is nearly\nalways defined by the value of the data stored.\n\n\n> * I agree with the threads that more disks are better.\n> * I also agree that SCSI is better, but can be hard to justify if your\n> budget is tight, and I have great certainty that 2x SATA drives on a good\n> controller is better than x SCSI drives for many work loads.\n> * I also feel that good database design and proper maintenance can be one\n> of the single biggest performance enhancers available. This can be labor\n> intensive, however, and sometimes throwing more hardware at a problem is\n> cheaper than restructuring a db.\n> \n> Either way, having a good hardware platform is an excellent place to start,\n> as much of your tuning will depend on certain aspects of your hardware.\n> \n> So if you need a db server, and you have $7k to spend, I'd say spend it.\n> >From this list, I've gathered that I/O and RAM are your two most important\n> investments.\n> \n> Once you get that figured out, you can still do some performance tuning on\n> your new server using the excellent advice from this mailing list.\n> \n> By the way, for all those who make this list work, I've rarely found such a\n> thorough, helpful and considerate group of people as these on the\n> performance list.\n> \n-- \n\n",
"msg_date": "Fri, 15 Apr 2005 17:33:03 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How"
},
{
"msg_contents": "Rod Taylor wrote:\n > On Fri, 2005-04-15 at 15:43 -0500, Matthew Nuzum wrote:\n >> * I agree with the threads that more disks are better.\n >> * I also agree that SCSI is better, but can be hard to justify\n\nHere's another approach to spend $7000 that we're currently\ntrying.... but it'll only work for certain systems if you can\nuse load balancing and/or application level partitioning\nof your software.\n\nFor $859 you can buy\n a Dell SC1425 with (*see footnote)\n 2 Xeon 2.8GHz processors (*see footnote)\n 1 GB ram\n 1 80GB hard drive. (*see footnote)\n\nDoing the math, it seems I could get 8 of\nthese systems for that $6870, giving me:\n 16 Xeon processors (*see footnote),\n 640 GB of disk space spread over 8 spindles\n 8 GB of ram\n 16 1Gbps network adapters.\n\n\nDespite the non-optimal hardware (* see footnote), the price\nof each system and extra redundancy may make up the difference\nfor some applications.\n\nFor example, I didn't see many other $7000 proposals have\nhave nearly 10GB of ram, or over a dozen CPUs (even counting\nthe raid controllers), or over a half a terrabyte of storage ,\nor capable of 5-10 Gbit/sec of network traffic... The extra\ncapacity would allow me to have redundancy that would somewhat\nmake up for the flakier hardware, no raid, etc.\n\nThoughts? Over the next couple months I'll be evaluating\na cluster of 4 systems almost exactly as I described (but\nwith cheaper dual hard drives in each system), for a GIS\nsystem that does lend itself well to application-level\npartitioning.\n\n Ron\n\n(* footnotes)\n Yeah, I know some reports here say that dual Xeons can suck;\n but Dell's throwing in the second one for free.\n Yeah, I know some reports here say Dells can suck, but it\n was easy to get a price quote online, and they're a nice\n business partner of ours.\n Yeah, I should get 2 hard drives in each system, but Dell\n wanting an additional $160 for a 80GB hard drive is not a good deal.\n Yeah, I know I'd be better off with 2GB ram, but Dell\n wants $400 (half the price of an entire additional\n system) for the upgrade from 1GB to 2.\n\n I also realize that application level partitioning needed\n to take advantage of a loose cluster like this is not practical\n for many applications.\n",
"msg_date": "Fri, 15 Apr 2005 17:10:45 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How"
},
{
"msg_contents": "On Apr 15, 2005, at 8:10 PM, Ron Mayer wrote:\n\n> For example, I didn't see many other $7000 proposals have\n> have nearly 10GB of ram, or over a dozen CPUs (even counting\n> the raid controllers), or over a half a terrabyte of storage ,\n> or capable of 5-10 Gbit/sec of network traffic... The extra\n\nAnd how much are you spending on the switch that will carry 10Gb/sec \ntraffic?\n\n> capacity would allow me to have redundancy that would somewhat\n> make up for the flakier hardware, no raid, etc.\n\nit would work for some class of applications which are pretty much \nread-only. and don't forget to factor in the overhead of the \nreplication...\n\n>\n> Thoughts? Over the next couple months I'll be evaluating\n> a cluster of 4 systems almost exactly as I described (but\n> with cheaper dual hard drives in each system), for a GIS\n> system that does lend itself well to application-level\n> partitioning.\n\nI'd go with fewer bigger boxes with RAID so i can sleep better at night \n:-)\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Wed, 20 Apr 2005 11:24:56 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How"
}
] |
[
{
"msg_contents": "\nHi folks,\n\nI like to use (immutable) functions for looking up serveral \n(almost constant) things, i.e fetching a username by id. \nThis makes my queries more clear.\n\nBut is this really performant ?\n\nLets imagine: \n\nWe've got an table with user accounts (uid,name,...). Then we've\ngot another one which contains some items assigned to users, and\nso are linked to them by an uid field.\nNow want to view the items with usernames instead of just uid:\n\na) SELECT items.a, items.b, ..., users.username FROM items, users\n\tWHERE items.uid = users.uid;\n \nc) CREATE FUNCTION id2username(oid) RETURNS text \n LANGUAGE 'SQL' IMMUTABLE AS '\n\tSELECT username AS RESULT FROM users WHERE uid = $1';\n\t\n SELECT items.a, items.b, ..., id2username(users.uid);\n \n\nWhich one is faster with\n a) only a few users (<50) \n b) many users ( >1k )\nwhile we have several 10k of items ?\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Fri, 15 Apr 2005 22:55:11 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "immutable functions vs. join for lookups ?"
},
{
"msg_contents": "Enrico Weigelt <[email protected]> writes:\n> c) CREATE FUNCTION id2username(oid) RETURNS text \n> LANGUAGE 'SQL' IMMUTABLE AS '\n> \tSELECT username AS RESULT FROM users WHERE uid = $1';\n\nThis is simply dangerous. The function is *NOT* immutable (it is\nstable though). When ... not if ... your application breaks because\nyou got the wrong answers, you'll get no sympathy from anyone.\n\nThe correct question to ask was \"if I make a stable function like\nthis, is it likely to be faster than the join?\". The answer is\n\"probably not; at best it will be equal to the join\". The best the\nplanner is likely to be able to do with the function-based query\nis equivalent to a nestloop with inner indexscan (assuming there is\nan index on users.uid). If that's the best plan then the join case\nshould find it too ... but if you are selecting a lot of items rows\nthen it won't be the best plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 17:12:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ? "
},
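A minimal sketch of the two forms being compared here, assuming the items/users schema from the original question and an index on users.uid (names are carried over from that post, not from a real database):

    -- lookup function, correctly marked STABLE rather than IMMUTABLE
    CREATE FUNCTION id2username(int) RETURNS text
        LANGUAGE sql STABLE
        AS 'SELECT username FROM users WHERE uid = $1';

    SELECT items.a, items.b, id2username(items.uid) FROM items;

    -- the handwritten join; note the function form behaves like a LEFT JOIN,
    -- keeping items rows that have no matching user
    SELECT items.a, items.b, users.username
      FROM items LEFT JOIN users ON users.uid = items.uid;

With only a few interesting rows the nestloop-with-inner-indexscan plan is what the function form effectively hard-codes; when many items rows are selected, the explicit join leaves the planner free to switch to a hash or merge join instead.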
{
"msg_contents": "* Tom Lane <[email protected]> wrote:\n> Enrico Weigelt <[email protected]> writes:\n> > c) CREATE FUNCTION id2username(oid) RETURNS text \n> > LANGUAGE 'SQL' IMMUTABLE AS '\n> > \tSELECT username AS RESULT FROM users WHERE uid = $1';\n> \n> This is simply dangerous. The function is *NOT* immutable (it is\n> stable though). When ... not if ... your application breaks because\n> you got the wrong answers, you'll get no sympathy from anyone.\n\nIn my case it is immutable. The username never changes.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Sun, 17 Apr 2005 08:06:04 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "On 4/17/05, Enrico Weigelt <[email protected]> wrote:\n> * Tom Lane <[email protected]> wrote:\n> > Enrico Weigelt <[email protected]> writes:\n> > > c) CREATE FUNCTION id2username(oid) RETURNS text\n> > > LANGUAGE 'SQL' IMMUTABLE AS '\n> > > SELECT username AS RESULT FROM users WHERE uid = $1';\n> >\n> > This is simply dangerous. The function is *NOT* immutable (it is\n> > stable though). When ... not if ... your application breaks because\n> > you got the wrong answers, you'll get no sympathy from anyone.\n> \n> In my case it is immutable. The username never changes.\n> \nEven if your data never changes it *can* change so the function should\nbe at most stable not immutable.\n\nregards,\nJaime Casanova\n",
"msg_date": "Sun, 17 Apr 2005 03:37:07 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "On 4/15/05, Enrico Weigelt <[email protected]> wrote:\n> a) SELECT items.a, items.b, ..., users.username FROM items, users\n> WHERE items.uid = users.uid;\n> \n> c) CREATE FUNCTION id2username(oid) RETURNS text\n> LANGUAGE 'SQL' IMMUTABLE AS '\n> SELECT username AS RESULT FROM users WHERE uid = $1';\n\nYou will be told that this function is not immutable but stable, and this\nis quite right. But consider such a function:\n\nCREATE OR REPLACE FUNCTION id2username (oid int) RETURNS TEXT AS $$\n BEGIN\n IF oid = 0 THEN RETURN 'foo';\n ELSIF oid = 1 THEN RETURN 'bar';\n END IF;\n END;\n$$ LANGUAGE plpgsql IMMUTABLE;\n\nversus a lookup table with similar data. Logic suggests it should be faster\nthan a table... It got me worried when I added: \"RAISE WARNING 'Called'\"\nafter begin and I got lots of \"Called\" warnings when using this IMMUTABLE\nfunction in select... And the timings for ~6000 values in aaa table\n(and two values in lookup table) are:\n\nThere is a query, output of the EXPLAIN ANALYZE, Time of EXPLAIN\nANALYZE and \"Real time\" of SELECT (without EXPLAIN ANALYZE):\n\na) simple select from temp table, and a lookup cost:\n EXPLAIN ANALYZE SELECT n FROM aaa;\n Seq Scan on aaa (cost=0.00..87.92 rows=5992 width=4) (actual\ntime=0.011..24.849 rows=6144 loops=1)\n Total runtime: 51.881 ms\n(2 rows)\nTime: 52,882 ms\nReal time: 16,261 ms\n\n EXPLAIN ANALYZE SELECT id2username(n) FROM aaa limit 2;\nLimit (cost=0.00..0.03 rows=2 width=4) (actual time=0.111..0.150\nrows=2 loops=1)\n -> Seq Scan on aaa (cost=0.00..104.80 rows=6144 width=4) (actual\ntime=0.102..0.129 rows=2 loops=1)\n Total runtime: 0.224 ms\n(3 rows)\nTime: 1,308 ms\nReal time: 1,380 ms\n\nb) natural join with lookup table:\n EXPLAIN ANALYZE SELECT username FROM aaa NATURAL JOIN lookup;\n Hash Join (cost=2.45..155.09 rows=3476 width=32) (actual\ntime=0.306..83.677 rows=6144 loops=1)\n Hash Cond: (\"outer\".n = \"inner\".n)\n -> Seq Scan on aaa (cost=0.00..87.92 rows=5992 width=4) (actual\ntime=0.006..25.517 rows=6144 loops=1)\n -> Hash (cost=2.16..2.16 rows=116 width=36) (actual\ntime=0.237..0.237 rows=0 loops=1)\n -> Seq Scan on lookup (cost=0.00..2.16 rows=116 width=36)\n(actual time=0.016..0.034 rows=2 loops=1)\n Total runtime: 107.378 ms\n(6 rows)\nTime: 109,040 ms\nReal time: 25,364 ms\n\nc) IMMUTABLE \"static\" lookup function:\n EXPLAIN ANALYZE SELECT id2username(n) FROM aaa;\nSeq Scan on aaa (cost=0.00..104.80 rows=6144 width=4) (actual\ntime=0.089..116.397 rows=6144 loops=1)\n Total runtime: 143.800 ms\n(2 rows)\nTime: 144,869 ms\nReal time: 102,428 ms\n\nd) self-join with a function ;)\n EXPLAIN ANALYZE SELECT * FROM (SELECT n, id2username(n) AS username\nFROM (SELECT DISTINCT n FROM aaa) AS values) AS v_lookup RIGHT JOIN\naaa USING (n);\n Hash Left Join (cost=506.82..688.42 rows=6144 width=36) (actual\ntime=102.382..182.661 rows=6144 loops=1)\n Hash Cond: (\"outer\".n = \"inner\".n)\n -> Seq Scan on aaa (cost=0.00..89.44 rows=6144 width=4) (actual\ntime=0.012..24.360 rows=6144 loops=1)\n -> Hash (cost=506.82..506.82 rows=2 width=36) (actual\ntime=102.217..102.217 rows=0 loops=1)\n -> Subquery Scan v_lookup (cost=476.05..506.82 rows=2\nwidth=36) (actual time=53.626..102.057 rows=2 loops=1)\n -> Subquery Scan \"values\" (cost=476.05..506.80 rows=2\nwidth=4) (actual time=53.613..102.023 rows=2 loops=1)\n -> Unique (cost=476.05..506.77 rows=2 width=4)\n(actual time=53.456..101.772 rows=2 loops=1)\n -> Sort (cost=476.05..491.41 rows=6144\nwidth=4) (actual time=53.440..76.710 rows=6144 loops=1)\n Sort 
Key: n\n -> Seq Scan on aaa \n(cost=0.00..89.44 rows=6144 width=4) (actual time=0.013..26.626\nrows=6144 loops=1)\n Total runtime: 209.378 ms\n(11 rows)\nTime: 211,460 ms\nReal time: 46,682 ms\n\n...so this IMMUTABLE is twice as slow (~100 ms) as the query joining\nitself with a SELECT DISTINCT on an IMMUTABLE function (~50 ms),\nwhich is twice as slow as JOIN against lookup table (~25 ms), and I feel\nthis IMMUTABLE function could be around ~20 ms (~16 ms plus\ncalling the function two times plus giving the values).\n\nAh, and this is PostgreSQL 8.0.1 running under FreeBSD on a\nCPU: Intel(R) Celeron(R) CPU 2.40GHz (2400.10-MHz 686-class CPU).\n\n Regards,\n Dawid\n\nPS: I have a feeling that IMMUTABLE functions worked better in 7.4,\nyet I am unable to confirm this.\n",
"msg_date": "Mon, 18 Apr 2005 11:00:38 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "* Jaime Casanova <[email protected]> wrote:\n\n<snip>\n> Even if your data never changes it *can* change so the function should\n> be at most stable not immutable.\n\nokay, the planner sees that the table could potentionally change.\nbut - as the dba - I'd like to tell him, this table *never* changes \nin practise (or at most there will be an insert once a year)\n\nisnt there any way to enforce the function to be really immutable ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 21:22:26 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "On 4/21/05, Enrico Weigelt <[email protected]> wrote:\n> <snip>\n> > Even if your data never changes it *can* change so the function should\n> > be at most stable not immutable.\n> \n> okay, the planner sees that the table could potentionally change.\n> but - as the dba - I'd like to tell him, this table *never* changes\n> in practise (or at most there will be an insert once a year)\n> \n> isnt there any way to enforce the function to be really immutable ?\n\nNever say never. :)\n\nAnd to answer your question -- your IMMUTABLE function may reference\nother functions (even VOLATILE). So you may create a \"caller\" immutable\nfunction which just calls your non-immutable function. But from\nperformance standpoint there is not much difference (probably your\nSTABLE function will be faster than STABLE inside IMMUTABLE function).\n\nAh, and please note that some time in future PostgreSQL may require\nthat IMMUTABLE function calls only IMMUTABLE functions.\n\n Regards,\n Dawid\n",
"msg_date": "Fri, 22 Apr 2005 12:15:24 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
}
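A minimal sketch of the "caller" wrapper described above, reusing the id2username lookup from earlier in the thread (table and function names are assumptions carried over from that example; the trick is shown as-is, stale answers and all):

    -- the honest lookup: STABLE, because rows in users can change between statements
    CREATE FUNCTION id2username_stable(int) RETURNS text
        LANGUAGE sql STABLE
        AS 'SELECT username FROM users WHERE uid = $1';

    -- the wrapper: declared IMMUTABLE even though the underlying data could change
    CREATE FUNCTION id2username(int) RETURNS text
        LANGUAGE sql IMMUTABLE
        AS 'SELECT id2username_stable($1)';

As noted above, this buys little or no speed, and a future release may refuse to let an IMMUTABLE function call a non-IMMUTABLE one.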
] |
[
{
"msg_contents": "I'm in the fortunate position of having a newly built database server \nthat's pre-production. I'm about to run it through the ringer with some \nsimulations of business data and logic, but I wanted to post the \nresults of some preliminary pgbench marking.\n\nhttp://www.sitening.com/pgbench.html\n\nTo me, it looks like basic transactional performance is modestly \nimproved at 8.0 across a variety of metrics. I think this bodes well \nfor more realistic loads, but I'll be curious to see the results of \nsome of the simulations.\n\nI've still got a little bit of preparatory time with this box, so I can \ncontinue to do some experimentation.\n\nI'd be curious to see whether these numbers meet developer expectations \nand to see whether the developer and user community have insight into \nother pgbench options that would be useful to see.\n\nThanks!\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n",
"msg_date": "Fri, 15 Apr 2005 16:02:29 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "\"Thomas F.O'Connell\" <[email protected]> writes:\n> http://www.sitening.com/pgbench.html\n\nYou need to run *many* more transactions than that to get pgbench\nnumbers that aren't mostly noise. In my experience 1000 transactions\nper client is a rock-bottom minimum to get repeatable numbers; 10000 per\nis better.\n\nAlso, in any run where #clients >= scaling factor, what you're measuring\nis primarily contention to update the \"branches\" rows. Which is not\nnecessarily a bad thing to check, but it's generally not the most\ninteresting performance domain (if your app is like that you need to\nredesign the app...)\n\n> To me, it looks like basic transactional performance is modestly \n> improved at 8.0 across a variety of metrics.\n\nThat's what I would expect --- we usually do some performance work in\nevery release cycle, but there was not a huge amount of it for 8.0.\n\nHowever, these numbers don't prove much either way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Apr 2005 17:23:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2 "
},
{
"msg_contents": "Tom,\n\nPeople's opinions on pgbench may vary, so take what I say with a grain \nof salt. Here are my thoughts:\n\n1) Test with no less than 200 transactions per client. I've heard with \nless than this, your results will vary too much with the direction of \nthe wind blowing. A high enough value will help rule out some \"noise\" \nfactor. If I am wrong, please let me know.\n\n\n2) How is the database going to be used? What percentage will be \nread/write if you had to guess? Pgbench is like a TPC-B with will help \nguage the potential throughput of your tps. However, it may not stress \nthe server enough to help you make key performance changes. However, \nbenchmarks are like statistics...full of lies <g>.\n\n3) Run not just a couple pgbench runs, but *many* (I do between 20-40 \nruns) so you can rule out noise and guage improvement on median results.\n\n4) Find something that you test OLTP-type transactions. I used OSDB \nsince it is simple to implement and use. Although OSDL's OLTP testing \nwill closer to reality.\n\nSteve Poe\n",
"msg_date": "Fri, 15 Apr 2005 14:24:55 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "Okay. I updated my benchmark page with new numbers, which are the \nresult of extensive pgbench usage over this past week. In fact, I \nmodified pgbench (for both of the latest version of postgres) to be \nable to accept multiple iterations as an argument and report the \nresults of each iteration as well as a summary of mean tps at the end. \nThe modifications of the source are included on the new page, and I'd \nbe happy to submit them as patches if this seems like useful \nfunctionality to the developers and the community. I find it nicer to \nhave pgbench be the authoritative source of iterative results rather \nthan a wrapper script, but it'd be nice to have an extra set of eyes \nguarantee that I've included in the loop everything that ought to be \nthere.\n\nA couple of notes:\n\n* There was some interesting oscillation behavior in both version of \npostgres that occurred with 25 clients and 1000 transactions at a \nscaling factor of 100. This was repeatable with the distribution \nversion of pgbench run iteratively from the command line. I'm not sure \nhow to explain this.\n\n* I'm not really sure why the single client run at 1000 transactions \nseemed so much slower than all successive iterations, including single \nclient with 10000 transactions at a scaling factor of 100. It's \npossible that I should be concerned about how throughput was so much \nhigher for 10000 transactions.\n\nAnyway, the code changes, the configuration details, and the results \nare all posted here:\n\nhttp://www.sitening.com/pgbench.html\n\nOnce again, I'd be curious to get feedback from developers and the \ncommunity about the results, and I'm happy to answer any questions.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Apr 15, 2005, at 4:23 PM, Tom Lane wrote:\n\n> \"Thomas F.O'Connell\" <[email protected]> writes:\n>> http://www.sitening.com/pgbench.html\n>\n> You need to run *many* more transactions than that to get pgbench\n> numbers that aren't mostly noise. In my experience 1000 transactions\n> per client is a rock-bottom minimum to get repeatable numbers; 10000 \n> per\n> is better.\n>\n> Also, in any run where #clients >= scaling factor, what you're \n> measuring\n> is primarily contention to update the \"branches\" rows. Which is not\n> necessarily a bad thing to check, but it's generally not the most\n> interesting performance domain (if your app is like that you need to\n> redesign the app...)\n>\n>> To me, it looks like basic transactional performance is modestly\n>> improved at 8.0 across a variety of metrics.\n>\n> That's what I would expect --- we usually do some performance work in\n> every release cycle, but there was not a huge amount of it for 8.0.\n>\n> However, these numbers don't prove much either way.\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Apr 2005 21:31:13 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2 "
},
{
"msg_contents": "Steve,\n\nPer your and Tom's recommendations, I significantly increased the \nnumber of transactions used for testing. See my last post.\n\nThe database will have pretty heavy mixed use, i.e., both reads and \nwrites.\n\nI performed 32 iterations per scenario this go-round.\n\nI'll look into OSDB for further benchmarking. Thanks for the tip.\n\nSince pgbench is part of the postgres distribution and I had it at hand \nand it seems to be somewhat widely referenced, I figured I go ahead and \npost preliminary results from it.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Apr 15, 2005, at 4:24 PM, Steve Poe wrote:\n\n> Tom,\n>\n> People's opinions on pgbench may vary, so take what I say with a grain \n> of salt. Here are my thoughts:\n>\n> 1) Test with no less than 200 transactions per client. I've heard with \n> less than this, your results will vary too much with the direction of \n> the wind blowing. A high enough value will help rule out some \"noise\" \n> factor. If I am wrong, please let me know.\n>\n>\n> 2) How is the database going to be used? What percentage will be \n> read/write if you had to guess? Pgbench is like a TPC-B with will help \n> guage the potential throughput of your tps. However, it may not stress \n> the server enough to help you make key performance changes. However, \n> benchmarks are like statistics...full of lies <g>.\n>\n> 3) Run not just a couple pgbench runs, but *many* (I do between 20-40 \n> runs) so you can rule out noise and guage improvement on median \n> results.\n>\n> 4) Find something that you test OLTP-type transactions. I used OSDB \n> since it is simple to implement and use. Although OSDL's OLTP testing \n> will closer to reality.\n>\n> Steve Poe\n\n",
"msg_date": "Sat, 23 Apr 2005 21:33:00 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": " >There was some interesting oscillation behavior in both version of \npostgres that occurred with 25 >clients and 1000 transactions at a \nscaling factor of 100. This was repeatable with the distribution \n >version of pgbench run iteratively from the command line. I'm not sure \nhow to explain this.\n\nTom,\n\nWhen you see these oscillations, do they occur after so many generated \nresults? Some oscillation is normal, in my opinion, from 10-15% of the \nperformance is noise-related.\n\nThe key is to tune the server that you either 1) minimize the \noscillation and/or 2)increase your overall performance above the 10-15% \nbaseline, and 3) find out what the mean and standard deviation between \nall your results.\n\nIf your results are within that range, this maybe \"normal\". I follow-up \nwith you later on what I do.\n\nSteve Poe\n\n\n",
"msg_date": "Sun, 24 Apr 2005 23:58:16 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "Interesting. I should've included standard deviation in my pgbench \niteration patch. Maybe I'll go back and do that.\n\nI was seeing oscillation across the majority of iterations in the 25 \nclients/1000 transaction runs on both database versions.\n\nI've got my box specs and configuration files posted. If you see \nanything obvious about the tuning parameters that should be tweaked, \nplease let me know.\n\nThanks for the feedback!\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Apr 25, 2005, at 1:58 AM, Steve Poe wrote:\n\n> >There was some interesting oscillation behavior in both version of \n> postgres that occurred with 25 >clients and 1000 transactions at a \n> scaling factor of 100. This was repeatable with the distribution \n> >version of pgbench run iteratively from the command line. I'm not \n> sure how to explain this.\n>\n> Tom,\n>\n> When you see these oscillations, do they occur after so many generated \n> results? Some oscillation is normal, in my opinion, from 10-15% of the \n> performance is noise-related.\n>\n> The key is to tune the server that you either 1) minimize the \n> oscillation and/or 2)increase your overall performance above the \n> 10-15% baseline, and 3) find out what the mean and standard deviation \n> between all your results.\n>\n> If your results are within that range, this maybe \"normal\". I \n> follow-up with you later on what I do.\n>\n> Steve Poe\n\n",
"msg_date": "Mon, 25 Apr 2005 10:44:24 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "Tom,\n\nJust a quick thought: after each run/sample of pgbench, I drop the \ndatabase and recreate it. When I don't my results become more skewed.\n\nSteve Poe\n\n\nThomas F.O'Connell wrote:\n\n> Interesting. I should've included standard deviation in my pgbench \n> iteration patch. Maybe I'll go back and do that.\n>\n> I was seeing oscillation across the majority of iterations in the 25 \n> clients/1000 transaction runs on both database versions.\n>\n> I've got my box specs and configuration files posted. If you see \n> anything obvious about the tuning parameters that should be tweaked, \n> please let me know.\n>\n> Thanks for the feedback!\n>\n> -tfo\n>\n> -- \n> Thomas F. O'Connell\n> Co-Founder, Information Architect\n> Sitening, LLC\n>\n> Strategic Open Source: Open Your iâ„¢\n>\n> http://www.sitening.com/\n> 110 30th Avenue North, Suite 6\n> Nashville, TN 37203-6320\n> 615-260-0005\n>\n> On Apr 25, 2005, at 1:58 AM, Steve Poe wrote:\n>\n>> >There was some interesting oscillation behavior in both version of \n>> postgres that occurred with 25 >clients and 1000 transactions at a \n>> scaling factor of 100. This was repeatable with the distribution \n>> >version of pgbench run iteratively from the command line. I'm not \n>> sure how to explain this.\n>>\n>> Tom,\n>>\n>> When you see these oscillations, do they occur after so many \n>> generated results? Some oscillation is normal, in my opinion, from \n>> 10-15% of the performance is noise-related.\n>>\n>> The key is to tune the server that you either 1) minimize the \n>> oscillation and/or 2)increase your overall performance above the \n>> 10-15% baseline, and 3) find out what the mean and standard deviation \n>> between all your results.\n>>\n>> If your results are within that range, this maybe \"normal\". I \n>> follow-up with you later on what I do.\n>>\n>> Steve Poe\n>\n>\n>\n\n",
"msg_date": "Mon, 25 Apr 2005 10:18:23 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "Considering the default vacuuming behavior, why would this be?\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i™\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Apr 25, 2005, at 12:18 PM, Steve Poe wrote:\n\n> Tom,\n>\n> Just a quick thought: after each run/sample of pgbench, I drop the \n> database and recreate it. When I don't my results become more skewed.\n>\n> Steve Poe\n",
"msg_date": "Tue, 26 Apr 2005 01:26:46 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
},
{
"msg_contents": "Tom,\n\nHonestly, you've got me. It was either comment from Tom Lane or Josh \nthat the os is caching the results (I may not be using the right terms \nhere), so I thought it the database is dropped and recreated, I would \nsee less of a skew (or variation) in the results. Someone which to comment?\n\nSteve Poe\n\n\nThomas F.O'Connell wrote:\n\n> Considering the default vacuuming behavior, why would this be?\n>\n> -tfo\n>\n> -- \n> Thomas F. O'Connell\n> Co-Founder, Information Architect\n> Sitening, LLC\n>\n> Strategic Open Source: Open Your iâ„¢\n>\n> http://www.sitening.com/\n> 110 30th Avenue North, Suite 6\n> Nashville, TN 37203-6320\n> 615-260-0005\n>\n> On Apr 25, 2005, at 12:18 PM, Steve Poe wrote:\n>\n>> Tom,\n>>\n>> Just a quick thought: after each run/sample of pgbench, I drop the \n>> database and recreate it. When I don't my results become more skewed.\n>>\n>> Steve Poe\n>\n>\n\n",
"msg_date": "Tue, 26 Apr 2005 10:49:46 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench Comparison of 7.4.7 to 8.0.2"
}
] |
[
{
"msg_contents": "Hello,\n\nI need to write several PL/pgSQL functions all returning a \"result set\" wich \ncan be obtained by a single SELECT statement.\nFor now the functions are called by a Java application.\nBoth REFCURSOR and SETOF serve my purpose, but I was wondering if there is a \nperfonance difference between the two. The result set can become quite \nlarge.\n\nI hope not to ask this question the 1001 time, though I couldn't find \nanything on the net.. Any hints are welcome.\n\nRegards\nR�diger \n\n\n",
"msg_date": "Sun, 17 Apr 2005 22:05:29 +0200",
"msg_from": "\"R���diger Herrmann\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "refcurosr vs. setof"
},
{
"msg_contents": "On Sun, Apr 17, 2005 at 10:05:29PM +0200, R�diger Herrmann wrote:\n> \n> I need to write several PL/pgSQL functions all returning a \"result set\" wich \n> can be obtained by a single SELECT statement.\n> For now the functions are called by a Java application.\n> Both REFCURSOR and SETOF serve my purpose, but I was wondering if there is a \n> perfonance difference between the two. The result set can become quite \n> large.\n\nHere's an excerpt from the \"Control Structures\" section of the\nPL/pgSQL documentation:\n\n The current implementation of RETURN NEXT for PL/pgSQL stores\n the entire result set before returning from the function, as\n discussed above. That means that if a PL/pgSQL function produces\n a very large result set, performance may be poor: data will be\n written to disk to avoid memory exhaustion, but the function\n itself will not return until the entire result set has been\n generated....Currently, the point at which data begins being\n written to disk is controlled by the work_mem configuration\n variable.\n\nYou might want to test both ways in typical and worst-case scenarios\nand see how each performs.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Mon, 18 Apr 2005 21:10:01 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: refcurosr vs. setof"
}
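A minimal sketch of the two styles being weighed, written against a hypothetical items table (the table and function names are illustrative, not from the original schema; the $$ dollar quoting assumes 8.0 or later):

    -- SETOF: the whole result set is built before the function returns
    CREATE FUNCTION get_items_setof() RETURNS SETOF items AS $$
    DECLARE
        r items%ROWTYPE;
    BEGIN
        FOR r IN SELECT * FROM items LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    SELECT * FROM get_items_setof();

    -- REFCURSOR: the caller fetches rows on demand, inside one transaction
    CREATE FUNCTION get_items_cursor(refcursor) RETURNS refcursor AS $$
    BEGIN
        OPEN $1 FOR SELECT * FROM items;
        RETURN $1;
    END;
    $$ LANGUAGE plpgsql;

    BEGIN;
    SELECT get_items_cursor('c');
    FETCH 1000 FROM c;   -- repeat until no rows come back
    COMMIT;

From a Java client the refcursor form keeps memory flat on both ends at the cost of holding the transaction open, while the SETOF form is simpler to call but, per the documentation excerpt above, is fully materialized (spilling to disk beyond work_mem) before the first row is returned.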
] |
[
{
"msg_contents": "Hello. \n\nI'm trying to restore my database from dump in several parrallel processes, but restore process works too slow.\nNumber of rows about 100 000 000,\nRAM: 8192M\nCPU: Ultra Sparc 3\nNumber of CPU: 4\nOS: SunOS sun 5.8\nRDBMS: PostgreSQL 8.0\n\n==================== prstat info ====================\n\n PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP\n 14231 postgres 41M 37M sleep 58 0 0:00.01 0.2% postgres/1\n 14136 postgres 41M 37M sleep 58 0 0:00.03 0.2% postgres/1\n 14211 postgres 41M 37M sleep 58 0 0:00.01 0.2% postgres/1\n 14270 postgres 41M 37M sleep 58 0 0:00.00 0.2% postgres/1\n 13767 postgres 41M 37M sleep 58 0 0:00.18 0.2% postgres/1\n 13684 postgres 41M 36M sleep 58 0 0:00.14 0.2% postgres/1\n\n NPROC USERNAME SIZE RSS MEMORY TIME CPU\n 74 root 272M 191M 2.3% 0:26.29 24%\n 124 postgres 1520M 1306M 16% 0:03.05 5.0%\n\n\nHow to encrease postgresql speed? Why postgres took only 5.0% of CPU time?\n\nNurlan Mukhanov \n",
"msg_date": "Mon, 18 Apr 2005 08:50:55 +0400",
"msg_from": "\"Nurlan Mukhanov (AL/EKZ)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql works too slow"
},
{
"msg_contents": "Nurlan,\n\nTry enabliing your checkpoint_segments. In my example, our database \nrestore took 75mins. After enabling checkpoints_segments to 20, we cut \nit down to less than 30 minutes. Is your pg_xlog on a seperate disc..or \nat least a partition? This will help too. A checkpoints_segments of 20, \nif memory serves correctly, will occupy around 800-900M of disc space in \npg_xlog.\n\nSteve Poe\n\n\nNurlan Mukhanov (AL/EKZ) wrote:\n\n>Hello. \n>\n>I'm trying to restore my database from dump in several parrallel processes, but restore process works too slow.\n>Number of rows about 100 000 000,\n>RAM: 8192M\n>CPU: Ultra Sparc 3\n>Number of CPU: 4\n>OS: SunOS sun 5.8\n>RDBMS: PostgreSQL 8.0\n>\n>==================== prstat info ====================\n>\n> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP\n> 14231 postgres 41M 37M sleep 58 0 0:00.01 0.2% postgres/1\n> 14136 postgres 41M 37M sleep 58 0 0:00.03 0.2% postgres/1\n> 14211 postgres 41M 37M sleep 58 0 0:00.01 0.2% postgres/1\n> 14270 postgres 41M 37M sleep 58 0 0:00.00 0.2% postgres/1\n> 13767 postgres 41M 37M sleep 58 0 0:00.18 0.2% postgres/1\n> 13684 postgres 41M 36M sleep 58 0 0:00.14 0.2% postgres/1\n>\n> NPROC USERNAME SIZE RSS MEMORY TIME CPU\n> 74 root 272M 191M 2.3% 0:26.29 24%\n> 124 postgres 1520M 1306M 16% 0:03.05 5.0%\n>\n>\n>How to encrease postgresql speed? Why postgres took only 5.0% of CPU time?\n>\n>Nurlan Mukhanov \n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n> \n>\n\n",
"msg_date": "Sun, 17 Apr 2005 22:42:21 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql works too slow"
},
{
"msg_contents": "\nHi All......\n\nI'm writing a php web application which sells mp3 music for production\nuse (a rather limited audience, as opposed to a general mp3 download\nsite).\n\nSince I've heard large objects were a bother I've written it so the mp3s\nwere in files with long obfuscated filenames and have put them in a\ndirectory hidden behind basic authentication, planning a php front end to\nthe basic authentication and storing the file urls in the database. Now\nI'm considering shortlived symbolic filenames to further make it difficult\nto rip off the mp3 files by other users with valid log in credentials that\ncan get past the basic authentication.\n\nBasically it's turning into one big unwieldy kluge.\n\nI'm reading about large object php functions and am considering storing\nthe mp3s themselves as large objects in postgreSQL, rather than just the\nfilenames, and it's starting to look better and better! It would be very\neasy to make it so that only the valid user could pull the mp3 large\nobject out of postgreSQL.\n\nIs storing large objects as easy as the php functions make it look? What\nabout the pg_dump difficulties with large objects?\n\nI'm using Debian Stable which has postgreSQL 7.2.1 and PHP 4.1.2 which so\nfar has been working fine with my small text databases, but I suspect if I\nwant to consider large objects I should really upgrade, eh?\n\nTIA....\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Mon, 18 Apr 2005 02:09:10 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Storing Large Objects"
},
{
"msg_contents": "Steve Poe <[email protected]> writes:\n> Try enabliing your checkpoint_segments. In my example, our database \n> restore took 75mins. After enabling checkpoints_segments to 20, we cut \n> it down to less than 30 minutes.\n\nIncreasing maintenance_work_mem might help too ... or several other\nsettings ... with no information about exactly *what* is slow, it's\nhard to say.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 02:11:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql works too slow "
},
{
"msg_contents": "[email protected] writes:\n> I'm using Debian Stable which has postgreSQL 7.2.1 and PHP 4.1.2 which so\n> far has been working fine with my small text databases, but I suspect if I\n> want to consider large objects I should really upgrade, eh?\n\n[ jaw drops... ] Debian Stable is shipping 7.2.**1**?\n\nYou might want to get yourself a more responsibly managed distro.\nI won't necessarily argue with someone's decision to stick on the\n7.2 major release, but not to adopt 7.2.* bug fixes is mere insanity.\n7.2.1 was released more than three years ago and has multiple known\ndata-loss and security issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 02:18:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storing Large Objects "
},
{
"msg_contents": ">>Try enabliing your checkpoint_segments. In my example, our database \n>>restore took 75mins. After enabling checkpoints_segments to 20, we cut \n>>it down to less than 30 minutes.\n> \n> \n> Increasing maintenance_work_mem might help too ... or several other\n> settings ... with no information about exactly *what* is slow, it's\n> hard to say.\n\nTry turning fsync = false for the duration of your reload.\n\nChris\n",
"msg_date": "Mon, 18 Apr 2005 14:40:53 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql works too slow"
},
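Pulling the suggestions in this thread together, a sketch of postgresql.conf settings that are commonly raised just for the duration of a bulk reload (the maintenance_work_mem figure is only an illustrative guess, and every value here should be put back afterwards):

    checkpoint_segments  = 20        # Steve's value; roughly 16MB of pg_xlog per segment
    maintenance_work_mem = 524288    # in kB; helps index builds -- size to your RAM
    fsync = false                    # Chris's suggestion; a crash mid-load means restarting the restore

Putting pg_xlog on its own disc or partition, as also suggested above, helps independently of any of these settings.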
{
"msg_contents": "\nOn Mon, 18 Apr 2005, Tom Lane wrote:\n\n> [ jaw drops... ] Debian Stable is shipping 7.2.**1**?\n>\n> You might want to get yourself a more responsibly managed distro.\n> I won't necessarily argue with someone's decision to stick on the\n> 7.2 major release, but not to adopt 7.2.* bug fixes is mere insanity.\n> 7.2.1 was released more than three years ago and has multiple known\n> data-loss and security issues.\n\nIt's been recommended to me that I switch to testing, which is running\n7.4.7 i think. And I've done that on my development laptop.\nMaybe I should switch to that on my production server, too, since it's\nworking OK on my laptop.\n\nIt's a more recent version of php, too.\n\nIt's have been argued that Debian testing is at least as stable as\neverybody elses stable and Debian stable is *really* stable. Although with\n7.2.1 maybe not so secure.....\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Mon, 18 Apr 2005 02:47:37 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Storing Large Objects "
},
{
"msg_contents": "On Monday 18 April 2005 00:18, Tom Lane wrote:\n> [ jaw drops... ] Debian Stable is shipping 7.2.**1**?\n>\n> You might want to get yourself a more responsibly managed\n> distro. I won't necessarily argue with someone's decision to\n> stick on the 7.2 major release, but not to adopt 7.2.* bug\n> fixes is mere insanity. 7.2.1 was released more than three\n> years ago and has multiple known data-loss and security issues.\n\nIt's probably not quite so bad as you think. But it probably as \n\"stable\" (sort of like the Canadian Shield bedrock is stable). \nThey backport patches to older software. The actual number is \n7.2.1-2woody8\n\nThe changelog \n(http://packages.debian.org/changelogs/pool/main/p/postgresql/postgresql_7.2.1-2woody8/changelog)\nhas entries on:\n\nThu, 10 Feb 2005 15:20:03 +0100\nTue, 1 Feb 2005 12:55:44 +0100\nTue, 26 Oct 2004 15:54:22 +0200\nThu, 13 May 2004 11:00:07 +0200\n3 Nov 2003 10:14:08 +0100\n8 Sep 2002 19:33:32 +0200\n5 Sep 2002 09:49:10 -0400\nSun, 31 Mar 2002 21:25:41 +0100\nFri, 29 Mar 2002 02:17:31 +0000\nSat, 23 Mar 2002 09:43:05 +0000\nand I believe 40 more entries going back to\nFri, 26 Jan 2001 20:27:30 +0000\n\nFor stable, one can always look to www.backports.org to get newer \nsoftware versions, instead of using the sometimes \"ancient\" stuff \nin \"stable\". Backports has 7.4.7-1 as the version it is \ndistributing. The changelog file isn't seen in the directory \nwith the various debs, but the date on the package is:\n11-Feb-2005 08:10\n\nI don't run stable myself, but for other people I have had them \ndoing updates against backports.org for some software.\n\nGord (just another Debian user)\n",
"msg_date": "Mon, 18 Apr 2005 05:12:03 -0600",
"msg_from": "Gordon Haverland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Debian stable, was Re: Storing Large Objects"
},
{
"msg_contents": "\nOn Mon, 18 Apr 2005, Gordon Haverland wrote:\n\n> For stable, one can always look to www.backports.org to get newer\n> software versions, instead of using the sometimes \"ancient\" stuff in\n> \"stable\". Backports has 7.4.7-1 as the version it is distributing.\n\nHey, that's pretty neat. I love apt-get (lazy?), getting a newer version\nof postgreSQL that is compiled against the stable libs looks pretty good.\nI'll try it out on one of my test servers - I think it might be safer\neventually to do that on my production server rather than run Debian\ntesting. I'm rather nervous about that because apt-get testing once left\nmy laptop without php for a few days, that would be a disaster for a\nproduction website!\n\nThanks Gordon.\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Mon, 18 Apr 2005 08:53:10 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Debian stable, was Re: Storing Large Objects"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 02:09:10 -0400,\n [email protected] wrote:\n> \n> Since I've heard large objects were a bother I've written it so the mp3s\n> were in files with long obfuscated filenames and have put them in a\n> directory hidden behind basic authentication, planning a php front end to\n> the basic authentication and storing the file urls in the database. Now\n> I'm considering shortlived symbolic filenames to further make it difficult\n> to rip off the mp3 files by other users with valid log in credentials that\n> can get past the basic authentication.\n\nWhy not put the files somewhere where only the application can get at them\ninstead of under the document root. That way they have to compromise your\napplication to get at them. No amount of url guessing will give direct\naccess.\n",
"msg_date": "Mon, 18 Apr 2005 10:21:18 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storing Large Objects"
},
{
"msg_contents": "\nBruno....\n\n> Why not put the files somewhere where only the application can get at\n> them instead of under the document root. That way they have to\n> compromise your application to get at them. No amount of url guessing\n> will give direct access.\n\nGreat idea. That's what I will do, no need to complicate my life (and\nserver) by storing the files as large objects in the database. It looks\nlike php's readfile function will do the job nicely.\n\nThanks Bruno!\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Mon, 18 Apr 2005 20:44:21 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Storing Large Objects"
},
{
"msg_contents": "On Mon, 2005-04-18 at 08:50 +0400, Nurlan Mukhanov (AL/EKZ) wrote:\n> I'm trying to restore my database from dump in several parrallel processes, but restore process works too slow.\n> Number of rows about 100 000 000,\n> RAM: 8192M\n> CPU: Ultra Sparc 3\n> Number of CPU: 4\n> OS: SunOS sun 5.8\n> RDBMS: PostgreSQL 8.0\n\n> How to encrease postgresql speed? Why postgres took only 5.0% of CPU time?\n\nWhen you say restore...what are you actually doing? \nAn archive recovery?\nA reload?\nA file-level restore of database?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 19 Apr 2005 15:46:33 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql works too slow"
},
{
"msg_contents": "Simon Riggs wrote:\n\n>On Mon, 2005-04-18 at 08:50 +0400, Nurlan Mukhanov (AL/EKZ) wrote:\n> \n>\n>>I'm trying to restore my database from dump in several parrallel processes, but restore process works too slow.\n>>Number of rows about 100 000 000,\n>>RAM: 8192M\n>>CPU: Ultra Sparc 3\n>>Number of CPU: 4\n>>OS: SunOS sun 5.8\n>>RDBMS: PostgreSQL 8.0\n>> \n>>\n>\n> \n>\n>>How to encrease postgresql speed? Why postgres took only 5.0% of CPU time?\n>> \n>>\n>\n>When you say restore...what are you actually doing? \n>An archive recovery?\n>A reload?\n>A file-level restore of database?\n>\n> \n>\nIf you are doing a restore off a pg_dump, did you dump the data as \ninserts? This takes a lot more time to restore.\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp. \n\n",
"msg_date": "Tue, 19 Apr 2005 11:06:35 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql works too slow"
}
] |
[
{
"msg_contents": "> d) self-join with a function ;)\n> EXPLAIN ANALYZE SELECT * FROM (SELECT n, id2username(n) AS username\n> FROM (SELECT DISTINCT n FROM aaa) AS values) AS v_lookup RIGHT JOIN\n> aaa USING (n);\n\nThat's pretty clever. \nIt sure seems like the server was not caching the results of the\nfunction...maybe the server thought it was to small a table to bother? \n\nMerlin\n",
"msg_date": "Mon, 18 Apr 2005 08:50:46 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "On 4/18/05, Merlin Moncure <[email protected]> wrote:\n> > d) self-join with a function ;)\n> > EXPLAIN ANALYZE SELECT * FROM (SELECT n, id2username(n) AS username\n> > FROM (SELECT DISTINCT n FROM aaa) AS values) AS v_lookup RIGHT JOIN\n> > aaa USING (n);\n> \n> That's pretty clever.\n> It sure seems like the server was not caching the results of the\n> function...maybe the server thought it was to small a table to bother?\n\nNah, I don't thinks so. Having around 2 097 152 rows of 1s and 0s takes\n48 seconds for id2username() query.\nThe \"self join\" you've quoted above takes 32 seconds.\nSELECT n FROM aaa; takes 7 seconds.\n\nThinking further...\nSELECT CASE n WHEN 0 THEN 'foo' WHEN 1 THEN 'bar' END FROM aaa;\ntakes 9 seconds.\n\nCREATE OR REPLACE FUNCTION id2un_case(oid int) RETURNS text AS $$\nBEGIN RETURN CASE oid WHEN 0 THEN 'foo' WHEN 1 THEN 'bar' END; END; $$\nLANGUAGE plpgsql IMMUTABLE;\nSELECT id2un_case(n) FROM aaa;\n...takes 36 seconds\n\n...and to see how it depends on flags used:\nSELECT count(id2un_case(n)) FROM aaa;\n...id2un_case(n) IMMUTABLE takes 29900,114 ms\n...id2un_case(n) IMMUTABLE STRICT takes 30187,958 ms\n...id2un_case(n) STABLE takes 31457,560 ms\n...id2un_case(n) takes 33545,178 ms\n...id2un_case(n) VOLATILE takes 35150,920 ms\n(and a count(CASE n WHEN ... END) FROM aaa takes: 2564,188 ms\n\n\nI understand that these measurements are not too accurate. They\nwere done on idle system, and the queries were run couple of times\n(to make sure they're cached :)). I believe either something is minor\nperformance difference between IMMUTABLE STABLE and even\nVOLATILE plpgsql... :(\n\nOh, and doing things like \"ORDER BY n\" or \"WHERE n = 1\" didn't help\neither...\n\nI still wonder whether it's only my case or is there really something\nwrong with these functions?\n\n Regards,\n Dawid\n",
"msg_date": "Mon, 18 Apr 2005 16:19:37 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> d) self-join with a function ;)\n>> EXPLAIN ANALYZE SELECT * FROM (SELECT n, id2username(n) AS username\n>> FROM (SELECT DISTINCT n FROM aaa) AS values) AS v_lookup RIGHT JOIN\n>> aaa USING (n);\n\n> That's pretty clever. \n> It sure seems like the server was not caching the results of the\n> function...maybe the server thought it was to small a table to bother? \n\nNo, it probably flattened the subquery on sight (looking at the actual\nEXPLAIN output would confirm or disprove that). You could prevent the\nflattening by adding OFFSET 0 in the subquery. However, the SELECT\nDISTINCT sub-sub-query is expensive enough, and the join itself is\nexpensive enough, that you would need an *enormously* expensive\nid2username() function to make this a win.\n\nIt would be interesting sometime to try to teach the planner about\ninlining SQL-language functions to become joins. That is, given\n\ncreate function id2name(int) returns text as\n'select name from mytab where id = $1' language sql stable;\n\nselect uid, id2name(uid) from othertab where something;\n\nI think that in principle this could automatically be converted to\n\nselect uid, name from othertab left join mytab on (uid = id) where something;\n\nwhich is much more amenable to join optimization. There are some\npitfalls though, particularly that you'd have to be able to prove that\nthe function's query couldn't return more than one row (else the join\nmight produce more result rows than the original query).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 11:50:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ? "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> It would be interesting sometime to try to teach the planner about\n> inlining SQL-language functions to become joins. That is, given\n> \n> create function id2name(int) returns text as\n> 'select name from mytab where id = $1' language sql stable;\n> \n> select uid, id2name(uid) from othertab where something;\n> \n> I think that in principle this could automatically be converted to\n> \n> select uid, name from othertab left join mytab on (uid = id) where something;\n\nThe Inlining of the function is presumably a side-issue. I have tons of\nqueries that use subqueries in the select list for which the same behaviour\nwould be appropriate.\n\nThings like\n\nselect uid, (select name from mytab where id = uid) as name from othertab ...\n\n\n> There are some pitfalls though, particularly that you'd have to be able to\n> prove that the function's query couldn't return more than one row (else the\n> join might produce more result rows than the original query).\n\nOr just have a special join type that has the desired behaviour in that case.\nIe, pretend the query was really\n\nSELECT * FROM othertab LEFT SINGLE JOIN mytab ...\n\nWhere \"LEFT SINGLE JOIN\" is an imaginary syntax that doesn't actually have to\nexist in the parser, but exists in the planner/executor and behaves\ndifferently in the case of duplicate matches.\n\nActually I could see such a syntax being useful directly too.\n\n-- \ngreg\n\n",
"msg_date": "18 Apr 2005 14:33:15 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> It would be interesting sometime to try to teach the planner about\n>> inlining SQL-language functions to become joins. That is, given\n\n> The Inlining of the function is presumably a side-issue. I have tons of\n> queries that use subqueries in the select list for which the same behaviour\n> would be appropriate.\n\nYeah, I was actually thinking about a two-step process: inline the\nfunction to produce somethig equivalent to a handwritten scalar\nsub-SELECT, and then try to convert sub-SELECTs into joins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 15:50:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ? "
},
{
"msg_contents": "You should re-run the function test using SQL as the function language\ninstead of plpgsql. There might be some performance to be had there.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:30:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
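A sketch of what that re-run could look like, taking the CASE body from the earlier plpgsql test and recasting it as a SQL-language function (the new function name is only for illustration; SQL functions here refer to their argument as $1):

    CREATE FUNCTION id2un_sql(int) RETURNS text AS $$
        SELECT CASE $1 WHEN 0 THEN 'foo' WHEN 1 THEN 'bar' END;
    $$ LANGUAGE sql IMMUTABLE;

    SELECT count(id2un_sql(n)) FROM aaa;

As the replies below note, a simple SQL function like this can be inlined by the planner, which is where most of the reported speedup comes from.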
{
"msg_contents": "On 4/20/05, Jim C. Nasby <[email protected]> wrote:\n> You should re-run the function test using SQL as the function language\n> instead of plpgsql. There might be some performance to be had there.\n\nYay! You're right! I wonder why have I forgotten about LANGUAGE SQL. :)\nIt's 30 seconds vs 5 seconds for CASE ... END insisde PLpgsql vs CASE...END\nLANGUAGE SQL. :) I.e. its almost the same as in-place entered SQL.\n\n Regards,\n Dawid\n",
"msg_date": "Wed, 20 Apr 2005 10:35:48 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "> Yay! You're right! I wonder why have I forgotten about LANGUAGE SQL. :)\n> It's 30 seconds vs 5 seconds for CASE ... END insisde PLpgsql vs CASE...END\n> LANGUAGE SQL. :) I.e. its almost the same as in-place entered SQL.\n> \n> Regards,\n> Dawid\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Wed, 20 Apr 2005 16:52:48 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "> Yay! You're right! I wonder why have I forgotten about LANGUAGE SQL. :)\n> It's 30 seconds vs 5 seconds for CASE ... END insisde PLpgsql vs CASE...END\n> LANGUAGE SQL. :) I.e. its almost the same as in-place entered SQL.\n\nProbably because simple SQL functions get inlined by the optimiser.\n\nChris\n",
"msg_date": "Wed, 20 Apr 2005 16:53:12 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "* Tom Lane <[email protected]> wrote:\n\n<snip>\n> Yeah, I was actually thinking about a two-step process: inline the\n> function to produce somethig equivalent to a handwritten scalar\n> sub-SELECT, and then try to convert sub-SELECTs into joins.\n\n... back to my original question ... \n\nWhat kind of query should I use ?\nIs a join better than a function ? \n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 21:23:58 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
},
{
"msg_contents": "On 4/21/05, Enrico Weigelt <[email protected]> wrote:\n> * Tom Lane <[email protected]> wrote:\n> \n> <snip>\n> > Yeah, I was actually thinking about a two-step process: inline the\n> > function to produce somethig equivalent to a handwritten scalar\n> > sub-SELECT, and then try to convert sub-SELECTs into joins.\n> \n> ... back to my original question ...\n> \n> What kind of query should I use ?\n> Is a join better than a function ?\n\nIt appears that JOINs are usually faster. So if performance is an\nimportant issue, go with JOIN (and VIEWs probably). Functions are nicer\n(in terms off look and feel).\n\n Regards,\n Dawid\n",
"msg_date": "Fri, 22 Apr 2005 12:08:50 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: immutable functions vs. join for lookups ?"
}
] |
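A short sketch tying the thread above together. The table and column names (mytab, othertab, id, uid, name) and the placeholder predicate "something" are taken from the examples quoted in the thread; only the side-by-side arrangement is added here.

-- Lookup written as an explicit join; per the thread, this is usually the
-- fastest form today:
SELECT o.uid, m.name
FROM othertab o
LEFT JOIN mytab m ON m.id = o.uid
WHERE something;

-- The same lookup behind a SQL-language function. As noted above, the planner
-- does not currently rewrite this into a join, so the inner SELECT runs once
-- per output row:
CREATE FUNCTION id2name(int) RETURNS text AS
'SELECT name FROM mytab WHERE id = $1'
LANGUAGE sql STABLE;

SELECT uid, id2name(uid) FROM othertab WHERE something;

-- By contrast, an expression-only SQL function (no table access), such as the
-- CASE ... END wrapper in Dawid's test, can be inlined by the optimiser, which
-- is why the LANGUAGE sql version ran in roughly the same time as in-place SQL
-- while the plpgsql version did not.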
[
{
"msg_contents": "\nWhat are the statistics\nfor tbljobtitle.id and tbljobtitle.clientnum \nI added default_statistics_target = 250 to the config and re-loaded the data\nbase. If that is what you mean?\n\n--- how many distinct values of each, \n\ntbljobtitle.id 6764 for all clients 1018 for SAKS\ntbljobtitle.clientnum 237 distinct clientnums just 1 for SAKS\n\nand are the distributions skewed to a few popular values?\nThere are 3903 distinct values for jobtitle\n\nNot sure if I answered the questions, let me know if you need more info.\nIt appears there are 1018 job titles in the table for saks and 6764 for all\nthe clients. There can be more values as presentation layer can have more\nthen one value for an id. SAKS is not using presentation layer yet as there\nare only 1018 distinct values 1 for each id.\n\nJoel\n\n\n\n",
"msg_date": "Mon, 18 Apr 2005 09:00:48 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: speed of querry?"
}
] |
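A possible follow-up to the statistics question above, sketched rather than taken from the thread: instead of (or in addition to) raising default_statistics_target globally, the target can be raised just for the columns involved. The table and column names are the ones mentioned in the message; the value 250 mirrors the global setting Joel used.

ALTER TABLE tbljobtitle ALTER COLUMN id SET STATISTICS 250;
ALTER TABLE tbljobtitle ALTER COLUMN clientnum SET STATISTICS 250;
ANALYZE tbljobtitle;

-- See what the planner recorded: distinct-value estimates and whether the
-- distribution is skewed toward a few popular values.
SELECT attname, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'tbljobtitle'
  AND attname IN ('id', 'clientnum');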
[
{
"msg_contents": "Another odd thing is when I tried turning off merge joins on the XP desktop\nIt took 32 secs to run compared to the 6 secs it was taking.\nOn the Linux (4proc box) it is now running in 3 secs with the mergejoins\nturned off.\n\nUnfortunately it takes over 2 minutes to actually return the 160,000+ rows.\nI am guessing that is either network (I have gig cards on a LAN) or perhaps\nthe ODBC driver (using PGADMIN III to do the select).\n\nI tried to run on psql on the server but it was putting it out to more.\nIf I do it and use > test.txt will it run it all out so I can get a time?\nDoes it display the time anywhere like in pgadminIII?\n\n",
"msg_date": "Mon, 18 Apr 2005 09:01:43 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: speed of querry?"
},
{
"msg_contents": "On Mon, 18 Apr 2005, Joel Fradkin wrote:\n\n> Another odd thing is when I tried turning off merge joins on the XP desktop\n> It took 32 secs to run compared to the 6 secs it was taking.\n> On the Linux (4proc box) it is now running in 3 secs with the mergejoins\n> turned off.\n>\n> Unfortunately it takes over 2 minutes to actually return the 160,000+ rows.\n> I am guessing that is either network (I have gig cards on a LAN) or perhaps\n> the ODBC driver (using PGADMIN III to do the select).\n>\n> I tried to run on psql on the server but it was putting it out to more.\n> If I do it and use > test.txt will it run it all out so I can get a time?\n> Does it display the time anywhere like in pgadminIII?\n\nRedirecting should turn the pager off. \\timing will add a timing number\nafter queries. If you want to not be bothered by the pager, you can turn\nif off with \\pset pager off.\n\n",
"msg_date": "Mon, 18 Apr 2005 07:30:58 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: speed of querry?"
}
] |
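A sketch of the psql session Stephan describes, run on the server itself so the network and ODBC layers are out of the picture. The SELECT and the output file name are placeholders for the real query.

-- Show elapsed time after each query and keep the pager out of the way:
\timing
\pset pager off

-- Send the result rows to a file instead of the screen:
\o test.txt
SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';
\o

-- Server-side execution time only, without shipping 160,000+ rows to the client:
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';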
[
{
"msg_contents": "Sorry if this posts twice I posted and did not see it hit the list.\n\nWhat are the statistics\nfor tbljobtitle.id and tbljobtitle.clientnum \nI added default_statistics_target = 250 to the config and re-loaded the data\nbase. If that is what you mean?\n\n--- how many distinct values of each, \n\ntbljobtitle.id 6764 for all clients 1018 for SAKS\ntbljobtitle.clientnum 237 distinct clientnums just 1 for SAKS\n\nand are the distributions skewed to a few popular values?\nThere are 3903 distinct values for jobtitle\n\nNot sure if I answered the questions, let me know if you need more info.\nIt appears there are 1018 job titles in the table for saks and 6764 for all\nthe clients. There can be more values as presentation layer can have more\nthen one value for an id. SAKS is not using presentation layer yet as there\nare only 1018 distinct values 1 for each id.\n\nJoel\n\n\n\n",
"msg_date": "Mon, 18 Apr 2005 09:09:21 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: speed of querry?"
}
] |
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Joel Fradkin\n> Sent: 18 April 2005 14:02\n> To: PostgreSQL Perform\n> Subject: FW: [PERFORM] speed of querry?\n> \n> Another odd thing is when I tried turning off merge joins on \n> the XP desktop\n> It took 32 secs to run compared to the 6 secs it was taking.\n> On the Linux (4proc box) it is now running in 3 secs with the \n> mergejoins\n> turned off.\n> \n> Unfortunately it takes over 2 minutes to actually return the \n> 160,000+ rows.\n> I am guessing that is either network (I have gig cards on a \n> LAN) or perhaps\n> the ODBC driver (using PGADMIN III to do the select).\n\npgAdmin III uses libpq, not the ODBC driver.\n\nRegards, Dave\n",
"msg_date": "Mon, 18 Apr 2005 14:18:57 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed of querry?"
},
{
"msg_contents": "pgAdmin III uses libpq, not the ODBC driver.\n\nSorry I am not too aware of all the semantics.\nI guess the question is if it is normal to take 2 mins to get 160K of\nrecords, or is there something else I can do (I plan on limiting the query\nscreens using limit and offset; I realize this will only be effective for\nthe early part of the returned record set, but I believe they don't page\nthrough a bunch of records, they probably add search criteria). But for\nreporting I will need to return all the records and this seems slow to me\n(but it might be in line with what I get now; I will have to do some\nbenchmarking).\n\nThe application is a mixture of .net and asp and will soon have java.\nSo I am using the .net library for the .net pages and the ODBC driver for\nthe asp pages.\n\nI did find using a view for the location join sped up the query a great\ndeal, I will have to see if there are other places I can use that thinking\n(instead of joining on the associate table and its dependants I can just\njoin on a view of that data, etc).\n\nBasically I have a view that does a join from location to district, region\nand division tables. The old viwassoclist had those joined to the assoc\ntable in the viwassoclist, I changed it to use the view I created where the\ntables were joined to the location table and in assoclist I just join to the\nlocation view. This really made a huge difference in speed.\n\nRegards, Dave\n\n",
"msg_date": "Mon, 18 Apr 2005 09:31:57 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed of querry?"
}
] |
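A rough sketch of the restructuring Joel describes: join location to district, region and division once, in a view, and have the associate view join only against that. Everything below is illustrative; the table and column names are assumptions, not the actual schema.

CREATE VIEW viwlocation AS
SELECT l.locationid,
       l.clientnum,
       l.name AS location,
       d.name AS district,
       r.name AS region,
       v.name AS division
FROM tbllocation l
JOIN tbldistrict d ON d.districtid = l.districtid
JOIN tblregion r ON r.regionid = d.regionid
JOIN tbldivision v ON v.divisionid = r.divisionid;

-- The associate list then joins one view instead of four tables:
CREATE VIEW viwassoclist AS
SELECT a.associateid, a.clientnum, a.locationid,
       loc.location, loc.district, loc.region, loc.division
FROM tblassociate a
JOIN viwlocation loc ON loc.locationid = a.locationid
                    AND loc.clientnum = a.clientnum;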
[
{
"msg_contents": "Hi all,\n\nSome months ago i post a similar problem here i it was solved by running\nvaccumdb time by time.\nSo, when i started using the postgres, i never been used the vacuumdb, and\nafter 2 months i started using once a week, after few weeks, i tried once a\nday and now twice a day.\n\nAt this weekend i have started to use pg_autovacuum with default settings.\n\nI really worried about that, because it's no enough anymore, and users claim\nabout performace. But running the vacuumdb full, everthing starts to run\nbetter again, so i think the problem is not related to a specific query.\n\nWhat I can do to check what I have change to get more performance ? \nCould I use vacuum verbose to check what is going on ? So, how ? \n\nMost all the time, even user querying the server the machine is 96%-100%\nidle. The discs are SCSI, FreeBSD 5.3, the size of database is 1.1Gb, max 30\nconnections and 10 concurrent conections. My server have 512Mb Ram and 256Mb\nhas changed to SHMAX. There is max 1000 inserted/excluded/Updated row by\nday.\n\nThese are my kernel params:\n--------------------------\noptions SHMMAXPGS=65536\noptions SEMMNI=40\noptions SEMMNS=240\noptions SEMUME=40\noptions SEMMNU=120\n\nPostgresql.conf non-default settings\n------------------------------------\ntcpip_socket = true\nmax_connections = 30\n\nshared_buffers = 1024\nsort_mem = 2048\nvacuum_mem = 16384\n\nwal_buffers = 16\ncheckpoint_segments = 5\n\neffective_cache_size = 16384\nrandom_page_cost = 2\n\nstats_start_collector = true\nstats_row_level = true\n\n\nI follow the most of all discussions in this group and tried myself change\nthe parameters, but now, I don't know more what to do to get better\nperformance.\n\nThanks a Lot\nRodrigo Moreno\n\n\n\n",
"msg_date": "Mon, 18 Apr 2005 11:36:01 -0300",
"msg_from": "\"Rodrigo Moreno\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to improve postgres performace"
},
{
"msg_contents": "\"Rodrigo Moreno\" <[email protected]> writes:\n> At this weekend i have started to use pg_autovacuum with default settings.\n\n> I really worried about that, because it's no enough anymore, and users claim\n> about performace. But running the vacuumdb full, everthing starts to run\n> better again, so i think the problem is not related to a specific query.\n\nIt sounds like you may not have the FSM settings set large enough for\nyour database. The default settings are only enough for a small DB\n(perhaps a few hundred meg).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 11:58:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve postgres performace "
},
{
"msg_contents": "Tom,\n\nHow to check if the value it's enough ? The log generate by vacuum verbose\ncan help ?\n\nThe current values for:\n\nmax_fsm_pages = 1048576 \nmax_fsm_relations = 1000 \n\nthis is enough ?\n\nRegards,\nRodrigo \n\n-----Mensagem original-----\nDe: Tom Lane [mailto:[email protected]] \nEnviada em: segunda-feira, 18 de abril de 2005 12:58\nPara: Rodrigo Moreno\nCc: [email protected]\nAssunto: Re: [PERFORM] How to improve postgres performace\n\n\"Rodrigo Moreno\" <[email protected]> writes:\n> At this weekend i have started to use pg_autovacuum with default settings.\n\n> I really worried about that, because it's no enough anymore, and users \n> claim about performace. But running the vacuumdb full, everthing \n> starts to run better again, so i think the problem is not related to a\nspecific query.\n\nIt sounds like you may not have the FSM settings set large enough for your\ndatabase. The default settings are only enough for a small DB (perhaps a\nfew hundred meg).\n\n\t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Mon, 18 Apr 2005 13:31:22 -0300",
"msg_from": "\"Rodrigo Moreno\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: How to improve postgres performace"
},
{
"msg_contents": "\"Rodrigo Moreno\" <[email protected]> writes:\n> The current values for:\n> max_fsm_pages = 1048576 \n> max_fsm_relations = 1000 \n> this is enough ?\n\nThat max_fsm_pages value is enough to cover 8Gb, so it should work OK\nfor a database disk footprint up to 10 or so Gb. I don't know how many\ntables in your installation so I can't say if max_fsm_relations is high\nenough, but you can check that by looking at the tail end of the output\nof VACUUM VERBOSE. (Or just count 'em ;-))\n\nOffhand these look reasonable, though, so if you are seeing database\nbloat over time it probably means you need to tweak your autovacuum\nsettings. I'm not much of an autovacuum expert, but maybe someone\nelse can help you there.\n\nYou might want to keep track of physical file sizes over a period of\ntime and try to determine exactly where the bloat is happening.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 13:32:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: How to improve postgres performace "
},
{
"msg_contents": "> That max_fsm_pages value is enough to cover 8Gb, so it should work OK for\na database disk footprint up to 10 or so Gb. > I don't know how many tables\nin your installation so I can't say if max_fsm_relations is high enough, but\nyou can check >that by looking at the tail end of the output of VACUUM\nVERBOSE. (Or just count 'em ;-))\n\nThe last count in vacuum verbose shows me 92 relations, and I know the lower\nvalue for max_fsm_relations is enough, maybe I'll change to 500.\n\n> Offhand these look reasonable, though, so if you are seeing database bloat\nover time it probably means you need to tweak > your autovacuum settings.\nI'm not much of an autovacuum expert, but maybe someone else can help you\nthere.\n\nI'll let the autovacuum running this week to see what happen. \n\n> You might want to keep track of physical file sizes over a period of time\nand try to determine exactly where the bloat > is happening.\n\nThere is two mostly used and bigger tables, I'll keep eyes on both tables.\n\nThanks \nRodrigo Moreno\n\n\n",
"msg_date": "Mon, 18 Apr 2005 14:46:56 -0300",
"msg_from": "\"Rodrigo Moreno\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: RES: How to improve postgres performace"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 11:36:01AM -0300, Rodrigo Moreno wrote:\n> I really worried about that, because it's no enough anymore, and users claim\n> about performace. But running the vacuumdb full, everthing starts to run\n> better again, so i think the problem is not related to a specific query.\n\nVacuum full will skew your results, unless you plan on running vacuum\nfull all the time. This is because you will always have some amount of\ndead tuples in a table that has any update or delete activity. A regular\nvacuum doesn't remove these tuples, it just marks them as available. So\nover time, depending on how frequently a table is vacuumed, it will\nsettle down to a steady-state size that is greater than it's size after\na vacuum full.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:35:16 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve postgres performace"
},
{
"msg_contents": "Hi all,\n\nI need your help to determine the configuration of a server machine, and the\nPG DBMS.\nThere will be not more than 25 concurrent users. They will use a business\nsoftware that accesses tha database. The database will be not that large, it\nseems that none of the tables's recordcount will exceed 1-2 million, but\nthere will be a lot of small (<5000 record) tables. The numbert of tables\nwill be about 300. What server would you install to such a site to make the\ndatabase respond quickly in any case?\nI would like to leave fsync on.\nPerhaps you need some additional information. In this case just indicate it.\n\nThanks in advance,\nOtto\n\n\n",
"msg_date": "Fri, 10 Jun 2005 23:12:56 +0200",
"msg_from": "=?iso-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "need suggestion for server sizing"
},
{
"msg_contents": "Any XServe from Apple will do it.\n\nJ�nos\nOn Jun 10, 2005, at 5:12 PM, Havasv�lgyi Ott� wrote:\n\n> Hi all,\n>\n> I need your help to determine the configuration of a server \n> machine, and the\n> PG DBMS.\n> There will be not more than 25 concurrent users. They will use a \n> business\n> software that accesses tha database. The database will be not that \n> large, it\n> seems that none of the tables's recordcount will exceed 1-2 \n> million, but\n> there will be a lot of small (<5000 record) tables. The numbert of \n> tables\n> will be about 300. What server would you install to such a site to \n> make the\n> database respond quickly in any case?\n> I would like to leave fsync on.\n> Perhaps you need some additional information. In this case just \n> indicate it.\n>\n> Thanks in advance,\n> Otto\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n\n\n------------------------------------------\n\"There was a mighty king in the land of the Huns whose goodness and \nwisdom had no equal.\"\nNibelungen-Lied\n\n",
"msg_date": "Mon, 13 Jun 2005 09:25:30 -0400",
"msg_from": "=?ISO-8859-1?Q?J=E1nos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: need suggestion for server sizing"
},
{
"msg_contents": "Janos,\n\nThank you. Sorry, but I wanted to install Linux on the server, I haven't\nmentioned it. I am not that familiar in the server-world. So, what\nconfiguration is enough on X86 (32 bit) architecture for PostgreSQL with the\nconditions listed in my previous post?\n\nThanks,\nOtto\n\n\n----- Original Message ----- \nFrom: \"J�nos\" <[email protected]>\nTo: \"Havasv�lgyi Ott�\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, June 13, 2005 3:25 PM\nSubject: Re: [NOVICE] need suggestion for server sizing\n\n\nAny XServe from Apple will do it.\n\nJ�nos\nOn Jun 10, 2005, at 5:12 PM, Havasv�lgyi Ott� wrote:\n\n> Hi all,\n>\n> I need your help to determine the configuration of a server\n> machine, and the\n> PG DBMS.\n> There will be not more than 25 concurrent users. They will use a\n> business\n> software that accesses tha database. The database will be not that\n> large, it\n> seems that none of the tables's recordcount will exceed 1-2\n> million, but\n> there will be a lot of small (<5000 record) tables. The numbert of\n> tables\n> will be about 300. What server would you install to such a site to\n> make the\n> database respond quickly in any case?\n> I would like to leave fsync on.\n> Perhaps you need some additional information. In this case just\n> indicate it.\n>\n> Thanks in advance,\n> Otto\n>\n>\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that\n> your\n> message can get through to the mailing list cleanly\n>\n\n\n\n------------------------------------------\n\"There was a mighty king in the land of the Huns whose goodness and\nwisdom had no equal.\"\nNibelungen-Lied\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n",
"msg_date": "Tue, 14 Jun 2005 18:31:09 +0200",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: need suggestion for server sizing"
},
{
"msg_contents": "i currently develop on a winxp laptop. i use cygwin\nand my pgsql version is 7.4.x (x=3, maybe?).\n\ni have to manually start up apache and pgsql. i'm\nspending some time learning linux. i found that i\ncould edit my .bash_profile so i don't have to type\nthe path to cygserver every time i tried to start it.\n\ni'd like to do something similar when using pg_ctl to\nstart and stop the postmaster.\n\ni read the the help files for pg_ctl and it said\nPGDATA was the default if there was no -D flag and\nthen a directy path to the data directory.\n\ni want to set PGDATA path to my DATA directory, but i\ncan't find PGDATA on my system.\n\ncan anyone help here?\n\ntia...\n\n\n\t\t\n__________________________________ \nYahoo! Mail Mobile \nTake Yahoo! Mail with you! Check email on your mobile phone. \nhttp://mobile.yahoo.com/learn/mail \n",
"msg_date": "Tue, 14 Jun 2005 12:18:10 -0700 (PDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "PGDATA"
},
{
"msg_contents": "--- [email protected] wrote:\n\n> i currently develop on a winxp laptop. i use cygwin\n> and my pgsql version is 7.4.x (x=3, maybe?).\n> \n> i have to manually start up apache and pgsql. i'm\n> spending some time learning linux. i found that i\n> could edit my .bash_profile so i don't have to type\n> the path to cygserver every time i tried to start\n> it.\n> \n> i'd like to do something similar when using pg_ctl\n> to\n> start and stop the postmaster.\n> \n> i read the the help files for pg_ctl and it said\n> PGDATA was the default if there was no -D flag and\n> then a directy path to the data directory.\n> \n> i want to set PGDATA path to my DATA directory, but\n> i\n> can't find PGDATA on my system.\n> \n> can anyone help here?\n> \n> tia...\n\ni was able to solve this one. i added...\n\nexport PGDATA=/usr/share/postgresql/data\nexport\nPATH=/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/share/postgresql/data\n\nto my .bash_profile (i want a local configuration).\n\nnow all i have to do in cygwin to start up pgsql is to\ntype the following...\n\nfirst...\ncygserver & \n\nonce cygserver is up and running, i hit [enter] to get\na prompt then... \n\nsecond...\npg_ctl start -o -i\n\nkinda cool for a rookie - and we won't mention i've\nbeen typing the directory path for almost a year... -lol-\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Mail - Helps protect you from nasty viruses. \nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Tue, 14 Jun 2005 12:34:43 -0700 (PDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGDATA - SOLVED"
},
{
"msg_contents": "Mr. Operations, I've had a similar frustrating experience with PGDATA;\nbasically, i can't make it \"stick.\" Once I shut down the postgresql\nserver the PGDATA configuration is cleared and has to be stated again\nbefore starting postgres, or I'll get an error message.\n\nSo I had to insert the command into the system start-up process.\nIn Gentoo, it's /etc/conf.d/local.start\nand the command that works is ---\n\nexport PGDATA=/var/lib/postgresql/data\n\nMike\n\nOn 6/14/05, [email protected] <[email protected]> wrote:\n> --- [email protected] wrote:\n> \n> > i currently develop on a winxp laptop. i use cygwin\n> > and my pgsql version is 7.4.x (x=3, maybe?).\n> >\n> > i have to manually start up apache and pgsql. i'm\n> > spending some time learning linux. i found that i\n> > could edit my .bash_profile so i don't have to type\n> > the path to cygserver every time i tried to start\n> > it.\n> >\n> > i'd like to do something similar when using pg_ctl\n> > to\n> > start and stop the postmaster.\n> >\n> > i read the the help files for pg_ctl and it said\n> > PGDATA was the default if there was no -D flag and\n> > then a directy path to the data directory.\n> >\n> > i want to set PGDATA path to my DATA directory, but\n> > i\n> > can't find PGDATA on my system.\n> >\n> > can anyone help here?\n> >\n> > tia...\n> \n> i was able to solve this one. i added...\n> \n> export PGDATA=/usr/share/postgresql/data\n> export\n> PATH=/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/share/postgresql/data\n> \n> to my .bash_profile (i want a local configuration).\n> \n> now all i have to do in cygwin to start up pgsql is to\n> type the following...\n> \n> first...\n> cygserver &\n> \n> once cygserver is up and running, i hit [enter] to get\n> a prompt then...\n> \n> second...\n> pg_ctl start -o -i\n> \n> kinda cool for a rookie - and we won't mention i've\n> been typing the directory path for almost a year... -lol-\n> \n> \n> \n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Mail - Helps protect you from nasty viruses.\n> http://promotions.yahoo.com/new_mail\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Tue, 14 Jun 2005 16:41:45 -0400",
"msg_from": "Mike <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGDATA - SOLVED"
}
] |
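Pulling together the FSM advice in the thread above, a sketch of how the numbers can be checked. The exact wording of the VACUUM VERBOSE summary differs between releases, and the two table names are placeholders for the "two bigger, heavily used tables" Rodrigo mentions.

-- Run database-wide as a superuser; the final lines of the output report how
-- many relations and pages the free space map needs compared with the current
-- max_fsm_relations / max_fsm_pages allocation:
VACUUM VERBOSE;

-- Track the physical size of the busiest tables over time (relpages counts
-- 8 kB blocks and is refreshed by VACUUM/ANALYZE), to see whether plain
-- vacuums hold them at a steady state or they keep bloating:
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('big_table_1', 'big_table_2');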
[
{
"msg_contents": "> -----Original Message-----\n> From: Greg Stark [mailto:[email protected]]\n> Sent: Monday, April 18, 2005 9:59 AM\n> To: William Yu\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> William Yu <[email protected]> writes:\n> \n> > Using the above prices for a fixed budget for RAID-10, you \n> > could get:\n> > \n> > SATA 7200 -- 680MB per $1000\n> > SATA 10K -- 200MB per $1000\n> > SCSI 10K -- 125MB per $1000\n> \n> What a lot of these analyses miss is that cheaper == faster \n> because cheaper means you can buy more spindles for the same\n> price. I'm assuming you picked equal sized drives to compare\n> so that 200MB/$1000 for SATA is almost twice as many spindles\n> as the 125MB/$1000. That means it would have almost double\n> the bandwidth. And the 7200 RPM case would have more than 5x\n> the bandwidth.\n> [...]\n\nHmm...so you're saying that at some point, quantity beats quality?\nThat's an interesting point. However, it presumes that you can\nactually distribute your data over a larger number of drives. If\nyou have a db with a bottleneck of one or two very large tables,\nthe extra spindles won't help unless you break up the tables and\nglue them together with query magic. But it's still a point to\nconsider.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 18 Apr 2005 10:20:36 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 10:20:36AM -0500, Dave Held wrote:\n> Hmm...so you're saying that at some point, quantity beats quality?\n> That's an interesting point. However, it presumes that you can\n> actually distribute your data over a larger number of drives. If\n> you have a db with a bottleneck of one or two very large tables,\n> the extra spindles won't help unless you break up the tables and\n> glue them together with query magic. But it's still a point to\n> consider.\n\nHuh? Do you know how RAID10 works?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:05:28 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |