[ { "msg_contents": "Hi,\n\nAnother \"funny\" thing: I have a query which runs\non (Linux) PostgreSQL 7.4.x under 10 sec. I tried\nto run it on (Windows) PostgreSQL 8.0 yesterday.\nIt didn't finished at all! (I shoot it down after 10 minutes)\nI made various tests and I figured out something interesting:\nThe same query with:\n\tA, \"history.undo_action_id > 0\" runs in 10 sec.\t\n\tB, \"history.undo_action_id is not null\" runs in 10 sec.\t\n\tC, \"history.undo_action_id is null\" runs forever (?!)\nI used EXPLAIN but I couldn't figure out what the problem was.\nIn every explain output are 3 lines:\n\" -> Index Scan using speed_3 on history (.........)\"\n\" Index Cond: (type_id = 6)\"\n\" Filter: (undo_action_id IS NOT NULL)\"\nwhere \"speed_3\" is a btree index on history.type_id. There is also an index\nfor history.undo_action_id (btree) but it is not used.\n\nThe tables are well indexed, and have about 200.000 records.\n\nThe SQL file and the 3 scenarios are in attachment.\n\nHelp, anyone?\n\nVig Sándor\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.", "msg_date": "Fri, 25 Feb 2005 11:10:35 +0100", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "IS NULL vs IS NOT NULL" }, { "msg_contents": "\nOn Fri, 25 Feb 2005, Vig, Sandor (G/FI-2) wrote:\n\n> Hi,\n>\n> Another \"funny\" thing: I have a query which runs\n> on (Linux) PostgreSQL 7.4.x under 10 sec. I tried\n> to run it on (Windows) PostgreSQL 8.0 yesterday.\n> It didn't finished at all! (I shoot it down after 10 minutes)\n> I made various tests and I figured out something interesting:\n> The same query with:\n> \tA, \"history.undo_action_id > 0\" runs in 10 sec.\n> \tB, \"history.undo_action_id is not null\" runs in 10 sec.\n> \tC, \"history.undo_action_id is null\" runs forever (?!)\n> I used EXPLAIN but I couldn't figure out what the problem was.\n\nEXPLAIN ANALYZE would be more useful. My first guess would be that the IS\nNULL is returning many more than the estimated 1 row and as such a nested\nloop is a bad plan. How many history rows match type_id=6 and\nundo_action_id is null?\n\n\n", "msg_date": "Fri, 25 Feb 2005 06:56:10 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IS NULL vs IS NOT NULL" } ]
[ { "msg_contents": "-huh-\n\nA lot of High-Tech ideas. But there is another way:\nSimply measure the current IO load (pro DB if you must), \nmake an estimation how it could change in the future \n(max. 3 years) and make a worst case scenario.\n\nThan you should make a new array each time the\nworst case scenario hits the IO bottleneck of your\nconfig. (I mean the random read/write bandwith of\na raid array) than make so many raid arrays you \nneed. It's just that simple. :-)))\n\nYou should/must redesign it in every 3 years, that's for sure.\n\nVig Sándor\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of John Arbash\nMeinel\nSent: Thursday, February 24, 2005 8:41 PM\nTo: John Allgood\nCc: [email protected]\nSubject: Re: [PERFORM] Peformance Tuning Opterons/ Hard Disk Layout\n\n\nJohn Allgood wrote:\n\n> Hello Again\n>\n> In the below statement you mention putting each database on its own\n> raid mirror.\n>\n> \"However, sticking with your arrangement, it would seem that you might be\n> able to get some extra performance if each database is on it's own raid,\n> since you are fairly likely to have 2 transactions occuring at the same\n> time, that don't affect eachother (since you wouldn't have any foreign\n> keys, etc on 2 separate databases.)\"\n>\n> That would take alot of disk drives to accomplish. I was thinking\n> maybe putting three or four databases on each raid and dividing the\n> heaviest used databases on each mirrored set. And for each of these\n> sets have its own mirror for pg_xlog. My question is what is the best\n> way to setup postgres databases on different disks. I have setup\n> multiple postmasters on this system as a test. The only problem was\n> configuring each databases \"ie postgresql.conf, pg_hba.conf\". Is\n> there anyway in postgres to have everything in one cluster and have it\n> seperated onto multiple drives. Here is a example of what is was\n> thinking about.\n>\nI think this is something that you would have to try and see what works.\nMy first feeling is that 8-disks in RAID10 is better than 4 sets of RAID1.\n\n> MIRROR1 - Database Group 1\n> MIRROR2 - pg_xlog for database group 1\n> MIRROR3 - Database Group 2\n> MIRROR4 - pg_xlog for database group 2\n> MIRROR5 - Database Group 3\n> MIRROR6 - pg_xlog for database group 3\n>\n> This will take about 12 disk drives. I have a 14 bay Storage Bay I can\n> use two of the drives for hotspare's.\n>\nI would have all of them in 1 database cluster, which means they are all\nserved by the same postgres daemon. Which I believe means that they all\nuse the same pg_xlog. That means you only need 1 raid for pg_xlog,\nthough I would make it a 4-drive RAID10. 
(RAID1 is redundant, but\nactually slower on writes, you need the 0 to speed up reading/writing, I\ncould be wrong).\n\nI believe you can still split each database onto it's own raid later on\nif you find that you need to.\n\nSo this is my proposal 1:\nOS RAID (sounds like this is not in the Storage Bay).\n4-drives RAID10 pg_xlog\n8-drives RAID10 database cluster\n2-drives Hot spares / RAID1\n\nIf you feel like you want to partition your databases, you could also do\nproposal 2:\n4-drives RAID10 pg_xlog\n4-drives RAID10 databases master + 1-4\n4-drives RAID10 databases 5-9\n2-drives hotspare / RAID1\n\nIf you think partitioning is better than striping, you could do proposal 3:\n4-drives RAID10 pg_xlog\n2-drives RAID1 master database\n2-drives RAID1 databases 1,2,3\n2-drives RAID1 databases 4,5\n2-drives RAID1 databases 6,7\n2-drives RAID1 databases 8,9\n\nThere are certainly a lot of potential arrangements here, and it's not\nlike I've tried a lot of them. pg_xlog seems like a big enough\nbottleneck that it would be good to put it on it's own RAID10, to make\nit as fast as possible.\n\nIt also depends a lot on whether you will be write heavy/read heavy,\netc. RAID5 works quite well for reading, very poor for writing. But if\nthe only reason to have the master database is to perform read heavy\nqueries, and all the writing is done at night in bulk fashion with\ncareful tuning to avoid saturation, then maybe you would want to put the\nmaster database on a RAID5 so that you can get extra disk space.\nYou could do proposal 4:\n4-drive RAID10 pg_xlog\n4-drive RAID5 master db\n2-drive RAID1 dbs 1-3\n2-drive RAID1 dbs 4-6\n2-drive RAID1 dbs 7-9\n\nYou might also do some testing and find that pg_xlog doesn't deserve\nit's own 4 disks, and they would be better off in the bulk tables.\n\nUnfortunately a lot of this would come down to performance testing on\nyour dataset, with a real data load. Which isn't very easy to do.\nI personally like the simplicity of proposal 1.\n\nJohn\n=:->\n\n>\n> Thanks\n>\n> John Allgood - ESC\n> Systems Administrator\n\n\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n", "msg_date": "Fri, 25 Feb 2005 12:19:19 +0100", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" } ]
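Since the question was how to keep one cluster while spreading databases across drives: PostgreSQL 8.0 added tablespaces for exactly this. A minimal sketch, with paths, tablespace names, and table names all illustrative; the directories must already exist and be owned by the postgres user. Note that pg_xlog is not covered by tablespaces, so moving it to its own mirror is still done at the filesystem level (move the directory and leave a symlink while the server is stopped).

    -- one tablespace per array/mount point
    CREATE TABLESPACE raid_a LOCATION '/raid_a/pgdata';
    CREATE TABLESPACE raid_b LOCATION '/raid_b/pgdata';

    -- place a whole database on a given array...
    CREATE DATABASE salesdb TABLESPACE = raid_a;

    -- ...or move an individual table later if it turns out to be hot
    ALTER TABLE bigtable SET TABLESPACE raid_b;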
[ { "msg_contents": "Given some recent posts / irc issues with dead tuple bloat..\n\nAnd given that a lot of these people have at least been smart enough to \nexplain analyze would it be A. possible B. useful C. None of the above \nto have various \"scan\" nodes of explain analyze also report how many \ninvisible / dead tuples they had to disqualify (Just to clarify, they \nmatched the search criteria, but were invisible due to MVCC rules). \nSome thing like:\n\n Seq Scan on boards (cost=0.00..686.30 rows=25430 width=0) (actual \ntime=8.866..5407.693 rows=18636 loops=1 invisiblerows=8934983098294)\n\nThis may help us to point out tuple bloat issues quicker... or it may \ngive the developer enough of a clue to search around and find out he \nneeds to vacuum... hmm.. but once we have an integrated autovacuum it \nwill be a moot point.....\n\nAlso another thing I started working on back in the day and hope to \nfinish when I get time (that is a funny idea) is having explain analyze \nreport when a step required the use of temp files.\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Fri, 25 Feb 2005 08:49:23 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Possible interesting extra information for explain analyze?" }, { "msg_contents": "Jeff <[email protected]> writes:\n> Given some recent posts / irc issues with dead tuple bloat..\n> And given that a lot of these people have at least been smart enough to \n> explain analyze would it be A. possible B. useful C. None of the above \n> to have various \"scan\" nodes of explain analyze also report how many \n> invisible / dead tuples they had to disqualify (Just to clarify, they \n> matched the search criteria, but were invisible due to MVCC rules). \n\nI think this would not help a whole lot because (particularly on\nindexscans) you won't get a very accurate picture of the true extent\nof bloat. The contrib/pgstattuple utility is more useful for measuring\nthat sort of thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Feb 2005 11:05:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible interesting extra information for explain analyze? " }, { "msg_contents": "On Fri, 2005-02-25 at 08:49 -0500, Jeff wrote:\n> Also another thing I started working on back in the day and hope to \n> finish when I get time (that is a funny idea) is having explain analyze \n> report when a step required the use of temp files.\n\nSounds useful. Please work on it...\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 27 Feb 2005 20:25:45 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible interesting extra information for explain" } ]
[ { "msg_contents": "Hello,\n\nI'm experiencing performance problems with 7.4.3 on OpenBSD 3.6, at\nleast I think so. It is running on a Xeon 3 GHz with 2 GB RAM.\n\nI have a table with 22 columns, all integer, timestamp or varchar and\n10 indizes on integer, timestamp and varchar columns.\n\nThe table got 8500 rows (but growing). I try to make an UPDATE on the\ntable with 7000 affected rows. This update takes about 2-6 seconds.\n\nHas it to be that slow? I'm running the same query on MySQL or Oracle\ndatabases faster on similar machines.\n\nEXPLAIN ANALYZE UPDATE ... tells me:\nQUERY PLAN:\nSeq Scan on table (cost=0.00..286.57 rows=4804 width=146) (actual\ntime=405.206..554.433 rows=7072 loops=1)\nFilter: (system_knoten_links > 3501)\nTotal runtime: 2928.500 ms\n\nSo that looks fine to me, except the runtime.\n\nWithout indizes the query is fast with 456 ms.\nTrying to disable fsync to avoid some disc operations aren't helping.\n\nSincerely TIA,\nGlenn\n\n\n\n", "msg_date": "Sat, 26 Feb 2005 13:13:07 +0100", "msg_from": "Glenn Kusardi <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.4.3 Performance issues on OpenBSD" } ]
[ { "msg_contents": "Hi, All\n\nI'm trying to tune a software RAID 0 (striped) on a solaris 9, sparc box. Currently I'm using a raid 1 (mirrored) array on two discs for the data area,\nand I put in 4 new drives last night (all are f-cal). On the new array I have a width of 4, and used the default interleave factor of 32k. I believe\na smaller interleave factor may get me better read performance (I'm seeing a bulk load performance increase of about 35% but a 7-8x worse read performance\nbetween the two RAID setups.)\n\nConventional wisdom is using an interleave factor < = db default block size gives the best read performance. I would like to try that (though this testing\nis burning a lot of daylight, since I'll have to reload the db every time I remake the RAID.)\n\nQuestion: what't the best block size to use for postgresql on solaris? (I'm using 7.4.5)\n", "msg_date": "Sun, 27 Feb 2005 14:32:01 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": true, "msg_subject": "PG block sizes" } ]
[ { "msg_contents": "Hi *,\n\nI am looking for the fastest wal_sync_method (postgres 8, Linux (Redhat) 2.4.29, ext3, SCSI HW-Raid 5).\n\nAny experiences and/or tips?.\n\nThanks in advance\n\nStefan\n\n\n\n\n\n\nHi *,\n \nI am looking for the fastest wal_sync_method \n(postgres 8, Linux (Redhat) 2.4.29, ext3, SCSI HW-Raid 5).\n \nAny experiences and/or tips?.\n \nThanks in advance\n \nStefan", "msg_date": "Mon, 28 Feb 2005 22:23:10 +0100", "msg_from": "\"Stefan Hans\" <[email protected]>", "msg_from_op": true, "msg_subject": "wal_sync_methods" } ]
[ { "msg_contents": "Trying to determine the best overall approach for the following\nscenario:\n\nEach month our primary table accumulates some 30 million rows (which\ncould very well hit 60+ million rows per month by year's end). Basically\nthere will end up being a lot of historical data with little value\nbeyond archival.\n\nThe question arises then as the best approach of which I have enumerated\nthree:\n\n1) Just allow the records to accumulate and maintain constant vacuuming,\netc allowing for the fact that most queries will only be from a recent\nsubset of data and should be mostly cached.\n\n2) Each month:\nSELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE\ntargetdate < $3monthsago;\nDELETE FROM bigtable where targetdate < $3monthsago;\nVACUUM ANALYZE bigtable;\npg_dump 3monthsago_dynamically_named_table for archiving;\n\n3) Each month:\nCREATE newmonth_dynamically_named_table (like mastertable) INHERITS\n(mastertable);\nmodify the copy.sql script to copy newmonth_dynamically_named_table;\npg_dump 3monthsago_dynamically_named_table for archiving;\ndrop table 3monthsago_dynamically_named_table;\n\nAny takes on which approach makes most sense from a performance and/or\nmaintenance point of view and are there other options I may have missed?\n\nSven Willenberger\n\n", "msg_date": "Mon, 28 Feb 2005 18:59:13 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Inheritence versus delete from" }, { "msg_contents": "Sven Willenberger <[email protected]> writes:\n> 3) Each month:\n> CREATE newmonth_dynamically_named_table (like mastertable) INHERITS\n> (mastertable);\n> modify the copy.sql script to copy newmonth_dynamically_named_table;\n> pg_dump 3monthsago_dynamically_named_table for archiving;\n> drop table 3monthsago_dynamically_named_table;\n\nA number of people use the above approach. It's got some limitations,\nmainly that the planner isn't super bright about what you are doing\n--- in particular, joins involving such a table may work slowly.\n\nOn the whole I'd probably go with the other approach (one big table).\nA possible win is to use CLUSTER rather than VACUUM ANALYZE to recover\nspace after your big deletes; however this assumes that you can schedule\ndowntime to do the CLUSTERs in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Feb 2005 20:07:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritence versus delete from " }, { "msg_contents": "Sven Willenberger wrote:\n\n>Trying to determine the best overall approach for the following\n>scenario:\n>\n>Each month our primary table accumulates some 30 million rows (which\n>could very well hit 60+ million rows per month by year's end). Basically\n>there will end up being a lot of historical data with little value\n>beyond archival.\n>\n>\n>\nIf this statement is true, then 2 seems the best plan.\n\n>2) Each month:\n>SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE\n>targetdate < $3monthsago;\n>DELETE FROM bigtable where targetdate < $3monthsago;\n>VACUUM ANALYZE bigtable;\n>pg_dump 3monthsago_dynamically_named_table for archiving;\n>\n>\n>\nIt seems like this method would force the table to stay small, and would\nkeep your queries fast. 
But if you ever actually *need* the old data,\nthen you start having problems.\n\n...\n\nI think (3) would tend to force a whole bunch of joins (one for each\nchild table), rather than just one join against 3months of data.\n\n>Any takes on which approach makes most sense from a performance and/or\n>maintenance point of view and are there other options I may have missed?\n>\n>Sven Willenberger\n>\n>\nIf you can get away with it 2 is the best.\n\nJohn\n=:->", "msg_date": "Mon, 28 Feb 2005 19:41:20 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritence versus delete from" }, { "msg_contents": "On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:\n> Sven Willenberger wrote:\n> \n> >Trying to determine the best overall approach for the following\n> >scenario:\n> >\n> >Each month our primary table accumulates some 30 million rows (which\n> >could very well hit 60+ million rows per month by year's end). Basically\n> >there will end up being a lot of historical data with little value\n> >beyond archival.\n> >\n> >The question arises then as the best approach of which I have enumerated\n> >three:\n> >\n> \n> I just thought of another possibility. You could create each table\n> month-by-month, and then use a view to combine them, and possibly a rule\n> to keep things clean.\n> \n> So you would do something like:\n> \n> I will assume you already have the data in one big table to show the\n> easiest way to create the small tables.\n> \n> create table tblname-2005-01 as select * from orig_tbl where day >=\n> '2005-01-01' and day < '2005-02-01';\n> create table tblname-2005-02 as select * from orig_tbl where day >=\n> '2005-02-01' and day < '2005-03-01';\n> create table tblname-2005-03 as select * from orig_tbl where day >=\n> '2005-03-01' and day < '2005-04-01';\n> -- create appropriate indicies, rules, constraints on these tables\n> \n> Then you create a view which includes all of these tables.\n> \n> create or replace view tblname as\n> select * from tblname-2005-01\n> union all select * from tblname-2005-02\n> union all select * from tblname-2005-03\n> ;\n> \n> Then create insert and update rules which fixe which table gets the new\n> data.\n> \n> create rule up_tblname as on update to tblname do instead\n> update tblname-2005-03 set\n> col1 = NEW.col1,\n> col2 = NEW.col2,\n> ...\n> where id = NEW.id;\n> -- This assumes that you have a unique id on your tables. This is just\n> whatever your\n> -- primary key is, so it should be a decent assumption.\n> \n> create rule ins_tblname as on insert to tblname do instead\n> insert into tblname-2005-03 (col1, col2, ...)\n> values (new.col1, new.col2, ...);\n> \n> Now the downside of this method, is that every month you need to create\n> a new table, and then update the views and the rules. The update rules\n> are pretty straightforward, though.\n> \n> The nice thing is that it keeps your data partitioned, and you don't\n> ever have a large select/delete step. You probably will want a small one\n> each month to keep the data exactly aligned by month. You don't really\n> have to have exact alignments, but as humans, we tend to like that stuff. :)\n> \n> Probably this is more overhead than you would like to do. Especially if\n> you know that you can get away with method 2 (keep 1 big table, and just\n> remove old rows out of it every month.)\n> \n> But this method means that all of your data stays live, but queries with\n> appropriate restrictions should stay fast. 
You also have the ability\n> (with v8.0) to move the individual tables onto separate disks.\n> \n> One more time, though, if you can get away with removing old data and\n> just archiving it, do so. But if you want to keep the data live, there\n> are a couple of alternatives.\n> \n\nActually that was the thought behind my using inheritance; when querying\nthe <bigtable>, it basically does a union all; also, I think it would be\nquicker to insert directly into the child table (simply by modifying my\nquery once a month) rather than the overhead sustained by the rule.\n\nSince the children tables are individual tables, all the benefits you\ncite above still hold. \n\nThanks for the input on this ... will have to try a couple things to see\nwhich is most manageable.\\\n\nSven\n\n", "msg_date": "Tue, 01 Mar 2005 11:27:52 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inheritence versus delete from" }, { "msg_contents": "Sven Willenberger wrote:\n\n>On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:\n>\n>\n>>Sven Willenberger wrote:\n>>\n>>\n>>\n>>>Trying to determine the best overall approach for the following\n>>>scenario:\n>>>\n>>>Each month our primary table accumulates some 30 million rows (which\n>>>could very well hit 60+ million rows per month by year's end). Basically\n>>>there will end up being a lot of historical data with little value\n>>>beyond archival.\n>>>\n>>>The question arises then as the best approach of which I have enumerated\n>>>three:\n>>>\n>>>\n>>>\n>>I just thought of another possibility. You could create each table\n>>month-by-month, and then use a view to combine them, and possibly a rule\n>>to keep things clean.\n>>\n>>So you would do something like:\n>>\n>>I will assume you already have the data in one big table to show the\n>>easiest way to create the small tables.\n>>\n>>create table tblname-2005-01 as select * from orig_tbl where day >=\n>>'2005-01-01' and day < '2005-02-01';\n>>create table tblname-2005-02 as select * from orig_tbl where day >=\n>>'2005-02-01' and day < '2005-03-01';\n>>create table tblname-2005-03 as select * from orig_tbl where day >=\n>>'2005-03-01' and day < '2005-04-01';\n>>-- create appropriate indicies, rules, constraints on these tables\n>>\n>>Then you create a view which includes all of these tables.\n>>\n>>create or replace view tblname as\n>> select * from tblname-2005-01\n>> union all select * from tblname-2005-02\n>> union all select * from tblname-2005-03\n>>;\n>>\n>>Then create insert and update rules which fixe which table gets the new\n>>data.\n>>\n>>create rule up_tblname as on update to tblname do instead\n>> update tblname-2005-03 set\n>> col1 = NEW.col1,\n>> col2 = NEW.col2,\n>> ...\n>> where id = NEW.id;\n>>-- This assumes that you have a unique id on your tables. This is just\n>>whatever your\n>>-- primary key is, so it should be a decent assumption.\n>>\n>>create rule ins_tblname as on insert to tblname do instead\n>> insert into tblname-2005-03 (col1, col2, ...)\n>> values (new.col1, new.col2, ...);\n>>\n>>Now the downside of this method, is that every month you need to create\n>>a new table, and then update the views and the rules. The update rules\n>>are pretty straightforward, though.\n>>\n>>The nice thing is that it keeps your data partitioned, and you don't\n>>ever have a large select/delete step. You probably will want a small one\n>>each month to keep the data exactly aligned by month. 
You don't really\n>>have to have exact alignments, but as humans, we tend to like that stuff. :)\n>>\n>>Probably this is more overhead than you would like to do. Especially if\n>>you know that you can get away with method 2 (keep 1 big table, and just\n>>remove old rows out of it every month.)\n>>\n>>But this method means that all of your data stays live, but queries with\n>>appropriate restrictions should stay fast. You also have the ability\n>>(with v8.0) to move the individual tables onto separate disks.\n>>\n>>One more time, though, if you can get away with removing old data and\n>>just archiving it, do so. But if you want to keep the data live, there\n>>are a couple of alternatives.\n>>\n>>\n>>\n>\n>Actually that was the thought behind my using inheritance; when querying\n>the <bigtable>, it basically does a union all; also, I think it would be\n>quicker to insert directly into the child table (simply by modifying my\n>query once a month) rather than the overhead sustained by the rule.\n>\n>Since the children tables are individual tables, all the benefits you\n>cite above still hold.\n>\n>Thanks for the input on this ... will have to try a couple things to see\n>which is most manageable.\\\n>\n>Sven\n>\n>\n\nYou're right, child tables to act like that. I just recall that at least\nat one point, postgres didn't handle indexes with child tables very\nwell. That's more just what someone else ran into, so he could have been\ndoing something wrong.\nI agree, if child tables end up doing a union all, then it is much\neasier to maintain. A select against the master table should\nautomatically get all of the child tables.\nIt might just be that you need to create a new index on the child table\nwhenever you create it, and then postgres can use that new index to do\nthe filtering.\n\nJohn\n=:->", "msg_date": "Tue, 01 Mar 2005 10:41:40 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritence versus delete from" }, { "msg_contents": "Sven Willenberger wrote:\n> Trying to determine the best overall approach for the following\n> scenario:\n> \n> Each month our primary table accumulates some 30 million rows (which\n> could very well hit 60+ million rows per month by year's end). Basically\n> there will end up being a lot of historical data with little value\n> beyond archival.\n> \n> The question arises then as the best approach of which I have enumerated\n> three:\n> \n> 1) Just allow the records to accumulate and maintain constant vacuuming,\n> etc allowing for the fact that most queries will only be from a recent\n> subset of data and should be mostly cached.\n> \n> 2) Each month:\n> SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE\n> targetdate < $3monthsago;\n> DELETE FROM bigtable where targetdate < $3monthsago;\n> VACUUM ANALYZE bigtable;\n> pg_dump 3monthsago_dynamically_named_table for archiving;\n\n\nIn my experience copy/delete in a single transaction 60+ million rows\nis not feseable, at least on my 1 GB ram, 2 way CPU box.\n\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Wed, 02 Mar 2005 01:56:52 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritence versus delete from" } ]
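For reference, the inheritance variant (option 3) discussed in this thread needs only a few statements per month. The names below follow the thread's bigtable/targetdate example; remember that indexes are not inherited, so each child needs its own, and on 7.4/8.0 a query against the parent simply appends all children (no partition pruning), relying on each child's index for the date filter.

    -- once per month:
    CREATE TABLE bigtable_2005_03 () INHERITS (bigtable);
    CREATE INDEX bigtable_2005_03_targetdate_idx
        ON bigtable_2005_03 (targetdate);

    -- point the monthly COPY/INSERT job directly at the new child table.

    -- queries against the parent automatically include every child:
    --   SELECT ... FROM bigtable WHERE targetdate >= '2005-03-01';

    -- archiving an old month:
    --   pg_dump -t bigtable_2004_12 dbname > bigtable_2004_12.sql
    --   then: DROP TABLE bigtable_2004_12;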
[ { "msg_contents": "Hi all,\n\nI am doing research for a project of mine where I need to store several \nbillion values for a monitoring and historical tracking system for a big \ncomputer system. My currect estimate is that I have to store (somehow) \naround 1 billion values each month (possibly more).\n\nI was wondering if anyone has had any experience with these kind of big \nnumbers of data in a postgres sql database and how this affects database \ndesign and optimization.\n\nWhat would be important issues when setting up a database this big, and \nis it at all doable? Or would it be a insane to think about storing up \nto 5-10 billion rows in a postgres database.\n\nThe database's performance is important. There would be no use in \nstoring the data if a query will take ages. Query's should be quite fast \nif possible.\n\nI would really like to hear people's thoughts/suggestions or \"go see a \nshrink, you must be mad\" statements ;)\n\nKind regards,\n\nRamon Bastiaans\n\n\n", "msg_date": "Tue, 01 Mar 2005 10:34:29 +0100", "msg_from": "Ramon Bastiaans <[email protected]>", "msg_from_op": true, "msg_subject": "multi billion row tables: possible or insane?" }, { "msg_contents": "Ramon Bastiaans schrieb:\n> My currect estimate is that I have to store (somehow) \n> around 1 billion values each month (possibly more).\n\nYou should post the actual number or power of ten,\nsince \"billion\" is not always interpreted the same way...\n\nrgds\n\nthomas\n", "msg_date": "Tue, 01 Mar 2005 13:40:23 +0100", "msg_from": "Thomas Ganss\n\t<tganss_at_t_dash_online_dot_de-remove-all-after-first-real-dash@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "\nOn Mar 1, 2005, at 4:34 AM, Ramon Bastiaans wrote:\n>\n> What would be important issues when setting up a database this big, \n> and is it at all doable? Or would it be a insane to think about \n> storing up to 5-10 billion rows in a postgres database.\n>\n\nBuy a bunch of disks.\nAnd then go out and buy more disks.\nWhen you are done with that - go buy some more disks.\nThen buy some ram.\nThen buy more disks.\n\nYou want the fastest IO possible.\n\nI'd also recommend the opteron route since you can also put heaping \ngobules of ram in there as well.\n\n> The database's performance is important. There would be no use in \n> storing the data if a query will take ages. Query's should be quite \n> fast if possible.\n>\n\nAnd make sure you tune your queries.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Tue, 1 Mar 2005 08:37:16 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Hi, Ramon,\n\nRamon Bastiaans schrieb:\n\n> The database's performance is important. There would be no use in\n> storing the data if a query will take ages. Query's should be quite fast\n> if possible.\n\nWhich kind of query do you want to run?\n\nQueries that involve only a few rows should stay quite fast when you set\nup the right indices.\n\nHowever, queries that involve sequential scans over your table (like\naverage computation) will take years. Get faaaaaast I/O for this. Or,\nbetter, use a multidimensional data warehouse engine. Those can\nprecalculate needed aggregate functions and reports. 
But they need loads\nof storage (because of very redundant data storage), and I don't know\nany open source or cheap software.\n\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 z�rich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Tue, 01 Mar 2005 15:01:50 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Ramon Bastiaans wrote:\n\n> Hi all,\n>\n> I am doing research for a project of mine where I need to store\n> several billion values for a monitoring and historical tracking system\n> for a big computer system. My currect estimate is that I have to store\n> (somehow) around 1 billion values each month (possibly more).\n>\nIf you have that 1 billion perfectly distributed over all hours of the\nday, then you need 1e9/30/24/3600 = 385 transactions per second.\n\nWhich I'm pretty sure is possible with postgres, you just need pretty\nbeefy hardware. And like Jeff said, lots of disks for lots of IO.\nLike a quad opteron, with 16GB of ram, and around 14-20 very fast disks.\nraid10 not raid5, etc. To improve query performance, you can do some\nload balancing by having replication machines by using Slony.\n\nOr if you can do batch processing, you could split up the work into a\nfew update machines, which then do bulk updates on the master database.\nThis lets you get more machines into the job, since you can't share a\ndatabase across multiple machines.\n\n> I was wondering if anyone has had any experience with these kind of\n> big numbers of data in a postgres sql database and how this affects\n> database design and optimization.\n>\nWell, one of the biggest things is if you can get bulk updates, or if\nclients can handle data being slightly out of date, so you can use\ncacheing. Can you segregate your data into separate tables as much as\npossible? Are your clients okay if aggregate information takes a little\nwhile to update?\n\nOne trick is to use semi-lazy materialized views to get your updates to\nbe fast.\n\n> What would be important issues when setting up a database this big,\n> and is it at all doable? Or would it be a insane to think about\n> storing up to 5-10 billion rows in a postgres database.\n\nI think you if you can design the db properly, it is doable. But if you\nhave a clients saying \"I need up to the second information on 1 billion\nrows\", you're never going to get it.\n\n>\n> The database's performance is important. There would be no use in\n> storing the data if a query will take ages. Query's should be quite\n> fast if possible.\n>\nAgain, it depends on the queries being done.\nThere are some nice tricks you can use, like doing a month-by-month\npartitioning (if you are getting 1G inserts, you might want week-by-week\npartitioning), and then with a date column index, and a union all view\nyou should be able to get pretty good insert speed, and still keep fast\n*recent* queries. Going through 1billion rows is always going to be\nexpensive.\n\n> I would really like to hear people's thoughts/suggestions or \"go see a\n> shrink, you must be mad\" statements ;)\n>\n> Kind regards,\n>\n> Ramon Bastiaans\n\nI think it would be possible, but there are a lot of design issues with\na system like this. 
You can't go into it thinking that you can design a\nmulti billion row database the same way you would design a million row db.\n\nJohn\n=:->", "msg_date": "Tue, 01 Mar 2005 09:19:00 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "What do your \"values\" consist of?\n\nWould it be possible to group several hundred or thousand of them into a\nsingle row somehow that still makes it possible for your queries to get at \nthem efficiently? \n\nWhat kind of queries will you want to run against the data?\n\nFor example if you have a measurement of some process value each\nmillisecond, it might be a good performance tradeoff to pack a whole\nsecond of measurements into a single row if your data processing only\nneeds to access the values sequentially. With this single step you\nimmediately reduced your row and transaction number to the 1/1000th.\n\nPlease tell us more.\n\nOn Tue, 1 Mar 2005, Ramon Bastiaans wrote:\n\n> Hi all,\n> \n> I am doing research for a project of mine where I need to store several \n> billion values for a monitoring and historical tracking system for a big \n> computer system. My currect estimate is that I have to store (somehow) \n> around 1 billion values each month (possibly more).\n> \n> I was wondering if anyone has had any experience with these kind of big \n> numbers of data in a postgres sql database and how this affects database \n> design and optimization.\n> \n> What would be important issues when setting up a database this big, and \n> is it at all doable? Or would it be a insane to think about storing up \n> to 5-10 billion rows in a postgres database.\n> \n> The database's performance is important. There would be no use in \n> storing the data if a query will take ages. Query's should be quite fast \n> if possible.\n> \n> I would really like to hear people's thoughts/suggestions or \"go see a \n> shrink, you must be mad\" statements ;)\n> \n> Kind regards,\n> \n> Ramon Bastiaans\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n", "msg_date": "Tue, 1 Mar 2005 16:54:42 +0100 (CET)", "msg_from": "Andras Kadinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Hi, John,\n\nJohn Arbash Meinel schrieb:\n\n>> I am doing research for a project of mine where I need to store\n>> several billion values for a monitoring and historical tracking system\n>> for a big computer system. My currect estimate is that I have to store\n>> (somehow) around 1 billion values each month (possibly more).\n>>\n> If you have that 1 billion perfectly distributed over all hours of the\n> day, then you need 1e9/30/24/3600 = 385 transactions per second.\n\nI hope that he does not use one transaction per inserted row.\n\nIn your in-house tests, we got a speedup factor of up to some hundred\nwhen bundling rows on insertions. The fastest speed was with using\nbunches of some thousand rows per transaction, and running about 5\nprocesses in parallel.\n\nRegard the usual performance tips: Use a small, but fast-writing RAID\nfor transaction log (no RAID-5 or RAID-6 variants), possibly a mirroring\nof two harddisk-backed SSD. 
Use different disks for the acutal data\n(here, LVM2 with growing volumes could be very handy). Have enough RAM.\nUse a fast file system.\n\nBTW, as you read about the difficulties that you'll face with this\nenormous amount of data: Don't think that your task will much be easier\nor cheaper using any other DBMS, neither commercial nor open source. For\nall of them, you'll need \"big iron\" hardware, and a skilled team of\nadmins to set up and maintain the database.\n\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 z�rich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Tue, 01 Mar 2005 17:26:48 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Markus Schaber wrote:\n\n>Hi, John,\n>\n>John Arbash Meinel schrieb:\n>\n>\n>\n>>>I am doing research for a project of mine where I need to store\n>>>several billion values for a monitoring and historical tracking system\n>>>for a big computer system. My currect estimate is that I have to store\n>>>(somehow) around 1 billion values each month (possibly more).\n>>>\n>>>\n>>>\n>>If you have that 1 billion perfectly distributed over all hours of the\n>>day, then you need 1e9/30/24/3600 = 385 transactions per second.\n>>\n>>\n>\n>I hope that he does not use one transaction per inserted row.\n>\n>In your in-house tests, we got a speedup factor of up to some hundred\n>when bundling rows on insertions. The fastest speed was with using\n>bunches of some thousand rows per transaction, and running about 5\n>processes in parallel.\n>\n>\nYou're right. I guess it just depends on how the data comes in, and what\nyou can do at the client ends. That is kind of where I was saying put a\nmachine in front which gathers up the information, and then does a batch\nupdate. If your client can do this directly, then you have the same\nadvantage.\n\n>\n>\nJohn\n=:->", "msg_date": "Tue, 01 Mar 2005 10:44:58 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Ramon,\n\n> What would be important issues when setting up a database this big, and\n> is it at all doable? Or would it be a insane to think about storing up\n> to 5-10 billion rows in a postgres database.\n\nWhat's your budget? You're not going to do this on a Dell 2650. Do you \nhave the kind of a budget necessary to purchase/build a good SAN, \nQuad-opteron machine, etc.? Or at least hire some tuning help?\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 1 Mar 2005 17:11:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Ramon Bastiaans wrote:\n\n> I am doing research for a project of mine where I need to store \n> several billion values for a monitoring and historical tracking system \n> for a big computer system. My currect estimate is that I have to store \n> (somehow) around 1 billion values each month (possibly more).\n>\n> I was wondering if anyone has had any experience with these kind of \n> big numbers of data in a postgres sql database and how this affects \n> database design and optimization.\n>\n> What would be important issues when setting up a database this big, \n> and is it at all doable? 
Or would it be a insane to think about \n> storing up to 5-10 billion rows in a postgres database.\n>\n> The database's performance is important. There would be no use in \n> storing the data if a query will take ages. Query's should be quite \n> fast if possible.\n>\n> I would really like to hear people's thoughts/suggestions or \"go see a \n> shrink, you must be mad\" statements ;)\n\nIt just dawned on me that we're doing something that, while not the \nsame, might be relevant. One of our tables has ~85M rows in it \naccording to the output from an \"explain select * from table\". I don't \nplan on trying a select count(*) any time soon :) We add and remove \nabout 25M rows a day to/from this table which would be about 750M \nrows/month total. Given our current usage of the database, it could \nhandle a larger row/day rate without too much trouble. (The problem \nisn't adding rows but deleting rows.)\n\n Column | Type | Modifiers\n--------------+----------+-----------\n timeseriesid | bigint |\n bindata | bytea |\n binsize | integer |\n rateid | smallint |\n ownerid | smallint |\nIndexes:\n \"idx_timeseries\" btree (timeseriesid)\n\nIn this case, each bytea entry is typically about 2KB of data, so the \ntotal table size is about 150GB, plus some index overhead.\n\nA second table has ~100M rows according to explain select *. Again it \nhas about 30M rows added and removed / day. \n\n Column | Type | Modifiers\n------------+-----------------------+-----------\n uniqid | bigint |\n type | character varying(50) |\n memberid | bigint |\n tag | character varying(50) |\n membertype | character varying(50) |\n ownerid | smallint |\nIndexes:\n \"composite_memberid\" btree (memberid)\n \"composite_uniqid\" btree (uniqid)\n\nThere are some additional tables that have a few million rows / day of \nactivity, so call it 60M rows/day added and removed. We run a vacuum \nevery day.\n\nThe box is an dual Opteron 248 from Sun. Linux 2.6, 8GB of memory. We \nuse reiserfs. We started with XFS but had several instances of file \nsystem corruption. Obviously, no RAID 5. The xlog is on a 2 drive \nmirror and the rest is on separate mirrored volume. The drives are \nfiber channel but that was a mistake as the driver from IBM wasn't very \ngood.\n\nSo, while we don't have a billion rows we do have ~200M total rows in \nall the tables and we're certainly running the daily row count that \nyou'd need to obtain. But scaling this sort of thing up can be tricky \nand your milage may vary.\n\nIn a prior career I ran a \"data intensive computing center\" and helped \ndo some design work for a high energy physics experiment: petabytes of \ndata, big tape robots, etc., the usual Big Science toys. You might \ntake a look at ROOT and some of the activity from those folks if you \ndon't need transactions and all the features of a general database like \npostgresql.\n\n-- Alan\n", "msg_date": "Tue, 01 Mar 2005 21:28:36 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "On Tue, Mar 01, 2005 at 10:34:29AM +0100, Ramon Bastiaans wrote:\n> Hi all,\n> \n> I am doing research for a project of mine where I need to store several \n> billion values for a monitoring and historical tracking system for a big \n> computer system. My currect estimate is that I have to store (somehow) \n> around 1 billion values each month (possibly more).\n\nOn a side-note, do you need to keep the actual row-level details for\nhistory? 
http://rrs.decibel.org might be of some use.\n\nOther than that, what others have said. Lots and lots of disks in\nRAID10, and opterons (though I would choose opterons not for memory size\nbut because of memory *bandwidth*).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 4 Mar 2005 16:05:07 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" } ]
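To make the "bundle thousands of rows per transaction" advice concrete: COPY gives you one transaction (and one WAL flush) per batch and is normally far faster than row-at-a-time INSERTs; running a handful of loader processes in parallel, as suggested above, adds further throughput. The table and columns below are invented purely for illustration.

    -- hypothetical measurement table
    CREATE TABLE samples (
        sensor_id   integer          NOT NULL,
        sample_time timestamptz      NOT NULL,
        value       double precision NOT NULL
    );

    -- stream batches of a few thousand rows at a time:
    COPY samples (sensor_id, sample_time, value) FROM STDIN;
    -- ...rows in the default tab-separated text format follow,
    -- terminated by a line containing only \.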
[ { "msg_contents": "385 transaction/sec? \n\nfsync = false\n\nrisky but fast.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of John Arbash\nMeinel\nSent: Tuesday, March 01, 2005 4:19 PM\nTo: Ramon Bastiaans\nCc: [email protected]\nSubject: Re: [PERFORM] multi billion row tables: possible or insane?\n\n\nRamon Bastiaans wrote:\n\n> Hi all,\n>\n> I am doing research for a project of mine where I need to store\n> several billion values for a monitoring and historical tracking system\n> for a big computer system. My currect estimate is that I have to store\n> (somehow) around 1 billion values each month (possibly more).\n>\nIf you have that 1 billion perfectly distributed over all hours of the\nday, then you need 1e9/30/24/3600 = 385 transactions per second.\n\nWhich I'm pretty sure is possible with postgres, you just need pretty\nbeefy hardware. And like Jeff said, lots of disks for lots of IO.\nLike a quad opteron, with 16GB of ram, and around 14-20 very fast disks.\nraid10 not raid5, etc. To improve query performance, you can do some\nload balancing by having replication machines by using Slony.\n\nOr if you can do batch processing, you could split up the work into a\nfew update machines, which then do bulk updates on the master database.\nThis lets you get more machines into the job, since you can't share a\ndatabase across multiple machines.\n\n> I was wondering if anyone has had any experience with these kind of\n> big numbers of data in a postgres sql database and how this affects\n> database design and optimization.\n>\nWell, one of the biggest things is if you can get bulk updates, or if\nclients can handle data being slightly out of date, so you can use\ncacheing. Can you segregate your data into separate tables as much as\npossible? Are your clients okay if aggregate information takes a little\nwhile to update?\n\nOne trick is to use semi-lazy materialized views to get your updates to\nbe fast.\n\n> What would be important issues when setting up a database this big,\n> and is it at all doable? Or would it be a insane to think about\n> storing up to 5-10 billion rows in a postgres database.\n\nI think you if you can design the db properly, it is doable. But if you\nhave a clients saying \"I need up to the second information on 1 billion\nrows\", you're never going to get it.\n\n>\n> The database's performance is important. There would be no use in\n> storing the data if a query will take ages. Query's should be quite\n> fast if possible.\n>\nAgain, it depends on the queries being done.\nThere are some nice tricks you can use, like doing a month-by-month\npartitioning (if you are getting 1G inserts, you might want week-by-week\npartitioning), and then with a date column index, and a union all view\nyou should be able to get pretty good insert speed, and still keep fast\n*recent* queries. Going through 1billion rows is always going to be\nexpensive.\n\n> I would really like to hear people's thoughts/suggestions or \"go see a\n> shrink, you must be mad\" statements ;)\n>\n> Kind regards,\n>\n> Ramon Bastiaans\n\nI think it would be possible, but there are a lot of design issues with\na system like this. You can't go into it thinking that you can design a\nmulti billion row database the same way you would design a million row db.\n\nJohn\n=:->\n\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. 
Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n", "msg_date": "Tue, 1 Mar 2005 16:40:29 +0100 ", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Vig, Sandor (G/FI-2) wrote:\n\n>385 transaction/sec?\n>\n>fsync = false\n>\n>risky but fast.\n>\n>\n\nI think with a dedicated RAID10 for pg_xlog (or possibly a battery\nbacked up ramdisk), and then a good amount of disks in a bulk RAID10 or\npossibly a good partitioning of the db across multiple raids, you could\nprobably get a good enough tps.\n\nBut you're right, fsync=false could certainly give you the performance,\nthough a power outage means potential *real* corruption. Not just\nmissing transactions, but duplicated rows, all sorts of ugliness.\n\nJohn\n=:->", "msg_date": "Tue, 01 Mar 2005 09:52:12 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Isn't that 385 rows/second. Presumably one can insert more than one \nrow in a transaction?\n\n-- Alan\n\nVig, Sandor (G/FI-2) wrote:\n\n>385 transaction/sec? \n>\n>fsync = false\n>\n>risky but fast.\n>\n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]]On Behalf Of John Arbash\n>Meinel\n>Sent: Tuesday, March 01, 2005 4:19 PM\n>To: Ramon Bastiaans\n>Cc: [email protected]\n>Subject: Re: [PERFORM] multi billion row tables: possible or insane?\n>\n>\n>Ramon Bastiaans wrote:\n>\n> \n>\n>>Hi all,\n>>\n>>I am doing research for a project of mine where I need to store\n>>several billion values for a monitoring and historical tracking system\n>>for a big computer system. My currect estimate is that I have to store\n>>(somehow) around 1 billion values each month (possibly more).\n>>\n>> \n>>\n>If you have that 1 billion perfectly distributed over all hours of the\n>day, then you need 1e9/30/24/3600 = 385 transactions per second.\n>\n>Which I'm pretty sure is possible with postgres, you just need pretty\n>beefy hardware. And like Jeff said, lots of disks for lots of IO.\n>Like a quad opteron, with 16GB of ram, and around 14-20 very fast disks.\n>raid10 not raid5, etc. To improve query performance, you can do some\n>load balancing by having replication machines by using Slony.\n>\n>Or if you can do batch processing, you could split up the work into a\n>few update machines, which then do bulk updates on the master database.\n>This lets you get more machines into the job, since you can't share a\n>database across multiple machines.\n>\n> \n>\n>>I was wondering if anyone has had any experience with these kind of\n>>big numbers of data in a postgres sql database and how this affects\n>>database design and optimization.\n>>\n>> \n>>\n>Well, one of the biggest things is if you can get bulk updates, or if\n>clients can handle data being slightly out of date, so you can use\n>cacheing. Can you segregate your data into separate tables as much as\n>possible? 
Are your clients okay if aggregate information takes a little\n>while to update?\n>\n>One trick is to use semi-lazy materialized views to get your updates to\n>be fast.\n>\n> \n>\n>>What would be important issues when setting up a database this big,\n>>and is it at all doable? Or would it be a insane to think about\n>>storing up to 5-10 billion rows in a postgres database.\n>> \n>>\n>\n>I think you if you can design the db properly, it is doable. But if you\n>have a clients saying \"I need up to the second information on 1 billion\n>rows\", you're never going to get it.\n>\n> \n>\n>>The database's performance is important. There would be no use in\n>>storing the data if a query will take ages. Query's should be quite\n>>fast if possible.\n>>\n>> \n>>\n>Again, it depends on the queries being done.\n>There are some nice tricks you can use, like doing a month-by-month\n>partitioning (if you are getting 1G inserts, you might want week-by-week\n>partitioning), and then with a date column index, and a union all view\n>you should be able to get pretty good insert speed, and still keep fast\n>*recent* queries. Going through 1billion rows is always going to be\n>expensive.\n>\n> \n>\n>>I would really like to hear people's thoughts/suggestions or \"go see a\n>>shrink, you must be mad\" statements ;)\n>>\n>>Kind regards,\n>>\n>>Ramon Bastiaans\n>> \n>>\n>\n>I think it would be possible, but there are a lot of design issues with\n>a system like this. You can't go into it thinking that you can design a\n>multi billion row database the same way you would design a million row db.\n>\n>John\n>=:->\n>\n>\n>The information transmitted is intended only for the person or entity to\n>which it is addressed and may contain confidential and/or privileged\n>material. Any review, retransmission, dissemination or other use of, or\n>taking of any action in reliance upon, this information by persons or\n>entities other than the intended recipient is prohibited. If you received\n>this in error, please contact the sender and delete the material from any\n>computer.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n", "msg_date": "Tue, 01 Mar 2005 10:57:54 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" }, { "msg_contents": "Not true - with fsync on I get nearly 500 tx/s, with it off I'm as\nhigh as 1600/sec with dual opteron and 14xSATA drives and 4GB RAM on a\n3ware Escalade. Database has 3 million rows.\n\nAs long as queries use indexes, multi billion row shouldn't be too\nbad. Full table scan will suck though.\n\nAlex Turner\nnetEconomist\n\n\nOn Tue, 1 Mar 2005 16:40:29 +0100, Vig, Sandor (G/FI-2)\n<[email protected]> wrote:\n> 385 transaction/sec?\n> \n> fsync = false\n> \n> risky but fast.\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of John Arbash\n> Meinel\n> Sent: Tuesday, March 01, 2005 4:19 PM\n> To: Ramon Bastiaans\n> Cc: [email protected]\n> Subject: Re: [PERFORM] multi billion row tables: possible or insane?\n> \n> Ramon Bastiaans wrote:\n> \n> > Hi all,\n> >\n> > I am doing research for a project of mine where I need to store\n> > several billion values for a monitoring and historical tracking system\n> > for a big computer system. 
My currect estimate is that I have to store\n> > (somehow) around 1 billion values each month (possibly more).\n> >\n> If you have that 1 billion perfectly distributed over all hours of the\n> day, then you need 1e9/30/24/3600 = 385 transactions per second.\n> \n> Which I'm pretty sure is possible with postgres, you just need pretty\n> beefy hardware. And like Jeff said, lots of disks for lots of IO.\n> Like a quad opteron, with 16GB of ram, and around 14-20 very fast disks.\n> raid10 not raid5, etc. To improve query performance, you can do some\n> load balancing by having replication machines by using Slony.\n> \n> Or if you can do batch processing, you could split up the work into a\n> few update machines, which then do bulk updates on the master database.\n> This lets you get more machines into the job, since you can't share a\n> database across multiple machines.\n> \n> > I was wondering if anyone has had any experience with these kind of\n> > big numbers of data in a postgres sql database and how this affects\n> > database design and optimization.\n> >\n> Well, one of the biggest things is if you can get bulk updates, or if\n> clients can handle data being slightly out of date, so you can use\n> cacheing. Can you segregate your data into separate tables as much as\n> possible? Are your clients okay if aggregate information takes a little\n> while to update?\n> \n> One trick is to use semi-lazy materialized views to get your updates to\n> be fast.\n> \n> > What would be important issues when setting up a database this big,\n> > and is it at all doable? Or would it be a insane to think about\n> > storing up to 5-10 billion rows in a postgres database.\n> \n> I think you if you can design the db properly, it is doable. But if you\n> have a clients saying \"I need up to the second information on 1 billion\n> rows\", you're never going to get it.\n> \n> >\n> > The database's performance is important. There would be no use in\n> > storing the data if a query will take ages. Query's should be quite\n> > fast if possible.\n> >\n> Again, it depends on the queries being done.\n> There are some nice tricks you can use, like doing a month-by-month\n> partitioning (if you are getting 1G inserts, you might want week-by-week\n> partitioning), and then with a date column index, and a union all view\n> you should be able to get pretty good insert speed, and still keep fast\n> *recent* queries. Going through 1billion rows is always going to be\n> expensive.\n> \n> > I would really like to hear people's thoughts/suggestions or \"go see a\n> > shrink, you must be mad\" statements ;)\n> >\n> > Kind regards,\n> >\n> > Ramon Bastiaans\n> \n> I think it would be possible, but there are a lot of design issues with\n> a system like this. You can't go into it thinking that you can design a\n> multi billion row database the same way you would design a million row db.\n> \n> John\n> =:->\n> \n> The information transmitted is intended only for the person or entity to\n> which it is addressed and may contain confidential and/or privileged\n> material. Any review, retransmission, dissemination or other use of, or\n> taking of any action in reliance upon, this information by persons or\n> entities other than the intended recipient is prohibited. 
If you received\n> this in error, please contact the sender and delete the material from any\n> computer.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n", "msg_date": "Fri, 4 Mar 2005 19:15:55 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi billion row tables: possible or insane?" } ]
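The week-by-week or month-by-month partitioning that John Arbash Meinel describes in the thread above (a date index on each partition plus a UNION ALL view) can be sketched roughly as follows. This is only a minimal illustration: the table and column names (readings, reading_time, value) are invented for the example, and how aggressively the planner skips non-matching partitions depends on the PostgreSQL version in use.

-- One child table per month, each with its own date index (hypothetical names).
CREATE TABLE readings_2005_02 (
    reading_time timestamp NOT NULL
        CHECK (reading_time >= '2005-02-01' AND reading_time < '2005-03-01'),
    value        numeric   NOT NULL
);
CREATE INDEX readings_2005_02_time_idx ON readings_2005_02 (reading_time);

CREATE TABLE readings_2005_03 (
    reading_time timestamp NOT NULL
        CHECK (reading_time >= '2005-03-01' AND reading_time < '2005-04-01'),
    value        numeric   NOT NULL
);
CREATE INDEX readings_2005_03_time_idx ON readings_2005_03 (reading_time);

-- A UNION ALL view stitches the partitions back together for querying.
CREATE VIEW readings AS
    SELECT * FROM readings_2005_02
    UNION ALL
    SELECT * FROM readings_2005_03;

-- Recent-data queries stay fast: the date filter is applied in each branch
-- and answered from the per-partition indexes.
SELECT count(*) FROM readings WHERE reading_time >= current_date - 7;

Bulk loads go straight into the current month's table, and an old month can be archived or dropped as a unit instead of deleting billions of individual rows.
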
[ { "msg_contents": "I would like to know whether there is any command which the server will give the\nrecord ID back to the client when client puts the data and the server generates\nan autoincrement ID for that record.\nFor example if many clients try to put the money data to the server and each\nrecord from each client has its own record ID by autoincrement process of the\nserver [x+1] and i don't need to lock the record since it will bring the system\nto slow down. That client wil then want to know which ID that server gives to\nthat record in order to select that record to print the reciept [bill].\nI know that in mysql there is a command \"last_record_id\" which acts the same as\nI mention above. Does anybody know that , please give me the detail?\n\nAmrit,Thailand\n\n\n", "msg_date": "Tue, 1 Mar 2005 22:46:02 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "What is the postgres sql command for last_user_id ???" }, { "msg_contents": "[email protected] wrote:\n> I would like to know whether there is any command which the server will give the\n> record ID back to the client when client puts the data and the server generates\n> an autoincrement ID for that record.\n> For example if many clients try to put the money data to the server and each\n> record from each client has its own record ID by autoincrement process of the\n> server [x+1] and i don't need to lock the record since it will bring the system\n> to slow down. That client wil then want to know which ID that server gives to\n> that record in order to select that record to print the reciept [bill].\n> I know that in mysql there is a command \"last_record_id\" which acts the same as\n> I mention above. Does anybody know that , please give me the detail?\n> \n> Amrit,Thailand\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\nhttp://www.postgresql.org/docs/8.0/static/functions-sequence.html\n", "msg_date": "Wed, 02 Mar 2005 07:43:44 +0100", "msg_from": "stig erikson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the postgres sql command for last_user_id ???" }, { "msg_contents": "On Tue, Mar 01, 2005 at 10:46:02PM +0700, [email protected] wrote:\n\n> I would like to know whether there is any command which the server will give the\n> record ID back to the client when client puts the data and the server generates\n> an autoincrement ID for that record.\n\nSee \"How do I get the value of a SERIAL insert?\" and the question\nimmediately following it in the FAQ:\n\nhttp://www.postgresql.org/files/documentation/faqs/FAQ.html#4.11.2\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Wed, 2 Mar 2005 01:12:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the postgres sql command for last_user_id ???" } ]
[ { "msg_contents": "Greetings,\n\nI have been beating myself up today trying to optimize indices for a \nquery that uses LIKE. In my research I have read that the locale \nsetting may affect PostgreSQL's choice of seq scan vs index scan. I am \nrunning Fedora Core 2 and it appears when I run \"locale\" that it is set \nto 'en.US-UTF-8'.\n\nDid I fall into a \"gotcha\" trap here about C vs non-C locales? I'm not \nmuch of a C programmer so I have no idea what all this touches and \neverything has been left as default during PG compilation as well as \nFedora install. I can pg_dump and initdb again with --locale=C if \nthis will allow my LIKE queries to use indexes, but I just wanted to \nknow if there was some other place I needed to change locales in the \nsystem? e.g. postgresql.conf or env vars? Or, would the initdb and \nreload alone fix it?\n\nI'm running 8.0.1 if that matters.\n\nThanks\n\n", "msg_date": "Tue, 1 Mar 2005 17:44:07 -0700", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Confusion about locales and 'like' indexes" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> query that uses LIKE. In my research I have read that the locale \n> setting may affect PostgreSQL's choice of seq scan vs index scan.\n\nNon-C-locale indexes can't support LIKE because the sort ordering\nisn't necessarily right.\n\n> I am running Fedora Core 2 and it appears when I run \"locale\" that it\n> is set to 'en.US-UTF-8'.\n\nThis is not a definitive indication of the environment the database\nsees, though. Try \"show lc_collate\".\n\n> I can pg_dump and initdb again with --locale=C if \n> this will allow my LIKE queries to use indexes, but I just wanted to \n> know if there was some other place I needed to change locales in the \n> system? e.g. postgresql.conf or env vars? Or, would the initdb and \n> reload alone fix it?\n\nThat would do it. Alternatively you can create special-purpose indexes\nwith one of the xxx_pattern_ops operator classes to support LIKE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Mar 2005 20:42:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusion about locales and 'like' indexes " } ]
[ { "msg_contents": "I've tried to use Dan Tow's tuning method and created all the right indexes from his diagraming method, but the query still performs quite slow both inside the application and just inside pgadmin III. Can anyone be kind enough to help me tune it so that it performs better in postgres? I don't think it's using the right indexes, or maybe postgres needs special treatment.\n\nI've converted the below query to SQL from a Hibernate query, so the syntax is probably not perfect but it's semantics are exactly the same. I've done so by looking at the source code, but I can't run it to get the exact SQL since I don't have the database on my home machine.\n\nselect s.*\nfrom shipment s\n inner join carrier_code cc on s.carrier_code_id = cc.id\n inner join carrier c on cc.carrier_id = c.id\n inner join carrier_to_person ctp on ctp.carrier_id = c.id\n inner join person p on p.id = ctp.person_id\n inner join shipment_status cs on s.current_status_id = cs.id\n inner join release_code rc on cs.release_code_id = rc.id\n left join shipment_status ss on ss.shipment_id = s.id\nwhere\n p.id = :personId and\n s.is_purged = false and\n rc.number = '9' and\n cs is not null and\n cs.date >= current_date - 31\norder by cs.date desc\n\nJust assume I have no indexes for the moment because while some of the indexes I made make it work faster, it's still around 250 milliseconds and under heavy load, the query performs very badly (6-7 seconds).\n\nFor your information:\n\nshipment contains 40,000 rows\nshipment_status contains 80,000 rows\nrelease_code contains 8 rows\nperson contains 300 rows\ncarrier contains 60 rows\ncarrier_code contains 70 rows\n\nThe filter ratios are:\n\nrc.number = '9' (0.125)\ncs.date >= current_date - 31 (.10)\np.id = ? (0.003)\ns.is_purged = false (.98)\n\nI really hope someone can help since I'm pretty much stuck.\n\nBest regards and many thanks,\nKen\n\n\n\n\n\n\nI've tried to use Dan Tow's tuning method and \ncreated all the right indexes from his diagraming method, but the query still \nperforms quite slow both inside the application and just inside pgadmin \nIII.  Can anyone be kind enough to help me tune it so that it performs \nbetter in postgres?  I don't think it's using the right indexes, or maybe \npostgres needs special treatment.\n \nI've converted the below query to SQL from a \nHibernate query, so the syntax is probably not perfect but it's semantics \nare exactly the same.  
I've done so by \nlooking at the source code, but I can't run it to get the exact SQL since I \ndon't have the database on my home machine.\n \nselect s.*from shipment \ns    inner join carrier_code cc on s.carrier_code_id = \ncc.id\n    inner join carrier c on \ncc.carrier_id = c.id\n    inner join carrier_to_person ctp \non ctp.carrier_id = c.id\n    inner join person p on p.id = \nctp.person_id\n    inner join shipment_status cs on \ns.current_status_id = cs.id\n    inner join release_code rc on \ncs.release_code_id = rc.id\n    left join shipment_status ss on \nss.shipment_id = s.idwhere    p.id = :personId \nand    s.is_purged = false and    \nrc.number = '9' and    cs is not null \nand    cs.date >= current_date - 31order by cs.date \ndesc\nJust assume I have no indexes for the moment \nbecause while some of the indexes I made make it work faster, it's still around \n250 milliseconds and under heavy load, the query performs very badly (6-7 \nseconds).\n \nFor your information:\n \nshipment contains 40,000 rows\nshipment_status contains 80,000 rows\nrelease_code contains 8 rows\nperson contains 300 rows\ncarrier contains 60 rows\ncarrier_code contains 70 rows\n \nThe filter ratios are:\n \nrc.number = '9' (0.125)\ncs.date >= current_date - 31 (.10)\np.id = ? (0.003)\ns.is_purged = false (.98)\n \nI really hope someone can help since I'm pretty \nmuch stuck.\n \nBest regards and many thanks,\nKen", "msg_date": "Wed, 2 Mar 2005 01:51:11 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with tuning this query" }, { "msg_contents": "Ken Egervari wrote:\n> I've tried to use Dan Tow's tuning method\n\nWho? What?\n\n > and created all the right\n> indexes from his diagraming method, but the query still performs\n> quite slow both inside the application and just inside pgadmin III.\n> Can anyone be kind enough to help me tune it so that it performs\n> better in postgres? I don't think it's using the right indexes, or\n> maybe postgres needs special treatment.\n> \n> I've converted the below query to SQL from a Hibernate query, so the\n> syntax is probably not perfect but it's semantics are exactly the\n> same. I've done so by looking at the source code, but I can't run it\n> to get the exact SQL since I don't have the database on my home\n> machine.\n\nHibernate is a java thing, no? It'd be helpful to have the actual SQL \nthe hibernate class (or whatever) generates. One of the problems with \nSQL is that you can have multiple ways to get the same results and it's \nnot always possible for the planner to convert from one to the other.\n\nAnyway, people will want to see EXPLAIN ANALYSE for the query in \nquestion. Obviously, make sure you've vacuumed and analysed the tables \nin question recently. Oh, and make sure yousay what version of PG you're \nrunning.\n\n> select s.* from shipment s inner join carrier_code cc on\n> s.carrier_code_id = cc.id inner join carrier c on cc.carrier_id =\n> c.id inner join carrier_to_person ctp on ctp.carrier_id = c.id inner\n> join person p on p.id = ctp.person_id inner join shipment_status cs\n> on s.current_status_id = cs.id inner join release_code rc on\n> cs.release_code_id = rc.id left join shipment_status ss on\n> ss.shipment_id = s.id where p.id = :personId and s.is_purged = false\n> and rc.number = '9' and cs is not null and cs.date >= current_date -\n> 31 order by cs.date desc\n\n1. Why are you quoting the 9 when checking against rc.number?\n2. 
The \"cs is not null\" doesn't appear to be qualified - which table?\n\n> Just assume I have no indexes for the moment because while some of\n> the indexes I made make it work faster, it's still around 250\n> milliseconds and under heavy load, the query performs very badly (6-7\n> seconds).\n\n3. If you rewrite the \"current_date - 31\" as a suitable ago(31) function \nthen you can use an index on cs.date\n4. Are you familiar with the configuration setting \"join_collapse_limit\"?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 02 Mar 2005 08:58:06 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Richard Huxton wrote:\n> Ken Egervari wrote:\n> \n>> I've tried to use Dan Tow's tuning method\n> Who? What?\n\nhttp://www.singingsql.com/\nDan has written some remarkable papers on sql tuning. Some of it is pretty complex, but his book \n\"SQL Tuning\" is an excellent resource.\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Wed, 02 Mar 2005 08:13:34 -0800", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Bricklen Anderson wrote:\n> Richard Huxton wrote:\n> > Ken Egervari wrote:\n> > \n> >> I've tried to use Dan Tow's tuning method\n> > Who? What?\n> \n> http://www.singingsql.com/\n\nThat URL is invalid for me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Mar 2005 11:18:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Ken Egervari wrote:\n\n> I've tried to use Dan Tow's tuning method and created all the right\n> indexes from his diagraming method, but the query still performs quite\n> slow both inside the application and just inside pgadmin III. Can\n> anyone be kind enough to help me tune it so that it performs better in\n> postgres? I don't think it's using the right indexes, or maybe\n> postgres needs special treatment.\n>\n\nFirst, what version of postgres, and have you run VACUUM ANALYZE recently?\nAlso, please attach the result of running EXPLAIN ANALYZE.\n(eg, explain analyze select s.* from shipment ...)\n\nIt's very possible that you don't have up-to-date statistics, which\ncauses postgres to make a bad estimate of what the fastest plan is.\n\nAlso, if you are using an older version of postgres (like 7.1) you\nreally should upgrade. There are quite a few performance and real bug fixes.\n\n> I've converted the below query to SQL from a Hibernate query, so the\n> syntax is probably not perfect but it's semantics are exactly the\n> same. 
I've done so by looking at the source code, but I can't run it\n> to get the exact SQL since I don't have the database on my home machine.\n\nI don't know how to make Hibernate do what you want, but if you change\nthe query to using subselects (not all databases support this, so\nhibernate might not let you), you can see a performance improvement.\nAlso sometimes using explicit joins can be worse than just letting the\nquery manager figure it out. So something like\nselect s.* from shipment s, carrier_code cc, carrier c, ...\n where s.carrier_code_id = cc.id and c.id = cc.carrier_id and ....\n\nBut again, since this is generated from another program (Hibernate), I\nreally don't know how you tell it how to tune the SQL. Probably the\nbiggest \"non-bug\" performance improvements are from tuning the SQL.\nBut if postgres isn't using the right indexes, etc, you can probably fix\nthat.\n\nJohn\n=:->\n\n>\n> select s.*\n> from shipment s\n> inner join carrier_code cc on s.carrier_code_id = cc.id\n> inner join carrier c on cc.carrier_id = c.id\n> inner join carrier_to_person ctp on ctp.carrier_id = c.id\n> inner join person p on p.id = ctp.person_id\n> inner join shipment_status cs on s.current_status_id = cs.id\n> inner join release_code rc on cs.release_code_id = rc.id\n> left join shipment_status ss on ss.shipment_id = s.id\n> where\n> p.id = :personId and\n> s.is_purged = false and\n> rc.number = '9' and\n> cs is not null and\n> cs.date >= current_date - 31\n> order by cs.date desc\n> Just assume I have no indexes for the moment because while some of the\n> indexes I made make it work faster, it's still around 250 milliseconds\n> and under heavy load, the query performs very badly (6-7 seconds).\n>\n> For your information:\n>\n> shipment contains 40,000 rows\n> shipment_status contains 80,000 rows\n> release_code contains 8 rows\n> person contains 300 rows\n> carrier contains 60 rows\n> carrier_code contains 70 rows\n>\n> The filter ratios are:\n>\n> rc.number = '9' (0.125)\n> cs.date >= current_date - 31 (.10)\n> p.id = ? (0.003)\n> s.is_purged = false (.98)\n>\n> I really hope someone can help since I'm pretty much stuck.\n>\n> Best regards and many thanks,\n> Ken", "msg_date": "Wed, 02 Mar 2005 10:56:47 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": ">First, what version of postgres, and have you run VACUUM ANALYZE recently?\n>Also, please attach the result of running EXPLAIN ANALYZE.\n>(eg, explain analyze select s.* from shipment ...)\n\nI'm using postgres 8.0. I wish I could paste explain analyze, but I won't \nbe at work for a few days. I was hoping some Postgres/SQL experts here \nwould be able to simply look at the query and make recommendations because \nit's not a very difficult or unique query.\n\n>It's very possible that you don't have up-to-date statistics, which\n>causes postgres to make a bad estimate of what the fastest plan is.\n\nI run VACUUM ANALYZE religiously. I even dumped the production database and \nused it as my test database after a full vacuum analyze. It's really as \nfresh as it can be.\n\n>I don't know how to make Hibernate do what you want, but if you change\n>the query to using subselects (not all databases support this, so\n>hibernate might not let you), you can see a performance improvement.\n\nYes, Hibernate supports sub-selects. In fact, I can even drop down to JDBC \nexplicitly, so whatever SQL tricks out there I can use will work on \nHibernate. 
In what way will sub-selects improve this query?\n\n>Also sometimes using explicit joins can be worse than just letting the\n>query manager figure it out. So something like\n>select s.* from shipment s, carrier_code cc, carrier c, ...\n> where s.carrier_code_id = cc.id and c.id = cc.carrier_id and ....\n\nI think I can avoid using joins in Hibernate, but it makes the query harder \nto maintain. How much of a performance benefit are we talking with this \nchange? Since hibernate is an object language, you don't actually have to \nspecify many joins. You can use the \"dot\" notation.\n\n Query query = session.createQuery(\n \"select shipment \" +\n \"from Shipment shipment \" +\n \" inner join \nshipment.cargoControlNumber.carrierCode.carrier.persons person \" +\n \" inner join shipment.currentStatus currentStatus \" +\n \" inner join currentStatus.releaseCode releaseCode \" +\n \" left join fetch shipment.currentStatus \" +\n \"where \" +\n \" person.id = :personId and \" +\n \" shipment.isPurged = false and \" +\n \" releaseCode.number = '9' and \" +\n \" currentStatus is not null and \" +\n \" currentStatus.date >= current_date - 31 \" +\n \"order by currentStatus.date desc\"\n );\n\n query.setParameter( \"personId\", personId );\n\n query.setFirstResult( firstResult );\n query.setMaxResults( maxResults );\n\n return query.list();\n\nAs you can see, it's fairly elegant language and maps to SQL quite well.\n\n>But again, since this is generated from another program (Hibernate), I\n>really don't know how you tell it how to tune the SQL. Probably the\n>biggest \"non-bug\" performance improvements are from tuning the SQL.\n\nI agree, but the ones I've tried aren't good enough. I have made these \nindexes that apply to this query as well as others in my from looking at my \nSQL scripts. Many of my queries have really sped up to 14 milliseconds from \nthese indexes. But I can't make this query run any faster.\n\nCREATE INDEX carrier_to_person_person_id_idx ON carrier_to_person USING \nbtree (person_id);\nCREATE INDEX carrier_to_person_carrier_id_idx ON carrier_to_person USING \nbtree (carrier_id);\nCREATE INDEX carrier_code_carrier_id_idx ON carrier_code USING btree \n(carrier_id);\nCREATE INDEX shipment_carrier_code_id_idx ON shipment USING btree \n(carrier_code_id);\nCREATE INDEX current_status_date_idx ON shipment_status USING btree (date);\nCREATE INDEX shipment_current_status_id_idx ON shipment USING btree \n(current_status_id);\nCREATE INDEX shipment_status_shipment_id_idx ON shipment_status USING btree \n(shipment_id);\n\nThanks for your responses everyone. I'll try and get you that explain \nanalyze. I'm just not at work at the moment but this is a problem that I'm \nsimply puzzled and worried about. I'm getting all of this from CVS on my \nwork server.\n\nKen \n\n", "msg_date": "Wed, 2 Mar 2005 12:23:23 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Ken Egervari wrote:\n\n>> First, what version of postgres, and have you run VACUUM ANALYZE\n>> recently?\n>> Also, please attach the result of running EXPLAIN ANALYZE.\n>> (eg, explain analyze select s.* from shipment ...)\n>\n>\n> I'm using postgres 8.0. I wish I could paste explain analyze, but I\n> won't be at work for a few days. I was hoping some Postgres/SQL\n> experts here would be able to simply look at the query and make\n> recommendations because it's not a very difficult or unique query.\n>\nThat's the problem. 
Without explain analyze, it's hard to say why it is\nperforming weird, because it *does* look like a straightforward query.\n\n>> It's very possible that you don't have up-to-date statistics, which\n>> causes postgres to make a bad estimate of what the fastest plan is.\n>\n>\n> I run VACUUM ANALYZE religiously. I even dumped the production\n> database and used it as my test database after a full vacuum analyze.\n> It's really as fresh as it can be.\n>\nGood. Again, this is just the first precaution, as not everyone is as\ncareful as you. And without the explain analyze, you can't tell what the\nplanner estimates are.\n\n>> I don't know how to make Hibernate do what you want, but if you change\n>> the query to using subselects (not all databases support this, so\n>> hibernate might not let you), you can see a performance improvement.\n>\n>\n> Yes, Hibernate supports sub-selects. In fact, I can even drop down to\n> JDBC explicitly, so whatever SQL tricks out there I can use will work\n> on Hibernate. In what way will sub-selects improve this query?\n>\nWhen doing massive joins across multiple tables (as you are doing) it is\nfrequently faster to do a couple of small joins where you only need a\ncouple of rows as input to the rest. Something like:\n\nselect * from shipment s\nwhere s.carrier_code_id in\n (select cc.id from carrier_code cc join carrier c on\ncc.carrier_id = c.id)\nand s.current_status_id in (select cs.id from shipment_status cs where ...)\n\nAgain it's something that you can try. I have found quite a few of my\nqueries performed much better with subselects.\nI'm guessing it's because with big queries it has a harder time figuring\nout how to refactor (the decision tree becomes big). But I'm not really\nsure. I just know it can work.\n\n>> Also sometimes using explicit joins can be worse than just letting the\n>> query manager figure it out. So something like\n>> select s.* from shipment s, carrier_code cc, carrier c, ...\n>> where s.carrier_code_id = cc.id and c.id = cc.carrier_id and ....\n>\n>\n> I think I can avoid using joins in Hibernate, but it makes the query\n> harder to maintain. How much of a performance benefit are we talking\n> with this change? Since hibernate is an object language, you don't\n> actually have to specify many joins. You can use the \"dot\" notation.\n>\nI'm not saying this *will* improve performance. It is just something to\ntry. It very easily could not be worth the overhead.\n\n> Query query = session.createQuery(\n> \"select shipment \" +\n> \"from Shipment shipment \" +\n> \" inner join\n> shipment.cargoControlNumber.carrierCode.carrier.persons person \" +\n> \" inner join shipment.currentStatus currentStatus \" +\n> \" inner join currentStatus.releaseCode releaseCode \" +\n> \" left join fetch shipment.currentStatus \" +\n> \"where \" +\n> \" person.id = :personId and \" +\n> \" shipment.isPurged = false and \" +\n> \" releaseCode.number = '9' and \" +\n> \" currentStatus is not null and \" +\n> \" currentStatus.date >= current_date - 31 \" +\n> \"order by currentStatus.date desc\"\n> );\n>\n> query.setParameter( \"personId\", personId );\n>\n> query.setFirstResult( firstResult );\n> query.setMaxResults( maxResults );\n>\n> return query.list();\n>\n> As you can see, it's fairly elegant language and maps to SQL quite well.\n>\n>> But again, since this is generated from another program (Hibernate), I\n>> really don't know how you tell it how to tune the SQL. 
Probably the\n>> biggest \"non-bug\" performance improvements are from tuning the SQL.\n>\n>\n> I agree, but the ones I've tried aren't good enough. I have made\n> these indexes that apply to this query as well as others in my from\n> looking at my SQL scripts. Many of my queries have really sped up to\n> 14 milliseconds from these indexes. But I can't make this query run\n> any faster.\n>\n> CREATE INDEX carrier_to_person_person_id_idx ON carrier_to_person\n> USING btree (person_id);\n> CREATE INDEX carrier_to_person_carrier_id_idx ON carrier_to_person\n> USING btree (carrier_id);\n> CREATE INDEX carrier_code_carrier_id_idx ON carrier_code USING btree\n> (carrier_id);\n> CREATE INDEX shipment_carrier_code_id_idx ON shipment USING btree\n> (carrier_code_id);\n> CREATE INDEX current_status_date_idx ON shipment_status USING btree\n> (date);\n> CREATE INDEX shipment_current_status_id_idx ON shipment USING btree\n> (current_status_id);\n> CREATE INDEX shipment_status_shipment_id_idx ON shipment_status USING\n> btree (shipment_id);\n>\n> Thanks for your responses everyone. I'll try and get you that explain\n> analyze. I'm just not at work at the moment but this is a problem\n> that I'm simply puzzled and worried about. I'm getting all of this\n> from CVS on my work server.\n>\n> Ken\n\nThere is also the possibility that you are having problems with\ncross-column correlation, or poor distribution of a column. Postgres\ndoesn't keep cross-column statistics, so if 2 columns are correlated,\nthen it mis-estimates selectivity, and might pick the wrong plan.\n\nIn general your query looks decent, we just need to figure out what is\ngoing on.\n\nJohn\n=:->", "msg_date": "Wed, 02 Mar 2005 11:38:24 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "On Wed, 2005-03-02 at 01:51 -0500, Ken Egervari wrote:\n> \n> select s.*\n> from shipment s\n> inner join carrier_code cc on s.carrier_code_id = cc.id\n> inner join carrier c on cc.carrier_id = c.id\n> inner join carrier_to_person ctp on ctp.carrier_id = c.id\n> inner join person p on p.id = ctp.person_id\n> inner join shipment_status cs on s.current_status_id = cs.id\n> inner join release_code rc on cs.release_code_id = rc.id\n> left join shipment_status ss on ss.shipment_id = s.id\n> where\n> p.id = :personId and\n> s.is_purged = false and\n> rc.number = '9' and\n> cs is not null and\n> cs.date >= current_date - 31\n> order by cs.date desc\n> ... \n> shipment contains 40,000 rows\n> shipment_status contains 80,000 rows\n\nI may be missing something, but it looks like the second join\non shipment_status (the left join) is not adding anything to your\nresults, except more work. 
ss is not used for output, nor in the where\nclause, so what is its purpose ?\n\nif cs.date has an upper limit, it might be helpful to change the\ncondition to a BETWEEN\n\nin any case, i would think you might need an index on\n shipment(carrier_code_id)\n shipment(current_status_id)\n shipment_status(id)\n\ngnari\n\n\n\n", "msg_date": "Wed, 02 Mar 2005 18:13:47 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": ">> select s.*\n>> from shipment s\n>> inner join carrier_code cc on s.carrier_code_id = cc.id\n>> inner join carrier c on cc.carrier_id = c.id\n>> inner join carrier_to_person ctp on ctp.carrier_id = c.id\n>> inner join person p on p.id = ctp.person_id\n>> inner join shipment_status cs on s.current_status_id = cs.id\n>> inner join release_code rc on cs.release_code_id = rc.id\n>> left join shipment_status ss on ss.shipment_id = s.id\n>> where\n>> p.id = :personId and\n>> s.is_purged = false and\n>> rc.number = '9' and\n>> cs is not null and\n>> cs.date >= current_date - 31\n>> order by cs.date desc\n>> ...\n>> shipment contains 40,000 rows\n>> shipment_status contains 80,000 rows\n>\n> I may be missing something, but it looks like the second join\n> on shipment_status (the left join) is not adding anything to your\n> results, except more work. ss is not used for output, nor in the where\n> clause, so what is its purpose ?\n\nIt does look strange doesn't it? I would think the same thing if it were \nthe first time I looked at it. But rest assured, it's done by design. A \nshipment relates to many shipment_status rows, but only 1 is the current \nshipment_status for the shipment. The first does queries on the current \nstatus only and doesn't analyze the rest of the related items. The second \nleft join is for eager loading so that I don't have to run a seperate query \nto fetch the children for each shipment. This really does improve \nperformance because otherwise you'll have to make N+1 queries to the \ndatabase, and that's just too much overhead. Since I need all the \nshipment_status children along with the shipment for the domain logic to \nwork on them, I have to load them all.\n\nOn average, a shipment will have 2 shipment_status rows. So if the query \nselects 100 shipments, the query returns 200 rows. Hibernate is intelligent \nenough to map the shipment_status children to the appropriate shipment \nautomatically.\n\n> if cs.date has an upper limit, it might be helpful to change the\n> condition to a BETWEEN\n\nWell, I could create an upper limit. It would be the current date. Would \nadding in this redundant condition improve performance? I've clustered the \nshipment table so that the dates are together, which has improved \nperformance. I'm not sure adding in this implicit condition will speed up \nanything, but I will definately try it.\n\n> in any case, i would think you might need an index on\n> shipment(carrier_code_id)\n> shipment(current_status_id)\n> shipment_status(id)\n\nUnfortunately, I have indexes on all three (Postgres implicitly creates \nindexes for unique keys). Here are the other 2 that are already created:\n\nCREATE INDEX shipment_carrier_code_id_idx ON shipment USING btree \n(carrier_code_id);\nCREATE INDEX shipment_current_status_id_idx ON shipment USING btree \n(current_status_id);\n\nSo I guess we've been thinking the same thing. Don't get me wrong. These \nindexes speed up the query from 1.6 seconds to 250 milliseconds. 
I just \nneed to be around 30 milliseconds.\n\nAnother idea that had occured to me was trying to force postgres to driver \non the person table because that filter ratio is so great compared to \neverything else, but I do remember looking at the explain days ago and it \nwas one of the last tables being filtered/joined. Is there anyway to force \npostgres to pick person? The reason I ask is because this would really \nreduce the number of rows it pulls out from the shipment table.\n\nThanks for comments. I'll try making that date explicit and change the \nquery to use between to see if that does anything.\n\nRegards and many thanks,\nKen \n\n", "msg_date": "Wed, 2 Mar 2005 13:28:43 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "On Wed, 2005-03-02 at 13:28 -0500, Ken Egervari wrote:\n> >> select s.*\n> >> from shipment s\n> >> inner join carrier_code cc on s.carrier_code_id = cc.id\n> >> inner join carrier c on cc.carrier_id = c.id\n> >> inner join carrier_to_person ctp on ctp.carrier_id = c.id\n> >> inner join person p on p.id = ctp.person_id\n> >> inner join shipment_status cs on s.current_status_id = cs.id\n> >> inner join release_code rc on cs.release_code_id = rc.id\n> >> left join shipment_status ss on ss.shipment_id = s.id\n> >> where\n> >> p.id = :personId and\n> >> s.is_purged = false and\n> >> rc.number = '9' and\n> >> cs is not null and\n> >> cs.date >= current_date - 31\n> >> order by cs.date desc\n> >\n> > I may be missing something, but it looks like the second join\n> > on shipment_status (the left join) is not adding anything to your\n> > results, except more work. ss is not used for output, nor in the where\n> > clause, so what is its purpose ?\n> ... The second \n> left join is for eager loading so that I don't have to run a seperate query \n> to fetch the children for each shipment. This really does improve \n> performance because otherwise you'll have to make N+1 queries to the \n> database, and that's just too much overhead.\n\nare you saying that you are actually doing a\n select s.*,ss.* ...\n?\n\n> > if cs.date has an upper limit, it might be helpful to change the\n> > condition to a BETWEEN\n> \n> Well, I could create an upper limit. It would be the current date. Would \n> adding in this redundant condition improve performance?\n\nit might help the planner estimate better the number of cs rows \naffected. whether this improves performance depends on whether\nthe best plans are sensitive to this.\n\nan EXPLAIN ANALYSE might reduce the guessing.\n\ngnari\n\n\n", "msg_date": "Wed, 02 Mar 2005 18:49:53 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": ">> left join is for eager loading so that I don't have to run a seperate\n>> query\n>> to fetch the children for each shipment. This really does improve\n>> performance because otherwise you'll have to make N+1 queries to the\n>> database, and that's just too much overhead.\n>\n> are you saying that you are actually doing a\n> select s.*,ss.* ...\n> ?\n\nYes, this is how the SQL should be written. When I manually converted the\nquery, I forgot to include this detail. In hibernate, you don't need to\nspecifiy the ss.* because you are dealing with objects, so you just say\nshipment. 
The ss.* is indicated in the \"fetch\" part of the Hibernate query.\nThat was my mistake.\n\n> it might help the planner estimate better the number of cs rows\n> affected. whether this improves performance depends on whether\n> the best plans are sensitive to this.\n\nThis sounds like a good idea since cs rows are quite large. shipment and\nshipment_status are the largest tables in the database and they will grow\nvery large over time.\n\n", "msg_date": "Wed, 2 Mar 2005 13:56:57 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": ">it might help the planner estimate better the number of cs rows\n>affected. whether this improves performance depends on whether\n>the best plans are sensitive to this.\n\nI managed to try this and see if it did anything. Unfortunately, it made no \ndifference. It's still 250 milliseconds. It was a good suggestion though. \nI believed it work too.\n\n> an EXPLAIN ANALYSE might reduce the guessing.\n\nOkay, here is the explain analyze I managed to get from work. It came out \nto 312ms here, but without the analyze it actually runs at ~250ms. It is \nusing indexes, so my guess is that there are too many joins or it's not \ndriving on person fast enough. Release code is such a small table that I \ndont think that sequencial scan matters. Thanks for taking the time to \nanalyze this.\n\nSort (cost=1902.27..1902.31 rows=17 width=91) (actual time=312.000..312.000 \nrows=39 loops=1)\n Sort Key: ss.date\n -> Hash Join (cost=617.07..1901.92 rows=17 width=91) (actual \ntime=234.000..312.000 rows=39 loops=1)\n Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n -> Merge Join (cost=602.54..1882.73 rows=870 width=91) (actual \ntime=234.000..312.000 rows=310 loops=1)\n Merge Cond: (\"outer\".current_status_id = \"inner\".id)\n -> Index Scan using shipment_current_status_id_idx on \nshipment s (cost=0.00..2552.13 rows=60327 width=66) (actual \ntime=0.000..61.000 rows=27711 loops=1)\n Filter: (is_purged = false)\n -> Sort (cost=602.54..607.21 rows=1866 width=25) (actual \ntime=125.000..125.000 rows=6934 loops=1)\n Sort Key: ss.id\n -> Hash Join (cost=1.11..501.17 rows=1866 width=25) \n(actual time=0.000..78.000 rows=6934 loops=1)\n Hash Cond: (\"outer\".release_code_id = \"inner\".id)\n -> Index Scan using current_status_date_idx on \nshipment_status ss (cost=0.00..406.78 rows=14924 width=25) (actual \ntime=0.000..47.000 rows=15053 loops=1)\n Index Cond: (date >= (('now'::text)::date - \n31))\n Filter: (id IS NOT NULL)\n -> Hash (cost=1.10..1.10 rows=1 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on release_code rc \n(cost=0.00..1.10 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((number)::text = '9'::text)\n -> Hash (cost=14.53..14.53 rows=2 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Nested Loop (cost=4.92..14.53 rows=2 width=4) (actual \ntime=0.000..0.000 rows=2 loops=1)\n -> Index Scan using person_pkey on person p \n(cost=0.00..5.75 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (id = 355)\n -> Hash Join (cost=4.92..8.75 rows=2 width=8) (actual \ntime=0.000..0.000 rows=2 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".carrier_id)\n -> Seq Scan on carrier c (cost=0.00..3.54 \nrows=54 width=4) (actual time=0.000..0.000 rows=54 loops=1)\n -> Hash (cost=4.92..4.92 rows=2 width=16) \n(actual time=0.000..0.000 rows=0 loops=1)\n -> Hash Join (cost=3.04..4.92 rows=2 \nwidth=16) (actual 
time=0.000..0.000 rows=2 loops=1)\n Hash Cond: (\"outer\".carrier_id = \n\"inner\".carrier_id)\n -> Seq Scan on carrier_code cc \n(cost=0.00..1.57 rows=57 width=8) (actual time=0.000..0.000 rows=57 loops=1)\n -> Hash (cost=3.04..3.04 rows=1 \nwidth=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Index Scan using \ncarrier_to_person_person_id_idx on carrier_to_person ctp (cost=0.00..3.04 \nrows=1 width=8) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (355 = \nperson_id)\nTotal runtime: 312.000 ms\n\nKen \n\n", "msg_date": "Wed, 2 Mar 2005 15:06:58 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "\"Ken Egervari\" <[email protected]> writes:\n> Okay, here is the explain analyze I managed to get from work.\n\nWhat platform is this on? It seems very strange/fishy that all the\nactual-time values are exact integral milliseconds.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Mar 2005 15:29:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally) " }, { "msg_contents": "> \"Ken Egervari\" <[email protected]> writes:\n>> Okay, here is the explain analyze I managed to get from work.\n>\n> What platform is this on? It seems very strange/fishy that all the\n> actual-time values are exact integral milliseconds.\n>\n> regards, tom lane\n\nMy machine is WinXP professional, athon xp 2100, but I get similar results \non my Intel P4 3.0Ghz as well (which is also running WinXP). Why do you \nask? \n\n", "msg_date": "Wed, 2 Mar 2005 15:38:16 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally) " }, { "msg_contents": "\"Ken Egervari\" <[email protected]> writes:\n>> What platform is this on? It seems very strange/fishy that all the\n>> actual-time values are exact integral milliseconds.\n\n> My machine is WinXP professional, athon xp 2100, but I get similar results \n> on my Intel P4 3.0Ghz as well (which is also running WinXP). Why do you \n> ask? \n\nWell, what it suggests is that gettimeofday() is only returning a result\ngood to the nearest millisecond. (Win32 hackers, does that sound right?)\n\nIf so, I'd have to take the EXPLAIN ANALYZE results with a big grain of\nsalt, because what it's trying to do is add up a lot of\nmostly-sub-millisecond intervals. What would essentially happen is that\nwhichever plan node had control at a particular millisecond boundary\nwould get charged for the whole preceding millisecond, and any other\nnodes (which might have actually eaten most of the millisecond) would\nget charged nothing.\n\nOver a sufficiently long query run, the errors would average out, but\nthis wasn't that long --- 312 milliseconds, so in essence we are trying\nto estimate the query's behavior from only 312 samples of where it was\nat the millisecond boundaries. I don't trust profiles based on less\nthan a few thousand samples ...\n\nMost modern machines seem to have clocks that can count elapsed time\ndown to near the microsecond level. 
Anyone know if it's possible to get\nsuch numbers out of Windows, or are we stuck with milliseconds?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Mar 2005 17:29:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Ken Egervari\" <[email protected]> writes:\n> \n>\n>>Okay, here is the explain analyze I managed to get from work.\n>> \n>>\n>\n>What platform is this on? It seems very strange/fishy that all the\n>actual-time values are exact integral milliseconds.\n>\n>\t\n>\nI always get round milliseconds on running. In fact, I think I've seen \ncases where it was actually rounding to 15/16ms. Which is the resolution \nof the \"clock()\" call (IIRC).\n\nThis is the function I have for returning time better than clock(), but \nit looks like it is still stuck no better than 1ms.\n/*\n * MSVC has a function called _ftime64, which is in\n * \"sys/timeb.h\", which should be accurate to milliseconds\n */\n\n#include <sys/types.h>\n#include <sys/timeb.h>\n\ndouble mf::getTime()\n{\n struct __timeb64 timeNow;\n _ftime64(&timeNow);\n return timeNow.time + timeNow.millitm / 1000.0;\n}\n\nI did, however, find this page:\nhttp://www.wideman-one.com/gw/tech/dataacq/wintiming.htm\n\nWhich talks about the high performance counter, which is supposed to be \nable to get better than 1us resolution.\n\nGetSystemTimes() returns the idle/kernel/user times, and seems to have a \nresolution of about 100ns (.1us) GetLocalTime()/GetSystemTime() only has \na resolution of milliseconds.\n\nIn my simple test, I was actually getting timings with a resolution of \n.3us for the QueryPerformanceCounter(). That was the overhead of just \nthe call, since it was called either in a bare loop, or just one after \nthe other.\n\nSo probably we just need to switch to QueryPerformanceCounter() \n[/Frequency].\n\nJohn\n=:->", "msg_date": "Wed, 02 Mar 2005 17:25:10 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "> If so, I'd have to take the EXPLAIN ANALYZE results with a big grain of\n> salt, because what it's trying to do is add up a lot of\n> mostly-sub-millisecond intervals. What would essentially happen is that\n> whichever plan node had control at a particular millisecond boundary\n> would get charged for the whole preceding millisecond, and any other\n> nodes (which might have actually eaten most of the millisecond) would\n> get charged nothing.\n\nWell, we do know that it's at least 75% accurate. I'm only looking for a \nrelative increase in performance. My goal is to try and get this query down \nto 30 milliseconds. But even 125 or 75 would be an improvement. Any \nimprovement, even based on fuzzy data, is still an improvement. Being \nprecise isn't really that important, at least not to me or the people using \nthe application. I can see how rounding can throw off results in the inner \nparts of the plan though, but I think we should try and work with the \nexplain as it is. If there is anything else I can give you to help me out, \nplease ask and I will kindly do it. 
I want to make this easy for you.\n\n> Over a sufficiently long query run, the errors would average out, but\n> this wasn't that long --- 312 milliseconds, so in essence we are trying\n> to estimate the query's behavior from only 312 samples of where it was\n> at the millisecond boundaries. I don't trust profiles based on less\n> than a few thousand samples ...\n\nI'm just using data from the production database, which only has 5 digits \nworth of rows in the main tables. I don't think I can get millions of rows \nin these tables, although I wish I could. I'd have to write a program to \ninsert the data randomly and try to make it distributed the way a real \nproduction database might look in a few years if I wanted the most accurate \nresults. I would try to make the dates bunched up correctly and add more \ncarriers and shipments over time (as more customers would use the system) \nexpoentially.\n\nBut I'm trying to be practical too. This query is too slow for 5 digits of \nrows in the database. Imagine how bad it would be with millions! \nUnfortunately, this query gets ran by hundreds of people logged in every 60 \nseconds on average. It must be as fast as possible. During peak times, \npeople have to wait 5 or 6 seconds just to see the results of this query.\n\nI understand the app may be at fault too, but if this query performed \nfaster, I'm sure that would solve that problem because it's inheritly slow \nand the app is very well layered. It makes good use of frameworks like \nSpring, Hibernate and database pooling, which have been used on many \napplications and have been running very well for us. The fact that the \nquery is slow in PgAdmin III or phpPgAdmin speaks that the query can be \ntuned better.\n\nI am no master tuner. I have read as much as I could about database tuning \nin general, about the proper use of Hibernate and so on. Frankly, I am not \nexperienced enough to solve this problem and I wish to learn from the \nexperts, like you Tom, John, Ragnar and others that have responded kindly to \nmy request.\n\n> Most modern machines seem to have clocks that can count elapsed time\n> down to near the microsecond level. Anyone know if it's possible to get\n> such numbers out of Windows, or are we stuck with milliseconds?\n\nThese results came from PgAdmin III directly. I'm not sure how I can get \ndifferent results even if I knew of a way. \n\n", "msg_date": "Wed, 2 Mar 2005 20:20:33 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally) " }, { "msg_contents": ">I took John's advice and tried to work with sub-selects. I tried this \n>variation, which actually seems like it would make a difference \n>conceptually since it drives on the person table quickly. But to my \n>surprise, the query runs at about 375 milliseconds. I think it's because \n>it's going over that shipment table multiple times, which is where the \n>results are coming from.\n\nI also made a version that runs over shipment a single time, but it's \nexactly 250 milliseconds. 
I guess the planner does the exact same thing.\n\nselect s.*, ss.*\n\nfrom shipment s\n inner join shipment_status ss on s.current_status_id=ss.id\n inner join release_code rc on ss.release_code_id=rc.id\n left outer join driver d on s.driver_id=d.id\n left outer join carrier_code cc on s.carrier_code_id=cc.id\nwhere s.carrier_code_id in (\n select cc.id\n from person p\n inner join carrier_to_person ctp on p.id=ctp.person_id\n inner join carrier c on ctp.carrier_id=c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n where p.id = 355\n )\n and s.current_status_id is not null\n and s.is_purged=false\n and(rc.number='9' )\n and(ss.date>=current_date-31 )\n\norder by ss.date desc \n\n", "msg_date": "Wed, 2 Mar 2005 21:51:55 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (more musings) " }, { "msg_contents": "I took John's advice and tried to work with sub-selects. I tried this\nvariation, which actually seems like it would make a difference conceptually\nsince it drives on the person table quickly. But to my surprise, the query\nruns at about 375 milliseconds. I think it's because it's going over that\nshipment table multiple times, which is where the results are coming from.\n\nselect s.*, ss.*\n\nfrom shipment s\n inner join shipment_status ss on s.current_status_id=ss.id\n inner join release_code rc on ss.release_code_id=rc.id\n left outer join driver d on s.driver_id=d.id\n left outer join carrier_code cc on s.carrier_code_id=cc.id\nwhere s.id in (\n select s.id\n from person p\n inner join carrier_to_person ctp on p.id=ctp.person_id\n inner join carrier c on ctp.carrier_id=c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n inner join shipment s on s.carrier_code_id = cc.id\n where p.id = 355\n and s.current_status_id is not null\n and s.is_purged=false\n )\n and(rc.number='9' )\n and(ss.date>=current_date-31 )\n\norder by ss.date desc\n\n*** Musing 1\nAlso, \"s.current_status_id is not null\" is an important filter that I forgot\nto mention. In this example where p.id = 355, it filters out 90% of the\nrows. In general, that filter ratio is 0.46 though, which is not quite so\nhigh. However, this filter gets better over time because more and more\nusers will use a filter that will make this value null. It's still not as\nstrong as person though and probably never will be. But I thought I'd\nmention it nonetheless.\n\n*** Musing 2\nI do think that the filter \"ss.date>=current_date-31\" is slowing this query\ndown. I don't think it's the mention of \"current_date\" or even that it's\ndynamic instead of static. I think the range is just too big. For example,\nif I use:\n\nand ss.date between '2005-02-01 00:00:00' and '2005-02-28 23:59:59'\n\nThe query still results in 250 milliseconds. But if I make the range very\nsmall - say Feb 22nd of 2005:\n\nand ss.date between '2005-02-22 00:00:00' and '2005-02-22 23:59:59'\n\nNow the entire query runs in 47 milliseconds on average. If I can't make\nthis query perform any better, should I change the user interface to select\nthe date instead of showing the last 31 days to benefit from this single-day\nfilter? This causes more clicks to select the day (like from a calendar),\nbut most users probably aren't interested in seeing the entire listing\nanyway. 
However, it's a very important requirement that users know that\nshipment enteries exist in the last 31 days (because they are usually\nsure-fire problems if they are still in this query after a few days).\n\nI guess I'm wondering if tuning the query is futile and I should get the\nrequirements changed, or is there something I can do to really speed it up?\n\nThanks again,\nKen\n\n", "msg_date": "Wed, 2 Mar 2005 21:52:39 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (Some musings) " }, { "msg_contents": "Ken Egervari wrote:\n> I've tried to use Dan Tow's tuning method and created all the right indexes from his diagraming method, but the query still performs quite slow both inside the application and just inside pgadmin III. Can anyone be kind enough to help me tune it so that it performs better in postgres? I don't think it's using the right indexes, or maybe postgres needs special treatment.\n> \n> I've converted the below query to SQL from a Hibernate query, so the syntax is probably not perfect but it's semantics are exactly the same. I've done so by looking at the source code, but I can't run it to get the exact SQL since I don't have the database on my home machine.\n> \n> select s.*\n> from shipment s\n> inner join carrier_code cc on s.carrier_code_id = cc.id\n> inner join carrier c on cc.carrier_id = c.id\n> inner join carrier_to_person ctp on ctp.carrier_id = c.id\n> inner join person p on p.id = ctp.person_id\n> inner join shipment_status cs on s.current_status_id = cs.id\n> inner join release_code rc on cs.release_code_id = rc.id\n> left join shipment_status ss on ss.shipment_id = s.id\n> where\n> p.id = :personId and\n> s.is_purged = false and\n> rc.number = '9' and\n> cs is not null and\n> cs.date >= current_date - 31\n> order by cs.date desc\n>\n\nYou might be able to coerce the planner to drive off person by\nrearranging the join orders, plus a few other bits... hopefully I have\nnot brutalized the query to the point where it does not work :-) :\n\nselect p.id, s*, ss.*\nfrom person p\n inner join carrier_to_person ctp on p.id = ctp.person_id\n inner join carrier c on ctp.carrier_id = c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n inner join shipment s on s.carrier_code_id = cc.id\n inner join shipment_status cs on s.current_status_id = cs.id\n inner join release_code rc on cs.release_code_id = rc.id\n left join shipment_status ss on ss.shipment_id = s.id\nwhere\n p.id = :personId and\n s.is_purged = false and\n rc.number = 9 and\n cs is not null and\n cs.date between current_date - 31 and current_date\norder by cs.date desc\n\n\nI have added the 'p.id' in the select list in the hope that that might\nencourage the planner to take seriously the idea of getting the person\nrow(?) first. In addition I made 9 a number and closed the inequality\n(just in case it helps a bit).\n\n\n\n\n\n", "msg_date": "Thu, 03 Mar 2005 17:30:16 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Ken Egervari wrote:\n\n>> I took John's advice and tried to work with sub-selects. I tried \n>> this variation, which actually seems like it would make a difference \n>> conceptually since it drives on the person table quickly. But to my \n>> surprise, the query runs at about 375 milliseconds. 
I think it's \n>> because it's going over that shipment table multiple times, which is \n>> where the results are coming from.\n>\n>\n> I also made a version that runs over shipment a single time, but it's \n> exactly 250 milliseconds. I guess the planner does the exact same thing.\n>\nWhy are you now left joining driver and carrier code, but inner joining \nshipment_status? I assume this is the *real* query that you are executing.\n\n From the earlier explain analyze, and your statements, the initial \nperson p should be the heavily selective portion.\n\nAnd what does \"driver\" get you? It isn't in the return, and it isn't \npart of a selectivity clause.\nYou are also double joining against carrier code, once as a left outer \njoin, and once in the inner join.\n\nThis query doesn't seem quite right. Are you sure it is generating the \nrows you are expecting?\n\n> select s.*, ss.*\n>\n> from shipment s\n> inner join shipment_status ss on s.current_status_id=ss.id\n> inner join release_code rc on ss.release_code_id=rc.id\n> left outer join driver d on s.driver_id=d.id\n> left outer join carrier_code cc on s.carrier_code_id=cc.id\n> where s.carrier_code_id in (\n> select cc.id\n> from person p\n> inner join carrier_to_person ctp on p.id=ctp.person_id\n> inner join carrier c on ctp.carrier_id=c.id\n> inner join carrier_code cc on cc.carrier_id = c.id\n> where p.id = 355\n> )\n> and s.current_status_id is not null\n> and s.is_purged=false\n> and(rc.number='9' )\n> and(ss.date>=current_date-31 )\n>\n> order by ss.date desc\n\nYou might want to post the explain analyze of this query to have a point \nof reference, but what about something like this:\nselect s.*, ss.*\n\nfrom shipment_status ss on s.current_status_id=ss.id\njoin (select s.* from shipment s\n where s.carrier_code_id in\n (select cc.id\n from person p\n inner join carrier_to_person ctp on p.id=ctp.person_id\n inner join carrier c on ctp.carrier_id=c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n where p.id = 355\n )\n and s.current_status_id is not null\n and s.is_purged=false\n) as i -- Just a name for the subselect since it is in a join\ninner join release_code rc on ss.release_code_id=rc.id\nwhere (rc.number='9' )\nand(ss.date between current_date-31 and current_date())\n\norder by ss.date desc\n\nMy idea with this query is to minimize the number of shipment rows that \nneed to be generated before joining with the other rows. My syntax is \nprobably a little bit off, since I can't actually run it against real \ntables.\nBut looking at your *original* query, you were getting 15000 rows out of \nshipment_status, and then 27700 rows out of shipment, which was then \nbeing merge-joined down to only 300 rows, and then hash-joined down to 39.\n\nI'm just trying to think of ways to prevent it from blossoming into 27k \nrows to start with.\n\nPlease double check your query, because it seems to be grabbing \nunnecessary rows with the left joins, and then post another explain \nanalyze with one (or several) different subselect forms.\n\nJohn\n=:->", "msg_date": "Wed, 02 Mar 2005 23:04:59 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (more musings)" }, { "msg_contents": "Ken,\n\n> I've tried to use Dan Tow's tuning method and created all the right indexes\n> from his diagraming method, but the query still performs quite slow both\n> inside the application and just inside pgadmin III.  
Can anyone be kind\n> enough to help me tune it so that it performs better in postgres?  I don't\n> think it's using the right indexes, or maybe postgres needs special\n> treatment.\n\nFWIW, I picked up Dan Tow's book to give it a read, and they guy isn't \nqualified to author \"SQL Tuning\". You should chuck that book, it won't help \nyou -- not with Oracle or SQL Server, and certainly not with PostgreSQL. \nO'Reilly continues to have trouble turning out quality database books.\n\nAlso, if you *were* using Dan's method, you'd be driving off Person, not \nShipment.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Mar 2005 21:36:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query" }, { "msg_contents": "Ken,\n\n>         ->  Merge Join  (cost=602.54..1882.73 rows=870 width=91) (actual\n> time=234.000..312.000 rows=310 loops=1)\n>               Merge Cond: (\"outer\".current_status_id = \"inner\".id)\n\nHmmm ... this merge join appears to be the majority of your execution \ntime .... at least within the resolution that PGWin allows us. Please try \ntwo things, and give us Explain Analyzes:\n\n1) To determine your query order ala Dan Tow and drive off of person, please \nSET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the \nquery. (Not that I believe in Dan Tow ... see previous message ... but it \nwould be interesting to see the results.\n\n2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;\n\nAlso, please let us know some about the server you're using and your \nconfiguration parameters, particularly:\nshared_buffers\nwork_mem\neffective_cache_size\nrandom_page_cost\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Mar 2005 21:52:13 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "John,\n\n>Why are you now left joining driver and carrier code, but inner joining\n>shipment_status? I assume this is the *real* query that you are executing.\n\nWell, the old and new versions are real queries. I changed the query a bit \nbecause I noticed for some users, the listing was pulling out many different \ndrivers. Each separate query on the driver took about 10 milliseconds. For \na listing of 39 results, that's a possible 390 milliseconds assuming all the \ndrivers are different and none of them are cached. So, I just left joined \nthe driver and it added about 5 milliseconds of overhead to this query. I \napoligize for not communicating this change, but I had to make it to speed \nthis stuff up during the day until I could fix the root of the problem. One \nthing that I learned is that left joining and including lots of columns \nrarely slows the query. The same was done for the carrier_code, although \nthis only saved 15 milliseconds.\n\nThe end result is still high because the query we are talking about is very \nexpensive, but at least the following queries that appeared after are \neliminated altogether. The overhead and separate queries really places a \nhamper on overall performance. For the person 355, the overhead was about \n300 milliseconds since 10 of the drivers were null. I hope this makes \nsense.\n\n>From the earlier explain analyze, and your statements, the initial\n>person p should be the heavily selective portion.\n\nI totally agree. 
I just never really figured out how to tell postgres my \nintentions.\n\n>You are also double joining against carrier code, once as a left outer\n>join, and once in the inner join.\n\nYes, that was my mistake since Hibernate didn't generate that - I manually \nput in those sub-selects.\n\n>This query doesn't seem quite right. Are you sure it is generating the\n>rows you are expecting?\n\nYes, the results are the same with the left joins. I didn't include d.* and \ncc.* in the select, which again, is my mistake. The main problem is when I \nmake changes to the query, I don't think about it in terms of how SQL does \nit. I think about Hibernate does it. Earger loading rows is different from \nselecting the main row at the top of the query. I bet this comes as very \nstrange, but in Hibernate they are two-different things. I've been using \nHibernate for so long that working with SQL is not so natural for me. This \nis my mistake and I apologize.\n\n>You might want to post the explain analyze of this query to have a point\n>of reference, but what about something like this:\n>select s.*, ss.*\n\nOkay. Here is syntax-corrected version of your very creative query. I \nwouldn't have thought of doing something like this at all. It makes perfect \nsense that you are commanding the database to do what it should be doing, \nwhich is something I really like since the concept of a planner picking \nstuff for me makes me unsettled (even if it is doing it right).\n\nselect i.*, ss.*\nfrom shipment_status ss\n inner join release_code rc on ss.release_code_id=rc.id,\n (\n select s.*\n from shipment s\n where s.current_status_id is not null\n and s.is_purged=false\n and s.carrier_code_id in (\n select cc.id\n from person p\n inner join carrier_to_person ctp on p.id=ctp.person_id\n inner join carrier c on ctp.carrier_id=c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n where p.id = 355\n )\n ) as i\nwhere (rc.number='9' )\n and(i.current_status_id = ss.id)\n and(ss.date between current_date-31 and current_date);\n\nWhen running this on my production database, the speed is 265 milliseconds \non average running it 20 times (lowest was 250, highest was 281). Not quite \nwhat we want, but I'm sure the tuning of this new query hasn't really \nstarted. Here is the EXPLAIN ANALYZE. 
It seems very similiar to the one \npostgres picked out but it's a bit shorter.\n\nHash IN Join (cost=676.15..1943.11 rows=14 width=91) (actual \ntime=250.000..328.000 rows=39 loops=1)\n Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n -> Merge Join (cost=661.65..1926.51 rows=392 width=91) (actual \ntime=250.000..328.000 rows=310 loops=1)\n Merge Cond: (\"outer\".current_status_id = \"inner\".id)\n -> Index Scan using shipment_current_status_id_idx on shipment s \n(cost=0.00..2702.56 rows=27257 width=66) (actual time=0.000..110.000 \nrows=27711 loops=1)\n Filter: ((current_status_id IS NOT NULL) AND (is_purged = \nfalse))\n -> Sort (cost=661.65..666.46 rows=1922 width=25) (actual \ntime=140.000..172.000 rows=6902 loops=1)\n Sort Key: ss.id\n -> Hash Join (cost=1.11..556.82 rows=1922 width=25) (actual \ntime=0.000..94.000 rows=6902 loops=1)\n Hash Cond: (\"outer\".release_code_id = \"inner\".id)\n -> Index Scan using current_status_date_idx on \nshipment_status ss (cost=0.01..459.64 rows=15372 width=25) (actual \ntime=0.000..94.000 rows=14925 loops=1)\n Index Cond: ((date >= (('now'::text)::date - 31)) \nAND (date <= ('now'::text)::date))\n -> Hash (cost=1.10..1.10 rows=1 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on release_code rc (cost=0.00..1.10 \nrows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((number)::text = '9'::text)\n -> Hash (cost=14.49..14.49 rows=2 width=4) (actual time=0.000..0.000 \nrows=0 loops=1)\n -> Nested Loop (cost=6.87..14.49 rows=2 width=4) (actual \ntime=0.000..0.000 rows=2 loops=1)\n -> Index Scan using person_pkey on person p (cost=0.00..5.73 \nrows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (id = 355)\n -> Hash Join (cost=6.87..8.74 rows=2 width=8) (actual \ntime=0.000..0.000 rows=2 loops=1)\n Hash Cond: (\"outer\".carrier_id = \"inner\".carrier_id)\n -> Seq Scan on carrier_code cc (cost=0.00..1.57 \nrows=57 width=8) (actual time=0.000..0.000 rows=57 loops=1)\n -> Hash (cost=6.86..6.86 rows=1 width=12) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Hash Join (cost=3.04..6.86 rows=1 width=12) \n(actual time=0.000..0.000 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".carrier_id)\n -> Seq Scan on carrier c (cost=0.00..3.54 \nrows=54 width=4) (actual time=0.000..0.000 rows=54 loops=1)\n -> Hash (cost=3.04..3.04 rows=1 width=8) \n(actual time=0.000..0.000 rows=0 loops=1)\n -> Index Scan using \ncarrier_to_person_person_id_idx on carrier_to_person ctp (cost=0.00..3.04 \nrows=1 width=8) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (355 = person_id)\nTotal runtime: 344.000 ms\n\n>My idea with this query is to minimize the number of shipment rows that\n>need to be generated before joining with the other rows. My syntax is\n>probably a little bit off, since I can't actually run it against real\n>tables.\n\nYes, I tried adding redundant 'from clauses' with the shipment or \nshipment_status tables and each caluse adds 100 milliseconds. I wish they \nweren't so expensive.\n\n>But looking at your *original* query, you were getting 15000 rows out of\n>shipment_status, and then 27700 rows out of shipment, which was then\n>being merge-joined down to only 300 rows, and then hash-joined down to 39.\n>I'm just trying to think of ways to prevent it from blossoming into 27k\n>rows to start with.\n\nYes, nothing has changed from the original query. By the looks of things, \nthe sub-select version returns slightly less rows but not much \nunfortunately. 
I'm trying to figure out how to minimize the rows \ntraversals. Maybe I should explain a bit about the app so you can get an \nidea on why the shipment rows are so big?\n\nYou see, the app keeps track of custom status for shipments. Either the \nstatus comes in, so the shipment row is created along with 1 or more \nshipment_status rows, or the shipments are prepared in advance (so no \nshipment_status rows are assigned to them immediately).\n\nIn the case of p.id = 355, there are ~27000 shipments. But most of these \nare prepared in advance, which don't concern this query at all and should be \nfiltered out. That's why the \"s.current_status is not null\" is important. \nThis filter will reduce the rows from 27000 to about 3500, which is all the \nreal shipments with customs status. The others will gain rows in \nshipment_status over time, but new shipment rows will be created in advance \nas well.\n\nAt some point, it will probably balance out, but since the features to \nprepare shipments in advance are new, only some carriers will have more \nshipments than shipment_status rows. In some cases, there are no prepared \nshipments. When this happens, there is usually a 1:2 ratio between shipment \nand shipment_status. I think this weird distribution makes queries like \nthis kind of hard to predict the performance of. Anyway, I think it's \nbetter to assume that previous case where shipment rows > shipment_status \nwill tend to be the norm over time.\n\nIf the query won't perform properly, I'm wondering if the requirements \nshould really change. For example, there is another table called \nrelease_office that is also associated with shipment. I could filter by \nthat too. I could then offer a screen to select the release office first \nand only show the shipments with that release office. The will reduce the \nnumber of shipments for some users, but not all. Some users use only one or \ntwo release offices, so it wouldn't be a big help.\n\nI could also make the query select a certain day instead of a range. Like I \nsaid in a previous post, this makes the query run at 47 milliseconds. \nHowever, this might make it harder for users to access the information... \nand if they clicked 31 days on the calendar, that's really 47*31 \nmilliseconds total. I guess I'd have to ask for usability studies or \nsomething to figure out what people really hope to gain from these listings \nin the first place and how they'd want to work with them. Maybe it's not a \nperformance problem - maybe it's a usability problem. However, even if that \nwere the case, I'd still want to know how to fix something like this for my \nown knowledge since I'm still learning.\n\nI also know others are using postgres quite successfully with tables \ncontaining millions of rows, in applications far more riskier than mine. \nI'm not sure why this query is any different. Is there a configuration \nsetting I can use to make things speed up perhaps?\n\nAnyhow, thanks for taking the time helping me out John. I'm going to play \nwith more sub-selects and see if I find a combination that works a bit \nbetter. I'll post my results in a bit. 
If we do figure this out, it might \nbe worthwhile for me to make a case-study and make it available over \nwww.postgres.org so other people can benefit from this experience too.\n\nMany thanks!\n\nKen \n\n", "msg_date": "Thu, 3 Mar 2005 01:35:42 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (more musings)" }, { "msg_contents": "Josh,\n\n>1) To determine your query order ala Dan Tow and drive off of person, \n>please\n>SET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the\n>query. (Not that I believe in Dan Tow ... see previous message ... but it\n>would be interesting to see the results.\n\nUnfortunately, the query still takes 250 milliseconds. I tried it with \nother queries and the results are the same as before. Here is the explain \nanalayze anyway:\n\nSort (cost=2036.83..2036.87 rows=16 width=103) (actual \ntime=328.000..328.000 rows=39 loops=1)\n Sort Key: cs.date\n -> Nested Loop Left Join (cost=620.61..2036.51 rows=16 width=103) \n(actual time=250.000..328.000 rows=39 loops=1)\n -> Hash Join (cost=620.61..1984.90 rows=16 width=78) (actual \ntime=250.000..328.000 rows=39 loops=1)\n Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n -> Merge Join (cost=606.11..1965.99 rows=825 width=74) \n(actual time=250.000..328.000 rows=310 loops=1)\n Merge Cond: (\"outer\".current_status_id = \"inner\".id)\n -> Index Scan using shipment_current_status_id_idx on \nshipment s (cost=0.00..2701.26 rows=60307 width=66) (actual \ntime=0.000..77.000 rows=27711 loops=1)\n Filter: (is_purged = false)\n -> Sort (cost=606.11..610.50 rows=1756 width=12) \n(actual time=141.000..141.000 rows=6902 loops=1)\n Sort Key: cs.id\n -> Hash Join (cost=1.11..511.48 rows=1756 \nwidth=12) (actual time=0.000..109.000 rows=6902 loops=1)\n Hash Cond: (\"outer\".release_code_id = \n\"inner\".id)\n -> Index Scan Backward using \ncurrent_status_date_idx on shipment_status cs (cost=0.01..422.58 rows=14047 \nwidth=16) (actual time=0.000..78.000 rows=14925 loops=1)\n Index Cond: ((date >= \n(('now'::text)::date - 31)) AND (date <= ('now'::text)::date))\n Filter: (cs.* IS NOT NULL)\n -> Hash (cost=1.10..1.10 rows=1 width=4) \n(actual time=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on release_code rc \n(cost=0.00..1.10 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((number)::text = \n'9'::text)\n -> Hash (cost=14.49..14.49 rows=2 width=8) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Nested Loop (cost=6.87..14.49 rows=2 width=8) \n(actual time=0.000..0.000 rows=2 loops=1)\n -> Index Scan using person_pkey on person p \n(cost=0.00..5.73 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (id = 355)\n -> Hash Join (cost=6.87..8.74 rows=2 width=8) \n(actual time=0.000..0.000 rows=2 loops=1)\n Hash Cond: (\"outer\".carrier_id = \n\"inner\".carrier_id)\n -> Seq Scan on carrier_code cc \n(cost=0.00..1.57 rows=57 width=8) (actual time=0.000..0.000 rows=57 loops=1)\n -> Hash (cost=6.86..6.86 rows=1 width=12) \n(actual time=0.000..0.000 rows=0 loops=1)\n -> Hash Join (cost=3.04..6.86 rows=1 \nwidth=12) (actual time=0.000..0.000 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \n\"inner\".carrier_id)\n -> Seq Scan on carrier c \n(cost=0.00..3.54 rows=54 width=4) (actual time=0.000..0.000 rows=54 loops=1)\n -> Hash (cost=3.04..3.04 \nrows=1 width=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Index Scan using \ncarrier_to_person_person_id_idx on carrier_to_person ctp (cost=0.00..3.04 \nrows=1 
width=8) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (355 = \nperson_id)\n -> Index Scan using shipment_status_shipment_id_idx on \nshipment_status ss (cost=0.00..3.20 rows=2 width=25) (actual \ntime=0.000..0.000 rows=1 loops=39)\n Index Cond: (ss.shipment_id = \"outer\".id)\nTotal runtime: 328.000 ms\n\n>2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;\n\nSetting this option had no effect either In fact, the query is a bit slower \n(266 milliseconds but 250 came up once in 20 executions).\n\n>Also, please let us know some about the server you're using and your\n>configuration parameters, particularly:\n>shared_buffers\n>work_mem\n>effective_cache_size\n>random_page_cost\n\nWell, I'm on a test machine so the settings haven't changed one bit from the \ndefaults. This may sound embarrassing, but I bet the production server is \nnot custom configured either. The computer I'm running these queries on is \njust a simple Athon XP 2100+ on WinXP with 1GB of RAM. The production \nserver is a faster P4, but the rest is the same. Here are the 4 values in \nmy configuration, but 3 of them were commented:\n\nshared_buffers = 1000\n#work_mem = 1024\n#effective_cache_size = 1000\n#random_page_cost = 4\n\nI'm not sure what these do, but I'm guessing the last 2 affect the planner \nto do different things with the statistics. Should I increase the first \ntwo?\n\nRegards,\nKen \n\n", "msg_date": "Thu, 3 Mar 2005 01:59:13 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken Egervari wrote:\n> \n> Hash IN Join (cost=676.15..1943.11 rows=14 width=91) (actual \n> time=250.000..328.000 rows=39 loops=1)\n> Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n> -> Merge Join (cost=661.65..1926.51 rows=392 width=91) (actual \n> time=250.000..328.000 rows=310 loops=1)\n> Merge Cond: (\"outer\".current_status_id = \"inner\".id)\n> -> Index Scan using shipment_current_status_id_idx on shipment s \n> (cost=0.00..2702.56 rows=27257 width=66) (actual time=0.000..110.000 \n> rows=27711 loops=1)\n> Filter: ((current_status_id IS NOT NULL) AND (is_purged = \n> false))\n\nThere's a feature in PG called partial indexes - see CREATE INDEX \nreference for details. Basically you can do something like:\n\nCREATE INDEX foo_idx ON shipment (carrier_code_id)\nWHERE current_status_id IS NOT NULL\nAND is_purged = FALSE;\n\nSomething similar may be a win here, although the above index might not \nbe quite right - sorry, bit tired at moment.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 03 Mar 2005 07:06:47 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (more musings)" }, { "msg_contents": ">2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;\n\nActually, it was 312 milliseconds, so it got worse.\n", "msg_date": "Thu, 3 Mar 2005 04:21:33 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken,\n\nWell, I'm a bit stumped on troubleshooting the actual query since Windows' \npoor time resolution makes it impossible to trust the actual execution times. \nObviously this is something we need to look into for the Win32 port for \n8.1 ..\n\n> shared_buffers = 1000\n\nThis may be slowing up that merge join. Try resetting it to 6000. 
I'm not \nsure what system settings you might have to do on Windows to get it to \nsupport higher shared buffers; see the docs.\n\n> #work_mem = 1024\n\nUp this to 4096 for testing purposes; your production value will vary \ndepending on several factors; see link below.\n\n> #effective_cache_size = 1000\n\nIncrease this to the actual amount of RAM you have available, about 750MB (you \ndo the math)\n\n> #random_page_cost = 4\n\nLeave this for now. \n\nSee www.powerpostgresql.com/PerfList for more information.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 3 Mar 2005 09:35:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Josh,\n\nI did everything you said and my query does perform a bit better. I've been \ngetting speeds from 203 to 219 to 234 milliseconds now. I tried increasing \nthe work mem and the effective cache size from the values you provided, but \nI didn't see any more improvement. I've tried to looking into setting the \nshared buffers for Windows XP, but I'm not sure how to do it. I'm looking \nin the manual at:\nhttp://www.postgresql.org/docs/8.0/interactive/kernel-resources.html#SYSVIPC-PARAMETERS\n\nIt doesn't mention windows at all. Does anyone have any ideas on have to \nfix this?\n\nHere is the new explain analyze.\n\nSort (cost=1996.21..1996.26 rows=17 width=165) (actual \ntime=297.000..297.000 rows=39 loops=1)\n Sort Key: ss.date\n -> Merge Right Join (cost=1951.26..1995.87 rows=17 width=165) (actual \ntime=297.000..297.000 rows=39 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".driver_id)\n -> Index Scan using driver_pkey on driver d (cost=0.00..42.16 \nrows=922 width=43) (actual time=0.000..0.000 rows=922 loops=1)\n -> Sort (cost=1951.26..1951.30 rows=17 width=122) (actual \ntime=297.000..297.000 rows=39 loops=1)\n Sort Key: s.driver_id\n -> Hash Join (cost=586.48..1950.91 rows=17 width=122) \n(actual time=219.000..297.000 rows=39 loops=1)\n Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n -> Merge Join (cost=571.97..1931.95 rows=830 width=87) \n(actual time=219.000..297.000 rows=310 loops=1)\n Merge Cond: (\"outer\".current_status_id = \n\"inner\".id)\n -> Index Scan using \nshipment_current_status_id_idx on shipment s (cost=0.00..2701.26 rows=60307 \nwidth=66) (actual time=0.000..62.000 rows=27711 loops=1)\n Filter: (is_purged = false)\n -> Sort (cost=571.97..576.38 rows=1766 width=21) \n(actual time=125.000..156.000 rows=6902 loops=1)\n Sort Key: ss.id\n -> Hash Join (cost=1.11..476.72 rows=1766 \nwidth=21) (actual time=0.000..93.000 rows=6902 loops=1)\n Hash Cond: (\"outer\".release_code_id = \n\"inner\".id)\n -> Index Scan Backward using \ncurrent_status_date_idx on shipment_status ss (cost=0.00..387.35 rows=14122 \nwidth=21) (actual time=0.000..16.000 rows=14925 loops=1)\n Index Cond: (date >= \n(('now'::text)::date - 31))\n -> Hash (cost=1.10..1.10 rows=1 \nwidth=4) (actual time=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on release_code rc \n(cost=0.00..1.10 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((number)::text = \n'9'::text)\n -> Hash (cost=14.51..14.51 rows=2 width=35) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Nested Loop (cost=4.92..14.51 rows=2 \nwidth=35) (actual time=0.000..0.000 rows=2 loops=1)\n -> Index Scan using person_pkey on person p \n(cost=0.00..5.73 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (id = 355)\n -> 
Hash Join (cost=4.92..8.75 rows=2 \nwidth=39) (actual time=0.000..0.000 rows=2 loops=1)\n Hash Cond: (\"outer\".id = \n\"inner\".carrier_id)\n -> Seq Scan on carrier c \n(cost=0.00..3.54 rows=54 width=4) (actual time=0.000..0.000 rows=54 loops=1)\n -> Hash (cost=4.92..4.92 rows=2 \nwidth=43) (actual time=0.000..0.000 rows=0 loops=1)\n -> Hash Join (cost=3.04..4.92 \nrows=2 width=43) (actual time=0.000..0.000 rows=2 loops=1)\n Hash Cond: \n(\"outer\".carrier_id = \"inner\".carrier_id)\n -> Seq Scan on \ncarrier_code cc (cost=0.00..1.57 rows=57 width=35) (actual \ntime=0.000..0.000 rows=57 loops=1)\n -> Hash (cost=3.04..3.04 \nrows=1 width=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Index Scan using \ncarrier_to_person_person_id_idx on carrier_to_person ctp (cost=0.00..3.04 \nrows=1 width=8) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: \n(355 = person_id)\nTotal runtime: 297.000 ms \n\n", "msg_date": "Thu, 3 Mar 2005 18:42:46 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken Egervari wrote:\n\n> Josh,\n>\n> I did everything you said and my query does perform a bit better.\n> I've been getting speeds from 203 to 219 to 234 milliseconds now. I\n> tried increasing the work mem and the effective cache size from the\n> values you provided, but I didn't see any more improvement. I've\n> tried to looking into setting the shared buffers for Windows XP, but\n> I'm not sure how to do it. I'm looking in the manual at:\n> http://www.postgresql.org/docs/8.0/interactive/kernel-resources.html#SYSVIPC-PARAMETERS\n>\n>\nYou probably don't need to change anything for Windows. If you set\nshared_buffers too high, then postgres won't start. If it is starting,\nthen you don't need to modify the OS to get more shared buffers. (For\ninstance, on my Mac, I can't get shared_buffers > 500 without changing\nthings, but on windows I run with 3000 and no modification).\n\n> It doesn't mention windows at all. Does anyone have any ideas on have\n> to fix this?\n>\nDo you need the interior sort? It's taking ~93ms to get 7k rows from\nshipment_status, and then another 30ms to sort them. This isn't a lot,\nso it might be fine.\n\nAlso, did you ever try CLUSTER current_status_date_idx ON shipment_status.\nThis groups the rows in shipment_status by their status date, which\nhelps put items with the same date next to eachother. This may effect\nother portions of the query, or other queries. Also, if you are\ninserting sequentially, it would seem that the items would already be\nnaturally near eachother based on date.\n\nThe next big cost is having to merge the 28k rows with the fast hash\nplan, which takes about 80ms.\n\nI guess the biggest issue is that you are doing a lot of work, and it\ntakes time to do it. Also, I've noticed that this query is being run\nwith exactly the same data. Which is good to compare two methods. But\nremember to test on multiple potential values. You might be better off\none way with this query, but much worse for a different dataset. I\nnoticed that this seems to have fewer rows than what postgres thinks the\n*average* number would be. (It predicts 60k and you only get 28k rows).\n\nIf this query is performed a lot, and you can be okay with a slight\ndelay in updating, you could always switch to some sort of lazy\nmaterialized view.\n\nYou could also always throw more hardware at it. 
:) If the\nshipment_status is one of the bottlenecks, create a 4-disk raid10 and\nmove the table over.\nI don't remember what your hardware is, but I don't remember it being a\nquad opteron with 16GB ram, and 20 15k SCSI disks, with the transaction\nlog on a solid state disk. :)\n\nWhy do you need the query to be 30ms? ~250ms is still pretty fast. If\nyou are needing updates faster than that, you might look more into *why*\nand then handle it from a higher level.\n\nAnd naturally, the most important this is to test it under load. 250ms\nis pretty good, but if under load it goes back to 6s, then we probably\nshould look for different alternatives. Also, what is the load that is\ncausing the problem? Is it that you have some other big seqscans which\nare causing all of your tables to go out of cache?\n\nAlso, I believe I remember you saying that your production server is a\nP4, is that a single P4? Because I know postgres prefers Opterons to\nPentium Xeons when in a multiprocessor machine. Look through the\narchives about spinlocks and the context switch bug. (context storm,\netc). Plus, since opterons are 64-bit, you can throw a lot more RAM at\nthem. I believe opterons outperform xeons for the same cost, *and* you\ncan scale them up with extra ram.\n\nBut remember, the biggest bottleneck is almost *always* the I/O. So put\nmore & faster disks into the system first.\n\nJohn\n=:->\n\n> Here is the new explain analyze.\n>\n> Sort (cost=1996.21..1996.26 rows=17 width=165) (actual\n> time=297.000..297.000 rows=39 loops=1)\n> Sort Key: ss.date\n> -> Merge Right Join (cost=1951.26..1995.87 rows=17 width=165)\n> (actual time=297.000..297.000 rows=39 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".driver_id)\n> -> Index Scan using driver_pkey on driver d (cost=0.00..42.16\n> rows=922 width=43) (actual time=0.000..0.000 rows=922 loops=1)\n> -> Sort (cost=1951.26..1951.30 rows=17 width=122) (actual\n> time=297.000..297.000 rows=39 loops=1)\n> Sort Key: s.driver_id\n> -> Hash Join (cost=586.48..1950.91 rows=17 width=122)\n> (actual time=219.000..297.000 rows=39 loops=1)\n> Hash Cond: (\"outer\".carrier_code_id = \"inner\".id)\n> -> Merge Join (cost=571.97..1931.95 rows=830\n> width=87) (actual time=219.000..297.000 rows=310 loops=1)\n> Merge Cond: (\"outer\".current_status_id =\n> \"inner\".id)\n> -> Index Scan using\n> shipment_current_status_id_idx on shipment s (cost=0.00..2701.26\n> rows=60307 width=66) (actual time=0.000..62.000 rows=27711 loops=1)\n> Filter: (is_purged = false)\n> -> Sort (cost=571.97..576.38 rows=1766\n> width=21) (actual time=125.000..156.000 rows=6902 loops=1)\n> Sort Key: ss.id\n> -> Hash Join (cost=1.11..476.72\n> rows=1766 width=21) (actual time=0.000..93.000 rows=6902 loops=1)\n> Hash Cond:\n> (\"outer\".release_code_id = \"inner\".id)\n> -> Index Scan Backward using\n> current_status_date_idx on shipment_status ss (cost=0.00..387.35\n> rows=14122 width=21) (actual time=0.000..16.000 rows=14925 loops=1)\n> Index Cond: (date >=\n> (('now'::text)::date - 31))\n> -> Hash (cost=1.10..1.10 rows=1\n> width=4) (actual time=0.000..0.000 rows=0 loops=1)\n> -> Seq Scan on\n> release_code rc (cost=0.00..1.10 rows=1 width=4) (actual\n> time=0.000..0.000 rows=1 loops=1)\n> Filter:\n> ((number)::text = '9'::text)\n> -> Hash (cost=14.51..14.51 rows=2 width=35)\n> (actual time=0.000..0.000 rows=0 loops=1)\n> -> Nested Loop (cost=4.92..14.51 rows=2\n> width=35) (actual time=0.000..0.000 rows=2 loops=1)\n> -> Index Scan using person_pkey on\n> person p (cost=0.00..5.73 rows=1 width=4) 
(actual time=0.000..0.000\n> rows=1 loops=1)\n> Index Cond: (id = 355)\n> -> Hash Join (cost=4.92..8.75 rows=2\n> width=39) (actual time=0.000..0.000 rows=2 loops=1)\n> Hash Cond: (\"outer\".id =\n> \"inner\".carrier_id)\n> -> Seq Scan on carrier c\n> (cost=0.00..3.54 rows=54 width=4) (actual time=0.000..0.000 rows=54\n> loops=1)\n> -> Hash (cost=4.92..4.92 rows=2\n> width=43) (actual time=0.000..0.000 rows=0 loops=1)\n> -> Hash Join\n> (cost=3.04..4.92 rows=2 width=43) (actual time=0.000..0.000 rows=2\n> loops=1)\n> Hash Cond:\n> (\"outer\".carrier_id = \"inner\".carrier_id)\n> -> Seq Scan on\n> carrier_code cc (cost=0.00..1.57 rows=57 width=35) (actual\n> time=0.000..0.000 rows=57 loops=1)\n> -> Hash\n> (cost=3.04..3.04 rows=1 width=8) (actual time=0.000..0.000 rows=0\n> loops=1)\n> -> Index Scan\n> using carrier_to_person_person_id_idx on carrier_to_person ctp\n> (cost=0.00..3.04 rows=1 width=8) (actual time=0.000..0.000 rows=1\n> loops=1)\n> Index\n> Cond: (355 = person_id)\n> Total runtime: 297.000 ms\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly", "msg_date": "Thu, 03 Mar 2005 18:22:14 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Josh,\n\nThanks so much for your comments. They are incredibly insightful and you \nclearly know your stuff. It's so great that I'm able to learn so much from \nyou. I really appreciate it.\n\n>Do you need the interior sort? It's taking ~93ms to get 7k rows from\n>shipment_status, and then another 30ms to sort them. This isn't a lot,\n>so it might be fine.\n\nRunning the query without the sort doesn't actually improve performance \nunfortunately, which I find strange. I think the analyze is giving bad \nfeedback because taking all sorts out completely makes no difference in \nperformance. Dan Tow's book actually said the same thing... how sorting \nrarely takes up the bulk of the work. Although I know you didn't like his \nbook much, but I had observed that in my experience too.\n\n>Also, did you ever try CLUSTER current_status_date_idx ON shipment_status.\n>This groups the rows in shipment_status by their status date, which\n>helps put items with the same date next to eachother. This may effect\n>other portions of the query, or other queries. Also, if you are\n>inserting sequentially, it would seem that the items would already be\n>naturally near eachother based on date.\n\nYes, this was one of the first things I tried actually and it is currently \nclustered. Since shipment status comes into our system at real time, the \ndates are more or less in order as well.\n\n>The next big cost is having to merge the 28k rows with the fast hash\n>plan, which takes about 80ms.\n>\n>I guess the biggest issue is that you are doing a lot of work, and it\n>takes time to do it. Also, I've noticed that this query is being run\n>with exactly the same data. Which is good to compare two methods. But\n>remember to test on multiple potential values. You might be better off\n>one way with this query, but much worse for a different dataset. I\n>noticed that this seems to have fewer rows than what postgres thinks the\n>*average* number would be. 
(It predicts 60k and you only get 28k rows).\n\nWell, the example where p.id = 355 is an above normal case where performance \nis typically bad. If a user's company has very few shipments and \nshipment_status rows, performance isn't going to matter much and those \nqueries usually perform much faster. I really needed to tune this for the \nlarger customers who do have thousands of rows for their entire company and \nwill probably reach 6 digits by the end of next year. For the person 355, \nthey've only been on the system for 3 months and they already have 27700 \nrows. Even if this makes the smaller customers a bit slower, I think it's \nworth it if I can speed up cases like this, who all have very similar data \ndistribution.\n\n>If this query is performed a lot, and you can be okay with a slight\n>delay in updating, you could always switch to some sort of lazy\n>materialized view.\n\nI thought about this, but it's very important since shipment and \nshipment_status are both updated in real time 24/7/365. I think I might be \nable to cache it within the application for 60 seconds at most, but it would \nmake little difference since people tend to refresh within that time anyway. \nIt's very important that real-time inforamtion exists though.\n\n>You could also always throw more hardware at it. :) If the\n>shipment_status is one of the bottlenecks, create a 4-disk raid10 and\n>move the table over.\n>I don't remember what your hardware is, but I don't remember it being a\n>quad opteron with 16GB ram, and 20 15k SCSI disks, with the transaction\n>log on a solid state disk. :)\n\nThat sounds like an awesome system. I loved to have something like that. \nUnfortunately, the production server is just a single processor machine with \n1 GB ram. I think throwing more disks at it is probably the best bet, \nmoving the shipment and shipment_status tables over as you suggested. \nThat's great advice.\n\n>Why do you need the query to be 30ms? ~250ms is still pretty fast. If\n>you are needing updates faster than that, you might look more into *why*\n>and then handle it from a higher level.\n\n30ms is a good target, although I guess I was naive for setting that goal \nperhaps. I've just taken queries that ran at 600ms and with 1 or 2 indexes, \nthey went down to 15ms.\n\nLet's say we have 200 users signed into the application at the same time. \nThe application refreshes their shipment information automatically to make \nsure it's up to date on the user's screen. The application will execute the \nquery we are trying to tune every 60 seconds for most of these users. Users \ncan set the refresh time to be higher, but 60 is the lowest amount so I'm \njust assuming everyone has it at 60.\n\nAnyway, if you have 200 users logged in, that's 200 queries in the 60 second \nperiod, which is about 3-4 queries every second. As you can see, it's \ngetting maxed out, and because of bad luck, the queries are bunched together \nand are being called at the same time, making 8-9 queries in the same second \nand that's where the performance is starting to degrade. I just know that \nif I could get this down to 30 ms, or even 100, we'd be okay for a few \nmonths without throwing hardware at the problem. Also keep in mind that \nother application logic and Hibernate mapping is occuring to, so 3-4 queries \na second is already no good when everything is running on a single machine.\n\nThis isn't the best setup, but it's the best we can afford. We are just a \nnew startup company. 
Cheaper servers and open source keep our costs low. \nBut money is starting to come in after 10 months of hard work, so we'll be \nable to replace our server within the next 2 months. It'll be a neccessity \nbecause we are signing on some big clients now and they'll have 40 or 50 \nusers for a single company. If they are all logged in at the same time, \nthat's a lot of queries.\n\n>And naturally, the most important this is to test it under load. 250ms\n>is pretty good, but if under load it goes back to 6s, then we probably\n>should look for different alternatives. Also, what is the load that is\n>causing the problem? Is it that you have some other big seqscans which\n>are causing all of your tables to go out of cache?\n\nNo, this query and another very close to it are probably the most executed \nin the system. In fact, even checking the page stats on the web server \ntells us that the pages that use these queries are 80% of the pages viewed \nin our application. If I can fix this problem, I've fixed our performance \nproblems period. The statistics queries are very slow too, but I don't care \nabout that since nobody goes to them much (maybe once a month. People don't \nmind waiting for that sort of information anyway).\n\nI'm very interested in those other alternatives since I may have to \nexperiment with them. I'm under the impression that this query is actually \nperforming quite well for what I'm throwing at it and the work that it's \ndoing.\n\n>Also, I believe I remember you saying that your production server is a\n>P4, is that a single P4? Because I know postgres prefers Opterons to\n>Pentium Xeons when in a multiprocessor machine. Look through the\n>archives about spinlocks and the context switch bug. (context storm,\n>etc). Plus, since opterons are 64-bit, you can throw a lot more RAM at\n>them. I believe opterons outperform xeons for the same cost, *and* you\n>can scale them up with extra ram.\n\nYeah, we have nothing of that sort. It's really just a P4 3.0 Ghz \nprocessor. Like I mentioned before, we just put computers together from \nwhat we had and built our application on them. Our business is new, we \ndon't have a lot of money and we're just starting to actually have a good \nclient base. It's finally growing after all of this time but we are still \nusing the servers we started with.\n\n>But remember, the biggest bottleneck is almost *always* the I/O. So put\n>more & faster disks into the system first.\n\nI will price that raid setup you recommended. That will probably be the \nfirst adjustment to our server if we don't just replace the entire thing.\n\nThanks again,\nKen \n\n", "msg_date": "Fri, 4 Mar 2005 00:22:12 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken Egervari wrote:\n> Let's say we have 200 users signed into the application at the same \n> time. The application refreshes their shipment information automatically \n> to make sure it's up to date on the user's screen. The application will \n> execute the query we are trying to tune every 60 seconds for most of \n> these users. Users can set the refresh time to be higher, but 60 is the \n> lowest amount so I'm just assuming everyone has it at 60.\n> \n> Anyway, if you have 200 users logged in, that's 200 queries in the 60 \n> second period, which is about 3-4 queries every second. \n\nCan you turn the problem around? 
Calculate what you want for all users \n(once every 60 seconds) and stuff those results into a summary table. \nThen let the users query the summary table as often as they like (with \nthe understanding that the figures aren't going to update any faster \nthan once a minute)\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 04 Mar 2005 15:56:25 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Richard,\n\nWhat do you mean by summary table? Basically a cache of the query into a \ntable with replicated column names of all the joins? I'd probably have to \nwhipe out the table every minute and re-insert the data for each carrier in \nthe system. I'm not sure how expensive this operation would be, but I'm \nguessing it would be fairly heavy-weight. And maintaince would be a lot \nharder because of the duplicated columns, making refactorings on the \ndatabase more error-prone. Am I understanding your suggestion correctly? \nPlease correct me if I am.\n\n> Can you turn the problem around? Calculate what you want for all users \n> (once every 60 seconds) and stuff those results into a summary table. Then \n> let the users query the summary table as often as they like (with the \n> understanding that the figures aren't going to update any faster than once \n> a minute)\n\n", "msg_date": "Fri, 4 Mar 2005 11:36:26 -0500", "msg_from": "\"Ken\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken wrote:\n\n> Richard,\n>\n> What do you mean by summary table? Basically a cache of the query\n> into a table with replicated column names of all the joins? I'd\n> probably have to whipe out the table every minute and re-insert the\n> data for each carrier in the system. I'm not sure how expensive this\n> operation would be, but I'm guessing it would be fairly heavy-weight.\n> And maintaince would be a lot harder because of the duplicated\n> columns, making refactorings on the database more error-prone. Am I\n> understanding your suggestion correctly? Please correct me if I am.\n>\n>> Can you turn the problem around? Calculate what you want for all\n>> users (once every 60 seconds) and stuff those results into a summary\n>> table. 
Then let the users query the summary table as often as they\n>> like (with the understanding that the figures aren't going to update\n>> any faster than once a minute)\n>\nIt's the same idea of a materialized view, or possibly just a lazy cache.\n\nJust try this query:\n\nCREATE TABLE cachedview AS\nselect p.id as person_id, s.*, ss.*\n\nfrom shipment s\ninner join shipment_status ss on s.current_status_id=ss.id\ninner join release_code rc on ss.release_code_id=rc.id\nleft outer join driver d on s.driver_id=d.id\nleft outer join carrier_code cc on s.carrier_code_id=cc.id\nwhere s.carrier_code_id in (\n select cc.id\n from person p\n inner join carrier_to_person ctp on p.id=ctp.person_id\n inner join carrier c on ctp.carrier_id=c.id\n inner join carrier_code cc on cc.carrier_id = c.id\n)\nand s.current_status_id is not null\nand s.is_purged=false\nand(rc.number='9' )\nand(ss.date>=current_date-31 )\n\norder by ss.date desc ;\n\nNotice that I took out the internal p.id = blah.\nThen you can do:\n\nCREATE INDEX cachedview_person_id_idx ON cachedview(person_id);\n\nThen from the client side, you can just run:\nSELECT * from cachedview WHERE person_id = <id>;\n\nNow, this assumes that rc.number='9' is what you always want. If that\nisn't the case, you could refactor a little bit.\n\nThis unrolls all of the work, a table which should be really fast to\nquery. If this query takes less than 10s to generate, than just have a\nservice run it every 60s. I think for refreshing, it is actually faster\nto drop the table and recreate it, rather than deleteing the entries.\nDropping also has the advantage that if you ever add more rows to s or\nss, then the table automatically gets the new entries.\n\nAnother possibility, is to have the \"cachedview\" not use \"s.*, ss.*\",\nbut instead just include whatever the primary keys are for those tables.\nThen your final query becomes:\n\nSELECT s.*, ss.* FROM cachedview cv, s, ss WHERE cv.person_id = <id>,\ncv.s_id = s.<pkey>, cv.ss_id = ss.<pkey>;\n\nAgain, this should be really fast, because you should have an index on\ncv.person_id and only have say 300 rows there, and then you are just\nfetching a few rows from s and ss. You can also use this time to do some\nof your left joins against other tables.\n\nDoes this make sense? The biggest advantage you have is your \"60s\"\nstatement. With that in hand, I think you can do a lot of caching\noptimizations.\n\nJohn\n=:->", "msg_date": "Fri, 04 Mar 2005 10:56:39 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken Egervari wrote:\n\n> Josh,\n>\n...\n\n> I thought about this, but it's very important since shipment and\n> shipment_status are both updated in real time 24/7/365. I think I\n> might be able to cache it within the application for 60 seconds at\n> most, but it would make little difference since people tend to refresh\n> within that time anyway. It's very important that real-time\n> inforamtion exists though.\n>\nIs 60s real-time enough for you? That's what it sounds like. It would be\nnice if you could have 1hr, but there's still a lot of extra work you\ncan do in 60s.\n\n>> You could also always throw more hardware at it. 
:) If the\n>> shipment_status is one of the bottlenecks, create a 4-disk raid10 and\n>> move the table over.\n>> I don't remember what your hardware is, but I don't remember it being a\n>> quad opteron with 16GB ram, and 20 15k SCSI disks, with the transaction\n>> log on a solid state disk. :)\n>\n>\n> That sounds like an awesome system. I loved to have something like\n> that. Unfortunately, the production server is just a single processor\n> machine with 1 GB ram. I think throwing more disks at it is probably\n> the best bet, moving the shipment and shipment_status tables over as\n> you suggested. That's great advice.\n>\nWell, disk I/O is one side, but probably sticking another 1GB (2GB\ntotal) also would be a fairly economical upgrade for performance.\n\nYou are looking for query performance, not really update performance,\nright? So buy a 4-port SATA controller, and some WD Raptor 10k SATA\ndisks. With this you can create a RAID10 for < $2k (probably like $1k).\n\n> 30ms is a good target, although I guess I was naive for setting that\n> goal perhaps. I've just taken queries that ran at 600ms and with 1 or\n> 2 indexes, they went down to 15ms.\n\nIt all depends on your query. If you have a giant table (1M rows), and\nyou are doing a seqscan for only 5 rows, then adding an index will give\nyou enormous productivity gains. But you are getting 30k rows, and\ncombining them with 6k rows, plus a bunch of other stuff. I think we've\ntuned the query about as far as we can.\n\n>\n> Let's say we have 200 users signed into the application at the same\n> time. The application refreshes their shipment information\n> automatically to make sure it's up to date on the user's screen. The\n> application will execute the query we are trying to tune every 60\n> seconds for most of these users. Users can set the refresh time to be\n> higher, but 60 is the lowest amount so I'm just assuming everyone has\n> it at 60.\n>\n> Anyway, if you have 200 users logged in, that's 200 queries in the 60\n> second period, which is about 3-4 queries every second. As you can\n> see, it's getting maxed out, and because of bad luck, the queries are\n> bunched together and are being called at the same time, making 8-9\n> queries in the same second and that's where the performance is\n> starting to degrade. I just know that if I could get this down to 30\n> ms, or even 100, we'd be okay for a few months without throwing\n> hardware at the problem. Also keep in mind that other application\n> logic and Hibernate mapping is occuring to, so 3-4 queries a second is\n> already no good when everything is running on a single machine.\n>\nThe other query I just sent, where you do the query for all users at\nonce, and then cache the result, *might* be cheaper than doing a bunch\nof different queries.\nHowever, you may find that doing the query for *all* users takes to\nlong. So you could keep another table indicating who the most recent\npeople logged in are, and then only cache the info for those people.\nThis does start getting a little more involved, so see if you can do all\nusers before heading down this road.\n\n> This isn't the best setup, but it's the best we can afford. We are\n> just a new startup company. Cheaper servers and open source keep our\n> costs low. But money is starting to come in after 10 months of hard\n> work, so we'll be able to replace our server within the next 2\n> months. It'll be a neccessity because we are signing on some big\n> clientsnow and they'll have 40 or 50 users for a single company. 
If\n> they are all logged in at the same time, that's a lot of queries.\n>\nSure. Just realize you can't really support 200 concurrent connections\nwith a single P4 and 1GB of ram.\n\nJohn\n=:->", "msg_date": "Fri, 04 Mar 2005 11:07:35 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "Ken,\n\n> I did everything you said and my query does perform a bit better. I've\n> been getting speeds from 203 to 219 to 234 milliseconds now. I tried\n> increasing the work mem and the effective cache size from the values you\n> provided, but I didn't see any more improvement. I've tried to looking\n> into setting the shared buffers for Windows XP, but I'm not sure how to do\n> it. I'm looking in the manual at:\n\nNow that you know how to change the shared_buffers, want to go ahead and run \nthe query again?\n\nI'm pretty concerned about your case, because based on your description I \nwould expect < 100ms on a Linux machine. So I'm wondering if this is a \nproblem with WindowsXP performance, or if it's something we can fix through \ntuning.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 4 Mar 2005 10:29:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" }, { "msg_contents": "John Arbash Meinel wrote:\n\n> Ken wrote:\n>\n>> Richard,\n>>\n>> What do you mean by summary table? Basically a cache of the query\n>> into a table with replicated column names of all the joins? I'd\n>> probably have to whipe out the table every minute and re-insert the\n>> data for each carrier in the system. I'm not sure how expensive this\n>> operation would be, but I'm guessing it would be fairly heavy-weight.\n>> And maintaince would be a lot harder because of the duplicated\n>> columns, making refactorings on the database more error-prone. Am I\n>> understanding your suggestion correctly? Please correct me if I am.\n>>\n>>> Can you turn the problem around? Calculate what you want for all\n>>> users (once every 60 seconds) and stuff those results into a summary\n>>> table. Then let the users query the summary table as often as they\n>>> like (with the understanding that the figures aren't going to update\n>>> any faster than once a minute)\n>>\n>>\n> It's the same idea of a materialized view, or possibly just a lazy cache.\n>\n...\n\n> This unrolls all of the work, a table which should be really fast to\n> query. If this query takes less than 10s to generate, than just have a\n> service run it every 60s. I think for refreshing, it is actually faster\n> to drop the table and recreate it, rather than deleteing the entries.\n> Dropping also has the advantage that if you ever add more rows to s or\n> ss, then the table automatically gets the new entries.\n>\nJust as a small update. If completely regenerating the cache takes to \nlong, the other way to do it, is to create insert and update triggers on \ns and ss, such that as they change, they also update the cachedview table.\n\nSomething like\n\nCREATE TRIGGER on_ss_ins AFTER INSERT ON ss FOR EACH ROW EXECUTE\n INSERT INTO cached_view SELECT p.id as person_id, s.*, ss.* FROM \n<the big stuff> WHERE s.id = NEW.id;\n\nThis runs the same query, but notice that the WHERE means it only allows \nthe new row. So this query should run fast. 
It is a little bit of \noverhead on each of your inserts, but it should keep the cache \nup-to-date. With something like this, I would have the final client \nquery still include the date restriction, since you accumulate older \nrows into the cached view. But you can run a daily process that prunes \nout everything older than 31 days, which keeps the cachedview from \ngetting really large.\n\nJohn\n=:->", "msg_date": "Fri, 04 Mar 2005 13:56:12 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning this query (with explain analyze finally)" } ]
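A note on the trigger idea John sketches at the end of the thread above: PostgreSQL will not accept a bare INSERT as a trigger action, so the cache maintenance has to go through a trigger function. The sketch below is only illustrative; the trimmed-down cached_view layout is hypothetical (a real one would carry whatever s.* and ss.* columns the screen needs), while the joined table and column names are taken from the queries quoted in the thread.

CREATE TABLE cached_view (
    person_id    integer NOT NULL,
    shipment_id  integer NOT NULL,
    status_id    integer NOT NULL,
    status_date  date    NOT NULL
);
CREATE INDEX cached_view_person_id_idx ON cached_view (person_id);

-- Trigger function: when a status row with release code 9 arrives, add one
-- cache row per person attached to the shipment's carrier.
CREATE OR REPLACE FUNCTION cached_view_add() RETURNS trigger AS $$
BEGIN
    INSERT INTO cached_view (person_id, shipment_id, status_id, status_date)
    SELECT p.id, s.id, NEW.id, NEW.date
    FROM shipment s
        JOIN carrier_code cc       ON s.carrier_code_id = cc.id
        JOIN carrier c             ON cc.carrier_id = c.id
        JOIN carrier_to_person ctp ON ctp.carrier_id = c.id
        JOIN person p              ON p.id = ctp.person_id
        JOIN release_code rc       ON rc.id = NEW.release_code_id
    WHERE s.id = NEW.shipment_id
      AND s.is_purged = false
      AND rc.number = '9';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER shipment_status_cache_ins
    AFTER INSERT ON shipment_status
    FOR EACH ROW EXECUTE PROCEDURE cached_view_add();

-- A periodic prune keeps the cache close to the 31-day window the screen uses.
DELETE FROM cached_view WHERE status_date < current_date - 31;

With something along these lines, the client query becomes an indexed lookup on cached_view.person_id plus the date filter, at the cost of a little extra work on each shipment_status insert.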
[ { "msg_contents": "Hi All,\n\nI am wondering about the relative performance of \"insert into table1 select distinct a,b from ...\" and \"insert into table1 select a,b from ... group by a,b\" when querying tables of different sizes (10K, 100K, 1s, 10s, 100s of millions of rows). \n\nThe distinct way tends to sort/unique and the group by tends to hash aggregate... any opinions on which is better?\n\nI can also change the schema to a certain extent, so would it be worthwhile to put indices on the queried tables (or refactor them) hoping the distinct does an index scan instead of sort... would the query planner take advantage of that?\n\nThanks,\n\nShawn\n\n", "msg_date": "Wed, 2 Mar 2005 12:52:10 -0500", "msg_from": "\"Shawn Chisholm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance tradeoff" }, { "msg_contents": "Shawn,\n\n> I can also change the schema to a certain extent, so would it be worthwhile\n> to put indices on the queried tables (or refactor them) hoping the distinct\n> does an index scan instead of sort... would the query planner take\n> advantage of that?\n\nUse the GROUP BY, with an index on the grouped columns and lots of work_mem \n(sort_mem in 7.4). This will give the planner the option of a hashaggregate \nwhich could be significantly faster than the other methods.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Mar 2005 21:31:01 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tradeoff" } ]
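For anyone who wants to test the advice above, the two formulations are easy to compare side by side. The table and column names here (staging, a, b) are placeholders rather than the poster's actual schema, and the memory setting is called sort_mem on 7.4 and work_mem on 8.0.

SET work_mem = 65536;   -- 64 MB for sorts/hashes in this session (sort_mem on 7.4)

EXPLAIN ANALYZE SELECT DISTINCT a, b FROM staging;
EXPLAIN ANALYZE SELECT a, b FROM staging GROUP BY a, b;

-- An index on the grouped columns gives the planner a pre-sorted input path
-- as an alternative to an explicit sort, which may matter at the larger sizes.
CREATE INDEX staging_a_b_idx ON staging (a, b);
ANALYZE staging;

If the GROUP BY plan shows a HashAggregate and the distinct set fits in the configured memory, that is the case described above as potentially much faster; otherwise the two forms tend to fall back to very similar sort-based plans.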
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, March 02, 2005 4:30 PM\n> To: Ken Egervari\n> Cc: [email protected]; \n> [email protected]\n> Subject: Re: [PERFORM] Help with tuning this query (with \n> explain analyze\n> finally)\n> \n> [...]\n> Well, what it suggests is that gettimeofday() is only \n> returning a result good to the nearest millisecond. (Win32\n> hackers, does that sound right?)\n\nNo. There's no such thing as gettimeofday() in Win32. So it\nmust be making some other call, or perhaps an emulation.\n\n> [...]\n> Most modern machines seem to have clocks that can count elapsed\n> time down to near the microsecond level. Anyone know if it's \n> possible to get such numbers out of Windows, or are we stuck with\n> milliseconds?\n\nQueryPerformanceCounter() is your friend.\n\nhttp://lists.boost.org/MailArchives/boost/msg45626.php\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Wed, 2 Mar 2005 17:01:14 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" } ]
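Tying the timer point back to the tuning thread: until the Win32 build can report sub-millisecond numbers, one crude SQL-side workaround is to run the statement many times and divide the total elapsed wall-clock time by the run count, so the coarse per-call resolution mostly cancels out. The sketch below (8.0 syntax) is purely illustrative; the PERFORM line is a stand-in for whatever statement is being measured, not anything taken from the posts above.

-- timeofday() reads the wall clock on every call (unlike now(), which is
-- frozen at transaction start), so it can bracket a loop inside one function.
CREATE OR REPLACE FUNCTION time_probe(n integer) RETURNS interval AS $$
DECLARE
    t0 timestamptz;
    t1 timestamptz;
    i  integer;
BEGIN
    t0 := timeofday()::timestamptz;
    FOR i IN 1 .. n LOOP
        PERFORM count(*) FROM pg_class;   -- replace with the query under test
    END LOOP;
    t1 := timeofday()::timestamptz;
    RETURN (t1 - t0) / n;                 -- average time per execution
END;
$$ LANGUAGE plpgsql;

SELECT time_probe(1000);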
[ { "msg_contents": "I have about 5M names stored on my DB. Currently the searches are very\nquick unless, they are on a very common last name ie. SMITH. The Index\nis always used, but I still hit 10-20 seconds on a SMITH or Jones\nsearch, and I average about 6 searches a second and max out at about\n30/s. Any suggestions on how I could arrange things to make this search\nquicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\ncan increase this speed w/o a HW upgrade.\n\nthanx,\n-jj-\n\n\n\n-- \nYou probably wouldn't worry about what people think of you if you could\nknow how seldom they do.\n -- Olin Miller.\n\n", "msg_date": "Thu, 03 Mar 2005 10:38:29 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "name search query speed" }, { "msg_contents": "I'm not sure what the answer is but maybe I can help? Would clustering the \nname index make this faster? I thought that would bunch up the pages so the \nnames were more or less in order, which would improve search time. Just a \nguess though.\n\nKen\n\n----- Original Message ----- \nFrom: \"Jeremiah Jahn\" <[email protected]>\nTo: \"postgres performance\" <[email protected]>\nSent: Thursday, March 03, 2005 11:38 AM\nSubject: [PERFORM] name search query speed\n\n\n>I have about 5M names stored on my DB. Currently the searches are very\n> quick unless, they are on a very common last name ie. SMITH. The Index\n> is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> search, and I average about 6 searches a second and max out at about\n> 30/s. Any suggestions on how I could arrange things to make this search\n> quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> can increase this speed w/o a HW upgrade.\n>\n> thanx,\n> -jj-\n>\n>\n>\n> -- \n> You probably wouldn't worry about what people think of you if you could\n> know how seldom they do.\n> -- Olin Miller.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n\n", "msg_date": "Thu, 3 Mar 2005 12:00:01 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "yes, it does. I forgot to mention, that I also have clustering on that\ntable by my name_field index. My Bad.\n\nOn Thu, 2005-03-03 at 12:00 -0500, Ken Egervari wrote:\n> I'm not sure what the answer is but maybe I can help? Would clustering the \n> name index make this faster? I thought that would bunch up the pages so the \n> names were more or less in order, which would improve search time. Just a \n> guess though.\n> \n> Ken\n> \n> ----- Original Message ----- \n> From: \"Jeremiah Jahn\" <[email protected]>\n> To: \"postgres performance\" <[email protected]>\n> Sent: Thursday, March 03, 2005 11:38 AM\n> Subject: [PERFORM] name search query speed\n> \n> \n> >I have about 5M names stored on my DB. Currently the searches are very\n> > quick unless, they are on a very common last name ie. SMITH. The Index\n> > is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> > search, and I average about 6 searches a second and max out at about\n> > 30/s. Any suggestions on how I could arrange things to make this search\n> > quicker? I have 4gb of mem on a raid 5 w/ 3 drives. 
I'm hoping that I\n> > can increase this speed w/o a HW upgrade.\n> >\n> > thanx,\n> > -jj-\n> >\n> >\n> >\n> > -- \n> > You probably wouldn't worry about what people think of you if you could\n> > know how seldom they do.\n> > -- Olin Miller.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \nYou probably wouldn't worry about what people think of you if you could\nknow how seldom they do.\n -- Olin Miller.\n\n", "msg_date": "Thu, 03 Mar 2005 11:23:37 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Jeremiah,\n\n> I have about 5M names stored on my DB. Currently the searches are very\n> quick unless, they are on a very common last name ie. SMITH. The Index\n> is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> search, and I average about 6 searches a second and max out at about\n> 30/s. Any suggestions on how I could arrange things to make this search\n> quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> can increase this speed w/o a HW upgrade.\n\nFirst off, see http://www.powerpostgresql.com/PerfList about your \nconfiguration settings.\n\nThe problem you're running into with SMITH is that, if your query is going to \nreturn a substantial number of rows (variable, but generally anything over 5% \nof the table and 1000 rows) is not able to make effective use of an index. \nThis makes it fall back on a sequential scan, and based on you execution \ntime, I'd guess that the table is a bit too large to fit in memory.\n\nAFTER you've made the configuration changes above, AND run VACUUM ANALYZE on \nyour database, if you're still having problems post an EXPLAIN ANALYZE of the \nquery to this list.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 3 Mar 2005 09:44:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Jeremiah Jahn wrote:\n\n>I have about 5M names stored on my DB. Currently the searches are very\n>quick unless, they are on a very common last name ie. SMITH. The Index\n>is always used, but I still hit 10-20 seconds on a SMITH or Jones\n>search, and I average about 6 searches a second and max out at about\n>30/s. Any suggestions on how I could arrange things to make this search\n>quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n>can increase this speed w/o a HW upgrade.\n>\n>thanx,\n>-jj-\n>\n>\n>\n> \n>\nIt sounds like the problem is just that you have a lot of rows that need \nto be returned. Can you just put a limit on the query? And then change \nthe client app to recognize when the limit is reached, and either give a \nlink to more results, or refine query, or something like that.\n\nJohn\n=:->", "msg_date": "Thu, 03 Mar 2005 11:46:02 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Hi, Jeremiah,\n\nJeremiah Jahn schrieb:\n> yes, it does. I forgot to mention, that I also have clustering on that\n> table by my name_field index. My Bad.\n\nFine. 
Did you run ANALYZE and CLUSTER on the table after every large\nbunch of insertions / updates?\n\nMarkus\n\n\n-- \nMarkus Schaber | Dipl. Informatiker | Software Development GIS\n\nFight against software patents in EU! http://ffii.org/\n http://nosoftwarepatents.org/\n", "msg_date": "Thu, 03 Mar 2005 19:36:45 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "On Thu, 2005-03-03 at 11:46 -0600, John A Meinel wrote:\n> Jeremiah Jahn wrote:\n> \n> >I have about 5M names stored on my DB. Currently the searches are very\n> >quick unless, they are on a very common last name ie. SMITH. The Index\n> >is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> >search, and I average about 6 searches a second and max out at about\n> >30/s. Any suggestions on how I could arrange things to make this search\n> >quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> >can increase this speed w/o a HW upgrade.\n> >\n> >thanx,\n> >-jj-\n> >\n> >\n> >\n> > \n> >\n> It sounds like the problem is just that you have a lot of rows that need \n> to be returned. Can you just put a limit on the query? And then change \n> the client app to recognize when the limit is reached, and either give a \n> link to more results, or refine query, or something like that.\nNot really, about 2% of the returned rows are thrown away for security\nreasons based on the current user, security groups they belong to and\ndifferent flags in the data itself. So the count for this is generated\non the fly needed for pagination in the app which expresses the total\nnumber of finds, but only displays 40 of them. If any one knows a way to\ndetermine the total number of matches without needing to iterate through\nthem using jdbc, I'm all ears as this would save me huge amounts of time\nand limit/offset would become an option. \n\n> \n> John\n> =:->\n> \n-- \n\"A power so great, it can only be used for Good or Evil!\"\n -- Firesign Theatre, \"The Giant Rat of Summatra\"\n\n", "msg_date": "Thu, 03 Mar 2005 14:14:47 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: name search query speed" }, { "msg_contents": "On Thu, 2005-03-03 at 09:44 -0800, Josh Berkus wrote:\n> Jeremiah,\n> \n> > I have about 5M names stored on my DB. Currently the searches are very\n> > quick unless, they are on a very common last name ie. SMITH. The Index\n> > is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> > search, and I average about 6 searches a second and max out at about\n> > 30/s. Any suggestions on how I could arrange things to make this search\n> > quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> > can increase this speed w/o a HW upgrade.\n> \n> First off, see http://www.powerpostgresql.com/PerfList about your \n> configuration settings.\n> \n> The problem you're running into with SMITH is that, if your query is going to \n> return a substantial number of rows (variable, but generally anything over 5% \n> of the table and 1000 rows) is not able to make effective use of an index. \n> This makes it fall back on a sequential scan, and based on you execution \n> time, I'd guess that the table is a bit too large to fit in memory.\n> \n> AFTER you've made the configuration changes above, AND run VACUUM ANALYZE on \n> your database, if you're still having problems post an EXPLAIN ANALYZE of the \n> query to this list.\n> \n\nie. throw more hardware at it. 
All of the other things on the list,\nexcept for effective_cache_size have always been done. I bumped it up\nfrom the default to 2600000. Will see if that makes a difference.\n\nthanx,\n-jj-\n\n\n-- \n\"A power so great, it can only be used for Good or Evil!\"\n -- Firesign Theatre, \"The Giant Rat of Summatra\"\n\n", "msg_date": "Thu, 03 Mar 2005 14:19:17 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Jeremiah Jahn wrote:\n\n>On Thu, 2005-03-03 at 11:46 -0600, John A Meinel wrote:\n>\n>\n...\n\n>Not really, about 2% of the returned rows are thrown away for security\n>reasons based on the current user, security groups they belong to and\n>different flags in the data itself. So the count for this is generated\n>on the fly needed for pagination in the app which expresses the total\n>number of finds, but only displays 40 of them. If any one knows a way to\n>determine the total number of matches without needing to iterate through\n>them using jdbc, I'm all ears as this would save me huge amounts of time\n>and limit/offset would become an option.\n>\n>\n>\nWell, what is wrong with \"select count(*) from <the query I would have\ndone>\"?\nAre you saying 2% are thrown away, or only 2% are kept?\nIs this being done at the client side? Is there a way to incorporate the\nsecurity info into the database, so that the query actually only returns\nthe rows you care about? That seems like it would be a decent way to\nspeed it up, if you can restrict the number of rows that it needs to\nlook at.\n\nThere are other alternatives, such as materialized views, or temp\ntables, where you select into the temp table the rows that the user\nwould request, and then you generate limit/offset from that. The first\nquery would be a little slow, since it would get all the rows, but all\nsubsequent accesses for that user could be really fast.\n\nThe other possibility is to do \"limit 200\", and then in your list of\npages, you could have:\n1, 2, 3, 4, 5, ...\nThis means that you don't have to worry about getting 10,000 entries,\nwhich probably isn't really useful for the user anyway, and you can\nstill break things into 40 entry pages, just 200 entries at a time.\nJohn\n=:->\n\n>>John\n>>=:->\n>>\n>>\n>>", "msg_date": "Thu, 03 Mar 2005 15:37:21 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Jeremiah Jahn wrote:\n> I have about 5M names stored on my DB. Currently the searches are very\n> quick unless, they are on a very common last name ie. SMITH. The Index\n> is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> search, and I average about 6 searches a second and max out at about\n> 30/s. Any suggestions on how I could arrange things to make this search\n> quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> can increase this speed w/o a HW upgrade.\n\nIf it's just \"SMITH\", the only fix is to throw more hardware at the \nproblem. I've got my own database of medical providers & facilities in \nthe millions and anytime somebody tries to search for MEDICAL FACILITY, \nit takes forever. I've tried every optimization possible but when you \nhave 500K records with the word \"MEDICAL\" in it, what can you do? 
You've \ngot to check all 500K records to see if it matches your criteria.\n\nFor multi-word searches, what I've found does work is to periodically \ngenerate stats on work frequencies and use those stats to search the \nleast common words first. For example, if somebody enters \"ALTABATES \nMEDICAL HOSPITAL\", I can get the ~50 providers with ALTABATES in the \nname and then do a 2nd and 3rd pass to filter against MEDICAL and HOSPITAL.\n", "msg_date": "Thu, 03 Mar 2005 18:55:55 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Jeremiah Jahn wrote:\n> I have about 5M names stored on my DB. Currently the searches are very\n> quick unless, they are on a very common last name ie. SMITH. The Index\n> is always used, but I still hit 10-20 seconds on a SMITH or Jones\n> search, and I average about 6 searches a second and max out at about\n> 30/s. Any suggestions on how I could arrange things to make this search\n> quicker? I have 4gb of mem on a raid 5 w/ 3 drives. I'm hoping that I\n> can increase this speed w/o a HW upgrade.\n> \n> thanx,\n> -jj-\n> \n\nis there a chance you could benefit from indices spanning over multiple columns?\nmaybe the user that searches for SMITH knows more then the last name, ie first \nname, location (zip code, name of city, etc.)?\n", "msg_date": "Fri, 04 Mar 2005 16:46:28 +0100", "msg_from": "stig erikson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Jeremiah Jahn [mailto:[email protected]]\n> Sent: Thursday, March 03, 2005 2:15 PM\n> To: John A Meinel\n> Cc: postgres performance\n> Subject: Re: [PERFORM] name search query speed\n> \n> [...]\n> So the count for this is generated on the fly needed for\n> pagination in the app which expresses the total number of\n> finds, but only displays 40 of them. If any one knows a way\n> to determine the total number of matches without needing to \n> iterate through them using jdbc, I'm all ears as this would\n> save me huge amounts of time and limit/offset would become\n> an option. \n\nIs there a reason you can't do a count(field) query first? If\nso, you can get the number of records returned by setting\nabsolute(-1) and getting the row number.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Thu, 3 Mar 2005 14:26:27 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: name search query speed" }, { "msg_contents": "doesn't that cause two queries? I used to do it that way and cut my time\nsubstantially by counting in-line. Even though the results were cached\nit still took more time. Also since the tables is constantly be updated\nthe returned total would not always match the number of results on the\nsecond query.\n\nOn Thu, 2005-03-03 at 14:26 -0600, Dave Held wrote:\n> > -----Original Message-----\n> > From: Jeremiah Jahn [mailto:[email protected]]\n> > Sent: Thursday, March 03, 2005 2:15 PM\n> > To: John A Meinel\n> > Cc: postgres performance\n> > Subject: Re: [PERFORM] name search query speed\n> > \n> > [...]\n> > So the count for this is generated on the fly needed for\n> > pagination in the app which expresses the total number of\n> > finds, but only displays 40 of them. If any one knows a way\n> > to determine the total number of matches without needing to \n> > iterate through them using jdbc, I'm all ears as this would\n> > save me huge amounts of time and limit/offset would become\n> > an option. \n> \n> Is there a reason you can't do a count(field) query first? If\n> so, you can get the number of records returned by setting\n> absolute(-1) and getting the row number.\n> \n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n-- \n\"A power so great, it can only be used for Good or Evil!\"\n -- Firesign Theatre, \"The Giant Rat of Summatra\"\n\n", "msg_date": "Thu, 03 Mar 2005 14:47:29 -0600", "msg_from": "Jeremiah Jahn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" }, { "msg_contents": "Hi, Jeremiah,\n\nJeremiah Jahn schrieb:\n> doesn't that cause two queries? I used to do it that way and cut my time\n> substantially by counting in-line. Even though the results were cached\n> it still took more time.\n\nThis sounds rather strange.\n\n> Also since the tables is constantly be updated\n> the returned total would not always match the number of results on the\n> second query.\n\nDid you run both queries in the same transaction, with transaction\nisolation level set to serializable? If yes, you found a serious bug in\nPostgreSQL transaction engine.\n\nMarkus\n\n-- \nMarkus Schaber | Dipl. Informatiker | Software Development GIS\n\nFight against software patents in EU! 
http://ffii.org/\n http://nosoftwarepatents.org/\n", "msg_date": "Thu, 03 Mar 2005 23:03:44 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: name search query speed" } ]
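To illustrate the point about running the count and the page fetch against the same snapshot, a minimal sketch follows; the table and column names are placeholders, and in PostgreSQL of this vintage SERIALIZABLE means snapshot isolation, so both statements see identical data.

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- total matches, as of the transaction's snapshot
SELECT count(*) FROM people WHERE last_name = 'SMITH';

-- a page of results from exactly the same snapshot,
-- so the count and the rows cannot drift apart
SELECT id, last_name, first_name
FROM people
WHERE last_name = 'SMITH'
ORDER BY first_name
LIMIT 40 OFFSET 0;

COMMIT;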
[ { "msg_contents": "\nI have a query that runs quite quickly using a hash join when run\nstandalone.\n\nWhen I use this query as a subquery the planner always seems to\npick a differnt plan with an order of magnitude worse performance.\n\nThis bad plan is chosen even when the outer sql statement is\na trivial expression like this:\n select * from (query) as a;\nwhich I believe should be a no-op.\n\n\nShould the optimizer have noticed that it could have used a hash\njoin in this case? Anything I can do to help convince it to?\n\n Explain analyze output follows.\n Thanks,\n Ron\n\n\n\n============================================================================\n\nfli=# explain analyze SELECT * from (select * from userfeatures.points join icons using (iconid) where the_geom && setSRID('BOX3D(-123.40 25.66,-97.87 43.17)'::BOX3D, -1 )) as upf ;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..446.42 rows=1 width=120) (actual time=-0.096..7928.546 rows=15743 loops=1)\n Join Filter: (\"outer\".iconid = \"inner\".iconid)\n -> Seq Scan on points (cost=0.00..444.43 rows=1 width=82) (actual time=0.096..132.255 rows=15743 loops=1)\n Filter: (the_geom && '010300000001000000050000009A99999999D95EC0295C8FC2F5A839409A99999999D95EC0F6285C8FC295454048E17A14AE7758C0F6285C8FC295454048E17A14AE7758C0295C8FC2F5A839409A99999999D95EC0295C8FC2F5A83940'::geometry)\n -> Seq Scan on icons (cost=0.00..1.44 rows=44 width=42) (actual time=0.006..0.242 rows=44 loops=15743)\n Total runtime: 8005.766 ms\n(6 rows)\n\nfli=# explain analyze select * from userfeatures.points join icons using (iconid) where the_geom && setSRID('BOX3D(-123.40 25.66,-97.87 43.17)'::BOX3D, -1 );\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.55..682.84 rows=15789 width=120) (actual time=0.641..320.002 rows=15743 loops=1)\n Hash Cond: (\"outer\".iconid = \"inner\".iconid)\n -> Seq Scan on points (cost=0.00..444.43 rows=15794 width=82) (actual time=0.067..94.307 rows=15743 loops=1)\n Filter: (the_geom && '010300000001000000050000009A99999999D95EC0295C8FC2F5A839409A99999999D95EC0F6285C8FC295454048E17A14AE7758C0F6285C8FC295454048E17A14AE7758C0295C8FC2F5A839409A99999999D95EC0295C8FC2F5A83940'::geometry)\n -> Hash (cost=1.44..1.44 rows=44 width=42) (actual time=0.530..0.530 rows=0 loops=1)\n -> Seq Scan on icons (cost=0.00..1.44 rows=44 width=42) (actual time=0.026..0.287 rows=44 loops=1)\n Total runtime: 397.003 ms\n(7 rows)\n\n\n\n\n\n", "msg_date": "Fri, 4 Mar 2005 03:27:23 -0800 (PST)", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Query's fast standalone - slow as a subquery." 
}, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> -> Seq Scan on points (cost=0.00..444.43 rows=1 width=82) (actual time=0.096..132.255 rows=15743 loops=1)\n> Filter: (the_geom && '010300000001000000050000009A99999999D95EC0295C8FC2F5A839409A99999999D95EC0F6285C8FC295454048E17A14AE7758C0F6285C8FC295454048E17A14AE7758C0295C8FC2F5A839409A99999999D95EC0295C8FC2F5A83940'::geometry)\n\n> -> Seq Scan on points (cost=0.00..444.43 rows=15794 width=82) (actual time=0.067..94.307 rows=15743 loops=1)\n> Filter: (the_geom && '010300000001000000050000009A99999999D95EC0295C8FC2F5A839409A99999999D95EC0F6285C8FC295454048E17A14AE7758C0F6285C8FC295454048E17A14AE7758C0295C8FC2F5A839409A99999999D95EC0295C8FC2F5A83940'::geometry)\n\nApparently the selectivity of the && condition is misestimated in the\nfirst case (note the radically wrong rowcount estimate), leading to an\ninefficient join plan choice. I suppose this is a bug in the postgis\nselectivity routines --- better complain to them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Mar 2005 10:22:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query's fast standalone - slow as a subquery. " }, { "msg_contents": "Hi there :-)\n\nI'm really, really having trouble with this query... It is a part of,\nhmmm... 200 similar querys that I dinyamically build and run in a\nstored procedure. This one, for example, takes 27seconds to run. The\nwhole stored procedure executes in about 15minutes. This is too much\nwhen compared to the exact same database, with the same indexes and\nsame data running under SqlServer 2000, which takes 21seconds to run\nthe whole batch.\n\nAny help would be extremely appreciated. I've also tried to tune up\nthe configuration\n\ninsert into MRS_REPLICATION_OUT select 514, 10000168, C.contxt_id,\nC.contxt_elmt_ix, CAST(null as NUMERIC(18)), CAST(null as\nNUMERIC(18)), CAST(null as NUMERIC(18)), CAST(null as NUMERIC(18)),\nCAST(null as NUMERIC(18)), null, 1 from c2iedm.CONTXT as P inner join\nc2iedm.CONTXT_ELMT as C on (P.contxt_id=C.contxt_id) inner join\nMRS_REPLICATION_OUT as S on S.ent_id=10000029 and (CAST(P.contxt_id AS\nnumeric(18)) = S.pk1) inner join MRS_TRANSACTION TRANS on\nTRANS.trans_id=514 left join NON_REPL_DATA_OWNER NRDO on\nNRDO.non_repl_data_owner_id=C.owner_id left join REPL_DATA_OWNER_RSDNC\nRDOR on RDOR.owner_id=C.owner_id and\nRDOR.rsdnc_node_id=TRANS.recv_node_id left join MRS_REPLICATION_OUT\nOUT on OUT.trans_id=514 and OUT.ent_id=10000168 and ((CAST(C.contxt_id\nAS numeric(18)) = OUT.pk1 AND CAST(C.contxt_elmt_ix AS numeric(18)) =\nOUT.pk2)) inner join MRS_TRANSACTION RED_TRANS on\nTRANS.prov_node_id=RED_TRANS.prov_node_id and\nTRANS.recv_node_id=RED_TRANS.recv_node_id left join\nMRS_REPLICATION_OUT RED_OUT on RED_TRANS.cat_code = 'OUT' and\nRED_TRANS.trans_type in ('X01', 'X02') and\nRED_TRANS.trans_id=RED_OUT.trans_id where S.age=0 and S.trans_id=514\nand (NRDO.non_repl_data_owner_id is null) AND (RDOR.repl_data_owner_id\nis null) AND (OUT.trans_id is null) AND (RED_OUT.trans_id is null);\n\nThis kind of inserts generate few rows. Between 8k and 15k for this particular\ninsert, and about 20k for the whole batch. 
If I try to run a batch\nto generate about 50k rows, then I'll be stuck here for more that 45h.\nCompare this to 12minutes when running SqlServer 2000.\n\nHere is the result of explain analyze:\n\n\"Merge Left Join (cost=1338.32..1377.99 rows=45 width=32) (actual\ntime=719.000..26437.000 rows=14862 loops=1)\"\n\" Merge Cond: (\"outer\".trans_id = \"inner\".trans_id)\"\n\" Join Filter: ((\"outer\".cat_code = 'OUT'::bpchar) AND\n((\"outer\".trans_type = 'X01'::bpchar) OR (\"outer\".trans_type =\n'X02'::bpchar)))\"\n\" Filter: (\"inner\".trans_id IS NULL)\"\n\" -> Sort (cost=1067.36..1067.47 rows=45 width=56) (actual\ntime=719.000..735.000 rows=14862 loops=1)\"\n\" Sort Key: red_trans.trans_id\"\n\" -> Merge Join (cost=851.66..1066.12 rows=45 width=56)\n(actual time=407.000..673.000 rows=14862 loops=1)\"\n\" Merge Cond: (\"outer\".recv_node_id = \"inner\".recv_node_id)\"\n\" Join Filter: (\"outer\".prov_node_id = \"inner\".prov_node_id)\"\n\" -> Nested Loop Left Join (cost=847.14..987.28\nrows=3716 width=60) (actual time=407.000..610.000 rows=14862 loops=1)\"\n\" Join Filter: (((\"outer\".contxt_id)::numeric(18,0)\n= \"inner\".pk1) AND ((\"outer\".contxt_elmt_ix)::numeric(18,0) =\n\"inner\".pk2))\"\n\" Filter: (\"inner\".trans_id IS NULL)\"\n\" -> Merge Left Join (cost=718.22..746.87\nrows=3716 width=60) (actual time=407.000..563.000 rows=14862 loops=1)\"\n\" Merge Cond: ((\"outer\".recv_node_id =\n\"inner\".rsdnc_node_id) AND (\"outer\".owner_id = \"inner\".owner_id))\"\n\" Filter: (\"inner\".repl_data_owner_id IS NULL)\"\n\" -> Sort (cost=717.19..726.48 rows=3716\nwidth=74) (actual time=407.000..423.000 rows=14862 loops=1)\"\n\" Sort Key: trans.recv_node_id, c.owner_id\"\n\" -> Nested Loop Left Join\n(cost=1.01..496.84 rows=3716 width=74) (actual time=0.000..312.000\nrows=14862 loops=1)\"\n\" Join Filter:\n(\"inner\".non_repl_data_owner_id = \"outer\".owner_id)\"\n\" Filter:\n(\"inner\".non_repl_data_owner_id IS NULL)\"\n\" -> Nested Loop\n(cost=0.00..412.22 rows=3716 width=74) (actual time=0.000..186.000\nrows=14862 loops=1)\"\n\" -> Seq Scan on\nmrs_transaction trans (cost=0.00..2.05 rows=1 width=28) (actual\ntime=0.000..0.000 rows=1 loops=1)\"\n\" Filter: (trans_id =\n514::numeric)\"\n\" -> Nested Loop\n(cost=0.00..373.01 rows=3716 width=46) (actual time=0.000..139.000\nrows=14862 loops=1)\"\n\" Join Filter:\n(\"outer\".contxt_id = \"inner\".contxt_id)\"\n\" -> Nested Loop\n(cost=0.00..4.81 rows=1 width=16) (actual time=0.000..0.000 rows=4\nloops=1)\"\n\" Join Filter:\n((\"inner\".contxt_id)::numeric(18,0) = \"outer\".pk1)\"\n\" -> Index\nScan using ix_mrs_replication_out_all on mrs_replication_out s\n(cost=0.00..3.76 rows=1 width=16) (actual time=0.000..0.000 rows=4\nloops=1)\"\n\" Index\nCond: ((ent_id = 10000029::numeric) AND (age = 0::numeric) AND\n(trans_id = 514::numeric))\"\n\" -> Seq Scan\non contxt p (cost=0.00..1.02 rows=2 width=16) (actual\ntime=0.000..0.000 rows=2 loops=4)\"\n\" -> Seq Scan on\ncontxt_elmt c (cost=0.00..275.31 rows=7431 width=46) (actual\ntime=0.000..7.500 rows=7431 loops=4)\"\n\" -> Materialize\n(cost=1.01..1.02 rows=1 width=12) (actual time=0.000..0.001 rows=1\nloops=14862)\"\n\" -> Seq Scan on\nnon_repl_data_owner nrdo (cost=0.00..1.01 rows=1 width=12) (actual\ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Sort (cost=1.03..1.03 rows=2 width=42)\n(actual time=0.000..0.000 rows=2 loops=1)\"\n\" Sort Key: rdor.rsdnc_node_id, rdor.owner_id\"\n\" -> Seq Scan on repl_data_owner_rsdnc\nrdor (cost=0.00..1.02 rows=2 width=42) (actual time=0.000..0.000\nrows=2 
loops=1)\"\n\" -> Materialize (cost=128.92..128.93 rows=1\nwidth=42) (actual time=0.000..0.000 rows=0 loops=14862)\"\n\" -> Seq Scan on mrs_replication_out \"out\"\n(cost=0.00..128.92 rows=1 width=42) (actual time=0.000..0.000 rows=0\nloops=1)\"\n\" Filter: ((trans_id = 514::numeric)\nAND (ent_id = 10000168::numeric))\"\n\" -> Sort (cost=4.52..4.73 rows=84 width=52) (actual\ntime=0.000..15.000 rows=1 loops=1)\"\n\" Sort Key: red_trans.recv_node_id\"\n\" -> Seq Scan on mrs_transaction red_trans\n(cost=0.00..1.84 rows=84 width=52) (actual time=0.000..0.000 rows=1\nloops=1)\"\n\" -> Sort (cost=270.96..277.78 rows=2728 width=10) (actual\ntime=0.000..5255.000 rows=8932063 loops=1)\"\n\" Sort Key: red_out.trans_id\"\n\" -> Seq Scan on mrs_replication_out red_out\n(cost=0.00..115.28 rows=2728 width=10) (actual time=0.000..0.000\nrows=602 loops=1)\"\n\"Total runtime: 27094.000 ms\"\n\nOnce again, thanks in advance.\n\nHugo Ferreira\n--\nGPG Fingerprint: B0D7 1249 447D F5BB 22C5 5B9B 078C 2615 504B 7B85\n", "msg_date": "Mon, 7 Mar 2005 17:01:58 +0000", "msg_from": "Hugo Ferreira <[email protected]>", "msg_from_op": false, "msg_subject": "Help trying to tune query that executes 40x slower than in SqlServer" }, { "msg_contents": "Hugo,\n\n> insert into MRS_REPLICATION_OUT select 514, 10000168,  C.contxt_id,\n> C.contxt_elmt_ix, CAST(null as NUMERIC(18)), CAST(null as\n> NUMERIC(18)), CAST(null as NUMERIC(18)), CAST(null as NUMERIC(18)),\n> CAST(null as NUMERIC(18)), null, 1 from c2iedm.CONTXT as P inner join\n> c2iedm.CONTXT_ELMT as C on (P.contxt_id=C.contxt_id) inner join\n> MRS_REPLICATION_OUT as S on S.ent_id=10000029 and (CAST(P.contxt_id AS\n> numeric(18)) = S.pk1) inner join MRS_TRANSACTION TRANS on\n\nCan you *format* this query please, and re-submit it? Proper query format \nlooks like:\n\nSELECT a.1, b.2\nFROM a JOIN b ON a.1 = b.3\n\tJOIN c ON b.4 = c.1\nWHERE a.5 < 6\n AND c.7 = '2005-01-01';\n\n... for maximum readability. \n\nAlso, when asking others to help debug your queries, it helps them (and, \nfrankly, you) if you can NOT use single-letter table aliases. Single-letter \ntable aliases are evil for the same reason that single-letter variables in \ncode are.\n\nThanks!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 7 Mar 2005 09:28:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower than in\n\tSqlServer" }, { "msg_contents": "I'm sorry for my unpolite query alignment. 
Here is the query in a more\nhuman-readable format:\n\nSELECT 514, 10000168, C.contxt_id, C.contxt_elmt_ix, null, null,\nnull, null, null, null, 1\nFROM CONTXT as P INNER JOIN CONTXT_ELMT as C on P.contxt_id = C.contxt_id\n INNER JOIN MRS_REPLICATION_OUT as S on S.ent_id=10000029\n AND P.contxt_id = S.pk1\n INNER JOIN MRS_TRANSACTION TRANS on TRANS.trans_id=514\n LEFT JOIN ON_REPL_DATA_OWNER NRDO on\nNRDO.non_repl_data_owner_id = C.owner_id\n LEFT JOIN REPL_DATA_OWNER_RSDNC RDOR on RDOR.owner_id = C.owner_id\n AND RDOR.rsdnc_node_id=TRANS.recv_node_id\n LEFT JOIN MRS_REPLICATION_OUT OUT on OUT.trans_id = 514\n AND OUT.ent_id=10000168 and C.contxt_id = OUT.pk1\n AND C.contxt_elmt_ix = OUT.pk2\n INNER JOIN MRS_TRANSACTION RED_TRANS on\nTRANS.prov_node_id=RED_TRANS.prov_node_id\n AND TRANS.recv_node_id=RED_TRANS.recv_node_id\n LEFT JOIN MRS_REPLICATION_OUT RED_OUT on RED_TRANS.cat_code = 'OUT'\n AND RED_TRANS.trans_type in ('X01', 'X02')\n AND RED_TRANS.trans_id = RED_OUT.trans_id\nWHERE S.age=0 and S.trans_id=514\n AND (NRDO.non_repl_data_owner_id is null)\n AND (RDOR.repl_data_owner_id is null)\n AND (OUT.trans_id is null)\n AND (RED_OUT.trans_id is null);\n\nBecause GMAIL also cuts out text at 80 characters, I also send the\nquery in attachment.\n\nOnce again thanks for your help,\n\nHugo Ferreira\n\n> Can you *format* this query please, and re-submit it? Proper query format\n> looks like:\n> \n> SELECT a.1, b.2\n> FROM a JOIN b ON a.1 = b.3\n> JOIN c ON b.4 = c.1\n> WHERE a.5 < 6\n> AND c.7 = '2005-01-01';\n> \n> ... for maximum readability.\n\n-- \nGPG Fingerprint: B0D7 1249 447D F5BB 22C5 5B9B 078C 2615 504B 7B85", "msg_date": "Mon, 7 Mar 2005 17:45:32 +0000", "msg_from": "Hugo Ferreira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower than in\n\tSqlServer" }, { "msg_contents": "Hugo Ferreira <[email protected]> writes:\n> SELECT 514, 10000168, C.contxt_id, C.contxt_elmt_ix, null, null,\n> null, null, null, null, 1\n> FROM CONTXT as P INNER JOIN CONTXT_ELMT as C on P.contxt_id = C.contxt_id\n> INNER JOIN MRS_REPLICATION_OUT as S on S.ent_id=10000029\n> AND P.contxt_id = S.pk1\n> INNER JOIN MRS_TRANSACTION TRANS on TRANS.trans_id=514\n> LEFT JOIN ON_REPL_DATA_OWNER NRDO on\n> NRDO.non_repl_data_owner_id = C.owner_id\n> LEFT JOIN REPL_DATA_OWNER_RSDNC RDOR on RDOR.owner_id = C.owner_id\n> AND RDOR.rsdnc_node_id=TRANS.recv_node_id\n> LEFT JOIN MRS_REPLICATION_OUT OUT on OUT.trans_id = 514\n> AND OUT.ent_id=10000168 and C.contxt_id = OUT.pk1\n> AND C.contxt_elmt_ix = OUT.pk2\n> INNER JOIN MRS_TRANSACTION RED_TRANS on\n> TRANS.prov_node_id=RED_TRANS.prov_node_id\n> AND TRANS.recv_node_id=RED_TRANS.recv_node_id\n> LEFT JOIN MRS_REPLICATION_OUT RED_OUT on RED_TRANS.cat_code = 'OUT'\n> AND RED_TRANS.trans_type in ('X01', 'X02')\n> AND RED_TRANS.trans_id = RED_OUT.trans_id\n\nI think the problem is that the intermix of inner and left joins forces\nPostgres to do the joins in a particular order, per\nhttp://www.postgresql.org/docs/8.0/static/explicit-joins.html\nand this order is quite non optimal for your data. In particular it\nlooks like joining red_trans to red_out first, instead of last,\nwould be a good idea (I think but am not 100% certain that this\ndoesn't change the results).\n\nIt is possible but complicated to determine that reordering outer joins\nis safe in some cases. We don't currently have such logic in PG. It\nmay be that SQL Server does have that capability and that's why it's\nfinding a much better plan ... 
but for now you have to do that by hand\nin PG.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 13:02:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower than in\n\tSqlServer" }, { "msg_contents": "Hi,\n\nWell, I think the problem is far more complex than just joins\nreordering... I've restrucutred the query so that it won't use any\nexplicit joins.Instead it now has a series of 'in (select ...)' and\n'not exists (select ...)'. This actually got faster... sometimes!!!\n\nselect 1, 10000168, C.contxt_id, C.contxt_elmt_ix, null, null, null,\nnull, null, null, 1\nfrom CONTXT as P, CONTXT_ELMT as C, MRS_REPLICATION_OUT as S,\nMRS_TRANSACTION as TRANS\nwhere S.age=0 \n\tand S.trans_id=1 \n\tand S.trans_id = TRANS.trans_id \n\tand S.ent_id = 10000029 \n\tand (P.contxt_id=C.contxt_id) and (P.contxt_id = S.pk1) \n\tand (C.owner_id not in (select non_repl_data_owner_id from\nNON_REPL_DATA_OWNER))\n\tAND (C.owner_id not in (select repl_data_owner_id from REPL_DATA_OWNER_RSDNC \n\t\t\t\t\twhere rsdnc_node_id = TRANS.recv_node_id))\n\tAND (not exists (select pk1 from MRS_REPLICATION_OUT \n\t\t\t\twhere trans_id=1 \n\t\t\t\t\tand ent_id=10000168 \n\t\t\t\t\tand C.contxt_id = pk1 \n\t\t\t\t\tAND C.contxt_elmt_ix = pk2)) \n\tAND (not exists (select pk1 from MRS_TRANSACTION RED_TRANS,\nMRS_REPLICATION_OUT RED_OUT\n\t\t\t\twhere RED_TRANS.cat_code = 'OUT' \n\t\t\t\t\tand RED_TRANS.trans_type in ('X01', 'X02') \n\t\t\t\t\tand RED_TRANS.trans_id=RED_OUT.trans_id \n\t\t\t\t\tand RED_TRANS.prov_node_id=TRANS.prov_node_id \n\t\t\t\t\tand RED_TRANS.recv_node_id=TRANS.recv_node_id \n\t\t\t\t\tand RED_OUT.ent_id=10000168 \n\t\t\t\t\tand C.contxt_id = pk1 \n\t\t\t\t\tAND C.contxt_elmt_ix = pk2))\n\n\nFor example... I run the query, it takes 122seconds. Then I delete the\ntarget tables, vacuum the database, re-run it again: 9s. But if I run\nvacuum several times, and then run, it takes again 122seconds. If I\nstop this 122seconds query, say, at second 3 and then run it again, it\nwill only take 9s. It simply doesn't make sense. Also, explain analyse\nwill give me diferent plans each time I run it... Unfortunately, this\nis rendering PostgreSQL unusable for our goals. Any ideas?\n\nBy the way, I got the following indexes over MRS_REPLICATION_OUT which\nseems to speed up things:\n\nCREATE INDEX ix_mrs_replication_out_all ON mrs_replication_out \nUSING btree (ent_id, age, trans_id);\n\nCREATE INDEX ix_mrs_replication_pks ON mrs_replication_out \nUSING btree (trans_id, ent_id, pk1, pk2, pk3, pk4, pk5, pk6, pk7);\n\nNote: pk2... pk7 are nullable columns. trans_id is the least variant\ncolumn. pk1 is the most variant column. Most of the times, the\nexecution plan includes an 'index scan' over the first index\n(ix_mrs_replication_out_all), followed by a filter with columns from\nthe second index (trans_id, ent_id, pk1, pk2, pk3, pk4, pk5, pk6,\npk7), though the 'age' column is not used... Any guess why??\n\nThanks in advance,\n\nHugo Ferreira\n\n> It is possible but complicated to determine that reordering outer joins\n> is safe in some cases. We don't currently have such logic in PG. It\n> may be that SQL Server does have that capability and that's why it's\n> finding a much better plan ... 
but for now you have to do that by hand\n> in PG.\n\n-- \nGPG Fingerprint: B0D7 1249 447D F5BB 22C5 5B9B 078C 2615 504B 7B85\n", "msg_date": "Wed, 9 Mar 2005 12:08:21 +0000", "msg_from": "Hugo Ferreira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower than in\n\tSqlServer" }, { "msg_contents": "On Wed, 9 Mar 2005 11:08 pm, Hugo Ferreira wrote:\n> For example... I run the query, it takes 122seconds. Then I delete the\n> target tables, vacuum the database, re-run it again: 9s. But if I run\n> vacuum several times, and then run, it takes again 122seconds. If I\n> stop this 122seconds query, say, at second 3 and then run it again, it\n> will only take 9s. It simply doesn't make sense. Also, explain analyse\n> will give me diferent plans each time I run it... Unfortunately, this\n> is rendering PostgreSQL unusable for our goals. Any ideas?\n> \nThe explain analyze is still be best information if you want assistance with\nwhat postgresql is doing, and how to stop it. If you could attach \nexplain analyzes for both the fast (9s), and slow (122s) runs, that would\nhelp people get an idea of how the query is running. At the moment\nwe don't know how postgresql is actually executing the query.\n\nRegards\n\nRussell Smith.\n", "msg_date": "Thu, 10 Mar 2005 09:08:32 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower than in\n\tSqlServer" }, { "msg_contents": "Hugo,\n\n I think your problem is with the MRS_TRANSACTION TRANS table. It is \nnot joining anything when declared, but later it is joining thru a LEFT \nJOIN of the REPL_DATA_OWNER_RSDNC table. In fact I'm not sure that this \ntable is really needed. I would suggest rewriting your FROM clause. It \nappears a little busy and includes additional filters that are taken \ncare of in the WHERE clause.\n\n What are the table layouts and what fields are indexed? \n\n\n\nHugo Ferreira wrote:\n\n>Hi there :-)\n>\n>I'm really, really having trouble with this query... It is a part of,\n>hmmm... 200 similar querys that I dinyamically build and run in a\n>stored procedure. This one, for example, takes 27seconds to run. The\n>whole stored procedure executes in about 15minutes. This is too much\n>when compared to the exact same database, with the same indexes and\n>same data running under SqlServer 2000, which takes 21seconds to run\n>the whole batch.\n>\n>Any help would be extremely appreciated. 
I've also tried to tune up\n>the configuration\n>\n>insert into MRS_REPLICATION_OUT select 514, 10000168, C.contxt_id,\n>C.contxt_elmt_ix, CAST(null as NUMERIC(18)), CAST(null as\n>NUMERIC(18)), CAST(null as NUMERIC(18)), CAST(null as NUMERIC(18)),\n>CAST(null as NUMERIC(18)), null, 1 from c2iedm.CONTXT as P inner join\n>c2iedm.CONTXT_ELMT as C on (P.contxt_id=C.contxt_id) inner join\n>MRS_REPLICATION_OUT as S on S.ent_id=10000029 and (CAST(P.contxt_id AS\n>numeric(18)) = S.pk1) inner join MRS_TRANSACTION TRANS on\n>TRANS.trans_id=514 left join NON_REPL_DATA_OWNER NRDO on\n>NRDO.non_repl_data_owner_id=C.owner_id left join REPL_DATA_OWNER_RSDNC\n>RDOR on RDOR.owner_id=C.owner_id and\n>RDOR.rsdnc_node_id=TRANS.recv_node_id left join MRS_REPLICATION_OUT\n>OUT on OUT.trans_id=514 and OUT.ent_id=10000168 and ((CAST(C.contxt_id\n>AS numeric(18)) = OUT.pk1 AND CAST(C.contxt_elmt_ix AS numeric(18)) =\n>OUT.pk2)) inner join MRS_TRANSACTION RED_TRANS on\n>TRANS.prov_node_id=RED_TRANS.prov_node_id and\n>TRANS.recv_node_id=RED_TRANS.recv_node_id left join\n>MRS_REPLICATION_OUT RED_OUT on RED_TRANS.cat_code = 'OUT' and\n>RED_TRANS.trans_type in ('X01', 'X02') and\n>RED_TRANS.trans_id=RED_OUT.trans_id where S.age=0 and S.trans_id=514\n>and (NRDO.non_repl_data_owner_id is null) AND (RDOR.repl_data_owner_id\n>is null) AND (OUT.trans_id is null) AND (RED_OUT.trans_id is null);\n>\n>This kind of inserts generate few rows. Between 8k and 15k for this particular\n>insert, and about 20k for the whole batch. If I try to run a batch\n>to generate about 50k rows, then I'll be stuck here for more that 45h.\n>Compare this to 12minutes when running SqlServer 2000.\n>\n>Here is the result of explain analyze:\n>\n>\"Merge Left Join (cost=1338.32..1377.99 rows=45 width=32) (actual\n>time=719.000..26437.000 rows=14862 loops=1)\"\n>\" Merge Cond: (\"outer\".trans_id = \"inner\".trans_id)\"\n>\" Join Filter: ((\"outer\".cat_code = 'OUT'::bpchar) AND\n>((\"outer\".trans_type = 'X01'::bpchar) OR (\"outer\".trans_type =\n>'X02'::bpchar)))\"\n>\" Filter: (\"inner\".trans_id IS NULL)\"\n>\" -> Sort (cost=1067.36..1067.47 rows=45 width=56) (actual\n>time=719.000..735.000 rows=14862 loops=1)\"\n>\" Sort Key: red_trans.trans_id\"\n>\" -> Merge Join (cost=851.66..1066.12 rows=45 width=56)\n>(actual time=407.000..673.000 rows=14862 loops=1)\"\n>\" Merge Cond: (\"outer\".recv_node_id = \"inner\".recv_node_id)\"\n>\" Join Filter: (\"outer\".prov_node_id = \"inner\".prov_node_id)\"\n>\" -> Nested Loop Left Join (cost=847.14..987.28\n>rows=3716 width=60) (actual time=407.000..610.000 rows=14862 loops=1)\"\n>\" Join Filter: (((\"outer\".contxt_id)::numeric(18,0)\n>= \"inner\".pk1) AND ((\"outer\".contxt_elmt_ix)::numeric(18,0) =\n>\"inner\".pk2))\"\n>\" Filter: (\"inner\".trans_id IS NULL)\"\n>\" -> Merge Left Join (cost=718.22..746.87\n>rows=3716 width=60) (actual time=407.000..563.000 rows=14862 loops=1)\"\n>\" Merge Cond: ((\"outer\".recv_node_id =\n>\"inner\".rsdnc_node_id) AND (\"outer\".owner_id = \"inner\".owner_id))\"\n>\" Filter: (\"inner\".repl_data_owner_id IS NULL)\"\n>\" -> Sort (cost=717.19..726.48 rows=3716\n>width=74) (actual time=407.000..423.000 rows=14862 loops=1)\"\n>\" Sort Key: trans.recv_node_id, c.owner_id\"\n>\" -> Nested Loop Left Join\n>(cost=1.01..496.84 rows=3716 width=74) (actual time=0.000..312.000\n>rows=14862 loops=1)\"\n>\" Join Filter:\n>(\"inner\".non_repl_data_owner_id = \"outer\".owner_id)\"\n>\" Filter:\n>(\"inner\".non_repl_data_owner_id IS NULL)\"\n>\" -> Nested Loop\n>(cost=0.00..412.22 
rows=3716 width=74) (actual time=0.000..186.000\n>rows=14862 loops=1)\"\n>\" -> Seq Scan on\n>mrs_transaction trans (cost=0.00..2.05 rows=1 width=28) (actual\n>time=0.000..0.000 rows=1 loops=1)\"\n>\" Filter: (trans_id =\n>514::numeric)\"\n>\" -> Nested Loop\n>(cost=0.00..373.01 rows=3716 width=46) (actual time=0.000..139.000\n>rows=14862 loops=1)\"\n>\" Join Filter:\n>(\"outer\".contxt_id = \"inner\".contxt_id)\"\n>\" -> Nested Loop\n>(cost=0.00..4.81 rows=1 width=16) (actual time=0.000..0.000 rows=4\n>loops=1)\"\n>\" Join Filter:\n>((\"inner\".contxt_id)::numeric(18,0) = \"outer\".pk1)\"\n>\" -> Index\n>Scan using ix_mrs_replication_out_all on mrs_replication_out s\n>(cost=0.00..3.76 rows=1 width=16) (actual time=0.000..0.000 rows=4\n>loops=1)\"\n>\" Index\n>Cond: ((ent_id = 10000029::numeric) AND (age = 0::numeric) AND\n>(trans_id = 514::numeric))\"\n>\" -> Seq Scan\n>on contxt p (cost=0.00..1.02 rows=2 width=16) (actual\n>time=0.000..0.000 rows=2 loops=4)\"\n>\" -> Seq Scan on\n>contxt_elmt c (cost=0.00..275.31 rows=7431 width=46) (actual\n>time=0.000..7.500 rows=7431 loops=4)\"\n>\" -> Materialize\n>(cost=1.01..1.02 rows=1 width=12) (actual time=0.000..0.001 rows=1\n>loops=14862)\"\n>\" -> Seq Scan on\n>non_repl_data_owner nrdo (cost=0.00..1.01 rows=1 width=12) (actual\n>time=0.000..0.000 rows=1 loops=1)\"\n>\" -> Sort (cost=1.03..1.03 rows=2 width=42)\n>(actual time=0.000..0.000 rows=2 loops=1)\"\n>\" Sort Key: rdor.rsdnc_node_id, rdor.owner_id\"\n>\" -> Seq Scan on repl_data_owner_rsdnc\n>rdor (cost=0.00..1.02 rows=2 width=42) (actual time=0.000..0.000\n>rows=2 loops=1)\"\n>\" -> Materialize (cost=128.92..128.93 rows=1\n>width=42) (actual time=0.000..0.000 rows=0 loops=14862)\"\n>\" -> Seq Scan on mrs_replication_out \"out\"\n>(cost=0.00..128.92 rows=1 width=42) (actual time=0.000..0.000 rows=0\n>loops=1)\"\n>\" Filter: ((trans_id = 514::numeric)\n>AND (ent_id = 10000168::numeric))\"\n>\" -> Sort (cost=4.52..4.73 rows=84 width=52) (actual\n>time=0.000..15.000 rows=1 loops=1)\"\n>\" Sort Key: red_trans.recv_node_id\"\n>\" -> Seq Scan on mrs_transaction red_trans\n>(cost=0.00..1.84 rows=84 width=52) (actual time=0.000..0.000 rows=1\n>loops=1)\"\n>\" -> Sort (cost=270.96..277.78 rows=2728 width=10) (actual\n>time=0.000..5255.000 rows=8932063 loops=1)\"\n>\" Sort Key: red_out.trans_id\"\n>\" -> Seq Scan on mrs_replication_out red_out\n>(cost=0.00..115.28 rows=2728 width=10) (actual time=0.000..0.000\n>rows=602 loops=1)\"\n>\"Total runtime: 27094.000 ms\"\n>\n>Once again, thanks in advance.\n>\n>Hugo Ferreira\n>--\n>GPG Fingerprint: B0D7 1249 447D F5BB 22C5 5B9B 078C 2615 504B 7B85\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n", "msg_date": "Wed, 09 Mar 2005 20:09:20 -0600", "msg_from": "Jim Johannsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help trying to tune query that executes 40x slower" } ]
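The rewrite being discussed, replacing or reordering the LEFT JOIN ... IS NULL anti-joins, is easier to see on a toy schema. The sketch below uses hypothetical parent/child tables rather than Hugo's real ones; both statements return parents that have no child, but on 8.0 they can be planned quite differently, which is why trying both forms (and reordering the joins by hand) is worth the effort.

-- form 1: outer join, then keep only the unmatched rows
SELECT p.id
FROM parent p
LEFT JOIN child c ON c.parent_id = p.id
WHERE c.parent_id IS NULL;

-- form 2: the same rows expressed as NOT EXISTS
SELECT p.id
FROM parent p
WHERE NOT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);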
[ { "msg_contents": " I face problem when running the following pgplsql\nfunction. The problem is it takes more than 24hours to\ncomplete\n the calculation. The EMP table has about 200,000\nrecords. I execute the function through psql \"select\ncalculate()\";\n (There is no cyclic link inside the data).\n \n Computer used: IBM xSeries 225, RAM 1GB, SCSI 36GB\n O/S : RedHat Linux Enterprise 3.0 AS\n PostgreSQL version 8.0.1\n fsync=false\n \n I would very appreciate if anyone can help to find\nout what the problem is, or any others way to improve\nthe performance\n of the function. \n \n Is there any difference between select in FOR LOOP\nwith CURSOR in term of performance ?\n\n EMP Table\n GEN char(3),\n CODE varchar(20),\n PARENT varchar(20),\n POSITION INT4 DEFAULT 0,\n PG NUMERIC(15,2) DEFAULT 0,\n P NUMERIC(15,2) DEFAULT 0,\n QUA CHAR(1) DEFAULT '0',\n .\n .\n . \n create index EMP_GEN on EMP (GEN);\n create index EMP_CODE on EMP (CODE);\n create index EMP_PARENT on PARENT (PARENT);\n \n Sample EMP DATA:\n GEN CODE PARENT POSITION P PG QUA\n ===============================================\n 000 A001 **** 3 100 0 '1'\n 001 A002 A001 2 50 0 '1'\n 001 A003 A001 1 50 0 '1'\n 001 A004 A001 1 20 0 '1'\n 002 A005 A003 2 20 0 '1'\n 002 A006 A004 3 30 0 '1'\n ...\n ...\n \n \n for vTMP_ROW in select CODE,PARENT,POSITION from\nEMP order by GEN desc loop \n vCODE := vTMP_ROW.CODE;\n vPARENT := vTMP_ROW.PARENT;\n nPOSITION := vTMP_ROW.POSITION;\n\n update EMP set PG=PG+P where CODE = vCODE;\n\n select into vCURR_ROW PG,POSITION from EMP\nwhere CODE = vCODE;\n \n nPG := vCURR_ROW.PG;\n nPOSITION := vCURR_ROW.POSITION;\n\n vUPL := vPARENT;\n \n loop\n select into vUPL_ROW\nCODE,PARENT,POSITION,P,QUA from EMP where CODE = vUPL;\n if found then\n if vUPL_ROW.POSITION > nPOSITION and\nvUPL_ROW.QUA = ''1'' then\n update EMP set PG=PG+nPG where CODE =\nvUPL;\n exit;\n end if;\n else \n exit; \n end if;\n vUPL := vUPL_ROW.PARENT;\n end loop;\n end loop;\n \n .\n .\n .\n\nThank You\n \n\n\n\t\n\t\t\n__________________________________ \nCelebrate Yahoo!'s 10th Birthday! \nYahoo! Netrospective: 100 Moments of the Web \nhttp://birthday.yahoo.com/netrospective/\n", "msg_date": "Fri, 4 Mar 2005 09:53:28 -0800 (PST)", "msg_from": "Charles Joseph <[email protected]>", "msg_from_op": true, "msg_subject": "Select in FOR LOOP Performance" }, { "msg_contents": "Charles Joseph <[email protected]> writes:\n> I face problem when running the following pgplsql\n> function. The problem is it takes more than 24hours to\n> complete\n> the calculation. The EMP table has about 200,000\n> records.\n\nSure there are no infinite loops of PARENT links in your table?\n\nAlso, if CODE is supposed to be unique, you should probably declare\nits index that way. Or at least make sure the planner knows it's\nunique (have you ANALYZEd the table lately?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Mar 2005 13:29:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in FOR LOOP Performance " } ]
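A short sketch of the two checks suggested above, using the poster's own table and column names. It assumes CODE really is unique in EMP (if the index creation fails, it is not), and the cycle queries only catch self-references and two-step loops; longer chains would still need a hand-written recursive check.

-- let the planner know CODE is unique, and refresh statistics
CREATE UNIQUE INDEX emp_code_uidx ON emp (code);
ANALYZE emp;

-- rows that are their own parent
SELECT code FROM emp WHERE code = parent;

-- two-step parent loops (A -> B -> A)
SELECT a.code, b.code
FROM emp a
JOIN emp b ON a.parent = b.code AND b.parent = a.code;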
[ { "msg_contents": "I notice that by default, postgres sets numeric fields to\nstorage MAIN. What exactly does that mean? Does that mean\nit stores it in some type of compressed BCD format? If so,\nhow much performance gain can I expect by setting the storage\nto PLAIN? Also, the docs say that char(n) is implemented more\nor less the same way as text. Does that mean that setting\na field to, say, char(2) PLAIN is not going be any faster\nthan text PLAIN? That seems a bit counter-intuitive. I\nwould hope that a char(2) PLAIN would just reserve two chars\nin the record structure without any overhead of pointers to\nexternal data. Is there a reason this isn't supported?\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Fri, 4 Mar 2005 14:59:01 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "MAIN vs. PLAIN" }, { "msg_contents": "\"Dave Held\" <[email protected]> writes:\n> I notice that by default, postgres sets numeric fields to\n> storage MAIN. What exactly does that mean?\n\nSee http://developer.postgresql.org/docs/postgres/storage-toast.html\n\nThere isn't any amazingly strong reason why numeric defaults to MAIN\nrather than EXTENDED, which is the default for every other toastable\ndatatype --- except that I thought it'd be a good idea to have at\nleast one type that did so, just to exercise that code path in the\ntuple toaster. And numeric shouldn't ordinarily be large enough to\nneed out-of-line storage anyway. It's unlikely even to need\ncompression, really, but as long as it's a varlena type the overhead\nto support toasting is nearly nil.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Mar 2005 23:33:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MAIN vs. PLAIN " } ]
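For anyone wanting to experiment with the storage strategies described above, the column-level knob is ALTER TABLE ... SET STORAGE, and the current setting is visible in pg_attribute.attstorage. The table below is a made-up example, not anything from the thread.

CREATE TABLE storage_demo (id integer, amount numeric(15,2), notes text);

-- numeric defaults to MAIN; switch strategies per column if desired
ALTER TABLE storage_demo ALTER COLUMN amount SET STORAGE PLAIN;     -- never compressed or stored out of line
ALTER TABLE storage_demo ALTER COLUMN notes  SET STORAGE EXTENDED;  -- may be compressed and toasted (the text default)

-- inspect the strategy letters (p = PLAIN, m = MAIN, x = EXTENDED, e = EXTERNAL)
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'storage_demo'::regclass AND attnum > 0;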
[ { "msg_contents": "Hi,\n\nI have performance problems with the postmaster process. The CPU usage is permanently\naround 90 %.\n\nMy DB:\n\n    3.5 GB size\n    largest table ca. 700 000 rows\n\nMost access:\n\n    Find a row - update if it exists, insert otherwise\n\nAverage access:\n\n    10 times a second\n\n    every query needs ca. 20 ms\n\n\nAll server options are still at their defaults!\n\nAny idea which \"screw\" I can turn?\n\n\nThanks for suggestions!\n\n\nMichael Zöphel\n\ne-Mail: [email protected]", "msg_date": "Sat, 5 Mar 2005 13:26:37 +0100", "msg_from": "\"Michael Zoephel\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance problems" } ]
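The "find a row, update if it exists, otherwise insert" step can be written as a small plpgsql function. The sketch below uses a made-up counters table, not the poster's schema, and assumes plpgsql is installed in the database. The primary key gives the UPDATE an index to use; under concurrent writers the INSERT can still raise a duplicate-key error, which the caller would have to retry.

CREATE TABLE counters (
    id  text PRIMARY KEY,
    val integer NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION upsert_counter(p_id text, p_val integer)
RETURNS void AS '
BEGIN
    -- try the update first; FOUND tells us whether a row matched
    UPDATE counters SET val = p_val WHERE id = p_id;
    IF NOT FOUND THEN
        INSERT INTO counters (id, val) VALUES (p_id, p_val);
    END IF;
    RETURN;
END;
' LANGUAGE plpgsql;

-- usage: one call per incoming row
SELECT upsert_counter('some-key', 20);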
[ { "msg_contents": "Hello!\n \nFirst off, I'm a real newbie at trying to read the output of explain\nanalyze.\n \nI have several similar queries in my application that I've got\nincorporated into views.  When they run sub 300ms, the users don't\nseem to mind.  However, one of them (query is below along with some\nrelevant table information) is running about 800ms and my users are\nstarting to grumble.\n \nI ran explain analyze on it (explain analyze results are\nbelow).  I noticed that the biggest chunk of time is being taken\nby a Hash Join near the top of the output (I'm still not sure what the\nindentation means and what the order means).  If I look at the\nestimate, it is comparable to several other hash join estimates in the\nquery; however, the actual cost in time is significantly higher than\nthose other hash joins.  Is this significant?\n \nI tried optimizing according to \"SQL Tuning\" by Tow, but this\nactually seemed to slow things down.  It also seemed that the\nquery optimizer in PostgreSQL reordered things on its own according to\nits own plan anyway.  Is this correct?\n \nI'd appreciate any help I can get to try to get this query below\n300ms.\n \nThanks!\nMark\n \nThe platform is a dual 2.2GHz Xeon 1.2GB RAM with mirrored drives\n(raid 1) running Win2000 Pro.  I run \"vacuum analyze\" every\nnight.  The postgresql.conf is basically standard except that I've\nopened it up to listen to the external network.  Other changes:\n \nmax_connections = 100\nshared_buffers = 10000\n \nquery (the person_id = 1 in the where clause is changed on a case by\ncase basis - depending upon who's running the query):\n \nexplain analyze SELECT DISTINCT c.job_id, g.person_id,\nc.job_no, b.deadline, c.name, bid_date(c.job_id) AS bid_date, c.miscq,\nc.city, c.st, j.name AS eng, c.s_team AS salesteam,\n       \nCASE           \nWHEN c.file_loc = 0 THEN 'No Bid'::character\nvarying           \nWHEN c.file_loc = -1 THEN 'Bid Board'::character\nvarying           \nWHEN c.file_loc = -2 THEN 'Lost Job'::character\nvarying           \nWHEN c.file_loc = -3 THEN 'See Job Notes'::character\nvarying           \nWHEN c.file_loc < -3 OR c.file_loc IS NULL THEN ''::character\nvarying           \nWHEN h.initials IS NOT NULL THEN\nh.initials           \nELSE 'Unknown person'::character\nvarying        END AS file_loc,\nCOALESCE(c.city::text || COALESCE(', '::text || c.st::text, ''::text),\nCOALESCE(c.st, ''::character varying)::text) AS \"location\", c.file_loc\nAS file_loc_id   FROM status a   LEFT JOIN\nstatus_list b ON a.status_id = b.status_id AND b.active  \nLEFT JOIN job c ON c.job_id = b.job_id   LEFT JOIN\nbuilder_list d ON c.job_id = d.job_id AND (d.won_heat OR d.won_vent OR\nd.won_tc OR c.heat AND d.bid_heat AND d.won_heat IS NULL OR c.vent AND\nd.bid_vent AND d.won_vent IS NULL OR c.tc AND d.bid_tc AND d.won_tc IS\nNULL) AND d.role = 'C'::bpchar   LEFT JOIN company e ON\nd.company_id = e.company_id   LEFT JOIN call_list f ON\ne.company_id = f.company_id   LEFT JOIN person g ON\nf.person_id = g.person_id OR \"position\"(c.s_team::text,\ng.initials::text) > 0   LEFT JOIN person h ON\nc.file_loc = h.person_id   LEFT JOIN builder_list i ON\nc.job_id = i.job_id AND i.role = 'E'::bpchar   LEFT JOIN\ncompany j ON i.company_id = j.company_id  WHERE a.name::text =\n'Awaiting Award'::character varying::text and g.person_id = 1 \nORDER BY c.job_id, g.person_id, c.job_no, b.deadline, c.name,\nbid_date(c.job_id), c.miscq, c.city, COALESCE(c.city::text ||\nCOALESCE(', '::text || c.st::text, ''::text), 
COALESCE(c.st,\n''::character varying)::text), c.st, CASE   \nWHEN c.file_loc = 0 THEN 'No Bid'::character\nvarying    WHEN c.file_loc = -1 THEN 'Bid\nBoard'::character varying    WHEN c.file_loc = -2\nTHEN 'Lost Job'::character varying    WHEN\nc.file_loc = -3 THEN 'See Job Notes'::character\nvarying    WHEN c.file_loc < -3 OR c.file_loc IS\nNULL THEN ''::character varying    WHEN h.initials\nIS NOT NULL THEN h.initials    ELSE 'Unknown\nperson'::character varyingEND, j.name, c.s_team,\nc.file_loc;\nTables:\nstatus - 14 rows\nstatus_list - 6566 rows\njob - 2210 rows\nbuilder_list - 9670 rows\ncompany - 1249 rows\ncall_list - 4731 rows\nperson - 27 rows\n \nPrimary keys:\nany field with a \"_id\" suffix is a primary key; and thus is\nimplicitly indexed.\n \nOther indexes:\nstatus_list(job_id) btree\nstatus_list(status_id) btree\njob(file_loc) btree\nbuilder_list(company_id) btree\ncall_list(company_id) btree\ncall_list(person_id) btree\ncall_list(company_id) btree\nperson(company_id) btree\n \nexplain analyze:\nUnique  (cost=1798.47..1809.38 rows=291 width=114) (actual\ntime=766.000..781.000 rows=566 loops=1)  -> \nSort  (cost=1798.47..1799.19 rows=291 width=114) (actual\ntime=766.000..766.000 rows=1473\nloops=1)        Sort Key:\nc.job_id, g.person_id, c.job_no, b.deadline, c.name,\nbid_date(c.job_id), c.miscq, c.city, COALESCE(((c.city)::text ||\nCOALESCE((', '::text || (c.st)::text), ''::text)), (COALESCE(c.st,\n''::character varying))::text), c.st, CASE WHEN (c.fi\n(..)        ->  Hash\nLeft Join  (cost=1750.81..1786.56 rows=291 width=114) (actual\ntime=453.000..750.000 rows=1473\nloops=1)             \nHash Cond: (\"outer\".company_id =\n\"inner\".company_id)             \n->  Merge Left Join  (cost=1707.20..1722.53 rows=291\nwidth=95) (actual time=437.000..484.000 rows=1473\nloops=1)                   \nMerge Cond: (\"outer\".job_id =\n\"inner\".job_id)                   \n->  Sort  (cost=1382.44..1383.17 rows=291 width=91) (actual\ntime=406.000..406.000 rows=1473\nloops=1)                         \nSort Key:\nc.job_id                         \n->  Hash Left Join  (cost=1137.28..1370.53 rows=291\nwidth=91) (actual time=234.000..390.000 rows=1473\nloops=1)                               \nHash Cond: (\"outer\".file_loc =\n\"inner\".person_id)                               \n->  Nested Loop  (cost=1135.94..1365.27 rows=291 width=84)\n(actual time=234.000..390.000 rows=1473\nloops=1)                                     \nJoin Filter: ((\"inner\".person_id = \"outer\".person_id) OR\n(\"position\"((\"inner\".s_team)::text, (\"outer\".initials)::text) >\n0))                                     \n->  Seq Scan on person g  (cost=0.00..1.34 rows=1 width=11)\n(actual time=0.000..0.000 rows=1\nloops=1)                                           \nFilter: (person_id =\n1)                                     \n->  Merge Right Join  (cost=1135.94..1349.74 rows=811\nwidth=84) (actual time=234.000..297.000 rows=7490\nloops=1)                                           \nMerge Cond: (\"outer\".company_id =\n\"inner\".company_id)                                           \n->  Index Scan using idx_company_id_call_list on call_list\nf  (cost=0.00..189.80 rows=4731 width=8) (actual\ntime=0.000..15.000 rows=4731\nloops=1)                                           \n->  Sort  (cost=1135.94..1136.48 rows=214 width=84) (actual\ntime=234.000..234.000 rows=7490\nloops=1)                                                 \nSort Key:\ne.company_id                                                 \n->  Merge Right 
Join  (cost=1004.19..1127.66 rows=214\nwidth=84) (actual time=203.000..219.000 rows=1569\nloops=1)                                                       \nMerge Cond: (\"outer\".company_id =\n\"inner\".company_id)                                                       \n->  Index Scan using company_pkey on company e \n(cost=0.00..117.13 rows=1249 width=4) (actual time=0.000..0.000\nrows=1249\nloops=1)                                                       \n->  Sort  (cost=1004.19..1004.73 rows=214 width=84) (actual\ntime=203.000..203.000 rows=1569\nloops=1)                                                             \nSort Key:\nd.company_id                                                             \n->  Hash Left Join  (cost=633.74..995.91 rows=214 width=84)\n(actual time=156.000..187.000 rows=1569\nloops=1)                                                                   \nHash Cond: (\"outer\".job_id =\n\"inner\".job_id)                                                                   \nJoin Filter: (\"inner\".won_heat OR \"inner\".won_vent OR \"inner\".won_tc OR\n(\"outer\".heat AND \"inner\".bid_heat AND (\"inner\".won_heat IS NULL)) OR\n(\"outer\".vent AND \"inner\".bid_vent AND (\"inner\n(..)                                                                   \n->  Merge Left Join  (cost=368.17..381.60 rows=159\nwidth=83) (actual time=78.000..93.000 rows=695\nloops=1)                                                                         \nMerge Cond: (\"outer\".job_id =\n\"inner\".job_id)                                                                         \n->  Sort  (cost=168.31..168.71 rows=159 width=8) (actual\ntime=31.000..31.000 rows=695\nloops=1)                                                               \n               \nSort Key:\nb.job_id                                                                               \n->  Nested Loop Left Join  (cost=0.00..162.50 rows=159\nwidth=8) (actual time=0.000..31.000 rows=695\nloops=1)                                                                                     \nJoin Filter: (\"outer\".status_id =\n\"inner\".status_id)                                                                                     \n->  Seq Scan on status a  (cost=0.00..1.18 rows=1 width=4)\n(actual time=0.000..0.000 rows=1\nloops=1)                                                                                           \nFilter: ((name)::text = 'Awaiting\nAward'::text)                                                                                     \n->  Seq Scan on status_list b  (cost=0.00..133.66 rows=2213\nwidth=12) (actual time=0.000..15.000 rows=2210\nloops=1)                                                                                           \nFilter:\nactive                                                                         \n->  Sort  (cost=199.86..205.39 rows=2210 width=79) (actual\ntime=47.000..47.000 rows=2194\nloops=1)                                                                               \nSort Key:\nc.job_id                                                                               \n->  Seq Scan on job c  (cost=0.00..77.10 rows=2210\nwidth=79) (actual time=0.000..31.000 rows=2210\nloops=1)                                                                   \n->  Hash  (cost=202.88..202.88 rows=7475 width=14) (actual\ntime=78.000..78.000 rows=0\nloops=1)                                                                         \n->  Seq Scan on builder_list d  (cost=0.00..202.88\nrows=7475 width=14) (actual 
time=0.000..15.000 rows=7517\nloops=1)                                                                               \nFilter: (role =\n'C'::bpchar)                               \n->  Hash  (cost=1.27..1.27 rows=27 width=11) (actual\ntime=0.000..0.000 rows=0\nloops=1)                                     \n->  Seq Scan on person h  (cost=0.00..1.27 rows=27\nwidth=11) (actual time=0.000..0.000 rows=27\nloops=1)                   \n->  Sort  (cost=324.76..330.25 rows=2196 width=8) (actual\ntime=31.000..31.000 rows=3044\nloops=1)                         \nSort Key:\ni.job_id                         \n->  Seq Scan on builder_list i  (cost=0.00..202.88\nrows=2196 width=8) (actual time=0.000..31.000 rows=2153\nloops=1)                               \nFilter: (role =\n'E'::bpchar)             \n->  Hash  (cost=40.49..40.49 rows=1249 width=27) (actual\ntime=16.000..16.000 rows=0 loops=1)\n                   \n->  Seq Scan on company j  (cost=0.00..40.49 rows=1249\nwidth=27) (actual time=0.000..0.000 rows=1249 loops=1)Total\nruntime: 781.000 ms\n", "msg_date": "Sat, 5 Mar 2005 11:38:35 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Query Optimization - Hash Join estimate off?" } ]
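The largest single error in the plan above is the 291-row estimate against 1473 actual rows feeding the hash joins, so a low-risk first step is to give the planner a bigger statistics sample on the join columns involved and re-analyze. This is only a sketch; the target of 250 and the choice of columns are assumptions, not something confirmed in the thread:

-- Larger per-column samples for the columns driving the mis-estimated joins,
-- then refresh the planner statistics.
ALTER TABLE builder_list ALTER COLUMN job_id SET STATISTICS 250;
ALTER TABLE builder_list ALTER COLUMN company_id SET STATISTICS 250;
ALTER TABLE call_list ALTER COLUMN company_id SET STATISTICS 250;
ANALYZE builder_list;
ANALYZE call_list;

Re-running EXPLAIN ANALYZE afterwards shows whether the row estimates move closer to the actual counts before any further restructuring of the query.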
[ { "msg_contents": "Hi,\nI have been doing a project that modify the original codes for pg.\nNow I feel very interesting in table or index partition.\nAnd I have paid great attention to the discussion about that before, and \ngot so much information. \nNow I have some ideas about how to partition on pg. I want to mainly modify \nthe query part, or the storage part. Can any one give me some suggestions \nfor this? Thank you. \n\nBest regards\nAlford\n\n\n", "msg_date": "Mon, 07 Mar 2005 14:26:18 +0800", "msg_from": "\"xsk\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to Partition?" } ]
[ { "msg_contents": "> >> What platform is this on? It seems very strange/fishy \n> that all the \n> >> actual-time values are exact integral milliseconds.\n> \n> > My machine is WinXP professional, athon xp 2100, but I get similar \n> > results on my Intel P4 3.0Ghz as well (which is also \n> running WinXP). \n> > Why do you ask?\n> \n> Well, what it suggests is that gettimeofday() is only \n> returning a result good to the nearest millisecond. (Win32 \n> hackers, does that sound right?)\n\nYes. The gettimeofday() implementation (in\nsrc/backend/port/gettimeofday.c).\nActually, in reality you don't even get millisecond resolution it seems\n(after some reading up). More along the line of\n10-millisecond-resolution.\n\nSee for example\nhttp://msdn.microsoft.com/msdnmag/issues/04/03/HighResolutionTimer/.\n\n\n\n> Most modern machines seem to have clocks that can count \n> elapsed time down to near the microsecond level. Anyone know \n> if it's possible to get such numbers out of Windows, or are \n> we stuck with milliseconds?\n\nThere are, see link above. But it's definitly not easy. I don't think we\ncan just take the complete code from their exmaple (due to licensing).\nWe could go with the \"middle way\", but it has a couple of pitfalls.\n\nDo we need actual high precision time, or do we just need to be able to\nget high precision differences? Getting the differences is fairly easy,\nbut if you need to \"sync up\" any drif then it becomes a bit more\ndifficult.\n\n\n//Magnus\n", "msg_date": "Mon, 7 Mar 2005 09:55:41 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n> Do we need actual high precision time, or do we just need to be able to\n> get high precision differences? Getting the differences is fairly easy,\n> but if you need to \"sync up\" any drif then it becomes a bit more\n> difficult.\n\nYou're right, we only care about differences not absolute time. If\nthere's something like a microseconds-since-bootup counter, it would\nbe fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 09:11:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" } ]
[ { "msg_contents": "> > Do we need actual high precision time, or do we just need \n> to be able \n> > to get high precision differences? Getting the differences \n> is fairly \n> > easy, but if you need to \"sync up\" any drif then it becomes \n> a bit more \n> > difficult.\n> \n> You're right, we only care about differences not absolute \n> time. If there's something like a microseconds-since-bootup \n> counter, it would be fine.\n\nThere is. I beleive QueryPerformanceCounter has sub-mirosecond\nresolution.\n\nCan we just replace gettimeofday() with a version that's basically:\nif (never_run_before)\n GetSystemTime() and get current timer for baseline.\nnow = baseline + current timer - baseline timer;\nreturn now;\n\n\nOr do we need to make changes at the points where the function is\nactually called?\n\n\n//Magnus\n", "msg_date": "Mon, 7 Mar 2005 15:36:42 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n> There is. I beleive QueryPerformanceCounter has sub-mirosecond\n> resolution.\n\n> Can we just replace gettimeofday() with a version that's basically:\n\nNo, because it's also used for actual time-of-day calls. It'd be\nnecessary to hack executor/instrument.c in particular.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 09:45:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Magnus Hagander\" <[email protected]> writes:\n>\n>\n>>There is. I beleive QueryPerformanceCounter has sub-mirosecond\n>>resolution.\n>>\n>>\n>>Can we just replace gettimeofday() with a version that's basically:\n>>\n>>\n>\n>No, because it's also used for actual time-of-day calls. It'd be\n>necessary to hack executor/instrument.c in particular.\n>\n>\t\t\tregards, tom lane\n>\n>\nIt seems that there are 2 possibilities. Leave gettimeofday as it is,\nand then change code that calls it for deltas with a\n\"pg_get_high_res_delta_time()\", which on most platforms is just\ngettimeofday, but on win32 is a wrapper for QueryPerformanceCounter().\n\nOr we modify the win32 gettimeofday call to something like:\n\ngettimeofday(struct timeval *tv, struct timezone *tz)\n{\n static int initialized = 0;\n static LARGE_INTEGER freq = {0};\n static LARGE_INTEGER base = {0};\n static struct time_t base_tm = {0};\n LARGE_INTEGER now = {0};\n int64_t delta_secs = 0;\n\n if(!initialized) {\n QueryPerformanceFrequency(&freq);\n base_tm = time(NULL); // This can be any moderately accurate time\nfunction, maybe getlocaltime if it exists\n QueryPerformanceCounter(&base);\n }\n\n QueryPerformanceCounter(&now);\n delta_secs = now.QuadPart - base.QuadPart;\n tv->tv_sec = delta_secs / freq.QuadPart;\n delta_secs -= *tv.tv_sec * freq.QuadPart;\n tv->tv_usec = delta_secs * 1000000 / freq.QuadPart\n\n tv->tv_sec += base_tm;\n\n return 0;\n}", "msg_date": "Mon, 07 Mar 2005 10:29:46 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n>>> Can we just replace gettimeofday() with a version that's basically:\n>> \n>> No, because it's also used for actual time-of-day calls. 
It'd be\n>> necessary to hack executor/instrument.c in particular.\n\n> Or we modify the win32 gettimeofday call to something like:\n\nThat's what Magnus was talking about, but it's really no good because\nit would cause Postgres' now() function to fail to track post-boot-time\nchanges in the system date setting. Which I think would rightly be\nconsidered a bug.\n\nThe EXPLAIN ANALYZE instrumentation code will really be happier with a\nstraight time-since-bootup counter; by using gettimeofday, it is\nvulnerable to giving wrong answers if someone changes the date setting\nwhile the EXPLAIN is running. But there is (AFAIK) no such call among\nthe portable Unix syscalls. It seems reasonable to me to #ifdef that\ncode to make use of QueryPerformanceCounter on Windows. This does not\nmean we want to alter the behavior of gettimeofday() where it's being\nused to find out the time of day.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 11:38:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "Tom Lane wrote:\n\n>John A Meinel <[email protected]> writes:\n>\n>\n>>>>Can we just replace gettimeofday() with a version that's basically:\n>>>>\n>>>>\n>>>No, because it's also used for actual time-of-day calls. It'd be\n>>>necessary to hack executor/instrument.c in particular.\n>>>\n>>>\n>\n>\n>\n>>Or we modify the win32 gettimeofday call to something like:\n>>\n>>\n>\n>That's what Magnus was talking about, but it's really no good because\n>it would cause Postgres' now() function to fail to track post-boot-time\n>changes in the system date setting. Which I think would rightly be\n>considered a bug.\n>\n>The EXPLAIN ANALYZE instrumentation code will really be happier with a\n>straight time-since-bootup counter; by using gettimeofday, it is\n>vulnerable to giving wrong answers if someone changes the date setting\n>while the EXPLAIN is running. But there is (AFAIK) no such call among\n>the portable Unix syscalls. It seems reasonable to me to #ifdef that\n>code to make use of QueryPerformanceCounter on Windows. This does not\n>mean we want to alter the behavior of gettimeofday() where it's being\n>used to find out the time of day.\n>\n>\t\t\tregards, tom lane\n>\n>\n>\n\nWhat if you changed the \"initialized\" to\n\nif (count & 0xFF == 0) {\n count = 1;\n // get the new time of day\n}\n++count;\n\nThen we would only be wrong for 256 gettimeofday calls. I agree it isn't\ngreat, though. And probably better to just abstract (possibly just with\n#ifdef) the calls for accurate timing, from the calls that actually need\nthe real time.\n\nJohn\n=:->", "msg_date": "Mon, 07 Mar 2005 11:24:07 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n\n> Then we would only be wrong for 256 gettimeofday calls. I agree it isn't\n> great, though. And probably better to just abstract (possibly just with\n> #ifdef) the calls for accurate timing, from the calls that actually need\n> the real time.\n\nWhat would be really neato would be to use the rtdsc (sp?) or equivalent\nassembly instruction where available. 
Most processors provide such a thing and\nit would give much lower overhead and much more accurate answers.\n\nThe main problem I see with this would be on multi-processor machines.\n(QueryPerformanceCounter does work properly on multi-processor machines,\nright?)\n\n-- \ngreg\n\n", "msg_date": "07 Mar 2005 13:05:30 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" } ]
[ { "msg_contents": "Env: Sun E4500 with 8 gig of RAM in total. Database is stored\nlocally (not on a network storage devise). A copy of the\npostgresql.conf file is attached.\n\nWhen running queries we are experiencing much bigger result times than\nanticipated.\n\nAttached is a copy of our postgresql.conf file and of our the table\ndefinitions and row counts.\n\nBelow is an example of SQL and the explain plans.\n\nAny help/pointers/tips/etc. for getting this speed up would be great!! \n\nCheers\n\n\nSELECT C.component_id, I.cli,\n BL.ncos_value, BL.description,\n SG.switch_group_code, SG.servcom_name,\n S.description AS status,\n RC.description AS process_status,\n OT.description AS order_type,\n P.party_name,\n RDCR.consumer_ref AS consumer_ref,\n C.raised_dtm AS created_dtm,\n (SELECT dtm FROM orders.communication WHERE\ncomponent_id = C.component_id ORDER BY dtm DESC LIMIT 1) AS status_dtm\n FROM (SELECT * FROM parties.party WHERE\nparty_id = 143 AND is_active = true) P\n JOIN orders.commercial_order CO ON\nCO.party_id = P.party_id\n JOIN (SELECT raised_dtm, component_id,\nlast_supplier_status, component_type_id, current_status_id_fr,\ncommercial_order_id FROM orders.component WHERE raised_dtm BETWEEN\n'2003-01-01 00:00:00'::timestamp AND '2005-01-01 23:59:59'::timestamp \nAND component_type_id IN (3, 2, 1)) C ON C.commercial_order_id =\nCO.commercial_order_id\n JOIN (SELECT * FROM orders.ida WHERE cli IS\nNOT NULL ) I ON C.component_id = I.component_id\n --Get the consumer reference if there is one\n LEFT JOIN parties.consumer_ref RDCR ON\nCO.consumer_ref = RDCR.consumer_ref_id\n --May or may not have barring level or ncos\ndependant on the order type\n LEFT JOIN line_configs.ida_barring_level BL\nON I.ida_barring_level_id = BL.ida_barring_level_id\n LEFT JOIN line_configs.switch_group SG ON\nI.switchgroup_id = SG.switch_group_id\n --Get the order type\n JOIN business_rules.component_type CT ON\nC.component_type_id = CT.component_type_id\n JOIN business_rules.order_type OT ON\nOT.order_type_id = CT.order_type_id\n --Get the status\n LEFT JOIN orders.status S ON S.status_id =\nC.current_status_id_fr\n --Get the process status\n LEFT JOIN orders.response_code RC ON\nRC.response_code_id = C.last_supplier_status\n \n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Hash Join (cost=18.02..16067.46 rows=1158 width=277) (actual\ntime=639100.57..957020.42 rows=34638 loops=1)\n Hash Cond: (\"outer\".last_supplier_status = \"inner\".response_code_id)\n -> Hash Join (cost=9.29..16038.49 rows=1158 width=218) (actual\ntime=639084.27..937250.67 rows=34638 loops=1)\n Hash Cond: (\"outer\".current_status_id_fr = \"inner\".status_id)\n -> Hash Join (cost=8.17..16017.14 rows=1158 width=197)\n(actual time=639083.19..931508.95 rows=34638 loops=1)\n Hash Cond: (\"outer\".order_type_id = \"inner\".order_type_id)\n -> Hash Join (cost=6.99..15995.69 rows=1158\nwidth=180) (actual time=639082.01..926146.92 rows=34638 loops=1)\n Hash Cond: (\"outer\".component_type_id =\n\"inner\".component_type_id)\n -> Hash Join (cost=5.47..15973.91 rows=1158\nwidth=172) (actual time=639080.29..921574.75 rows=34638 loops=1)\n Hash Cond: (\"outer\".switchgroup_id =\n\"inner\".switch_group_id)\n -> Hash Join (cost=1.49..15949.66\nrows=1158 width=147) (actual 
time=639074.90..917437.55 rows=34638\nloops=1)\n Hash Cond:\n(\"outer\".ida_barring_level_id = \"inner\".ida_barring_level_id)\n -> Merge Join (cost=0.00..15927.90\nrows=1158 width=112) (actual time=639073.24..914042.15 rows=34638\nloops=1)\n Merge Cond:\n(\"outer\".consumer_ref = \"inner\".consumer_ref_id)\n -> Nested Loop \n(cost=0.00..2630554.06 rows=1158 width=91) (actual\ntime=639072.57..909395.62 rows=34638 loops=1)\n -> Nested Loop \n(cost=0.00..2626789.68 rows=1244 width=66) (actual\ntime=639053.64..902100.16 rows=34638 loops=1)\n -> Nested Loop \n(cost=0.00..2599576.29 rows=7041 width=38) (actual\ntime=2073.94..891860.92 rows=46376 loops=1)\n Join Filter:\n(\"outer\".party_id = \"inner\".party_id)\n -> Index\nScan using commercial_order_consumer_ref_ix on commercial_order co \n(cost=0.00..19499.42 rows=725250 width=12) (actual time=8.62..30310.16\nrows=725250 loops=1)\n -> Seq Scan\non party (cost=0.00..3.54 rows=1 width=26) (actual time=0.62..1.16\nrows=1 loops=725250)\n Filter:\n((party_id = 143) AND (is_active = true))\n -> Index Scan\nusing component_commercial_order_id_ix on component (cost=0.00..3.85\nrows=1 width=28) (actual time=0.17..0.18 rows=1 loops=46376)\n Index Cond:\n(component.commercial_order_id = \"outer\".commercial_order_id)\n Filter:\n((raised_dtm >= '2003-01-01 00:00:00'::timestamp without time zone)\nAND (raised_dtm <= '2005-01-01 23:59:59'::timestamp without time zone)\nAND ((component_type_id = 3) OR (component_type_id = 2) OR\n(component_type_id = 1)))\n -> Index Scan using\nida_pkey on ida (cost=0.00..3.01 rows=1 width=25) (actual\ntime=0.12..0.14 rows=1 loops=34638)\n Index Cond:\n(\"outer\".component_id = ida.component_id)\n Filter: (cli IS NOT NULL)\n -> Index Scan using\nconsumer_ref_pk on consumer_ref rdcr (cost=0.00..24.31 rows=937\nwidth=21) (actual time=0.48..0.48 rows=1 loops=1)\n -> Hash (cost=1.39..1.39 rows=39\nwidth=35) (actual time=1.07..1.07 rows=0 loops=1)\n -> Seq Scan on\nida_barring_level bl (cost=0.00..1.39 rows=39 width=35) (actual\ntime=0.07..0.76 rows=39 loops=1)\n -> Hash (cost=3.59..3.59 rows=159\nwidth=25) (actual time=4.54..4.54 rows=0 loops=1)\n -> Seq Scan on switch_group sg \n(cost=0.00..3.59 rows=159 width=25) (actual time=0.09..3.13 rows=159\nloops=1)\n -> Hash (cost=1.41..1.41 rows=41 width=8)\n(actual time=0.90..0.90 rows=0 loops=1)\n -> Seq Scan on component_type ct \n(cost=0.00..1.41 rows=41 width=8) (actual time=0.08..0.64 rows=41\nloops=1)\n -> Hash (cost=1.15..1.15 rows=15 width=17) (actual\ntime=0.43..0.43 rows=0 loops=1)\n -> Seq Scan on order_type ot (cost=0.00..1.15\nrows=15 width=17) (actual time=0.08..0.31 rows=15 loops=1)\n -> Hash (cost=1.09..1.09 rows=9 width=21) (actual\ntime=0.29..0.29 rows=0 loops=1)\n -> Seq Scan on status s (cost=0.00..1.09 rows=9\nwidth=21) (actual time=0.08..0.22 rows=9 loops=1)\n -> Hash (cost=7.99..7.99 rows=299 width=59) (actual\ntime=8.69..8.69 rows=0 loops=1)\n -> Seq Scan on response_code rc (cost=0.00..7.99 rows=299\nwidth=59) (actual time=0.16..5.94 rows=299 loops=1)\n SubPlan\n -> Limit (cost=21.23..21.23 rows=1 width=8) (actual\ntime=0.45..0.46 rows=1 loops=34638)\n -> Sort (cost=21.23..21.27 rows=16 width=8) (actual\ntime=0.44..0.44 rows=1 loops=34638)\n Sort Key: dtm\n -> Index Scan using communication_component_id_ix on\ncommunication (cost=0.00..20.90 rows=16 width=8) (actual\ntime=0.12..0.14 rows=1 loops=34638)\n Index Cond: (component_id = $0)\n Total runtime: 957091.40 msec\n(47 rows)\n\n\n\nSELECT raised_dtm, component_id, last_supplier_status,\ncomponent_type_id, 
current_status_id_fr, commercial_order_id FROM\norders.component WHERE raised_dtm BETWEEN '2003-01-01\n00:00:00'::timestamp AND '2005-01-01 23:59:59'::timestamp AND\ncomponent_type_id IN (3, 2, 1)\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using component_raised_dtm_ix on component \n(cost=0.00..17442.38 rows=128571 width=28) (actual time=1.04..20781.05\nrows=307735 loops=1)\n Index Cond: ((raised_dtm >= '2003-01-01 00:00:00'::timestamp\nwithout time zone) AND (raised_dtm <= '2005-01-01 23:59:59'::timestamp\nwithout time zone))\n Filter: ((component_type_id = 3) OR (component_type_id = 2) OR\n(component_type_id = 1))\n Total runtime: 21399.79 msec\n(4 rows)\n\n\nSELECT * FROM orders.ida WHERE cli IS NOT NULL;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Seq Scan on ida (cost=0.00..12420.24 rows=677424 width=25) (actual\ntime=0.15..16782.27 rows=677415 loops=1)\n Filter: (cli IS NOT NULL)\n Total runtime: 17885.80 msec\n(3 rows)", "msg_date": "Mon, 7 Mar 2005 14:46:38 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Tuning, configuration for 7.3.5 on a Sun E4500" }, { "msg_contents": "Tsarevich,\n\n> When running queries we are experiencing much bigger result times than\n> anticipated.\n>\n> Attached is a copy of our postgresql.conf file and of our the table\n> definitions and row counts.\n\nLooks like you haven't run ANALYZE on the database anytime recently. Try that \nand re-run.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 7 Mar 2005 09:31:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning, configuration for 7.3.5 on a Sun E4500" }, { "msg_contents": "Analyze has been run on the database quite frequently during the\ncourse of us trying to figure out this performance issue. It is also\na task that is crontabbed nightly.\n\n\nOn Mon, 7 Mar 2005 09:31:06 -0800, Josh Berkus <[email protected]> wrote:\n> Tsarevich,\n> \n> > When running queries we are experiencing much bigger result times than\n> > anticipated.\n> >\n> > Attached is a copy of our postgresql.conf file and of our the table\n> > definitions and row counts.\n> \n> Looks like you haven't run ANALYZE on the database anytime recently. Try that\n> and re-run.\n> \n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n", "msg_date": "Tue, 8 Mar 2005 07:47:37 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning, configuration for 7.3.5 on a Sun E4500" }, { "msg_contents": "Tsarevich,\n\n> Analyze has been run on the database quite frequently during the\n> course of us trying to figure out this performance issue. It is also\n> a task that is crontabbed nightly.\n\nHmmm. Then you probably need to up the STATISTICS levels on the target \ncolumn, because PG is mis-estimating the number of rows returned \nsignificantly. 
That's done by:\n\nALTER TABLE {table} ALTER COLUMN {column} SET STATISTICS {number}\n\nGenerally, I find that if mis-estimation occurs, you need to raise statistics \nto at least 250.\n\nHere's where I see the estimation issues with your EXPLAIN:\n\n                                                   ->  Index Scan\nusing component_commercial_order_id_ix on component  (cost=0.00..3.85\nrows=1 width=28) (actual time=0.17..0.18 rows=1 loops=46376)\n                                                         Index Cond:\n(component.commercial_order_id = \"outer\".commercial_order_id)\n                                                         Filter:\n((raised_dtm >= '2003-01-01 00:00:00'::timestamp without time zone)\nAND (raised_dtm <= '2005-01-01 23:59:59'::timestamp without time zone)\nAND ((component_type_id = 3) OR (component_type_id = 2) OR\n(component_type_id = 1)))\n\n                 ->  Index Scan using communication_component_id_ix on\ncommunication  (cost=0.00..20.90 rows=16 width=8) (actual\ntime=0.12..0.14 rows=1 loops=34638)\n                       Index Cond: (component_id = $0)\n\nSo it looks like you need to raise the stats on communication.component_id and \ncomponent.commercial_order_id,raised_dtm,component_type_id. You also may \nwant to consider a multi-column index on the last set.\n\nBTW, if you have any kind of data update traffic at all, ANALYZE once a day is \nnot adequate.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 09:10:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning, configuration for 7.3.5 on a Sun E4500" } ]
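Spelled out, the suggestion above comes down to something like the following, with 250 as the starting statistics target and an illustrative name for the multi-column index:

ALTER TABLE orders.component ALTER COLUMN commercial_order_id SET STATISTICS 250;
ALTER TABLE orders.component ALTER COLUMN raised_dtm SET STATISTICS 250;
ALTER TABLE orders.component ALTER COLUMN component_type_id SET STATISTICS 250;
ALTER TABLE orders.communication ALTER COLUMN component_id SET STATISTICS 250;
ANALYZE orders.component;
ANALYZE orders.communication;

-- Multi-column index covering the component filters that are used together.
CREATE INDEX component_order_type_dtm_ix
    ON orders.component (commercial_order_id, component_type_id, raised_dtm);

As with any new index, it is worth checking the next EXPLAIN ANALYZE to confirm the planner actually picks it up and that the row estimates improve.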
[ { "msg_contents": "The recompile was done by the sysadmin, but I believe the flags are -pg \n-DLINUX_PROFILING for profiling, and -g for debug symbols.\nThis leaves gmon.out files around, which you can then do a \"gprof \n/usr/bin/postmaster gmon.out\" to see whats going on.\n\nMy problem is that this gives me data on what functions are being called \nwith respect to the postmaster binary, but I don't know\nwhich of my functions - in my shared library - in my C procedure are \ntaking the most time.\n\n-Adam\nMohan, Ross wrote:\n\n>Adam - \n>\n>Is compiling postmaster with profiling support just a flag\n>in the build/make? Or is there something more involved? \n>\n>I'd like to be able to do this in the future and so am\n>curious about means/methods. \n>\n>If this is a RTFM, just let me know that (am currently \n>Reading The F Manual), but if you have any \"special sauce\"\n>here, that'd be of great interest. \n>\n>Thanks\n>\n>-Ross\n>\n>-----Original Message-----\n>From: [email protected] [mailto:[email protected]] On Behalf Of Adam Palmblad\n>Sent: Wednesday, April 06, 2005 7:23 PM\n>To: [email protected]\n>Subject: [PERFORM] Tweaking a C Function I wrote\n>\n>\n>I wanted to see if I could squeeze any more performance out of a C set \n>returning function I wrote. As such, I looked to a profiler. Is it \n>possible to get profile information on the function I wrote? I've got \n>postmaster and my function compiled with profiling support, and can find \n>the gmon.out files... can I actually look at the call tree that occurs \n>when my function is being executed or will I be limited to viewing calls \n>to functions in the postmaster binary?\n>\n>-Adam\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n", "msg_date": "Mon, 07 Mar 2005 14:26:04 -0800", "msg_from": "Adam Palmblad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building postmaster with Profiling Support WAS \"Tweaking a C" }, { "msg_contents": "Adam - \n\nIs compiling postmaster with profiling support just a flag\nin the build/make? Or is there something more involved? \n\nI'd like to be able to do this in the future and so am\ncurious about means/methods. \n\nIf this is a RTFM, just let me know that (am currently \nReading The F Manual), but if you have any \"special sauce\"\nhere, that'd be of great interest. \n\nThanks\n\n-Ross\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Adam Palmblad\nSent: Wednesday, April 06, 2005 7:23 PM\nTo: [email protected]\nSubject: [PERFORM] Tweaking a C Function I wrote\n\n\nI wanted to see if I could squeeze any more performance out of a C set \nreturning function I wrote. As such, I looked to a profiler. Is it \npossible to get profile information on the function I wrote? I've got \npostmaster and my function compiled with profiling support, and can find \nthe gmon.out files... 
can I actually look at the call tree that occurs \nwhen my function is being executed or will I be limited to viewing calls \nto functions in the postmaster binary?\n\n-Adam\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n", "msg_date": "Thu, 7 Apr 2005 15:21:32 -0000", "msg_from": "\"Mohan, Ross\" <[email protected]>", "msg_from_op": false, "msg_subject": "Building postmaster with Profiling Support WAS \"Tweaking a C Function\n\tI wrote\"" }, { "msg_contents": "\"Mohan, Ross\" <[email protected]> writes:\n> Is compiling postmaster with profiling support just a flag\n> in the build/make? Or is there something more involved? \n\ncd .../src/backend\nmake PROFILE=\"-pg -DLINUX_PROFILE\" all\nreinstall binary\n\nYou don't need -DLINUX_PROFILE if not on Linux, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Apr 2005 11:57:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building postmaster with Profiling Support WAS \"Tweaking a C\n\tFunction I wrote\"" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Greg Stark [mailto:[email protected]]\n> Sent: Monday, March 07, 2005 12:06 PM\n> To: John A Meinel\n> Cc: Tom Lane; Magnus Hagander; Ken Egervari;\n> [email protected]; [email protected]\n> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning \n> this query (with\n> [...]\n> What would be really neato would be to use the rtdsc (sp?) or \n> equivalent assembly instruction where available. Most processors\n> provide such a thing and it would give much lower overhead and much\n> more accurate answers.\n> \n> The main problem I see with this would be on multi-processor\n> machines. (QueryPerformanceCounter does work properly on \n> multi-processor machines, right?)\n\nI believe QueryPerformanceCounter() already does this.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Mon, 7 Mar 2005 16:30:40 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "\n\"Dave Held\" <[email protected]> writes:\n\n> > What would be really neato would be to use the rtdsc (sp?) or \n> > equivalent assembly instruction where available. Most processors\n> > provide such a thing and it would give much lower overhead and much\n> > more accurate answers.\n> > \n> > The main problem I see with this would be on multi-processor\n> > machines. (QueryPerformanceCounter does work properly on \n> > multi-processor machines, right?)\n> \n> I believe QueryPerformanceCounter() already does this.\n\nThis would be a good example of why selectively quoting the part of the\nmessage to which you're responding to is more useful than just blindly echoing\nmy message back to me.\n\nAlready does what? \n\nUse rtdsc? In which case using it would be a mistake. Since rtdsc doesn't work\nacross processors. And using it via QueryPerformanceCounter would be a\nnon-portable approach to using rtdsc. Much better to devise a portable\napproach that works on any architecture where something equivalent is\navailable.\n\nOr already works on multi-processor machines? In which case, uh, ok.\n\n\n-- \ngreg\n\n", "msg_date": "07 Mar 2005 18:15:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, March 07, 2005 10:39 AM\n> To: John A Meinel\n> Cc: Magnus Hagander; Ken Egervari; [email protected];\n> [email protected]\n> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning \n> this query\n> (with explain analyze finally)\n> \n> [...]\n> The EXPLAIN ANALYZE instrumentation code will really be happier with a\n> straight time-since-bootup counter; by using gettimeofday, it is\n> vulnerable to giving wrong answers if someone changes the date setting\n> while the EXPLAIN is running. But there is (AFAIK) no such call among\n> the portable Unix syscalls. It seems reasonable to me to #ifdef that\n> code to make use of QueryPerformanceCounter on Windows. This does not\n> mean we want to alter the behavior of gettimeofday() where it's being\n> used to find out the time of day.\n\nThere is always clock(). It's mandated by ANSI C, but my docs say\nthat POSIX requires CLOCKS_PER_SEC == 1000000 regardless of actual\ntimer resolution, which seems a little brain-dead to me.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Mon, 7 Mar 2005 16:34:32 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with explain analyze\n\tfinally)" }, { "msg_contents": "Dave Held wrote:\n\n>There is always clock(). It's mandated by ANSI C, but my docs say\n>that POSIX requires CLOCKS_PER_SEC == 1000000 regardless of actual\n>timer resolution, which seems a little brain-dead to me.\n>\n>__\n>David B. Held\n> \n>\n\nMy experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and \nit had a resolution of 55clocks / s. When I just did this:\n\nint main(int argc, char **argv)\n{\n int start = clock();\n int now = start;\n cout << \"Clock: \" << CLOCKS_PER_SEC << endl;\n for(int i = 0; i < 10; ++i) {\n while(now == clock()) {\n // Do nothing\n }\n now = clock();\n cout << now-start << \"\\t\" << (now - start) / (double) \nCLOCKS_PER_SEC << endl;\n }\n}\n\nI got:\nClock: 1000\n16 0.016\n31 0.031\n47 0.047\n62 0.062\n78 0.078\n93 0.093\n109 0.109\n125 0.125\n141 0.141\n156 0.156\n\nWhich is about 1/0.016 = 62.5 clocks per second.\nI'm pretty sure this is slightly worse than what we want. :)\nIt might be better on other platforms, but on win32 clock() is most \ndefinitely *not* what you want.\nJohn\n=:->", "msg_date": "Mon, 07 Mar 2005 16:48:10 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> Dave Held wrote:\n>> There is always clock().\n\n> My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and \n> it had a resolution of 55clocks / s. 
When I just did this:\n\nThe other problem is it measures process CPU time, not elapsed time\nwhich is probably more significant for our purposes.\n\nWhich brings up a question: just what does QueryPerformanceCounter\nmeasure?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 17:56:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with " }, { "msg_contents": "Tom Lane wrote:\n\n>John A Meinel <[email protected]> writes:\n> \n>\n>>Dave Held wrote:\n>> \n>>\n>>>There is always clock().\n>>> \n>>>\n>\n> \n>\n>>My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and \n>>it had a resolution of 55clocks / s. When I just did this:\n>> \n>>\n>\n>The other problem is it measures process CPU time, not elapsed time\n>which is probably more significant for our purposes.\n>\n>Which brings up a question: just what does QueryPerformanceCounter\n>measure?\n>\n>\t\t\tregards, tom lane\n> \n>\nclock() according to the Visual Studio Help measures wall clock time. \nBut you're right, POSIX says it is approximation of processor time.\n\nThe docs don't say specifically what QueryPerformanceCounter() measures, \nbut states\n\n> The *QueryPerformanceCounter* function retrieves the current value of \n> the high-resolution performance counter.\n>\nIt also states:\n\n>\n> Remarks\n>\n> On a multiprocessor machine, it should not matter which processor is \n> called. However, you can get different results on different processors \n> due to bugs in the BIOS or the HAL. To specify processor affinity for \n> a thread, use the *SetThreadAffinityMask* function.\n>\n\nSo it sounds like it is actually querying some counter independent of \nprocessing.\n\nIn fact, there is also this statement:\n\n> *QueryPerformanceFrequency*\n>\n> The QueryPerformanceFrequency function retrieves the frequency of the \n> high-resolution performance counter, if one exists. The frequency \n> cannot change while the system is running.\n>\nIf that is accurate, it would make QueryPerformanceCounter independent \nof things like speed stepping, etc. So again, it sounds independent of \nprocessing.\n\nJohn\n=:->", "msg_date": "Mon, 07 Mar 2005 22:35:09 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" } ]
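A small stand-alone probe, in the same spirit as the clock() test above, shows how fine-grained gettimeofday() actually is on a given Unix machine; this is illustrative test code, not part of PostgreSQL:

#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
    struct timeval prev, cur;
    int i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < 10; i++)
    {
        /* spin until the reported time changes, then print the step size */
        do
            gettimeofday(&cur, NULL);
        while (cur.tv_sec == prev.tv_sec && cur.tv_usec == prev.tv_usec);

        printf("step: %ld usec\n",
               (long) ((cur.tv_sec - prev.tv_sec) * 1000000L +
                       (cur.tv_usec - prev.tv_usec)));
        prev = cur;
    }
    return 0;
}

On most Linux boxes of this era the steps come out in the low microseconds, which is why gettimeofday() looks like the better building block on the Unix side.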
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf \n> Of Tom Lane\n> Sent: 07 March 2005 22:57\n> To: John A Meinel\n> Cc: Dave Held; [email protected]; \n> [email protected]\n> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning \n> this query (with \n> \n> Which brings up a question: just what does QueryPerformanceCounter\n> measure?\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/w\ninui/windowsuserinterface/windowing/timers/abouttimers.asp says:\n\nIf a high-resolution performance counter exists on the system, you can\nuse the QueryPerformanceFrequency function to express the frequency, in\ncounts per second. The value of the count is processor dependent. On\nsome processors, for example, the count might be the cycle rate of the\nprocessor clock.\n\nThe QueryPerformanceCounter function retrieves the current value of the\nhigh-resolution performance counter. By calling this function at the\nbeginning and end of a section of code, an application essentially uses\nthe counter as a high-resolution timer. For example, suppose that\nQueryPerformanceFrequency indicates that the frequency of the\nhigh-resolution performance counter is 50,000 counts per second. If the\napplication calls QueryPerformanceCounter immediately before and\nimmediately after the section of code to be timed, the counter values\nmight be 1500 counts and 3500 counts, respectively. These values would\nindicate that .04 seconds (2000 counts) elapsed while the code executed.\n\n\nRegards, Dave.\n", "msg_date": "Mon, 7 Mar 2005 23:08:57 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with " } ]
[ { "msg_contents": " I'm trying to understand why a particular query is slow, and it seems \nlike the optimizer is choosing a strange plan. See this summary:\n\n\n* I have a large table, with an index on the primary key 'id' and on a \nfield 'foo'.\n> select count(*) from foo;\n1,000,000\n> select count(*) from foo where bar = 41;\n7\n\n* This query happens very quickly.\n> explain select * from foo where barId = 412 order by id desc;\nSort ()\n Sort key= id\n -> Index scan using bar_index on foo ()\n Index cond: barId = 412\n\nBut this query takes forever\n\n> explain select * from foo where barId = 412 order by id desc limit 25;\nLimit ()\n -> Index scan backward using primarykey_index\n Filter: barID = 412\n\n\nCould anyone shed some light on what might be happening here?\n\n - Michael\n\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/mail/\n", "msg_date": "Mon, 07 Mar 2005 18:39:43 -0500", "msg_from": "\"Michael McFarland\" <[email protected]>", "msg_from_op": true, "msg_subject": "adding 'limit' leads to very slow query" }, { "msg_contents": "Michael McFarland wrote:\n\n> I'm trying to understand why a particular query is slow, and it \n> seems like the optimizer is choosing a strange plan. See this summary:\n>\n...\n\n>> explain select * from foo where barId = 412 order by id desc limit 25;\n>\n> Limit ()\n> -> Index scan backward using primarykey_index\n> Filter: barID = 412\n>\n>\n> Could anyone shed some light on what might be happening here?\n>\n> - Michael\n\nIt is using the wrong index. The problem is that order by + limit \ngenerally means that you can use the index on the \"+\" to get the items \nin the correct order. In this case, however, you need it to find all of \nthe barId=412 first, since apparently that is more selective than the limit.\n\nIt really sounds like the postgres statistics are out of date. And \neither you haven't run vacuum analyze recently, or you need to keep \nhigher statistics on either one or both of barId and id.\n\nJohn\n=:->", "msg_date": "Mon, 07 Mar 2005 22:46:37 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding 'limit' leads to very slow query" }, { "msg_contents": "On Mon, 7 Mar 2005, Michael McFarland wrote:\n\n> I'm trying to understand why a particular query is slow, and it seems\n> like the optimizer is choosing a strange plan. 
See this summary:\n>\n>\n> * I have a large table, with an index on the primary key 'id' and on a\n> field 'foo'.\n> > select count(*) from foo;\n> 1,000,000\n> > select count(*) from foo where bar = 41;\n> 7\n>\n> * This query happens very quickly.\n> > explain select * from foo where barId = 412 order by id desc;\n> Sort ()\n> Sort key= id\n> -> Index scan using bar_index on foo ()\n> Index cond: barId = 412\n>\n> But this query takes forever\n>\n> > explain select * from foo where barId = 412 order by id desc limit 25;\n> Limit ()\n> -> Index scan backward using primarykey_index\n> Filter: barID = 412\n\nYou didn't show the row estimates, but I'd guess that it's expecting\neither that ther are more rows that match barId=412 than there actually\nare (which may be solvable by raising the statistics target on the column\nand re-analyzing) such that going backwards on id in order to make 25\nmatching rows isn't a bad plan or that barId and id are correlated which\nis unfortunately not going to be recognized right now.\n", "msg_date": "Mon, 7 Mar 2005 23:03:43 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding 'limit' leads to very slow query" }, { "msg_contents": " I continue to be stumped by this. You are right that I should have \nlisted the estimates provided by explain... basically for the select where \nbar = 41, it's estimating there will be 40,000 rows instead of 7, out of \nwhat's actuallly 5 million records in the table.\n\n So far I've tried increase statistics for the bar column from the \ndefault 10 to 100 (vacuum analyzing after) and the explain-plan hasn't \nchanged. I also notice that afterward, the pg_stats record for the bar \ncolumn still only lists the top 5 values of bar (out of 68 unique values \nin the table). Are there any other settings I could try to improve the \ndetail of the statistics?\n\n By the way, I think I do have a workaround for this particular query:\n select * from (select * from foo where barId = 412 order by id \ndesc) as tempview limit 25;\nThis query uses the bar index and completes instantly. However, I feel \nlike I should find the heart of the problem, since bad statistics could \nend up affecting other plans, right?\n\n - Mike\n\n\nOn Mon, 7 Mar 2005 23:03:43 -0800 (PST), Stephan Szabo \n<[email protected]> wrote:\n\n> On Mon, 7 Mar 2005, Michael McFarland wrote:\n>\n>> I'm trying to understand why a particular query is slow, and it seems\n>> like the optimizer is choosing a strange plan. 
See this summary:\n>>\n>>\n>> * I have a large table, with an index on the primary key 'id' and on a\n>> field 'foo'.\n>> > select count(*) from foo;\n>> 1,000,000\n>> > select count(*) from foo where bar = 41;\n>> 7\n>>\n>> * This query happens very quickly.\n>> > explain select * from foo where barId = 412 order by id desc;\n>> Sort ()\n>> Sort key= id\n>> -> Index scan using bar_index on foo ()\n>> Index cond: barId = 412\n>>\n>> But this query takes forever\n>>\n>> > explain select * from foo where barId = 412 order by id desc limit 25;\n>> Limit ()\n>> -> Index scan backward using primarykey_index\n>> Filter: barID = 412\n>\n> You didn't show the row estimates, but I'd guess that it's expecting\n> either that ther are more rows that match barId=412 than there actually\n> are (which may be solvable by raising the statistics target on the column\n> and re-analyzing) such that going backwards on id in order to make 25\n> matching rows isn't a bad plan or that barId and id are correlated which\n> is unfortunately not going to be recognized right now.\n>\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/mail/\n", "msg_date": "Wed, 09 Mar 2005 11:00:20 -0500", "msg_from": "\"Michael McFarland\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: adding 'limit' leads to very slow query" }, { "msg_contents": "On Wed, 9 Mar 2005, Michael McFarland wrote:\n\n> I continue to be stumped by this. You are right that I should have\n> listed the estimates provided by explain... basically for the select where\n> bar = 41, it's estimating there will be 40,000 rows instead of 7, out of\n> what's actuallly 5 million records in the table.\n>\n> So far I've tried increase statistics for the bar column from the\n> default 10 to 100 (vacuum analyzing after) and the explain-plan hasn't\n> changed. I also notice that afterward, the pg_stats record for the bar\n\nDid the estimates change at all?\n\n> column still only lists the top 5 values of bar (out of 68 unique values\n> in the table). Are there any other settings I could try to improve the\n> detail of the statistics?\n\nWell, I'd first try moving up to a statistic target of 1000 in\norder to try sampling a greater number of rows. I'd wonder if there's\nenough difference in frequency that it's just not visiting any with the\nother values. I'm not sure that it'll help that much though; hopefully\nsomeone else will have an idea.\n\n> By the way, I think I do have a workaround for this particular query:\n> select * from (select * from foo where barId = 412 order by id\n> desc) as tempview limit 25;\n> This query uses the bar index and completes instantly. However, I feel\n> like I should find the heart of the problem, since bad statistics could\n> end up affecting other plans, right?\n\nYeah, it's best to get it to estimate somewhat reasonably before looking\nfor workarounds.\n", "msg_date": "Mon, 14 Mar 2005 06:50:21 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding 'limit' leads to very slow query" } ]
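Separately from the statistics, the usual structural fix for a "WHERE barId = ? ORDER BY id DESC LIMIT n" pattern is a two-column index, so one index scan can satisfy both the filter and the ordering instead of forcing the planner to pick between the two single-column indexes. A sketch using the names from the thread (the post mixes "bar" and "barId"; "barId" is assumed here):

CREATE INDEX foo_barid_id_idx ON foo (barId, id);

-- With this index, a query such as
--   SELECT * FROM foo WHERE barId = 412 ORDER BY id DESC LIMIT 25;
-- can be answered by one backward scan over the matching slice of the index.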
[ { "msg_contents": "The following query takes approx. 3-5+ minutes\nto complete. I would like to get this down to around\n2-3 seconds. Other RDBMS complete it in <1 second.\n\nI am running 8.0.1 on XP P4 2.6 1GB for dev work. \n\nselect i.internalid, c.code\nfrom local.internal i\ninner join country.ip c on\n(i.ip between c.startip and c.endip)\n\nNested Loop (cost=167.59..7135187.85 rows=31701997\nwidth=10) (actual\ntime=63.000..776094.000 rows=5235 loops=1)\n Join Filter: ((inner.ip >= outer.startip) AND\n(inner.ip <=\nouter.endip))\n -> Seq Scan on ip c (cost=0.00..2071.02 rows=54502\nwidth=28)\n(actual time=0.000..313.000 rows=54502 loops=1)\n -> Materialize (cost=167.59..219.94 rows=5235\nwidth=15) (actual\ntime=0.000..2.973 rows=5235 loops=54502)\n -> Seq Scan on internal i (cost=0.00..162.35\nrows=5235\nwidth=15) (actual time=0.000..16.000 rows=5235\nloops=1)\nTotal runtime: 776110.000 ms\n\n\n-- data from ip-to-country.webhosting.info\nCREATE TABLE country.ip -- 54,502 rows\n(\n startip inet NOT NULL,\n endip inet NOT NULL,\n code char(2) NOT NULL,\n CONSTRAINT ip_pkey PRIMARY KEY (startip, endip)\n);\n-- 1, 192.168.1.10, 192.168.2.100, US\n-- 2, 192.168.3.0, 192.168.3.118, US\n\nCREATE TABLE local.internal -- 5000+ rows\n(\n internalid serial NOT NULL,\n ip inet NOT NULL,\n port int2 NOT NULL,\n CONSTRAINT internal_pkey PRIMARY KEY (internalid)\n);\nCREATE INDEX ip_idx ON local.internal (ip);\n-- 1, 10.0.0.100, 80\n-- 2, 10.0.0.102, 80\n-- 3, 10.0.0.103, 443\n\n--\npostgresql.conf\nhave tried many settings with no improvement\nmax_connections = 50\nshared_buffers = 30000\nwork_mem = 2048\nsort_mem = 2048\n\n\nHave tried many different indexes with no help:\nCREATE INDEX endip_idx ON country.ip;\nCREATE INDEX startip_idx ON country.ip;\nCREATE UNIQUE INDEX e_s_idx ON country.ip\n (endip, startip);\n\n\nAny suggestions would be greatly appreciated.\n\n\n\t\n\t\t\n__________________________________ \nCelebrate Yahoo!'s 10th Birthday! \nYahoo! Netrospective: 100 Moments of the Web \nhttp://birthday.yahoo.com/netrospective/\n", "msg_date": "Mon, 7 Mar 2005 15:54:08 -0800 (PST)", "msg_from": "jesse d <[email protected]>", "msg_from_op": true, "msg_subject": "Help with slow running query" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Greg Stark [mailto:[email protected]]\n> Sent: Monday, March 07, 2005 5:15 PM\n> To: Dave Held\n> Cc: Greg Stark; John A Meinel; Tom Lane; Magnus Hagander; Ken \n> Egervari;\n> [email protected]; [email protected]\n> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning \n> this query\n> (with\n> \n> \"Dave Held\" <[email protected]> writes:\n> \n> > > What would be really neato would be to use the rtdsc (sp?) or \n> > > equivalent assembly instruction where available. Most\n> > > processors provide such a thing and it would give much lower \n> > > overhead and much more accurate answers.\n> > > \n> > > The main problem I see with this would be on multi-processor\n> > > machines. (QueryPerformanceCounter does work properly on \n> > > multi-processor machines, right?)\n> > \n> > I believe QueryPerformanceCounter() already does this.\n> [...]\n> Already does what? \n> \n> Use rtdsc?\n\nYes.\n\n> In which case using it would be a mistake. Since rtdsc doesn't\n> work across processors.\n\nIt doesn't always use RDTSC. I can't find anything authoritative on\nwhen it does. I would assume that it would use RDTSC when available\nand something else otherwise.\n\n> And using it via QueryPerformanceCounter would be a non-portable\n> approach to using rtdsc. Much better to devise a portable\n> approach that works on any architecture where something equivalent\n> is available.\n\nHow do you know that QueryPerformanceCounter doesn't use RDTSC\nwhere available, and something appropriate otherwise? I don't see\nhow any strategy that explicitly executes RDTSC can be called \n\"portable\".\n\n> Or already works on multi-processor machines? In which case, uh, ok.\n\nAccording to MSDN it does work on MP systems, and they say that \"it\ndoesn't matter which CPU gets called\".\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Mon, 7 Mar 2005 18:11:34 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "On Mon, Mar 07, 2005 at 06:11:34PM -0600, Dave Held wrote:\n>> In which case using it would be a mistake. Since rtdsc doesn't\n>> work across processors.\n> It doesn't always use RDTSC. I can't find anything authoritative on\n> when it does. I would assume that it would use RDTSC when available\n> and something else otherwise.\n\nRDTSC is a bad source of information for this kind of thing, as the CPU\nfrequency might vary. Check your QueryPerformanceFrequency() -- most likely\nit will not match your clock speed. I haven't tested on a lot of machines,\nbut I've never seen QueryPerformanceFrequency() ever match the clock speed,\nwhich it most probably would if it was using RDTSC. (I've been told it uses\nsome other kind of timer available on most motherboards, but I don't know the\ndetails.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 8 Mar 2005 02:37:50 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "\"Steinar H. 
Gunderson\" <[email protected]> writes:\n> RDTSC is a bad source of information for this kind of thing, as the CPU\n> frequency might vary.\n\nOne thought that was bothering me was that if the CPU goes idle while\nwaiting for disk I/O, its clock might stop or slow down dramatically.\nIf we believed such a counter for EXPLAIN, we'd severely understate\nthe cost of disk I/O.\n\nI dunno if that is the case on any Windows hardware or not, but none\nof this thread is making me feel confident that we know what\nQueryPerformanceCounter does measure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2005 21:02:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with " }, { "msg_contents": "On Mon, Mar 07, 2005 at 09:02:38PM -0500, Tom Lane wrote:\n> One thought that was bothering me was that if the CPU goes idle while\n> waiting for disk I/O, its clock might stop or slow down dramatically.\n> If we believed such a counter for EXPLAIN, we'd severely understate\n> the cost of disk I/O.\n> \n> I dunno if that is the case on any Windows hardware or not, but none\n> of this thread is making me feel confident that we know what\n> QueryPerformanceCounter does measure.\n\nI believe the counter is actually good in such a situation -- I'm not a Win32\nguru, but I believe it is by far the best timer for measuring, well,\nperformance of a process like this. After all, it's what it was designed to\nbe :-)\n\nOBTW, I think I can name something like 15 or 20 different function calls to\nmeasure time in the Win32 API (all of them in use); it really is a giant\nmess.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 8 Mar 2005 03:06:24 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" }, { "msg_contents": "\n\tFrom the Linux Kernel (make menuconfig) there seem to be two new reliable \nsources for timing information. Note the remark about \"Time Stamp Counter\" \nbelow. Question is, which one of these (or others) are your API functions \nusing ? I have absolutely no idea !\n\n\nCONFIG_HPET_TIMER: \nThis enables the use of the HPET for the kernel's internal timer.\n HPET is the next generation timer replacing legacy 8254s.\n You can safely choose Y here. However, HPET will only be\n activated if the platform and the BIOS support this feature.\n Otherwise the 8254 will be used for timing services.\n Choose N to continue using the legacy 8254 timer.\n Symbol: HPET_TIMER [=y]\n Prompt: HPET Timer Support\n Defined at arch/i386/Kconfig:440\n Location:\n -> Processor type and features\n\nCONFIG_X86_PM_TIMER: \nThe Power Management Timer is available on all \nACPI-capable, in most cases even if \nACPI is unusable or blacklisted. \nThis timing source is not affected by powermanagement \nfeatures like aggressive processor \nidling, throttling, frequency and/or \nvoltage scaling, unlike the commonly used Time Stamp \nCounter (TSC) timing source.\n So, if you see messages like 'Losing too many ticks!' 
in \nthe kernel logs, and/or you are using \nthis on a notebook which does not \nyet have an HPET, you should say \"Y\" here.\n Symbol: X86_PM_TIMER \n[=y] \nPrompt: Power Management Timer \nSupport \nDefined at \ndrivers/acpi/Kconfig:319 \nDepends on: !X86_VOYAGER && !X86_VISWS && !IA64_HP_SIM && (IA64 || X86) && \nX86 && ACPI && ACPI_\n Location: \n-> Power management options (ACPI, \nAPM) -> ACPI \n(Advanced Configuration and Power Interface) \nSupport -> ACPI Support (ACPI [=y])\n\n\n\n\n\n\n\n\n\nOn Tue, 08 Mar 2005 03:06:24 +0100, Steinar H. Gunderson \n<[email protected]> wrote:\n\n> On Mon, Mar 07, 2005 at 09:02:38PM -0500, Tom Lane wrote:\n>> One thought that was bothering me was that if the CPU goes idle while\n>> waiting for disk I/O, its clock might stop or slow down dramatically.\n>> If we believed such a counter for EXPLAIN, we'd severely understate\n>> the cost of disk I/O.\n>>\n>> I dunno if that is the case on any Windows hardware or not, but none\n>> of this thread is making me feel confident that we know what\n>> QueryPerformanceCounter does measure.\n>\n> I believe the counter is actually good in such a situation -- I'm not a \n> Win32\n> guru, but I believe it is by far the best timer for measuring, well,\n> performance of a process like this. After all, it's what it was designed \n> to\n> be :-)\n>\n> OBTW, I think I can name something like 15 or 20 different function \n> calls to\n> measure time in the Win32 API (all of them in use); it really is a giant\n> mess.\n>\n> /* Steinar */\n\n\n", "msg_date": "Tue, 08 Mar 2005 04:51:43 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, March 07, 2005 4:57 PM\n> To: John A Meinel\n> Cc: Dave Held; [email protected];\n> [email protected]\n> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning \n> this query\n> (with\n> \n> John A Meinel <[email protected]> writes:\n> > Dave Held wrote:\n> >> There is always clock().\n> \n> > My experience with clock() on win32 is that CLOCKS_PER_SEC \n> > was 1000, and it had a resolution of 55clocks / s.\n\nWhich is why I suggested QueryPerformanceCounter for Win32. I\nonly suggested clock() for *nix.\n\n> The other problem is it measures process CPU time, not elapsed time\n> which is probably more significant for our purposes.\n\nActually, the bigger problem is that a quick test of clock() on\nLinux shows that it only has a maximum resolution of 10ms on my\nhardware. Looks like gettimeofday() is the best choice.\n\n> Which brings up a question: just what does QueryPerformanceCounter\n> measure?\n\nI think it measures raw CPU cycles, roughly, which seems like it \nwould more or less correspond to wall time.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Mon, 7 Mar 2005 18:29:31 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with" } ]
[ { "msg_contents": "> > RDTSC is a bad source of information for this kind of thing, as the \n> > CPU frequency might vary.\n> \n> One thought that was bothering me was that if the CPU goes \n> idle while waiting for disk I/O, its clock might stop or slow \n> down dramatically.\n> If we believed such a counter for EXPLAIN, we'd severely \n> understate the cost of disk I/O.\n> \n> I dunno if that is the case on any Windows hardware or not, \n> but none of this thread is making me feel confident that we \n> know what QueryPerformanceCounter does measure.\n\nI'm \"reasonaly confident\" that QPC will measure actual wallclock time as\npassed, using a chip that is external to the CPU. (Don't ask me which\nchip :P).\n\nThe docs specifically say: \"Note that the frequency of the\nhigh-resolution performance counter is not the processor speed.\" \n\nIt also indicates that it is possible for hardware not to support it, in\nwhich case the frequency will be reported as zero. I don't know any\nremotely modern wintel system that doesn't, though - it seems this may\nbe referring to the old MIPS port of NT that didn't have it.\n\nI also find:\n\"Depending on the processor and exact version of NT you're using, on an\nIntel you get either the Time Stamp Counter, or the 1.1... MHz timer\nbuilt into the motherboard.\"\n\n\nSo I think we're perfectly safe relying on it. And certainly not alone\nin doing so :-)\n\n//Magnus\n", "msg_date": "Tue, 8 Mar 2005 10:05:08 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Help with tuning this query (with " } ]
[ { "msg_contents": "I posted this on hackers, but I had to post it here.\n\n===================================================================================================================================\nHi all,\nrunning a 7.4.5 engine, I'm facing this bad plan:\n\n\nempdb=# explain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\nempdb-# FROM v_sc_user_request\nempdb-# WHERE\nempdb-# login = 'babinow1'\nempdb-# LIMIT 10 ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1716.38..1716.39 rows=1 width=232) (actual time=52847.239..52847.322 rows=10 loops=1)\n -> Subquery Scan v_sc_user_request (cost=1716.38..1716.39 rows=1 width=232) (actual time=52847.234..52847.301 rows=10 loops=1)\n -> Sort (cost=1716.38..1716.39 rows=1 width=201) (actual time=52847.219..52847.227 rows=10 loops=1)\n Sort Key: sr.id_sat_request\n -> Nested Loop Left Join (cost=1478.82..1716.37 rows=1 width=201) (actual time=3254.483..52847.064 rows=31 loops=1)\n Join Filter: (\"outer\".id_package = \"inner\".id_package)\n -> Nested Loop (cost=493.09..691.55 rows=1 width=193) (actual time=347.665..940.582 rows=31 loops=1)\n -> Nested Loop (cost=493.09..688.49 rows=1 width=40) (actual time=331.446..505.628 rows=31 loops=1)\n Join Filter: (\"inner\".id_user = \"outer\".id_user)\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.00 rows=2 width=16) (actual time=12.065..12.071 rows=1 loops=1)\n Index Cond: ((login)::text = 'babinow1'::text)\n -> Materialize (cost=493.09..531.37 rows=7656 width=28) (actual time=167.654..481.813 rows=8363 loops=1)\n -> Seq Scan on sat_request sr (cost=0.00..493.09 rows=7656 width=28) (actual time=167.644..467.344 rows=8363 loops=1)\n Filter: (request_time > (now() - '1 mon'::interval))\n -> Index Scan using url_pkey on url u (cost=0.00..3.05 rows=1 width=161) (actual time=13.994..14.000 rows=1 loops=31)\n Index Cond: (\"outer\".id_url = u.id_url)\n -> Subquery Scan vsp (cost=985.73..1016.53 rows=1103 width=12) (actual time=25.328..1668.754 rows=493 loops=31)\n -> Merge Join (cost=985.73..1011.01 rows=1103 width=130) (actual time=25.321..1666.666 rows=493 loops=31)\n Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Sort (cost=20.74..20.97 rows=93 width=19) (actual time=0.385..0.431 rows=47 loops=31)\n Sort Key: programs.id_program\n -> Seq Scan on programs (cost=0.00..17.70 rows=93 width=19) (actual time=0.022..11.709 rows=48 loops=1)\n Filter: (id_program <> 0)\n -> Sort (cost=964.99..967.75 rows=1102 width=115) (actual time=14.592..15.218 rows=493 loops=31)\n Sort Key: sequences.id_program\n -> Merge Join (cost=696.16..909.31 rows=1102 width=115) (actual time=79.717..451.495 rows=493 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Merge Left Join (cost=0.00..186.59 rows=1229 width=103) (actual time=0.101..366.854 rows=1247 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Index Scan using packages_pkey on packages p (cost=0.00..131.04 rows=1229 width=103) (actual time=0.048..163.503 rows=1247 loops=1)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..46.83 rows=855 width=4) (actual time=0.022..178.599 rows=879 loops=1)\n -> Sort (cost=696.16..705.69 rows=3812 width=16) (actual time=79.582..79.968 rows=493 loops=1)\n Sort Key: 
sequences.id_package\n -> Seq Scan on sequences (cost=0.00..469.42 rows=3812 width=16) (actual time=0.012..78.863 rows=493 loops=1)\n Filter: (estimated_start IS NOT NULL)\n Total runtime: 52878.516 ms\n(36 rows)\n\n\nDisabling the nestloop then the execution time become more affordable:\n\nempdb=# set enable_nestloop = false;\nSET\nempdb=# explain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\nempdb-# FROM v_sc_user_request\nempdb-# WHERE\nempdb-# login = 'babinow1'\nempdb-# LIMIT 10 ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=4467.64..4467.65 rows=1 width=232) (actual time=7091.233..7091.289 rows=10 loops=1)\n -> Subquery Scan v_sc_user_request (cost=4467.64..4467.65 rows=1 width=232) (actual time=7091.228..7091.272 rows=10 loops=1)\n -> Sort (cost=4467.64..4467.64 rows=1 width=201) (actual time=7091.216..7091.221 rows=10 loops=1)\n Sort Key: sr.id_sat_request\n -> Merge Left Join (cost=4462.07..4467.63 rows=1 width=201) (actual time=6377.732..7091.067 rows=31 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Sort (cost=3389.81..3389.81 rows=1 width=193) (actual time=1338.759..1338.814 rows=31 loops=1)\n Sort Key: sr.id_package\n -> Merge Join (cost=3285.05..3389.80 rows=1 width=193) (actual time=1318.877..1338.651 rows=31 loops=1)\n Merge Cond: (\"outer\".id_url = \"inner\".id_url)\n -> Sort (cost=1029.26..1029.26 rows=1 width=40) (actual time=703.085..703.113 rows=31 loops=1)\n Sort Key: sr.id_url\n -> Merge Join (cost=991.00..1029.25 rows=1 width=40) (actual time=702.740..702.984 rows=31 loops=1)\n Merge Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Sort (cost=986.99..1006.13 rows=7656 width=28) (actual time=648.559..655.302 rows=8041 loops=1)\n Sort Key: sr.id_user\n -> Seq Scan on sat_request sr (cost=0.00..493.09 rows=7656 width=28) (actual time=201.968..614.631 rows=8363 loops=1)\n Filter: (request_time > (now() - '1 mon'::interval))\n -> Sort (cost=4.01..4.02 rows=2 width=16) (actual time=35.252..35.282 rows=1 loops=1)\n Sort Key: ul.id_user\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.00 rows=2 width=16) (actual time=35.214..35.221 rows=1 loops=1)\n Index Cond: ((login)::text = 'babinow1'::text)\n -> Sort (cost=2255.79..2308.95 rows=21264 width=161) (actual time=587.664..602.490 rows=21250 loops=1)\n Sort Key: u.id_url\n -> Seq Scan on url u (cost=0.00..727.32 rows=21264 width=161) (actual time=0.026..418.586 rows=21264 loops=1)\n -> Sort (cost=1072.27..1075.03 rows=1103 width=12) (actual time=5015.761..5016.092 rows=493 loops=1)\n Sort Key: vsp.id_package\n -> Subquery Scan vsp (cost=985.73..1016.53 rows=1103 width=12) (actual time=898.876..5014.570 rows=494 loops=1)\n -> Merge Join (cost=985.73..1011.01 rows=1103 width=130) (actual time=898.869..5011.954 rows=494 loops=1)\n Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Sort (cost=20.74..20.97 rows=93 width=19) (actual time=29.669..29.708 rows=47 loops=1)\n Sort Key: programs.id_program\n -> Seq Scan on programs (cost=0.00..17.70 rows=93 width=19) (actual time=0.035..29.525 rows=48 loops=1)\n Filter: (id_program <> 0)\n -> Sort (cost=964.99..967.75 rows=1102 width=115) (actual time=868.619..869.286 rows=494 loops=1)\n Sort Key: sequences.id_program\n -> Merge Join (cost=696.16..909.31 rows=1102 
width=115) (actual time=44.820..867.649 rows=494 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Merge Left Join (cost=0.00..186.59 rows=1229 width=103) (actual time=19.563..835.352 rows=1248 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Index Scan using packages_pkey on packages p (cost=0.00..131.04 rows=1229 width=103) (actual time=12.796..457.520 rows=1248 loops=1)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..46.83 rows=855 width=4) (actual time=6.703..283.944 rows=879 loops=1)\n -> Sort (cost=696.16..705.69 rows=3812 width=16) (actual time=25.222..25.705 rows=494 loops=1)\n Sort Key: sequences.id_package\n -> Seq Scan on sequences (cost=0.00..469.42 rows=3812 width=16) (actual time=0.017..24.412 rows=494 loops=1)\n Filter: (estimated_start IS NOT NULL)\n Total runtime: 7104.946 ms\n(47 rows)\n\n\n\nMay I know wich parameter may I tune in order to avoid to \"disable\" the nested loop ?\n===================================================================================================================================\n\nI tried to reduce the runtime cost adding a new column on sat_request ( expired boolean ) in order to\nhave a better row extimation ( I used a partial index ) but nothing changed.\n\n\nI finally was able to reduce the cost putting ( just for this query ):\nset cpu_tuple_cost = 0.07\n\nis it a resonable value ?\n\n\nempdb=# set cpu_tuple_cost = 0.07;\nSET\nempdb=# explain analyze select *\nempdb-# from v_sc_user_request\nempdb-# where login = 'babinow1';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan v_sc_user_request (cost=1978.23..1978.30 rows=1 width=364) (actual time=1612.719..1613.064 rows=31 loops=1)\n -> Sort (cost=1978.23..1978.23 rows=1 width=201) (actual time=1612.700..1612.728 rows=31 loops=1)\n Sort Key: sr.id_sat_request\n -> Merge Left Join (cost=1974.05..1978.22 rows=1 width=201) (actual time=1537.343..1612.565 rows=31 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Sort (cost=887.92..887.93 rows=1 width=193) (actual time=475.924..476.020 rows=31 loops=1)\n Sort Key: sr.id_package\n -> Nested Loop (cost=4.07..887.91 rows=1 width=193) (actual time=145.782..475.851 rows=31 loops=1)\n -> Hash Join (cost=4.07..884.65 rows=1 width=40) (actual time=139.816..464.678 rows=31 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Index Scan using idx_sat_request_expired on sat_request sr (cost=0.00..838.69 rows=8363 width=28) (actual time=19.696..443.702 rows=8460 loops=1)\n Index Cond: (expired = false)\n -> Hash (cost=4.07..4.07 rows=2 width=16) (actual time=11.779..11.779 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.07 rows=2 width=16) (actual time=11.725..11.732 rows=1 loops=1)\n Index Cond: ((login)::text = 'babinow1'::text)\n -> Index Scan using url_pkey on url u (cost=0.00..3.19 rows=1 width=161) (actual time=0.345..0.347 rows=1 loops=31)\n Index Cond: (\"outer\".id_url = u.id_url)\n -> Sort (cost=1086.13..1088.16 rows=813 width=12) (actual time=1060.374..1060.622 rows=390 loops=1)\n Sort Key: vsp.id_package\n -> Subquery Scan vsp (cost=676.18..1046.83 rows=813 width=12) (actual time=625.645..1059.388 rows=480 loops=1)\n -> Hash Join (cost=676.18..989.92 rows=813 width=131) (actual time=625.637..1057.105 rows=480 
loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Hash Left Join (cost=79.67..302.87 rows=1341 width=104) (actual time=4.336..18.549 rows=1342 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..145.87 rows=1341 width=104) (actual time=0.007..3.357 rows=1342 loops=1)\n -> Hash (cost=77.27..77.27 rows=961 width=4) (actual time=3.685..3.685 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..77.27 rows=961 width=4) (actual time=0.016..2.175 rows=974 loops=1)\n -> Hash (cost=594.48..594.48 rows=813 width=31) (actual time=620.397..620.397 rows=0 loops=1)\n -> Hash Join (cost=20.60..594.48 rows=813 width=31) (actual time=38.307..619.406 rows=480 loops=1)\n Hash Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Seq Scan on sequences (cost=0.00..512.82 rows=830 width=16) (actual time=16.858..595.262 rows=480 loops=1)\n Filter: (estimated_start IS NOT NULL)\n -> Hash (cost=20.48..20.48 rows=47 width=19) (actual time=21.093..21.093 rows=0 loops=1)\n -> Seq Scan on programs (cost=0.00..20.48 rows=47 width=19) (actual time=9.369..20.980 rows=48 loops=1)\n Filter: (id_program <> 0)\n Total runtime: 1614.123 ms\n(36 rows)\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 08 Mar 2005 11:32:03 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "bad plan" }, { "msg_contents": "Gaetano Mendola wrote:\n> running a 7.4.5 engine, I'm facing this bad plan:\n> \n> empdb=# explain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\n> empdb-# FROM v_sc_user_request\n> empdb-# WHERE\n> empdb-# login = 'babinow1'\n> empdb-# LIMIT 10 ;\n\n> -> Subquery Scan vsp (cost=985.73..1016.53 rows=1103 width=12) (actual time=25.328..1668.754 rows=493 loops=31)\n> -> Merge Join (cost=985.73..1011.01 rows=1103 width=130) (actual time=25.321..1666.666 rows=493 loops=31)\n> Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n\nThe problem to address is in this subquery. That's a total of 31 x \n(1668.754 - 25.328) = 50seconds (about).\n\nSince your query is so simple, I'm guessing v_sc_user_request is a view. \nCan you provide the definition?\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Mar 2005 11:39:58 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRichard Huxton wrote:\n> Gaetano Mendola wrote:\n> \n>> running a 7.4.5 engine, I'm facing this bad plan:\n>>\n>> empdb=# explain analyze SELECT\n>> name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\n>>\n>> empdb-# FROM v_sc_user_request\n>> empdb-# WHERE\n>> empdb-# login = 'babinow1'\n>> empdb-# LIMIT 10 ;\n> \n> \n>> -> Subquery Scan vsp (cost=985.73..1016.53\n>> rows=1103 width=12) (actual time=25.328..1668.754 rows=493 loops=31)\n>> -> Merge Join (cost=985.73..1011.01\n>> rows=1103 width=130) (actual time=25.321..1666.666 rows=493 loops=31)\n>> Merge Cond: (\"outer\".id_program =\n>> \"inner\".id_program)\n> \n> \n> The problem to address is in this subquery. 
That's a total of 31 x\n> (1668.754 - 25.328) = 50seconds (about).\n> \n> Since your query is so simple, I'm guessing v_sc_user_request is a view.\n> Can you provide the definition?\n\nOf course:\n\n\n\nCREATE OR REPLACE VIEW v_sc_user_request AS\n SELECT\n *\n FROM\n v_sat_request vsr LEFT OUTER JOIN v_sc_packages vsp USING ( id_package )\n WHERE\n vsr.request_time > now() - '1 month'::interval AND\n vsr.expired = FALSE\n ORDER BY id_sat_request DESC\n;\n\n\nCREATE OR REPLACE VIEW v_sc_packages AS\n SELECT\n *\n FROM\n v_programs vpr,\n v_packages vpk,\n v_sequences vs\n\n WHERE\n ------------ JOIN -------------\n vpr.id_program = vs.id_program AND\n vpk.id_package = vs.id_package AND\n -------------------------------\n vs.estimated_start IS NOT NULL\n;\n\nCREATE OR REPLACE VIEW v_sat_request AS\n SELECT\n *\n FROM\n sat_request sr,\n url u,\n user_login ul\n WHERE\n ---------------- JOIN ---------------------\n sr.id_url = u.id_url AND\n sr.id_user = ul.id_user\n -------------------------------------------\n;\n\n\nthat column expired was added since yesterday\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCLZkD7UpzwH2SGd4RAv8/AKCA5cNfu6vEKZ6m/ke1JsVRdsOTXQCbBMt4\nZPTFjwyb52CrFxdUTD6gejs=\n=STzz\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Tue, 08 Mar 2005 13:22:28 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan" }, { "msg_contents": "Gaetano Mendola wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Richard Huxton wrote:\n> \n>>Gaetano Mendola wrote:\n>>\n>>\n>>>running a 7.4.5 engine, I'm facing this bad plan:\n>>>\n>>>empdb=# explain analyze SELECT\n>>>name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\n>>>\n>>>empdb-# FROM v_sc_user_request\n>>>empdb-# WHERE\n>>>empdb-# login = 'babinow1'\n>>>empdb-# LIMIT 10 ;\n>>\n>>\n>>> -> Subquery Scan vsp (cost=985.73..1016.53\n>>>rows=1103 width=12) (actual time=25.328..1668.754 rows=493 loops=31)\n>>> -> Merge Join (cost=985.73..1011.01\n>>>rows=1103 width=130) (actual time=25.321..1666.666 rows=493 loops=31)\n>>> Merge Cond: (\"outer\".id_program =\n>>>\"inner\".id_program)\n>>\n>>\n>>The problem to address is in this subquery. That's a total of 31 x\n>>(1668.754 - 25.328) = 50seconds (about).\n>>\n>>Since your query is so simple, I'm guessing v_sc_user_request is a view.\n>>Can you provide the definition?\n> \n> \n> Of course:\n> \n> \n> \n> CREATE OR REPLACE VIEW v_sc_user_request AS\n> SELECT\n> *\n> FROM\n> v_sat_request vsr LEFT OUTER JOIN v_sc_packages vsp USING ( id_package )\n> WHERE\n> vsr.request_time > now() - '1 month'::interval AND\n> vsr.expired = FALSE\n> ORDER BY id_sat_request DESC\n> ;\n> \n> \n> CREATE OR REPLACE VIEW v_sc_packages AS\n> SELECT\n> *\n> FROM\n> v_programs vpr,\n> v_packages vpk,\n> v_sequences vs\n> \n> WHERE\n> ------------ JOIN -------------\n> vpr.id_program = vs.id_program AND\n> vpk.id_package = vs.id_package AND\n> -------------------------------\n> vs.estimated_start IS NOT NULL\n> ;\n> \n> CREATE OR REPLACE VIEW v_sat_request AS\n> SELECT\n> *\n> FROM\n> sat_request sr,\n> url u,\n> user_login ul\n> WHERE\n> ---------------- JOIN ---------------------\n> sr.id_url = u.id_url AND\n> sr.id_user = ul.id_user\n> -------------------------------------------\n> ;\n\nOK, so looking at the original EXPLAIN the order of processing seems to be:\n1. 
v_sat_request is evaluated and filtered on login='...' (lines 7..15)\nThis gives us 31 rows\n2. The left-join from v_sat_request to v_sc_packages is processed (lines \n5..6)\nThis involves the subquery scan on vsp (from line 16) where it seems to \nthink the best idea is a merge join of programs to sequences.\n\nSo - I think we need to look at the performance of your view \n\"v_sc_packages\" and the views that it depends on. OK - can you reply to \nthis with just the definitions of v_sc_packages and what it depends on, \nand we can have a look at that.\n\nDo you need all these tables involved in this query? I don't think PG is \nsmart enough to completely discard a join if it's not needed by the \noutput. Thinking about it, I'm not sure you could safely.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Mar 2005 13:20:37 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan" }, { "msg_contents": "Richard Huxton wrote:\n\n> OK, so looking at the original EXPLAIN the order of processing seems to be:\n> 1. v_sat_request is evaluated and filtered on login='...' (lines 7..15)\n> This gives us 31 rows\n> 2. The left-join from v_sat_request to v_sc_packages is processed (lines\n> 5..6)\n> This involves the subquery scan on vsp (from line 16) where it seems to\n> think the best idea is a merge join of programs to sequences.\n\nWhel basically v_sc_packages depends on other 3 views that are just a simple\ninterface to a plain table.\n\n\nIf I execute a select only on this table I get reasonable executions time:\n\n\n=== cpu_tuple_cost = 0.07\n\n# explain analyze select * from v_sc_packages where id_package = 19628;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..15.96 rows=1 width=131) (actual time=41.450..41.494 rows=1 loops=1)\n -> Nested Loop (cost=0.00..11.86 rows=1 width=116) (actual time=1.022..1.055 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..7.89 rows=1 width=104) (actual time=0.330..0.345 rows=1 loops=1)\n -> Index Scan using packages_pkey on packages p (cost=0.00..3.90 rows=1 width=104) (actual time=0.070..0.075 rows=1 loops=1)\n Index Cond: (id_package = 19628)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..3.91 rows=1 width=4) (actual time=0.232..0.237 rows=1 loops=1)\n Index Cond: (\"outer\".id_package = ps.id_package)\n -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.90 rows=1 width=16) (actual time=0.670..0.685 rows=1 loops=1)\n Index Cond: (19628 = id_package)\n Filter: (estimated_start IS NOT NULL)\n -> Index Scan using programs_pkey on programs (cost=0.00..4.02 rows=1 width=19) (actual time=0.078..0.086 rows=1 loops=1)\n Index Cond: (programs.id_program = \"outer\".id_program)\n Filter: (id_program <> 0)\n Total runtime: 42.650 ms\n(14 rows)\n\n=== cpu_tuple_cost = 0.01\n\n# explain analyze select * from v_sc_packages where id_package = 19628;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..15.54 rows=1 width=131) (actual time=25.062..69.977 rows=1 loops=1)\n -> Nested Loop (cost=0.00..11.56 rows=1 width=116) (actual time=5.396..50.299 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..7.71 rows=1 width=104) (actual 
time=5.223..32.842 rows=1 loops=1)\n -> Index Scan using packages_pkey on packages p (cost=0.00..3.84 rows=1 width=104) (actual time=0.815..7.235 rows=1 loops=1)\n Index Cond: (id_package = 19628)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..3.85 rows=1 width=4) (actual time=4.366..25.555 rows=1 loops=1)\n Index Cond: (\"outer\".id_package = ps.id_package)\n -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.84 rows=1 width=16) (actual time=0.147..17.422 rows=1 loops=1)\n Index Cond: (19628 = id_package)\n Filter: (estimated_start IS NOT NULL)\n -> Index Scan using programs_pkey on programs (cost=0.00..3.96 rows=1 width=19) (actual time=0.043..0.049 rows=1 loops=1)\n Index Cond: (programs.id_program = \"outer\".id_program)\n Filter: (id_program <> 0)\n Total runtime: 70.254 ms\n(14 rows)\n\n\nand I get the best with this:\n\n=== cpu_tuple_cost = 0.001\n\n\n# explain analyze select * from v_sc_packages where id_package = 19628;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..15.48 rows=1 width=131) (actual time=2.516..2.553 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7.78 rows=1 width=31) (actual time=1.439..1.457 rows=1 loops=1)\n -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.83 rows=1 width=16) (actual time=0.442..0.450 rows=1 loops=1)\n Index Cond: (19628 = id_package)\n Filter: (estimated_start IS NOT NULL)\n -> Index Scan using programs_pkey on programs (cost=0.00..3.95 rows=1 width=19) (actual time=0.972..0.978 rows=1 loops=1)\n Index Cond: (programs.id_program = \"outer\".id_program)\n Filter: (id_program <> 0)\n -> Nested Loop Left Join (cost=0.00..7.68 rows=1 width=104) (actual time=0.110..0.125 rows=1 loops=1)\n -> Index Scan using packages_pkey on packages p (cost=0.00..3.84 rows=1 width=104) (actual time=0.040..0.046 rows=1 loops=1)\n Index Cond: (id_package = 19628)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..3.84 rows=1 width=4) (actual time=0.036..0.042 rows=1 loops=1)\n Index Cond: (\"outer\".id_package = ps.id_package)\n Total runtime: 2.878 ms\n(14 rows)\n\n\n\nbut with this last setting for the original query is choosed a very bad plan.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n", "msg_date": "Tue, 08 Mar 2005 19:35:31 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan" }, { "msg_contents": "Gaetano Mendola wrote:\n> Richard Huxton wrote:\n> \n> \n>>OK, so looking at the original EXPLAIN the order of processing seems to be:\n>>1. v_sat_request is evaluated and filtered on login='...' (lines 7..15)\n>>This gives us 31 rows\n>>2. 
The left-join from v_sat_request to v_sc_packages is processed (lines\n>>5..6)\n>>This involves the subquery scan on vsp (from line 16) where it seems to\n>>think the best idea is a merge join of programs to sequences.\n> \n> \n> Whel basically v_sc_packages depends on other 3 views that are just a simple\n> interface to a plain table.\n> \n> \n> If I execute a select only on this table I get reasonable executions time:\n> \n> \n> === cpu_tuple_cost = 0.07\n> \n> # explain analyze select * from v_sc_packages where id_package = 19628;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..15.96 rows=1 width=131) (actual time=41.450..41.494 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..11.86 rows=1 width=116) (actual time=1.022..1.055 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.00..7.89 rows=1 width=104) (actual time=0.330..0.345 rows=1 loops=1)\n> -> Index Scan using packages_pkey on packages p (cost=0.00..3.90 rows=1 width=104) (actual time=0.070..0.075 rows=1 loops=1)\n> Index Cond: (id_package = 19628)\n> -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..3.91 rows=1 width=4) (actual time=0.232..0.237 rows=1 loops=1)\n> Index Cond: (\"outer\".id_package = ps.id_package)\n> -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.90 rows=1 width=16) (actual time=0.670..0.685 rows=1 loops=1)\n> Index Cond: (19628 = id_package)\n> Filter: (estimated_start IS NOT NULL)\n> -> Index Scan using programs_pkey on programs (cost=0.00..4.02 rows=1 width=19) (actual time=0.078..0.086 rows=1 loops=1)\n> Index Cond: (programs.id_program = \"outer\".id_program)\n> Filter: (id_program <> 0)\n> Total runtime: 42.650 ms\n> (14 rows)\n\n> === cpu_tuple_cost = 0.01\n\n> === cpu_tuple_cost = 0.001\n\nI don't know what you think you're measuring, but it's nothing to do \nwith the plans. If you look at the plans carefully, you'll see they're \nall the same. The \"cost\" numbers change because that's the parameter \nyou're changing.\n\nI'm not sure it makes sense to vary cpu_tuple_cost from 0.07 down to \n0.001 - that's a factor of 70 difference. I might be tempted to halve or \ndouble it, but even then only after some serious testing.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Mar 2005 19:20:22 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n>> Since your query is so simple, I'm guessing v_sc_user_request is a view.\n>> Can you provide the definition?\n\n> Of course:\n\nI don't think you've told us the whole truth about the v_sc_packages\nview. The definition as given doesn't work at all (it'll have\nduplicate column names), but more to the point, if it were that simple\nthen the planner would fold it into the parent query. The subquery\nscan node indicates that folding did not occur. 
The most likely reason\nfor that is that there's an ORDER BY in the view.\n\nPutting ORDER BYs in views that you intend to use as components of other\nviews is a bad practice from a performance perspective...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Mar 2005 15:01:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan " }, { "msg_contents": "Tom Lane wrote:\n> \n> Putting ORDER BYs in views that you intend to use as components of other\n> views is a bad practice from a performance perspective...\n\nThere are also a lot of views involved here for very few output columns. \nTom - is the planner smart enough to optimise-out unneeded columns from \na SELECT * view if it's part of a join/subquery and you only use one or \ntwo columns?\n\nSecondly, in the original plan we have:\n-> Nested Loop Left Join (cost=1478.82..1716.37 rows=1 width=201) \n(actual time=3254.483..52847.064 rows=31 loops=1)\n\nNow, we've got 31 rows instead of 1 here. The one side of the join ends \nup as:\n-> Subquery Scan vsp (cost=985.73..1016.53 rows=1103 width=12) (actual \ntime=25.328..1668.754 rows=493 loops=31)\n-> Merge Join (cost=985.73..1011.01 rows=1103 width=130) (actual \ntime=25.321..1666.666 rows=493 loops=31)\n\nWould I be right in thinking the planner doesn't materialise the \nsubquery because it's expecting 1 loop not 31? If there were 1 row the \nplan would seem OK to me.\n\nIs there any mileage in the idea of a \"lazy\" planner that keeps some \nalternative paths around in case they're needed? Or a reactive one that \ncan re-plan nodes when assumptions turn out to be wrong?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Mar 2005 20:25:02 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> There are also a lot of views involved here for very few output columns. \n> Tom - is the planner smart enough to optimise-out unneeded columns from \n> a SELECT * view if it's part of a join/subquery and you only use one or \n> two columns?\n\nIf the view gets flattened, yes, but I believe that it's not bright\nenough to do so when it can't flatten the view. You could tell easily\nenough by looking at the row-width estimates at various levels of the\nplan. (Let's see ... in Gaetano's plan the SubqueryScan is returning\n12-byte rows where its input MergeJoin is returning 130-byte rows,\nso sure enough the view is computing a lot of stuff that then gets\nthrown away.)\n\n> Would I be right in thinking the planner doesn't materialise the \n> subquery because it's expecting 1 loop not 31? If there were 1 row the \n> plan would seem OK to me.\n\nRight; it doesn't see any predicted gain from the extra cost of\nmaterializing. But to me the main problem here is not that, it is that\nthe entire shape of the plan would likely be different if it weren't for\nthe \"optimization fence\" that the Subquery Scan node represents. 
I\nsuspect too that the use of mergejoin as opposed to anything else within\nthe vsp subplan is driven more by the need to produce sorted output than\nby what is the cheapest way to get the rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Mar 2005 15:42:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n> \n>>>Since your query is so simple, I'm guessing v_sc_user_request is a view.\n>>>Can you provide the definition?\n> \n> \n>>Of course:\n> \n> \n> I don't think you've told us the whole truth about the v_sc_packages\n> view. The definition as given doesn't work at all (it'll have\n> duplicate column names), but more to the point, if it were that simple\n> then the planner would fold it into the parent query. The subquery\n> scan node indicates that folding did not occur. The most likely reason\n> for that is that there's an ORDER BY in the view.\n\nI didn't say the complete truth because the view definition is long so I just omitted\nall fields.\n\nexplain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp\nFROM v_sc_user_request\nWHERE login = 'babinow1'\nLIMIT 10 ;\n\nthese are the complete definitions of views involved in the query:\n\n\n\nCREATE OR REPLACE VIEW v_sc_user_request AS\n SELECT\n vsr.id_sat_request AS id_sat_request,\n vsr.id_user AS id_user,\n vsr.login AS login,\n vsr.url AS url,\n vsr.name AS name,\n vsr.descr AS descr,\n vsr.size AS size,\n trunc(vsr.size/1024.0/1024.0,2) AS size_mb,\n vsr.id_sat_request_status AS id_sat_request_status,\n sp_lookup_key('sat_request_status', vsr.id_sat_request_status) AS request_status,\n sp_lookup_descr('sat_request_status', vsr.id_sat_request_status) AS request_status_descr,\n vsr.id_url_status AS id_url_status,\n sp_lookup_key('url_status', vsr.id_url_status) AS url_status,\n sp_lookup_descr('url_status', vsr.id_url_status) AS url_status_descr,\n vsr.url_time_stamp AS url_time_stamp,\n date_trunc('seconds',vsr.request_time) AS request_time_stamp,\n vsr.id_package AS id_package,\n COALESCE(date_trunc('seconds',vsp.estimated_start)::text,'NA') AS estimated_start\n\n FROM\n v_sat_request vsr LEFT OUTER JOIN v_sc_packages vsp USING ( id_package )\n WHERE\n vsr.request_time > now() - '1 month'::interval AND\n vsr.expired = FALSE\n ORDER BY id_sat_request DESC\n;\n\n\n\n\nCREATE OR REPLACE VIEW v_sat_request AS\n SELECT\n sr.id_user AS id_user,\n ul.login AS login,\n sr.id_sat_request AS id_sat_request,\n u.id_url AS id_url,\n u.url AS url,\n u.name AS name,\n u.descr AS descr,\n u.size AS size,\n u.storage AS storage,\n sr.id_package AS id_package,\n sr.id_sat_request_status AS id_sat_request_status,\n sr.request_time AS request_time,\n sr.work_time AS request_work_time,\n u.id_url_status AS id_url_status,\n u.time_stamp AS url_time_stamp,\n sr.expired AS expired\n FROM\n sat_request sr,\n url u,\n user_login ul\n WHERE\n ---------------- JOIN ---------------------\n sr.id_url = u.id_url AND\n sr.id_user = ul.id_user\n -------------------------------------------\n;\n\n\n\n\nCREATE OR REPLACE VIEW v_sc_packages AS\n SELECT\n\n vpr.id_program AS id_program,\n vpr.name AS program_name,\n\n vpk.id_package AS id_package,\n date_trunc('seconds', vs.estimated_start) AS estimated_start,\n\n vpk.name AS package_name,\n vpk.TYPE AS TYPE,\n vpk.description AS description,\n vpk.target AS target,\n 
vpk.fec AS fec_alg,\n vpk.output_group - vpk.input_group AS fec_redundancy,\n vpk.priority AS priority,\n vpk.updatable AS updatable,\n vpk.auto_listen AS auto_listen,\n vpk.start_file AS start_file,\n vpk.view_target_group AS view_target_group,\n vpk.target_group AS target_group\n\n FROM\n v_programs vpr,\n v_packages vpk,\n v_sequences vs\n\n WHERE\n ------------ JOIN -------------\n vpr.id_program = vs.id_program AND\n vpk.id_package = vs.id_package AND\n\n -------------------------------\n vs.estimated_start IS NOT NULL\n;\n\n\n\nCREATE OR REPLACE VIEW v_programs AS\n SELECT id_program AS id_program,\n id_publisher AS id_publisher,\n name AS name,\n description AS description,\n sp_lookup_key('program_type', id_program_type) AS TYPE,\n sp_lookup_key('program_status', id_program_status) AS status,\n last_position AS last_position\n FROM programs\n WHERE id_program<>0\n;\n\n\nCREATE OR REPLACE VIEW v_packages AS\n SELECT p.id_package AS id_package,\n p.id_publisher AS id_publisher,\n p.name AS name,\n p.information AS information,\n p.description AS description,\n sp_lookup_key('package_type', p.id_package_type)\n AS TYPE,\n sp_lookup_key('target', p.id_target)\n AS target,\n p.port AS port,\n p.priority AS priority,\n sp_lookup_key('fec', p.id_fec)\n AS fec,\n p.input_group AS input_group,\n p.output_group AS output_group,\n p.updatable AS updatable,\n p.checksum AS checksum,\n p.version AS version,\n p.start_file AS start_file,\n p.view_target_group AS view_target_group,\n p.target_group AS target_group,\n p.auto_listen AS auto_listen,\n p.public_flag AS public_flag,\n p.needed_version AS needed_version,\n p.logic_version AS logic_version,\n p.package_size AS package_size,\n ps.id_drm_process AS id_drm_process,\n ps.id_cas_service AS id_cas_service,\n ps.id_cas_settings AS id_cas_settings,\n ps.id_drm_service AS id_drm_service\n\n FROM packages p LEFT OUTER JOIN package_security ps USING (id_package)\n ;\n\n\n\nCREATE OR REPLACE VIEW v_sequences AS\n SELECT id_package AS id_package,\n id_program AS id_program,\n internal_position AS internal_position,\n estimated_start AS estimated_start\n FROM sequences\n;\n\n\n\n> Putting ORDER BYs in views that you intend to use as components of other\n> views is a bad practice from a performance perspective...\n\nIndeed when a view is involved in a join we do not put \"order by\" in it ( at\nleast this is what I try to do ), I have to say also that some time I see that replacing\nthe view with the tables that it represent the execution time is better\n( I have an example to show you if you are interested in it ).\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCLkef7UpzwH2SGd4RAt90AJ9e3qUSx2fxiOO2aA30TbLsOdyV7ACfd0RY\n+2A3U6dDfWw/H4eWcmI8mS0=\n=t1AD\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 09 Mar 2005 01:47:28 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan" } ]
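If bending cpu_tuple_cost really is the only thing that helps here, it can at least be kept from leaking into every other query in the session by scoping it to one transaction. A rough sketch using the object names and the 0.07 figure from the messages above (that value is just what happened to work in this thread, not a general recommendation):

BEGIN;
SET LOCAL cpu_tuple_cost = 0.07;  -- reverts automatically at the end of the transaction
EXPLAIN ANALYZE
SELECT name, url, descr, request_status, url_status, size_mb,
       estimated_start, request_time_stamp
FROM v_sc_user_request
WHERE login = 'babinow1'
LIMIT 10;
COMMIT;

The more structural fix suggested above is to move the ORDER BY id_sat_request DESC out of the v_sc_user_request definition and into the queries that need ordered output, so the planner can flatten the view instead of treating it as an optimization fence.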
[ { "msg_contents": "Hi, I have the following strange situation:\n\noocms=# vacuum full analyze;\nVACUUM\noocms=# \\df+ class_get_number_of_objects\n Список функций\n Схема | Имя | Тип данных результата | Типы данных аргументов | Владелец | Язык | Исходный текст | Описание\n-------+-----------------------------+-----------------------+------------------------+----------+---------+----------------+-----------------------------------------------------------------------------------------------\n oocms | class_get_number_of_objects | integer | text | oocms | plpgsql |\nDECLARE\n arg_class_name ALIAS FOR $1;\nBEGIN\n IF arg_class_name IS NULL THEN\n RAISE WARNING 'class_get_number_of_objects() with NULL class name called';\n RETURN NULL;\n END IF;\n RETURN\n count(1)\n FROM\n objects\n WHERE\n class = arg_class_name;\nEND;\n | Return the number of existing or deleted objects of a class. Arguments: the name of the class\n(1 запись)\n\noocms=# explain analyze select count(1) from objects where class = 'Picture';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=278.16..278.16 rows=1 width=0) (actual time=44.121..44.123 rows=1 loops=1)\n -> Seq Scan on objects (cost=0.00..267.65 rows=4205 width=0) (actual time=0.030..33.325 rows=4308 loops=1)\n Filter: (\"class\" = 'Picture'::text)\n Total runtime: 44.211 ms\n(записей: 4)\n\noocms=# explain analyze select class_get_number_of_objects('Picture');\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=27.019..27.022 rows=1 loops=1)\n Total runtime: 27.062 ms\n(записей: 2)\n\n\nI.e. a function takes 27 ms to do what takes an equivalent piece of sql\n43 ms. How can this be explained?\n\nSome more info:\n\noocms=# select class_get_number_of_objects('Picture');\n class_get_number_of_objects\n-----------------------------\n 4308\n(1 запись)\n\noocms=# select count(1) from objects;\n count\n-------\n 13332\n(1 запись)\n\noocms=# \\d objects\n Таблица \"oocms.objects\"\n Колонка | Тип | Модификаторы\n-----------+--------------------------+---------------------------------------------------------------\n object_id | integer | not null default nextval('oocms.objects_object_id_seq'::text)\n class | text | not null\n created | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone\nИндексы:\n \"objects_pkey\" PRIMARY KEY, btree (object_id)\n \"fooooo\" btree (\"class\")\nОграничения по внешнему ключу:\n \"objects_class_fkey\" FOREIGN KEY (\"class\") REFERENCES classes(name) ON UPDATE CASCADE\n\n\n-- \nMarkus Bertheau ☭ <[email protected]>\n\n", "msg_date": "Tue, 08 Mar 2005 13:20:32 +0100", "msg_from": "Markus Bertheau =?UTF-8?Q?=E2=98=AD?= <[email protected]>", "msg_from_op": true, "msg_subject": "pl/pgsql faster than raw SQL?" 
}, { "msg_contents": "Markus Bertheau ☭ wrote:\n> oocms=# explain analyze select count(1) from objects where class = 'Picture';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=278.16..278.16 rows=1 width=0) (actual time=44.121..44.123 rows=1 loops=1)\n> -> Seq Scan on objects (cost=0.00..267.65 rows=4205 width=0) (actual time=0.030..33.325 rows=4308 loops=1)\n> Filter: (\"class\" = 'Picture'::text)\n> Total runtime: 44.211 ms\n> (записей: 4)\n> \n> oocms=# explain analyze select class_get_number_of_objects('Picture');\n> QUERY PLAN\n> --------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=27.019..27.022 rows=1 loops=1)\n> Total runtime: 27.062 ms\n\nWell, you're saving planning time with the plpgsql version, but that's \nnot going to come to 17ms (you'd hope). The EXPLAIN will take up time \nitself, and it can look deeper into the SQL version. Try timing two \nscripts with 100 of each and see if they really differ by that much.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Mar 2005 12:48:32 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql faster than raw SQL?" }, { "msg_contents": "Markus Bertheau ☭ wrote:\n\n>Hi, I have the following strange situation:\n>\n> \n>\n...\n\n>oocms=# explain analyze select count(1) from objects where class = 'Picture';\n> QUERY PLAN\n>----------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=278.16..278.16 rows=1 width=0) (actual time=44.121..44.123 rows=1 loops=1)\n> -> Seq Scan on objects (cost=0.00..267.65 rows=4205 width=0) (actual time=0.030..33.325 rows=4308 loops=1)\n> Filter: (\"class\" = 'Picture'::text)\n> Total runtime: 44.211 ms\n>(записей: 4)\n>\n>oocms=# explain analyze select class_get_number_of_objects('Picture');\n> QUERY PLAN\n>--------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=27.019..27.022 rows=1 loops=1)\n> Total runtime: 27.062 ms\n>(записей: 2)\n>\n>\n>I.e. a function takes 27 ms to do what takes an equivalent piece of sql\n>43 ms. How can this be explained?\n>\n>Some more info:\n> \n>\nIn explain analyze, there is a per-row overhead of 2 gettimeofday() \ncalls. This is usually very low and hidden in I/O, but on queries where \nyou go through a lot of rows, but things are cached in ram, it can show up.\nSo the explain analyze is going deep into the SQL query.\nWith a stored procedure, explain analyze only runs the procedure, it \ndoesn't instrument the actual function. So you don't have that per-row \noverhead.\n\nFor an alternate accurate view. Try:\n# \\timing\n# explain analyze select count(1) from objects where class = 'Picture';\n# explain analyze select class_get_number_of_objects('Picture');\n\n\\timing will also give you the time it takes to run the query, but it \ndoesn't instrument anything.\n\nJohn\n=:->", "msg_date": "Tue, 08 Mar 2005 09:10:30 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql faster than raw SQL?" }, { "msg_contents": "Markus Bertheau ☭ wrote:\n> Hi, I have the following strange situation:\n\nthat is no so strange. 
I have an example where:\n\nSELECT * FROM my_view WHERE field1 = 'New'; ==> 800 seconds\n\nSELECT * FROM my_view; ==> 2 seconds\n\nthe only solution I had was to write a function table with\nthe second select in a loop that was returnin the row if\nthe field1 was equal = 'New'.\nIt's strange but happen.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 09 Mar 2005 01:54:32 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql faster than raw SQL?" }, { "msg_contents": "Gaetano Mendola wrote:\n\n>Markus Bertheau ☭ wrote:\n> \n>\n>>Hi, I have the following strange situation:\n>> \n>>\n>\n>that is no so strange. I have an example where:\n>\n>SELECT * FROM my_view WHERE field1 = 'New'; ==> 800 seconds\n>\n>SELECT * FROM my_view; ==> 2 seconds\n>\n>the only solution I had was to write a function table with\n>the second select in a loop that was returnin the row if\n>the field1 was equal = 'New'.\n>It's strange but happen.\n>\n>\n>\n>Regards\n>Gaetano Mendola\n> \n>\n\nThat sounds more like you had bad statistics on the field1 column, which \ncaused postgres to switch from a seqscan to an index scan, only there \nwere so many rows with field1='New' that it actually would have been \nfaster with a seqscan.\n\nOtherwise what you did is very similar to the \"nested loop\" of postgres \nwhich it selects when appropriate.\n\nThe other issue with views is that depending on their definition, \nsometimes postgres can flatten them out and optimize the query, and \nsometimes it can't. Order by is one of the big culprits for bad queries \ninvolving views.\n\nJohn\n=:->", "msg_date": "Tue, 08 Mar 2005 19:27:55 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql faster than raw SQL?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJohn A Meinel wrote:\n > That sounds more like you had bad statistics on the field1 column, which\n> caused postgres to switch from a seqscan to an index scan, only there\n> were so many rows with field1='New' that it actually would have been\n> faster with a seqscan.\n\nThe field1 was a calculated field and with the filter \"='New'\"\npostgres was executing that function on more rows than without filter.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCLtwZ7UpzwH2SGd4RAhU5AJwMeFWwIO/UfdU0QTDo+FTCxPhqYACfYNVl\n1yBUEObhZhUDnNDXdsJ/bi0=\n=xc8U\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 09 Mar 2005 12:20:57 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pl/pgsql faster than raw SQL?" } ]
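To take EXPLAIN ANALYZE's per-row instrumentation out of the picture, as suggested above, the two forms can be compared with psql's \timing instead. A minimal sketch using the names from this thread (run each statement a few times so caching evens out):

\timing
SELECT count(1) FROM objects WHERE class = 'Picture';
SELECT class_get_number_of_objects('Picture');

Measured this way both versions do the same scan, and whatever difference remains mostly comes down to the plpgsql function reusing its cached plan while the bare statement is parsed and planned on every call, as noted earlier in the thread.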
[ { "msg_contents": "I have two index questions. The first is about an issue that has been\nrecently discussed,\n\nand I just wanted to be sure of my understanding. Functions like count(),\nmax(), etc. will\n\nuse sequential scans instead of index scans because the index doesn't know\nwhich rows\n\nare actually visible.is this correct?\n\n \n\nSecond:\n\n \n\nI created an index in a table with over 10 million rows.\n\nThe index is on field x, which is a double.\n\n \n\nThe following command, as I expected, results in an index scan:\n\n \n\n=# explain select * from data where x = 0;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------\n\n Index Scan using data_x_ix on data (cost=0.00..78.25 rows=19 width=34)\n\n Index Cond: (x = 0::double precision)\n\n(2 rows)\n\n \n\n \n\nBut this command, in which the only difference if > instead of =, is a\nsequential scan.\n\n \n\n=# explain select * from data where x > 0;\n\n QUERY PLAN\n\n------------------------------------------------------------------\n\n Seq Scan on data (cost=0.00..1722605.20 rows=62350411 width=34)\n\n Filter: (x > 0::double precision)\n\n(2 rows)\n\n \n\nWhy is this?\n\n(This is with pg 8.0.1 on a PC running FC3 with 1GB ram.if it matters)\n\n\n\n\n\n\n\n\n\n\nI have two index questions.  The first is about an\nissue that has been recently discussed,\nand I just wanted to be sure of my understanding. \nFunctions like count(), max(), etc. will\nuse sequential scans instead of index scans because the\nindex doesn’t know which rows\nare actually visible…is this correct?\n \nSecond:\n \nI created an index in a table with over 10 million rows.\nThe index is on field x, which is a double.\n \nThe following command, as I expected, results in an index\nscan:\n \n=# explain select * from data where x = 0;\n                              \nQUERY PLAN\n-------------------------------------------------------------------------\n Index Scan using data_x_ix on data  (cost=0.00..78.25\nrows=19 width=34)\n   Index Cond: (x = 0::double precision)\n(2 rows)\n \n \nBut this command, in which the only difference if >\ninstead of =, is a sequential scan.\n \n=# explain select * from data where x > 0;\n                           \nQUERY PLAN\n------------------------------------------------------------------\n Seq Scan on data  (cost=0.00..1722605.20\nrows=62350411 width=34)\n   Filter: (x > 0::double precision)\n(2 rows)\n \nWhy is this?\n(This is with pg 8.0.1 on a PC running FC3 with 1GB ram…if\nit matters)", "msg_date": "Tue, 8 Mar 2005 13:35:53 -0500", "msg_from": "\"Rick Schumeyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "index scan on =, but not < ?" }, { "msg_contents": "Your hypothesis about index usage of count() and max() is correct.\n\nAs for why you see index usage in your first example query and not your \nsecond: compare the number of rows in question. An index is extremely \nuseful if 19 rows will be returned. But when 62350411 rows will be \nreturned, you're talking about a substantial fraction of the table. A \nsequential scan will probably correctly be judged to be faster by the \nplanner.\n\n-tfo\n\n--\nThomas F. 
O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Mar 8, 2005, at 12:35 PM, Rick Schumeyer wrote:\n\n> I have two index questions. The first is about an issue that has been \n> recently discussed,\n> and I just wanted to be sure of my understanding. Functions like \n> count(), max(), etc. will\n> use sequential scans instead of index scans because the index doesn't \n> know which rows\n> are actually visible...is this correct?\n>\n>\n> Second:\n>\n>\n> I created an index in a table with over 10 million rows.\n> The index is on field x, which is a double.\n>\n> The following command, as I expected, results in an index scan:\n>\n> =# explain select * from data where x = 0;\n>\n>                               QUERY PLAN\n>\n> ----------------------------------------------------------------------- \n> --\n> Index Scan using data_x_ix on data (cost=0.00..78.25 rows=19 \n> width=34)\n>    Index Cond: (x = 0::double precision)\n> (2 rows)\n>\n>\n> But this command, in which the only difference if > instead of =, is a \n> sequential scan.\n>\n>\n> =# explain select * from data where x > 0;\n>\n>                             QUERY PLAN\n>\n> ------------------------------------------------------------------\n>\n> Seq Scan on data (cost=0.00..1722605.20 rows=62350411 width=34)\n>    Filter: (x > 0::double precision)\n> (2 rows)\n>\n> Why is this?\n>\n> (This is with pg 8.0.1 on a PC running FC3 with 1GB ram...if it matters)\n\n", "msg_date": "Tue, 8 Mar 2005 12:52:41 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "Rick Schumeyer wrote:\n\n> I have two index questions. The first is about an issue that has been\n> recently discussed,\n>\n> and I just wanted to be sure of my understanding. Functions like\n> count(), max(), etc. will\n>\n> use sequential scans instead of index scans because the index doesn't\n> know which rows\n>\n> are actually visible...is this correct?\n>\nActually, index scans are chosen whenever the cost is expected to be\ncheaper than a sequential scan. This is generally about < 10% of the\ntotal number of rows.\n\n> Second:\n>\n> I created an index in a table with over 10 million rows.\n>\n> The index is on field x, which is a double.\n>\n> The following command, as I expected, results in an index scan:\n>\n> =# explain select * from data where x = 0;\n>\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------\n>\n> Index Scan using data_x_ix on data (cost=0.00..78.25 rows=19 width=34)\n>\n> Index Cond: (x = 0::double precision)\n>\n> (2 rows)\n>\nSince you have 10m rows, when it expects to get only 19 rows, it is much\nfaster to use an index.\n\n> But this command, in which the only difference if > instead of =, is a\n> sequential scan.\n>\n> =# explain select * from data where x > 0;\n>\n> QUERY PLAN\n>\n> ------------------------------------------------------------------\n>\n> Seq Scan on data (cost=0.00..1722605.20 rows=62350411 width=34)\n>\n> Filter: (x > 0::double precision)\n>\n> (2 rows)\n>\nHere, pg expects to find 62M rows (you must have significantly more than\n10M rows). 
In this case a sequential scan is much faster than an indexed\none, so that's what pg does.\n\n> Why is this?\n>\n> (This is with pg 8.0.1 on a PC running FC3 with 1GB ram...if it matters)\n>\nIf you think there is truly a performance problem, try attaching the\nresults of \"explain analyze\" in which we might be able to tell you that\nyour statistics are inaccurate (run vacuum analyze if you haven't).\n\nJohn\n=:->", "msg_date": "Tue, 08 Mar 2005 13:01:20 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "That makes a lot of sense. Sure enough, if I change the query from \nWHERE x > 0 (which returns a lot of rows) to\nWHERE x > 0 AND x < 1\nI now get an index scan.\n\n> As for why you see index usage in your first example query and not your\n> second: compare the number of rows in question. An index is extremely\n> useful if 19 rows will be returned. But when 62350411 rows will be\n> returned, you're talking about a substantial fraction of the table. A\n> sequential scan will probably correctly be judged to be faster by the\n> planner.\n> \n\n", "msg_date": "Tue, 8 Mar 2005 14:02:50 -0500", "msg_from": "\"Rick Schumeyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Tue, 8 Mar 2005, Rick Schumeyer wrote:\n\n> =# explain select * from data where x = 0;\n> -------------------------------------------------------------------------\n> Index Scan using data_x_ix on data (cost=0.00..78.25 rows=19 width=34)\n> Index Cond: (x = 0::double precision)\n> \n> But this command, in which the only difference if > instead of =, is a\n> sequential scan.\n> \n> =# explain select * from data where x > 0;\n> ------------------------------------------------------------------\n> Seq Scan on data (cost=0.00..1722605.20 rows=62350411 width=34)\n> Filter: (x > 0::double precision)\n> \n> Why is this?\n\nThat is because it's faster to execute the x>0 query with a seq. scan than \nan index scan. Postgresql is doing the right thing here.\n\nPg estimates that the first query will return 19 rows and that the second \nquery will return 62350411 rows. To return 62350411 rows it's faster to \njust scan the table and not use the index.\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Tue, 8 Mar 2005 20:04:55 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Tue, Mar 08, 2005 at 13:35:53 -0500,\n Rick Schumeyer <[email protected]> wrote:\n> I have two index questions. The first is about an issue that has been\n> recently discussed,\n> \n> and I just wanted to be sure of my understanding. Functions like count(),\n> max(), etc. will\n> \n> use sequential scans instead of index scans because the index doesn't know\n> which rows\n> \n> are actually visible.is this correct?\n\nNot exactly. If the number of rows to be examined is on the order of 5%\nof the table, an index scan will probably be slower than a sequential\nscan. The visibility issue makes index scans slower in the case that\nthe only columns of interest are in the index.\nAnother issue is that max could in theory use an index, but there isn't\na mechanism for Postgres to know how to do this in general for aggregates\nwhere it is possible. 
There have been discussions in the past about\nhow this could be done, but no one has done it yet.\n", "msg_date": "Tue, 8 Mar 2005 22:38:21 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Tue, Mar 08, 2005 at 10:38:21PM -0600, Bruno Wolff III wrote:\n> Not exactly. If the number of rows to be examined is on the order of 5%\n> of the table, an index scan will probably be slower than a sequential\n> scan. The visibility issue makes index scans slower in the case that\n\nShouldn't that be 50%?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 8 Mar 2005 22:55:19 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Tue, Mar 08, 2005 at 22:55:19 -0600,\n \"Jim C. Nasby\" <[email protected]> wrote:\n> On Tue, Mar 08, 2005 at 10:38:21PM -0600, Bruno Wolff III wrote:\n> > Not exactly. If the number of rows to be examined is on the order of 5%\n> > of the table, an index scan will probably be slower than a sequential\n> > scan. The visibility issue makes index scans slower in the case that\n> \n> Shouldn't that be 50%?\n\nNo. When you are doing an index scan of a significant part of the table,\nyou will fetch some heap pages more than once. You will also be fetching\nblocks out of order, so you will lose out on read ahead optimization\nby the OS. This assumes that you don't get a lot of cache hits on the\nhelp pages. If a significant portion of the table is cached, then the\ntrade off point will be at a higher percentage of the table.\n", "msg_date": "Tue, 8 Mar 2005 23:20:20 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Tue, Mar 08, 2005 at 11:20:20PM -0600, Bruno Wolff III wrote:\n> On Tue, Mar 08, 2005 at 22:55:19 -0600,\n> \"Jim C. Nasby\" <[email protected]> wrote:\n> > On Tue, Mar 08, 2005 at 10:38:21PM -0600, Bruno Wolff III wrote:\n> > > Not exactly. If the number of rows to be examined is on the order of 5%\n> > > of the table, an index scan will probably be slower than a sequential\n> > > scan. The visibility issue makes index scans slower in the case that\n> > \n> > Shouldn't that be 50%?\n> \n> No. When you are doing an index scan of a significant part of the table,\n> you will fetch some heap pages more than once. You will also be fetching\n> blocks out of order, so you will lose out on read ahead optimization\n> by the OS. This assumes that you don't get a lot of cache hits on the\n> help pages. If a significant portion of the table is cached, then the\n> trade off point will be at a higher percentage of the table.\n\nAhh, I was thinking of a high correlation factor on the index. I still\nquestion 5% though... that seems awefully low.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Wed, 9 Mar 2005 11:22:38 -0600", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "Assuming your system isn't starved for memory, shouldn't repeated page \nfetches be hitting the cache?\n\nI've also wondered about the conventional wisdom that read ahead doesn't \nhelp random reads. I may well be missing something, but *if* the OS has \nenough memory to cache most of the table, surely read ahead will still \nwork to your advantage?\n\nBruno Wolff III wrote:\n\n>No. When you are doing an index scan of a significant part of the table,\n>you will fetch some heap pages more than once. You will also be fetching\n>blocks out of order, so you will lose out on read ahead optimization\n>by the OS. This assumes that you don't get a lot of cache hits on the\n>help pages. If a significant portion of the table is cached, then the\n>trade off point will be at a higher percentage of the table.\n> \n>\n", "msg_date": "Thu, 10 Mar 2005 10:06:23 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "Jim C. Nasby wrote:\n\n>Ahh, I was thinking of a high correlation factor on the index. I still\n>question 5% though... that seems awefully low.\n> \n>\nNot really. It all depends on how many records you're packing into each \npage. 1% may well be the threshold for small records.\n\nTom mentioned this in the last couple of months. He was citing a uniform \ndistribution as an example and I thought that sounded a little \npessimistic, but when I did the (possibly faulty) math with a random \ndistribution, I discovered he wasn't far off.\n\nIt's not this simple, but if you can fit 50 randomly organized records \ninto each page and you want to retrieve 2% of the rows, it's likely \nyou'll have to fetch every page - believe it or not.\n\nWhat concerns me is that this all depends on the correlation factor, and \nI suspect that the planner is not giving enough weight to this. \nActually, I'm wondering if it's even looking at the statistic, but I \nhaven't created a test to check. It might explain quite a few complaints \nabout the planner not utilizing indexes.\n", "msg_date": "Thu, 10 Mar 2005 10:24:46 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" }, { "msg_contents": "On Thu, 10 Mar 2005 10:24:46 +1000, David Brown <[email protected]>\nwrote:\n>What concerns me is that this all depends on the correlation factor, and \n>I suspect that the planner is not giving enough weight to this. \n\nThe planner does the right thing for correlations very close to 1 (and\n-1) and for correlations near zero. For correlations somewhere between\n0 and 1 the cost is estimated by interpolation, but it tends too much\ntowards the bad end, IMHO.\n\nServus\n Manfred\n", "msg_date": "Thu, 17 Mar 2005 09:12:42 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan on =, but not < ?" } ]
[ { "msg_contents": "PG Hackers,\n\nWhat follows is iostat output from a TPC-H test on Solaris 10. The machine \nis creating indexes on a table which is 50G in size, so it needs to use \npgsql_tmp for internal swapping:\n\n tty md15 sd1 sd2 sd3 cpu\n tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id\n 0 84 22526 1211 1 1024 1 5 0 0 0 5634 337 1 30 8 \n0 61\n 0 242 24004 1337 1 1024 1 5 0 0 0 6007 355 1 33 8 \n0 59\n 0 85 22687 1277 1 1024 1 5 0 0 0 5656 322 1 31 8 \n0 62\n 0 85 20876 1099 1 1024 2 9 0 0 0 5185 292 1 28 7 \n0 64\n\nmd15 is WAL (pg_xlog). \nsd3 is PGDATA. \nsd1 i pgsql_tmp.\n\nAs you can see, we're getting a nice 23mb/s peak for WAL (thanks to \nforcedirectio) and database writes peak at 6mb/s. However, pgsql_tmp, which \nis being used heavily, hovers around 1mb/s, and never goes above 1.5mb/s. \nThis seems to be throttling the whole system.\n\nAny suggestions on why this should be? Do we have a performance bug in the \npg_tmp code?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 12:18:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Why would writes to pgsql_tmp bottleneck at 1mb/s?" }, { "msg_contents": "People:\n\n> As you can see, we're getting a nice 23mb/s peak for WAL (thanks to\n> forcedirectio) and database writes peak at 6mb/s. However, pgsql_tmp,\n> which is being used heavily, hovers around 1mb/s, and never goes above\n> 1.5mb/s. This seems to be throttling the whole system.\n\nNever mind, I'm a dork. I accidentally cut the \"SET maintenance_work_mem = \n2000000\" out of my config file, and it was running with the default \n1024K ....\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 16:17:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s?" }, { "msg_contents": "Hmmmm.....\n\n> > As you can see, we're getting a nice 23mb/s peak for WAL (thanks to\n> > forcedirectio) and database writes peak at 6mb/s. However, pgsql_tmp,\n> > which is being used heavily, hovers around 1mb/s, and never goes above\n> > 1.5mb/s. This seems to be throttling the whole system.\n>\n> Never mind, I'm a dork. I accidentally cut the \"SET maintenance_work_mem\n> = 2000000\" out of my config file, and it was running with the default 1024K\n\nMaybe I'm not an idiot (really!) even with almost 2GB of maintenance_mem, PG \nstill writes to pgsql_tmp no faster than 2MB/s. I think there may be an \nartificial bottleneck there. Question is, PostgreSQL, OS or hardware?\n\nSuggestions?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 16:44:32 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s?" }, { "msg_contents": "> Maybe I'm not an idiot (really!) even with almost 2GB of maintenance_mem, PG\n> still writes to pgsql_tmp no faster than 2MB/s. I think there may be an\n> artificial bottleneck there. Question is, PostgreSQL, OS or hardware?\n\nI'm curious: what is your cpu usage while this is happening? I've\nnoticed similar slow index creation behaviour, but I did not make any\nconnection to pgsql_temp (because it was not on a separate partition).\n I was indexing an oid field of a 700GB table and it took about four\ndays on a 1.2GHz UltraSparcIII (solaris 9, 8GB core). 
I noticed that\nthe one CPU that was pegged at near 100%, leading me to believe it\nwas CPU bound. Odd thing is that the same operation on a 2GHz Pentium\nIV box (Linux) on the same data took about a day. Truss showed that\na great majority of that time was in userland.\n\n -Aaron\n", "msg_date": "Tue, 8 Mar 2005 20:13:29 -0500", "msg_from": "Aaron Birkland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Maybe I'm not an idiot (really!) even with almost 2GB of maintenance_mem, PG\n> still writes to pgsql_tmp no faster than 2MB/s. I think there may be an \n> artificial bottleneck there. Question is, PostgreSQL, OS or hardware?\n\nAFAIR that's just fwrite() ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Mar 2005 23:27:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s? " }, { "msg_contents": "Tom,\n\n> > Maybe I'm not an idiot (really!) even with almost 2GB of\n> > maintenance_mem, PG still writes to pgsql_tmp no faster than 2MB/s. I\n> > think there may be an artificial bottleneck there. Question is,\n> > PostgreSQL, OS or hardware?\n>\n> AFAIR that's just fwrite() ...\n\nWell, are there any hacks to speed it up? It's about doubling the amount of \ntime it takes to create an index on a very large table.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 20:37:44 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> AFAIR that's just fwrite() ...\n\n> Well, are there any hacks to speed it up? It's about doubling the amount of\n> time it takes to create an index on a very large table.\n\nHuh? Doubled compared to what?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Mar 2005 23:37:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s? " }, { "msg_contents": "Tom,\n\n> Huh? Doubled compared to what?\n\nCompared to how much data writing I can do to the database when pgsql_tmp \nisn't engaged.\n\nIn other words, when pgsql_tmp isn't being written, database writing is 9mb/s. \nWhen pgsql_tmp gets engaged, that drops to 4mb/s.\n\nAlternatively, the WAL drive, which is the same hardware, will write at \n10mb/s.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Mar 2005 23:00:46 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why would writes to pgsql_tmp bottleneck at 1mb/s?" } ]
[ { "msg_contents": "All,\n\nI hope that this is the right place to post. I am relatively new to\nPostgreSQL (i.e., < 1 year in coding) and am just starting to\ndelve into the issues of query optimization. I have hunted around\nthe web for the basics of query optimization, but I have not had\nmuch success in interpreting the documents. I have also been\ntrying to learn the basics of the EXPLAIN command....also without\nmuch success, but I will keep trying.\n\nAnyway, here is what the system reports on the following command:\n\nEXPLAIN SELECT a.country_code, a.state_county_fips,\n icell, jcell, a.beld3_species_id, pollutant_code,\n SUM(b.ratio * d.emissions_factor * a.percent_ag *\n e.ag_fraction * 10000) as normalized_emissions\nFROM \"globals\".\"biogenic_beld3_data\" a, \n \"spatial\".\"tmpgrid\" b, \n \"globals\".\"biogenic_emissions_factors\" d,\n \"globals\".\"biogenic_beld3_ag_data\" e\nWHERE a.beld3_icell=b.b_icell AND \n a.beld3_jcell=b.b_jcell AND\n a.country_code=e.country_code AND\n a.state_county_fips=e.state_county_fips AND\n a.beld3_species_id=d.beld3_species_id AND\n a.ag_forest_records > 0 AND\n a.percent_ag > 0 AND d.emissions_factor > 0\nGROUP BY a.country_code, a.state_county_fips, icell, jcell,\n a.beld3_species_id, pollutant_code\nORDER BY a.country_code, a.state_county_fips, icell, jcell,\n a.beld3_species_id, pollutant_code;\n\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------\nGroupAggregate (cost=65034.94..71110.50 rows=151889 width=73)\n->Sort (cost=65034.94..65414.66 rows=151889 width=73)\n Sort Key: a.country_code, a.state_county_fips, b.icell, b.jcell,\n a.beld3_species_id, d.pollutant_code\n ->Hash Join (cost=33749.64..37412.88 rows=151889 width=73)\n Hash Cond: (\"outer\".beld3_species_id = \"inner\".beld3_species_id)\n ->Merge Join (cost=33728.84..35303.61 rows=37972 width=56)\n Merge Cond: (((\"outer\".country_code)::text = \n\"inner\".\"?column8?\") AND\n ((\"outer\".state_county_fips)::text = \n\"inner\".\"?column9?\"))\n ->Index Scan using biogenic_beld3_ag_data_pk on \nbiogenic_beld3_ag_data e \n (cost=0.00..806.68 rows=20701 width=26)\n ->Sort (cost=33728.84..33741.67 rows=5131 width=45)\n Sort Key: (a.country_code)::text, \n(a.state_county_fips)::text\n ->Nested Loop (cost=0.00..33412.65 rows=5131 width=45)\n ->Seq Scan on biogenic_beld3_data a \n(cost=0.00..3593.02 rows=5637 width=37)\n Filter: ((ag_forest_records > 0) AND (percent_ag \n > 0::numeric))\n ->Index Scan using tmpgrid_pk on tmpgrid b \n(cost=0.00..5.27 rows=1 width=24)\n Index Cond: ((b.b_icell = \"outer\".beld3_icell) AND\n (b.b_jcell = \"outer\".beld3_jcell))\n ->Hash (cost=18.50..18.50 rows=920 width=21)\n ->Seq Scan on biogenic_emissions_factors d \n(cost=0.00..18.50 rows=920 width=21)\n Filter: (emissions_factor > 0::numeric)\n(18 rows)\n\n\nFirstly, I am frankly mystified on how to interpret all this. If anyone\ncould point me to a document or two that will help me decipher this,\nI will greatly appreciate it.\n\nSecondly, I have figured out that SEQ SCANs are typically bad. I am\nconcerned that a SEQ SCAN is being performed on 'biogenic_beld3_data'\nwhich is the largest table in the query. I would rather have a SEQ SCAN\nbe performed on 'tmpgrid' which contains the keys that subset the data\nfrom 'biogenic_beld3_data.' Is this naive on my part?\n\nThirdly, I have run EXPLAIN on other queries that report back a\nGroupAggregate Cost=<low 300,000s> that runs in about 30 minutes\non my relatively highend linux machine. 
But when I run this particular\nquery, it takes on the order of 90 minutes to complete. Any thoughts\non why this happens will be appreciated.\n\nFinally, if anyone can be so kind as to provide insight on how to better\noptimize this query, I will, again, be deeply grateful.\n\nThanks in advance.\n\nterrakit\n", "msg_date": "Tue, 08 Mar 2005 16:04:17 -0800", "msg_from": "James G Wilkinson <[email protected]>", "msg_from_op": true, "msg_subject": "Query Optimization" }, { "msg_contents": "James G Wilkinson wrote:\n\n> All,\n>\n...\n\n> Firstly, I am frankly mystified on how to interpret all this. If anyone\n> could point me to a document or two that will help me decipher this,\n> I will greatly appreciate it.\n>\nI assume you have looked at:\nhttp://www.postgresql.org/docs/8.0/static/performance-tips.html\nAnd didn't find it helpful enough. I'm not really sure what help you are \nasking. Are you saying that this query is performing slowly and you want \nto speed it up? Or you just want to understand how to interpret the \noutput of explain?\n\n> Secondly, I have figured out that SEQ SCANs are typically bad. I am\n> concerned that a SEQ SCAN is being performed on 'biogenic_beld3_data'\n> which is the largest table in the query. I would rather have a SEQ SCAN\n> be performed on 'tmpgrid' which contains the keys that subset the data\n> from 'biogenic_beld3_data.' Is this naive on my part?\n\nIt depends how much data is being extracted. If you have 1,000,000 rows, \nand only need 10, then an index scan is wonderful. If you need 999,999, \nthen a sequential scan is much better (the break even point is <10%)\n From the explain, it thinks it is going to be needing 5,637 rows from \nbiogenic_beld3_data, what is that portion relative to the total?\n\nThe values at least look like you've run vacuum analyze. Have you tried \nrunning \"explain analyze\" instead of just explain? Then you can see if \nthe planners estimates are accurate.\n\nIf you want some help to force it, you could try a subselect query. \nSomething like:\n\nselect * from biogenic_beld3_data b where b.beld3_icell = (select \nb_icell from tmpgrid_pk) and b.beld3_jcell = (select b_jcell from \ntmpgrid_pk);\n\n>\n> Thirdly, I have run EXPLAIN on other queries that report back a\n> GroupAggregate Cost=<low 300,000s> that runs in about 30 minutes\n> on my relatively highend linux machine. But when I run this particular\n> query, it takes on the order of 90 minutes to complete. Any thoughts\n> on why this happens will be appreciated.\n>\nRemember cost is in terms of page fetches, not in seconds.\nProbably it is just an issue of postgres mis-estimating the selectivity \nof one of your queries.\nAlso, you have a fairly complex SUM occurring involving 4 \nmultiplications on an estimated 150,000 rows. While doesn't seem like it \nshould take 90 minutes, it also isn't a trivial operation.\n\n> Finally, if anyone can be so kind as to provide insight on how to better\n> optimize this query, I will, again, be deeply grateful.\n>\n> Thanks in advance.\n>\n> terrakit\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nJohn\n=:->", "msg_date": "Tue, 08 Mar 2005 19:46:37 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Optimization" } ]
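A sketch of the two checks suggested above (refresh the planner statistics, then compare estimated and actual row counts), using table names taken from the original query; the cut-down SELECT is only an illustration, not the full query:

VACUUM ANALYZE globals.biogenic_beld3_data;
VACUUM ANALYZE spatial.tmpgrid;

-- Re-run the full query under EXPLAIN ANALYZE and look for plan nodes whose
-- estimated row count differs from the actual one by an order of magnitude:
EXPLAIN ANALYZE
SELECT a.country_code, a.state_county_fips, b.icell, b.jcell
FROM globals.biogenic_beld3_data a
JOIN spatial.tmpgrid b
  ON a.beld3_icell = b.b_icell AND a.beld3_jcell = b.b_jcell
WHERE a.ag_forest_records > 0 AND a.percent_ag > 0;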
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi all,\nthis is the third email that I post but I do not see it in archives,\nthe email was too long I believe so this time I will limit the rows.\nBasically I'm noticing that a simple vacuum full is not enough to\nshrink completelly the table:\n\n# vacuum full verbose url;\nINFO: vacuuming \"public.url\"\nINFO: \"url\": found 268392 removable, 21286 nonremovable row versions in 8563 pages\nDETAIL: 22 dead row versions cannot be removed yet.\nNonremovable row versions range from 104 to 860 bytes long.\nThere were 13924 unused item pointers.\nTotal free space (including removable row versions) is 63818404 bytes.\n4959 pages are or will become empty, including 7 at the end of the table.\n8296 pages containing 63753840 free bytes are potential move destinations.\nCPU 0.33s/0.12u sec elapsed 9.55 sec.\n\n[SNIPPED]\n\nINFO: \"url\": moved 2 row versions, truncated 8563 to 8550 pages\n\n\nand after 4 vacuum full:\n\n\n\nempdb=# vacuum full verbose url;\nINFO: vacuuming \"public.url\"\nINFO: \"url\": found 13 removable, 21264 nonremovable row versions in 8504 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 104 to 860 bytes long.\nThere were 280528 unused item pointers.\nTotal free space (including removable row versions) is 63349188 bytes.\n4913 pages are or will become empty, including 0 at the end of the table.\n8234 pages containing 63340628 free bytes are potential move destinations.\nCPU 0.17s/0.04u sec elapsed 0.49 sec.\n\n[SNIPPED]\n\nINFO: \"url\": moved 5666 row versions, truncated 8504 to 621 pages\n\n\n\n\nanyone knows why ? I had the same behaviour with a 46000 rows table with\n46000 pages! It was reduced to 3000 pages after 7 vacuum full.\n\n\nRegards\nGaetano Mendola\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCLksV7UpzwH2SGd4RAoz3AKDvXSx3w/jRz/NR1pgtrxIZs8cJcwCg/0xm\nzSr0sPDBkp8V1WXjREoVdLk=\n=EHv2\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 09 Mar 2005 02:02:13 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum full, why multiple times ?" }, { "msg_contents": "On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote:\n\n> Basically I'm noticing that a simple vacuum full is not enough to\n> shrink completelly the table:\n> \n> # vacuum full verbose url;\n> INFO: vacuuming \"public.url\"\n> INFO: \"url\": found 268392 removable, 21286 nonremovable row versions in 8563 pages\n> DETAIL: 22 dead row versions cannot be removed yet.\n\nHow busy is the database? I'd guess that each time you run VACUUM,\nthere are still open transactions that have visibility to the dead\nrows, so VACUUM doesn't touch them. Those transactions eventually\ncomplete, and eventually VACUUM FULL does what you're expecting.\nI don't know if that's the only possible cause, but I get results\nsimilar to yours if I have transactions open when I run VACUUM.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Tue, 8 Mar 2005 19:15:33 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full, why multiple times ?" 
}, { "msg_contents": "Michael Fuhr wrote:\n> On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote:\n> \n> \n>>Basically I'm noticing that a simple vacuum full is not enough to\n>>shrink completelly the table:\n>>\n>># vacuum full verbose url;\n>>INFO: vacuuming \"public.url\"\n>>INFO: \"url\": found 268392 removable, 21286 nonremovable row versions in 8563 pages\n>>DETAIL: 22 dead row versions cannot be removed yet.\n> \n> \n> How busy is the database? I'd guess that each time you run VACUUM,\n> there are still open transactions that have visibility to the dead\n> rows, so VACUUM doesn't touch them. Those transactions eventually\n> complete, and eventually VACUUM FULL does what you're expecting.\n> I don't know if that's the only possible cause, but I get results\n> similar to yours if I have transactions open when I run VACUUM.\n> \n\nThat was my first tough but it seem strange that 2 dead rows where\ngrabbing 7883 pages, don't you think ?\n\n\n# vacuum full verbose url;\nINFO: vacuuming \"public.url\"\nINFO: \"url\": found 74 removable, 21266 nonremovable row versions in 8550 pages\nDETAIL: 2 dead row versions cannot be removed yet.\n[SNIP]\nINFO: \"url\": moved 11 row versions, truncated 8550 to 8504 pages\n\n\nand in the next run:\n\n\n# vacuum full verbose url;\nINFO: vacuuming \"public.url\"\nINFO: \"url\": found 13 removable, 21264 nonremovable row versions in 8504 pages\nDETAIL: 0 dead row versions cannot be removed yet.\n[SNIP]\nINFO: \"url\": moved 5666 row versions, truncated 8504 to 621 pages\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n", "msg_date": "Wed, 09 Mar 2005 12:27:30 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum full, why multiple times ?" }, { "msg_contents": "Gaetano Mendola wrote:\n> \n> # vacuum full verbose url;\n> INFO: vacuuming \"public.url\"\n> INFO: \"url\": found 74 removable, 21266 nonremovable row versions in 8550 pages\n> DETAIL: 2 dead row versions cannot be removed yet.\n> [SNIP]\n> INFO: \"url\": moved 11 row versions, truncated 8550 to 8504 pages\n> \n> \n> and in the next run:\n> \n> \n> # vacuum full verbose url;\n> INFO: vacuuming \"public.url\"\n> INFO: \"url\": found 13 removable, 21264 nonremovable row versions in 8504 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> [SNIP]\n> INFO: \"url\": moved 5666 row versions, truncated 8504 to 621 pages\n\nIf page number 8549 was the one being held, I don't think vacuum can \ntruncate the file. The empty space can be re-used, but the rows can't be \nmoved to a lower page while a transaction is using them.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 09 Mar 2005 12:30:01 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full, why multiple times ?" 
}, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRichard Huxton wrote:\n> Gaetano Mendola wrote:\n> \n>>\n>> # vacuum full verbose url;\n>> INFO: vacuuming \"public.url\"\n>> INFO: \"url\": found 74 removable, 21266 nonremovable row versions in\n>> 8550 pages\n>> DETAIL: 2 dead row versions cannot be removed yet.\n>> [SNIP]\n>> INFO: \"url\": moved 11 row versions, truncated 8550 to 8504 pages\n>>\n>>\n>> and in the next run:\n>>\n>>\n>> # vacuum full verbose url;\n>> INFO: vacuuming \"public.url\"\n>> INFO: \"url\": found 13 removable, 21264 nonremovable row versions in\n>> 8504 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> [SNIP]\n>> INFO: \"url\": moved 5666 row versions, truncated 8504 to 621 pages\n> \n> \n> If page number 8549 was the one being held, I don't think vacuum can\n> truncate the file. The empty space can be re-used, but the rows can't be\n> moved to a lower page while a transaction is using them.\n\nIt's clear now.\n\n\nRegards\nGaetano Mendola\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCLwhu7UpzwH2SGd4RAhEIAKDodnb03RvInDOJz9H+4w//DgJifACeNINP\n0UMkQ0yBwNAZw91clvAUjRI=\n=e+mM\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 09 Mar 2005 15:30:07 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum full, why multiple times ?" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Richard Huxton wrote:\n>> If page number 8549 was the one being held, I don't think vacuum can\n>> truncate the file. The empty space can be re-used, but the rows can't be\n>> moved to a lower page while a transaction is using them.\n\n> It's clear now.\n\nNot entirely. VACUUM FULL doesn't really worry about whether anyone\nelse \"is using\" the table --- it knows no one else is, because it holds\nexclusive lock on the table. However it must preserve dead tuples that\nwould still be visible to any existing transaction, because that other\ntransaction could come along and look at the table after VACUUM\nfinishes and releases the lock.\n\nWhat really drives the process is that VACUUM FULL moves tuples in order\nto make the file shorter (release empty pages at the end) --- and not\nfor any other reason. So it could stop when there is still plenty of\ndead space in the table. It stops when the last nonempty page contains\na tuple that it can't find room for in any earlier page.\n\nWhat I suppose you saw was that page 8503 contained a tuple so large it\nwouldn't fit in the free space on any earlier page. By the time of the\nsecond vacuum, either this tuple was deleted, or deletion of some other\ntuples had made a hole big enough for it to fit in.\n\nThe extent of the truncation in the second vacuum says that you had\nquite a lot of free space, so it's a bit surprising that there wasn't\nenough room in any one page for such a tuple to be moved, but that seems\nto be what happened.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Mar 2005 10:16:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full, why multiple times ? " }, { "msg_contents": "Tom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n> \n>>Richard Huxton wrote:\n>>\n>>>If page number 8549 was the one being held, I don't think vacuum can\n>>>truncate the file. 
The empty space can be re-used, but the rows can't be\n>>>moved to a lower page while a transaction is using them.\n> \n> \n>>It's clear now.\n> \n> \n> Not entirely. VACUUM FULL doesn't really worry about whether anyone\n> else \"is using\" the table --- it knows no one else is, because it holds\n> exclusive lock on the table. However it must preserve dead tuples that\n> would still be visible to any existing transaction, because that other\n> transaction could come along and look at the table after VACUUM\n> finishes and releases the lock.\n> \n> What really drives the process is that VACUUM FULL moves tuples in order\n> to make the file shorter (release empty pages at the end) --- and not\n> for any other reason. So it could stop when there is still plenty of\n> dead space in the table. It stops when the last nonempty page contains\n> a tuple that it can't find room for in any earlier page.\n> \n> What I suppose you saw was that page 8503 contained a tuple so large it\n> wouldn't fit in the free space on any earlier page. By the time of the\n> second vacuum, either this tuple was deleted, or deletion of some other\n> tuples had made a hole big enough for it to fit in.\n> \n> The extent of the truncation in the second vacuum says that you had\n> quite a lot of free space, so it's a bit surprising that there wasn't\n> enough room in any one page for such a tuple to be moved, but that seems\n> to be what happened.\n\nAll rows of that table are almost of the same size, so this is not the\nreason, and neither any row was deleted.\nMay be the page 8503 was cointainig a dead row ?\n\nI can send you off line another vacuum full sequence if you need it, I sent it\nto list but apparently the size was too much and noone of you seen it.\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Wed, 09 Mar 2005 17:17:20 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum full, why multiple times ?" } ]
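Michael's point about open transactions keeping dead rows visible can be checked before reaching for repeated VACUUM FULL runs; a sketch for 8.0 (stats_command_string must be on for the query text column to be filled in):

SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;

-- Sessions shown as '<IDLE> in transaction' for a long time are the usual
-- culprits; once they commit or roll back, a further VACUUM FULL can move
-- and truncate the pages they were pinning.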
[ { "msg_contents": "Hi folks,\n\nI'm about to start testing PGv8 on an Opteron 64bit box with 12GB Ram\nrunning RedHat.\nA bunch of raided drives in the backend.\n\nExpecting 50GB of data per month (100GB+ initial load).\n\nI do not see any example config settings. Have some MySql experience\nand and for it there are config settings for small or large server\noperations.\n\nDoes PG have similar examples (if so they are well hidden…at least\nfrom Google search<g>).\n\nIf not can any of you send me a typical config for such an environment.\nI basically want to get a good setting so I can good insert\nperformance…low vol selects.\n\nDB data consists of customer info and their call history.\nLots of customers…lots of call history. Not that wide of rows.\n\nWant to be able to insert at rate of 1000's per sec. Should think thats poss.\n\nAny help you folks can provide to optimize this in a shootout between\nit and MSql (we hope to be move from Oracle)\n\nTx,\nDavid\n", "msg_date": "Tue, 8 Mar 2005 19:07:48 -0800", "msg_from": "David B <[email protected]>", "msg_from_op": true, "msg_subject": "64bit Opteron multi drive raid. Help with best config settings" }, { "msg_contents": "On Tue, 2005-03-08 at 19:07 -0800, David B wrote:\n> I do not see any example config settings. Have some MySql experience\n> and and for it there are config settings for small or large server\n> operations.\n\nFor starters, this might be helpful:\n\nhttp://www.powerpostgresql.com/PerfList\n\nThen this:\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.pdf\n\nSomeone else might have an example config for you.\n\nHTH,\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221", "msg_date": "Tue, 08 Mar 2005 20:53:06 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 64bit Opteron multi drive raid. Help with best\tconfig" } ]
[ { "msg_contents": "All,\n\nI have a table with ~ 3 million records. I'm indexing a field holding\nnames, no more than 200 bytes each. Indexing the resulting tsvector\ntakes forever. It's been running now for more than 40 hours on a Linux\nwith PG 8.01, a single Xeon & 4GB RAM. My work_mem postgresql.conf\nparameter is at 240960 and maintenance_work_mem at 96384, although the\nindex task is using at most 12MB. Task is 99% cpu bound. Is there any\nway I may speed up the indexing?\n\n\nTIA,\n\n\t\n-- \nWerner Bohl <[email protected]>\nIDS de Costa Rica S.A.\n\n", "msg_date": "Wed, 09 Mar 2005 11:25:47 -0600", "msg_from": "Werner Bohl <[email protected]>", "msg_from_op": true, "msg_subject": "How to speed up tsearch2 indexing" }, { "msg_contents": "On Wed, 9 Mar 2005, Werner Bohl wrote:\n\n> All,\n>\n> I have a table with ~ 3 million records. I'm indexing a field holding\n> names, no more than 200 bytes each. Indexing the resulting tsvector\n> takes forever. It's been running now for more than 40 hours on a Linux\n> with PG 8.01, a single Xeon & 4GB RAM. My work_mem postgresql.conf\n> parameter is at 240960 and maintenance_work_mem at 96384, although the\n> index task is using at most 12MB. Task is 99% cpu bound. Is there any\n> way I may speed up the indexing?\n\nWhat's your tsearch2 configuration ? Do you use dictionaries ?\nI wrote a brief explanation of tsearch2 internals\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n\nHope, it could help you.\n\n>\n>\n> TIA,\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 9 Mar 2005 20:41:01 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up tsearch2 indexing" }, { "msg_contents": "On Wed, 2005-03-09 at 20:41 +0300, Oleg Bartunov wrote:\n\n> What's your tsearch2 configuration ? Do you use dictionaries ?\n> I wrote a brief explanation of tsearch2 internals\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n> \nTsearch2 is using default english configuration. No dictionaries, just\nput some more stop words (10) in english.stop.\n\n> Hope, it could help you.\n> \n> >\n> >\n> > TIA,\n> >\n> >\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 09 Mar 2005 14:21:06 -0600", "msg_from": "Werner Bohl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up tsearch2 indexing" }, { "msg_contents": "On Wed, 9 Mar 2005, Werner Bohl wrote:\n\n> On Wed, 2005-03-09 at 20:41 +0300, Oleg Bartunov wrote:\n>\n>> What's your tsearch2 configuration ? Do you use dictionaries ?\n>> I wrote a brief explanation of tsearch2 internals\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>>\n> Tsearch2 is using default english configuration. 
No dictionaries, just\n> put some more stop words (10) in english.stop.\n\nit's not good, because you, probably, have a lot of unique words.\nDo you have some statistics, see stat() function ?\n\n\n>\n>> Hope, it could help you.\n>>\n>>>\n>>>\n>>> TIA,\n>>>\n>>>\n>>>\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 9 Mar 2005 23:27:33 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up tsearch2 indexing" } ]
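Oleg's question above refers to tsearch2's stat() helper; a sketch of using it to see how many distinct lexemes the column produces and which ones dominate (the table and tsvector column names are made up, since the thread never gives them):

SELECT count(*) FROM stat('SELECT name_vector FROM customer_names');

SELECT word, ndoc, nentry
FROM stat('SELECT name_vector FROM customer_names')
ORDER BY ndoc DESC
LIMIT 20;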
[ { "msg_contents": "I'm using a 7.4.6 Perl app that bulk-loads a table, \nby executing a \"COPY TMP_Message FROM STDIN\", then letting\n$dbh->func($message_text.\"\\n\", \"putline\")\n\nSpeculation made me try catenating Several \\n-terminated lines\ntogether, and making a single putline() call with that.\nLo and behold, all the lines went in as separate rows, as I hoped.\n\nI haven't measured the performance difference using this multiline\nbatching. I'm hoping that there will be as much,since the app is really sucking\non a 500 msg/sec firehose, and the db side needs serious speeding up. Question\nis, am I playing with a version-dependent anomaly, or should I expect this\nto continue in 8.x (until, eventually, silently, something causes\nthis to break)?\n\nI'm presuming that this is not a Perl DBI/DBD::Pg question,\nbut rather, depending on the underlying pq lib and fe protocol.\n\n-- \n\"Dreams come true, not free.\"\n\n", "msg_date": "Wed, 9 Mar 2005 21:29:47 -0800", "msg_from": "Mischa <[email protected]>", "msg_from_op": true, "msg_subject": "Multi-line requests in COPY ... FROM STDIN" } ]
[ { "msg_contents": "From rom http://www.powerpostgresql.com/PerfList/\n\n\"even in a two-disk server, you can put the transaction log onto the\noperating system disk and reap some benefits.\"\n\nContext: I have a two disk server that is about to become dedicated to\npostgresql (it's a sun v40z running gentoo linux).\n\nWhat's \"theoretically better\"? \n\n1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I\n understand the above correctly)\n2) Everything striped Raid 0?\n3) <some answer from someone smarter than me>\n\nTIA,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221", "msg_date": "Thu, 10 Mar 2005 00:44:44 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Karim Nassar wrote:\n\n>Context: I have a two disk server that is about to become dedicated to\n>postgresql (it's a sun v40z running gentoo linux).\n>\n>What's \"theoretically better\"? \n>\n>1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I\n> understand the above correctly)\n>2) Everything striped Raid 0?\n>\nHow lucky are you feeling? If you don't mind doubling your chances of \ndata loss (a bit worse than that because recovery is nearly impossible), \ngo ahead and use RAID 0 (which of course is not RAID by definition).\n\nThe WAL on a separate disk is your best bet if the problem is slow updates.\n\nIf prevention of data loss is a consideration, RAID 1 (mirroring) is the \nanswer.\n\n\n", "msg_date": "Fri, 11 Mar 2005 00:14:36 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Karim Nassar wrote:\n\n>From rom http://www.powerpostgresql.com/PerfList/\n>\n>\"even in a two-disk server, you can put the transaction log onto the\n>operating system disk and reap some benefits.\"\n>\n>Context: I have a two disk server that is about to become dedicated to\n>postgresql (it's a sun v40z running gentoo linux).\n>\n>What's \"theoretically better\"? \n>\n>1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I\n> understand the above correctly)\n>2) Everything striped Raid 0?\n>3) <some answer from someone smarter than me>\n>\n>TIA,\n> \n>\nWith 2 disks, you have 3 options, RAID0, RAID1, and 2 independent disks.\n\nRAID0 - Fastest read and write speed. Not redundant, if either disk \nfails you lose everything on *both* disks.\nRAID1 - Redundant, slow write speed, but should be fast read speed. If \none disk fails, you have a backup.\n2 independent - With pg_xlog on a separate disk, writing (updates) \nshould stay reasonably fast. If one disk dies, you lose that disk, but \nnot both.\n\nHow critical is your data? How update heavy versus read heavy, etc are \nyou? Do you have a way to restore the database if something fails? If \nyou do nightly pg_dumps, will you survive if you lose a days worth of \ntransactions?\n\nIn general I would recommend RAID1, because that is the safe bet. 
If \nyour db is the bottleneck, and your data isn't all that critical, and \nyou are read heavy, I would probably go with RAID1, if you are write \nheavy I would say 2 independent disks.\n\nJohn\n=:->", "msg_date": "Thu, 10 Mar 2005 09:26:51 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Am Donnerstag, 10. März 2005 08:44 schrieb Karim Nassar:\n> From rom http://www.powerpostgresql.com/PerfList/\n>\n> \"even in a two-disk server, you can put the transaction log onto the\n> operating system disk and reap some benefits.\"\n>\n> Context: I have a two disk server that is about to become dedicated to\n> postgresql (it's a sun v40z running gentoo linux).\n>\n> What's \"theoretically better\"?\n>\n> 1) OS and pg_xlog on one disk, rest of postgresql on the other? (if I\n> understand the above correctly)\n> 2) Everything striped Raid 0?\n> 3) <some answer from someone smarter than me>\n\nBecause of hard disk seeking times, a separate disk for WAL will be a lot \nbetter.\n\nregards\n", "msg_date": "Thu, 10 Mar 2005 17:41:28 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Thanks to all for the tips.\n\nOn Thu, 2005-03-10 at 09:26 -0600, John A Meinel wrote:\n> How critical is your data? How update heavy versus read heavy, etc are you? \n\nLarge, relatively infrequent uploads, with frequent reads. The\napplication is a web front-end to scientific research data. The\nscientists have their own copy of the data, so if something went really\nbad, we could probably get them to upload again.\n\n> Do you have a way to restore the database if something fails? If \n> you do nightly pg_dumps, will you survive if you lose a days worth of \n> transactions?\n\nFor now, we have access to a terabyte backup server, and the DB is small\nenough that my sysadmin lets me have hourly pg_dumps for last 24 hours\nbacked up nightly. Veritas is configured to save daily pg_dumps for the\nlast week, a weekly dump for the last month and a monthly version for\nthe last 6 months.\n\n> In general I would recommend RAID1, because that is the safe bet. If \n> your db is the bottleneck, and your data isn't all that critical, and \n> you are read heavy, I would probably go with RAID1, if you are write \n> heavy I would say 2 independent disks.\n\nI feel that we have enough data safety such that I want to go for speed.\nSome of the queries are very large joins, and I am going for pure\nthroughput at this point - unless someone can find a hole in my backup\ntactic.\n\nOf course, later we will have money to throw at more spindles. 
But for\nnow, I am trying gaze in to the future and maximize my current\ncapabilities.\n\n\nSeems to me that the \"best\" solution would be:\n\n* disk 0 partition 1..n - os mounts\n partition n+1 - /var/lib/postgres/data/pg_xlog\n\n* disk 1 partition 1 - /var/lib/postgres/data\n\n* Further (safe) performance gains can be had by adding more spindles as\nsuch: \n - first disk: RAID1 to disk 1\n - next 2 disks: RAID 0 across the above\n\nDo I grok it?\n\nThanks again,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n", "msg_date": "Thu, 10 Mar 2005 12:50:31 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Karim Nassar wrote:\n\n>Thanks to all for the tips.\n> \n>\n...\n\n>>In general I would recommend RAID1, because that is the safe bet. If \n>>your db is the bottleneck, and your data isn't all that critical, and \n>>you are read heavy, I would probably go with RAID1, if you are write \n>> \n>>\n ^^^^^ -> RAID0\n\n>>heavy I would say 2 independent disks.\n>> \n>>\n>\n>I feel that we have enough data safety such that I want to go for speed.\n>Some of the queries are very large joins, and I am going for pure\n>throughput at this point - unless someone can find a hole in my backup\n>tactic.\n>\n>Of course, later we will have money to throw at more spindles. But for\n>now, I am trying gaze in to the future and maximize my current\n>capabilities.\n>\n>\n>Seems to me that the \"best\" solution would be:\n>\n>* disk 0 partition 1..n - os mounts\n> partition n+1 - /var/lib/postgres/data/pg_xlog\n>\n>* disk 1 partition 1 - /var/lib/postgres/data\n>\n>* Further (safe) performance gains can be had by adding more spindles as\n>such: \n> - first disk: RAID1 to disk 1\n> - next 2 disks: RAID 0 across the above\n> \n>\nSounds decent to me.\nI did make the mistake that you might want to consider a RAID0. But the \nperformance gains might be small, and you potentially lose everything.\nBut your update strategy seems dead on.\n\n>Do I grok it?\n>\n>Thanks again,\n> \n>\n\nJohn\n=:->", "msg_date": "Thu, 10 Mar 2005 13:56:03 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" }, { "msg_contents": "Karim Nassar wrote:\n> Thanks to all for the tips.\n> \n> On Thu, 2005-03-10 at 09:26 -0600, John A Meinel wrote:\n> \n>>How critical is your data? How update heavy versus read heavy, etc are you? \n> \n> \n> Large, relatively infrequent uploads, with frequent reads. The\n> application is a web front-end to scientific research data. The\n> scientists have their own copy of the data, so if something went really\n> bad, we could probably get them to upload again.\n\nIf you have very few updates and your reads aren't mostly from RAM you \ncould be better off with simply mirroring (assuming that gains you read \nbandwidth). Failing that, use the tablespace feature to balance your \nread load as far as you can.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 11 Mar 2005 08:05:57 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's better: Raid 0 or disk for seperate pg_xlog" } ]
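Richard's closing suggestion to balance read load with the tablespace feature (new in 8.0) might look like the sketch below; the mount point and table are hypothetical, and the directory must already exist and be owned by the postgres user:

CREATE TABLESPACE spindle2 LOCATION '/disk2/pg_data';
CREATE TABLE upload_results (id integer, payload text) TABLESPACE spindle2;
-- An existing table can be moved later:
ALTER TABLE upload_results SET TABLESPACE pg_default;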
[ { "msg_contents": "Hi there!\n\nI think I may have a problem with the statistics in my postgresql 8.0\nrunning under Windowx XP. When I view both pg_stat_all_tables and\npg_stat_all_indexes, all the numeric columns that should hold the\nstatistics are 0 (zero). My configuration file has the following:\n\nstats_start_collector = true\nstats_command_string = true\nstats_reset_on_server_start = false\n\nAny tip?\n\nThanks in advance,\n\nHugo Ferreira\n\n-- \nGPG Fingerprint: B0D7 1249 447D F5BB 22C5 5B9B 078C 2615 504B 7B85\n", "msg_date": "Fri, 11 Mar 2005 11:53:18 +0000", "msg_from": "Hugo Ferreira <[email protected]>", "msg_from_op": true, "msg_subject": "Statistics not working??" }, { "msg_contents": "On Fri, 11 Mar 2005, Hugo Ferreira wrote:\n\n> Hi there!\n>\n> I think I may have a problem with the statistics in my postgresql 8.0\n> running under Windowx XP. When I view both pg_stat_all_tables and\n> pg_stat_all_indexes, all the numeric columns that should hold the\n> statistics are 0 (zero). My configuration file has the following:\n>\n> stats_start_collector = true\n> stats_command_string = true\n> stats_reset_on_server_start = false\n>\n> Any tip?\n\nYou need to define stats_block_level and/or stats_row_level\n\n\n>\n> Thanks in advance,\n>\n> Hugo Ferreira\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Fri, 11 Mar 2005 15:03:15 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Statistics not working??" } ]
[ { "msg_contents": "why this query needs more time? Its very slow\n\nthx\n\n//////////////////////////////////QUERY\nselect\n coalesce(personaldetails.masterid::numeric,personaldetails.id) +\n(coalesce(personaldetails.id::numeric,0)/1000000) as sorting,\n floor(coalesce(personaldetails.masterid::numeric,personaldetails.id) +\n(coalesce(personaldetails.id::numeric,0)/1000000)) as ppid,\n personaldetails.id as pid,\n personaldetails.masterid,\n coalesce(personaldetails.prefix,'') || '' ||\ncoalesce(personaldetails.firstname,' ') || ' ' ||\ncoalesce(personaldetails.lastname,'''') as fullname,\n personaldetails.regtypeid,\n personaldetails.regdate,\n personaldetails.regprice,\n coalesce(regtypes.regtype,' ') || ' ' ||\ncoalesce(regtypes.subregtype,' ') as regtypetitle,\n regtypes.regtype,\n regtypes.subregtype,\n regtypedates.title,\n balance('MASTER-REGISTRATION',personaldetails.id) as balance,\n coalesce(pd2.prefix,' ') || ' ' || coalesce(pd2.firstname,' ') ||\n' ' || coalesce(pd2.lastname,' ') as accfullname,\n coalesce(rt2.regtype,'''') || ' ' || coalesce(rt2.subregtype,' ') as\naccregtypetitle,\n pd2.id as accid,\n pd2.regtypeid as accregtypeid,\n pd2.regdate as accregdate,\n pd2.regprice as accregprice,\n rt2.regtype as accregtype,\n rt2.subregtype as accsubregtype,\n rd2.title as acctitle,\n balance('MASTER-REGISTRATION',pd2.id) as accbalance,\n case when coalesce(balance('REGISTRATION',personaldetails.id),0)<=0\nthen 1 else 0 end as balancestatus\n\nfrom personaldetails\nleft outer join regtypes on regtypes.id=personaldetails.regtypeid\nleft outer join regtypedates on regtypes.dateid=regtypedates.id\nleft outer join personaldetails pd2 on personaldetails.id=pd2.masterid\nleft outer join regtypes rt2 on rt2.id=pd2.regtypeid\nleft outer join regtypedates rd2 on rt2.dateid=rd2.id\nwhere personaldetails.masterid is null\n///////////////////////////////////////////////////// RESULT STATISTICS\nTotal query runtime: 348892 ms.\nData retrieval runtime: 311 ms.\n763 rows retrieved.\n//////////////////////////////////////////////////// EXPLAIN QUERY\n\nHash Left Join (cost=109.32..109.95 rows=5 width=434)\n Hash Cond: (\"outer\".dateid = \"inner\".id)\n -> Merge Left Join (cost=108.27..108.46 rows=5 width=409)\n Merge Cond: (\"outer\".regtypeid = \"inner\".id)\n -> Sort (cost=106.19..106.20 rows=5 width=347)\n Sort Key: pd2.regtypeid\n -> Hash Left Join (cost=90.11..106.13 rows=5 width=347)\n Hash Cond: (\"outer\".id = \"inner\".masterid)\n -> Hash Left Join (cost=45.49..45.71 rows=5 width=219)\n Hash Cond: (\"outer\".dateid = \"inner\".id)\n -> Merge Left Join (cost=44.44..44.63 rows=5\nwidth=194)\n Merge Cond: (\"outer\".regtypeid = \"inner\".id)\n -> Sort (cost=42.36..42.37 rows=5\nwidth=132)\n Sort Key: personaldetails.regtypeid\n -> Seq Scan on personaldetails\n(cost=0.00..42.30 rows=5 width=132)\n Filter: (masterid IS NULL)\n -> Sort (cost=2.08..2.16 rows=31 width=66)\n Sort Key: regtypes.id\n -> Seq Scan on regtypes\n(cost=0.00..1.31 rows=31 width=66)\n -> Hash (cost=1.04..1.04 rows=4 width=33)\n -> Seq Scan on regtypedates\n(cost=0.00..1.04 rows=4 width=33)\n -> Hash (cost=42.30..42.30 rows=930 width=132)\n -> Seq Scan on personaldetails pd2\n(cost=0.00..42.30 rows=930 width=132)\n -> Sort (cost=2.08..2.16 rows=31 width=66)\n Sort Key: rt2.id\n -> Seq Scan on regtypes rt2 (cost=0.00..1.31 rows=31\nwidth=66)\n -> Hash (cost=1.04..1.04 rows=4 width=33)\n -> Seq Scan on regtypedates rd2 (cost=0.00..1.04 rows=4 width=33)\n\n\n\n", "msg_date": "Fri, 11 Mar 2005 13:54:56 +0200", "msg_from": 
"\"AL��� ���EL���K\" <[email protected]>", "msg_from_op": true, "msg_subject": "more execution time" }, { "msg_contents": "ALᅵ ᅵELᅵK wrote:\n> why this query needs more time? Its very slow\n\nDifficult to say for sure - could you provide the output of EXPLAIN \nANALYSE rather than just EXPLAIN?\n\nSome other immediate observations:\n1. Perhaps don't post to so many mailing lists at once. If you reply to \nthis, maybe reduce it to pgsql-performance?\n2. You don't say whether the row estimates are accurate in the EXPLAIN.\n3. You seem to be needlessly coalescing personaldetails.masterid since \nyou check for it being null in your WHERE clause\n4. Do you really need to cast to numeric and generate a \"sorting\" column \nthat you then don't ORDER BY?\n5. Is ppid an id number? And are you sure it's safe to calculate it like \nthat?\n6. What is balance() and how long does it take to calculate its result?\n\n> select\n> coalesce(personaldetails.masterid::numeric,personaldetails.id) +\n> (coalesce(personaldetails.id::numeric,0)/1000000) as sorting,\n> floor(coalesce(personaldetails.masterid::numeric,personaldetails.id) +\n> (coalesce(personaldetails.id::numeric,0)/1000000)) as ppid,\n\n> balance('MASTER-REGISTRATION',personaldetails.id) as balance,\n\n> balance('MASTER-REGISTRATION',pd2.id) as accbalance,\n\nI'm guessing point 6 is actually your problem - try it without the calls \nto balance() and see what that does to your timings.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 11 Mar 2005 13:05:28 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] more execution time" } ]
[ { "msg_contents": "Hi all,\n\nIs the number of rows in explain the number of rows that is expected to be visited or retrieved?\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Fri, 11 Mar 2005 14:45:12 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "What is the number of rows in explain?" }, { "msg_contents": "Joost Kraaijeveld wrote:\n\n>Hi all,\n>\n>Is the number of rows in explain the number of rows that is expected to be visited or retrieved?\n>\n>Groeten,\n>\n>Joost Kraaijeveld\n>Askesis B.V.\n>Molukkenstraat 14\n>6524NB Nijmegen\n>tel: 024-3888063 / 06-51855277\n>fax: 024-3608416\n>e-mail: [email protected]\n>web: www.askesis.nl \n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\nIn general, it is the number of rows expected to be retrieved. Since a \nSequential Scan always visits every row, but the rows= number is after \nfiltering.\n\nJohn\n=:->", "msg_date": "Fri, 11 Mar 2005 08:43:45 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the number of rows in explain?" } ]
[ { "msg_contents": "Hi List,\n\ni have a query plan who is bad with standard cpu_tuple_costs and good if \nI raise cpu_tuple_costs. Is it is a good practice to raise them if i \nwant to force postgres to use indexes more often? Or is it is better to \ndisable sequence scans?\n\nCIMSOFT=# ANALYSE mitpln;\nANALYZE\n\nCIMSOFT=# EXPLAIN ANALYSE SELECT * FROM mitpln WHERE \ndate_to_yearmonth_dec(mpl_date)='20050';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Seq Scan on mitpln (cost=0.00..1411.85 rows=2050 width=69) (actual \ntime=562.000..1203.000 rows=1269 loops=1)\n Filter: ((date_to_yearmonth_dec((mpl_date)::timestamp without time \nzone))::text = '20050'::text)\n Total runtime: 1203.000 ms\n(3 rows)\n\nCIMSOFT=# SET cpu_tuple_cost = 0.07;\nSET\nCIMSOFT=# EXPLAIN ANALYSE SELECT * FROM mitpln WHERE \ndate_to_yearmonth_dec(mpl_date)='20050';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Index Scan using mitpln_yearmonth_dec on mitpln (cost=0.00..2962.86 \nrows=2050width=69) (actual time=0.000..0.000 rows=1269 loops=1)\n Index Cond: ((date_to_yearmonth_dec((mpl_date)::timestamp without \ntime zone))::text = '20050'::text)\n Total runtime: 16.000 ms\n(3 rows)\n\n\nCIMSOFT=# \\d mitpln\n Table \"public.mitpln\"\n Column | Type | Modifiers\n\n--------------+-----------------------+-----------------------------------------\n mpl_id | integer | not null default \nnextval('public.mitpln_mpl_id_seq'::text)\n mpl_date | date |\n mpl_minr | integer | not null\n mpl_tpl_name | character varying(20) |\n mpl_feiertag | character varying(50) |\n mpl_min | real |\n mpl_saldo | real |\n mpl_buch | boolean | not null default false\n mpl_absaldo | real |\n mpl_vhz | real |\n dbrid | character varying | default nextval('db_id_seq'::text)\nIndexes:\n \"mitpln_pkey\" PRIMARY KEY, btree (mpl_id)\n \"mitpln_idindex\" UNIQUE, btree (dbrid)\n \"xtt5126\" UNIQUE, btree (mpl_date, mpl_minr)\n \"mitpln_yearmonth_dec\" btree \n(date_to_yearmonth_dec(mpl_date::timestamp with\nout time zone))\n\n\nCIMSOFT=# SELECT count(*) FROM mitpln;\n count\n-------\n 26128\n(1 row)\n", "msg_date": "Fri, 11 Mar 2005 15:25:29 +0100", "msg_from": "Daniel Schuchardt <[email protected]>", "msg_from_op": true, "msg_subject": "cpu_tuple_cost" }, { "msg_contents": "I have forgotten this :\n\nCREATE OR REPLACE FUNCTION date_to_yearmonth_dec(TIMESTAMP) RETURNS \nVARCHAR AS'\nBEGIN\n RETURN extract(year FROM $1) || extract(month FROM $1)-1;\nEND'LANGUAGE plpgsql IMMUTABLE;\n", "msg_date": "Fri, 11 Mar 2005 15:27:46 +0100", "msg_from": "Daniel Schuchardt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "Daniel Schuchardt <[email protected]> writes:\n> i have a query plan who is bad with standard cpu_tuple_costs and good if \n> I raise cpu_tuple_costs. 
Is it is a good practice to raise them if i \n> want to force postgres to use indexes more often?\n\nReducing random_page_cost is usually the best way to get the planner to\nfavor indexscans more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 01:29:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Reducing random_page_cost is usually the best way to get the\n> planner to favor indexscans more.\n\nOn that note, can I raise the idea again of dropping the default\nvalue for random_page_cost in postgresql.conf? I think 4 is too\nconservative in this day and age. Certainly the person who will\nbe negatively impacted by a default drop of 4 to 3 will be the\nexception and not the rule.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200503140702\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCNX2avJuQZxSWSsgRAk7QAJ4lye7pEcQIWMRV2fs15bHGY2zBbACeJtLC\nE/vUG/lagjcyWPt9gfngsn0=\n=CKIq\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 15 Mar 2005 02:05:01 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "Greg Sabino Mullane wrote:\n[ There is text before PGP section. ]\n> \n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> > Reducing random_page_cost is usually the best way to get the\n> > planner to favor indexscans more.\n> \n> On that note, can I raise the idea again of dropping the default\n> value for random_page_cost in postgresql.conf? I think 4 is too\n> conservative in this day and age. Certainly the person who will\n> be negatively impacted by a default drop of 4 to 3 will be the\n> exception and not the rule.\n\nAgreed. I think we should reduce it at least to 3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Mar 2005 21:17:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "\"Greg Sabino Mullane\" <[email protected]> writes:\n> On that note, can I raise the idea again of dropping the default\n> value for random_page_cost in postgresql.conf? I think 4 is too\n> conservative in this day and age. Certainly the person who will\n> be negatively impacted by a default drop of 4 to 3 will be the\n> exception and not the rule.\n\nThe ones who'd be negatively impacted are the ones we haven't\nbeen hearing from ;-). To assume that they aren't out there\nis a logical fallacy.\n\nI still think that 4 is about right for large databases (where\n\"large\" is in comparison to available RAM).\n\nAlso, to the extent that we think these numbers mean anything at all,\nwe should try to keep them matching the physical parameters we think\nthey represent. I think that the \"reduce random_page_cost\" mantra\nis not an indication that that parameter is wrong, but that the\ncost models it feeds into need more work. One thing we *know*\nis wrong is the costing of nestloop inner indexscans: there needs\nto be a correction for caching of index blocks across repeated\nscans. I've looked at this a few times but not come up with\nanything that seemed convincing. 
Another thing I've wondered\nabout more than once is if we shouldn't discount fetching of\nhigher-level btree pages on the grounds that they're probably\nin RAM already, even if the indexscan isn't inside a loop.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 21:23:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "Greg,\n\n> On that note, can I raise the idea again of dropping the default\n> value for random_page_cost in postgresql.conf? I think 4 is too\n> conservative in this day and age. Certainly the person who will\n> be negatively impacted by a default drop of 4 to 3 will be the\n> exception and not the rule.\n\nI don't agree. The defaults are there for people who aren't going to read \nenough of the documentation to set them. As such, conservative for the \ndefaults is appropriate.\n\nIf we were going to change anything automatically, it would be to set \neffective_cache_size to 1/3 of RAM at initdb time. However, I don't know any \nmethod to determine RAM size that works on all the platforms we support.\n\nTom,\n\n> Also, to the extent that we think these numbers mean anything at all,\n> we should try to keep them matching the physical parameters we think\n> they represent. \n\nPersonally, what I would love to see is the system determining and caching \nsome of these parameters automatically. For example, in a database which \nhas been running in production for a couple of days, it should be possible to \ndetermine the ratio of average random seek tuple cost to average seq scan \ntuple cost.\n\nOther parameters should really work the same way. Effective_cache_size, for \nexample, is a blunt instrument to replace what the database should ideally do \nthrough automated interactive fine tuning. Particularly since we have 2 \nseparate caches (or 3, if you count t1 and t2 from 2Q). What the planner \nreally needs to know is: is this table or index already in the t1 or t2 cache \n(can't we determine this?)? How likely is it to be in the filesystem cache? \nThe latter question is not just one of size (table < memory), but one of \nfrequency of access.\n\nOf course, this stuff is really, really hard which is why we rely on the \nGUCs ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 14 Mar 2005 22:10:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nJosh Berkus wrote:\n> I don't agree. The defaults are there for people who aren't going to read\n> enough of the documentation to set them. As such, conservative for the\n> defaults is appropriate.\n\nSure, but I would argue that 4 is *too* conservative. We've certainly changed\nother values over the years. I see it as those most affected by this change\nare those who are least likely to have the know-how to change the default,\nand are also the majority of our users. I've often had to reduce this, working\non many databases, on many versions of PostgreSQL. Granted, I don't work on\nany huge, complex, hundreds of gig databases, but that supports my point -\nif you are really better off with a /higher/ (than 3) random_page_cost, you\nalready should be tweaking a lot of stuff yourself anyway. 
Tom Lane has a\ngood point about tweaking other default parameters as well, and that's a\nworthy goal, but I don't think extended searching for a \"sweet spot\" should\nprevent us from making a small yet important (IMO!) change in the default\nof this one variable.\n\nN.B. My own personal starting default is 2, but I thought 3 was a nice\nmiddle ground more likely to reach consensus here. :)\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200503141727\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCNhFAvJuQZxSWSsgRAgZiAJ9947emxFoMMXKooJHi2ZPIQr9xGACgjaFf\nhBCPTuHZwGFzomf1Z1TDpVo=\n=KX9t\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 15 Mar 2005 12:35:19 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the random_page_cost default (was: cpu_tuple_cost)" }, { "msg_contents": "\nOn Mar 15, 2005, at 6:35 AM, Greg Sabino Mullane wrote:\n\n> Granted, I don't work on\n> any huge, complex, hundreds of gig databases, but that supports my \n> point -\n> if you are really better off with a /higher/ (than 3) \n> random_page_cost, you\n> already should be tweaking a lot of stuff yourself anyway.\n\nI think this is a good point. The people that tend to benefit from the \nlower cost are precisely the people least likely to know to change it. \nIt's the \"install & go\" crowd with smaller databases and only a few \nusers/low concurrency that expect it to \"just work\". The bigger \ninstallations are more like to have dedicated DB admins that understand \ntuning.\n\nWasn't there an idea on the table once to ship with several different \nconfiguration files with different defaults for small, medium, large, \netc. installs? Wouldn't it make sense to ask the user during initdb to \npick from one of the default config files? Or even have a few simple \nquestions like \"How much memory do you expect to be available to \nPostgreSQL?\" and \"How many concurrent users do you expect to have?\". \nIt's one thing to know how much memory is in a machine, it quite \nanother thing to know how much the user wants dedicated to PostgreSQL. \nA couple of questions like that can go a long way to coming up with \nbetter ballpark figures.\n\n--\nJeff Hoffmann\[email protected]\n\n", "msg_date": "Tue, 15 Mar 2005 07:31:19 -0600", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the random_page_cost default (was: cpu_tuple_cost)" }, { "msg_contents": "\"Greg Sabino Mullane\" <[email protected]> writes:\n> N.B. My own personal starting default is 2, but I thought 3 was a nice\n> middle ground more likely to reach consensus here. :)\n\nYour argument seems to be \"this produces nice results for me\", not\n\"I have done experiments to measure the actual value of the parameter\nand it is X\". I *have* done experiments of that sort, which is where\nthe default of 4 came from. I remain of the opinion that reducing\nrandom_page_cost is a band-aid that compensates (but only partially)\nfor problems elsewhere. We can see that it's not a real fix from\nthe not-infrequent report that people have to reduce random_page_cost\nbelow 1.0 to get results anywhere near local reality. 
That doesn't say\nthat the parameter value is wrong, it says that the model it's feeding\ninto is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Mar 2005 10:22:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the random_page_cost default (was: cpu_tuple_cost) " }, { "msg_contents": "Tom Lane wrote:\n> \"Greg Sabino Mullane\" <[email protected]> writes:\n> \n>>N.B. My own personal starting default is 2, but I thought 3 was a nice\n>>middle ground more likely to reach consensus here. :)\n> \n> \n> Your argument seems to be \"this produces nice results for me\", not\n> \"I have done experiments to measure the actual value of the parameter\n> and it is X\". I *have* done experiments of that sort, which is where\n> the default of 4 came from. I remain of the opinion that reducing\n> random_page_cost is a band-aid that compensates (but only partially)\n> for problems elsewhere. We can see that it's not a real fix from\n> the not-infrequent report that people have to reduce random_page_cost\n> below 1.0 to get results anywhere near local reality. That doesn't say\n> that the parameter value is wrong, it says that the model it's feeding\n> into is wrong.\n> \n\nI would like to second that. A while back I performed a number of \nexperiments on differing hardware and came to the conclusion that *real* \nrandom_page_cost was often higher than 4 (like 10-15 for multi-disk raid \n systems).\n\nHowever I have frequently adjusted Pg's random_page_cost to be less than \n4 - if it helped queries perform better.\n\nSo yes, it looks like the model is the issue - not the value of the \nparameter!\n\nregards\n\nMark\n\n", "msg_date": "Wed, 16 Mar 2005 11:44:06 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the random_page_cost default (was:" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Your argument seems to be \"this produces nice results for me\", not\n> \"I have done experiments to measure the actual value of the parameter\n> and it is X\". I *have* done experiments of that sort, which is where\n> the default of 4 came from. I remain of the opinion that reducing\n> random_page_cost is a band-aid that compensates (but only partially)\n> for problems elsewhere. We can see that it's not a real fix from\n> the not-infrequent report that people have to reduce random_page_cost\n> below 1.0 to get results anywhere near local reality. That doesn't say\n> that the parameter value is wrong, it says that the model it's feeding\n> into is wrong.\n\nGood points: allow me to rephrase my question then:\n\nWhen I install a new version of PostgreSQL and start testing my\napplications, one of the most common problems is that many of my queries\nare not hitting an index. I typically drop random_page_cost to 2 or\nlower and this speeds things very significantly. How can I determine a\nbetter way to speed up my queries, and why would this be advantageous\nover simply dropping random_page_cost? 
How can I use my particular\nsituation to help develop a better model and perhaps make the defaults\nwork better for my queries and other people with databaes like mine.\n(fairly simple schema, not too large (~2 Gig total), SCSI, medium to\nhigh complexity queries, good amount of RAM available)?\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200503150600\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCNsCbvJuQZxSWSsgRAs0sAJwLFsGApzfYNV5jPL0gGVW5BH37hwCfRSW8\ned3sLnMg1UOTgN3oL9JSIFo=\n=cZIe\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 16 Mar 2005 01:03:10 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the random_page_cost default (was: cpu_tuple_cost) " }, { "msg_contents": "On Mon, 14 Mar 2005 21:23:29 -0500, Tom Lane <[email protected]> wrote:\n> I think that the \"reduce random_page_cost\" mantra\n>is not an indication that that parameter is wrong, but that the\n>cost models it feeds into need more work.\n\nOne of these areas is the cost interpolation depending on correlation.\nThis has been discussed on -hackes in October 2002 and August 2003\n(\"Correlation in cost_index()\"). My Postgres installations contain the\npatch presented during that discussion (and another index correlation\npatch), and I use *higher* values for random_page_cost (up to 10).\n\nServus\n Manfred\n", "msg_date": "Thu, 17 Mar 2005 09:20:55 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "Tom Lane wrote:\n\n> \n> Reducing random_page_cost is usually the best way to get the planner to\n> favor indexscans more.\n> \n\nOk, I tried a bit with random_page_cost and I have set it to 1 to become \n PG using the index on mitpln:\n\nCIMSOFT=# ANALYSE mitpln;\nANALYZE\nCIMSOFT=# SET random_page_cost=2;\nSET\nCIMSOFT=# EXPLAIN ANALYSE SELECT * FROM mitpln WHERE \ndate_to_yearmonth_dec(mpl_date)='20050';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n\n Seq Scan on mitpln (cost=0.00..1173.78 rows=1431 width=69) (actual \ntime=219.000..1125.000 rows=1266 loops=1)\n Filter: ((date_to_yearmonth_dec((mpl_date)::timestamp without time \nzone))::text = '20050'::text)\n Total runtime: 1125.000 ms\n(3 rows)\n\nCIMSOFT=# SET random_page_cost=1;\nSET\nCIMSOFT=# EXPLAIN ANALYSE SELECT * FROM mitpln WHERE \ndate_to_yearmonth_dec(mpl_date)='20050';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Index Scan using mitpln_yearmonth_dec on mitpln (cost=0.00..699.01 \nrows=1431 width=69) (actual time=0.000..16.000 rows=1266 loops=1)\n Index Cond: ((date_to_yearmonth_dec((mpl_date)::timestamp without \ntime zone))::text = '20050'::text)\n Total runtime: 16.000 ms\n(3 rows)\n\n\nCIMSOFT=# \\d mitpln\n Table \"public.mitpln\"\n Column | Type | Modifiers\n\n--------------+-----------------------+-----------------------------------------\n mpl_id | integer | not null default \nnextval('public.mitpln_mpl_id_seq'::text)\n mpl_date | date |\n mpl_minr | integer | not null\n mpl_tpl_name | character varying(20) |\n mpl_feiertag | character varying(50) |\n mpl_min | real |\n mpl_saldo | real |\n mpl_buch | boolean | not null default false\n mpl_absaldo | real |\n mpl_vhz | real |\n dbrid | character varying | default nextval('db_id_seq'::text)\nIndexes:\n \"mitpln_pkey\" PRIMARY KEY, btree 
(mpl_id)\n \"mitpln_idindex\" UNIQUE, btree (dbrid)\n \"xtt5126\" UNIQUE, btree (mpl_date, mpl_minr)\n \"mitpln_yearmonth_dec\" btree \n(date_to_yearmonth_dec(mpl_date::timestamp with\nout time zone))\n \"mpl_minr\" btree (mpl_minr)\n \"mpl_minr_nobuch\" btree (mpl_minr) WHERE NOT mpl_buch\n\n\nCIMSOFT=# SELECT count(*) FROM mitpln;\n count\n-------\n 26330\n(1 row)\n\n\nCREATE OR REPLACE FUNCTION date_to_yearmonth_dec(TIMESTAMP) RETURNS \nVARCHAR AS'\nBEGIN\n RETURN extract(year FROM $1) || extract(month FROM $1)-1;\nEND'LANGUAGE plpgsql IMMUTABLE;\n\n\nDaniel\n\nPS : thats a 2.4 GHZ P4 Server with 1 GB Ram and RAID - SCSI\n(WIN2000, PG8.0.1)\n", "msg_date": "Thu, 17 Mar 2005 10:37:05 +0100", "msg_from": "Daniel Schuchardt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu_tuple_cost" } ]
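For anyone repeating Daniel's experiment, the planner parameters can also be changed per transaction instead of in postgresql.conf, which leaves the conservative server-wide default alone while plans are compared. This reuses the mitpln query from the thread; SET LOCAL (available in 7.4 and 8.0) reverts automatically when the transaction ends.

BEGIN;
SET LOCAL random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM mitpln WHERE date_to_yearmonth_dec(mpl_date)='20050';
ROLLBACK;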
[ { "msg_contents": "Hi all,\n\nI'm preparing a set of servers which will eventually need to handle a high \nvolume of queries (both reads and writes, but most reads are very simple \nindex-based queries returning a limited set of rows, when not just one), \nand I would like to optimize things as much as possible, so I have a few \nquestions on the exact way PostgreSQL's MVCC works, and how transactions, \nupdates and vacuuming interact. I hope someone will be able to point me in \nthe right direction (feel free to give pointers if I missed the places \nwhere this is described).\n\n From what I understand (and testing confirms it), bundling many queries in \none single transaction is more efficient than having each query be a \nseparate transaction (like with autocommit on). However, I wonder about the \nlimits of this:\n\n- are there any drawbacks to grouping hundreds or thousands of queries \n(inserts/updates) over several minutes in one single transaction? Other \nthan the fact that the inserts/updates will not be visible until committed, \nof course. Essentially turning autocommit off, and doing a commit once in a \nwhile.\n\n- does this apply only to inserts/selects/updates or also for selects? \nAnother way to put this is: does a transaction with only one select \nactually have much transaction-related work to do? Or, does a transaction \nwith only selects actually have any impact anywhere? Does it really leave a \ntrace anywhere? Again, I understand that selects grouped in a transaction \nwill not see updates done after the start of the transaction (unless done \nby the same process).\n\n- if during a single transaction several UPDATEs affect the same row, will \nMVCC generate as many row versions as there are updates (like would be the \ncase with autocommit) or will they be grouped into one single row version?\n\nAnother related issue is that many of the tables are indexed on a date \nfield, and one process does a lot of updates on \"recent\" rows (which lead \nto many dead tuples), but after that \"older\" rows tend to remain pretty \nmuch unchanged for quite a while. Other than splitting the tables into \n\"old\" and \"recent\" tables, is there any way to make vacuum more efficient? \nScanning the whole table for dead tuples when only a small portion of the \ntable actually has any does not feel like being very efficient in this \nsituation.\n\nOther issue: every five minutes or so, I see a noticeable performance drop \nas PostgreSQL checkpoints. This is 7.4.3 with pretty lousy hardware, I know \n8.0 with decent hardware and separate disk(s) for pg_xlog will definitely \nhelp, but I really wonder if there is any way to reduce the amount of work \nthat needs to be done at that point (I'm a strong believer of fixing \nsoftware before hardware). I have already bumped checkpoint_segments to 8, \nbut I'm not quite sure I understand how this helps (or doesn't help) \nthings. Logs show 3 to 6 \"recycled transaction log file\" lines at that \ntime, that seems quite a lot of work for a load that's still pretty low. \nDoes grouping of more queries in transactions help with this? Are there \nother parameters that can affect things, or is just a matter of how much \ninserts/updates/deletes are done, and the amount of data that was changed?\n\nLast point: some of the servers have expandable data (and will be \nreplicated with slony-I) and will run with fsync off. 
I have read \nconflicting statements as to what exactly this does: some sources indicate \nthat setting fsync off actually switches off WAL/checkpointing, others that \nit just prevents the fsync (or equivalent) system calls. Since I still see \ncheckpointing in that case, I guess it's not exactly the former, but I \nwould love to understand more about it. Really, I would love to be able to \nset some tables or databases to \"go as fast as you can and don't worry \nabout transactions, MVCC or anything like that\", but I'm not sure that \noption exists...\n\nThanks,\n\nJacques.\n\n\n", "msg_date": "Fri, 11 Mar 2005 16:40:49 +0100", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": true, "msg_subject": "Performance tuning" }, { "msg_contents": "\nHello All,\n\nI have a couple of questions about running 2 databases:\n\n1) on a single 7.4.6 postgres instance does each database have it own WAL\n file or is that shared? Is it the same on 8.0.x?\n\n2) what's the high performance way of moving 200 rows between similar\n tables on different databases? Does it matter if the databases are\n on the same or seperate postgres instances?\n\nBackground:\nMy web app does lots of inserts that aren't read until a session is \ncomplete. The plan is to put the heavy insert session onto a ramdisk based \npg-db and transfer the relevant data to the master pg-db upon session \ncompletion. Currently running 7.4.6.\n\nIndividual session data is not as critical as the master pg-db so the risk \nassociated with running the session pg-db on a ramdisk is acceptable. \nAll this is to get past the I/O bottleneck, already tweaked the config \nfiles, run on multiple RAID-1 spindles, profiled the queries, maxed \nthe CPU/ram. Migrating to 64bit fedora soon.\n\nThanks, this mailing list has been invaluable.\n\nJelle\n", "msg_date": "Fri, 11 Mar 2005 10:54:30 -0800 (PST)", "msg_from": "jelle <[email protected]>", "msg_from_op": false, "msg_subject": "Questions about 2 databases." }, { "msg_contents": "jelle <[email protected]> writes:\n> 1) on a single 7.4.6 postgres instance does each database have it own WAL\n> file or is that shared? Is it the same on 8.0.x?\n\nShared.\n\n> 2) what's the high performance way of moving 200 rows between similar\n> tables on different databases? Does it matter if the databases are\n> on the same or seperate postgres instances?\n\nCOPY would be my recommendation. For a no-programming-effort solution\nyou could just pipe the output of pg_dump --data-only -t mytable\ninto psql. Not sure if it's worth developing a custom application to\nreplace that.\n\n> My web app does lots of inserts that aren't read until a session is \n> complete. The plan is to put the heavy insert session onto a ramdisk based \n> pg-db and transfer the relevant data to the master pg-db upon session \n> completion. Currently running 7.4.6.\n\nUnless you have a large proportion of sessions that are abandoned and\nhence never need be transferred to the main database at all, this seems\nlike a dead waste of effort :-(. The work to put the data into the main\ndatabase isn't lessened at all; you've just added extra work to manage\nthe buffer database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Mar 2005 15:33:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about 2 databases. " }, { "msg_contents": "On Fri, 11 Mar 2005, Tom Lane wrote:\n\n[ snip ]\n\n> COPY would be my recommendation. 
For a no-programming-effort solution\n> you could just pipe the output of pg_dump --data-only -t mytable\n> into psql. Not sure if it's worth developing a custom application to\n> replace that.\n\nI'm a programming-effort kind of guy so I'll try COPY.\n\n>\n>> My web app does lots of inserts that aren't read until a session is\n>> complete. The plan is to put the heavy insert session onto a ramdisk based\n>> pg-db and transfer the relevant data to the master pg-db upon session\n>> completion. Currently running 7.4.6.\n>\n> Unless you have a large proportion of sessions that are abandoned and\n> hence never need be transferred to the main database at all, this seems\n> like a dead waste of effort :-(. The work to put the data into the main\n> database isn't lessened at all; you've just added extra work to manage\n> the buffer database.\n\nThe insert heavy sessions average 175 page hits generating XML, 1000 \ninsert/updates which comprise 90% of the insert/update load, of which 200 \ninserts need to be transferred to the master db. The other sessions are \nread/cache bound. I hoping to get a speed-up from moving the temporary \nstuff off the master db and using 1 transaction instead of 175 to the disk \nbased master db.\n\nThanks,\nJelle\n", "msg_date": "Fri, 11 Mar 2005 13:43:00 -0800 (PST)", "msg_from": "jelle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about 2 databases. " }, { "msg_contents": "\n\n> My web app does lots of inserts that aren't read until a session is \n> complete. The plan is to put the heavy insert session onto a ramdisk \n> based pg-db and transfer the relevant data to the master pg-db upon \n> session completion. Currently running 7.4.6.\n\n\tFrom what you say I'd think you want to avoid making one write \ntransaction to the main database on each page view, right ?\n\tYou could simply store the data in a file, and at the end of the session, \nread the file and do all the writes in one transaction.\n", "msg_date": "Sat, 12 Mar 2005 03:15:21 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about 2 databases." }, { "msg_contents": "Jacques Caron wrote:\n> I'm preparing a set of servers which will eventually need to handle a \n> high volume of queries (both reads and writes, but most reads are very \n> simple index-based queries returning a limited set of rows, when not \n> just one), and I would like to optimize things as much as possible, so I \n> have a few questions on the exact way PostgreSQL's MVCC works, and how \n> transactions, updates and vacuuming interact. I hope someone will be \n> able to point me in the right direction (feel free to give pointers if I \n> missed the places where this is described).\n> \n> From what I understand (and testing confirms it), bundling many queries \n> in one single transaction is more efficient than having each query be a \n> separate transaction (like with autocommit on). However, I wonder about \n> the limits of this:\n> \n> - are there any drawbacks to grouping hundreds or thousands of queries \n> (inserts/updates) over several minutes in one single transaction? Other \n> than the fact that the inserts/updates will not be visible until \n> committed, of course. Essentially turning autocommit off, and doing a \n> commit once in a while.\n\n1. If any locks are held then they will be held for much longer, causing \nother processes to block.\n2. 
PG needs to be able to roll back the changes - thousands of simple \ninserts are fine, millions will probably not be.\n\n> - does this apply only to inserts/selects/updates or also for selects? \n> Another way to put this is: does a transaction with only one select \n> actually have much transaction-related work to do? Or, does a \n> transaction with only selects actually have any impact anywhere? Does it \n> really leave a trace anywhere? Again, I understand that selects grouped \n> in a transaction will not see updates done after the start of the \n> transaction (unless done by the same process).\n\nThere are implications if a SELECT has side-effects (I can call a \nfunction in a select - that might do anything).\n\n> - if during a single transaction several UPDATEs affect the same row, \n> will MVCC generate as many row versions as there are updates (like would \n> be the case with autocommit) or will they be grouped into one single row \n> version?\n\nI believe there will be many versions. Certainly for 8.0 that must be \nthe case to support savepoints within a transaction.\n\n> Another related issue is that many of the tables are indexed on a date \n> field, and one process does a lot of updates on \"recent\" rows (which \n> lead to many dead tuples), but after that \"older\" rows tend to remain \n> pretty much unchanged for quite a while. Other than splitting the tables \n> into \"old\" and \"recent\" tables, is there any way to make vacuum more \n> efficient? Scanning the whole table for dead tuples when only a small \n> portion of the table actually has any does not feel like being very \n> efficient in this situation.\n\nNot really.\n\n> Other issue: every five minutes or so, I see a noticeable performance \n> drop as PostgreSQL checkpoints. This is 7.4.3 with pretty lousy \n> hardware, I know 8.0 with decent hardware and separate disk(s) for \n> pg_xlog will definitely help, but I really wonder if there is any way to \n> reduce the amount of work that needs to be done at that point (I'm a \n> strong believer of fixing software before hardware). I have already \n> bumped checkpoint_segments to 8, but I'm not quite sure I understand how \n> this helps (or doesn't help) things. Logs show 3 to 6 \"recycled \n> transaction log file\" lines at that time, that seems quite a lot of work \n> for a load that's still pretty low. Does grouping of more queries in \n> transactions help with this? Are there other parameters that can affect \n> things, or is just a matter of how much inserts/updates/deletes are \n> done, and the amount of data that was changed?\n\nYou might be better off reducing the number of checkpoint segments, and \ndecreasing the timeout. There is a balance between doing a lot of work \nin one go, and the overhead of many smaller bursts of activity.\n\n> Last point: some of the servers have expandable data (and will be \n> replicated with slony-I) and will run with fsync off. I have read \n> conflicting statements as to what exactly this does: some sources \n> indicate that setting fsync off actually switches off WAL/checkpointing, \n> others that it just prevents the fsync (or equivalent) system calls. \n> Since I still see checkpointing in that case, I guess it's not exactly \n> the former, but I would love to understand more about it. 
Really, I \n> would love to be able to set some tables or databases to \"go as fast as \n> you can and don't worry about transactions, MVCC or anything like that\", \n> but I'm not sure that option exists...\n\nSetting fsync=false means the sync isn't done, so data might still be \ncached below PG's level. I'm not sure it's ever going to be possible to \nmark a table as \"ignore transactions\" - it would be a lot of work, and \nmeans you couldn't guarantee transactions that included that table in \nany way.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Mar 2005 09:41:20 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning" }, { "msg_contents": "jelle wrote:\n> The insert heavy sessions average 175 page hits generating XML, 1000 \n> insert/updates which comprise 90% of the insert/update load, of which\n> 200 inserts need to be transferred to the master db. The other \n> sessions are read/cache bound. I hoping to get a speed-up from moving\n> the temporary stuff off the master db and using 1 transaction\n> instead of 175 to the disk based master db.\n\nJust a thought:\nWouldn't it be sufficient to have the \"temporary\", fast session-table\nin a RAM-disk? I suspect you could do this rather easily using a TABLESPACE.\nAll the indices could be in this TABLESPACE as well (at least after\nhaving a quick look at the short help for CREATE TABLE and assuming you are\nusing PostgreSQL >= 8.0).\n\nRegards\nMirko\n", "msg_date": "Mon, 14 Mar 2005 11:43:52 +0100", "msg_from": "Mirko Zeibig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about 2 databases." } ]
[ { "msg_contents": "As a test, I ran a query in the pgAdmin query tool, which returns about 15K records from a PostgreSQL v8.01 table on my Win2K server.\n\nI ran the same query from the local server, from another PC on the same 100 mbit local network, and from a PC on a different network, over the internet. \n\nThe times for the query to run and the data to return for each of the three \nlocations are shown here: Local Server : 571+521 ms Local network: 1187+1266 ms Internet:14579+4016 msMy question is this: Why does the execution time for the query to run increase so much? Since the query should be running on the server, it's time should be somewhat independent of the network transport delay. (unlike the data transport time) However, it appears to actually be hypersensitive to the transport delay. The ratios of time for the data transport (assuming 1 for the local server) are:\n1 : 2.43 : 7.71\n\nwhereas the query execution time ratios are:\n1 : 2.08 : 25.5 (!!!)\n\nObviously, the transport times will be greater. But why does the execution time bloat so?\n\n\n\n\nAs a test, I ran a query in the pgAdmin query tool, which returns about 15K records from a PostgreSQL v8.01 table on my Win2K server.I ran the same query from the local server, from another PC on the same 100 mbit local network, and from a PC on a different network, over the internet. The times for the query to run and the data to return for each of the three locations are shown here: \n\nLocal Server : 571+521 ms \nLocal network: 1187+1266 ms \nInternet:14579+4016 msMy question is this: Why does the execution time for the query to run increase so much? Since the query should be running on the server, it's time should be somewhat independent of the network transport delay. (unlike the data transport time) However, it appears to actually be hypersensitive to the transport delay. The ratios of time for the data transport (assuming 1 for the local server) are:1 : 2.43 : 7.71whereas the query execution time ratios are:1 : 2.08 : 25.5  (!!!)Obviously, the transport times will be greater.  But why does the execution time bloat so?", "msg_date": "Fri, 11 Mar 2005 11:47:54 -0700", "msg_from": "\"Lou O'Quin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance" }, { "msg_contents": "\"Lou O'Quin\" <[email protected]> writes:\n> it appears to actually be hypersensitive to the transport delay. The =\n> ratios of time for the data transport (assuming 1 for the local server) =\n> are:\n> 1 : 2.43 : 7.71\n\n> whereas the query execution time ratios are:\n> 1 : 2.08 : 25.5 (!!!)\n\nHow do you know that's what the data transport time is --- ie, how can\nyou measure that separately from the total query time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Mar 2005 14:10:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance " } ]
[ { "msg_contents": "Hi Tom. I referenced the status line of pgAdmin. Per the pgAdmin help file:\n \n\"The status line will show how long the last query took to complete. If a dataset was returned, not only the elapsed time for server execution is displayed, but also the time to retrieve the data from the server to the Data Output page.\"\n \nLou\n\n>>> Tom Lane <[email protected]> 3/11/2005 12:10 PM >>>\n\n\"Lou O'Quin\" <[email protected]> writes:\n> it appears to actually be hypersensitive to the transport delay. The =\n> ratios of time for the data transport (assuming 1 for the local server) =\n> are:\n> 1 : 2.43 : 7.71\n\n> whereas the query execution time ratios are:\n> 1 : 2.08 : 25.5 (!!!)\n\nHow do you know that's what the data transport time is --- ie, how can\nyou measure that separately from the total query time?\n\n regards, tom lane\n\n\n\n\n\n\n\nHi Tom.  I referenced the status line of pgAdmin.  Per the pgAdmin help file:\n \n\"The status line will show how long the last query took to complete. If a dataset was returned, not only the elapsed time for server execution is displayed, but also the time to retrieve the data from the server to the Data Output page.\"\n \nLou>>> Tom Lane <[email protected]> 3/11/2005 12:10 PM >>>\n\"Lou O'Quin\" <[email protected]> writes:> it appears to actually be hypersensitive to the transport delay. The => ratios of time for the data transport (assuming 1 for the local server) => are:> 1 : 2.43 : 7.71> whereas the query execution time ratios are:> 1 : 2.08 : 25.5  (!!!)How do you know that's what the data transport time is --- ie, how canyou measure that separately from the total query time?            regards, tom lane", "msg_date": "Fri, 11 Mar 2005 12:38:16 -0700", "msg_from": "\"Lou O'Quin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "\"Lou O'Quin\" <[email protected]> writes:\n> Hi Tom. I referenced the status line of pgAdmin. Per the pgAdmin help\n> file:\n>\n> \"The status line will show how long the last query took to complete. If a\n> dataset was returned, not only the elapsed time for server execution is\n> displayed, but also the time to retrieve the data from the server to the\n> Data Output page.\"\n\nWell, you should probably ask the pgadmin boys exactly what they are\nmeasuring. In any case, the Postgres server overlaps query execution\nwith result sending, so I don't think it's possible to get a pure\nmeasurement of just one of those costs --- certainly not by looking at\nit only from the client end.\n\nBTW, one factor to consider is that if the test client machines weren't\nall the same speed, that would have some impact on their ability to\nabsorb 15K records ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Mar 2005 15:21:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance " } ]
[ { "msg_contents": "Hi,\n\nI have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has \nan Ext3 filesystem which is used by Postgres. Currently we are loading a \n50G database on this server from a Postgres dump (copy, not insert) and \nare experiencing very slow write performance (35 records per second).\n\nTop shows that the Postgres process (postmaster) is being constantly put \ninto D state for extended periods of time (2-3 seconds) which I assume \nis because it's waiting for disk io. I have just started gathering \nsystem statistics and here is what sar -b shows: (this is while the db \nis being loaded - pg_restore)\n\n \t tps rtps wtps bread/s bwrtn/s\n01:35:01 PM 275.77 76.12 199.66 709.59 2315.23\n01:45:01 PM 287.25 75.56 211.69 706.52 2413.06\n01:55:01 PM 281.73 76.35 205.37 711.84 2389.86\n02:05:01 PM 282.83 76.14 206.69 720.85 2418.51\n02:15:01 PM 284.07 76.15 207.92 707.38 2443.60\n02:25:01 PM 265.46 75.91 189.55 708.87 2089.21\n02:35:01 PM 285.21 76.02 209.19 709.58 2446.46\nAverage: 280.33 76.04 204.30 710.66 2359.47\n\nThis is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM. \nIt is currently running Debian Sarge with a 2.4.27-sparc64-smp custom \ncompiled kernel. Postgres is installed from the Debian package and uses \nall the configuration defaults.\n\nI am also copying the pgsql-performance list.\n\nThanks in advance for any advice/pointers.\n\n\nArshavir\n\nFollowing is some other info that might be helpful.\n\n/proc/scsi# mdadm -D /dev/md1\n/dev/md1:\n Version : 00.90.00\n Creation Time : Wed Feb 23 17:23:41 2005\n Raid Level : raid5\n Array Size : 123823616 (118.09 GiB 126.80 GB)\n Device Size : 8844544 (8.43 GiB 9.06 GB)\n Raid Devices : 15\n Total Devices : 17\nPreferred Minor : 1\n Persistence : Superblock is persistent\n\n Update Time : Thu Feb 24 10:05:38 2005\n State : active\n Active Devices : 15\nWorking Devices : 16\n Failed Devices : 1\n Spare Devices : 1\n\n Layout : left-symmetric\n Chunk Size : 64K\n\n UUID : 81ae2c97:06fa4f4d:87bfc6c9:2ee516df\n Events : 0.8\n\n Number Major Minor RaidDevice State\n 0 8 64 0 active sync /dev/sde\n 1 8 80 1 active sync /dev/sdf\n 2 8 96 2 active sync /dev/sdg\n 3 8 112 3 active sync /dev/sdh\n 4 8 128 4 active sync /dev/sdi\n 5 8 144 5 active sync /dev/sdj\n 6 8 160 6 active sync /dev/sdk\n 7 8 176 7 active sync /dev/sdl\n 8 8 192 8 active sync /dev/sdm\n 9 8 208 9 active sync /dev/sdn\n 10 8 224 10 active sync /dev/sdo\n 11 8 240 11 active sync /dev/sdp\n 12 65 0 12 active sync /dev/sdq\n 13 65 16 13 active sync /dev/sdr\n 14 65 32 14 active sync /dev/sds\n\n 15 65 48 15 spare /dev/sdt\n\n# dumpe2fs -h /dev/md1\ndumpe2fs 1.35 (28-Feb-2004)\nFilesystem volume name: <none>\nLast mounted on: <not available>\nFilesystem UUID: 1bb95bd6-94c7-4344-adf2-8414cadae6fc\nFilesystem magic number: 0xEF53\nFilesystem revision #: 1 (dynamic)\nFilesystem features: has_journal dir_index needs_recovery large_file\nDefault mount options: (none)\nFilesystem state: clean\nErrors behavior: Continue\nFilesystem OS type: Linux\nInode count: 15482880\nBlock count: 30955904\nReserved block count: 1547795\nFree blocks: 28767226\nFree inodes: 15482502\nFirst block: 0\nBlock size: 4096\nFragment size: 4096\nBlocks per group: 32768\nFragments per group: 32768\nInodes per group: 16384\nInode blocks per group: 512\nFilesystem created: Wed Feb 23 17:27:13 2005\nLast mount time: Wed Feb 23 17:45:25 2005\nLast write time: Wed Feb 23 17:45:25 2005\nMount count: 2\nMaximum mount count: 28\nLast checked: Wed Feb 23 17:27:13 
2005\nCheck interval: 15552000 (6 months)\nNext check after: Mon Aug 22 18:27:13 2005\nReserved blocks uid: 0 (user root)\nReserved blocks gid: 0 (group root)\nFirst inode: 11\nInode size: 128\nJournal inode: 8\nDefault directory hash: tea\nDirectory Hash Seed: c35c0226-3b52-4dad-b102-f22feb773592\nJournal backup: inode blocks\n\n# lspci | grep SCSI\n0000:00:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 14)\n0000:00:03.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 14)\n0000:00:04.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 14)\n0000:00:04.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 14)\n0000:04:02.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 03)\n0000:04:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875 \n(rev 03)\n\n/proc/scsi# more scsi\nAttached devices:\nHost: scsi0 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi0 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi0 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi0 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi1 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi1 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi1 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi1 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi2 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi4 Channel: 00 Id: 06 Lun: 00\n Vendor: TOSHIBA Model: XM6201TASUN32XCD Rev: 1103\n Type: CD-ROM ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 00 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 01 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 02 Lun: 00\n Vendor: FUJITSU Model: 
MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 03 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\n\n\n\n\n\n\n-- \nArshavir Grigorian\nSystems Administrator/Engineer\n", "msg_date": "Fri, 11 Mar 2005 14:48:02 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres on RAID5" }, { "msg_contents": "\nArshavir Grigorian <[email protected]> writes:\n\n> Hi,\n> \n> I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has an\n> Ext3 filesystem which is used by Postgres. \n\nPeople are going to suggest moving to RAID1+0. I'm unconvinced that RAID5\nacross 14 drivers shouldn't be able to keep up with RAID1 across 7 drives\nthough. It would be interesting to see empirical data.\n\nOne thing that does scare me is the Postgres transaction log and the ext3\njournal both sharing these disks with the data. Ideally both of these things\nshould get (mirrored) disks of their own separate from the data files.\n\nBut 2-3s pauses seem disturbing. I wonder whether ext3 is issuing a cache\nflush on every fsync to get the journal pushed out. This is a new linux\nfeature that's necessary with ide but shouldn't be necessary with scsi.\n\nIt would be interesting to know whether postgres performs differently with\nfsync=off. This would even be a reasonable mode to run under for initial\ndatabase loads. It shouldn't make much of a difference with hardware like this\nthough. And you should be aware that running under this mode in production\nwould put your data at risk.\n\n-- \ngreg\n\n", "msg_date": "13 Mar 2005 23:36:13 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Greg Stark wrote:\n\n>Arshavir Grigorian <[email protected]> writes:\n>\n> \n>\n>>Hi,\n>>\n>>I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has an\n>>Ext3 filesystem which is used by Postgres. \n>> \n>>\n>\n>People are going to suggest moving to RAID1+0. I'm unconvinced that RAID5\n>across 14 drivers shouldn't be able to keep up with RAID1 across 7 drives\n>though. It would be interesting to see empirical data.\n>\n>One thing that does scare me is the Postgres transaction log and the ext3\n>journal both sharing these disks with the data. Ideally both of these things\n>should get (mirrored) disks of their own separate from the data files.\n>\n>But 2-3s pauses seem disturbing. I wonder whether ext3 is issuing a cache\n>flush on every fsync to get the journal pushed out. This is a new linux\n>feature that's necessary with ide but shouldn't be necessary with scsi.\n>\n>It would be interesting to know whether postgres performs differently with\n>fsync=off. This would even be a reasonable mode to run under for initial\n>database loads. It shouldn't make much of a difference with hardware like this\n>though. And you should be aware that running under this mode in production\n>would put your data at risk.\n>\nHi\nI'm coming in from the raid list so I didn't get the full story.\n\nMay I ask what kernel?\n\nI only ask because I upgraded to 2.6.11.2 and happened to be watching \nxosview on my (probably) completely different setup (1Tb xfs/lvm2/raid5 \nserved by nfs to a remote sustained read/write app), when I saw all read \nactivity cease for 2/3 seconds whilst the disk wrote, then disk read \nresumed. 
This occured repeatedly during a read/edit/write of a 3Gb file.\n\nPerformance not critical here so on the \"hmm, that's odd\" todo list :)\n\nDavid\n\n", "msg_date": "Mon, 14 Mar 2005 07:44:53 +0000", "msg_from": "David Greaves <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5 (possible sync blocking read type" }, { "msg_contents": "a 14 drive stripe will max out the PCI bus long before anything else,\nthe only reason for a stripe this size is to get a total accessible\nsize up. A 6 drive RAID 10 on a good controller can get up to\n400Mb/sec which is pushing the limit of the PCI bus (taken from\noffical 3ware 9500S 8MI benchmarks). 140 drives is not going to beat\n6 drives because you've run out of bandwidth on the PCI bus.\n\nThe debait on RAID 5 rages onward. The benchmarks I've seen suggest\nthat RAID 5 is consistantly slower than RAID 10 with the same number\nof drivers, but others suggest that RAID 5 can be much faster that\nRAID 10 (see arstechnica.com) (Theoretical performance of RAID 5 is\ninline with a RAID 0 stripe of N-1 drives, RAID 10 has only N/2 drives\nin a stripe, perfomance should be nearly double - in theory of\ncourse).\n\n35 Trans/sec is pretty slow, particularly if they are only one row at\na time. I typicaly get 200-400/sec on our DB server on a bad day. Up\nto 1100 on a fresh database.\n\nI suggested running a bonnie benchmark, or some other IO perftest to\ndetermine if it's the array itself performing badly, or if there is\nsomething wrong with postgresql.\n\nIf the array isn't kicking out at least 50MB/sec read/write\nperformance, something is wrong.\n\nUntil you've isolated the problem to either postgres or the array,\neverything else is simply speculation.\n\nIn a perfect world, you would have two 6 drive RAID 10s. on two PCI\nbusses, with system tables on a third parition, and archive logging on\na fourth. Unsurprisingly this looks alot like the Oracle recommended\nminimum config.\n\nAlso a note for interest is that this is _software_ raid...\n\nAlex Turner\nnetEconomist\n\nOn 13 Mar 2005 23:36:13 -0500, Greg Stark <[email protected]> wrote:\n> \n> Arshavir Grigorian <[email protected]> writes:\n> \n> > Hi,\n> >\n> > I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has an\n> > Ext3 filesystem which is used by Postgres.\n> \n> People are going to suggest moving to RAID1+0. I'm unconvinced that RAID5\n> across 14 drivers shouldn't be able to keep up with RAID1 across 7 drives\n> though. It would be interesting to see empirical data.\n> \n> One thing that does scare me is the Postgres transaction log and the ext3\n> journal both sharing these disks with the data. Ideally both of these things\n> should get (mirrored) disks of their own separate from the data files.\n> \n> But 2-3s pauses seem disturbing. I wonder whether ext3 is issuing a cache\n> flush on every fsync to get the journal pushed out. This is a new linux\n> feature that's necessary with ide but shouldn't be necessary with scsi.\n> \n> It would be interesting to know whether postgres performs differently with\n> fsync=off. This would even be a reasonable mode to run under for initial\n> database loads. It shouldn't make much of a difference with hardware like this\n> though. 
And you should be aware that running under this mode in production\n> would put your data at risk.\n> \n> --\n> greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n", "msg_date": "Mon, 14 Mar 2005 14:53:42 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> a 14 drive stripe will max out the PCI bus long before anything else,\n\nHopefully anyone with a 14 drive stripe is using some combination of 64 bit\nPCI-X cards running at 66Mhz...\n\n> the only reason for a stripe this size is to get a total accessible\n> size up. \n\nWell, many drives also cuts average latency. So even if you have no need for\nmore bandwidth you still benefit from a lower average response time by adding\nmore drives.\n\n-- \ngreg\n\n", "msg_date": "14 Mar 2005 15:17:11 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "All,\n\nI have a 13 disk (250G each) software raid 5 set using 1 16 port adaptec SATA controller. \nI am very happy with the performance. The reason I went with the 13 disk raid 5 set was for the space NOT performance. \n I have a single postgresql database that is over 2 TB with about 500 GB free on the disk. This raid set performs\nabout the same as my ICP SCSI raid controller (also with raid 5). \n\nThat said, now that postgresql 8 has tablespaces, I would NOT create 1 single raid 5 set, but 3 smaller sets. I also DO\nNOT have my wal and log's on this raid set, but on a smaller 2 disk mirror.\n\nJim\n\n---------- Original Message -----------\nFrom: Greg Stark <[email protected]>\nTo: Alex Turner <[email protected]>\nCc: Greg Stark <[email protected]>, Arshavir Grigorian <[email protected]>, [email protected],\[email protected]\nSent: 14 Mar 2005 15:17:11 -0500\nSubject: Re: [PERFORM] Postgres on RAID5\n\n> Alex Turner <[email protected]> writes:\n> \n> > a 14 drive stripe will max out the PCI bus long before anything else,\n> \n> Hopefully anyone with a 14 drive stripe is using some combination of 64 bit\n> PCI-X cards running at 66Mhz...\n> \n> > the only reason for a stripe this size is to get a total accessible\n> > size up.\n> \n> Well, many drives also cuts average latency. So even if you have no need for\n> more bandwidth you still benefit from a lower average response time by adding\n> more drives.\n> \n> -- \n> greg\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n------- End of Original Message -------\n\n", "msg_date": "Mon, 14 Mar 2005 15:35:41 -0500", "msg_from": "\"Jim Buttafuoco\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Alex Turner wrote:\n> a 14 drive stripe will max out the PCI bus long before anything else,\n> the only reason for a stripe this size is to get a total accessible\n> size up. A 6 drive RAID 10 on a good controller can get up to\n> 400Mb/sec which is pushing the limit of the PCI bus (taken from\n> offical 3ware 9500S 8MI benchmarks). 140 drives is not going to beat\n> 6 drives because you've run out of bandwidth on the PCI bus.\n> \n> The debait on RAID 5 rages onward. 
The benchmarks I've seen suggest\n> that RAID 5 is consistantly slower than RAID 10 with the same number\n> of drivers, but others suggest that RAID 5 can be much faster that\n> RAID 10 (see arstechnica.com) (Theoretical performance of RAID 5 is\n> inline with a RAID 0 stripe of N-1 drives, RAID 10 has only N/2 drives\n> in a stripe, perfomance should be nearly double - in theory of\n> course).\n> \n> 35 Trans/sec is pretty slow, particularly if they are only one row at\n> a time. I typicaly get 200-400/sec on our DB server on a bad day. Up\n> to 1100 on a fresh database.\n\nWell, by putting the pg_xlog directory on a separate disk/partition, I \nwas able to increase this rate to about 50 or so per second (still \npretty far from your numbers). Next I am going to try putting the \npg_xlog on a RAID1+0 array and see if that helps.\n\n> I suggested running a bonnie benchmark, or some other IO perftest to\n> determine if it's the array itself performing badly, or if there is\n> something wrong with postgresql.\n> \n> If the array isn't kicking out at least 50MB/sec read/write\n> performance, something is wrong.\n> \n> Until you've isolated the problem to either postgres or the array,\n> everything else is simply speculation.\n> \n> In a perfect world, you would have two 6 drive RAID 10s. on two PCI\n> busses, with system tables on a third parition, and archive logging on\n> a fourth. Unsurprisingly this looks alot like the Oracle recommended\n> minimum config.\n\nCould you please elaborate on this setup a little more? How do you put \nsystem tables on a separate partition? I am still using version 7, and \nwithout tablespaces (which is how Oracle controls this), I can't figure \nout how to put different tables on different partitions. Thanks.\n\n\nArshavir\n\n\n\n> Also a note for interest is that this is _software_ raid...\n> \n> Alex Turner\n> netEconomist\n> \n> On 13 Mar 2005 23:36:13 -0500, Greg Stark <[email protected]> wrote:\n> \n>>Arshavir Grigorian <[email protected]> writes:\n>>\n>>\n>>>Hi,\n>>>\n>>>I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has an\n>>>Ext3 filesystem which is used by Postgres.\n>>\n>>People are going to suggest moving to RAID1+0. I'm unconvinced that RAID5\n>>across 14 drivers shouldn't be able to keep up with RAID1 across 7 drives\n>>though. It would be interesting to see empirical data.\n>>\n>>One thing that does scare me is the Postgres transaction log and the ext3\n>>journal both sharing these disks with the data. Ideally both of these things\n>>should get (mirrored) disks of their own separate from the data files.\n>>\n>>But 2-3s pauses seem disturbing. I wonder whether ext3 is issuing a cache\n>>flush on every fsync to get the journal pushed out. This is a new linux\n>>feature that's necessary with ide but shouldn't be necessary with scsi.\n>>\n>>It would be interesting to know whether postgres performs differently with\n>>fsync=off. This would even be a reasonable mode to run under for initial\n>>database loads. It shouldn't make much of a difference with hardware like this\n>>though. And you should be aware that running under this mode in production\n>>would put your data at risk.\n>>\n>>--\n>>greg\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 9: the planner will ignore your desire to choose an index scan if your\n>> joining column's datatypes do not match\n>>\n\n\n-- \nArshavir Grigorian\nSystems Administrator/Engineer\nM-CAM, Inc.\[email protected]\n+1 703-682-0570 ext. 
432\nContents Confidential\n", "msg_date": "Mon, 14 Mar 2005 16:03:43 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Arshavir Grigorian wrote:\n> Alex Turner wrote:\n> \n[]\n> Well, by putting the pg_xlog directory on a separate disk/partition, I \n> was able to increase this rate to about 50 or so per second (still \n> pretty far from your numbers). Next I am going to try putting the \n> pg_xlog on a RAID1+0 array and see if that helps.\n\npg_xlog is written syncronously, right? It should be, or else reliability\nof the database will be at a big question...\n\nI posted a question on Feb-22 here in linux-raid, titled \"*terrible*\ndirect-write performance with raid5\". There's a problem with write\nperformance of a raid4/5/6 array, which is due to the design.\n\nConsider raid5 array (raid4 will be exactly the same, and for raid6,\njust double the parity writes) with N data block and 1 parity block.\nAt the time of writing a portion of data, parity block should be\nupdated too, to be consistent and recoverable. And here, the size of\nthe write plays very significant role. If your write size is smaller\nthan chunk_size*N (N = number of data blocks in a stripe), in order\nto calculate correct parity you have to read data from the remaining\ndrives. The only case where you don't need to read data from other\ndrives is when you're writing by the size of chunk_size*N, AND the\nwrite is block-aligned. By default, chunk_size is 64Kb (min is 4Kb).\nSo the only reasonable direct-write size of N drives will be 64Kb*N,\nor else raid code will have to read \"missing\" data to calculate the\nparity block. Ofcourse, in 99% cases you're writing in much smaller\nsizes, say 4Kb or so. 
And here, the more drives you have, the\nLESS write speed you will have.\n\nWhen using the O/S buffer and filesystem cache, the system has much\nmore chances to re-order requests and sometimes even omit reading\nentirely (when you perform many sequentional writes for example,\nwithout sync in between), so buffered writes might be much fast.\nBut not direct or syncronous writes, again especially when you're\ndoing alot of sequential writes...\n\nSo to me it looks like an inherent problem of raid5 architecture\nwrt database-like workload -- databases tends to use syncronous\nor direct writes to ensure good data consistency.\n\nFor pgsql, which (i don't know for sure but reportedly) uses syncronous\nwrits only for the transaction log, it is a good idea to put that log\nonly to a raid1 or raid10 array, but NOT to raid5 array.\n\nJust IMHO ofcourse.\n\n/mjt\n", "msg_date": "Tue, 15 Mar 2005 01:47:16 +0300", "msg_from": "Michael Tokarev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "You said:\n\"If your write size is smaller than chunk_size*N (N = number of data blocks\nin a stripe), in order to calculate correct parity you have to read data\nfrom the remaining drives.\"\n\nNeil explained it in this message:\nhttp://marc.theaimsgroup.com/?l=linux-raid&m=108682190730593&w=2\n\nGuy\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael Tokarev\nSent: Monday, March 14, 2005 5:47 PM\nTo: Arshavir Grigorian\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Postgres on RAID5\n\nArshavir Grigorian wrote:\n> Alex Turner wrote:\n> \n[]\n> Well, by putting the pg_xlog directory on a separate disk/partition, I \n> was able to increase this rate to about 50 or so per second (still \n> pretty far from your numbers). Next I am going to try putting the \n> pg_xlog on a RAID1+0 array and see if that helps.\n\npg_xlog is written syncronously, right? It should be, or else reliability\nof the database will be at a big question...\n\nI posted a question on Feb-22 here in linux-raid, titled \"*terrible*\ndirect-write performance with raid5\". There's a problem with write\nperformance of a raid4/5/6 array, which is due to the design.\n\nConsider raid5 array (raid4 will be exactly the same, and for raid6,\njust double the parity writes) with N data block and 1 parity block.\nAt the time of writing a portion of data, parity block should be\nupdated too, to be consistent and recoverable. And here, the size of\nthe write plays very significant role. If your write size is smaller\nthan chunk_size*N (N = number of data blocks in a stripe), in order\nto calculate correct parity you have to read data from the remaining\ndrives. The only case where you don't need to read data from other\ndrives is when you're writing by the size of chunk_size*N, AND the\nwrite is block-aligned. By default, chunk_size is 64Kb (min is 4Kb).\nSo the only reasonable direct-write size of N drives will be 64Kb*N,\nor else raid code will have to read \"missing\" data to calculate the\nparity block. Ofcourse, in 99% cases you're writing in much smaller\nsizes, say 4Kb or so. 
And here, the more drives you have, the\nLESS write speed you will have.\n\nWhen using the O/S buffer and filesystem cache, the system has much\nmore chances to re-order requests and sometimes even omit reading\nentirely (when you perform many sequentional writes for example,\nwithout sync in between), so buffered writes might be much fast.\nBut not direct or syncronous writes, again especially when you're\ndoing alot of sequential writes...\n\nSo to me it looks like an inherent problem of raid5 architecture\nwrt database-like workload -- databases tends to use syncronous\nor direct writes to ensure good data consistency.\n\nFor pgsql, which (i don't know for sure but reportedly) uses syncronous\nwrits only for the transaction log, it is a good idea to put that log\nonly to a raid1 or raid10 array, but NOT to raid5 array.\n\nJust IMHO ofcourse.\n\n/mjt\n-\nTo unsubscribe from this list: send the line \"unsubscribe linux-raid\" in\nthe body of a message to [email protected]\nMore majordomo info at http://vger.kernel.org/majordomo-info.html\n\n", "msg_date": "Mon, 14 Mar 2005 18:49:11 -0500", "msg_from": "\"Guy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Folks,\n\n> You said:\n> \"If your write size is smaller than chunk_size*N (N = number \n> of data blocks in a stripe), in order to calculate correct \n> parity you have to read data from the remaining drives.\"\n> \n> Neil explained it in this message:\n> http://marc.theaimsgroup.com/?l=linux-raid&m=108682190730593&w=2\n\nHaving read that and the recent posts:\n\nHas anyone done any performance checks on the md code to determine what, if\nany, effect the stripe size has on performance? One might suppose the variables\nwould be stripe size, file size and read vs write. Possibly number of drives in\narray, too.\n\nReason for asking:\n\nWhen I set up my raid5 array, I chose a stripe of 256K, on the grounds that a\nlarge number of the files on the drive are multi-megabytes (fairly evenly in\nthe 20MB - 100MB range) and I supposed that a large stripe would speed things\nup for those files. Was I right?\n\nRegards,\n\nRuth\n\n\n", "msg_date": "Tue, 15 Mar 2005 16:17:55 -0000", "msg_from": "\"Ruth Ivimey-Cook\" <[email protected]>", "msg_from_op": false, "msg_subject": "Effect of Stripe Size (was Postgres on RAID5)" }, { "msg_contents": "In my experience, if you are concerned about filesystem performance, don't\nuse ext3. It is one of the slowest filesystems I have ever used\nespecially for writes. I would suggest either reiserfs or xfs.\n--David Dougall\n\n\nOn Fri, 11 Mar 2005, Arshavir Grigorian wrote:\n\n> Hi,\n>\n> I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\n> an Ext3 filesystem which is used by Postgres. Currently we are loading a\n> 50G database on this server from a Postgres dump (copy, not insert) and\n> are experiencing very slow write performance (35 records per second).\n>\n> Top shows that the Postgres process (postmaster) is being constantly put\n> into D state for extended periods of time (2-3 seconds) which I assume\n> is because it's waiting for disk io. 
I have just started gathering\n> system statistics and here is what sar -b shows: (this is while the db\n> is being loaded - pg_restore)\n>\n> \t tps rtps wtps bread/s bwrtn/s\n> 01:35:01 PM 275.77 76.12 199.66 709.59 2315.23\n> 01:45:01 PM 287.25 75.56 211.69 706.52 2413.06\n> 01:55:01 PM 281.73 76.35 205.37 711.84 2389.86\n> 02:05:01 PM 282.83 76.14 206.69 720.85 2418.51\n> 02:15:01 PM 284.07 76.15 207.92 707.38 2443.60\n> 02:25:01 PM 265.46 75.91 189.55 708.87 2089.21\n> 02:35:01 PM 285.21 76.02 209.19 709.58 2446.46\n> Average: 280.33 76.04 204.30 710.66 2359.47\n>\n> This is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.\n> It is currently running Debian Sarge with a 2.4.27-sparc64-smp custom\n> compiled kernel. Postgres is installed from the Debian package and uses\n> all the configuration defaults.\n>\n> I am also copying the pgsql-performance list.\n>\n> Thanks in advance for any advice/pointers.\n>\n>\n> Arshavir\n>\n> Following is some other info that might be helpful.\n>\n> /proc/scsi# mdadm -D /dev/md1\n> /dev/md1:\n> Version : 00.90.00\n> Creation Time : Wed Feb 23 17:23:41 2005\n> Raid Level : raid5\n> Array Size : 123823616 (118.09 GiB 126.80 GB)\n> Device Size : 8844544 (8.43 GiB 9.06 GB)\n> Raid Devices : 15\n> Total Devices : 17\n> Preferred Minor : 1\n> Persistence : Superblock is persistent\n>\n> Update Time : Thu Feb 24 10:05:38 2005\n> State : active\n> Active Devices : 15\n> Working Devices : 16\n> Failed Devices : 1\n> Spare Devices : 1\n>\n> Layout : left-symmetric\n> Chunk Size : 64K\n>\n> UUID : 81ae2c97:06fa4f4d:87bfc6c9:2ee516df\n> Events : 0.8\n>\n> Number Major Minor RaidDevice State\n> 0 8 64 0 active sync /dev/sde\n> 1 8 80 1 active sync /dev/sdf\n> 2 8 96 2 active sync /dev/sdg\n> 3 8 112 3 active sync /dev/sdh\n> 4 8 128 4 active sync /dev/sdi\n> 5 8 144 5 active sync /dev/sdj\n> 6 8 160 6 active sync /dev/sdk\n> 7 8 176 7 active sync /dev/sdl\n> 8 8 192 8 active sync /dev/sdm\n> 9 8 208 9 active sync /dev/sdn\n> 10 8 224 10 active sync /dev/sdo\n> 11 8 240 11 active sync /dev/sdp\n> 12 65 0 12 active sync /dev/sdq\n> 13 65 16 13 active sync /dev/sdr\n> 14 65 32 14 active sync /dev/sds\n>\n> 15 65 48 15 spare /dev/sdt\n>\n> # dumpe2fs -h /dev/md1\n> dumpe2fs 1.35 (28-Feb-2004)\n> Filesystem volume name: <none>\n> Last mounted on: <not available>\n> Filesystem UUID: 1bb95bd6-94c7-4344-adf2-8414cadae6fc\n> Filesystem magic number: 0xEF53\n> Filesystem revision #: 1 (dynamic)\n> Filesystem features: has_journal dir_index needs_recovery large_file\n> Default mount options: (none)\n> Filesystem state: clean\n> Errors behavior: Continue\n> Filesystem OS type: Linux\n> Inode count: 15482880\n> Block count: 30955904\n> Reserved block count: 1547795\n> Free blocks: 28767226\n> Free inodes: 15482502\n> First block: 0\n> Block size: 4096\n> Fragment size: 4096\n> Blocks per group: 32768\n> Fragments per group: 32768\n> Inodes per group: 16384\n> Inode blocks per group: 512\n> Filesystem created: Wed Feb 23 17:27:13 2005\n> Last mount time: Wed Feb 23 17:45:25 2005\n> Last write time: Wed Feb 23 17:45:25 2005\n> Mount count: 2\n> Maximum mount count: 28\n> Last checked: Wed Feb 23 17:27:13 2005\n> Check interval: 15552000 (6 months)\n> Next check after: Mon Aug 22 18:27:13 2005\n> Reserved blocks uid: 0 (user root)\n> Reserved blocks gid: 0 (group root)\n> First inode: 11\n> Inode size: 128\n> Journal inode: 8\n> Default directory hash: tea\n> Directory Hash Seed: c35c0226-3b52-4dad-b102-f22feb773592\n> Journal backup: inode blocks\n>\n> # lspci | 
grep SCSI\n> 0000:00:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 14)\n> 0000:00:03.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 14)\n> 0000:00:04.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 14)\n> 0000:00:04.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 14)\n> 0000:04:02.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 03)\n> 0000:04:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n> (rev 03)\n>\n> /proc/scsi# more scsi\n> Attached devices:\n> Host: scsi0 Channel: 00 Id: 00 Lun: 00\n> Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi0 Channel: 00 Id: 01 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi0 Channel: 00 Id: 02 Lun: 00\n> Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi0 Channel: 00 Id: 03 Lun: 00\n> Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi1 Channel: 00 Id: 00 Lun: 00\n> Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi1 Channel: 00 Id: 01 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi1 Channel: 00 Id: 02 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi1 Channel: 00 Id: 03 Lun: 00\n> Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi2 Channel: 00 Id: 00 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi2 Channel: 00 Id: 01 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi2 Channel: 00 Id: 02 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi2 Channel: 00 Id: 03 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi3 Channel: 00 Id: 00 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi3 Channel: 00 Id: 01 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi3 Channel: 00 Id: 02 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi3 Channel: 00 Id: 03 Lun: 00\n> Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n> Type: Direct-Access ANSI SCSI revision: 03\n> Host: scsi4 Channel: 00 Id: 06 Lun: 00\n> Vendor: TOSHIBA Model: XM6201TASUN32XCD Rev: 1103\n> Type: CD-ROM ANSI SCSI revision: 02\n> Host: scsi5 Channel: 00 Id: 00 Lun: 00\n> Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi5 Channel: 00 Id: 01 Lun: 00\n> Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi5 Channel: 00 Id: 02 Lun: 00\n> Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n> Type: Direct-Access ANSI SCSI revision: 02\n> Host: scsi5 Channel: 00 Id: 03 Lun: 00\n> Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n> Type: Direct-Access ANSI SCSI revision: 02\n>\n>\n>\n>\n>\n>\n> 
--\n> Arshavir Grigorian\n> Systems Administrator/Engineer\n> -\n> To unsubscribe from this list: send the line \"unsubscribe linux-raid\" in\n> the body of a message to [email protected]\n> More majordomo info at http://vger.kernel.org/majordomo-info.html\n>\n>\n>\n", "msg_date": "Wed, 16 Mar 2005 09:47:35 -0700 (MST)", "msg_from": "David Dougall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "David Dougall wrote:\n> In my experience, if you are concerned about filesystem performance, don't\n> use ext3. It is one of the slowest filesystems I have ever used\n> especially for writes. I would suggest either reiserfs or xfs.\n\nI'm a bit afraid to start yet another filesystem flamewar, but.\nPlease don't make such a claims without providing actual numbers\nand config details. Pretty please.\n\next3 performs well for databases, there's no reason for it to be\nslow. Ok, enable data=journal and use it with eg Oracle - you will\nsee it is slow. But in that case it isn't the filesystem to blame,\nit's operator error, simple as that.\n\nAnd especially reiserfs, with its tail packing enabled by default,\nis NOT suitable for databases...\n\n/mjt\n", "msg_date": "Wed, 16 Mar 2005 19:55:49 +0300", "msg_from": "Michael Tokarev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" } ]
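A minimal back-of-envelope sketch (Python) of the RAID5 small-write penalty Michael Tokarev describes above. The 64 KB chunk size is the md default quoted in the thread; the 8 KB write size and the 14-data-disk layout are illustrative assumptions, not measurements, and a real array with caching will behave differently.

    # Rough I/O count for one small synchronous write, following the
    # read-modify-write description above. Sizes are assumptions.
    CHUNK_KB = 64            # md default chunk size (from the thread)
    WRITE_KB = 8             # e.g. one PostgreSQL heap page

    def raid5_write_ios(write_kb, n_data_disks, chunk_kb=CHUNK_KB):
        """Disk operations needed for one write on an N+1 RAID5."""
        full_stripe_kb = chunk_kb * n_data_disks
        if write_kb >= full_stripe_kb and write_kb % full_stripe_kb == 0:
            return n_data_disks + 1      # aligned full-stripe write: data + parity
        return 4                         # read old data + parity, write new data + parity

    def raid10_write_ios():
        return 2                         # write both halves of the mirror

    print("RAID5, 14 data disks :", raid5_write_ios(WRITE_KB, 14), "I/Os")
    print("RAID10 mirror pair   :", raid10_write_ios(), "I/Os")
    print("Stripe size that avoids the penalty:", CHUNK_KB * 14, "KB")

For a pg_xlog-style workload of many small synchronous writes, that 4-versus-2 ratio per write is one reason the thread keeps steering the WAL onto RAID1 or RAID10 rather than the RAID5 data array.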
[ { "msg_contents": "I'll post there concerning how they determine the query execution time vs. data retrieval time.\n \nI did think about the processor/memory when choosing the machines - all three of the processors are similar. All are Pentium P4s with 512 MB memory.\nthe server is Win2K, P4, 2.3 gHz\nthe local network client is a WinXP Pro, P4, 2.2 gHz\nthe remote network client is WinXP Pro, P4, 1.9 gHz\n \nLou\n\n>>> Tom Lane <[email protected]> 3/11/2005 1:21 PM >>>\n\n\"Lou O'Quin\" <[email protected]> writes:\n> Hi Tom. I referenced the status line of pgAdmin. Per the pgAdmin help\n> file:\n>\n> \"The status line will show how long the last query took to complete. If a\n> dataset was returned, not only the elapsed time for server execution is\n> displayed, but also the time to retrieve the data from the server to the\n> Data Output page.\"\n\nWell, you should probably ask the pgadmin boys exactly what they are\nmeasuring. In any case, the Postgres server overlaps query execution\nwith result sending, so I don't think it's possible to get a pure\nmeasurement of just one of those costs --- certainly not by looking at\nit only from the client end.\n\nBTW, one factor to consider is that if the test client machines weren't\nall the same speed, that would have some impact on their ability to\nabsorb 15K records ...\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n\n\n\n\nI'll post there concerning how they determine the query execution time vs. data retrieval time.\n \nI did think about the processor/memory when choosing the machines - all three of the processors are similar.  All are Pentium P4s with 512 MB memory.\nthe server is Win2K, P4, 2.3 gHz\nthe local network client  is a WinXP Pro, P4, 2.2 gHzthe remote network client is WinXP Pro, P4, 1.9 gHz\n \nLou\n>>> Tom Lane <[email protected]> 3/11/2005 1:21 PM >>>\n\"Lou O'Quin\" <[email protected]> writes:> Hi Tom.  I referenced the status line of pgAdmin.  Per the pgAdmin help> file:>> \"The status line will show how long the last query took to complete. If a> dataset was returned, not only the elapsed time for server execution is> displayed, but also the time to retrieve the data from the server to the> Data Output page.\"Well, you should probably ask the pgadmin boys exactly what they aremeasuring.  In any case, the Postgres server overlaps query executionwith result sending, so I don't think it's possible to get a puremeasurement of just one of those costs --- certainly not by looking atit only from the client end.BTW, one factor to consider is that if the test client machines weren'tall the same speed, that would have some impact on their ability toabsorb 15K records ...            regards, tom lane---------------------------(end of broadcast)---------------------------TIP 9: the planner will ignore your desire to choose an index scan if your      joining column's datatypes do not match", "msg_date": "Fri, 11 Mar 2005 13:35:03 -0700", "msg_from": "\"Lou O'Quin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" } ]
[ { "msg_contents": "> this seems\n> like a dead waste of effort :-(. The work to put the data into the main\n> database isn't lessened at all; you've just added extra work to manage\n> the buffer database.\n\nTrue from the view point of the server, but not from the throughput in the\nclient session (client viewpoint). The client will have a blazingly fast\nsession with the buffer database. I'm assuming the buffer database table\nsize is zero or very small. Constraints will be a problem if there are\nPKs, FKs that need satisfied on the server that are not adequately testable\nin the buffer. Might not be a problem if the full table fits on the RAM\ndisk, but you still have to worry about two clients inserting the same PK.\n\nRick\n\n\n \n Tom Lane \n <[email protected]> To: [email protected] \n Sent by: cc: [email protected] \n pgsql-performance-owner@pos Subject: Re: [PERFORM] Questions about 2 databases. \n tgresql.org \n \n \n 03/11/2005 03:33 PM \n \n \n\n\n\n\njelle <[email protected]> writes:\n> 1) on a single 7.4.6 postgres instance does each database have it own WAL\n> file or is that shared? Is it the same on 8.0.x?\n\nShared.\n\n> 2) what's the high performance way of moving 200 rows between similar\n> tables on different databases? Does it matter if the databases are\n> on the same or seperate postgres instances?\n\nCOPY would be my recommendation. For a no-programming-effort solution\nyou could just pipe the output of pg_dump --data-only -t mytable\ninto psql. Not sure if it's worth developing a custom application to\nreplace that.\n\n> My web app does lots of inserts that aren't read until a session is\n> complete. The plan is to put the heavy insert session onto a ramdisk\nbased\n> pg-db and transfer the relevant data to the master pg-db upon session\n> completion. Currently running 7.4.6.\n\nUnless you have a large proportion of sessions that are abandoned and\nhence never need be transferred to the main database at all, this seems\nlike a dead waste of effort :-(. The work to put the data into the main\ndatabase isn't lessened at all; you've just added extra work to manage\nthe buffer database.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n", "msg_date": "Fri, 11 Mar 2005 15:51:07 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Questions about 2 databases." } ]
[ { "msg_contents": "Hi,\n\nI have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\nan Ext3 filesystem which is used by Postgres. Currently we are loading a\n50G database on this server from a Postgres dump (copy, not insert) and\nare experiencing very slow write performance (35 records per second).\n\nTop shows that the Postgres process (postmaster) is being constantly put\ninto D state for extended periods of time (2-3 seconds) which I assume\nis because it's waiting for disk io. I have just started gathering\nsystem statistics and here is what sar -b shows: (this is while the db\nis being loaded - pg_restore)\n\n \t tps rtps wtps bread/s bwrtn/s\n01:35:01 PM 275.77 76.12 199.66 709.59 2315.23\n01:45:01 PM 287.25 75.56 211.69 706.52 2413.06\n01:55:01 PM 281.73 76.35 205.37 711.84 2389.86\n02:05:01 PM 282.83 76.14 206.69 720.85 2418.51\n02:15:01 PM 284.07 76.15 207.92 707.38 2443.60\n02:25:01 PM 265.46 75.91 189.55 708.87 2089.21\n02:35:01 PM 285.21 76.02 209.19 709.58 2446.46\nAverage: 280.33 76.04 204.30 710.66 2359.47\n\nThis is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.\nIt is currently running Debian Sarge with a 2.4.27-sparc64-smp custom\ncompiled kernel. Postgres is installed from the Debian package and uses\nall the configuration defaults.\n\nI am also copying the pgsql-performance list.\n\nThanks in advance for any advice/pointers.\n\n\nArshavir\n\nFollowing is some other info that might be helpful.\n\n/proc/scsi# mdadm -D /dev/md1\n/dev/md1:\n Version : 00.90.00\n Creation Time : Wed Feb 23 17:23:41 2005\n Raid Level : raid5\n Array Size : 123823616 (118.09 GiB 126.80 GB)\n Device Size : 8844544 (8.43 GiB 9.06 GB)\n Raid Devices : 15\n Total Devices : 17\nPreferred Minor : 1\n Persistence : Superblock is persistent\n\n Update Time : Thu Feb 24 10:05:38 2005\n State : active\n Active Devices : 15\nWorking Devices : 16\n Failed Devices : 1\n Spare Devices : 1\n\n Layout : left-symmetric\n Chunk Size : 64K\n\n UUID : 81ae2c97:06fa4f4d:87bfc6c9:2ee516df\n Events : 0.8\n\n Number Major Minor RaidDevice State\n 0 8 64 0 active sync /dev/sde\n 1 8 80 1 active sync /dev/sdf\n 2 8 96 2 active sync /dev/sdg\n 3 8 112 3 active sync /dev/sdh\n 4 8 128 4 active sync /dev/sdi\n 5 8 144 5 active sync /dev/sdj\n 6 8 160 6 active sync /dev/sdk\n 7 8 176 7 active sync /dev/sdl\n 8 8 192 8 active sync /dev/sdm\n 9 8 208 9 active sync /dev/sdn\n 10 8 224 10 active sync /dev/sdo\n 11 8 240 11 active sync /dev/sdp\n 12 65 0 12 active sync /dev/sdq\n 13 65 16 13 active sync /dev/sdr\n 14 65 32 14 active sync /dev/sds\n\n 15 65 48 15 spare /dev/sdt\n\n# dumpe2fs -h /dev/md1\ndumpe2fs 1.35 (28-Feb-2004)\nFilesystem volume name: <none>\nLast mounted on: <not available>\nFilesystem UUID: 1bb95bd6-94c7-4344-adf2-8414cadae6fc\nFilesystem magic number: 0xEF53\nFilesystem revision #: 1 (dynamic)\nFilesystem features: has_journal dir_index needs_recovery large_file\nDefault mount options: (none)\nFilesystem state: clean\nErrors behavior: Continue\nFilesystem OS type: Linux\nInode count: 15482880\nBlock count: 30955904\nReserved block count: 1547795\nFree blocks: 28767226\nFree inodes: 15482502\nFirst block: 0\nBlock size: 4096\nFragment size: 4096\nBlocks per group: 32768\nFragments per group: 32768\nInodes per group: 16384\nInode blocks per group: 512\nFilesystem created: Wed Feb 23 17:27:13 2005\nLast mount time: Wed Feb 23 17:45:25 2005\nLast write time: Wed Feb 23 17:45:25 2005\nMount count: 2\nMaximum mount count: 28\nLast checked: Wed Feb 23 17:27:13 2005\nCheck 
interval: 15552000 (6 months)\nNext check after: Mon Aug 22 18:27:13 2005\nReserved blocks uid: 0 (user root)\nReserved blocks gid: 0 (group root)\nFirst inode: 11\nInode size: 128\nJournal inode: 8\nDefault directory hash: tea\nDirectory Hash Seed: c35c0226-3b52-4dad-b102-f22feb773592\nJournal backup: inode blocks\n\n# lspci | grep SCSI\n0000:00:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 14)\n0000:00:03.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 14)\n0000:00:04.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 14)\n0000:00:04.1 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 14)\n0000:04:02.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 03)\n0000:04:03.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875\n(rev 03)\n\n/proc/scsi# more scsi\nAttached devices:\nHost: scsi0 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi0 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi0 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi0 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi1 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi1 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi1 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi1 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39103LCSUN9.0G Rev: 034A\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi2 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi2 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 00 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 01 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 02 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi3 Channel: 00 Id: 03 Lun: 00\n Vendor: SEAGATE Model: ST39204LCSUN9.0G Rev: 4207\n Type: Direct-Access ANSI SCSI revision: 03\nHost: scsi4 Channel: 00 Id: 06 Lun: 00\n Vendor: TOSHIBA Model: XM6201TASUN32XCD Rev: 1103\n Type: CD-ROM ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 00 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 01 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 02 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G 
Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\nHost: scsi5 Channel: 00 Id: 03 Lun: 00\n Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev: 1111\n Type: Direct-Access ANSI SCSI revision: 02\n\n\n\n\n\n\n-- \nArshavir Grigorian\nSystems Administrator/Engineer\n\n", "msg_date": "Fri, 11 Mar 2005 16:13:05 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres on RAID5" }, { "msg_contents": "Arshavir Grigorian <[email protected]> writes:\n> I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\n> an Ext3 filesystem which is used by Postgres. Currently we are loading a\n> 50G database on this server from a Postgres dump (copy, not insert) and\n> are experiencing very slow write performance (35 records per second).\n\nWhat PG version is this? What version of pg_dump made the dump file?\nHow are you measuring that write rate (seeing that pg_restore doesn't\nprovide any such info)?\n\n> Postgres is installed from the Debian package and uses\n> all the configuration defaults.\n\nThe defaults are made for a fairly small machine, not big iron. At a\nminimum you want to kick shared_buffers up to 10K or more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Mar 2005 17:07:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5 " }, { "msg_contents": "Tom Lane wrote:\n> Arshavir Grigorian <[email protected]> writes:\n> \n>>I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\n>>an Ext3 filesystem which is used by Postgres. Currently we are loading a\n>>50G database on this server from a Postgres dump (copy, not insert) and\n>>are experiencing very slow write performance (35 records per second).\n> \n> \n> What PG version is this? What version of pg_dump made the dump file?\n> How are you measuring that write rate (seeing that pg_restore doesn't\n> provide any such info)?\n\nSorry I missed the version. Both (the db from which the dump was created \nand the one it's being loaded on) run on Pg 7.4.\n\nWell, if the restore is going on for X number of hours and you have Y \nrecords loaded, it's not hard to ballpark.\n\n> \n> \n>>Postgres is installed from the Debian package and uses\n>>all the configuration defaults.\n> \n> \n> The defaults are made for a fairly small machine, not big iron. At a\n> minimum you want to kick shared_buffers up to 10K or more.\n> \n> \t\t\tregards, tom lane\nWill do. Thanks.\n\n\nArshavir\n\n", "msg_date": "Fri, 11 Mar 2005 17:29:11 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "On Fri, Mar 11, 2005 at 05:29:11PM -0500, Arshavir Grigorian wrote:\n> Tom Lane wrote:\n\n> >The defaults are made for a fairly small machine, not big iron. At a\n> >minimum you want to kick shared_buffers up to 10K or more.\n> >\n> Will do. 
Thanks.\n\nAlso, it may help that you bump up sort_mem while doing [the CREATE\nINDEX part of] the restore.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"We are who we choose to be\", sang the goldfinch\nwhen the sun is high (Sandman)\n", "msg_date": "Fri, 11 Mar 2005 19:36:46 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Arshavir Grigorian <[email protected]> writes:\n> Tom Lane wrote:\n>> How are you measuring that write rate (seeing that pg_restore doesn't\n>> provide any such info)?\n\n> Well, if the restore is going on for X number of hours and you have Y \n> records loaded, it's not hard to ballpark.\n\nYeah, but how do you know that you have Y records loaded?\n\nWhat I'm trying to get at is what the restore is actually spending its\ntime on. It seems unlikely that a COPY per se would run that slowly;\nfar more likely that the expense is involved with index construction\nor foreign key verification. You could possibly determine what's what\nby watching the backend process with \"ps\" to see what statement type\nit's executing most of the time.\n\nBTW, is this a full database restore (schema + data), or are you trying\nto load data into pre-existing tables? The latter is generally a whole\nlot slower because both index updates and foreign key checks have to be\ndone retail instead of wholesale. There are various ways of working\naround that but you have to be aware of what you're doing.\n\nAlso, if it is indexing that's eating the time, boosting the sort_mem\nsetting for the server would help a lot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Mar 2005 17:37:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5 " }, { "msg_contents": "Many thanks for all the response.\n\nI guess there are a lot of things to change and tweak and I wonder what \nwould be a good benchmarking sample dataset (size, contents).\n\nMy tables are very large (the smallest is 7+ mil records) and take \nseveral days to load (if not weeks). It would be nice to have a sample \ndataset that would be large enough to mimic my large datasets, but small \nenough to load in a short priod of time. Any suggestions?\n\n\nArshavir\n", "msg_date": "Fri, 11 Mar 2005 19:22:56 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "A,\n\n> This is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.\n> It is currently running Debian Sarge with a 2.4.27-sparc64-smp custom\n> compiled kernel. Postgres is installed from the Debian package and uses\n> all the configuration defaults.\n\nPlease read http://www.powerpostgresql.com/PerfList\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 11 Mar 2005 17:32:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "\n\tLook for the possibility that a foreign key check might not be using an \nindex. 
This would yield a seq scan for each insertion, which might be your \nproblem.\n\n\nOn Fri, 11 Mar 2005 19:22:56 -0500, Arshavir Grigorian <[email protected]> \nwrote:\n\n> Many thanks for all the response.\n>\n> I guess there are a lot of things to change and tweak and I wonder what \n> would be a good benchmarking sample dataset (size, contents).\n>\n> My tables are very large (the smallest is 7+ mil records) and take \n> several days to load (if not weeks). It would be nice to have a sample \n> dataset that would be large enough to mimic my large datasets, but small \n> enough to load in a short priod of time. Any suggestions?\n>\n>\n> Arshavir\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Sat, 12 Mar 2005 03:20:18 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "I would recommend running a bonnie++ benchmark on your array to see if\nit's the array/controller/raid being crap, or wether it's postgres. I\nhave had some very surprising results from arrays that theoretically\nshould be fast, but turned out to be very slow.\n\nI would also seriously have to recommend against a 14 drive RAID 5!\nThis is statisticaly as likely to fail as a 7 drive RAID 0 (not\ncounting the spare, but rebuiling a spare is very hard on existing\ndrives).\n\nAlex Turner\nnetEconomist\n\n\nOn Fri, 11 Mar 2005 16:13:05 -0500, Arshavir Grigorian <[email protected]> wrote:\n> Hi,\n> \n> I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\n> an Ext3 filesystem which is used by Postgres. Currently we are loading a\n> 50G database on this server from a Postgres dump (copy, not insert) and\n> are experiencing very slow write performance (35 records per second).\n> \n> Top shows that the Postgres process (postmaster) is being constantly put\n> into D state for extended periods of time (2-3 seconds) which I assume\n> is because it's waiting for disk io. I have just started gathering\n> system statistics and here is what sar -b shows: (this is while the db\n> is being loaded - pg_restore)\n> \n> tps rtps wtps bread/s bwrtn/s\n> 01:35:01 PM 275.77 76.12 199.66 709.59 2315.23\n> 01:45:01 PM 287.25 75.56 211.69 706.52 2413.06\n> 01:55:01 PM 281.73 76.35 205.37 711.84 2389.86\n> \n[snip]\n", "msg_date": "Fri, 11 Mar 2005 22:50:20 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": ">On Fri, 11 Mar 2005 16:13:05 -0500, Arshavir Grigorian <[email protected]> wrote:\n> \n>\n>>Hi,\n>>\n>>I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has\n>>an Ext3 filesystem which is used by Postgres. Currently we are loading a\n>>50G database on this server from a Postgres dump (copy, not insert) and\n>>are experiencing very slow write performance (35 records per second).\n>> \n>>\n\nThat isn't that surprising. RAID 5 has never been known for its write\nperformance. You should be running RAID 10.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>>Top shows that the Postgres process (postmaster) is being constantly put\n>>into D state for extended periods of time (2-3 seconds) which I assume\n>>is because it's waiting for disk io. 
I have just started gathering\n>>system statistics and here is what sar -b shows: (this is while the db\n>>is being loaded - pg_restore)\n>>\n>> tps rtps wtps bread/s bwrtn/s\n>>01:35:01 PM 275.77 76.12 199.66 709.59 2315.23\n>>01:45:01 PM 287.25 75.56 211.69 706.52 2413.06\n>>01:55:01 PM 281.73 76.35 205.37 711.84 2389.86\n>>\n>> \n>>\n>[snip]\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Fri, 11 Mar 2005 20:05:54 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Josh Berkus wrote:\n> A,\n> \n> \n>>This is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.\n>>It is currently running Debian Sarge with a 2.4.27-sparc64-smp custom\n>>compiled kernel. Postgres is installed from the Debian package and uses\n>>all the configuration defaults.\n> \n> \n> Please read http://www.powerpostgresql.com/PerfList\n> \nI have read that document. Very informative/useful. Thanks.\n", "msg_date": "Mon, 14 Mar 2005 13:54:44 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Alex Turner wrote:\n> I would recommend running a bonnie++ benchmark on your array to see if\n> it's the array/controller/raid being crap, or wether it's postgres. I\n> have had some very surprising results from arrays that theoretically\n> should be fast, but turned out to be very slow.\n> \n> I would also seriously have to recommend against a 14 drive RAID 5!\n> This is statisticaly as likely to fail as a 7 drive RAID 0 (not\n> counting the spare, but rebuiling a spare is very hard on existing\n> drives).\n\nThanks for the reply.\n\nHere are the results of the bonnie test on my array:\n\n./bonnie -s 10000 -d . > oo 2>&1\nFile './Bonnie.23736', size: 10485760000\nWriting with putc()...done\nRewriting...done\nWriting intelligently...done\nReading with getc()...done\nReading intelligently...done\nSeeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...\n -------Sequential Output-------- ---Sequential Input-- --Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\n MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU\n 10000 4762 96.0 46140 78.8 31180 61.0 3810 99.9 71586 67.7 411.8 13.1\n\nOn a different note, I am not sure how the probability of RAID5 over 15 \ndisks failing is the same as that of a RAID0 array over 7 disks. RAID5 \ncan operate in a degraded mode (14 disks - 1 bad), RAID0 on the other \nhand cannot operate on 6 disks (6 disks - 1 bad). Am I missing something?\n\nAre you saying running RAID0 on a set of 2 RAID1 arrays of 7 each? That \nwould work fine, except I cannot afford to \"loose\" that much space.\n\nCare to comment on these numbers? Thanks.\n\n\n\nArshavir\n", "msg_date": "Mon, 14 Mar 2005 15:54:34 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "Actualy my statistics were off a bit I realised - chance of failure\nfor one drive is 1 in X. 
change of failure in RAID 0 is 7 in X,\nchance of one drive failure in 14 drive RAID 5 is 14 in X,13 in X for\nsecond drive, total probably is 182 in X*X, which is much lower than\nRAID 0.\n\nYour drive performance is less than stellar for a 14 drive stripe, and\nCPU usage for writes is very high. Even so - this should be enough\nthrough put to get over 100 rows/sec assuming you have virtualy no\nstored procs (I have noticed that stored procs in plpgsql REALLY slow\npg_sql down).\n\nAlex Turner\nnetEconomist\n\nOn Mon, 14 Mar 2005 15:54:34 -0500, Arshavir Grigorian <[email protected]> wrote:\n> Alex Turner wrote:\n> > I would recommend running a bonnie++ benchmark on your array to see if\n> > it's the array/controller/raid being crap, or wether it's postgres. I\n> > have had some very surprising results from arrays that theoretically\n> > should be fast, but turned out to be very slow.\n> >\n> > I would also seriously have to recommend against a 14 drive RAID 5!\n> > This is statisticaly as likely to fail as a 7 drive RAID 0 (not\n> > counting the spare, but rebuiling a spare is very hard on existing\n> > drives).\n> \n> Thanks for the reply.\n> \n> Here are the results of the bonnie test on my array:\n> \n> ./bonnie -s 10000 -d . > oo 2>&1\n> File './Bonnie.23736', size: 10485760000\n> Writing with putc()...done\n> Rewriting...done\n> Writing intelligently...done\n> Reading with getc()...done\n> Reading intelligently...done\n> Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...\n> -------Sequential Output-------- ---Sequential Input-- --Random--\n> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\n> MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU\n> 10000 4762 96.0 46140 78.8 31180 61.0 3810 99.9 71586 67.7 411.8 13.1\n> \n> On a different note, I am not sure how the probability of RAID5 over 15\n> disks failing is the same as that of a RAID0 array over 7 disks. RAID5\n> can operate in a degraded mode (14 disks - 1 bad), RAID0 on the other\n> hand cannot operate on 6 disks (6 disks - 1 bad). Am I missing something?\n> \n> Are you saying running RAID0 on a set of 2 RAID1 arrays of 7 each? That\n> would work fine, except I cannot afford to \"loose\" that much space.\n> \n> Care to comment on these numbers? Thanks.\n> \n> \n> Arshavir\n>\n", "msg_date": "Mon, 14 Mar 2005 16:31:03 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" } ]
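Alex Turner's failure arithmetic above can be tightened up slightly. A small Python sketch, assuming independent drive failures with the same probability p over some period (p = 0.01 is an arbitrary illustrative value), and ignoring rebuild windows and the hot spare:

    p = 0.01

    # RAID0 over 7 drives loses data if ANY single drive fails.
    raid0_7 = 1 - (1 - p) ** 7

    # RAID5 over 14 drives survives one failure; data is lost only when a
    # second drive fails.
    raid5_14 = 1 - ((1 - p) ** 14 + 14 * p * (1 - p) ** 13)

    print(f"RAID0,  7 drives: P(data loss) ~ {raid0_7:.3%}")
    print(f"RAID5, 14 drives: P(data loss) ~ {raid5_14:.3%}")

For small p this comes out near C(14,2)*p*p = 91*p*p, i.e. the "14 in X, 13 in X for the second drive" argument above, except that 14*13 = 182 counts each pair of drives twice; either way it is far below the roughly 7*p chance of losing the 7-drive stripe.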
[ { "msg_contents": "Hi Arshavir Grigorian,\n\n0. If possible move to 8.0.1 - bgwriter help you\n\n1. Create RAID1 for redo and place drives on separate\nSCSI channel\n\n2. Update postgresql.conf:\nshared_buffers = 10000-50000\nwork_mem = 100000-300000\nmaintenance_work_mem = 100000-300000\nmax_fsm_pages = 1500000\nmax_fsm_relations = 16000\nwal_buffers = 32\ncheckpoint_segments = 32 # 16MB each !!\ncheckpoint_timeout = 600\ncheckpoint_warning = 60\neffective_cache_size = 128000\nrandom_page_cost = 3\ndefault_statistics_target = 100\nlog_min_error_statement = warning\nlog_min_duration_statement = 1000 # for logging long SQL\n\n3. If possible migrate from RAID5 to RAID10.\n\n4. Add (if need) 2 new drive for OS and use ALL \n20x9GB drive for DB storage.\n\n5. Remove CDROM from work configuration and start use\nthis scsi channel.\n\nBest regards,\n Alexander Kirpa\n\n", "msg_date": "Sat, 12 Mar 2005 04:21:29 +0200", "msg_from": "\"Alexander Kirpa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" } ]
[ { "msg_contents": "Hi!\n\nIn one of our applications we have a database function, which\nrecalculates COGS (cost of good sold) for certain period. This involves\ndeleting bunch of rows from one table, inserting them again in correct\norder and updating them one-by-one (sometimes one row twice) to reflect\ncurrent state. The problem is, that this generates an enormous amount of\ntuples in that table.\n\nIf I'm correct, the dead tuples must be scanned also during table and\nindex scan, so a lot of dead tuples slows down queries considerably,\nespecially when the table doesn't fit into shared buffers any more. And\nas I'm in transaction, I can't VACUUM to get rid of those tuples. In one\noccasion the page count for a table went from 400 to 22000 at the end.\n\nAll this made me wonder, why is new tuple created after every update?\nOne tuple per transaction should be enough, because you always commit or\nrollback transaction as whole. And my observations seem to indicate,\nthat new index tuple is created after column update even if this column\nis not indexed.\n\nOne tuple per transaction would save a loads of I/O bandwidth, so I\nbelieve there must be a reason why it isn't implemented as such. Or were\nmy assumptions wrong, that dead tuples must be read from disk?\n\n Tambet\n", "msg_date": "Sat, 12 Mar 2005 15:08:32 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "One tuple per transaction" }, { "msg_contents": "Tambet,\n\n> In one of our applications we have a database function, which\n> recalculates COGS (cost of good sold) for certain period. This involves\n> deleting bunch of rows from one table, inserting them again in correct\n> order and updating them one-by-one (sometimes one row twice) to reflect\n> current state. The problem is, that this generates an enormous amount of\n> tuples in that table.\n\nSounds like you have an application design problem ... how about re-writing \nyour function so it's a little more sensible?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 12 Mar 2005 14:05:20 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "\"\"Tambet Matiisen\"\" <[email protected]> writes\n> Hi!\n>\n> In one of our applications we have a database function, which\n> recalculates COGS (cost of good sold) for certain period. This involves\n> deleting bunch of rows from one table, inserting them again in correct\n> order and updating them one-by-one (sometimes one row twice) to reflect\n> current state. The problem is, that this generates an enormous amount of\n> tuples in that table.\n>\n> If I'm correct, the dead tuples must be scanned also during table and\n> index scan, so a lot of dead tuples slows down queries considerably,\n> especially when the table doesn't fit into shared buffers any more. And\n> as I'm in transaction, I can't VACUUM to get rid of those tuples. In one\n> occasion the page count for a table went from 400 to 22000 at the end.\n\nNot exactly. 
The dead tuple in the index will be scanned the first time (and\nits pointed heap tuple as well), then we will mark it dead, then next time\nwe came here, we will know that the index tuple actually points to a uesless\ntuple, so we will not scan its pointed heap tuple.\n\n>\n> All this made me wonder, why is new tuple created after every update?\n> One tuple per transaction should be enough, because you always commit or\n> rollback transaction as whole. And my observations seem to indicate,\n> that new index tuple is created after column update even if this column\n> is not indexed.\n\nThis is one cost of MVCC. A good thing of MVCC is there is no conflict\nbetween read and write - maybe some applications need this.\n\nA reference could be found here:\n\nhttp://www.postgresql.org/docs/8.0/static/storage-page-layout.html#HEAPTUPLEHEADERDATA-TABLE\n\n>\n> One tuple per transaction would save a loads of I/O bandwidth, so I\n> believe there must be a reason why it isn't implemented as such. Or were\n> my assumptions wrong, that dead tuples must be read from disk?\n>\n> Tambet\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n\n", "msg_date": "Mon, 14 Mar 2005 09:41:30 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "On L, 2005-03-12 at 14:05 -0800, Josh Berkus wrote:\n> Tambet,\n> \n> > In one of our applications we have a database function, which\n> > recalculates COGS (cost of good sold) for certain period. This involves\n> > deleting bunch of rows from one table, inserting them again in correct\n> > order and updating them one-by-one (sometimes one row twice) to reflect\n> > current state. The problem is, that this generates an enormous amount of\n> > tuples in that table.\n> \n> Sounds like you have an application design problem ... how about re-writing \n> your function so it's a little more sensible?\n\nAlso, you could at least use a temp table for intermediate steps. This\nwill at least save WAL traffic.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Thu, 17 Mar 2005 23:27:25 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" } ]
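A rough illustration, in Python, of why the delete/re-insert/update-twice pattern described above bloats the table inside a single transaction. All of the numbers are made up for illustration; only the mechanism (every UPDATE and DELETE leaves a dead row version behind until VACUUM) comes from the thread.

    rows            = 10000   # rows recalculated by the COGS function
    updates_per_row = 2       # some rows are updated twice
    rows_per_page   = 50      # depends on row width; 8 KB pages

    new_versions  = rows * (1 + updates_per_row)   # re-INSERT + each UPDATE
    dead_versions = rows * (1 + updates_per_row)   # old row + every superseded version

    live_pages  = rows / rows_per_page
    extra_pages = new_versions / rows_per_page     # appended while the transaction runs

    print(f"row versions written : {new_versions}")
    print(f"dead versions at end : {dead_versions}")
    print(f"heap grows from ~{live_pages:.0f} to ~{live_pages + extra_pages:.0f} pages "
          f"until VACUUM can run")

The real table in the thread went from 400 to 22000 pages, so its rows were evidently rewritten far more often than in this toy example; doing the intermediate steps in a temp table, as suggested above, keeps that churn out of the main table.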
[ { "msg_contents": "\n\n\n\n\n\nHello,\nMy version of Postgresql is 7.4.3.\nI have a simple table with 2 indexes:\n                             Table \"public.tst\"\n Column |            Type             |              Modifiers\n--------+-----------------------------+-------------------------------------\n tst_id | bigint                      | default nextval('tst_id_seq'::text)\n mmd5   | character varying(32)       | not null\n active | character(1)                | not null\n lud    | timestamp without time zone | default now()\nIndexes:\n    \"tst_idx\" unique, btree (mmd5, active)\n    \"tst_tst_id_key\" unique, btree (tst_id)\n\nThere are exactly 1,000,000 (one million) rows in the table (tst).  There are no NULLS, empty columns in any row.\nI get really fast response times when using the following select statement (Less than 1 second).\nmaach=# explain select * from tst where mmd5 = '71e1c18cbc708a0bf28fe106e03256c7' and active = 'A';\n                                              QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Index Scan using tst_idx on tst  (cost=0.00..6.02 rows=1 width=57)\n   Index Cond: (((mmd5)::text = '71e1c18cbc708a0bf28fe106e03256c7'::text) AND (active = 'A'::bpchar))\n(2 rows)\n\nI get really slow repoonse times when using the following select statement (About 20 seconds).\nmaach=# explain select * from tst where tst_id = 639246;\n                       QUERY PLAN\n--------------------------------------------------------\n Seq Scan on tst  (cost=0.00..23370.00 rows=1 width=57)\n   Filter: (tst_id = 639246)\n(2 rows)\n\nWhy is the second select statement so slow, it should be using the \"tst_tst_id_key\" unique, btree (tst_id) index, but instead EXPLAIN says it's using a Seq Scan.  If it was using the index, this select statement should be as fast if not faster than the above select statement.\nWhen I turned off,  maach=# SET ENABLE_SEQSCAN TO OFF;\nThe slow select statement gets even slower.\nmaach=# explain select * from tst where tst_id = 639246;\n                             QUERY PLAN\n--------------------------------------------------------------------\n Seq Scan on tst  (cost=100000000.00..100023370.00 rows=1 width=57)\n   Filter: (tst_id = 639246)\n(2 rows)\n\nWhy do I have to use 2 columns to create a fast/efficient index?  I want to get the single column index to be the fastest index for my select statements.  How do I accomplish this.\nThanks,\nTom\n\n\n", "msg_date": "Sat, 12 Mar 2005 23:40:47 -0600", "msg_from": "\"Tom Pfeifer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index use and slow queries" }, { "msg_contents": "On Sun, 13 Mar 2005 04:40 pm, Tom Pfeifer wrote:\n> Hello,\n> \n> \n> My version of Postgresql is 7.4.3. \n> I have a simple table with 2 indexes: \n>                              Table \"public.tst\" \n>  Column |            Type             |              Modifiers \n> --------+-----------------------------+------------------------------------- \n>  tst_id | bigint                      | default nextval('tst_id_seq'::text) \n>  mmd5   | character varying(32)       | not null \n>  active | character(1)                | not null \n>  lud    | timestamp without time zone | default now() \n> Indexes: \n>     \"tst_idx\" unique, btree (mmd5, active) \n>     \"tst_tst_id_key\" unique, btree (tst_id) \n> \n> \n> \n> There are exactly 1,000,000 (one million) rows in the table (tst).  
There are no NULLS, empty columns in any row.\n> \n> \n> I get really fast response times when using the following select statement (Less than 1 second). \n> maach=# explain select * from tst where mmd5 = '71e1c18cbc708a0bf28fe106e03256c7' and active = 'A'; \n>                                               QUERY PLAN \n> ------------------------------------------------------------------------------------------------------ \n>  Index Scan using tst_idx on tst  (cost=0.00..6.02 rows=1 width=57) \n>    Index Cond: (((mmd5)::text = '71e1c18cbc708a0bf28fe106e03256c7'::text) AND (active = 'A'::bpchar)) \n> (2 rows) \n> \n> \n> \n> I get really slow repoonse times when using the following select statement (About 20 seconds). \n> maach=# explain select * from tst where tst_id = 639246; \n\nBefore 8.0, bigint would not use an index unless you cast it, or quote it.\n\neg\nexplain select * from tst where tst_id = 639246::int8; \nexplain select * from tst where tst_id = '639246'; \n\nHope this helps.\n\nRussell Smith\n", "msg_date": "Sun, 13 Mar 2005 17:07:46 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use and slow queries" }, { "msg_contents": "Russell Smith <[email protected]> writes:\n> On Sun, 13 Mar 2005 04:40 pm, Tom Pfeifer wrote:\n>> I get really slow repoonse times when using the following select statement (About 20 seconds). \n>> maach=# explain select * from tst where tst_id = 639246; \n\n> Before 8.0, bigint would not use an index unless you cast it, or quote it.\n\n> explain select * from tst where tst_id = 639246::int8; \n> explain select * from tst where tst_id = '639246'; \n\n... or you compare to a value large enough to be int8 naturally, eg\n\n> explain select * from tst where tst_id = 123456639246;\n\nThe issue here is that (a) 639246 is naturally typed as int4, and\n(b) before 8.0 we couldn't use cross-type comparisons such as int8 = int4\nwith an index.\n\nYou can find a whole lot of angst about this issue and related ones\nif you care to review the last six or eight years of the pgsql-hackers\narchives. It was only recently that we found a way to support\ncross-type index operations without breaking the fundamental\ntype-extensibility features of Postgres. (In hindsight, we spent way\ntoo much time fixated on the notion that we needed to find a way to\nimplicitly convert the non-indexed value to match the indexed column's\ntype, rather than biting the bullet and supporting cross-type operations\ndirectly with indexes. Oh well, hindsight is always 20/20.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2005 01:26:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use and slow queries " } ]
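To collect the workaround from this thread in one place, here is what the difference looks like against the tst table above on a pre-8.0 server; the CAST spelling is just the SQL-standard form of the explicit cast already shown:

-- Not indexable before 8.0: the literal is typed int4, and older planners
-- could not use a cross-type comparison (int8 = int4) with an index.
SELECT * FROM tst WHERE tst_id = 639246;

-- Any of these make the comparison int8 = int8, so the unique index on
-- tst_id can be used:
SELECT * FROM tst WHERE tst_id = 639246::int8;
SELECT * FROM tst WHERE tst_id = CAST(639246 AS bigint);
SELECT * FROM tst WHERE tst_id = '639246';  -- quoted literal resolves to the column type

-- On 8.0 and later the plain integer literal is enough; running EXPLAIN on
-- the target server confirms which plan is chosen.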
[ { "msg_contents": "Hi all,\n\nI am new to PostgreSQL and query optimizations. We have recently moved \nour project from MySQL to PostgreSQL and we are having performance \nproblem with one of our most often used queries. On MySQL the speed was \nsufficient but PostgreSQL chooses time expensive query plan. I would \nlike to optimize it somehow but the query plan from EXPLAIN ANALYZE is \nlittle bit cryptic to me.\n\nSo the first thing I would like is to understand the query plan. I have \nread \"performance tips\" and FAQ but it didn't move me too much further.\n\nI would appreciate if someone could help me to understand the query plan \nand what are the possible general options I can test. I think at this \nmoment the most expensive part is the \"Sort\". Am I right? If so, how \ncould I generally avoid it (turning something on or off, using \nparentheses for JOINs etc.) to force some more efficient query plan?\n\nThank you for any suggestions.\n\nQUERY PLAN\n\nMerge Right Join (cost=9868.84..9997.74 rows=6364 width=815) (actual time=9982.022..10801.216 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".cadastralunitidfk)\n\n -> Index Scan using cadastralunits_pkey on cadastralunits (cost=0.00..314.72 rows=13027 width=31) (actual time=0.457..0.552 rows=63 loops=1)\n\n -> Sort (cost=9868.84..9884.75 rows=6364 width=788) (actual time=9981.405..10013.708 rows=6364 loops=1)\n\n Sort Key: addevicessites.cadastralunitidfk\n\n -> Hash Left Join (cost=5615.03..7816.51 rows=6364 width=788) (actual time=3898.603..9884.248 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnerstickeridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=5612.27..7718.29 rows=6364 width=762) (actual time=3898.243..9104.791 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnermaintaineridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=5609.51..7620.06 rows=6364 width=736) (actual time=3897.996..8341.965 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnerelectricitysupplieridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=5606.74..7521.84 rows=6364 width=710) (actual time=3897.736..7572.182 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartneridentificationoperatoridfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=5603.98..7423.62 rows=6364 width=684) (actual time=3897.436..6821.713 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitestatustypeidfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=5602.93..6706.61 rows=6364 width=657) (actual time=3897.294..6038.976 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitepositionidfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=5601.89..6276.01 rows=6364 width=634) (actual time=3897.158..5303.575 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitevisibilityidfk = \"inner\".idpk)\n\n -> Merge Right Join (cost=5600.85..5702.21 rows=6364 width=602) (actual time=3896.963..4583.749 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessitesizeidfk)\n\n -> Index Scan using addevicessitesizes_pkey on addevicessitesizes (cost=0.00..5.62 rows=110 width=14) (actual time=0.059..0.492 rows=110 loops=1)\n\n -> Sort (cost=5600.85..5616.76 rows=6364 width=592) (actual time=3896.754..3915.022 rows=6364 loops=1)\n\n Sort Key: addevicessites.addevicessitesizeidfk\n\n -> Hash Left Join (cost=2546.59..4066.81 rows=6364 width=592) (actual time=646.162..3792.310 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitedistrictidfk = \"inner\".idpk)\n\n -> Hash Left Join 
(cost=2539.29..3964.05 rows=6364 width=579) (actual time=645.296..3142.128 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitestreetdescriptionidfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=2389.98..2724.64 rows=6364 width=544) (actual time=632.806..2466.030 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitestreetidfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=2324.25..2515.72 rows=6364 width=518) (actual time=626.081..1822.137 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitecityidfk = \"inner\".idpk)\n\n -> Merge Right Join (cost=2321.70..2417.71 rows=6364 width=505) (actual time=625.598..1220.967 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessitecountyidfk)\n\n -> Sort (cost=5.83..6.10 rows=110 width=17) (actual time=0.348..0.391 rows=110 loops=1)\n\n Sort Key: addevicessitecounties.idpk\n\n -> Seq Scan on addevicessitecounties (cost=0.00..2.10 rows=110 width=17) (actual time=0.007..0.145 rows=110 loops=1)\n\n -> Sort (cost=2315.87..2331.78 rows=6364 width=492) (actual time=625.108..640.325 rows=6364 loops=1)\n\n Sort Key: addevicessites.addevicessitecountyidfk\n\n -> Merge Right Join (cost=0.00..1006.90 rows=6364 width=492) (actual time=0.145..543.043 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessiteregionidfk)\n\n -> Index Scan using addevicessiteregions_pkey on addevicessiteregions (cost=0.00..3.17 rows=15 width=23) (actual time=0.011..0.031 rows=15 loops=1)\n\n -> Index Scan using addevicessites_addevicessiteregionidfk on addevicessites (cost=0.00..924.14 rows=6364 width=473) (actual time=0.010..9.825 rows=6364 loops=1)\n\n -> Hash (cost=2.24..2.24 rows=124 width=17) (actual time=0.238..0.238 rows=0 loops=1)\n\n -> Seq Scan on addevicessitecities (cost=0.00..2.24 rows=124 width=17) (actual time=0.009..0.145 rows=124 loops=1)\n\n -> Hash (cost=58.58..58.58 rows=2858 width=34) (actual time=6.532..6.532 rows=0 loops=1)\n\n -> Seq Scan on addevicessitestreets (cost=0.00..58.58 rows=2858 width=34) (actual time=0.040..4.129 rows=2858 loops=1)\n\n -> Hash (cost=96.85..96.85 rows=4585 width=43) (actual time=11.786..11.786 rows=0 loops=1)\n\n -> Seq Scan on addevicessitestreetdescriptions (cost=0.00..96.85 rows=4585 width=43) (actual time=0.036..7.290 rows=4585 loops=1)\n\n -> Hash (cost=6.44..6.44 rows=344 width=21) (actual time=0.730..0.730 rows=0 loops=1)\n\n -> Seq Scan on addevicessitedistricts (cost=0.00..6.44 rows=344 width=21) (actual time=0.027..0.478 rows=344 loops=1)\n\n -> Materialize (cost=1.04..1.08 rows=4 width=36) (actual time=0.000..0.002 rows=4 loops=6364)\n\n -> Seq Scan on addevicessitevisibilities (cost=0.00..1.04 rows=4 width=36) (actual time=0.036..0.050 rows=4 loops=1)\n\n -> Materialize (cost=1.03..1.06 rows=3 width=27) (actual time=0.001..0.002 rows=3 loops=6364)\n\n -> Seq Scan on addevicessitepositions (cost=0.00..1.03 rows=3 width=27) (actual time=0.013..0.017 rows=3 loops=1)\n\n -> Materialize (cost=1.05..1.10 rows=5 width=31) (actual time=0.000..0.002 rows=5 loops=6364)\n\n -> Seq Scan on addevicessitestatustypes (cost=0.00..1.05 rows=5 width=31) (actual time=0.012..0.019 rows=5 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.171..0.171 rows=0 loops=1)\n\n -> Seq Scan on partneridentifications partneridentificationsoperator (cost=0.00..2.61 rows=61 width=34) (actual time=0.027..0.126 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.130..0.130 rows=0 loops=1)\n\n -> Seq Scan on partners partnerselectricitysupplier 
(cost=0.00..2.61 rows=61 width=34) (actual time=0.003..0.076 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.118..0.118 rows=0 loops=1)\n\n -> Seq Scan on partners partnersmaintainer (cost=0.00..2.61 rows=61 width=34) (actual time=0.003..0.075 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.171..0.171 rows=0 loops=1)\n\n -> Seq Scan on partners partnerssticker (cost=0.00..2.61 rows=61 width=34) (actual time=0.029..0.120 rows=61 loops=1)\n\nTotal runtime: 10811.567 ms\n\n\n-- \nMiroslav ďż˝ulc", "msg_date": "Sun, 13 Mar 2005 16:32:52 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "How to read query plan" }, { "msg_contents": "Miroslav ďż˝ulc wrote:\n\n> Hi all,\n>\n> I am new to PostgreSQL and query optimizations. We have recently moved\n> our project from MySQL to PostgreSQL and we are having performance\n> problem with one of our most often used queries. On MySQL the speed\n> was sufficient but PostgreSQL chooses time expensive query plan. I\n> would like to optimize it somehow but the query plan from EXPLAIN\n> ANALYZE is little bit cryptic to me.\n>\n> So the first thing I would like is to understand the query plan. I\n> have read \"performance tips\" and FAQ but it didn't move me too much\n> further.\n>\n> I would appreciate if someone could help me to understand the query\n> plan and what are the possible general options I can test. I think at\n> this moment the most expensive part is the \"Sort\". Am I right? If so,\n> how could I generally avoid it (turning something on or off, using\n> parentheses for JOINs etc.) to force some more efficient query plan?\n>\n> Thank you for any suggestions.\n>\nYou really need to post the original query, so we can see *why* postgres\nthinks it needs to run the plan this way.\n\nAlso, the final sort actually isn't that expensive.\n\nWhen you have the numbers (cost=xxx..yyy) the xxx is the time when the\nstep can start, and the yyy is the time when the step can finish. For a\nlot of steps, it can start running while the sub-steps are still feeding\nback more data, for others, it has to wait for the sub-steps to finish.\n\nThe first thing to look for, is to make sure the estimated number of\nrows is close to the actual number of rows. If they are off, then\npostgres may be mis-estimating the optimal plan. (If postgres thinks it\nis going to only need 10 rows, it may use an index scan, but when 1000\nrows are returned, a seq scan might have been faster.)\n\nYou seem to be doing a lot of outer joins. Is that necessary? I don't\nreally know what you are looking for, but you are joining against enough\ntables, that I think this query is always going to be slow.\n\n From what I can tell, you have 1 table which has 6364 rows, and you are\ngrabbing all of those rows, and then outer joining it with about 11\nother tables.\n\nI would actually guess that the most expensive parts of the plan are the\nNESTED LOOPS which when they go to materialize have to do a sequential\nscan, and they get executed 6364 times. 
It looks like the other tables\nare small (only 3-5 rows), so it takes about 0.05 ms for each seqscan,\nthe problem is that because you are doing it 6k times, it ends up taking\nabout 300ms of your time.\n\nYou could try setting \"set enable_nestloop to off\".\nI don't know that it will be faster, but it could be.\n\nIn general, though, it seems like you should be asking a different\nquestion, rather than trying to optimize the query that you have.\n\nCan you post the original SQL statement, and maybe describe what you are\ntrying to do?\n\nJohn\n=:->", "msg_date": "Sun, 13 Mar 2005 10:24:14 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "On Sun, 2005-03-13 at 16:32 +0100, Miroslav ďż˝ulc wrote:\n> Hi all,\n> \n> I am new to PostgreSQL and query optimizations. We have recently moved \n> our project from MySQL to PostgreSQL and we are having performance \n> problem with one of our most often used queries. On MySQL the speed was \n> sufficient but PostgreSQL chooses time expensive query plan. I would \n> like to optimize it somehow but the query plan from EXPLAIN ANALYZE is \n> little bit cryptic to me.\n> \n\n[snip output of EXPLAIN ANALYZE]\n\nfor those of us who have not yet reached the level where one can\ninfer it from the query plan, how abour showing us the actual\nquery too ?\n\nbut as an example of what to look for, consider the first few lines\n(reformatted): \n\n> Merge Right Join (cost=9868.84..9997.74 rows=6364 width=815) \n> (actual time=9982.022..10801.216 rows=6364 loops=1)\n> Merge Cond: (\"outer\".idpk = \"inner\".cadastralunitidfk)\n> -> Index Scan using cadastralunits_pkey on cadastralunits \n> (cost=0.00..314.72 rows=13027 width=31)\n> (actual time=0.457..0.552 rows=63 loops=1)\n> -> Sort (cost=9868.84..9884.75 rows=6364 width=788)\n> (actual time=9981.405..10013.708 rows=6364 loops=1)\n\nnotice that the index scan is expected to return 13027 rows, but\nactually returns 63. 
this might influence the a choice of plan.\n\ngnari\n\n\n\n", "msg_date": "Sun, 13 Mar 2005 16:51:04 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Hi John,\n\nthank you for your response.\n\nJohn Arbash Meinel wrote:\n\n> You really need to post the original query, so we can see *why* postgres\n> thinks it needs to run the plan this way.\n\nHere it is:\n\nSELECT AdDevicesSites.IDPK, AdDevicesSites.AdDevicesSiteSizeIDFK, \nAdDevicesSites.AdDevicesSiteRegionIDFK, \nAdDevicesSites.AdDevicesSiteCountyIDFK, \nAdDevicesSites.AdDevicesSiteCityIDFK, \nAdDevicesSites.AdDevicesSiteDistrictIDFK, \nAdDevicesSites.AdDevicesSiteStreetIDFK, \nAdDevicesSites.AdDevicesSiteStreetDescriptionIDFK, \nAdDevicesSites.AdDevicesSitePositionIDFK, \nAdDevicesSites.AdDevicesSiteVisibilityIDFK, \nAdDevicesSites.AdDevicesSiteStatusTypeIDFK, \nAdDevicesSites.AdDevicesSitePartnerIdentificationOperatorIDFK, \nAdDevicesSites.AdDevicesSitePartnerElectricitySupplierIDFK, \nAdDevicesSites.AdDevicesSitePartnerMaintainerIDFK, \nAdDevicesSites.AdDevicesSitePartnerStickerIDFK, \nAdDevicesSites.CadastralUnitIDFK, AdDevicesSites.MediaType, \nAdDevicesSites.Mark, AdDevicesSites.Amount, AdDevicesSites.Distance, \nAdDevicesSites.OwnLightening, AdDevicesSites.LocationDownTown, \nAdDevicesSites.LocationSuburb, AdDevicesSites.LocationBusinessDistrict, \nAdDevicesSites.LocationResidentialDistrict, \nAdDevicesSites.LocationIndustrialDistrict, \nAdDevicesSites.LocationNoBuildings, AdDevicesSites.ParkWayHighWay, \nAdDevicesSites.ParkWayFirstClassRoad, AdDevicesSites.ParkWayOtherRoad, \nAdDevicesSites.ParkWayStreet, AdDevicesSites.ParkWayAccess, \nAdDevicesSites.ParkWayExit, AdDevicesSites.ParkWayParkingPlace, \nAdDevicesSites.ParkWayPassangersOnly, AdDevicesSites.ParkWayCrossRoad, \nAdDevicesSites.PositionStandAlone, \nAdDevicesSites.NeighbourhoodPublicTransportation, \nAdDevicesSites.NeighbourhoodInterCityTransportation, \nAdDevicesSites.NeighbourhoodPostOffice, \nAdDevicesSites.NeighbourhoodNewsStand, \nAdDevicesSites.NeighbourhoodAmenities, \nAdDevicesSites.NeighbourhoodSportsSpot, \nAdDevicesSites.NeighbourhoodHealthServiceSpot, \nAdDevicesSites.NeighbourhoodShops, \nAdDevicesSites.NeighbourhoodShoppingCenter, \nAdDevicesSites.NeighbourhoodSuperMarket, \nAdDevicesSites.NeighbourhoodPetrolStation, \nAdDevicesSites.NeighbourhoodSchool, AdDevicesSites.NeighbourhoodBank, \nAdDevicesSites.NeighbourhoodRestaurant, \nAdDevicesSites.NeighbourhoodHotel, AdDevicesSites.RestrictionCigarettes, \nAdDevicesSites.RestrictionPolitics, AdDevicesSites.RestrictionSpirits, \nAdDevicesSites.RestrictionSex, AdDevicesSites.RestrictionOther, \nAdDevicesSites.RestrictionNote, AdDevicesSites.SpotMapFile, \nAdDevicesSites.SpotPhotoFile, AdDevicesSites.SourcePhotoTimeStamp, \nAdDevicesSites.SourceMapTimeStamp, AdDevicesSites.Price, \nAdDevicesSites.WebPrice, AdDevicesSites.CadastralUnitCode, \nAdDevicesSites.BuildingNumber, AdDevicesSites.ParcelNumber, \nAdDevicesSites.GPSLatitude, AdDevicesSites.GPSLongitude, \nAdDevicesSites.GPSHeight, AdDevicesSites.MechanicalOpticalCoordinates, \nAdDevicesSites.Deleted, AdDevicesSites.Protected, \nAdDevicesSites.DateCreated, AdDevicesSites.DateLastModified, \nAdDevicesSites.DateDeleted, AdDevicesSites.CreatedByUserIDFK, \nAdDevicesSites.LastModifiedByUserIDFK, AdDevicesSites.DeletedByUserIDFK, \nAdDevicesSites.PhotoLastModificationDate, \nAdDevicesSites.MapLastModificationDate, AdDevicesSites.DateLastImported, 
\nAdDevicesSiteRegions.Name AS AdDevicesSiteRegionName, \nAdDevicesSiteCounties.Name AS AdDevicesSiteCountyName, \nAdDevicesSiteCities.Name AS AdDevicesSiteCityName, \nAdDevicesSiteStreets.Name AS AdDevicesSiteStreetName, \nAdDevicesSiteDistricts.Name AS AdDevicesSiteDistrictName, \nAdDevicesSiteStreetDescriptions.Name_cs AS \nAdDevicesSiteStreetDescriptionName_cs, \nAdDevicesSiteStreetDescriptions.Name_en AS \nAdDevicesSiteStreetDescriptionName_en, AdDevicesSiteSizes.Name AS \nAdDevicesSiteSizeName, SUBSTRING(AdDevicesSiteVisibilities.Name_cs, 3) \nAS AdDevicesSiteVisibilityName_cs, \nSUBSTRING(AdDevicesSiteVisibilities.Name_en, 3) AS \nAdDevicesSiteVisibilityName_en, AdDevicesSitePositions.Name_cs AS \nAdDevicesSitePositionName_cs, AdDevicesSitePositions.Name_en AS \nAdDevicesSitePositionName_en, AdDevicesSiteStatusTypes.Name_cs AS \nAdDevicesSiteStatusTypeName_cs, AdDevicesSiteStatusTypes.Name_en AS \nAdDevicesSiteStatusTypeName_en, PartnerIdentificationsOperator.Name AS \nPartnerIdentificationOperatorName, PartnersElectricitySupplier.Name AS \nPartnerElectricitySupplierName, PartnersMaintainer.Name AS \nPartnerMaintainerName, PartnersSticker.Name AS PartnerStickerName, \nCadastralUnits.Code AS CadastralUnitCodeNative, CadastralUnits.Name AS \nCadastralUnitName\nFROM AdDevicesSites\nLEFT JOIN AdDevicesSiteRegions ON AdDevicesSites.AdDevicesSiteRegionIDFK \n= AdDevicesSiteRegions.IDPK\nLEFT JOIN AdDevicesSiteCounties ON \nAdDevicesSites.AdDevicesSiteCountyIDFK = AdDevicesSiteCounties.IDPK\nLEFT JOIN AdDevicesSiteCities ON AdDevicesSites.AdDevicesSiteCityIDFK = \nAdDevicesSiteCities.IDPK\nLEFT JOIN AdDevicesSiteStreets ON AdDevicesSites.AdDevicesSiteStreetIDFK \n= AdDevicesSiteStreets.IDPK\nLEFT JOIN AdDevicesSiteStreetDescriptions ON \nAdDevicesSites.AdDevicesSiteStreetDescriptionIDFK = \nAdDevicesSiteStreetDescriptions.IDPK\nLEFT JOIN AdDevicesSiteDistricts ON \nAdDevicesSites.AdDevicesSiteDistrictIDFK = AdDevicesSiteDistricts.IDPK\nLEFT JOIN AdDevicesSiteSizes ON AdDevicesSites.AdDevicesSiteSizeIDFK = \nAdDevicesSiteSizes.IDPK\nLEFT JOIN AdDevicesSiteVisibilities ON \nAdDevicesSites.AdDevicesSiteVisibilityIDFK = AdDevicesSiteVisibilities.IDPK\nLEFT JOIN AdDevicesSitePositions ON \nAdDevicesSites.AdDevicesSitePositionIDFK = AdDevicesSitePositions.IDPK\nLEFT JOIN AdDevicesSiteStatusTypes ON \nAdDevicesSites.AdDevicesSiteStatusTypeIDFK = AdDevicesSiteStatusTypes.IDPK\nLEFT JOIN PartnerIdentifications AS PartnerIdentificationsOperator ON \nAdDevicesSites.AdDevicesSitePartnerIdentificationOperatorIDFK = \nPartnerIdentificationsOperator.IDPK\nLEFT JOIN Partners AS PartnersElectricitySupplier ON \nAdDevicesSites.AdDevicesSitePartnerElectricitySupplierIDFK = \nPartnersElectricitySupplier.IDPK\nLEFT JOIN Partners AS PartnersMaintainer ON \nAdDevicesSites.AdDevicesSitePartnerMaintainerIDFK = PartnersMaintainer.IDPK\nLEFT JOIN Partners AS PartnersSticker ON \nAdDevicesSites.AdDevicesSitePartnerStickerIDFK = PartnersSticker.IDPK\nLEFT JOIN CadastralUnits ON AdDevicesSites.CadastralUnitIDFK = \nCadastralUnits.IDPK\n\n> Also, the final sort actually isn't that expensive.\n>\n> When you have the numbers (cost=xxx..yyy) the xxx is the time when the\n> step can start, and the yyy is the time when the step can finish. For a\n> lot of steps, it can start running while the sub-steps are still feeding\n> back more data, for others, it has to wait for the sub-steps to finish.\n\nThis is thi bit of information I didn't find in the documentation and \nwere looking for. 
Thank you for the enlightening :-) With this knowledge \nI can see that the JOINs are the bottleneck.\n\n> The first thing to look for, is to make sure the estimated number of\n> rows is close to the actual number of rows. If they are off, then\n> postgres may be mis-estimating the optimal plan. (If postgres thinks it\n> is going to only need 10 rows, it may use an index scan, but when 1000\n> rows are returned, a seq scan might have been faster.)\n\nThe \"row=\" numbers are equal to those of the total count of items in \nthat tables (generated by VACUUM ANALYZE).\n\n> You seem to be doing a lot of outer joins. Is that necessary?\n\nThese external tables contain information that are a unique parameter of \nthe AdDevice (like Position, Region, County, City etc.), in some \ncontaining localized description of the property attribute. Some of them \ncould be moved into the main table but that would create a redundancy, \nsome of them cannot be moved into the main table (like information about \nPartners which is definitely another object with respect to AdDevices). \nI think the names of the tables are self-explanatory so it should be \nclear what each table stores. Is this design incorrect?\n\nIn fact, we only need about 30 records at a time but LIMIT can speed-up \nthe query only when looking for the first 30 records. Setting OFFSET \nslows the query down.\n\n> I don't\n> really know what you are looking for, but you are joining against enough\n> tables, that I think this query is always going to be slow.\n\nIn MySQL the query was not so slow and I don't see any reason why there \nshould be large differences in SELECT speed. But if the design of the \ntables is incorrect, we will correct it.\n\n> From what I can tell, you have 1 table which has 6364 rows, and you are\n> grabbing all of those rows, and then outer joining it with about 11\n> other tables.\n\nHere are the exact numbers:\n\nAdDevicesSites - 6364\nAdDevicesSiteRegions - 15\nAdDevicesSiteCounties - 110\nAdDevicesSiteCities - 124\nAdDevicesSiteStreets - 2858\nAdDevicesSiteStreetDescriptions - 4585\nAdDevicesSiteDistricts - 344\nAdDevicesSiteSizes - 110\nAdDevicesSiteVisibilities - 4\nAdDevicesSitePositions - 3\nAdDevicesSiteStatusTypes - 5\nPartnerIdentifications - 61\nPartners - 61\nCadastralUnits - 13027\n\n> I would actually guess that the most expensive parts of the plan are the\n> NESTED LOOPS which when they go to materialize have to do a sequential\n> scan, and they get executed 6364 times. 
It looks like the other tables\n> are small (only 3-5 rows), so it takes about 0.05 ms for each seqscan,\n> the problem is that because you are doing it 6k times, it ends up taking\n> about 300ms of your time.\n>\n> You could try setting \"set enable_nestloop to off\".\n> I don't know that it will be faster, but it could be.\n\nI have tried that and it resulted in about 2 sec slowdown :-(\n\n> In general, though, it seems like you should be asking a different\n> question, rather than trying to optimize the query that you have.\n\nYou mean \"how should I improve the design to make the query faster\"?\n\n> Can you post the original SQL statement, and maybe describe what you are\n> trying to do?\n\nI hope the explanation above is clear and sufficient :-)\n\n>\n> John\n> =:->\n>", "msg_date": "Sun, 13 Mar 2005 18:10:31 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Hi Ragnar,\n\nRagnar Hafstað wrote:\n\n>[snip output of EXPLAIN ANALYZE]\n>\n>for those of us who have not yet reached the level where one can\n>infer it from the query plan, how abour showing us the actual\n>query too ?\n> \n>\nI thought it will be sufficient to show me where the main bottleneck is. \nAnd in fact, the query is rather lengthy. But I have included it in the \nresponse to John. So sorry for the incompletness.\n\n>but as an example of what to look for, consider the first few lines\n>(reformatted): \n> \n>\n>>Merge Right Join (cost=9868.84..9997.74 rows=6364 width=815) \n>> (actual time=9982.022..10801.216 rows=6364 loops=1)\n>> Merge Cond: (\"outer\".idpk = \"inner\".cadastralunitidfk)\n>> -> Index Scan using cadastralunits_pkey on cadastralunits \n>> (cost=0.00..314.72 rows=13027 width=31)\n>> (actual time=0.457..0.552 rows=63 loops=1)\n>> -> Sort (cost=9868.84..9884.75 rows=6364 width=788)\n>> (actual time=9981.405..10013.708 rows=6364 loops=1)\n>> \n>>\n>notice that the index scan is expected to return 13027 rows, but\n>actually returns 63. this might influence the a choice of plan.\n> \n>\nYes, the situation in this scenario is that the table of CadastralUnits \ncontains all units from country but the AdDevices in this case are only \nfrom the 63 CadastralUnits. So the result - 63 rows - is just this \nlittle subset. Up to that, not all AdDevices have CadastralUnitIDFK set \nto an IDPK that exists in CadastralUnits but to zero (= no CadastralUnit \nset).\n\n>gnari\n> \n>\nMiroslav Šulc", "msg_date": "Sun, 13 Mar 2005 18:23:51 +0100", "msg_from": "=?UTF-8?B?TWlyb3NsYXYgxaB1bGM=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Miroslav ďż˝ulc wrote:\n\n> Hi John,\n>\n> thank you for your response.\n>\nHow about a quick side track.\nHave you played around with your shared_buffers, maintenance_work_mem,\nand work_mem settings?\nWhat version of postgres are you using? The above names changed in 8.0,\nand 8.0 also has some perfomance improvements over the 7.4 series.\n\nWhat is your hardware? Are you testing this while there is load on the\nsystem, or under no load.\nAre you re-running the query multiple times, and reporting the later\nspeeds, or just the first time? (If nothing is loaded into memory, the\nfirst run is easily 10x slower than later ones.)\n\nJust some background info. 
If you have set these to reasonable values,\nwe probably don't need to spend much time here, but it's just one of\nthose things to check.\n\nJohn\n=:->", "msg_date": "Sun, 13 Mar 2005 11:30:25 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Miroslav ďż˝ulc wrote:\n\n> Hi John,\n>\n> thank you for your response.\n>\nI will comment on things separately.\n\n> John Arbash Meinel wrote:\n>\n...\n\n> These external tables contain information that are a unique parameter\n> of the AdDevice (like Position, Region, County, City etc.), in some\n> containing localized description of the property attribute. Some of\n> them could be moved into the main table but that would create a\n> redundancy, some of them cannot be moved into the main table (like\n> information about Partners which is definitely another object with\n> respect to AdDevices). I think the names of the tables are\n> self-explanatory so it should be clear what each table stores. Is this\n> design incorrect?\n>\nIt's actually more of a question as to why you are doing left outer\njoins, rather than simple joins.\nAre the tables not fully populated? If so, why not?\n\nHow are you using this information? Why is it useful to get back rows\nthat don't have all of their information filled out?\nWhy is it useful to have so many columns returned? It seems like it most\ncases, you are only going to be able to use *some* of the information,\nwhy not create more queries that are specialized, rather than one get\neverything query.\n\n> In fact, we only need about 30 records at a time but LIMIT can\n> speed-up the query only when looking for the first 30 records. Setting\n> OFFSET slows the query down.\n>\nHave you thought about using a cursor instead of using limit + offset?\nThis may not help the overall time, but it might let you split up when\nthe time is spent.\nBEGIN;\nDECLARE <cursor_name> CURSOR FOR SELECT ... FROM ...;\nFETCH FORWARD 30 FROM <cursor_name>;\nFETCH FORWARD 30 FROM <cursor_name>;\n...\nEND;\n\n>> I don't\n>> really know what you are looking for, but you are joining against enough\n>> tables, that I think this query is always going to be slow.\n>\n>\n> In MySQL the query was not so slow and I don't see any reason why\n> there should be large differences in SELECT speed. But if the design\n> of the tables is incorrect, we will correct it.\n>\nIn the other post I asked about your postgres settings. The defaults are\npretty stingy, so that *might* be an issue.\n\n>> From what I can tell, you have 1 table which has 6364 rows, and you are\n>> grabbing all of those rows, and then outer joining it with about 11\n>> other tables.\n>\n>\n> Here are the exact numbers:\n>\n> AdDevicesSites - 6364\n> AdDevicesSiteRegions - 15\n> AdDevicesSiteCounties - 110\n> AdDevicesSiteCities - 124\n> AdDevicesSiteStreets - 2858\n> AdDevicesSiteStreetDescriptions - 4585\n> AdDevicesSiteDistricts - 344\n> AdDevicesSiteSizes - 110\n> AdDevicesSiteVisibilities - 4\n> AdDevicesSitePositions - 3\n> AdDevicesSiteStatusTypes - 5\n> PartnerIdentifications - 61\n> Partners - 61\n> CadastralUnits - 13027\n>\nAnd if I understand correctly, you consider all of these to be outer\njoins. Meaning you want *all* of AdDevicesSites, and whatever info goes\nalong with it, but there are no restrictions as to what rows you want.\nYou want everything you can get.\n\nDo you actually need *everything*? 
You mention only needing 30, what for?\n\n>> I would actually guess that the most expensive parts of the plan are the\n>> NESTED LOOPS which when they go to materialize have to do a sequential\n>> scan, and they get executed 6364 times. It looks like the other tables\n>> are small (only 3-5 rows), so it takes about 0.05 ms for each seqscan,\n>> the problem is that because you are doing it 6k times, it ends up taking\n>> about 300ms of your time.\n>>\n>> You could try setting \"set enable_nestloop to off\".\n>> I don't know that it will be faster, but it could be.\n>\n>\n> I have tried that and it resulted in about 2 sec slowdown :-(\n\nGenerally, the optimizer *does* select the best query plan. As long as\nit has accurate statistics, which it seems to in this case.\n\n>\n>> In general, though, it seems like you should be asking a different\n>> question, rather than trying to optimize the query that you have.\n>\n>\n> You mean \"how should I improve the design to make the query faster\"?\n>\nThere is one possibility if we don't find anything nicer. Which is to\ncreate a lazy materialized view. Basically, you run this query, and\nstore it in a table. Then when you want to do the SELECT, you just do\nthat against the unrolled table.\nYou can then create triggers, etc to keep the data up to date.\nHere is a good documentation of it:\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nIt is basically a way that you can un-normalize data, in a safe way.\n\nAlso, another thing that you can do, is instead of using a cursor, you\ncan create a temporary table with the results of the query, and create a\nprimary key which is just a simple counter. Then instead of doing limit\n+ offset, you can select * where id > 0 and id < 30; ... select * where\nid > 30 and id < 60; etc.\n\nIt still requires the original query to be run, though, so it is not\nnecessarily optimal for you.\n\n>> Can you post the original SQL statement, and maybe describe what you are\n>> trying to do?\n>\n>\n> I hope the explanation above is clear and sufficient :-)\n>\n>>\n>> John\n>> =:->\n>\n\nUnfortunately, I don't really see any obvious problems with your query\nin the way that you are using it. The problem is that you are not\napplying any selectivity, so postgres has to go to all the tables, and\nget all the rows, and then try to logically merge them together. It is\ndoing a hash merge, which is generally one of the faster ones and it\nseems to be doing the right thing.\n\nI would be curious to see how mysql was handling this query, to see if\nthere was something different it was trying to do. I'm also curious how\nmuch of a difference there was.\n\nJohn\n=:->", "msg_date": "Sun, 13 Mar 2005 11:50:22 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "John Arbash Meinel wrote:\n\n> How about a quick side track.\n> Have you played around with your shared_buffers, maintenance_work_mem,\n> and work_mem settings?\n\nI have tried to set shared_buffers to 48000 now but no speedup \n(11,098.813 ms third try). The others are still default. 
I'll see \ndocumentation and will play with the other parameters.\n\n> What version of postgres are you using?\n\n8.0.1\n\n> The above names changed in 8.0,\n> and 8.0 also has some perfomance improvements over the 7.4 series.\n>\n> What is your hardware?\n\nMy dev notebook Acer TravelMate 292LMi\n$ cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 9\nmodel name : Intel(R) Pentium(R) M processor 1500MHz\nstepping : 5\ncpu MHz : 1495.485\ncache size : 1024 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov \npat clflush dts acpi mmx fxsr sse sse2 tm pbe est tm2\nbogomips : 2957.31\n\n$ cat /proc/meminfo\nMemTotal: 516136 kB\nMemFree: 18024 kB\nBuffers: 21156 kB\nCached: 188868 kB\nSwapCached: 24 kB\nActive: 345596 kB\nInactive: 119344 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 516136 kB\nLowFree: 18024 kB\nSwapTotal: 1004020 kB\nSwapFree: 1003996 kB\nDirty: 4 kB\nWriteback: 0 kB\nMapped: 343676 kB\nSlab: 18148 kB\nCommitLimit: 1262088 kB\nCommitted_AS: 951536 kB\nPageTables: 2376 kB\nVmallocTotal: 516056 kB\nVmallocUsed: 90528 kB\nVmallocChunk: 424912 kB\n\nIDE disc.\n\n# hdparm -Tt /dev/hda\n/dev/hda:\n Timing cached reads: 1740 MB in 2.00 seconds = 870.13 MB/sec\n Timing buffered disk reads: 40 MB in 3.30 seconds = 12.10 MB/sec\n\n> Are you testing this while there is load on the\n> system, or under no load.\n\nThe load is low. This is few seconds after I have run the EXPLAIN ANALYZE.\n\n# cat /proc/loadavg\n0.31 0.51 0.33 1/112 6909\n\n> Are you re-running the query multiple times, and reporting the later\n> speeds, or just the first time? (If nothing is loaded into memory, the\n> first run is easily 10x slower than later ones.)\n\nThe times changes only little. First run was about 13 sec, second about \n10 sec, third about 11 sec etc.\n\n> Just some background info. If you have set these to reasonable values,\n> we probably don't need to spend much time here, but it's just one of\n> those things to check.\n\nSure you are right. I'll try the other parameters.\n\n>\n> John\n> =:->\n>\nMiroslav", "msg_date": "Sun, 13 Mar 2005 19:10:21 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "John Arbash Meinel wrote:\n\n> It's actually more of a question as to why you are doing left outer\n> joins, rather than simple joins.\n> Are the tables not fully populated? If so, why not?\n\nSome records do not consist of full information (they are collected from \ndifferent sources which use different approach to the data collection) \nso using INNER JOIN would cause some records wouldn't be displayed which \nis unacceptable.\n\n> How are you using this information? Why is it useful to get back rows\n> that don't have all of their information filled out?\n\nEach row contains main information which are important. The other \ninformation are also important but may be missing. Information are \ndisplay on lists of 30 rows or on a card. When using filter the query is \nmuch faster but the case without filter has these results.\n\n> Why is it useful to have so many columns returned? 
It seems like it most\n> cases, you are only going to be able to use *some* of the information,\n> why not create more queries that are specialized, rather than one get\n> everything query.\n\nMany of the columns are just varchar(1) (because of the migration from \nMySQL enum field type) so the record is not so long as it could seem. \nThese fields are just switches (Y(es) or N(o)). The problem is users can \ndefine their own templates and in different scenarios there might be \ndisplayed different information so reducing the number of fields would \nmean in some cases it wouldn't work as expected. But if we couldn't \nspeed the query up, we will try to improve it other way.\nIs there any serious reason not to use so much fields except memory \nusage? It seems to me that it shouldn't have a great impact on the speed \nin this case.\n\n> Have you thought about using a cursor instead of using limit + offset?\n> This may not help the overall time, but it might let you split up when\n> the time is spent.\n> ......\n\nNo. I come from MySQL world where these things are not common (at least \nwhen using MyISAM databases). The other reason (if I understand it well) \nis that the retrieval of the packages of 30 records is not sequential. \nOur app is web based and we use paging. User can select page 1 and then \npage 10, then go backward to page 9 etc.\n\n> And if I understand correctly, you consider all of these to be outer\n> joins. Meaning you want *all* of AdDevicesSites, and whatever info goes\n> along with it, but there are no restrictions as to what rows you want.\n> You want everything you can get.\n>\n> Do you actually need *everything*? You mention only needing 30, what for?\n\nFor display of single page consisting of 30 rows. The reason I query all \nrows is that this is one of the filters users can use. User can display \njust bigboards or billboards (or specify more advanced filters) but \nhe/she can also display AdDevices without any filter (page by page). \nBefore I select the 30 row, I need to order them by a key and after that \nselect the records, so this is also the reason why to ask for all rows. \nThe key for sorting might be different for each run.\n\n> There is one possibility if we don't find anything nicer. Which is to\n> create a lazy materialized view. Basically, you run this query, and\n> store it in a table. Then when you want to do the SELECT, you just do\n> that against the unrolled table.\n> You can then create triggers, etc to keep the data up to date.\n> Here is a good documentation of it:\n> http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n>\n> It is basically a way that you can un-normalize data, in a safe way.\n>\n> Also, another thing that you can do, is instead of using a cursor, you\n> can create a temporary table with the results of the query, and create a\n> primary key which is just a simple counter. Then instead of doing limit\n> + offset, you can select * where id > 0 and id < 30; ... select * where\n> id > 30 and id < 60; etc.\n>\n> It still requires the original query to be run, though, so it is not\n> necessarily optimal for you.\n\nThese might be the other steps in case we cannot speed-up the query. I \nwould prefer to speed the query up :-)\n\n> Unfortunately, I don't really see any obvious problems with your query\n> in the way that you are using it. The problem is that you are not\n> applying any selectivity, so postgres has to go to all the tables, and\n> get all the rows, and then try to logically merge them together. 
It is\n> doing a hash merge, which is generally one of the faster ones and it\n> seems to be doing the right thing.\n>\n> I would be curious to see how mysql was handling this query, to see if\n> there was something different it was trying to do. I'm also curious how\n> much of a difference there was.\n\nIn fact, on MySQL I didn't see any slow reactions so I didn't measure \nand inspect it. But I can try it if I figure out how to copy the \ndatabase from PostgreSQL to MySQL.\n\n>\n> John\n> =:->\n>\nThank you for your time and help.\n\nMiroslav", "msg_date": "Sun, 13 Mar 2005 20:07:00 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "John Arbash Meinel <[email protected]> writes:\n> How about a quick side track.\n> Have you played around with your shared_buffers, maintenance_work_mem,\n> and work_mem settings?\n\nIndeed. The hash joins seem unreasonably slow considering how little\ndata they are processing (unless this is being run on some ancient\ntoaster...). One thought that comes to mind is that work_mem may be\nset so small that the hashes are forced into multiple batches.\n\nAnother question worth asking is what are the data types of the columns\nbeing joined on. If they are character types, what locale and encoding\nis the database using?\n\n> Are you re-running the query multiple times, and reporting the later\n> speeds, or just the first time? (If nothing is loaded into memory, the\n> first run is easily 10x slower than later ones.)\n\nThat cost would be paid during the bottom-level scans though. The thing\nthat strikes me here is that nearly all of the cost is being spent\njoining.\n\n> What version of postgres are you using?\n\nAnd what's the platform (hardware and OS)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2005 14:15:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan " }, { "msg_contents": "Miroslav ďż˝ulc wrote:\n\n> John Arbash Meinel wrote:\n\n...\n\n> Many of the columns are just varchar(1) (because of the migration from\n> MySQL enum field type) so the record is not so long as it could seem.\n> These fields are just switches (Y(es) or N(o)). The problem is users\n> can define their own templates and in different scenarios there might\n> be displayed different information so reducing the number of fields\n> would mean in some cases it wouldn't work as expected. But if we\n> couldn't speed the query up, we will try to improve it other way.\n> Is there any serious reason not to use so much fields except memory\n> usage? It seems to me that it shouldn't have a great impact on the\n> speed in this case.\n\nIs there a reason to use varchar(1) instead of char(1). There probably\nis 0 performance difference, I'm just curious.\n\n>\n>> Have you thought about using a cursor instead of using limit + offset?\n>> This may not help the overall time, but it might let you split up when\n>> the time is spent.\n>> ......\n>\n>\n> No. I come from MySQL world where these things are not common (at\n> least when using MyISAM databases). The other reason (if I understand\n> it well) is that the retrieval of the packages of 30 records is not\n> sequential. Our app is web based and we use paging. 
User can select\n> page 1 and then page 10, then go backward to page 9 etc.\n>\nWell, with cursors you can also do \"FETCH ABSOLUTE 1 FROM\n<cursor_name>\", which sets the cursor position, and then you can \"FETCH\nFORWARD 30\".\nI honestly don't know how the performance will be, but it is something\nthat you could try.\n\n>> And if I understand correctly, you consider all of these to be outer\n>> joins. Meaning you want *all* of AdDevicesSites, and whatever info goes\n>> along with it, but there are no restrictions as to what rows you want.\n>> You want everything you can get.\n>>\n>> Do you actually need *everything*? You mention only needing 30, what\n>> for?\n>\n>\n> For display of single page consisting of 30 rows. The reason I query\n> all rows is that this is one of the filters users can use. User can\n> display just bigboards or billboards (or specify more advanced\n> filters) but he/she can also display AdDevices without any filter\n> (page by page). Before I select the 30 row, I need to order them by a\n> key and after that select the records, so this is also the reason why\n> to ask for all rows. The key for sorting might be different for each run.\n>\nHow are you caching the information in the background in order to\nsupport paging? Since you aren't using limit/offset, and you don't seem\nto be creating a temporary table, I assume you have a layer inbetween\nthe web server and the database (or possibly inside the webserver) which\nkeeps track of current session information. Is that true?\n\n> These might be the other steps in case we cannot speed-up the query. I\n> would prefer to speed the query up :-)\n\nNaturally fast query comes first. I just have the feeling it is either a\npostgres configuration problem, or an intrinsic problem to postgres.\nGiven your constraints, there's not much that we can change about the\nquery itself.\n\n> In fact, on MySQL I didn't see any slow reactions so I didn't measure\n> and inspect it. But I can try it if I figure out how to copy the\n> database from PostgreSQL to MySQL.\n\nI figured you still had a copy of the MySQL around to compare to. You\nprobably don't need to spend too much time on it yet.\n\nJohn\n=:->", "msg_date": "Sun, 13 Mar 2005 13:26:29 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Tom Lane wrote:\n\n>John Arbash Meinel <[email protected]> writes:\n> \n>\n>>How about a quick side track.\n>>Have you played around with your shared_buffers, maintenance_work_mem,\n>>and work_mem settings?\n>> \n>>\n>\n>Indeed. The hash joins seem unreasonably slow considering how little\n>data they are processing (unless this is being run on some ancient\n>toaster...). One thought that comes to mind is that work_mem may be\n>set so small that the hashes are forced into multiple batches.\n> \n>\nI've just tried to uncomment the settings for these parameters with with \nno impact on the query speed.\n\nshared_buffers = 48000 # min 16, at least max_connections*2, \n8KB each\nwork_mem = 1024 # min 64, size in KB\nmaintenance_work_mem = 16384 # min 1024, size in KB\nmax_stack_depth = 2048 # min 100, size in KB\n\n>Another question worth asking is what are the data types of the columns\n>being joined on. If they are character types, what locale and encoding\n>is the database using?\n> \n>\nI have checked this and there are some JOINs smallint against integer. \nIs that problem? 
I would use smallint for IDPKs of some smaller tables \nbut the lack of SMALLSERIAL and my laziness made me use SERIAL instead \nwhich is integer.\n\n>That cost would be paid during the bottom-level scans though. The thing\n>that strikes me here is that nearly all of the cost is being spent\n>joining.\n> \n>\n>>What version of postgres are you using?\n>> \n>>\n>\n>And what's the platform (hardware and OS)?\n> \n>\nI've already posted the hardware info. OS is Linux (Gentoo) with kernel \n2.6.11.\n\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Sun, 13 Mar 2005 20:43:58 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "John Arbash Meinel wrote:\n\n> Is there a reason to use varchar(1) instead of char(1). There probably\n> is 0 performance difference, I'm just curious.\n\nNo, not at all. I'm just not used to char().\n\n> Well, with cursors you can also do \"FETCH ABSOLUTE 1 FROM\n> <cursor_name>\", which sets the cursor position, and then you can \"FETCH\n> FORWARD 30\".\n> I honestly don't know how the performance will be, but it is something\n> that you could try.\n\nThis is science for me at this moment :-)\n\n>> For display of single page consisting of 30 rows. The reason I query\n>> all rows is that this is one of the filters users can use. User can\n>> display just bigboards or billboards (or specify more advanced\n>> filters) but he/she can also display AdDevices without any filter\n>> (page by page). Before I select the 30 row, I need to order them by a\n>> key and after that select the records, so this is also the reason why\n>> to ask for all rows. The key for sorting might be different for each \n>> run.\n>>\n> How are you caching the information in the background in order to\n> support paging? Since you aren't using limit/offset, and you don't seem\n> to be creating a temporary table, I assume you have a layer inbetween\n> the web server and the database (or possibly inside the webserver) which\n> keeps track of current session information. Is that true?\n\nI just need three information:\n1) used filter (stored in session, identified by filter index in query \nstring)\n2) page length (static predefined)\n3) what page to display (in query string)\n\n>> In fact, on MySQL I didn't see any slow reactions so I didn't measure\n>> and inspect it. But I can try it if I figure out how to copy the\n>> database from PostgreSQL to MySQL.\n>\n>\n> I figured you still had a copy of the MySQL around to compare to. You\n> probably don't need to spend too much time on it yet.\n\nIt's not so simple because there are some differences between MySQL and \nPostgreSQL in how they handle case sensitivity etc. The database table \nstructures are not the same too because of different data types support \nand data values support.\n\n>\n> John\n> =:->\n\nMiroslav", "msg_date": "Sun, 13 Mar 2005 20:51:05 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> I've just tried to uncomment the settings for these parameters with with \n> no impact on the query speed.\n\n> shared_buffers = 48000 # min 16, at least max_connections*2, \n> 8KB each\n> work_mem = 1024 # min 64, size in KB\n> maintenance_work_mem = 16384 # min 1024, size in KB\n> max_stack_depth = 2048 # min 100, size in KB\n\nHmm. 
Given the small size of the auxiliary tables, you'd think they'd\nfit in 1MB work_mem no problem. But try bumping work_mem up to 10MB\njust to see if it makes a difference. (BTW, you do know that altering\nthe .conf file doesn't in itself do anything? You have to SIGHUP the\npostmaster to make it notice the change ... and for certain parameters\nsuch as shared_buffers, you actually have to stop and restart the\npostmaster. You can use the SHOW command to verify whether a change\nhas taken effect.)\n\n> I have checked this and there are some JOINs smallint against integer. \n> Is that problem?\n\nThat probably explains why some of the joins are merges instead of\nhashes --- hash join doesn't work across datatypes. Doesn't seem like\nit should be a huge problem though. I was more concerned about the\npossibility of slow locale-dependent string comparisons.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2005 14:54:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan " }, { "msg_contents": "Tom Lane wrote:\n\n>=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> \n>\n>>shared_buffers = 48000 # min 16, at least max_connections*2, \n>>8KB each\n>>work_mem = 1024 # min 64, size in KB\n>>maintenance_work_mem = 16384 # min 1024, size in KB\n>>max_stack_depth = 2048 # min 100, size in KB\n>> \n>>\n>\n>Hmm. Given the small size of the auxiliary tables, you'd think they'd\n>fit in 1MB work_mem no problem. But try bumping work_mem up to 10MB\n>just to see if it makes a difference. (BTW, you do know that altering\n>the .conf file doesn't in itself do anything? You have to SIGHUP the\n>postmaster to make it notice the change ... and for certain parameters\n>such as shared_buffers, you actually have to stop and restart the\n>postmaster. You can use the SHOW command to verify whether a change\n>has taken effect.)\n> \n>\nI've tried to set work_mem to 10240, restarted postmaster and tried the \nEXPLAIN ANALYZE but there is only cca 200 ms speedup.\n\n>>I have checked this and there are some JOINs smallint against integer. \n>>Is that problem?\n>> \n>>\n>That probably explains why some of the joins are merges instead of\n>hashes --- hash join doesn't work across datatypes. Doesn't seem like\n>it should be a huge problem though. I was more concerned about the\n>possibility of slow locale-dependent string comparisons.\n> \n>\nThere are only JOINs number against number. I've tried to change one of \nthe fields from smallint to integer but there was no speedup.\n\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Sun, 13 Mar 2005 21:03:34 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> There are only JOINs number against number.\n\nHmph. There's no reason I can see that hash joins should be as slow as\nthey seem to be in your test.\n\nIs the data confidential? If you'd be willing to send me a pg_dump\noff-list, I'd like to replicate this test and try to see where the time\nis going.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2005 15:26:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan " }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n>> Is the data confidential? 
If you'd be willing to send me a pg_dump\n>> off-list, I'd like to replicate this test and try to see where the time\n>> is going.\n>> \n> Thank you very much for your offer. The data are partially confidental \n> so I hashed some of the text information and changed some values (not \n> the values for the JOINs) so I could send it to you. I've checked the \n> EXPLAIN ANALYZE if anything changed and the result is merely the same \n> (maybe cca 1 sec slower - maybe because the hash caused the text data to \n> be longer).\n\nNo problem; thank you for supplying the test case. What I find is\nrather surprising: most of the runtime can be blamed on disassembling\nand reassembling tuples during the join steps. Here are the hot spots\naccording to gprof:\n\n-----------------------------------------------\n 1.27 7.38 8277/103737 ExecScan [16]\n 2.93 17.02 19092/103737 ExecNestLoop <cycle 2> [14]\n 3.91 22.70 25456/103737 ExecMergeJoin <cycle 2> [13]\n 7.81 45.40 50912/103737 ExecHashJoin <cycle 2> [12]\n[9] 86.3 15.92 92.50 103737 ExecProject [9]\n 7.65 76.45 8809835/9143692 ExecEvalVar [10]\n 3.42 4.57 103737/103775 heap_formtuple [17]\n 0.03 0.24 12726/143737 ExecMakeFunctionResultNoSets [24]\n 0.02 0.12 103737/290777 ExecStoreTuple [44]\n 0.01 0.00 2/2 ExecEvalFunc [372]\n 0.00 0.00 2/22 ExecMakeFunctionResult [166]\n-----------------------------------------------\n 0.00 0.00 42/9143692 ExecEvalFuncArgs [555]\n 0.05 0.51 59067/9143692 ExecHashGetHashValue [32]\n 0.24 2.38 274748/9143692 ExecMakeFunctionResultNoSets [24]\n 7.65 76.45 8809835/9143692 ExecProject [9]\n[10] 69.5 7.94 79.34 9143692 ExecEvalVar [10]\n 79.34 0.00 8750101/9175517 nocachegetattr [11]\n-----------------------------------------------\n\nI think the reason this is popping to the top of the runtime is that the\njoins are so wide (an average of ~85 columns in a join tuple according\nto the numbers above). Because there are lots of variable-width columns\ninvolved, most of the time the fast path for field access doesn't apply\nand we end up going to nocachegetattr --- which itself is going to be\nslow because it has to scan over so many columns. So the cost is\nroughly O(N^2) in the number of columns.\n\nAs a short-term hack, you might be able to improve matters if you can\nreorder your LEFT JOINs to have the minimum number of columns\npropagating up from the earlier join steps. In other words make the\nlater joins add more columns than the earlier, as much as you can.\n\nThis is actually good news, because before 8.0 we had much worse\nproblems than this with extremely wide tuples --- there were O(N^2)\nbehaviors all over the place due to the old List algorithms. Neil\nConway's rewrite of the List code got rid of those problems, and now\nwe can see the places that are left to optimize. The fact that there\nseems to be just one is very nice indeed.\n\nSince ExecProject operations within a nest of joins are going to be\ndealing entirely with Vars, I wonder if we couldn't speed matters up\nby having a short-circuit case for a projection that is only Vars.\nEssentially it would be a lot like execJunk.c, except able to cope\nwith two input tuples. 
Using heap_deformtuple instead of retail\nextraction of fields would eliminate the O(N^2) penalty for wide tuples.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2005 19:08:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan " }, { "msg_contents": "I wrote:\n> Since ExecProject operations within a nest of joins are going to be\n> dealing entirely with Vars, I wonder if we couldn't speed matters up\n> by having a short-circuit case for a projection that is only Vars.\n> Essentially it would be a lot like execJunk.c, except able to cope\n> with two input tuples. Using heap_deformtuple instead of retail\n> extraction of fields would eliminate the O(N^2) penalty for wide tuples.\n\nActually, we already had a pending patch (from Atsushi Ogawa) that\neliminates that particular O(N^2) behavior in another way. After\napplying it, I get about a factor-of-4 reduction in the runtime for\nMiroslav's example.\n\nExecEvalVar and associated routines are still a pretty good fraction of\nthe runtime, so it might still be worth doing something like the above,\nbut it'd probably be just a marginal win instead of a big win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 00:00:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan " }, { "msg_contents": "Tom Lane wrote:\n\n>...\n>I think the reason this is popping to the top of the runtime is that the\n>joins are so wide (an average of ~85 columns in a join tuple according\n>to the numbers above). Because there are lots of variable-width columns\n>involved, most of the time the fast path for field access doesn't apply\n>and we end up going to nocachegetattr --- which itself is going to be\n>slow because it has to scan over so many columns. So the cost is\n>roughly O(N^2) in the number of columns.\n> \n>\nAs there are a lot of varchar(1) in the AdDevicesSites table, wouldn't \nbe helpful to change them to char(1)? Would it solve the variable-width \nproblem at least for some fields and speed the query up?\n\n>As a short-term hack, you might be able to improve matters if you can\n>reorder your LEFT JOINs to have the minimum number of columns\n>propagating up from the earlier join steps. In other words make the\n>later joins add more columns than the earlier, as much as you can.\n> \n>\nThat will be hard as the main table which contains most of the fields is \nLEFT JOINed with the others. I'll look at it if I find some way to \nimprove it.\n\nI'm not sure whether I understand the process of performing the plan but \nI imagine that the data from AdDevicesSites are retrieved only once when \nthey are loaded and maybe stored in memory. Are the columns stored in \nthe order they are in the SQL command? If so, wouldn't it be useful to \nmove all varchar fields at the end of the SELECT query? 
I'm just \nguessing because I don't know at all how a database server is \nimplemented and what it really does.\n\n>..\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Mon, 14 Mar 2005 09:43:19 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan" }, { "msg_contents": "Tom Lane wrote:\n\n>I wrote:\n> \n>\n>>Since ExecProject operations within a nest of joins are going to be\n>>dealing entirely with Vars, I wonder if we couldn't speed matters up\n>>by having a short-circuit case for a projection that is only Vars.\n>>Essentially it would be a lot like execJunk.c, except able to cope\n>>with two input tuples. Using heap_deformtuple instead of retail\n>>extraction of fields would eliminate the O(N^2) penalty for wide tuples.\n>> \n>>\n>\n>Actually, we already had a pending patch (from Atsushi Ogawa) that\n>eliminates that particular O(N^2) behavior in another way. After\n>applying it, I get about a factor-of-4 reduction in the runtime for\n>Miroslav's example.\n> \n>\nIs there a chance we will see this patch in the 8.0.2 release? And when \ncan we expect this release?\n\n>ExecEvalVar and associated routines are still a pretty good fraction of\n>the runtime, so it might still be worth doing something like the above,\n>but it'd probably be just a marginal win instead of a big win.\n>\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Mon, 14 Mar 2005 09:44:48 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan" }, { "msg_contents": "John Arbash Meinel wrote:\n\n>> In fact, on MySQL I didn't see any slow reactions so I didn't measure\n>> and inspect it. But I can try it if I figure out how to copy the\n>> database from PostgreSQL to MySQL.\n>\n>\n> I figured you still had a copy of the MySQL around to compare to. You\n> probably don't need to spend too much time on it yet.\n\nSo I have some results. I have tested the query on both PostgreSQL 8.0.1 \nand MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL \nresult is 11,667.916 ms, MySQL result is 448.4 ms.\n\nBoth databases are running on the same machine (my laptop) and contain \nthe same data. 
However there are some differences in the data table \ndefinitions:\n1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I \nuse 'enum'\n2) in PostgreSQL in some cases I use connection fields that are not of \nthe same type (smallint <-> integer (SERIAL)), in MySQL I use the same types\n\n>\n> John\n> =:->\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 09:58:49 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "> 1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I \n> use 'enum'\n> 2) in PostgreSQL in some cases I use connection fields that are not of \n> the same type (smallint <-> integer (SERIAL)), in MySQL I use the same \n> types\n\nWell both those things will make PostgreSQL slower...\n\nChris\n", "msg_date": "Mon, 14 Mar 2005 17:02:54 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "\n\tInstead of a varchar(1) containing 'y' or 'n' you could use a BOOL or an \ninteger.\n\tYour query seems of the form :\n\n\tSELECT FROM main_table LEFT JOIN a lot of tables ORDER BY sort_key LIMIT \nN OFFSET M;\n\n\tI would suggest to rewrite it in a simpler way : instead of generating \nthe whole result set, sorting it, and then grabbing a slice, generate only \nthe ror id's, grab a slice, and then generate the full rows from that.\n\n\t- If you order by a field which is in main_table :\n\tSELECT FROM main_table LEFT JOIN a lot of tables WHERE main_table.id IN \n(SELECT id FROM main_table ORDER BY sort_key LIMIT N OFFSET M\n) ORDER BY sort_key LIMIT N OFFSET M;\n\n\t- If you order by a field in one of the child tables, I guess you only \nwant to display the rows in the main table which have this field, ie. \nnot-null in the LEFT JOIN. You can also use the principle above.\n\n\t- You can use a straight join instead of an IN.\n\n\nOn Mon, 14 Mar 2005 09:58:49 +0100, Miroslav ᅵulc \n<[email protected]> wrote:\n\n> John Arbash Meinel wrote:\n>\n>>> In fact, on MySQL I didn't see any slow reactions so I didn't measure\n>>> and inspect it. But I can try it if I figure out how to copy the\n>>> database from PostgreSQL to MySQL.\n>>\n>>\n>> I figured you still had a copy of the MySQL around to compare to. You\n>> probably don't need to spend too much time on it yet.\n>\n> So I have some results. I have tested the query on both PostgreSQL 8.0.1\n> and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL\n> result is 11,667.916 ms, MySQL result is 448.4 ms.\n>\n> Both databases are running on the same machine (my laptop) and contain\n> the same data. 
However there are some differences in the data table\n> definitions:\n> 1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I\n> use 'enum'\n> 2) in PostgreSQL in some cases I use connection fields that are not of\n> the same type (smallint <-> integer (SERIAL)), in MySQL I use the same \n> types\n>\n>>\n>> John\n>> =:->\n>\n> Miroslav\n\n\n", "msg_date": "Mon, 14 Mar 2005 10:08:48 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n>> 1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL \n>> I use 'enum'\n>> 2) in PostgreSQL in some cases I use connection fields that are not \n>> of the same type (smallint <-> integer (SERIAL)), in MySQL I use the \n>> same types\n>\n>\n> Well both those things will make PostgreSQL slower...\n\nI think so. I'll change the varchar(1) fields to char(1) where possible \nand will think out what I will do with the smallint <-> integer JOINs. \nSomething like SMALLSERIAL would be pleasant :-) I thought I will wait \nfor Tom Lane's reaction to my improvement suggestions I have posted in \nother mail but maybe he has a deep night because of different time zone.\n\n>\n> Chris\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 10:09:38 +0100", "msg_from": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "PFC wrote:\n\n>\n> Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL \n> or an integer.\n\nSure I could. The problem is our project still supports both MySQL and \nPostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of \nchanges in the code if we would change to the BOOL data type.\n\n> Your query seems of the form :\n>\n> SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY sort_key \n> LIMIT N OFFSET M;\n>\n> I would suggest to rewrite it in a simpler way : instead of \n> generating the whole result set, sorting it, and then grabbing a \n> slice, generate only the ror id's, grab a slice, and then generate \n> the full rows from that.\n>\n> - If you order by a field which is in main_table :\n> SELECT FROM main_table LEFT JOIN a lot of tables WHERE \n> main_table.id IN (SELECT id FROM main_table ORDER BY sort_key LIMIT N \n> OFFSET M\n> ) ORDER BY sort_key LIMIT N OFFSET M;\n>\n> - If you order by a field in one of the child tables, I guess you \n> only want to display the rows in the main table which have this \n> field, ie. not-null in the LEFT JOIN. You can also use the principle \n> above.\n>\n> - You can use a straight join instead of an IN.\n\nDo you mean something like this?\n\nSELECT Table.IDPK, Table2.varchar1, Table2.varchar2, ...\nFROM Table\nLEFT JOIN many tables\nINNER JOIN Table AS Table2\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 10:17:44 +0100", "msg_from": "=?ISO-8859-15?Q?Miroslav_=A6ulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "In article <[email protected]>,\n=?ISO-8859-15?Q?Miroslav_=A6ulc?= <[email protected]> writes:\n\n>> Instead of a varchar(1) containing 'y' or 'n' you could use a\n>> BOOL or an integer.\n\n> Sure I could. The problem is our project still supports both MySQL and\n> PostgreSQL. 
We used enum('Y','N') in MySQL so there would be a lot of\n> changes in the code if we would change to the BOOL data type.\n\nSince BOOL is exactly what you want to express and since MySQL also\nsupports BOOL (*), you should make that change anyway.\n\n(*) MySQL recognizes BOOL as a column type and silently uses\nTINYINT(1) instead.\n\n", "msg_date": "14 Mar 2005 15:05:52 +0100", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't \n> be helpful to change them to char(1)? Would it solve the variable-width \n> problem at least for some fields and speed the query up?\n\nNo, because char(1) isn't physically fixed-width (consider multibyte\nencodings). There's really no advantage to char(N) in Postgres.\n\nI don't know what you're doing with those fields, but if they are\neffectively booleans or small codes you might be able to convert them to\nbool or int fields. There is also the \"char\" datatype (not to be\nconfused with char(1)) which can hold single ASCII characters, but is\nnonstandard and a bit impoverished as to functionality.\n\nHowever, I doubt this is worth pursuing. One of the things I tested\nyesterday was a quick hack to organize the storage of intermediate join\ntuples with fixed-width fields first and non-fixed ones later. It\nreally didn't help much at all :-(. I think the trouble with your\nexample is that in the existing code, the really fast path applies only\nwhen the tuple contains no nulls --- and since you're doing all that\nleft joining, there's frequently at least one null lurking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 10:03:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan " }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> Tom Lane wrote:\n>> Actually, we already had a pending patch (from Atsushi Ogawa) that\n>> eliminates that particular O(N^2) behavior in another way. After\n>> applying it, I get about a factor-of-4 reduction in the runtime for\n>> Miroslav's example.\n>> \n> Is there a chance we will see this patch in the 8.0.2 release?\n\nNo. We are not in the habit of making non-bug-fix changes in stable\nbranches. Ogawa's patch is in CVS for 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 10:04:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan " }, { "msg_contents": "Tom Lane wrote:\n\n>=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> \n>\n>>As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't \n>>be helpful to change them to char(1)? Would it solve the variable-width \n>>problem at least for some fields and speed the query up?\n>> \n>>\n>\n>No, because char(1) isn't physically fixed-width (consider multibyte\n>encodings). There's really no advantage to char(N) in Postgres.\n> \n>\nI was aware of that :-(\n\n>I don't know what you're doing with those fields, but if they are\n>effectively booleans or small codes you might be able to convert them to\n>bool or int fields. 
There is also the \"char\" datatype (not to be\n>confused with char(1)) which can hold single ASCII characters, but is\n>nonstandard and a bit impoverished as to functionality.\n> \n>\nThe problem lies in migration from MySQL to PostgreSQL. In MySQL we \n(badly) choose enum for yes/no switches (there's nothing like boolean \nfield type in MySQL as I know but we could use tinyint). It will be very \ntime consuming to rewrite all such enums and check the code whether it \nworks.\n\n>However, I doubt this is worth pursuing. One of the things I tested\n>yesterday was a quick hack to organize the storage of intermediate join\n>tuples with fixed-width fields first and non-fixed ones later. It\n>really didn't help much at all :-(. I think the trouble with your\n>example is that in the existing code, the really fast path applies only\n>when the tuple contains no nulls --- and since you're doing all that\n>left joining, there's frequently at least one null lurking.\n> \n>\nUnfortunatelly I don't see any other way than LEFT JOINing in this case.\n\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Mon, 14 Mar 2005 16:21:37 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan" }, { "msg_contents": "Harald Fuchs wrote:\n\n>>Sure I could. The problem is our project still supports both MySQL and\n>>PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of\n>>changes in the code if we would change to the BOOL data type.\n>> \n>>\n>\n>Since BOOL is exactly what you want to express and since MySQL also\n>supports BOOL (*), you should make that change anyway.\n> \n>\nI know that. The time will have to come.\n\n>(*) MySQL recognizes BOOL as a column type and silently uses\n>TINYINT(1) instead.\n> \n>\nI've checked that and you are right, but the BOOL is in MySQL from \nversion 4.1.0 though we could use tinyint instead of enum - our bad choice.\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 16:27:44 +0100", "msg_from": "=?windows-1252?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Miroslav ďż˝ulc wrote:\n\n> Tom Lane wrote:\n>\n>> ...\n>> I think the reason this is popping to the top of the runtime is that the\n>> joins are so wide (an average of ~85 columns in a join tuple according\n>> to the numbers above). Because there are lots of variable-width columns\n>> involved, most of the time the fast path for field access doesn't apply\n>> and we end up going to nocachegetattr --- which itself is going to be\n>> slow because it has to scan over so many columns. So the cost is\n>> roughly O(N^2) in the number of columns.\n>>\n>>\n> As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't\n> be helpful to change them to char(1)? Would it solve the\n> variable-width problem at least for some fields and speed the query up?\n>\nI'm guessing there really wouldn't be a difference. I think varchar()\nand char() are stored the same way, just one always has space padding. I\nbelieve they are both varlena types, so they are still \"variable\" length.\n\n>> As a short-term hack, you might be able to improve matters if you can\n>> reorder your LEFT JOINs to have the minimum number of columns\n>> propagating up from the earlier join steps. 
In other words make the\n>> later joins add more columns than the earlier, as much as you can.\n>>\n>>\n> That will be hard as the main table which contains most of the fields\n> is LEFT JOINed with the others. I'll look at it if I find some way to\n> improve it.\n\nOne thing that you could try, is to select just the primary keys from\nthe main table, and then later on, join back to that table to get the\nrest of the columns. It is a little bit hackish, but if it makes your\nquery faster, you might want to try it.\n\n>\n> I'm not sure whether I understand the process of performing the plan\n> but I imagine that the data from AdDevicesSites are retrieved only\n> once when they are loaded and maybe stored in memory. Are the columns\n> stored in the order they are in the SQL command? If so, wouldn't it be\n> useful to move all varchar fields at the end of the SELECT query? I'm\n> just guessing because I don't know at all how a database server is\n> implemented and what it really does.\n>\nI don't think they are stored in the order of the SELECT <> portion. I'm\nguessing they are loaded and saved as you go. But that the order of the\nLEFT JOIN at the end is probably important.\n\n>> ..\n>> regards, tom lane\n>>\n>>\n> Miroslav\n\n\nJohn\n=:->", "msg_date": "Mon, 14 Mar 2005 09:54:29 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] How to read query plan" }, { "msg_contents": "Miroslav �ulc <[email protected]> writes:\n\n> I think so. I'll change the varchar(1) fields to char(1) where possible \n\nchar isn't faster than varchar on postgres. If anything it may be slightly\nslower because every comparison first needs to pad both sides with spaces.\n\n-- \ngreg\n\n", "msg_date": "14 Mar 2005 10:58:55 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "=?ISO-8859-15?Q?Miroslav_=A6ulc?= <[email protected]> writes:\n> PFC wrote:\n>> Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL \n>> or an integer.\n\n> Sure I could. The problem is our project still supports both MySQL and \n> PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of \n> changes in the code if we would change to the BOOL data type.\n\nJust FYI, I did a quick search-and-replace on your dump to replace\nvarchar(1) by \"char\", which makes the column fixed-width without any\nchange in the visible data. This made hardly any difference in the\njoin speed though :-(. So that is looking like a dead end.\n\nJohn's idea about re-joining to the main table to pick up the bulk of\nits fields only after joining to the sub-tables might work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 11:23:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan " }, { "msg_contents": "Tom Lane wrote:\n\n>Just FYI, I did a quick search-and-replace on your dump to replace\n>varchar(1) by \"char\", which makes the column fixed-width without any\n>change in the visible data. This made hardly any difference in the\n>join speed though :-(. So that is looking like a dead end.\n> \n>\nI'll try to change the data type to bool but the problem I stand in \nfront of is that the code expects that SELECTs return 'Y' or 'N' but \nwhat I have found out till now is that PostgreSQL returns 't' or 'f' for \nbool data. 
I think about some solution but they use CPU :-(\n\n>John's idea about re-joining to the main table to pick up the bulk of\n>its fields only after joining to the sub-tables might work.\n> \n>\nI'll try that. It seems it could work.\n\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Mon, 14 Mar 2005 17:33:44 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Miroslav ᅵulc wrote:\n\n> PFC wrote:\n>\n>> Your query seems of the form :\n>>\n>> SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY\n>> sort_key LIMIT N OFFSET M;\n>>\n>> I would suggest to rewrite it in a simpler way : instead of\n>> generating the whole result set, sorting it, and then grabbing a\n>> slice, generate only the ror id's, grab a slice, and then generate\n>> the full rows from that.\n>>\n>> - If you order by a field which is in main_table :\n>> SELECT FROM main_table LEFT JOIN a lot of tables WHERE\n>> main_table.id IN (SELECT id FROM main_table ORDER BY sort_key LIMIT\n>> N OFFSET M\n>> ) ORDER BY sort_key LIMIT N OFFSET M;\n>>\n>> - If you order by a field in one of the child tables, I guess you\n>> only want to display the rows in the main table which have this\n>> field, ie. not-null in the LEFT JOIN. You can also use the principle\n>> above.\n>>\n>> - You can use a straight join instead of an IN.\n>\n>\n> Do you mean something like this?\n>\n> SELECT Table.IDPK, Table2.varchar1, Table2.varchar2, ...\n> FROM Table\n> LEFT JOIN many tables\n> INNER JOIN Table AS Table2\n>\n> Miroslav\n\nI would also recommend using the subselect format. Where any columns\nthat you are going to need to sort on show up in the subselect.\n\nSo you would have:\n\nSELECT ...\n FROM main_table\n LEFT JOIN tablea ON ...\n LEFT JOIN tableb ON ...\n ...\n JOIN other_table ON ...\n WHERE main_table.idpk IN\n (SELECT idpk\n FROM main_table JOIN other_table ON main_table.idpk =\nother_table.<main_idpk>\n WHERE ...\n ORDER BY other_table.abcd LIMIT n OFFSET m)\n;\n\nI think the final LIMIT + OFFSET would give you the wrong results, since\nyou have already filtered out the important rows.\nI also think you don't need the final order by, since the results should\nalready be in sorted order.\n\nNow this also assumes that if someone is sorting on a row, then they\ndon't want null entries. If they do, then you can change the subselect\ninto a left join. But with appropriate selectivity and indexes, an inner\njoin can filter out a lot of rows, and give you better performance.\n\nThe inner subselect gives you selectivity on the main table, so that you\ndon't have to deal with all the columns in the search, and then you\ndon't have to deal with all the rows later on.\n\nI think you can also do this:\n\nSELECT ...\n FROM (SELECT main_table.idpk, other_table.<columns> FROM main_table\nJOIN other_table ....) as p\n LEFT JOIN ...\n JOIN main_table ON main_table.idpk = p.idpk;\n\nIn that case instead of selecting out the id and putting that into the\nwhere, you put it in the from, and then join against it.\nI don't really know which is better.\n\nJohn\n=:->", "msg_date": "Mon, 14 Mar 2005 10:36:17 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Hi,\n\nI have an idea about your problem. Will it be difficult not to change \nthe entire code but only the queries? You can change type in the \nPostgres to bool. 
Then, when select data you can use a CASE..WHEN to \nreturn 'Y' or 'N' or even write a little function which accepts bool and \nreturns 'Y' or 'N'. In this case in all your queries you will have to \nreplace the select of bool field with select form the function.\n\nKaloyan\n\nMiroslav Šulc wrote:\n\n> Tom Lane wrote:\n>\n>> Just FYI, I did a quick search-and-replace on your dump to replace\n>> varchar(1) by \"char\", which makes the column fixed-width without any\n>> change in the visible data. This made hardly any difference in the\n>> join speed though :-(. So that is looking like a dead end.\n>> \n>>\n> I'll try to change the data type to bool but the problem I stand in \n> front of is that the code expects that SELECTs return 'Y' or 'N' but \n> what I have found out till now is that PostgreSQL returns 't' or 'f' \n> for bool data. I think about some solution but they use CPU :-(\n>\n>> John's idea about re-joining to the main table to pick up the bulk of\n>> its fields only after joining to the sub-tables might work.\n>> \n>>\n> I'll try that. It seems it could work.\n>\n>> regards, tom lane\n>> \n>>\n> Miroslav\n>\n>------------------------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n>", "msg_date": "Mon, 14 Mar 2005 19:03:37 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "Kaloyan Iliev Iliev wrote:\n\n> Hi,\n>\n> I have an idea about your problem. Will it be difficult not to change \n> the entire code but only the queries? You can change type in the \n> Postgres to bool. Then, when select data you can use a CASE..WHEN to \n> return 'Y' or 'N' or even write a little function which accepts bool \n> and returns 'Y' or 'N'. 
In this case in all your queries you will have \n> to replace the select of bool field with select form the function.\n\nThank you for your suggestion. I had a private message exchange with \nHarald Fuchs who suggested the same (except the function). Here is what \nwhe \"exchanged\":\n\nHarald Fuchs wrote:\n\n> If you can control the SELECTs, just use\n>\n> SELECT CASE col WHEN true THEN 'Y' ELSE 'N' END\n>\n> instead of\n>\n> SELECT col\n>\n> Thus you wouldn't need to change your application code.\n> \n>\nI use single SELECT for both PostgreSQL and MySQL. I could use your \nsolution. It would just require some tagging of bool fields in SELECTs \nso I could add the CASE statement in case I use PostgreSQL backend.\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 18:07:54 +0100", "msg_from": "=?UTF-8?B?TWlyb3NsYXYgxaB1bGM=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to read query plan" }, { "msg_contents": "=?ISO-8859-2?Q?Miroslav_=A9ulc?= <[email protected]> writes:\n> [ concerning a deeply nested LEFT JOIN to get data from a star schema ]\n\n> So I have some results. I have tested the query on both PostgreSQL 8.0.1 \n> and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL \n> result is 11,667.916 ms, MySQL result is 448.4 ms.\n\nThat's a fairly impressive discrepancy :-(, and even the slot_getattr()\npatch that Atsushi Ogawa provided isn't going to close the gap.\n(I got about a 4x speedup on Miroslav's example in my testing, which\nleaves us still maybe 6x slower than MySQL.)\n\nLooking at the post-patch profile for the test case, there is still\nquite a lot of cycles going into tuple assembly and disassembly:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 24.47 4.49 4.49 _mcount\n 8.01 5.96 1.47 9143692 0.00 0.00 ExecEvalVar\n 6.92 7.23 1.27 6614373 0.00 0.00 slot_deformtuple\n 6.54 8.43 1.20 9143692 0.00 0.00 slot_getattr\n 6.21 9.57 1.14 103737 0.01 0.03 ExecTargetList\n 5.56 10.59 1.02 103775 0.01 0.01 DataFill\n 3.22 11.18 0.59 103775 0.01 0.01 ComputeDataSize\n 2.83 11.70 0.52 ExecEvalVar\n 2.72 12.20 0.50 9094122 0.00 0.00 memcpy\n 2.51 12.66 0.46 encore\n 2.40 13.10 0.44 427448 0.00 0.00 nocachegetattr\n 2.13 13.49 0.39 103775 0.00 0.02 heap_formtuple\n 2.07 13.87 0.38 noshlibs\n 1.20 14.09 0.22 225329 0.00 0.00 _doprnt\n 1.20 14.31 0.22 msquadloop\n 1.14 14.52 0.21 chunks\n 0.98 14.70 0.18 871885 0.00 0.00 AllocSetAlloc\n 0.98 14.88 0.18 $$dyncall\n 0.76 15.02 0.14 594242 0.00 0.00 FunctionCall3\n 0.71 15.15 0.13 213312 0.00 0.00 comparetup_heap\n 0.65 15.27 0.12 6364 0.02 0.13 printtup\n 0.60 15.38 0.11 790702 0.00 0.00 pfree\n\n(_mcount is profiling overhead, ignore it.) It looks to me like just\nabout everything in the top dozen functions is there as a result of the\nfact that join steps form new tuples that are the merge of their input\ntuples. Even our favorite villains, palloc and pfree, are down in the\nsub-percent range.\n\nI am guessing that the reason MySQL wins on this is that they avoid\ndoing any data copying during a join step. I wonder whether we could\naccomplish the same by taking Ogawa's patch to the next level: allow\na TupleTableSlot to contain either a \"materialized\" tuple as now,\nor a \"virtual\" tuple that is simply an array of Datums and null flags.\n(It's virtual in the sense that any pass-by-reference Datums would have\nto be pointing to data at the next level down.) 
This would essentially\nturn the formtuple and deformtuple operations into no-ops, and get rid\nof a lot of the associated overhead such as ComputeDataSize and\nDataFill. The only operations that would have to forcibly materialize\na tuple would be ones that need to keep the tuple till after they fetch\ntheir next input tuple --- hashing and sorting are examples, but very\nmany plan node types don't ever need to do that.\n\nI haven't worked out the details, but it seems likely that this could be\na relatively nonintrusive patch. The main thing that would be an issue\nwould be that direct reference to slot->val would become verboten (since\nyou could no longer be sure there was a materialized tuple there).\nI think this would possibly affect some contrib stuff, which is a strong\nhint that it'd break some existing user-written code out there.\n\nThoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 12:24:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Avoiding tuple construction/deconstruction during joining" }, { "msg_contents": "Tom Lane wrote:\n\n>>So I have some results. I have tested the query on both PostgreSQL 8.0.1 \n>>and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL \n>>result is 11,667.916 ms, MySQL result is 448.4 ms.\n>> \n>>\n>\n>That's a fairly impressive discrepancy :-(, and even the slot_getattr()\n>patch that Atsushi Ogawa provided isn't going to close the gap.\n>(I got about a 4x speedup on Miroslav's example in my testing, which\n>leaves us still maybe 6x slower than MySQL.)\n> \n>\nAs I wrote, the comparison is not \"fair\". Here are the conditions:\n\n\"Both databases are running on the same machine (my laptop) and contain \nthe same data. However there are some differences in the data table \ndefinitions:\n1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I \nuse 'enum'\n2) in PostgreSQL in some cases I use connection fields that are not of \nthe same type (smallint <-> integer (SERIAL)), in MySQL I use the same \ntypes\"\n\nFor those not used to MySQL, enum is an integer \"mapped\" to a text \nstring (that's how I see it). That means that when you have field such \nas enum('yes','no','DK'), in the table there are stored numbers 1, 2 and \n3 which are mapped to the text values 'yes', 'no' and 'DK'. The \ndescription is not accurate (I'm not MySQL programmer, I didn't check it \nrecently and I didn't inspect the code - I wouldn't understand it \neither) but I think it's not that important. What is important is the \nfact that MySQL has to work with some dozen fields that are numbers and \nPostgreSQL has to work with the same fields as varchar(). Some of the \nother fields are also varchars. This might (or might not) cause the \nspeed difference. 
However I think that if devs figure out how to speed \nthis up, other cases will benefit from the improvement too.\n\nAs I understood from the contributions of other, the 2) shouldn't have a \ngreat impact on the speed.\n\n>Looking at the post-patch profile for the test case, there is still\n>quite a lot of cycles going into tuple assembly and disassembly:\n>\n>Each sample counts as 0.01 seconds.\n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 24.47 4.49 4.49 _mcount\n> 8.01 5.96 1.47 9143692 0.00 0.00 ExecEvalVar\n> 6.92 7.23 1.27 6614373 0.00 0.00 slot_deformtuple\n> 6.54 8.43 1.20 9143692 0.00 0.00 slot_getattr\n> 6.21 9.57 1.14 103737 0.01 0.03 ExecTargetList\n> 5.56 10.59 1.02 103775 0.01 0.01 DataFill\n> 3.22 11.18 0.59 103775 0.01 0.01 ComputeDataSize\n> 2.83 11.70 0.52 ExecEvalVar\n> 2.72 12.20 0.50 9094122 0.00 0.00 memcpy\n> 2.51 12.66 0.46 encore\n> 2.40 13.10 0.44 427448 0.00 0.00 nocachegetattr\n> 2.13 13.49 0.39 103775 0.00 0.02 heap_formtuple\n> 2.07 13.87 0.38 noshlibs\n> 1.20 14.09 0.22 225329 0.00 0.00 _doprnt\n> 1.20 14.31 0.22 msquadloop\n> 1.14 14.52 0.21 chunks\n> 0.98 14.70 0.18 871885 0.00 0.00 AllocSetAlloc\n> 0.98 14.88 0.18 $$dyncall\n> 0.76 15.02 0.14 594242 0.00 0.00 FunctionCall3\n> 0.71 15.15 0.13 213312 0.00 0.00 comparetup_heap\n> 0.65 15.27 0.12 6364 0.02 0.13 printtup\n> 0.60 15.38 0.11 790702 0.00 0.00 pfree\n>\n>(_mcount is profiling overhead, ignore it.) It looks to me like just\n>about everything in the top dozen functions is there as a result of the\n>fact that join steps form new tuples that are the merge of their input\n>tuples. Even our favorite villains, palloc and pfree, are down in the\n>sub-percent range.\n>\n>I am guessing that the reason MySQL wins on this is that they avoid\n>doing any data copying during a join step. I wonder whether we could\n>accomplish the same by taking Ogawa's patch to the next level: allow\n>a TupleTableSlot to contain either a \"materialized\" tuple as now,\n>or a \"virtual\" tuple that is simply an array of Datums and null flags.\n>(It's virtual in the sense that any pass-by-reference Datums would have\n>to be pointing to data at the next level down.) This would essentially\n>turn the formtuple and deformtuple operations into no-ops, and get rid\n>of a lot of the associated overhead such as ComputeDataSize and\n>DataFill. The only operations that would have to forcibly materialize\n>a tuple would be ones that need to keep the tuple till after they fetch\n>their next input tuple --- hashing and sorting are examples, but very\n>many plan node types don't ever need to do that.\n>\n>I haven't worked out the details, but it seems likely that this could be\n>a relatively nonintrusive patch. The main thing that would be an issue\n>would be that direct reference to slot->val would become verboten (since\n>you could no longer be sure there was a materialized tuple there).\n>I think this would possibly affect some contrib stuff, which is a strong\n>hint that it'd break some existing user-written code out there.\n>\n>Thoughts?\n> \n>\nMine thought is that I don't know what you are talking about :-) Now \nseriously, I am far below this level of knowledge. But I can contribute \na test that (maybe) can help. I have rewritten the query so it JOINs the \nvarchar() fields (in fact all fields except the IDPK) at the last INNER \nJOIN. 
Though there is one more JOIN, the query is more than 5 times \nfaster (1975.312 ms) :-)\n\nSo my silly opinion is that if the planner could decide that there are \ntoo much time expensive fields that are not needed during performing \nJOINs and these could be JOINed at the last step, it would do it this \nway :-)\n\nBelow is the adjusted query and the EXPLAIN ANALYZE output. (Tom, you \ncan run it on the data I have sent you and it should run without changes.)\n\nSELECT AdDevicesSites.IDPK, AdDevicesSites2.AdDevicesSiteSizeIDFK, \nAdDevicesSites2.AdDevicesSiteRegionIDFK, \nAdDevicesSites2.AdDevicesSiteCountyIDFK, \nAdDevicesSites2.AdDevicesSiteCityIDFK, \nAdDevicesSites2.AdDevicesSiteDistrictIDFK, \nAdDevicesSites2.AdDevicesSiteStreetIDFK, \nAdDevicesSites2.AdDevicesSiteStreetDescriptionIDFK, \nAdDevicesSites2.AdDevicesSitePositionIDFK, \nAdDevicesSites2.AdDevicesSiteVisibilityIDFK, \nAdDevicesSites2.AdDevicesSiteStatusTypeIDFK, \nAdDevicesSites2.AdDevicesSitePartnerIdentificationOperatorIDFK, \nAdDevicesSites2.AdDevicesSitePartnerElectricitySupplierIDFK, \nAdDevicesSites2.AdDevicesSitePartnerMaintainerIDFK, \nAdDevicesSites2.AdDevicesSitePartnerStickerIDFK, \nAdDevicesSites2.CadastralUnitIDFK, AdDevicesSites2.MediaType, \nAdDevicesSites2.Mark, AdDevicesSites2.Amount, AdDevicesSites2.Distance, \nAdDevicesSites2.OwnLightening, AdDevicesSites2.LocationDownTown, \nAdDevicesSites2.LocationSuburb, \nAdDevicesSites2.LocationBusinessDistrict, \nAdDevicesSites2.LocationResidentialDistrict, \nAdDevicesSites2.LocationIndustrialDistrict, \nAdDevicesSites2.LocationNoBuildings, AdDevicesSites2.ParkWayHighWay, \nAdDevicesSites2.ParkWayFirstClassRoad, AdDevicesSites2.ParkWayOtherRoad, \nAdDevicesSites2.ParkWayStreet, AdDevicesSites2.ParkWayAccess, \nAdDevicesSites2.ParkWayExit, AdDevicesSites2.ParkWayParkingPlace, \nAdDevicesSites2.ParkWayPassangersOnly, AdDevicesSites2.ParkWayCrossRoad, \nAdDevicesSites2.PositionStandAlone, \nAdDevicesSites2.NeighbourhoodPublicTransportation, \nAdDevicesSites2.NeighbourhoodInterCityTransportation, \nAdDevicesSites2.NeighbourhoodPostOffice, \nAdDevicesSites2.NeighbourhoodNewsStand, \nAdDevicesSites2.NeighbourhoodAmenities, \nAdDevicesSites2.NeighbourhoodSportsSpot, \nAdDevicesSites2.NeighbourhoodHealthServiceSpot, \nAdDevicesSites2.NeighbourhoodShops, \nAdDevicesSites2.NeighbourhoodShoppingCenter, \nAdDevicesSites2.NeighbourhoodSuperMarket, \nAdDevicesSites2.NeighbourhoodPetrolStation, \nAdDevicesSites2.NeighbourhoodSchool, AdDevicesSites2.NeighbourhoodBank, \nAdDevicesSites2.NeighbourhoodRestaurant, \nAdDevicesSites2.NeighbourhoodHotel, \nAdDevicesSites2.RestrictionCigarettes, \nAdDevicesSites2.RestrictionPolitics, AdDevicesSites2.RestrictionSpirits, \nAdDevicesSites2.RestrictionSex, AdDevicesSites2.RestrictionOther, \nAdDevicesSites2.RestrictionNote, AdDevicesSites2.SpotMapFile, \nAdDevicesSites2.SpotPhotoFile, AdDevicesSites2.SourcePhotoTimeStamp, \nAdDevicesSites2.SourceMapTimeStamp, AdDevicesSites2.Price, \nAdDevicesSites2.WebPrice, AdDevicesSites2.CadastralUnitCode, \nAdDevicesSites2.BuildingNumber, AdDevicesSites2.ParcelNumber, \nAdDevicesSites2.GPSLatitude, AdDevicesSites2.GPSLongitude, \nAdDevicesSites2.GPSHeight, AdDevicesSites2.MechanicalOpticalCoordinates, \nAdDevicesSites2.Deleted, AdDevicesSites2.Protected, \nAdDevicesSites2.DateCreated, AdDevicesSites2.DateLastModified, \nAdDevicesSites2.DateDeleted, AdDevicesSites2.CreatedByUserIDFK, \nAdDevicesSites2.LastModifiedByUserIDFK, \nAdDevicesSites2.DeletedByUserIDFK, 
\nAdDevicesSites2.PhotoLastModificationDate, \nAdDevicesSites2.MapLastModificationDate, \nAdDevicesSites2.DateLastImported, AdDevicesSiteRegions.Name AS \nAdDevicesSiteRegionName, AdDevicesSiteCounties.Name AS \nAdDevicesSiteCountyName, AdDevicesSiteCities.Name AS \nAdDevicesSiteCityName, AdDevicesSiteStreets.Name AS \nAdDevicesSiteStreetName, AdDevicesSiteDistricts.Name AS \nAdDevicesSiteDistrictName, AdDevicesSiteStreetDescriptions.Name_cs AS \nAdDevicesSiteStreetDescriptionName_cs, \nAdDevicesSiteStreetDescriptions.Name_en AS \nAdDevicesSiteStreetDescriptionName_en, AdDevicesSiteSizes.Name AS \nAdDevicesSiteSizeName, SUBSTRING(AdDevicesSiteVisibilities.Name_cs, 3) \nAS AdDevicesSiteVisibilityName_cs, \nSUBSTRING(AdDevicesSiteVisibilities.Name_en, 3) AS \nAdDevicesSiteVisibilityName_en, AdDevicesSitePositions.Name_cs AS \nAdDevicesSitePositionName_cs, AdDevicesSitePositions.Name_en AS \nAdDevicesSitePositionName_en, AdDevicesSiteStatusTypes.Name_cs AS \nAdDevicesSiteStatusTypeName_cs, AdDevicesSiteStatusTypes.Name_en AS \nAdDevicesSiteStatusTypeName_en, PartnerIdentificationsOperator.Name AS \nPartnerIdentificationOperatorName, PartnersElectricitySupplier.Name AS \nPartnerElectricitySupplierName, PartnersMaintainer.Name AS \nPartnerMaintainerName, PartnersSticker.Name AS PartnerStickerName, \nCadastralUnits.Code AS CadastralUnitCodeNative, CadastralUnits.Name AS \nCadastralUnitName\nFROM AdDevicesSites\nLEFT JOIN AdDevicesSiteRegions ON AdDevicesSites.AdDevicesSiteRegionIDFK \n= AdDevicesSiteRegions.IDPK\nLEFT JOIN AdDevicesSiteCounties ON \nAdDevicesSites.AdDevicesSiteCountyIDFK = AdDevicesSiteCounties.IDPK\nLEFT JOIN AdDevicesSiteCities ON AdDevicesSites.AdDevicesSiteCityIDFK = \nAdDevicesSiteCities.IDPK\nLEFT JOIN AdDevicesSiteStreets ON AdDevicesSites.AdDevicesSiteStreetIDFK \n= AdDevicesSiteStreets.IDPK\nLEFT JOIN AdDevicesSiteStreetDescriptions ON \nAdDevicesSites.AdDevicesSiteStreetDescriptionIDFK = \nAdDevicesSiteStreetDescriptions.IDPK\nLEFT JOIN AdDevicesSiteDistricts ON \nAdDevicesSites.AdDevicesSiteDistrictIDFK = AdDevicesSiteDistricts.IDPK\nLEFT JOIN AdDevicesSiteSizes ON AdDevicesSites.AdDevicesSiteSizeIDFK = \nAdDevicesSiteSizes.IDPK\nLEFT JOIN AdDevicesSiteVisibilities ON \nAdDevicesSites.AdDevicesSiteVisibilityIDFK = AdDevicesSiteVisibilities.IDPK\nLEFT JOIN AdDevicesSitePositions ON \nAdDevicesSites.AdDevicesSitePositionIDFK = AdDevicesSitePositions.IDPK\nLEFT JOIN AdDevicesSiteStatusTypes ON \nAdDevicesSites.AdDevicesSiteStatusTypeIDFK = AdDevicesSiteStatusTypes.IDPK\nLEFT JOIN PartnerIdentifications AS PartnerIdentificationsOperator ON \nAdDevicesSites.AdDevicesSitePartnerIdentificationOperatorIDFK = \nPartnerIdentificationsOperator.IDPK\nLEFT JOIN Partners AS PartnersElectricitySupplier ON \nAdDevicesSites.AdDevicesSitePartnerElectricitySupplierIDFK = \nPartnersElectricitySupplier.IDPK\nLEFT JOIN Partners AS PartnersMaintainer ON \nAdDevicesSites.AdDevicesSitePartnerMaintainerIDFK = PartnersMaintainer.IDPK\nLEFT JOIN Partners AS PartnersSticker ON \nAdDevicesSites.AdDevicesSitePartnerStickerIDFK = PartnersSticker.IDPK\nLEFT JOIN CadastralUnits ON AdDevicesSites.CadastralUnitIDFK = \nCadastralUnits.IDPK\nINNER JOIN AdDevicesSites AS AdDevicesSites2 ON AdDevicesSites.IDPK = \nAdDevicesSites2.IDPK\n\nQUERY PLAN\n\nHash Join (cost=193867.68..235417.92 rows=142556 width=815) (actual time=1577.080..2200.677 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".idpk = \"inner\".idpk)\n\n -> Seq Scan on addevicessites addevicessites2 (cost=0.00..13118.56 rows=142556 width=473) 
(actual time=40.650..49.195 rows=6364 loops=1)\n\n -> Hash (cost=186898.29..186898.29 rows=142556 width=350) (actual time=1534.080..1534.080 rows=0 loops=1)\n\n -> Merge Right Join (cost=184758.38..186898.29 rows=142556 width=350) (actual time=1187.653..1244.955 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".cadastralunitidfk)\n\n -> Index Scan using cadastralunits_pkey on cadastralunits (cost=0.00..314.49 rows=13027 width=31) (actual time=0.034..0.128 rows=63 loops=1)\n\n -> Sort (cost=184758.38..185114.77 rows=142556 width=325) (actual time=1187.582..1190.111 rows=6364 loops=1)\n\n Sort Key: addevicessites.cadastralunitidfk\n\n -> Hash Left Join (cost=93887.04..143074.76 rows=142556 width=325) (actual time=502.584..1129.145 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnerstickeridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=93884.28..140933.65 rows=142556 width=307) (actual time=502.388..1075.572 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnermaintaineridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=93881.51..138792.55 rows=142556 width=289) (actual time=502.208..1023.802 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartnerelectricitysupplieridfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=93878.75..136651.45 rows=142556 width=271) (actual time=502.041..969.959 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitepartneridentificationoperatoridfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=93875.99..134510.35 rows=142556 width=253) (actual time=501.849..915.256 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitestatustypeidfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=93874.93..118471.74 rows=142556 width=228) (actual time=501.826..818.436 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitepositionidfk = \"inner\".idpk)\n\n -> Nested Loop Left Join (cost=93873.90..108848.18 rows=142556 width=207) (actual time=501.802..737.137 rows=6364 loops=1)\n\n Join Filter: (\"outer\".addevicessitevisibilityidfk = \"inner\".idpk)\n\n -> Merge Right Join (cost=93872.86..96017.09 rows=142556 width=177) (actual time=501.741..554.834 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessitesizeidfk)\n\n -> Index Scan using addevicessitesizes_pkey on addevicessitesizes (cost=0.00..5.62 rows=110 width=14) (actual time=0.009..0.264 rows=110 loops=1)\n\n -> Sort (cost=93872.86..94229.25 rows=142556 width=169) (actual time=501.697..504.267 rows=6364 loops=1)\n\n Sort Key: addevicessites.addevicessitesizeidfk\n\n -> Hash Left Join (cost=57752.91..65764.23 rows=142556 width=169) (actual time=234.321..466.130 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitedistrictidfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=57743.17..63616.15 rows=142556 width=164) (actual time=233.456..421.267 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitestreetdescriptionidfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=57634.86..62707.43 rows=142556 width=137) (actual time=223.396..362.945 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitestreetidfk = \"inner\".idpk)\n\n -> Hash Left Join (cost=57566.95..61844.20 rows=142556 width=119) (actual time=217.101..312.605 rows=6364 loops=1)\n\n Hash Cond: (\"outer\".addevicessitecityidfk = \"inner\".idpk)\n\n -> Merge Right Join (cost=57561.85..59700.76 rows=142556 width=110) (actual time=216.635..266.672 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessitecountyidfk)\n\n -> Sort (cost=6.19..6.48 rows=117 
width=17) (actual time=0.350..0.389 rows=116 loops=1)\n\n Sort Key: addevicessitecounties.idpk\n\n -> Seq Scan on addevicessitecounties (cost=0.00..2.17 rows=117 width=17) (actual time=0.008..0.146 rows=117 loops=1)\n\n -> Sort (cost=57555.66..57912.05 rows=142556 width=99) (actual time=216.250..218.611 rows=6364 loops=1)\n\n Sort Key: addevicessites.addevicessitecountyidfk\n\n -> Merge Right Join (cost=33573.63..35712.03 rows=142556 width=99) (actual time=173.646..199.036 rows=6364 loops=1)\n\n Merge Cond: (\"outer\".idpk = \"inner\".addevicessiteregionidfk)\n\n -> Sort (cost=1.44..1.48 rows=15 width=23) (actual time=0.055..0.059 rows=13 loops=1)\n\n Sort Key: addevicessiteregions.idpk\n\n -> Seq Scan on addevicessiteregions (cost=0.00..1.15 rows=15 width=23) (actual time=0.016..0.032 rows=15 loops=1)\n\n -> Sort (cost=33572.19..33928.58 rows=142556 width=82) (actual time=173.559..176.398 rows=6364 loops=1)\n\n Sort Key: addevicessites.addevicessiteregionidfk\n\n -> Seq Scan on addevicessites (cost=0.00..13118.56 rows=142556 width=82) (actual time=62.345..164.783 rows=6364 loops=1)\n\n -> Hash (cost=4.48..4.48 rows=248 width=17) (actual time=0.283..0.283 rows=0 loops=1)\n\n -> Seq Scan on addevicessitecities (cost=0.00..4.48 rows=248 width=17) (actual time=0.011..0.162 rows=138 loops=1)\n\n -> Hash (cost=60.53..60.53 rows=2953 width=34) (actual time=6.229..6.229 rows=0 loops=1)\n\n -> Seq Scan on addevicessitestreets (cost=0.00..60.53 rows=2953 width=34) (actual time=0.010..3.816 rows=2984 loops=1)\n\n -> Hash (cost=96.85..96.85 rows=4585 width=43) (actual time=10.017..10.017 rows=0 loops=1)\n\n -> Seq Scan on addevicessitestreetdescriptions (cost=0.00..96.85 rows=4585 width=43) (actual time=0.008..6.371 rows=4585 loops=1)\n\n -> Hash (cost=8.59..8.59 rows=459 width=21) (actual time=0.815..0.815 rows=0 loops=1)\n\n -> Seq Scan on addevicessitedistricts (cost=0.00..8.59 rows=459 width=21) (actual time=0.007..0.541 rows=382 loops=1)\n\n -> Materialize (cost=1.04..1.08 rows=4 width=36) (actual time=0.000..0.002 rows=4 loops=6364)\n\n -> Seq Scan on addevicessitevisibilities (cost=0.00..1.04 rows=4 width=36) (actual time=0.026..0.035 rows=4 loops=1)\n\n -> Materialize (cost=1.03..1.06 rows=3 width=27) (actual time=0.000..0.001 rows=3 loops=6364)\n\n -> Seq Scan on addevicessitepositions (cost=0.00..1.03 rows=3 width=27) (actual time=0.007..0.010 rows=3 loops=1)\n\n -> Materialize (cost=1.05..1.10 rows=5 width=31) (actual time=0.000..0.002 rows=5 loops=6364)\n\n -> Seq Scan on addevicessitestatustypes (cost=0.00..1.05 rows=5 width=31) (actual time=0.006..0.016 rows=5 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.144..0.144 rows=0 loops=1)\n\n -> Seq Scan on partneridentifications partneridentificationsoperator (cost=0.00..2.61 rows=61 width=34) (actual time=0.007..0.103 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.122..0.122 rows=0 loops=1)\n\n -> Seq Scan on partners partnerselectricitysupplier (cost=0.00..2.61 rows=61 width=34) (actual time=0.004..0.072 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.136..0.136 rows=0 loops=1)\n\n -> Seq Scan on partners partnersmaintainer (cost=0.00..2.61 rows=61 width=34) (actual time=0.005..0.086 rows=61 loops=1)\n\n -> Hash (cost=2.61..2.61 rows=61 width=34) (actual time=0.143..0.143 rows=0 loops=1)\n\n -> Seq Scan on partners partnerssticker (cost=0.00..2.61 rows=61 width=34) (actual time=0.009..0.098 rows=61 loops=1)\n\nTotal runtime: 2210.937 
ms\n\n\n>\t\t\tregards, tom lane\n> \n>\nMiroslav", "msg_date": "Mon, 14 Mar 2005 20:11:43 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding tuple construction/deconstruction during joining" }, { "msg_contents": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> seriously, I am far below this level of knowledge. But I can contribute \n> a test that (maybe) can help. I have rewritten the query so it JOINs the \n> varchar() fields (in fact all fields except the IDPK) at the last INNER \n> JOIN. Though there is one more JOIN, the query is more than 5 times \n> faster (1975.312 ms) :-)\n\nThat confirms my thought that passing the data up through multiple\nlevels of join is what's killing us. I'll work on a solution. This\nwill of course be even less back-patchable to 8.0.* than Ogawa's work,\nbut hopefully it will fix the issue for 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 14:22:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding tuple construction/deconstruction during joining " }, { "msg_contents": "Tom Lane wrote:\n\n>=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]> writes:\n> \n>\n>>seriously, I am far below this level of knowledge. But I can contribute \n>>a test that (maybe) can help. I have rewritten the query so it JOINs the \n>>varchar() fields (in fact all fields except the IDPK) at the last INNER \n>>JOIN. Though there is one more JOIN, the query is more than 5 times \n>>faster (1975.312 ms) :-)\n>> \n>>\n>\n>That confirms my thought that passing the data up through multiple\n>levels of join is what's killing us. I'll work on a solution. This\n>will of course be even less back-patchable to 8.0.* than Ogawa's work,\n>but hopefully it will fix the issue for 8.1.\n>\n>\t\t\tregards, tom lane\n> \n>\nTom, thank you and the others for help. 
I'm glad that my problem can \nhelp PostgreSQL improve and that there are people like you that \nunderstand each little bit of the software :-)\n\nMiroslav", "msg_date": "Mon, 14 Mar 2005 20:31:51 +0100", "msg_from": "=?windows-1250?Q?Miroslav_=8Aulc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding tuple construction/deconstruction during joining" }, { "msg_contents": "\n\tI have asked him for the data and played with his queries, and obtained \nmassive speedups with the following queries :\n\nhttp://boutiquenumerique.com/pf/miroslav/query.sql\nhttp://boutiquenumerique.com/pf/miroslav/query2.sql\nhttp://boutiquenumerique.com/pf/miroslav/materialize.sql\n\n\tNote that my optimized version of the Big Joins is not much faster that \nthe materialized view without index (hash joins are damn fast in postgres) \nbut of course using an index...\n", "msg_date": "Tue, 15 Mar 2005 18:17:31 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Avoiding tuple construction/deconstruction during\n\tjoining" }, { "msg_contents": "\n\tOn my machine (Laptop with Pentium-M 1.6 GHz and 512MB DDR333) I get the \nfollowing timings :\n\n\tBig Joins Query will all the fields and no order by (I just put a SELECT \n* in the first table) yielding about 6k rows :\n\t=> 12136.338 ms\n\n\tReplacing the SELECT * from the table with many fields by just a SELECT \nof the foreign key columns :\n\t=> 1874.612 ms\n\n\tI felt like playing a bit so I implemented a hash join in python \n(download the file, it works on Miroslav's data) :\n\tAll timings do not include time to fetch the data from the database. \nFetching all the tables takes about 1.1 secs.\n\n\t* With something that looks like the current implementation (copying \ntuples around) and fetching all the fields from the big table :\n\t=> Fetching all the tables : 1.1 secs.\n\t=> Joining : 4.3 secs\n\n\t* Fetching only the integer fields\n\t=> Fetching all the tables : 0.4 secs.\n\t=> Joining : 1.7 secs\n\n\t* A smarter join which copies nothing and updates the rows as they are \nprocessed, adding fields :\n\t=> Fetching all the tables : 1.1 secs.\n\t=> Joining : 0.4 secs\n\tWith the just-in-time compiler activated, it goes down to about 0.25 \nseconds.\n\n\tFirst thing, this confirms what Tom said.\n\tIt also means that doing this query in the application can be a lot \nfaster than doing it in postgres including fetching all of the tables. \nThere's a problem somewhere ! It should be the other way around ! The \npython mappings (dictionaries : { key : value } ) are optimized like crazy \nbut they store column names for each row. And it's a dynamic script \nlanguage ! Argh.\n\n\tNote : run the program like this :\n\npython test.py |less -S\n\n\tSo that the time spent scrolling your terminal does not spoil the \nmeasurements.\n\nDownload test program :\nhttp://boutiquenumerique.com/pf/miroslav/test.py\n", "msg_date": "Tue, 15 Mar 2005 20:53:13 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Avoiding tuple construction/deconstruction during\n\tjoining" } ]
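The rewrite Miroslav describes can be sketched in SQL. This is only an illustration of the shape of the change, not his actual query: the table names are taken from the plan above, but the foreign-key column names, the name columns and the reduced column list are assumptions. The idea is that the inner part of the query carries only integer keys, and the wide varchar columns from the lookup tables are attached in the outermost join level, so they are copied through one join instead of a dozen.

    -- slow shape: every lookup table joins directly against the wide row,
    -- so its varchar columns ride along through all later join levels.
    -- faster shape (sketch): resolve the keys first, fetch the text last.
    SELECT keys.idpk,
           cty.name AS county_name,      -- "name" columns are assumed
           cit.name AS city_name,
           str.name AS street_name
    FROM (
        SELECT s.idpk,
               s.addevicessitecountyidfk,
               s.addevicessitecityidfk,      -- assumed column name
               s.addevicessitestreetidfk     -- assumed column name
        FROM addevicessites s
    ) AS keys
    LEFT JOIN addevicessitecounties cty ON cty.idpk = keys.addevicessitecountyidfk
    LEFT JOIN addevicessitecities   cit ON cit.idpk = keys.addevicessitecityidfk
    LEFT JOIN addevicessitestreets  str ON str.idpk = keys.addevicessitestreetidfk;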
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]] \n> Sent: Sunday, March 13, 2005 12:05 AM\n> To: Tambet Matiisen\n> Cc: [email protected]\n> Subject: Re: [PERFORM] One tuple per transaction\n> \n> \n> Tambet,\n> \n> > In one of our applications we have a database function, which \n> > recalculates COGS (cost of good sold) for certain period. This \n> > involves deleting bunch of rows from one table, inserting \n> them again \n> > in correct order and updating them one-by-one (sometimes one row \n> > twice) to reflect current state. The problem is, that this \n> generates \n> > an enormous amount of tuples in that table.\n> \n> Sounds like you have an application design problem ... how \n> about re-writing \n> your function so it's a little more sensible?\n> \n\nI agree, that I have violated the no 1 rule of transactions - don't make\nthe transaction last too long. But imagine a situation, where a table is\nupdated twice in transaction. Why? Perhaps programmer felt, that the\ncode is more modular in this way. Now if you have tons of those\ntransactions, the I/O throughput is twice as big as it could be, because\nevery transaction creates two tuples instead of one. One tuple per\ntransaction could allow the programmer to keep his modular code and\nbenefit from the increased performance.\n\n Tambet\n", "msg_date": "Sun, 13 Mar 2005 19:21:26 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One tuple per transaction" } ]
[ { "msg_contents": "Guys,\n\nI am having a problem firing queries on one of the tables which is\nhaving \"limit\" as the column name.\n\nIf a run an insert/select/update command on that table i get the below error.\n\nERROR: syntax error at or near \"limit\" at character 71\n\nAny Help would be realyl great to solve the problem.\n\npostgresql 7.4.5 and linux OS\n\n-- \nBest,\nGourish Singbal\n", "msg_date": "Mon, 14 Mar 2005 12:44:08 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "column name is \"LIMIT\"" }, { "msg_contents": "Put \"\" around the column name, eg:\n\ninsert into \"limit\" values (1, 2,3 );\n\nChris\n\nGourish Singbal wrote:\n> Guys,\n> \n> I am having a problem firing queries on one of the tables which is\n> having \"limit\" as the column name.\n> \n> If a run an insert/select/update command on that table i get the below error.\n> \n> ERROR: syntax error at or near \"limit\" at character 71\n> \n> Any Help would be realyl great to solve the problem.\n> \n> postgresql 7.4.5 and linux OS\n> \n", "msg_date": "Mon, 14 Mar 2005 15:21:53 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n> Guys,\n> \n> I am having a problem firing queries on one of the tables which is\n> having \"limit\" as the column name.\n> \n> If a run an insert/select/update command on that table i get the below error.\n> \n> ERROR: syntax error at or near \"limit\" at character 71\n\nselect \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n\nYou need to quote the field name, and make sure the case is correct.\n> \n> Any Help would be realyl great to solve the problem.\n> \n> postgresql 7.4.5 and linux OS\n> \nYou should probably upgrade to 7.4.7\n\nRegards\n\nRussell Smith.\n", "msg_date": "Mon, 14 Mar 2005 18:25:22 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "Thanks a lot,\n\nwe might be upgrading to 8.0.1 soon.. till than using double quotes\nshould be fine.\n\nregards\ngourish\n\nOn Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[email protected]> wrote:\n> On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n> > Guys,\n> >\n> > I am having a problem firing queries on one of the tables which is\n> > having \"limit\" as the column name.\n> >\n> > If a run an insert/select/update command on that table i get the below error.\n> >\n> > ERROR: syntax error at or near \"limit\" at character 71\n> \n> select \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n> \n> You need to quote the field name, and make sure the case is correct.\n> >\n> > Any Help would be realyl great to solve the problem.\n> >\n> > postgresql 7.4.5 and linux OS\n> >\n> You should probably upgrade to 7.4.7\n> \n> Regards\n> \n> Russell Smith.\n> \n\n\n-- \nBest,\nGourish Singbal\n", "msg_date": "Mon, 14 Mar 2005 14:18:55 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "You will still need to use double quotes in 8.0.1...\n\nChris\n\nGourish Singbal wrote:\n> Thanks a lot,\n> \n> we might be upgrading to 8.0.1 soon.. 
till than using double quotes\n> should be fine.\n> \n> regards\n> gourish\n> \n> On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[email protected]> wrote:\n> \n>>On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n>>\n>>>Guys,\n>>>\n>>>I am having a problem firing queries on one of the tables which is\n>>>having \"limit\" as the column name.\n>>>\n>>>If a run an insert/select/update command on that table i get the below error.\n>>>\n>>>ERROR: syntax error at or near \"limit\" at character 71\n>>\n>>select \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n>>\n>>You need to quote the field name, and make sure the case is correct.\n>>\n>>>Any Help would be realyl great to solve the problem.\n>>>\n>>>postgresql 7.4.5 and linux OS\n>>>\n>>\n>>You should probably upgrade to 7.4.7\n>>\n>>Regards\n>>\n>>Russell Smith.\n>>\n> \n> \n> \n", "msg_date": "Mon, 14 Mar 2005 16:55:33 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "So is it to make SQL parser context-sensitive - say the parser will\nunderstand that in statement \"SELECT * from LIMIT\", LIMIT is just a table\nname, instead of keyword?\n\nThere might be some conflicts when using Yacc, but I am not sure how\ndifficult will be ...\n\nCheers,\nQingqing\n\n\"Christopher Kings-Lynne\" <[email protected]>\n> You will still need to use double quotes in 8.0.1...\n>\n> Chris\n>\n> Gourish Singbal wrote:\n> > Thanks a lot,\n> >\n> > we might be upgrading to 8.0.1 soon.. till than using double quotes\n> > should be fine.\n> >\n> > regards\n> > gourish\n> >\n> > On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[email protected]>\nwrote:\n> >\n> >>On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n> >>\n> >>>Guys,\n> >>>\n> >>>I am having a problem firing queries on one of the tables which is\n> >>>having \"limit\" as the column name.\n> >>>\n> >>>If a run an insert/select/update command on that table i get the below\nerror.\n> >>>\n> >>>ERROR: syntax error at or near \"limit\" at character 71\n> >>\n> >>select \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n> >>\n> >>You need to quote the field name, and make sure the case is correct.\n> >>\n> >>>Any Help would be realyl great to solve the problem.\n> >>>\n> >>>postgresql 7.4.5 and linux OS\n> >>>\n> >>\n> >>You should probably upgrade to 7.4.7\n> >>\n> >>Regards\n> >>\n> >>Russell Smith.\n> >>\n> >\n> >\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Mon, 14 Mar 2005 17:26:55 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "Yeah... how come no one told him \"don't do that\"? LIMIT is an SQL\nreserved word, so it's likely to cause trouble in any database you try\nto use it on... I'd strongly recommend renaming that column asap. You\ncan see other reserved words at\nhttp://www.postgresql.org/docs/8.0/interactive/sql-keywords-appendix.html\n\nRobert Treat\n\nOn Mon, 2005-03-14 at 03:55, Christopher Kings-Lynne wrote:\n> You will still need to use double quotes in 8.0.1...\n> \n> Chris\n> \n> Gourish Singbal wrote:\n> > Thanks a lot,\n> > \n> > we might be upgrading to 8.0.1 soon.. 
till than using double quotes\n> > should be fine.\n> > \n> > regards\n> > gourish\n> > \n> > On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[email protected]> wrote:\n> > \n> >>On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n> >>\n> >>>Guys,\n> >>>\n> >>>I am having a problem firing queries on one of the tables which is\n> >>>having \"limit\" as the column name.\n> >>>\n> >>>If a run an insert/select/update command on that table i get the below error.\n> >>>\n> >>>ERROR: syntax error at or near \"limit\" at character 71\n> >>\n> >>select \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n> >>\n> >>You need to quote the field name, and make sure the case is correct.\n> >>\n> >>>Any Help would be realyl great to solve the problem.\n> >>>\n> >>>postgresql 7.4.5 and linux OS\n> >>>\n> >>\n> >>You should probably upgrade to 7.4.7\n> >>\n> >>Regards\n> >>\n> >>Russell Smith.\n> >>\n> > \n> > \n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "14 Mar 2005 13:28:34 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "On 3/14/2005 1:28 PM, Robert Treat wrote:\n\n> Yeah... how come no one told him \"don't do that\"? LIMIT is an SQL\n> reserved word, so it's likely to cause trouble in any database you try\n> to use it on... I'd strongly recommend renaming that column asap. You\n> can see other reserved words at\n> http://www.postgresql.org/docs/8.0/interactive/sql-keywords-appendix.html\n> \n> Robert Treat\n\nNote also that the Slony-I replication system has problems with column \nnames identical to reserved words. This is rooted in the fact that the \nquote_ident() function doesn't quote reserved words ... as it IMHO is \nsupposed to do.\n\n\nJan\n\n> \n> On Mon, 2005-03-14 at 03:55, Christopher Kings-Lynne wrote:\n>> You will still need to use double quotes in 8.0.1...\n>> \n>> Chris\n>> \n>> Gourish Singbal wrote:\n>> > Thanks a lot,\n>> > \n>> > we might be upgrading to 8.0.1 soon.. 
till than using double quotes\n>> > should be fine.\n>> > \n>> > regards\n>> > gourish\n>> > \n>> > On Mon, 14 Mar 2005 18:25:22 +1100, Russell Smith <[email protected]> wrote:\n>> > \n>> >>On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:\n>> >>\n>> >>>Guys,\n>> >>>\n>> >>>I am having a problem firing queries on one of the tables which is\n>> >>>having \"limit\" as the column name.\n>> >>>\n>> >>>If a run an insert/select/update command on that table i get the below error.\n>> >>>\n>> >>>ERROR: syntax error at or near \"limit\" at character 71\n>> >>\n>> >>select \"limit\" from limit_table WHERE \"limit\" < 50 LIMIT 2;\n>> >>\n>> >>You need to quote the field name, and make sure the case is correct.\n>> >>\n>> >>>Any Help would be realyl great to solve the problem.\n>> >>>\n>> >>>postgresql 7.4.5 and linux OS\n>> >>>\n>> >>\n>> >>You should probably upgrade to 7.4.7\n>> >>\n>> >>Regards\n>> >>\n>> >>Russell Smith.\n>> >>\n>> > \n>> > \n>> > \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 14 Mar 2005 13:55:50 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "> Note also that the Slony-I replication system has problems \n> with column \n> names identical to reserved words. This is rooted in the fact \n> that the \n> quote_ident() function doesn't quote reserved words ... as it IMHO is \n> supposed to do.\n> \n> \n> Jan\n> \n\nDoes this apply to table names as well or just columns?\n\nBryan\n", "msg_date": "Mon, 14 Mar 2005 11:26:44 -0800", "msg_from": "\"Bryan Encina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> quote_ident() function doesn't quote reserved words ... as it IMHO is \n> supposed to do.\n\nYou're right, it probably should. The equivalent code in pg_dump knows\nabout this, but quote_ident() doesn't.\n\nOne thing that's been on my mind with respect to all this is that it\nwould be nice not to quote \"non-reserved\" keywords. Most of the weird\nnon-SQL-spec keywords that we have are non-reserved, and we could more\neasily keep them out of people's faces if we didn't quote them in dumps.\nOf course such a policy would raise the ante for any change that makes\nan existing keyword reserved when it wasn't before, but that's already\na dangerous kind of change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2005 14:30:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\" " }, { "msg_contents": "On 3/14/2005 2:26 PM, Bryan Encina wrote:\n>> Note also that the Slony-I replication system has problems \n>> with column \n>> names identical to reserved words. This is rooted in the fact \n>> that the \n>> quote_ident() function doesn't quote reserved words ... 
as it IMHO is \n>> supposed to do.\n>> \n>> \n>> Jan\n>> \n> \n> Does this apply to table names as well or just columns?\n> \n> Bryan\n\nSure does, don't try to replicate a table named \"user\".\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 14 Mar 2005 18:16:02 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "On 3/14/2005 4:26 AM, Qingqing Zhou wrote:\n\n> So is it to make SQL parser context-sensitive - say the parser will\n> understand that in statement \"SELECT * from LIMIT\", LIMIT is just a table\n> name, instead of keyword?\n\nMore or less, yes. To use a reserved keyword as an identifier (table or \ncolumn name), it must be enclosed in double quotes. Double quotes are \nalso used to make identifiers case sensitive. So\n\n select someval, \"SOMEVAL\", \"someVAL\" from \"user\";\n\nis a valid query retrieving 3 distinct columns from the table \"user\". \nThere is a builtin function quote_ident() in PostgreSQL that is supposed \nto return a properly quoted string allowed as an identifier for whatever \nname is passed in. But it fails to do so for all lower case names that \nare reserved keywords.\n\nThe queries Slony executes on the replicas are constructed using that \nquoting function, and therefore Slony fails to build valid SQL for \nreplicated table containing reserved keyword identifiers. One solution \nwould be to brute-force quote all identifiers in Slony ... not sure what \nthe side effects performance wise would be.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 21 Mar 2005 09:59:22 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\"" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> There is a builtin function quote_ident() in PostgreSQL that is supposed \n> to return a properly quoted string allowed as an identifier for whatever \n> name is passed in. But it fails to do so for all lower case names that \n> are reserved keywords.\n\nNot any more ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Mar 2005 11:30:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column name is \"LIMIT\" " } ]
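A small self-contained example of the quoting the thread settles on, using a scratch table: the double quotes let the reserved word be used as a column name, while the bare LIMIT at the end is still parsed as the clause. The quote_ident() behaviour Jan and Tom discuss is shown last; whether it quotes an all-lower-case reserved word depends on the server version, which is exactly the change Tom refers to.

    CREATE TABLE quota_demo ("limit" integer, note text);

    INSERT INTO quota_demo ("limit", note) VALUES (50, 'monthly cap');

    SELECT "limit", note
    FROM quota_demo
    WHERE "limit" < 100      -- quoted: the column
    LIMIT 10;                -- unquoted: the clause

    UPDATE quota_demo SET "limit" = 75 WHERE "limit" = 50;

    -- used by pg_dump, Slony and friends when generating SQL
    SELECT quote_ident('limit');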
[ { "msg_contents": "Hi,\n\nI'm using PostgreSQL 8 for a mmorpg.\nThe part of each operation is : select: 50%, update: 40%, insert: 10%.\nI have no more than 4-5 concurrent connections to the database, but each \nof them does A LOT of queries (several per second).\nThe database size is about 1GB, but it'll probably be around 2GB in a \nfews months.\nThe OS will be FreeBSD (version production 5.3 probably, or 4.10)\n\nAt this time, i'm looking for a new server. Before to buy it, I grab \nsome informations..\nSo, my question is : what would be the best hardware for this type of \nneeds ?\nOf course, I'm not asking for a trademark and/or for prices, but for hints.\n\n- What is the most important part of the system : CPU ? RAM ? Disks ?\n- Is a server with 2 or more CPUs much better than a server with a \nsingle one, for a pgsql database ?\n- How much RAM do I need ? The size of the data ? Twice the size ?\n- I heard Raid1+0 is better than Raid 5. Is it right ? What would be the \nbest configuration, regarding performances and security ?\n- Does the CPU type (i386, PowerPC, ....) matters ?\n- A lot of queries probably generate a lot of network output. Does the \nnetwork controller matters ?\n- And finally, last question : is it possible to run a single postgresql \ndatabase on several servers ? (hardware clustering)\n\nThanks in advance for your answers, and sorry for my crap english (i'm \nfrench).\n\nCamille Chafer\n\n\n\n\n\n\n\nHi,\n\nI'm using PostgreSQL 8 for a mmorpg.\nThe part of each operation is : select: 50%, update: 40%, insert: 10%.\nI have no more than 4-5 concurrent connections to the database, but\neach of them does A LOT of queries (several per second).\nThe database size is about 1GB, but it'll probably be around 2GB in a\nfews months.\nThe OS will be FreeBSD (version production 5.3 probably, or 4.10)\n\nAt this time, i'm looking for a new server. Before to buy it, I grab\nsome informations..\nSo, my question is : what would be the best hardware for this type of\nneeds ?\nOf course, I'm not asking for a trademark and/or for prices, but for\nhints.\n\n- What is the most important part of the system : CPU ? RAM ? Disks ?\n- Is a server with 2 or more CPUs much better than a server with a\nsingle one, for a pgsql database ?\n- How much RAM do I need ? The size of the data ? Twice the size ?\n- I heard Raid1+0 is better than Raid 5. Is it right ? What would be\nthe best configuration, regarding performances and security ?\n- Does the CPU type (i386, PowerPC, ....) matters ?\n- A lot of queries probably generate a lot of network output. Does the\nnetwork controller matters ?\n- And finally, last question : is it possible to run a single\npostgresql database on several servers ? (hardware clustering)\n\nThanks in advance for your answers, and sorry for my crap english (i'm\nfrench).\n\nCamille Chafer", "msg_date": "Mon, 14 Mar 2005 12:54:58 +0100", "msg_from": "Camille Chafer <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware impact on performances" }, { "msg_contents": "Camille Chafer wrote:\n> Hi,\n> \n> I'm using PostgreSQL 8 for a mmorpg.\n> The part of each operation is : select: 50%, update: 40%, insert: 10%.\n> I have no more than 4-5 concurrent connections to the database, but each \n> of them does A LOT of queries (several per second).\n> The database size is about 1GB, but it'll probably be around 2GB in a \n> fews months.\n> The OS will be FreeBSD (version production 5.3 probably, or 4.10)\n> \n> At this time, i'm looking for a new server. 
Before to buy it, I grab \n> some informations..\n> So, my question is : what would be the best hardware for this type of \n> needs ?\n> Of course, I'm not asking for a trademark and/or for prices, but for hints.\n> \n> - What is the most important part of the system : CPU ? RAM ? Disks ?\n\nUsually Disks/RAM. Since you've got a lot of updates/inserts, \nbattery-backed write-cache on your raid controller would be good.\n\n> - Is a server with 2 or more CPUs much better than a server with a \n> single one, for a pgsql database ?\n\nWith 2+ connections, each can be serviced by one CPU. Of course, if your \ndisk I/O is saturated then it won't help.\n\n> - How much RAM do I need ? The size of the data ? Twice the size ?\n\nIdeally, enough to hold your \"working set\". That is, enough cache to \nstore all pages/indexes you regularly access.\n\n> - I heard Raid1+0 is better than Raid 5. Is it right ? What would be the \n> best configuration, regarding performances and security ?\n\nIt can depend - check the list archives for a lot of discussion on this. \n More disks is always better.\n\n> - Does the CPU type (i386, PowerPC, ....) matters ?\n\nDual-Xeons have given problems. A lot of people seem to think \nOpteron-based systems provide good value.\n\n> - A lot of queries probably generate a lot of network output. Does the \n> network controller matters ?\n\nWell, obviously the more time spent handling network I/O, the less time \nyou spend running queries. I'd think it would have to be a *lot* of \nactivity to make a serious difference.\n\n> - And finally, last question : is it possible to run a single postgresql \n> database on several servers ? (hardware clustering)\n\nNot easily, and it probably wouldn't provide any performance benefit. \nPlenty of replication options though.\n\n> Thanks in advance for your answers, and sorry for my crap english (i'm \n> french).\n\nYour English is perfect.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 21 Mar 2005 09:55:00 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware impact on performances" } ]
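Richard's point about holding the working set in memory translates into a couple of postgresql.conf entries. The numbers below are only a rough sketch for a machine with about 2 GB of RAM running 8.0, not a recommendation; the values are in 8 kB pages except where noted and would have to be tuned against the real workload.

    # postgresql.conf -- illustrative starting point only
    shared_buffers = 20000          # ~160 MB of 8 kB buffers for hot pages
    effective_cache_size = 150000   # ~1.2 GB: what the OS file cache is expected to hold
    work_mem = 8192                 # per-sort/hash memory, in kB
    checkpoint_segments = 16        # spread out checkpoint I/O for the update-heavy load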
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi all,\nrunning 7.4.x I still have problem\nwith the select but I do not find any solution apart to rise to 0.7 the\ncpu_tuple_cost, I'm reposting it in the hope to discover a glitch in\nthe planner.\n\n\n# explain analyze select * from v_sc_user_request where login = 'Zoneon';\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan v_sc_user_request (cost=1029.67..1029.68 rows=1 width=364) (actual time=319350.564..319352.632 rows=228 loops=1)\n -> Sort (cost=1029.67..1029.68 rows=1 width=203) (actual time=319350.537..319350.683 rows=228 loops=1)\n Sort Key: sr.id_sat_request\n -> Nested Loop Left Join (cost=491.15..1029.66 rows=1 width=203) (actual time=897.252..319349.443 rows=228 loops=1)\n Join Filter: (\"outer\".id_package = \"inner\".id_package)\n -> Nested Loop (cost=4.00..382.67 rows=1 width=195) (actual time=31.252..2635.751 rows=228 loops=1)\n -> Hash Join (cost=4.00..379.59 rows=1 width=40) (actual time=31.174..578.979 rows=228 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Index Scan using idx_sat_request_expired on sat_request sr (cost=0.00..360.02 rows=3112 width=28) (actual time=0.150..535.697 rows=7990 loops=1)\n Index Cond: (expired = false)\n Filter: (request_time > (now() - '1 mon'::interval))\n -> Hash (cost=4.00..4.00 rows=2 width=16) (actual time=30.542..30.542 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.00 rows=2 width=16) (actual time=30.482..30.490 rows=1 loops=1)\n Index Cond: ((login)::text = 'Zoneon'::text)\n -> Index Scan using url_pkey on url u (cost=0.00..3.08 rows=1 width=163) (actual time=8.982..8.988 rows=1 loops=228)\n Index Cond: (\"outer\".id_url = u.id_url)\n -> Subquery Scan vsp (cost=487.15..642.42 rows=1298 width=12) (actual time=4.703..1384.172 rows=429 loops=228)\n -> Hash Join (cost=487.15..641.12 rows=1298 width=128) (actual time=4.697..1382.081 rows=429 loops=228)\n Hash Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Hash Join (cost=469.80..599.65 rows=1320 width=113) (actual time=0.755..30.305 rows=429 loops=228)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Hash Left Join (cost=13.86..79.54 rows=1479 width=101) (actual time=0.298..24.121 rows=1468 loops=228)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..53.48 rows=1479 width=101) (actual time=0.265..10.898 rows=1468 loops=228)\n -> Hash (cost=11.10..11.10 rows=1104 width=4) (actual time=2.506..2.506 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..11.10 rows=1104 width=4) (actual time=0.018..1.433 rows=1096 loops=1)\n -> Hash (cost=450.47..450.47 rows=2186 width=16) (actual time=92.435..92.435 rows=0 loops=1)\n -> Seq Scan on sequences (cost=0.00..450.47 rows=2186 width=16) (actual time=0.044..91.641 rows=429 loops=1)\n Filter: (estimated_start IS NOT NULL)\n -> Hash (cost=17.20..17.20 rows=57 width=19) (actual time=0.383..0.383 rows=0 loops=1)\n -> Seq Scan on programs (cost=0.00..17.20 rows=57 width=19) (actual time=0.024..0.323 rows=48 loops=1)\n Filter: (id_program <> 0)\n Total runtime: 319364.927 ms\n\n# set cpu_tuple_cost = 0.7;\n\n\n# explain analyze select * from v_sc_user_request where login = 'Zoneon';\n QUERY PLAN\n- 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan v_sc_user_request (cost=14708.99..14709.69 rows=1 width=364) (actual time=9956.650..9958.273 rows=228 loops=1)\n -> Sort (cost=14708.99..14708.99 rows=1 width=203) (actual time=9956.635..9956.778 rows=228 loops=1)\n Sort Key: sr.id_sat_request\n -> Merge Left Join (cost=14701.75..14708.98 rows=1 width=203) (actual time=8138.468..9955.724 rows=228 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Sort (cost=6909.94..6909.95 rows=1 width=195) (actual time=5454.427..5454.760 rows=228 loops=1)\n Sort Key: sr.id_package\n -> Nested Loop (cost=4.70..6909.93 rows=1 width=195) (actual time=0.763..5453.236 rows=228 loops=1)\n -> Hash Join (cost=4.70..6905.45 rows=1 width=40) (actual time=0.718..2325.661 rows=228 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Index Scan using idx_sat_request_expired on sat_request sr (cost=0.00..6884.49 rows=3112 width=28) (actual time=0.090..2310.108 rows=7989 loops=1)\n Index Cond: (expired = false)\n Filter: (request_time > (now() - '1 mon'::interval))\n -> Hash (cost=4.70..4.70 rows=2 width=16) (actual time=0.150..0.150 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.70 rows=2 width=16) (actual time=0.129..0.133 rows=1 loops=1)\n Index Cond: ((login)::text = 'Zoneon'::text)\n -> Index Scan using url_pkey on url u (cost=0.00..3.78 rows=1 width=163) (actual time=13.029..13.685 rows=1 loops=228)\n Index Cond: (\"outer\".id_url = u.id_url)\n -> Sort (cost=7791.81..7795.05 rows=1298 width=12) (actual time=2674.369..2674.791 rows=429 loops=1)\n Sort Key: vsp.id_package\n -> Subquery Scan vsp (cost=3026.61..7724.69 rows=1298 width=12) (actual time=177.979..2672.841 rows=429 loops=1)\n -> Hash Join (cost=3026.61..6816.09 rows=1298 width=128) (actual time=177.969..2670.402 rows=429 loops=1)\n Hash Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Hash Join (cost=2968.72..5826.77 rows=1320 width=113) (actual time=158.053..200.867 rows=429 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Hash Left Join (cost=785.56..2656.75 rows=1479 width=101) (actual time=3.127..40.350 rows=1468 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..1087.30 rows=1479 width=101) (actual time=0.039..24.680 rows=1468 loops=1)\n -> Hash (cost=782.80..782.80 rows=1104 width=4) (actual time=2.622..2.622 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..782.80 rows=1104 width=4) (actual time=0.012..1.401 rows=1096 loops=1)\n -> Hash (cost=2177.70..2177.70 rows=2186 width=16) (actual time=154.563..154.563 rows=0 loops=1)\n -> Seq Scan on sequences (cost=0.00..2177.70 rows=2186 width=16) (actual time=0.012..153.654 rows=429 loops=1)\n Filter: (estimated_start IS NOT NULL)\n -> Hash (cost=57.74..57.74 rows=57 width=19) (actual time=0.289..0.289 rows=0 loops=1)\n -> Seq Scan on programs (cost=0.00..57.74 rows=57 width=19) (actual time=0.022..0.224 rows=48 loops=1)\n Filter: (id_program <> 0)\n Total runtime: 9959.293 ms\n(37 rows)\n\n\n\nhere the views definition:\n\nCREATE OR REPLACE VIEW v_sc_user_request AS\n SELECT\n vsr.id_sat_request AS id_sat_request,\n vsr.id_user AS id_user,\n vsr.login AS login,\n vsr.url AS url,\n vsr.name AS name,\n vsr.descr AS descr,\n vsr.size AS size,\n trunc(vsr.size/1024.0/1024.0,2) 
AS size_mb,\n vsr.id_sat_request_status AS id_sat_request_status,\n sp_lookup_key('sat_request_status', vsr.id_sat_request_status) AS request_status,\n sp_lookup_descr('sat_request_status', vsr.id_sat_request_status) AS request_status_descr,\n vsr.id_url_status AS id_url_status,\n sp_lookup_key('url_status', vsr.id_url_status) AS url_status,\n sp_lookup_descr('url_status', vsr.id_url_status) AS url_status_descr,\n vsr.url_time_stamp AS url_time_stamp,\n date_trunc('seconds',vsr.request_time) AS request_time_stamp,\n vsr.id_package AS id_package,\n COALESCE(date_trunc('seconds',vsp.estimated_start)::text,'NA') AS estimated_start\n\n FROM\n v_sat_request vsr LEFT OUTER JOIN v_sc_packages vsp USING ( id_package )\n WHERE\n vsr.request_time > now() - '1 month'::interval AND\n vsr.expired = FALSE\n ORDER BY id_sat_request DESC\n;\n\n\n\n\nCREATE OR REPLACE VIEW v_sat_request AS\n SELECT\n sr.id_user AS id_user,\n ul.login AS login,\n sr.id_sat_request AS id_sat_request,\n u.id_url AS id_url,\n u.url AS url,\n u.name AS name,\n u.descr AS descr,\n u.size AS size,\n u.storage AS storage,\n sr.id_package AS id_package,\n sr.id_sat_request_status AS id_sat_request_status,\n sr.request_time AS request_time,\n sr.work_time AS request_work_time,\n u.id_url_status AS id_url_status,\n u.time_stamp AS url_time_stamp,\n sr.expired AS expired\n FROM\n sat_request sr,\n url u,\n user_login ul\n WHERE\n ---------------- JOIN ---------------------\n sr.id_url = u.id_url AND\n sr.id_user = ul.id_user\n -------------------------------------------\n;\n\n\n\n\nCREATE OR REPLACE VIEW v_sc_packages AS\n SELECT\n\n vpr.id_program AS id_program,\n vpr.name AS program_name,\n\n vpk.id_package AS id_package,\n date_trunc('seconds', vs.estimated_start) AS estimated_start,\n\n vpk.name AS package_name,\n vpk.TYPE AS TYPE,\n vpk.description AS description,\n vpk.target AS target,\n vpk.fec AS fec_alg,\n vpk.output_group - vpk.input_group AS fec_redundancy,\n vpk.priority AS priority,\n vpk.updatable AS updatable,\n vpk.auto_listen AS auto_listen,\n vpk.start_file AS start_file,\n vpk.view_target_group AS view_target_group,\n vpk.target_group AS target_group\n\n FROM\n v_programs vpr,\n v_packages vpk,\n v_sequences vs\n\n WHERE\n ------------ JOIN -------------\n vpr.id_program = vs.id_program AND\n vpk.id_package = vs.id_package AND\n\n -------------------------------\n vs.estimated_start IS NOT NULL\n;\n\n\n\nCREATE OR REPLACE VIEW v_programs AS\n SELECT id_program AS id_program,\n id_publisher AS id_publisher,\n name AS name,\n description AS description,\n sp_lookup_key('program_type', id_program_type) AS TYPE,\n sp_lookup_key('program_status', id_program_status) AS status,\n last_position AS last_position\n FROM programs\n WHERE id_program<>0\n;\n\n\nCREATE OR REPLACE VIEW v_packages AS\n SELECT p.id_package AS id_package,\n p.id_publisher AS id_publisher,\n p.name AS name,\n p.information AS information,\n p.description AS description,\n sp_lookup_key('package_type', p.id_package_type)\n AS TYPE,\n sp_lookup_key('target', p.id_target)\n AS target,\n p.port AS port,\n p.priority AS priority,\n sp_lookup_key('fec', p.id_fec)\n AS fec,\n p.input_group AS input_group,\n p.output_group AS output_group,\n p.updatable AS updatable,\n p.checksum AS checksum,\n p.version AS version,\n p.start_file AS start_file,\n p.view_target_group AS view_target_group,\n p.target_group AS target_group,\n p.auto_listen AS auto_listen,\n p.public_flag AS public_flag,\n p.needed_version AS needed_version,\n p.logic_version AS 
logic_version,\n p.package_size AS package_size,\n ps.id_drm_process AS id_drm_process,\n ps.id_cas_service AS id_cas_service,\n ps.id_cas_settings AS id_cas_settings,\n ps.id_drm_service AS id_drm_service\n\n FROM packages p LEFT OUTER JOIN package_security ps USING (id_package)\n ;\n\n\n\nCREATE OR REPLACE VIEW v_sequences AS\n SELECT id_package AS id_package,\n id_program AS id_program,\n internal_position AS internal_position,\n estimated_start AS estimated_start\n FROM sequences\n;\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCNcAn7UpzwH2SGd4RAkBrAJ4+TFXKVggjNH2ddjezNt1GAGgSAQCfXGQt\nBeEVkXECodZRCg395mAdaJE=\n=UVGS\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Mon, 14 Mar 2005 17:47:36 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Bad Performance[2]" } ]
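Since no better fix surfaced in the thread, note that the cpu_tuple_cost workaround does not have to be applied cluster-wide; it can be scoped to the session or to a single role, so other queries keep the default costing. A sketch, reusing the view and login from the post and assuming a hypothetical role name:

    -- per session: only this connection sees the changed costing
    SET cpu_tuple_cost = 0.7;
    EXPLAIN ANALYZE SELECT * FROM v_sc_user_request WHERE login = 'Zoneon';
    RESET cpu_tuple_cost;

    -- per role: every session of that user gets the setting at login
    ALTER USER report_user SET cpu_tuple_cost = 0.7;   -- report_user is hypothetical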
[ { "msg_contents": "Alex Turner wrote: \n> 35 Trans/sec is pretty slow, particularly if they are only one row at\n> a time. I typicaly get 200-400/sec on our DB server on a bad day. Up\n> to 1100 on a fresh database.\n\nWell, don't rule out that his raid controller is not caching his writes.\nHis WAL sync method may be overriding his raid cache policy and flushing\nhis writes to disk, always. Win32 has the same problem, and before\nMagnus's O_DIRECT patch, there was no way to easily work around it\nwithout turning fsync off. I'd suggest playing with different WAL sync\nmethods before trying anything else.\n\nMerli\n", "msg_date": "Mon, 14 Mar 2005 16:18:00 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on RAID5" }, { "msg_contents": "He doesn't have a RAID controller, it's software RAID...\n\nAlex Turner\nnetEconomis\n\n\nOn Mon, 14 Mar 2005 16:18:00 -0500, Merlin Moncure\n<[email protected]> wrote:\n> Alex Turner wrote:\n> > 35 Trans/sec is pretty slow, particularly if they are only one row at\n> > a time. I typicaly get 200-400/sec on our DB server on a bad day. Up\n> > to 1100 on a fresh database.\n> \n> Well, don't rule out that his raid controller is not caching his writes.\n> His WAL sync method may be overriding his raid cache policy and flushing\n> his writes to disk, always. Win32 has the same problem, and before\n> Magnus's O_DIRECT patch, there was no way to easily work around it\n> without turning fsync off. I'd suggest playing with different WAL sync\n> methods before trying anything else.\n> \n> Merli\n>\n", "msg_date": "Mon, 14 Mar 2005 16:42:07 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on RAID5" } ]
[ { "msg_contents": "\nBruce Momjian <[email protected]> wrote:\n\n> Agreed. I think we should reduce it at least to 3.\n\nNote that changing it from 4 to 3 or even 2 is unlikely to really change much.\nMany of the plans people complain about turn out to have critical points\ncloser to 1.2 or 1.1. \n\nThe only reason things work out better with such low values is because people\nhave data sets that fit more or less entirely in RAM. So values close to 1 or\neven equal to 1 actually represent the reality.\n\nThe \"this day and age\" argument isn't very convincing. Hard drive capacity\ngrowth has far outstripped hard drive seek time and bandwidth improvements.\nRandom access has more penalty than ever.\n\n-- \ngreg\n\n", "msg_date": "15 Mar 2005 00:30:41 -0500", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "Gregory Stark wrote:\n\n>The \"this day and age\" argument isn't very convincing. Hard drive capacity\n>growth has far outstripped hard drive seek time and bandwidth improvements.\n>Random access has more penalty than ever.\n> \n>\nIn point of fact, there haven't been noticeable seek time improvements \nfor years. Transfer rates, on the other hand, have gone through the roof.\n\nWhich is why I would question the published tuning advice that \nrecommends lowering it to 2 for arrays. Arrays increase the effective \ntransfer rate more than they reduce random access times. Dropping from 4 \nto 2 would reflect going from a typical single 7200rpm ATA drive to a \n15000rpm SCSI drive, but striping will move it back up again - probably \neven higher than 4 with a big array (at a guess, perhaps the \nrelationship might be approximated as a square root after allowing for \nthe array type?).\n\nWith default settings, I've seen the planner pick the wrong index unless \nrandom_page_cost was set to 2. But in testing on an ATA drive, I \nachieved slightly better plan costings by increasing cpu_tuple_cost \n(relative to cpu_index_tuple_cost - by default it's only a factor of 10) \nand actually *raising* random_page_cost to 5! So why pick on one \nparameter? It's all going to vary according to the query and the data.\n\nI agree with Tom 100%. Pulling levers on a wonky model is no solution.\n", "msg_date": "Wed, 16 Mar 2005 09:41:07 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "\nDavid Brown <[email protected]> writes:\n\n> Gregory Stark wrote:\n> \n> >The \"this day and age\" argument isn't very convincing. Hard drive capacity\n> >growth has far outstripped hard drive seek time and bandwidth improvements.\n> >Random access has more penalty than ever.\n>\n> In point of fact, there haven't been noticeable seek time improvements for\n> years. Transfer rates, on the other hand, have gone through the roof.\n\nEr, yeah. I stated it wrong. The real ratio here is between seek time and\nthroughput.\n\nTypical 7200RPM drives have average seek times are in the area of 10ms.\nTypical sustained transfer rates are in the range of 40Mb/s. Postgres reads\n8kB blocks at a time.\n\nSo 800kB/s for random access reads. And 40Mb/s for sequential reads. That's a\nfactor of 49. I don't think anyone wants random_page_cost to be set to 50\nthough.\n\nFor a high end 15k drive I see average seek times get as low as 3ms. And\nsustained transfer rates get as high as 100Mb/s. So about 2.7Mb/s for random\naccess reads or about a random_page_cost of 37. 
Still pretty extreme.\n\nSo what's going on with the empirically derived value of 4? Perhaps this is\nbecause even though Postgres is reading an entire table sequentially it's\nunlikely to be the only I/O consumer? The sequential reads would be\ninterleaved occasionally by some other I/O forcing a seek to continue.\n\nIn which case the true random_page_cost seems like it would be extremely\nsensitive to the amount of readahead the OS does. To reach a random_page_cost\nof 4 given the numbers above for a 7200RPM drive requires that just under 25%\nof the I/O of a sequential table scan be random seeks [*]. That translates to\n32kB of sequential reading, which actually does sound like a typical value for\nOS readahead.\n\nI wonder if those same empirical tests would show even higher values of\nrandom_page_cost if the readahead were turned up to 64kB or 128kB.\n\n\n\n\n[*] A bit of an algebraic diversion: \n\n 1s/10ms = 100 random buffers/s. \n random_page_cost = 4 so net sequential buffers/s = 400.\n\n solve:\n\n 400 buffers = rnd+seq\n 1000ms = .2*seq + 10*rnd\n\n\n-- \ngreg\n\n", "msg_date": "16 Mar 2005 02:23:47 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "Greg,\n\n> So 800kB/s for random access reads. And 40Mb/s for sequential reads. That's\n> a factor of 49. I don't think anyone wants random_page_cost to be set to 50\n> though.\n>\n> For a high end 15k drive I see average seek times get as low as 3ms. And\n> sustained transfer rates get as high as 100Mb/s. So about 2.7Mb/s for\n> random access reads or about a random_page_cost of 37. Still pretty\n> extreme.\n\nActually, what you're demonstrating here is that there's really no point in \nhaving a random_page_cost GUC, since the seek/scan ratio is going to be high \nregardless. \n\nAlthough I can point out that you left out the fact that the disk needs to do \na seek to find the beginning of the seq scan area, and even then some file \nfragmentation is possible. Finally, I've never seen PostgreSQL manage more \nthan 70% of the maximum read rate, and in most cases more like 30%. \n\n> So what's going on with the empirically derived value of 4? \n\nIt's not empirically derived; it's a value we plug into an \ninternal-to-postgresql formula. And \"4\" is a fairly conservative value that \nworks for a lot of systems.\n\nRealistically, the values we should be deriving from are:\n-- median file cache size for postgresql files\n-- average disk read throughput\n-- effective processor calculation throughput\n-- median I/O contention\n\nHowever, working those 4 hardware \"facts\" into forumulas that allow us to \ncalculate the actual cost of a query execution plan is somebody's PhD paper.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 16 Mar 2005 10:03:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "\nJosh Berkus <[email protected]> writes:\n\n> Although I can point out that you left out the fact that the disk needs to do \n> a seek to find the beginning of the seq scan area, and even then some file \n> fragmentation is possible. Finally, I've never seen PostgreSQL manage more \n> than 70% of the maximum read rate, and in most cases more like 30%. \n\nHm. I just did a quick test. 
It wasn't really long enough to get a good\nestimate, but it seemed to reach about 30MB/s on this drive that's only\ncapable of 40-50MB/s depending on the location on the platters.\n\nThat's true though, some of my calculated 25% random seeks could be caused by\nfragmentation. But it seems like that would be a small part.\n\n> > So what's going on with the empirically derived value of 4? \n> \n> It's not empirically derived; it's a value we plug into an\n> internal-to-postgresql formula.\n\nI thought Tom said he got the value by doing empirical tests.\n\n-- \ngreg\n\n", "msg_date": "16 Mar 2005 14:45:26 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> So what's going on with the empirically derived value of 4? \n\n> It's not empirically derived;\n\nYes it is. I ran experiments back in the late 90s to derive it.\nCheck the archives.\n\nDisks have gotten noticeably bigger since then, but I don't think\nthe ratio of seek time to rotation rate has changed much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Mar 2005 15:42:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "Tom,\n\n> Yes it is. I ran experiments back in the late 90s to derive it.\n> Check the archives.\n\nHmmmm ... which list?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 17 Mar 2005 09:54:29 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Yes it is. I ran experiments back in the late 90s to derive it.\n>> Check the archives.\n\n> Hmmmm ... which list?\n\n-hackers, no doubt. -performance didn't exist then.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Mar 2005 12:57:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost " }, { "msg_contents": "On Thu, Mar 17, 2005 at 09:54:29AM -0800, Josh Berkus wrote:\n> \n> > Yes it is. I ran experiments back in the late 90s to derive it.\n> > Check the archives.\n> \n> Hmmmm ... which list?\n\nThese look like relevant threads:\n\nhttp://archives.postgresql.org/pgsql-hackers/2000-01/msg00910.php\nhttp://archives.postgresql.org/pgsql-hackers/2000-02/msg00215.php\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Thu, 17 Mar 2005 11:26:53 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" } ]
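Greg's footnote solves cleanly; spelling the arithmetic out, using his figures of 10 ms per random page and 0.2 ms per sequential 8 kB page at 40 MB/s:

    rnd + seq = 400                     -- pages read in the one second
    10*rnd + 0.2*seq = 1000             -- milliseconds spent reading them

    10*rnd + 0.2*(400 - rnd) = 1000
    9.8*rnd = 920
    rnd ~ 94,  seq ~ 306

    94 / 400 ~ 23.5%, i.e. "just under 25%" of the page reads are seeks,
    or roughly one seek for every 30-odd kB read, in line with the 32 kB
    readahead figure in the text.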
[ { "msg_contents": "> ------------------------------\n> \n> Date: Mon, 14 Mar 2005 09:41:30 +0800\n> From: \"Qingqing Zhou\" <[email protected]>\n> To: [email protected]\n> Subject: Re: One tuple per transaction\n> Message-ID: <[email protected]>\n> \n> \"\"Tambet Matiisen\"\" <[email protected]> writes\n...\n> > If I'm correct, the dead tuples must be scanned also during \n> table and \n> > index scan, so a lot of dead tuples slows down queries \n> considerably, \n> > especially when the table doesn't fit into shared buffers any more. \n> > And as I'm in transaction, I can't VACUUM to get rid of \n> those tuples. \n> > In one occasion the page count for a table went from 400 to \n> 22000 at \n> > the end.\n> \n> Not exactly. The dead tuple in the index will be scanned the \n> first time (and its pointed heap tuple as well), then we will \n> mark it dead, then next time we came here, we will know that \n> the index tuple actually points to a uesless tuple, so we \n> will not scan its pointed heap tuple.\n> \n\nBut the dead index tuple will still be read from disk next time? Maybe\nreally the performance loss will be neglible, but if most of tuples in\nyour table/index are dead, then it might be significant.\n\nConsider the often suggested solution for speeding up \"select count(*)\nfrom table\" query: make another table rowcounts and for each of the\noriginal tables add insert and delete triggers to update row count in\nrowcounts table. Actually this is standard denormalization technique,\nwhich I use often. For example to ensure that order.total =\nsum(order_line.total).\n\nNow, if typical inserts into your most active table occur in batches of\n3 rows, in one transaction, then row count for this table is updated 3\ntimes during transaction. 3 updates generate 3 tuples, while 2 of them\nare dead from the very start. You effectively commit 2 useless tuples.\nAfter millions of inserts you end up with rowcounts table having 2/3 of\ndead tuples and queries start to slow down.\n\nCurrent solution is to vacuum often. My proposal was to create new tuple\nonly with first update. The next updates in the same transaction would\nupdate the existing tuple, not create a new. \n\nBut as I'm writing this, I'm starting to get some of the associated\nimplementation problems. The updated tuple might not be the same size as\nprevious tuple. Tuple updates are probably not implemented anyway. And\nfor a reason, as disk write takes the same time, regardless if you\nupdate or write new data. And tons of other problems, which developers\nare probably more aware of.\n\nBut one thing still bothers me. Why is new index tuple generated when I\nupdate non-indexed column? OK, I get it again. Index tuple points to\nheap tuple, thus after update it would point to dead tuple. And as it\ntakes the same time to update pointer or to write a new tuple, it's\neasier to write a new.\n\nCase closed.\n\n Tambet\n", "msg_date": "Tue, 15 Mar 2005 11:01:22 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "Tambet Matiisen wrote:\n>>\n>>Not exactly. The dead tuple in the index will be scanned the \n>>first time (and its pointed heap tuple as well), then we will \n>>mark it dead, then next time we came here, we will know that \n>>the index tuple actually points to a uesless tuple, so we \n>>will not scan its pointed heap tuple.\n>>\n> \n> \n> But the dead index tuple will still be read from disk next time? 
Maybe\n> really the performance loss will be neglible, but if most of tuples in\n> your table/index are dead, then it might be significant.\n\nWhen a block is read from disk, any dead tuples in that block will be \nread in. Vacuum recovers these.\n\n> Consider the often suggested solution for speeding up \"select count(*)\n> from table\" query: make another table rowcounts and for each of the\n> original tables add insert and delete triggers to update row count in\n> rowcounts table. Actually this is standard denormalization technique,\n> which I use often. For example to ensure that order.total =\n> sum(order_line.total).\n\nThis does of course completely destroy concurrency. Since you need to \nlock the summary table, other clients have to wait until you are done.\n\n> Now, if typical inserts into your most active table occur in batches of\n> 3 rows, in one transaction, then row count for this table is updated 3\n> times during transaction. 3 updates generate 3 tuples, while 2 of them\n> are dead from the very start. You effectively commit 2 useless tuples.\n> After millions of inserts you end up with rowcounts table having 2/3 of\n> dead tuples and queries start to slow down.\n> \n> Current solution is to vacuum often. My proposal was to create new tuple\n> only with first update. The next updates in the same transaction would\n> update the existing tuple, not create a new. \n\nHow do you roll back to a savepoint with this model?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 15 Mar 2005 09:37:41 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "On Tuesday 15 March 2005 04:37, Richard Huxton wrote:\n> Tambet Matiisen wrote:\n> > Now, if typical inserts into your most active table occur in batches of\n> > 3 rows, in one transaction, then row count for this table is updated 3\n> > times during transaction. 3 updates generate 3 tuples, while 2 of them\n> > are dead from the very start. You effectively commit 2 useless tuples.\n> > After millions of inserts you end up with rowcounts table having 2/3 of\n> > dead tuples and queries start to slow down.\n> >\n> > Current solution is to vacuum often. My proposal was to create new tuple\n> > only with first update. The next updates in the same transaction would\n> > update the existing tuple, not create a new.\n>\n> How do you roll back to a savepoint with this model?\n>\n\nYou can't, but you could add the caveat to just do this auto-reuse within any \ngiven nested transaction. Then as long as you aren't using savepoints you \nget to reclaim all the space/ \n\n On a similar note I was just wondering if it would be possible to mark any of \nthese dead tuples as ready to be reused at transaction commit time, since we \nknow that they are dead to any and all other transactions currently going on. \nThis would save you from having to vacuum to get the tuples marked ready for \nreuse. In the above scenario this could be a win, whether it would be \noverall is hard to say. 
\n\n-- \nRobert Treat\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Tue, 15 Mar 2005 16:52:30 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On a similar note I was just wondering if it would be possible to\n> mark any of these dead tuples as ready to be reused at transaction\n> commit time, since we know that they are dead to any and all other\n> transactions currently going on.\n\nI believe VACUUM already knows that xmin = xmax implies the tuple\nis dead to everyone.\n\n> This would save you from having to vacuum to get the tuples marked\n> ready for reuse.\n\nNo; you forgot about reclaiming associated index entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Mar 2005 18:51:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction " }, { "msg_contents": "On Tue, Mar 15, 2005 at 06:51:19PM -0500, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > On a similar note I was just wondering if it would be possible to\n> > mark any of these dead tuples as ready to be reused at transaction\n> > commit time, since we know that they are dead to any and all other\n> > transactions currently going on.\n> \n> I believe VACUUM already knows that xmin = xmax implies the tuple\n> is dead to everyone.\n\nHuh, that is too simplistic in a subtransactions' world, isn't it?\n\nOne way to solve this would be that a transaction that kills a tuple\nchecks whether it was created by itself (not necessarily the same Xid),\nand somehow report it to the FSM right away.\n\nThat'd mean physically moving a lot of tuples in the page, so ISTM it's\ntoo expensive an \"optimization.\" Oh, and also delete the tuple from\nindexes.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Vivir y dejar de vivir son soluciones imaginarias.\nLa existencia est� en otra parte\" (Andre Breton)\n", "msg_date": "Tue, 15 Mar 2005 20:06:29 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On Tue, Mar 15, 2005 at 06:51:19PM -0500, Tom Lane wrote:\n>> I believe VACUUM already knows that xmin = xmax implies the tuple\n>> is dead to everyone.\n\n> Huh, that is too simplistic in a subtransactions' world, isn't it?\n\nWell, it's still correct as a fast-path check. There are extensions\nyou could imagine making ... but offhand I agree that it's not worth\nthe trouble. Maybe in a few years when everyone and his sister is\nusing subtransactions constantly, we'll feel a need to optimize these\ncases. \n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Mar 2005 23:44:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One tuple per transaction " } ]
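For reference, a minimal version of the rowcounts denormalization that started this thread, written for 7.4/8.0-era PL/pgSQL (string-quoted function body, no dollar quoting); the tracked table t is hypothetical and plpgsql is assumed to be installed in the database. Every insert or delete on t updates the single summary row, which is why a batch of N inserts writes N new versions of that row, all but the last of them dead as soon as the transaction commits.

    CREATE TABLE rowcounts (
        tablename text PRIMARY KEY,
        n         bigint NOT NULL
    );
    INSERT INTO rowcounts VALUES ('t', 0);   -- seed with the current count of t

    CREATE OR REPLACE FUNCTION t_rowcount_trig() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE rowcounts SET n = n + 1 WHERE tablename = ''t'';
            RETURN NEW;
        ELSE  -- DELETE
            UPDATE rowcounts SET n = n - 1 WHERE tablename = ''t'';
            RETURN OLD;
        END IF;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER t_rowcount
        AFTER INSERT OR DELETE ON t
        FOR EACH ROW EXECUTE PROCEDURE t_rowcount_trig();

    -- the expensive  SELECT count(*) FROM t  becomes:
    SELECT n FROM rowcounts WHERE tablename = 't';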
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Richard Huxton [mailto:[email protected]] \n> Sent: Tuesday, March 15, 2005 11:38 AM\n> To: Tambet Matiisen\n> Cc: [email protected]\n> Subject: Re: [PERFORM] One tuple per transaction\n> \n...\n> \n> > Consider the often suggested solution for speeding up \n> \"select count(*) \n> > from table\" query: make another table rowcounts and for each of the \n> > original tables add insert and delete triggers to update \n> row count in \n> > rowcounts table. Actually this is standard denormalization \n> technique, \n> > which I use often. For example to ensure that order.total = \n> > sum(order_line.total).\n> \n> This does of course completely destroy concurrency. Since you need to \n> lock the summary table, other clients have to wait until you are done.\n> \n\nYes, it does for rowcounts table. But consider the orders example - it\nonly locks the order which I add lines. As there is mostly one client\ndealing with one order, but possibly thousands dealing with different\norders, it should not pose any concurrency restrictions.\n\n> > Now, if typical inserts into your most active table occur \n> in batches \n> > of 3 rows, in one transaction, then row count for this table is \n> > updated 3 times during transaction. 3 updates generate 3 \n> tuples, while \n> > 2 of them are dead from the very start. You effectively commit 2 \n> > useless tuples. After millions of inserts you end up with rowcounts \n> > table having 2/3 of dead tuples and queries start to slow down.\n> > \n> > Current solution is to vacuum often. My proposal was to create new \n> > tuple only with first update. The next updates in the same \n> transaction \n> > would update the existing tuple, not create a new.\n> \n> How do you roll back to a savepoint with this model?\n> \n\nEvery savepoint initiates a new (sub)transaction.\n\n Tambet\n", "msg_date": "Tue, 15 Mar 2005 12:24:49 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One tuple per transaction" } ]
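For readers unfamiliar with the denormalization pattern described above, here is a minimal
sketch of a row-count summary table maintained by triggers. All object names are invented,
an "orders" table is assumed to exist, plpgsql is assumed to be installed, and the
pre-dollar-quoting style of the 7.4/8.0 era is used. Note the caveat raised above: every
writer now serializes on the single summary row.

CREATE TABLE rowcounts (tablename text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO rowcounts VALUES ('orders', 0);

CREATE FUNCTION orders_rowcount() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE rowcounts SET n = n + 1 WHERE tablename = ''orders'';
    ELSE  -- DELETE
        UPDATE rowcounts SET n = n - 1 WHERE tablename = ''orders'';
    END IF;
    RETURN NULL;
END;' LANGUAGE plpgsql;

CREATE TRIGGER orders_rowcount AFTER INSERT OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_rowcount();

-- Inserting three order rows in one transaction updates the 'orders'
-- summary row three times, which is exactly where the dead tuples
-- discussed in the thread come from.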
[ { "msg_contents": "Hello,\n\njust recently I held a short course on PG.\n\nOne course attendant, Robert Dollinger, got \ninterested in benchmarking single inserts (since\nhe currently maintains an application that does\nexactly that on Firebird and speed is an issue\nthere).\n\nHe came up with a table that I think is interesting\nfor other people so I asked permission to publish\nit on this list.\n\nHere it is:\nhttp://1006.org/pg/postgresql_firebird_win_linux.pdf\n\nNote: some german words are there, I can't change\nthe pdf, so here's a short explanation:\n\nHe tested the speed of 4000 inserts through a Delphi\napplication with zeos components.\n\nthe 3 parameters are:\n\n* transaction\n - single: all 4000 inserts inside 1 transaction\n - multi: 4000 inserts with 4000 commits\n\n* fsync (for PG) or forced writes (for FB)\n - true/false\n\n* \"Verbindung\" = connection\n - local\n - LAN\n - wireless\n\n notes: the server ran either on a windows desktop\n machine or a linux laptop; the client allways ran\n on the windows desktop\n\nTimings are in msec, note that you cannot directly\ncompare Windows and Linux Performance, since machines\nwere different.\n\nYou can, however, compare PG to Firebird, and you\ncan see the effect of the 3 varied parametert.\n\nOne thing that stands out is how terribly\nbad Windows performed with many small single\ntransactions and fsync=true.\n\nAppearantly fsync on Windows is a very costly\noperation.\n\nAnother (good) thing is that PG beats FB on all\nother tests :-)\n\n\nBye, Chris.\n\n\n\n\n\n", "msg_date": "Tue, 15 Mar 2005 14:44:07 +0100", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": true, "msg_subject": "interesting benchmarks PG/Firebird Linux/Windows fsync/nofsync" }, { "msg_contents": "Chris Mair wrote:\n> Timings are in msec, note that you cannot directly\n> compare Windows and Linux Performance, since machines\n> were different.\n> \n> You can, however, compare PG to Firebird, and you\n> can see the effect of the 3 varied parametert.\n> \n> One thing that stands out is how terribly\n> bad Windows performed with many small single\n> transactions and fsync=true.\n> \n> Appearantly fsync on Windows is a very costly\n> operation.\n\nYes, we now enable FILE_FLAG_WRITE_THROUGH on Win32 for open_sync and I\nam about to open a discussion whether this should be the default for\nWin32, and whether we should backpatch this to 8.0.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Mar 2005 08:55:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interesting benchmarks PG/Firebird Linux/Windows fsync/nofsync" }, { "msg_contents": "\"Bruce Momjian\" <[email protected]> writes\n>\n> Yes, we now enable FILE_FLAG_WRITE_THROUGH on Win32 for open_sync and I\n> am about to open a discussion whether this should be the default for\n> Win32, and whether we should backpatch this to 8.0.X.\n\nJust a short msg: Oracle/SQL Server enable it as default in win32 *no matter\nwhat* ...\n\nRegards,\nQingqing\n\n\n", "msg_date": "Wed, 16 Mar 2005 09:44:18 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interesting benchmarks PG/Firebird Linux/Windows fsync/nofsync" } ]
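The "single" versus "multi" transaction rows in the benchmark above come down to whether
the 4000 inserts share one commit or pay for 4000 separate ones. Schematically (the table
name is invented):

-- multi: autocommit; with fsync on, every row waits for its own WAL flush
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... 4000 times ...

-- single: one transaction, essentially one WAL flush at the final COMMIT
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... 4000 times ...
COMMIT;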
[ { "msg_contents": "\n> One thing that stands out is how terribly bad Windows \n> performed with many small single transactions and fsync=true.\n> \n> Appearantly fsync on Windows is a very costly operation.\n\nWhat's the hardware? If you're running on disks with write cache\nenabled, fsync on windows will write through the write cache *no matter\nwhat*. I don't know of any other OS where it will do that.\n\nIf you don't have a battery backed write cache, then all other\nconfigurations are considered very dangerous in case your machine\ncrashes.\n\nIf you have battery backed write cache, then yes, pg on windows will\nperform poorly indeed.\n\n\nThere is a patch in the queue for 8.0.2, and already applied to 8.1\nIIRC, that will fix the bad performance with write-cache on win32.\n\n(can't read the PDF, it crashes my adobe reader for some reason. Perhaps\nit contains the information above...)\n\n//Magnus\n", "msg_date": "Tue, 15 Mar 2005 14:52:05 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interesting benchmarks PG/Firebird Linux/Windows fsync/nofsync" } ]
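The settings involved here can be inspected from any session; wal_sync_method itself can
only be changed in postgresql.conf. The comments summarize the thread and are not a
substitute for the documentation:

SHOW fsync;             -- keep this on unless losing the latest commits
                        -- after a crash is acceptable
SHOW wal_sync_method;   -- 'open_sync' is the method the Win32 patch discussed
                        -- above opens with FILE_FLAG_WRITE_THROUGH, so commits
                        -- are not left sitting in the drive's write cache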
[ { "msg_contents": "Hi all,\n\nI suspect this problem/bug has been dealt with already, but I couldn't\nfind anything in the mail archives.\n\nI'm using postgres 7.3, and I managed to recreate the problem using the attached \nfiles. \n\nThe database structure is in slow_structure.sql\n\nAfter creating the database, using this script, I ran run_before_load__fast.sql\n\nThen I created a load file using create_loadfile.sh (It creates a file called load.sql)\n\nI timed the loading of this file, and it loaded in 1 min 11.567 sec\n\nThen I recreated the database from slow_structure.sql, ran run_before_load__slow.sql,\nand then loaded the same load.sql and it took 3 min 51.293 sec which is about 6 times slower.\n\nI tried the same thing on postgres 8.0.0 to see if it does the same thing, but there it\nwas consistently slow : 3 min 31.367 sec\n\nThe only way I got the load.sql to load fast on postgres 8.0.0, was by not creating\nany of the foreign key constraints that point to the \"main\" table, and then enabling them\nafterwards. This gave me the fastest time overall : 1 min 4.911 sec\n\nMy problem is that on the postgres 7.3.4 database I'm working with, a load process that\nused to take 40 minutes, now takes 4 hours, because of 3 rows data being loaded into \na table (similar in setup to the \"main\" table in the example) before the indexes were created.\n(This happens automatically when you dump and re-import the database (7.3.4))\n\nIs there a way to get it to load fast again on the 7.3 database without dropping the foreign \nkey constraints (After running run_before_load_slow.sql) ?\n\nAnd, if someone knows off-hand, what's happening here?\n\nTIA\nKind Regards\nStefan", "msg_date": "Tue, 15 Mar 2005 17:23:45 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": true, "msg_subject": "Slow loads when indexes added." } ]
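The variant that loaded fastest above (dropping the foreign keys that point at the "main"
table and re-adding them after the load) looks roughly like this. Constraint, table and
column names are invented here; the real ones are in the attached slow_structure.sql:

ALTER TABLE child DROP CONSTRAINT child_main_fk;
-- ... load the data (load.sql) ...
ALTER TABLE child
    ADD CONSTRAINT child_main_fk FOREIGN KEY (main_id) REFERENCES main (id);

-- Re-adding the constraint validates all rows in one pass, which is far
-- cheaper than firing a per-row check for every inserted row.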
[ { "msg_contents": "Hi all,\n\n Il get this strange problem when deleting rows from a Java program. \nSometime (For what I noticed it's not all the time) the server take \nalmost forever to delete rows from table.\n\nHere It takes 20 minutes to delete the IC table.\n\nJava logs:\nINFO [Thread-386] (Dao.java:227) 2005-03-15 15:38:34,754 : Execution \nSQL file: resources/ukConfiguration/reset_application.sql\nDELETE FROM YR\nINFO [Thread-386] (Dao.java:227) 2005-03-15 15:38:34,964 : Execution \nSQL file: resources/inventory/item/reset_application.sql\nDELETE FROM IC\nINFO [Thread-386] (Dao.java:227) 2005-03-15 15:58:45,072 : Execution \nSQL file: resources/ukResource/reset_application.sql\nDELETE FROM RA\n\n\n I get this problem on my dev (Windows/7.4/Cygwin) environment. But now \nI see that it's also have this problem on my production env. Yes I \ntought I was maybe just a cygwin/Windows problem .. apparently not :-((((\n\nOn my dev I can see the Postgresql related process running at almost 50% \nof CPU usage for all the time. So I suppose it's something inside \nPostgresql. I rememeber having tried to delete the content of my table \n(IC) from PgAdminIII and I took couples of seconds!!! Not minutes. So \nthe process don't jam but take time .. any Idea what postgresql is doing \nduring this time??\n\nIf you have any idea on what the problem could be... I really appreciate \nit. \n\nThanks for any help!\n/David\n \n \n\n\n", "msg_date": "Tue, 15 Mar 2005 16:24:17 -0500", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem on delete from for 10k rows. May takes 20 minutes\n\tthrough JDBC interface" }, { "msg_contents": "On Tue, Mar 15, 2005 at 04:24:17PM -0500, David Gagnon wrote:\n\n> Il get this strange problem when deleting rows from a Java program. \n> Sometime (For what I noticed it's not all the time) the server take \n> almost forever to delete rows from table.\n\nDo other tables have foreign key references to the table you're\ndeleting from? If so, are there indexes on the foreign key columns?\n\nDo you have triggers or rules on the table?\n\nHave you queried pg_locks during the long-lasting deletes to see\nif the deleting transaction is waiting for a lock on something?\n\n> I rememeber having tried to delete the content of my table (IC) from\n> PgAdminIII and I took couples of seconds!!! Not minutes.\n\nHow many records did you delete in this case? If there are foreign\nkey references, how many records were in the referencing tables?\nHow repeatable is the disparity in delete time? A single test case\nmight have been done under different conditions, so it might not\nmean much. No offense intended, but \"I remember\" doesn't carry as\nmuch weight as a documented example.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Tue, 15 Mar 2005 16:38:34 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May takes 20\n\tminutes through JDBC interface" }, { "msg_contents": "> I get this problem on my dev (Windows/7.4/Cygwin) environment. But now \n> I see that it's also have this problem on my production env. Yes I \n> tought I was maybe just a cygwin/Windows problem .. 
apparently not :-((((\n\nCare to try again with logging enabled on the PostgreSQL side within the\ndevelopment environment?\n\nlog_statement = true\nlog_duration = true\nlog_connections = on\n\nThen run it via Java and from pgAdminIII and send us the two log\nsnippets as attachments?\n\nThanks.\n-- \n\n", "msg_date": "Tue, 15 Mar 2005 18:50:27 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. " }, { "msg_contents": "Hi All,\n\nI rerun the example with the debug info turned on in postgresl. As you \ncan see all dependent tables (that as foreign key on table IC) are \nemptied before the DELETE FROM IC statement is issued. For what I \nunderstand the performance problem seem to came from those selects that \npoint back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x \nWHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know \nwhere they are comming from. But if I want to delete the content of the \ntable (~10k) it may be long to those 6 selects for each deleted rows. \nWhy are those selects are there ? Are those select really run on each \nrow deleted?\n\nI'm running version 7.4.5 on cygwin. I ran the same delete from \npgAdminIII and I got 945562ms for all the deletes within the same \ntransaction .. (so I was wrong saying it took less time in \nPgAdminIII... sorry about this).\n\nDo you have any idea why those 6 selects are there?\n\nMaybe I can drop indexes before deleting the content of the table. I \ndidn't planned to because tables are quite small and it's more \ncomplicated in my environment. And tell me if I'm wrong but if I drop \nindexed do I have to reload all my stored procedure (to reset the \nplanner related info)??? Remember having read that somewhere.. 
(was it \nin the Postgresql General Bit newletter ...anyway)\n\nThanks for your help I really appr�ciate it :-)\n\n/David\n\nLOG: duration: 144.000 ms\nLOG: statement: DELETE FROM YN\nLOG: duration: 30.000 ms\nLOG: statement: DELETE FROM YO\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = $1 \nAND \"yonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yn\" x WHERE \"ynyotype\" = \n$1 AND \"ynyonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = $1 \nAND \"yonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yr\" x WHERE \"yryotype\" = \n$1 AND \"yryonum\" = $2 FOR UPDATE OF x\nLOG: duration: 83.000 ms\nLOG: connection received: host=127.0.0.1 port=2196\nLOG: connection authorized: user=admin database=webCatalog\nLOG: statement: set datestyle to 'ISO'; select version(), case when \npg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else \ngetdatabaseencoding() end;\nLOG: duration: 2.000 ms\nLOG: statement: set client_encoding = 'UNICODE'\nLOG: duration: 0.000 ms\nLOG: statement: DELETE FROM IY\nLOG: duration: 71.000 ms\nLOG: statement: DELETE FROM IA\nLOG: duration: 17.000 ms\nLOG: statement: DELETE FROM IQ\nLOG: duration: 384.000 ms\nLOG: statement: DELETE FROM IC\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iq\" x WHERE \"iqicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ia\" x WHERE \"iaicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumo\" = \n$1 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumr\" = \n$1 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"il\" x WHERE \"ilicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"bd\" x WHERE \"bdicnum\" = $1 \nFOR UPDATE OF x\nLOG: duration: 656807.000 msMichael Fuhr wrote:\n\n\n\n\n\n-----------------------\nDELETE FROM BM;\nDELETE FROM BD;\nDELETE FROM BO;\nDELETE FROM IL;\nDELETE FROM YR;\nDELETE FROM YN;\nDELETE FROM YO;\nDELETE FROM IY;\nDELETE FROM IA;\nDELETE FROM IQ;\nDELETE FROM IC;\n\nMichael Fuhr wrote:\n\n>On Tue, Mar 15, 2005 at 04:24:17PM -0500, David Gagnon wrote:\n>\n> \n>\n>> Il get this strange problem when deleting rows from a Java program. \n>>Sometime (For what I noticed it's not all the time) the server take \n>>almost forever to delete rows from table.\n>> \n>>\n>\n>Do other tables have foreign key references to the table you're\n>deleting from? If so, are there indexes on the foreign key columns?\n>\n>Do you have triggers or rules on the table?\n>\n>Have you queried pg_locks during the long-lasting deletes to see\n>if the deleting transaction is waiting for a lock on something?\n>\n> \n>\n>>I rememeber having tried to delete the content of my table (IC) from\n>>PgAdminIII and I took couples of seconds!!! 
Not minutes.\n>> \n>>\n>\n>How many records did you delete in this case? If there are foreign\n>key references, how many records were in the referencing tables?\n>How repeatable is the disparity in delete time? A single test case\n>might have been done under different conditions, so it might not\n>mean much. No offense intended, but \"I remember\" doesn't carry as\n>much weight as a documented example.\n>\n> \n>\n\n", "msg_date": "Wed, 16 Mar 2005 08:18:39 -0500", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "\n\nDavid Gagnon wrote:\n\n> Hi All,\n>\n> I rerun the example with the debug info turned on in postgresl. As you \n> can see all dependent tables (that as foreign key on table IC) are \n> emptied before the DELETE FROM IC statement is issued. For what I \n> understand the performance problem seem to came from those selects \n> that point back to IC ( LOG: statement: SELECT 1 FROM ONLY \n> \"public\".\"ic\" x WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of \n> them. I don't know where they are comming from. But if I want to \n> delete the content of the table (~10k) it may be long to those 6 \n> selects for each deleted rows. Why are those selects are there ? Are \n> those select really run on each row deleted?\n\nYou are using hibernate. Hibernate is generating them to lock the tables.\n\n>\n>\n> I'm running version 7.4.5 on cygwin. I ran the same delete from \n> pgAdminIII and I got 945562ms for all the deletes within the same \n> transaction .. (so I was wrong saying it took less time in \n> PgAdminIII... sorry about this).\n>\n> Do you have any idea why those 6 selects are there?\n\nHibernate\n\n>\n> Maybe I can drop indexes before deleting the content of the table. I \n> didn't planned to because tables are quite small and it's more \n> complicated in my environment. And tell me if I'm wrong but if I drop \n> indexed do I have to reload all my stored procedure (to reset the \n> planner related info)??? Remember having read that somewhere.. 
(was it \n> in the Postgresql General Bit newletter ...anyway)\n>\n> Thanks for your help I really appr�ciate it :-)\n>\n> /David\n>\n> LOG: duration: 144.000 ms\n> LOG: statement: DELETE FROM YN\n> LOG: duration: 30.000 ms\n> LOG: statement: DELETE FROM YO\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = \n> $1 AND \"yonum\" = $2 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yn\" x WHERE \"ynyotype\" = \n> $1 AND \"ynyonum\" = $2 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = \n> $1 AND \"yonum\" = $2 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yr\" x WHERE \"yryotype\" = \n> $1 AND \"yryonum\" = $2 FOR UPDATE OF x\n> LOG: duration: 83.000 ms\n> LOG: connection received: host=127.0.0.1 port=2196\n> LOG: connection authorized: user=admin database=webCatalog\n> LOG: statement: set datestyle to 'ISO'; select version(), case when \n> pg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else \n> getdatabaseencoding() end;\n> LOG: duration: 2.000 ms\n> LOG: statement: set client_encoding = 'UNICODE'\n> LOG: duration: 0.000 ms\n> LOG: statement: DELETE FROM IY\n> LOG: duration: 71.000 ms\n> LOG: statement: DELETE FROM IA\n> LOG: duration: 17.000 ms\n> LOG: statement: DELETE FROM IQ\n> LOG: duration: 384.000 ms\n> LOG: statement: DELETE FROM IC\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iq\" x WHERE \"iqicnum\" = \n> $1 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ia\" x WHERE \"iaicnum\" = \n> $1 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumo\" = \n> $1 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumr\" = \n> $1 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"il\" x WHERE \"ilicnum\" = \n> $1 FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \n> FOR UPDATE OF x\n> LOG: statement: SELECT 1 FROM ONLY \"public\".\"bd\" x WHERE \"bdicnum\" = \n> $1 FOR UPDATE OF x\n> LOG: duration: 656807.000 msMichael Fuhr wrote:\n>\n>\n>\n>\n>\n> -----------------------\n> DELETE FROM BM;\n> DELETE FROM BD;\n> DELETE FROM BO;\n> DELETE FROM IL;\n> DELETE FROM YR;\n> DELETE FROM YN;\n> DELETE FROM YO;\n> DELETE FROM IY;\n> DELETE FROM IA;\n> DELETE FROM IQ;\n> DELETE FROM IC;\n>\n> Michael Fuhr wrote:\n>\n>> On Tue, Mar 15, 2005 at 04:24:17PM -0500, David Gagnon wrote:\n>>\n>> \n>>\n>>> Il get this strange problem when deleting rows from a Java program. \n>>> Sometime (For what I noticed it's not all the time) the server take \n>>> almost forever to delete rows from table.\n>>> \n>>\n>>\n>> Do other tables have foreign key references to the table you're\n>> deleting from? 
If so, are there indexes on the foreign key columns?\n>>\n>> Do you have triggers or rules on the table?\n>>\n>> Have you queried pg_locks during the long-lasting deletes to see\n>> if the deleting transaction is waiting for a lock on something?\n>>\n>> \n>>\n>>> I rememeber having tried to delete the content of my table (IC) from\n>>> PgAdminIII and I took couples of seconds!!! Not minutes.\n>>> \n>>\n>>\n>> How many records did you delete in this case? If there are foreign\n>> key references, how many records were in the referencing tables?\n>> How repeatable is the disparity in delete time? A single test case\n>> might have been done under different conditions, so it might not\n>> mean much. No offense intended, but \"I remember\" doesn't carry as\n>> much weight as a documented example.\n>>\n>> \n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Wed, 16 Mar 2005 09:06:24 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "Hi,\n\nI'm using ibatis. But in this particular case the sql statement come \nfrom a plain ascii file and it's run by the Ibatis ScriptRunner class. \nBeside the fact this class come from ibatis framework it's just plain \nsql connection (I'm I wrong???). Just to be sure, here is the code from \nthe class. I must say that i run script that contains create table, \nalter table, insert statements with the same runner. \n\nIf I wrong please tell me .. I like to be wrong when the result is \neliminating a misunderstanding from my part :-)\n\nThanks for your help!\n\n/David\n\n\n\n public void runScript(Connection conn, Reader reader)\n throws IOException, SQLException {\n StringBuffer command = null;\n try {\n LineNumberReader lineReader = new LineNumberReader(reader);\n String line = null;\n while ((line = lineReader.readLine()) != null) {\n if (command == null) {\n command = new StringBuffer();\n }\n String trimmedLine = line.trim();\n if (trimmedLine.startsWith(\"--\")) {\n println(trimmedLine);\n if (log.isDebugEnabled()) {\n log.debug(trimmedLine);\n }\n } else if (trimmedLine.length() < 1 || \ntrimmedLine.startsWith(\"//\")) {\n //Do nothing\n } else if (trimmedLine.endsWith(\";\")) {\n command.append(line.substring(0, \nline.lastIndexOf(\";\")));\n command.append(\" \");\n Statement statement = conn.createStatement();\n\n println(command);\n if (log.isDebugEnabled()) {\n log.debug(command);\n }\n\n boolean hasResults = false;\n if (stopOnError) {\n hasResults = statement.execute(command.toString());\n } else {\n try {\n statement.execute(command.toString());\n } catch (SQLException e) {\n e.fillInStackTrace();\n printlnError(\"Error executing: \" + command);\n printlnError(e);\n }\n }\n\n if (autoCommit && !conn.getAutoCommit()) {\n conn.commit();\n }\n\n ResultSet rs = statement.getResultSet();\n if (hasResults && rs != null) {\n ResultSetMetaData md = rs.getMetaData();\n int cols = md.getColumnCount();\n for (int i = 0; i < cols; i++) {\n String name = md.getColumnName(i);\n print(name + \"\\t\");\n }\n println(\"\");\n while (rs.next()) {\n for (int i = 0; i < cols; i++) {\n String value = rs.getString(i);\n print(value + \"\\t\");\n }\n println(\"\");\n }\n }\n\n command = null;\n try {\n statement.close();\n } catch (Exception e) {\n // Ignore to workaround a bug in 
Jakarta DBCP\n// e.printStackTrace();\n }\n Thread.yield();\n } else {\n command.append(line);\n command.append(\" \");\n }\n }\n if (!autoCommit) {\n conn.commit();\n }\n } catch (SQLException e) {\n e.fillInStackTrace();\n printlnError(\"Error executing: \" + command);\n printlnError(e);\n log.error(\"Error executing: \" + command, e);\n throw e;\n } catch (IOException e) {\n e.fillInStackTrace();\n printlnError(\"Error executing: \" + command);\n printlnError(e);\n log.error(\"Error executing: \" + command, e);\n throw e;\n } finally {\n conn.rollback();\n flush();\n }\n }\n\n\nDave Cramer wrote:\n\n>\n>\n> David Gagnon wrote:\n>\n>> Hi All,\n>>\n>> I rerun the example with the debug info turned on in postgresl. As \n>> you can see all dependent tables (that as foreign key on table IC) \n>> are emptied before the DELETE FROM IC statement is issued. For what \n>> I understand the performance problem seem to came from those selects \n>> that point back to IC ( LOG: statement: SELECT 1 FROM ONLY \n>> \"public\".\"ic\" x WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of \n>> them. I don't know where they are comming from. But if I want to \n>> delete the content of the table (~10k) it may be long to those 6 \n>> selects for each deleted rows. Why are those selects are there ? \n>> Are those select really run on each row deleted?\n>\n>\n> You are using hibernate. Hibernate is generating them to lock the tables.\n>\n>>\n>>\n>> I'm running version 7.4.5 on cygwin. I ran the same delete from \n>> pgAdminIII and I got 945562ms for all the deletes within the same \n>> transaction .. (so I was wrong saying it took less time in \n>> PgAdminIII... sorry about this).\n>>\n>> Do you have any idea why those 6 selects are there?\n>\n>\n> Hibernate\n>\n>>\n>> Maybe I can drop indexes before deleting the content of the table. I \n>> didn't planned to because tables are quite small and it's more \n>> complicated in my environment. And tell me if I'm wrong but if I \n>> drop indexed do I have to reload all my stored procedure (to reset \n>> the planner related info)??? Remember having read that somewhere.. 
\n>> (was it in the Postgresql General Bit newletter ...anyway)\n>>\n>> Thanks for your help I really appr�ciate it :-)\n>>\n>> /David\n>>\n>> LOG: duration: 144.000 ms\n>> LOG: statement: DELETE FROM YN\n>> LOG: duration: 30.000 ms\n>> LOG: statement: DELETE FROM YO\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = \n>> $1 AND \"yonum\" = $2 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yn\" x WHERE \"ynyotype\" \n>> = $1 AND \"ynyonum\" = $2 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = \n>> $1 AND \"yonum\" = $2 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"yr\" x WHERE \"yryotype\" \n>> = $1 AND \"yryonum\" = $2 FOR UPDATE OF x\n>> LOG: duration: 83.000 ms\n>> LOG: connection received: host=127.0.0.1 port=2196\n>> LOG: connection authorized: user=admin database=webCatalog\n>> LOG: statement: set datestyle to 'ISO'; select version(), case when \n>> pg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else \n>> getdatabaseencoding() end;\n>> LOG: duration: 2.000 ms\n>> LOG: statement: set client_encoding = 'UNICODE'\n>> LOG: duration: 0.000 ms\n>> LOG: statement: DELETE FROM IY\n>> LOG: duration: 71.000 ms\n>> LOG: statement: DELETE FROM IA\n>> LOG: duration: 17.000 ms\n>> LOG: statement: DELETE FROM IQ\n>> LOG: duration: 384.000 ms\n>> LOG: statement: DELETE FROM IC\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iq\" x WHERE \"iqicnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ia\" x WHERE \"iaicnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumo\" \n>> = $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumr\" \n>> = $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"il\" x WHERE \"ilicnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: statement: SELECT 1 FROM ONLY \"public\".\"bd\" x WHERE \"bdicnum\" = \n>> $1 FOR UPDATE OF x\n>> LOG: duration: 656807.000 msMichael Fuhr wrote:\n>>\n>>\n>>\n>>\n>>\n>> -----------------------\n>> DELETE FROM BM;\n>> DELETE FROM BD;\n>> DELETE FROM BO;\n>> DELETE FROM IL;\n>> DELETE FROM YR;\n>> DELETE FROM YN;\n>> DELETE FROM YO;\n>> DELETE FROM IY;\n>> DELETE FROM IA;\n>> DELETE FROM IQ;\n>> DELETE FROM IC;\n>>\n>> Michael Fuhr wrote:\n>>\n>>> On Tue, Mar 15, 2005 at 04:24:17PM -0500, David Gagnon wrote:\n>>>\n>>> \n>>>\n>>>> Il get this strange problem when deleting rows from a Java \n>>>> program. Sometime (For what I noticed it's not all the time) the \n>>>> server take almost forever to delete rows from table.\n>>>> \n>>>\n>>>\n>>>\n>>> Do other tables have foreign key references to the table you're\n>>> deleting from? 
If so, are there indexes on the foreign key columns?\n>>>\n>>> Do you have triggers or rules on the table?\n>>>\n>>> Have you queried pg_locks during the long-lasting deletes to see\n>>> if the deleting transaction is waiting for a lock on something?\n>>>\n>>> \n>>>\n>>>> I rememeber having tried to delete the content of my table (IC) from\n>>>> PgAdminIII and I took couples of seconds!!! Not minutes.\n>>>> \n>>>\n>>>\n>>>\n>>> How many records did you delete in this case? If there are foreign\n>>> key references, how many records were in the referencing tables?\n>>> How repeatable is the disparity in delete time? A single test case\n>>> might have been done under different conditions, so it might not\n>>> mean much. No offense intended, but \"I remember\" doesn't carry as\n>>> much weight as a documented example.\n>>>\n>>> \n>>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 8: explain analyze is your friend\n>>\n>>\n>\n\n", "msg_date": "Wed, 16 Mar 2005 09:26:26 -0500", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "On Wed, Mar 16, 2005 at 08:18:39AM -0500, David Gagnon wrote:\n\nDavid,\n\n> I rerun the example with the debug info turned on in postgresl. As you \n> can see all dependent tables (that as foreign key on table IC) are \n> emptied before the DELETE FROM IC statement is issued. For what I \n> understand the performance problem seem to came from those selects that \n> point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x \n> WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know \n> where they are comming from.\n\nI think they come from the FK checking code. Try to run a VACUUM on the\nIC table just before you delete from the other tables; that should make\nthe checking almost instantaneous (assuming the vacuuming actually\nempties the table, which would depend on other transactions).\n\nIt would be better to be able to use TRUNCATE to do this, but in 8.0 you\ncan't if the tables have FKs. 8.1 is better on that regard ...\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n", "msg_date": "Wed, 16 Mar 2005 10:35:11 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "Really? Postgres is generating these queries ???\n\nDave\n\nAlvaro Herrera wrote:\n\n>On Wed, Mar 16, 2005 at 08:18:39AM -0500, David Gagnon wrote:\n>\n>David,\n>\n> \n>\n>>I rerun the example with the debug info turned on in postgresl. As you \n>>can see all dependent tables (that as foreign key on table IC) are \n>>emptied before the DELETE FROM IC statement is issued. For what I \n>>understand the performance problem seem to came from those selects that \n>>point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x \n>>WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know \n>>where they are comming from.\n>> \n>>\n>\n>I think they come from the FK checking code. 
Try to run a VACUUM on the\n>IC table just before you delete from the other tables; that should make\n>the checking almost instantaneous (assuming the vacuuming actually\n>empties the table, which would depend on other transactions).\n>\n>It would be better to be able to use TRUNCATE to do this, but in 8.0 you\n>can't if the tables have FKs. 8.1 is better on that regard ...\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Wed, 16 Mar 2005 09:56:43 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "Hi\n\n>>I rerun the example with the debug info turned on in postgresl. As you \n>>can see all dependent tables (that as foreign key on table IC) are \n>>emptied before the DELETE FROM IC statement is issued. For what I \n>>understand the performance problem seem to came from those selects that \n>>point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x \n>>WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know \n>>where they are comming from.\n>> \n>>\n>\n>I think they come from the FK checking code. Try to run a VACUUM on the\n>IC table just before you delete from the other tables; that should make\n>the checking almost instantaneous (assuming the vacuuming actually\n>empties the table, which would depend on other transactions).\n> \n>\nI'll try to vaccum first before I start the delete to see if it change \nsomething.\n\nThere is probably a good reason why but I don't understant why in a \nforeign key check it need to check the date it points to.\n\nYou delete a row from table IC and do a check for integrity on tables \nthat have foreign keys on IC (make sense). But why checking back IC? \nI'm pretty sure there is a good reason but it seems to have a big \nperformance impact... In this case. It means it's not really feasable \nto empty the content of a schema. The table has only 10k .. with a huge \ntable it's not feasible just because the checks on itselft!\n\nIs someone can explain why there is this extra check? 
Is that can be \nfixed or improved?\n\nThanks for your help\n\n/David\n\n\n\n\n\nLOG: duration: 144.000 ms\nLOG: statement: DELETE FROM YN\nLOG: duration: 30.000 ms\nLOG: statement: DELETE FROM YO\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = $1 \nAND \"yonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yn\" x WHERE \"ynyotype\" = \n$1 AND \"ynyonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yo\" x WHERE \"yotype\" = $1 \nAND \"yonum\" = $2 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"yr\" x WHERE \"yryotype\" = \n$1 AND \"yryonum\" = $2 FOR UPDATE OF x\nLOG: duration: 83.000 ms\nLOG: connection received: host=127.0.0.1 port=2196\nLOG: connection authorized: user=admin database=webCatalog\nLOG: statement: set datestyle to 'ISO'; select version(), case when \npg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else \ngetdatabaseencoding() end;\nLOG: duration: 2.000 ms\nLOG: statement: set client_encoding = 'UNICODE'\nLOG: duration: 0.000 ms\nLOG: statement: DELETE FROM IY\nLOG: duration: 71.000 ms\nLOG: statement: DELETE FROM IA\nLOG: duration: 17.000 ms\nLOG: statement: DELETE FROM IQ\nLOG: duration: 384.000 ms\nLOG: statement: DELETE FROM IC\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iq\" x WHERE \"iqicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ia\" x WHERE \"iaicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumo\" = \n$1 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"iy\" x WHERE \"iyicnumr\" = \n$1 FOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"il\" x WHERE \"ilicnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x WHERE \"icnum\" = $1 \nFOR UPDATE OF x\nLOG: statement: SELECT 1 FROM ONLY \"public\".\"bd\" x WHERE \"bdicnum\" = $1 \nFOR UPDATE OF x\nLOG: duration: 656807.000 msMichael Fuhr wrote:\n\n\n\n\n>It would be better to be able to use TRUNCATE to do this, but in 8.0 you\n>can't if the tables have FKs. 8.1 is better on that regard ...\n>\n> \n>\n\n", "msg_date": "Wed, 16 Mar 2005 10:13:42 -0500", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "On Wed, 16 Mar 2005, David Gagnon wrote:\n\n> Hi\n>\n> >>I rerun the example with the debug info turned on in postgresl. As you\n> >>can see all dependent tables (that as foreign key on table IC) are\n> >>emptied before the DELETE FROM IC statement is issued. For what I\n> >>understand the performance problem seem to came from those selects that\n> >>point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x\n> >>WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know\n> >>where they are comming from.\n> >>\n> >>\n> >\n> >I think they come from the FK checking code. 
Try to run a VACUUM on the\n> >IC table just before you delete from the other tables; that should make\n> >the checking almost instantaneous (assuming the vacuuming actually\n> >empties the table, which would depend on other transactions).\n> >\n> >\n> I'll try to vaccum first before I start the delete to see if it change\n> something.\n>\n> There is probably a good reason why but I don't understant why in a\n> foreign key check it need to check the date it points to.\n>\n> You delete a row from table IC and do a check for integrity on tables\n> that have foreign keys on IC (make sense). But why checking back IC?\n\nBecause in the general case there might be another row which satisfies the\nconstraint added between the delete and the check.\n\n", "msg_date": "Wed, 16 Mar 2005 08:28:07 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "\n\nStephan Szabo wrote:\n\n>On Wed, 16 Mar 2005, David Gagnon wrote:\n>\n> \n>\n>>Hi\n>>\n>> \n>>\n>>>>I rerun the example with the debug info turned on in postgresl. As you\n>>>>can see all dependent tables (that as foreign key on table IC) are\n>>>>emptied before the DELETE FROM IC statement is issued. For what I\n>>>>understand the performance problem seem to came from those selects that\n>>>>point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x\n>>>>WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know\n>>>>where they are comming from.\n>>>>\n>>>>\n>>>> \n>>>>\n>>>I think they come from the FK checking code. Try to run a VACUUM on the\n>>>IC table just before you delete from the other tables; that should make\n>>>the checking almost instantaneous (assuming the vacuuming actually\n>>>empties the table, which would depend on other transactions).\n>>>\n>>>\n>>> \n>>>\n>>I'll try to vaccum first before I start the delete to see if it change\n>>something.\n>>\n>>There is probably a good reason why but I don't understant why in a\n>>foreign key check it need to check the date it points to.\n>>\n>>You delete a row from table IC and do a check for integrity on tables\n>>that have foreign keys on IC (make sense). But why checking back IC?\n>> \n>>\n>\n>Because in the general case there might be another row which satisfies the\n>constraint added between the delete and the check.\n>\n> \n>\nSo it's means if I want to reset the shema with DELETE FROM Table \nstatemnets I must first drop indexes, delete the data and then recreate \nindexes and reload stored procedure.\n\nOr I can suspend the foreign key check in the db right. I saw something \non this. Is that possible to do this from the JDBC interface?\n\nIs there any other options I can consider ?\n\nThanks for your help!\n/David\n", "msg_date": "Wed, 16 Mar 2005 12:02:04 -0500", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" }, { "msg_contents": "On Wed, 16 Mar 2005, David Gagnon wrote:\n\n>\n>\n> Stephan Szabo wrote:\n>\n> >On Wed, 16 Mar 2005, David Gagnon wrote:\n> >\n> >\n> >\n> >>Hi\n> >>\n> >>\n> >>\n> >>>>I rerun the example with the debug info turned on in postgresl. As you\n> >>>>can see all dependent tables (that as foreign key on table IC) are\n> >>>>emptied before the DELETE FROM IC statement is issued. 
For what I\n> >>>>understand the performance problem seem to came from those selects that\n> >>>>point back to IC ( LOG: statement: SELECT 1 FROM ONLY \"public\".\"ic\" x\n> >>>>WHERE \"icnum\" = $1 FOR UPDATE OF x). There are 6 of them. I don't know\n> >>>>where they are comming from.\n> >>>>\n> >>>>\n> >>>>\n> >>>>\n> >>>I think they come from the FK checking code. Try to run a VACUUM on the\n> >>>IC table just before you delete from the other tables; that should make\n> >>>the checking almost instantaneous (assuming the vacuuming actually\n> >>>empties the table, which would depend on other transactions).\n> >>>\n> >>>\n> >>>\n> >>>\n> >>I'll try to vaccum first before I start the delete to see if it change\n> >>something.\n> >>\n> >>There is probably a good reason why but I don't understant why in a\n> >>foreign key check it need to check the date it points to.\n> >>\n> >>You delete a row from table IC and do a check for integrity on tables\n> >>that have foreign keys on IC (make sense). But why checking back IC?\n> >>\n> >>\n> >\n> >Because in the general case there might be another row which satisfies the\n> >constraint added between the delete and the check.\n> >\n> >\n> >\n> So it's means if I want to reset the shema with DELETE FROM Table\n> statemnets I must first drop indexes, delete the data and then recreate\n> indexes and reload stored procedure.\n>\n> Or I can suspend the foreign key check in the db right. I saw something\n> on this. Is that possible to do this from the JDBC interface?\n\nI think you can remove the constraints and re-add them after which should\nhopefully be fast (a vacuum on the tables after the delete and before the\nadd might help, but I'm not sure). You could potentially defer the\nconstraint if it were deferrable, but I don't think that would help any.\n", "msg_date": "Wed, 16 Mar 2005 12:35:45 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on delete from for 10k rows. May" } ]
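To summarize the thread in practical terms: every row deleted from IC fires one check per
referencing table, and those checks (the per-row SELECT 1 ... FOR UPDATE statements in the
log) are only cheap if the referencing columns are indexed and the referencing tables are
not full of dead tuples. If such indexes do not already exist, something along these lines
should help; the index names are invented, the table and column names are taken from the
log above:

CREATE INDEX iq_iqicnum  ON iq (iqicnum);
CREATE INDEX ia_iaicnum  ON ia (iaicnum);
CREATE INDEX iy_iyicnumo ON iy (iyicnumo);
CREATE INDEX iy_iyicnumr ON iy (iyicnumr);
CREATE INDEX il_ilicnum  ON il (ilicnum);
CREATE INDEX bd_bdicnum  ON bd (bdicnum);

-- Vacuuming the just-emptied referencing tables before DELETE FROM IC
-- should also help, so the checks do not have to wade through dead rows.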
[ { "msg_contents": "[email protected] mentioned :\n=> Try ANALYZE after loading the referenced tables, but before loading the main table\n\nI attached a new script for creating the load file...\n \nAnalyze didn't help, it actually took longer to load.\nI set autocommit to off, and put a commit after every 100\ninserts, chattr'd noatime atrribute off recursively on PGDATA, and\nset fsync to off, this improved the time from 3min 51sec to 2min 37 sec\nfor the slow scenario.\n\nBut I was already doing all these things in the app that \nused to take 40 minutes, but now takes four hours to load.\n\nAny other suggestions?\n\nKind Regards\nStefan\n", "msg_date": "Wed, 16 Mar 2005 09:59:39 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow loads when indexes added." } ]
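Spelled out, the combination being tried above is: load the referenced table, ANALYZE it
so the foreign-key checks get sane plans, then load the referencing rows in large
transactions rather than one commit per row. Evidently it is not a guaranteed win (here
ANALYZE made things slower), but for reference, with invented names:

ANALYZE main;                       -- after the referenced table is loaded

BEGIN;
INSERT INTO child (main_id, val) VALUES (1, 'a');
INSERT INTO child (main_id, val) VALUES (1, 'b');
-- ... a few hundred rows per COMMIT ...
COMMIT;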
[ { "msg_contents": "> > >The \"this day and age\" argument isn't very convincing. Hard drive \n> > >capacity growth has far outstripped hard drive seek time \n> and bandwidth improvements.\n> > >Random access has more penalty than ever.\n> >\n> > In point of fact, there haven't been noticeable seek time \n> improvements \n> > for years. Transfer rates, on the other hand, have gone \n> through the roof.\n> \n> Er, yeah. I stated it wrong. The real ratio here is between \n> seek time and throughput.\n> \n> Typical 7200RPM drives have average seek times are in the \n> area of 10ms.\n> Typical sustained transfer rates are in the range of 40Mb/s. \n> Postgres reads 8kB blocks at a time.\n> \n> So 800kB/s for random access reads. And 40Mb/s for sequential \n> reads. That's a factor of 49. I don't think anyone wants \n> random_page_cost to be set to 50 though.\n> \n> For a high end 15k drive I see average seek times get as low \n> as 3ms. And sustained transfer rates get as high as 100Mb/s. \n> So about 2.7Mb/s for random access reads or about a \n> random_page_cost of 37. Still pretty extreme.\n> \n> So what's going on with the empirically derived value of 4? \n> Perhaps this is because even though Postgres is reading an \n> entire table sequentially it's unlikely to be the only I/O \n> consumer? The sequential reads would be interleaved \n> occasionally by some other I/O forcing a seek to continue.\n\nWhat about the cache memory on the disk? Even IDE disks have some 8Mb\ncache today, which makes a lot of difference for fairly short scans.\nEven if it's just read cache. That'll bring the speed of random access\ndown to a 1=1 relationship with sequential access, assuming all fits in\nthe cache.\n\n\n//Magnus\n", "msg_date": "Wed, 16 Mar 2005 10:42:04 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu_tuple_cost" }, { "msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n\n> What about the cache memory on the disk? Even IDE disks have some 8Mb\n> cache today, which makes a lot of difference for fairly short scans.\n> Even if it's just read cache. That'll bring the speed of random access\n> down to a 1=1 relationship with sequential access, assuming all fits in\n> the cache.\n\n8MB cache is really insignificant compared to the hundreds or thousands of\nmegabytes the OS would be using to cache. You could just add the 8MB to your\neffective_cache_size (except it's not really 100% effective since it would\ncontain some of the same blocks as the OS cache).\n\n-- \ngreg\n\n", "msg_date": "16 Mar 2005 10:42:52 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu_tuple_cost" } ]
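Putting rough numbers on the discussion: at about 10 ms per seek, an 8 kB page works out
to roughly 800 kB/s for random reads, against about 40 MB/s sequential, a ratio near 50;
the default random_page_cost of 4 is so much lower largely because OS, controller and
drive caches absorb much of that penalty. The planner settings themselves can be
inspected and experimented with per session (the values below are illustrative, not
recommendations):

SHOW random_page_cost;              -- defaults to 4
SHOW effective_cache_size;          -- expressed in 8 kB pages
SET random_page_cost = 2;           -- per-session experiment
SET effective_cache_size = 65536;   -- assume roughly 512 MB of cache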
[ { "msg_contents": "Hi,\n\nWhich one is faster: one way reading =\"single pass reading\"\nAssumption :\na. Need to have 3 millions records\nb. Need to have call 10 or 20 records repeatly\n (so for database it will be 10 times connection, each connection with one\nrecord.\n or can be fancy 1 connection call return 10 sets of records)\n\n\n1. Reading from Flat file\n Assume already give file name and just need to read the file\n (since it is flat file, each record represent a filename, with multiple\ndirectory category)\n\n2. Reading from XML file\n Assume schema already given just need to read the file\n (since it is xml file, each record represent an xml filename, with\nmultiple directory category)\n\n3. Reading from Postgresql\n Assume primary key has been done with indexing\n just need to search the number and grap the text content\n (assume 3 millions of records, search the number, read the content file)\n\ntrying to recreate WebDBReader (from nutch) using C#\nhttp://nutch.sourceforge.net/docs/api/net/nutch/db/WebDBReader.html\n\nThank you in advances,\nRosny\n\n\n\n\n", "msg_date": "Wed, 16 Mar 2005 03:15:48 -0800", "msg_from": "\"Rosny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Which one is faster: one way reading =\"single pass reading\"" } ]
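The question above went unanswered in the archive, but as far as the database option
goes, the "fancy 1 connection call return 10 sets of records" variant is simply one
indexed query instead of ten round trips. Table and column names below are invented:

SELECT id, content
FROM   documents
WHERE  id IN (101, 102, 103, 104, 105, 106, 107, 108, 109, 110);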
[ { "msg_contents": "Hello.\n\nI have a problem concerning multi-column indexes.\n\nI have a table containing some 250k lines.\n\nTable \"public.descriptionprodftdiclnk\"\n Column | Type | Modifiers\n-------------+---------+-----------\n idword | integer | not null\n idqualifier | integer | not null\nIndexes:\n \"descriptionprodftdiclnk_pkey\" primary key, btree (idword, idqualifier)\n \"ix_descriptionprodftdiclnk_idqualif\" btree (idqualifier)\n \nWhen analyzing a simple query on the idword column the query planner \ndisplays:\n\nexplain analyze select * from descriptionprodftdiclnk where idword=44;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on descriptionprodftdiclnk (cost=0.00..4788.14 rows=44388 \nwidth=8) (actual time=87.582..168.041 rows=43792 loops=1)\n Filter: (idword = 44)\n Total runtime: 195.339 ms\n(3 rows)\n\nI don't understand why the query planner would not use the default \ncreated multi-column index\non the primary key. According to the Postgres online documentation it \nshould. By setting the\n\"enable_seqscan\" parameter to off, i can force the planner to use the index:\n\nexplain analyze select * from descriptionprodftdiclnk where idword=44;\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using descriptionprodftdiclnk_pkey on \ndescriptionprodftdiclnk (cost=0.00..36720.39 rows=44388 width=8) \n(actual time=0.205..73.489 rows=43792 loops=1)\n Index Cond: (idword = 44)\n Total runtime: 100.564 ms\n(3 rows)\n\n\n\nOn the other hand, by defining a new index on the idword column (and \n\"enable_seqscan\" set to on),\nthe query uses the index:\n\ncreate index ix_tempIndex on descriptionprodftdiclnk(idword);\nCREATE INDEX\nexplain analyze select * from descriptionprodftdiclnk where idword=44;\n QUERY \nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_tempindex on descriptionprodftdiclnk \n(cost=0.00..916.24 rows=44388 width=8) (actual time=0.021..79.879 \nrows=43792 loops=1)\n Index Cond: (idword = 44)\n Total runtime: 107.081 ms\n(3 rows)\n\nCould someone provide an explanation for the planner's behaviour?\n\nThanks for your help,\nDaniel\n\n", "msg_date": "Wed, 16 Mar 2005 17:08:59 +0100", "msg_from": "Daniel Crisan <[email protected]>", "msg_from_op": true, "msg_subject": "multi-column index" }, { "msg_contents": "Daniel,\n\n> Table \"public.descriptionprodftdiclnk\"\n\nWhat is this, German? 
;-)\n\n> explain analyze select * from descriptionprodftdiclnk where idword=44;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>---------------------------------------------------- Seq Scan on\n> descriptionprodftdiclnk (cost=0.00..4788.14 rows=44388 width=8) (actual\n> time=87.582..168.041 rows=43792 loops=1)\n> Filter: (idword = 44)\n> Total runtime: 195.339 ms\n> (3 rows)\n\n> explain analyze select * from descriptionprodftdiclnk where idword=44;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>------------ Index Scan using descriptionprodftdiclnk_pkey on\n> descriptionprodftdiclnk (cost=0.00..36720.39 rows=44388 width=8)\n> (actual time=0.205..73.489 rows=43792 loops=1)\n> Index Cond: (idword = 44)\n> Total runtime: 100.564 ms\n> (3 rows)\n\n> create index ix_tempIndex on descriptionprodftdiclnk(idword);\n> CREATE INDEX\n> explain analyze select * from descriptionprodftdiclnk where idword=44;\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------\n>---------------------------------------------------------------------- Index\n> Scan using ix_tempindex on descriptionprodftdiclnk\n> (cost=0.00..916.24 rows=44388 width=8) (actual time=0.021..79.879\n> rows=43792 loops=1)\n> Index Cond: (idword = 44)\n> Total runtime: 107.081 ms\n> (3 rows)\n>\n> Could someone provide an explanation for the planner's behaviour?\n\nPretty simple, really. Look at the cost calculations for the index scan for \nthe multi-column index. PostgreSQL believes that:\nThe cost of a seq scan is 4788.14\nThe cost of an 2-column index scan is 36720.39\nThe cost of a 1-column index scan is 916.24\n\nAssuming that you ran each of these queries multiple times to eliminate \ncaching as a factor, the issue is that the cost calculations are wrong. We \ngive you a number of GUC variables to change that:\neffective_cache_size\nrandom_page_cost\ncpu_tuple_cost\netc.\n\nSee the RUNTIME-CONFIGURATION docs for more details.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 16 Mar 2005 10:09:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index" }, { "msg_contents": "Whoa Josh! I don't believe you're going to reduce the cost by 10 times \nthrough a bit of tweaking - not without lowering the sequential scan \ncost as well.\n\nThe only thing I can think of is perhaps his primary index drastically \nneeds repacking. Otherwise, isn't there a real anomaly here? Halving the \nkey width might account for some of it, but it's still miles out of court.\n\nActually, I'm surprised the planner came up with such a low cost for the \nsingle column index, unless ... perhaps correlation statistics aren't \nused when determining costs for multi-column indexes?\n\nJosh Berkus wrote:\n\n>Pretty simple, really. Look at the cost calculations for the index scan for \n>the multi-column index. PostgreSQL believes that:\n>The cost of a seq scan is 4788.14\n>The cost of an 2-column index scan is 36720.39\n>The cost of a 1-column index scan is 916.24\n>\n>Assuming that you ran each of these queries multiple times to eliminate \n>caching as a factor, the issue is that the cost calculations are wrong. 
We \n>give you a number of GUC variables to change that:\n>effective_cache_size\n>random_page_cost\n>cpu_tuple_cost\n>etc.\n>\n>See the RUNTIME-CONFIGURATION docs for more details.\n>\n", "msg_date": "Thu, 17 Mar 2005 10:25:43 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index" }, { "msg_contents": "David Brown <[email protected]> writes:\n> Actually, I'm surprised the planner came up with such a low cost for the \n> single column index, unless ... perhaps correlation statistics aren't \n> used when determining costs for multi-column indexes?\n\nThe correlation calculation for multi-column indexes is pretty whacked\nout pre-8.0. I don't think it's that great in 8.0 either --- we really\nneed to make ANALYZE calculate the correlation explicitly for each\nindex, probably, rather than trying to use per-column correlations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Mar 2005 22:19:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index " }, { "msg_contents": "On Wed, 16 Mar 2005 22:19:13 -0500, Tom Lane <[email protected]> wrote:\n>calculate the correlation explicitly for each index\n\nMay be it's time to revisit an old proposal that has failed to catch\nanybody's attention during the 7.4 beta period:\nhttp://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php\n\nI'm not sure I'd store index correlation in a separate table today.\nYou've invented something better for functional index statistics, AFAIR.\n\nServus\n Manfred\n", "msg_date": "Thu, 17 Mar 2005 09:51:36 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index " }, { "msg_contents": "> May be it's time to revisit an old proposal that has failed to catch\n> anybody's attention during the 7.4 beta period:\n> http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php\n> \n> I'm not sure I'd store index correlation in a separate table today.\n> You've invented something better for functional index statistics, AFAIR.\n\nMake it deal with cross-table fk correlations as well :)\n\nChris\n", "msg_date": "Thu, 17 Mar 2005 16:55:15 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index" }, { "msg_contents": "On Thu, 17 Mar 2005 16:55:15 +0800, Christopher Kings-Lynne\n<[email protected]> wrote:\n>Make it deal with cross-table fk correlations as well :)\n\nThat's a different story. I guess it boils down to cross-column\nstatistics for a single table. 
Part of this is the correlation between\nvalues in two or more columns, which is not the same as the correlation\nbetween column (or index tuple) values and tuple positions.\n\nAnd yes, I did notice the smiley ;-)\n\nServus\n Manfred\n", "msg_date": "Thu, 17 Mar 2005 12:18:28 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index" }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> On Wed, 16 Mar 2005 22:19:13 -0500, Tom Lane <[email protected]> wrote:\n>> calculate the correlation explicitly for each index\n\n> May be it's time to revisit an old proposal that has failed to catch\n> anybody's attention during the 7.4 beta period:\n> http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php\n\n> I'm not sure I'd store index correlation in a separate table today.\n> You've invented something better for functional index statistics, AFAIR.\n\nWell, the original motivation for calculating correlations on columns\nwas that historically, you didn't need to re-ANALYZE after creating an\nindex: the stats on the base table were already in place. So the idea\nwas to have the correlations already available whether or not the index\nexisted. This works fine for plain indexes on single columns ;-). We\ndidn't realize (or at least I didn't) how poorly the per-column stats\napply to multi-column indexes.\n\nI am coming around to the view that we really do need to calculate\nindex-specific correlation numbers, and that probably does need a\nspecial table ... or maybe better, add a column to pg_index. The column\nin pg_statistic is useless and should be removed, because there isn't\nany need for per-column correlation.\n\nNow, as to the actual mechanics of getting the numbers: the above link\nseems to imply reading the whole index in index order. Which is a\nhugely expensive proposition for a big index, especially one that's\ngrown rather than been built recently --- the physical and logical\norderings of the index will be different. (Hm, maybe we need a stat\nabout the extent of disorder within the index itself?) We need a way\nto get the number from a small sample of pages.\n\nThe idea I was toying with was to recalculate the index keys for the\nsample rows that ANALYZE already acquires, and then compare/sort\nthose. This is moderately expensive CPU-wise though, and it's also not\nclear what \"compare/sort\" means for non-btree indexes.\n\nIf we could get a correlation estimate by probing only a small fraction\nof the index pages, that would work, but in a disordered index I'm not\nsure how you figure out what you're looking at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Mar 2005 13:15:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index " }, { "msg_contents": "On Thu, 17 Mar 2005 23:48:30 -0800, Ron Mayer\n<[email protected]> wrote:\n>Would this also help estimates in the case where values in a table\n>are tightly clustered, though not in strictly ascending or descending\n>order?\n\nNo, I was just expanding the existing notion of correlation from single\ncolumns to index tuples.\n\n>For example, address data has many fields that are related\n>to each other (postal codes, cities, states/provinces).\n\nThis looks like a case for cross-column statistics, though you might not\nhave meant it as such. I guess what you're talking about can also be\ndescribed with a single column. In a list like\n\n 3 3 ... 3 1 1 ... 1 7 7 ... 7 4 4 ... 
4 ...\n\nequal items are \"clustered\" together but the values are not \"correlated\"\nto their positions. This would require a whole new column\ncharacteristic, something like the probability that we find the same\nvalue in adjacent heap tuples, or the number of different values we can\nexpect on one heap page. The latter might even be easy to compute\nduring ANALYSE.\n\nServus\n Manfred\n", "msg_date": "Fri, 18 Mar 2005 11:34:03 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index" }, { "msg_contents": "On Thu, 17 Mar 2005 13:15:32 -0500, Tom Lane <[email protected]> wrote:\n>I am coming around to the view that we really do need to calculate\n>index-specific correlation numbers,\n\nCorrelation is a first step. We might also want distribution\ninformation like number of distinct index tuples and histograms.\n\n>Now, as to the actual mechanics of getting the numbers: the above link\n>seems to imply reading the whole index in index order.\n\nThat turned out to be surprisingly easy (no need to look at data values,\nno operator lookup, etc.) to implement as a proof of concept. As it's\ngood enough for my use cases I never bothered to change it.\n\n> Which is a\n>hugely expensive proposition for a big index,\n\nJust a thought: Could the gathering of the sample be integrated into\nthe bulk delete phase of VACUUM? (I know, ANALYSE is not always\nperformed as an option to VACUUM, and VACUUM might not even have to\ndelete any index tuples.)\n\n> We need a way\n>to get the number from a small sample of pages.\n\nI had better (or at least different) ideas at that time, like walking\ndown the tree, but somehow lost impetus :-(\n\n>The idea I was toying with was to recalculate the index keys for the\n>sample rows that ANALYZE already acquires, and then compare/sort\n>those.\n\nThis seems to be the approach that perfectly fits into what we have now.\n\n> This is moderately expensive CPU-wise though, and it's also not\n>clear what \"compare/sort\" means for non-btree indexes.\n\nNothing. We'd need some notion of \"clusteredness\" instead of\ncorrelation. C.f. my answer to Ron in this thread.\n\nBTW, the more I think about it, the more I come to the conclusion that\nwhen the planner starts to account for \"clusteredness\", random page cost\nhas to be raised.\n\nServus\n Manfred\n", "msg_date": "Fri, 18 Mar 2005 11:42:23 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-column index " } ]
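Assuming the table from the earlier example in this thread, a minimal sketch of both suggestions: look at what ANALYZE currently records (the per-column correlation being discussed is visible in the pg_stats view), and try the cost GUCs for a single session. The values below are placeholders, not recommendations.

    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'descriptionprodftdiclnk';

    -- Nudge the cost parameters for this session only, then re-check the plan.
    SET random_page_cost = 2;            -- default is 4
    SET effective_cache_size = 50000;    -- measured in 8 kB buffers
    EXPLAIN ANALYZE
    SELECT * FROM descriptionprodftdiclnk WHERE idword = 44;
    RESET random_page_cost;
    RESET effective_cache_size;
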
[ { "msg_contents": "Consider this query:\n\nSELECT distinct owner from pictures; \n\n Unique (cost=361.18..382.53 rows=21 width=4) (actual time=14.197..17.639 rows=21 loops=1)\n -> Sort (cost=361.18..371.86 rows=4270 width=4) (actual time=14.188..15.450 rows=4270 loops=1)\n Sort Key: \"owner\"\n -> Seq Scan on pictures (cost=0.00..103.70 rows=4270 width=4) (actual time=0.012..5.795 rows=4270 loops=1)\n Total runtime: 19.147 ms\n\nI thought that 19ms to return 20 rows out of a 4000 rows table so I\nadded an index:\n\nCREATE INDEX pictures_owner ON pictures (owner);\n\nIt gives a slight improvement:\n\n Unique (cost=0.00..243.95 rows=21 width=4) (actual time=0.024..10.293 rows=21 loops=1)\n -> Index Scan using pictures_owner on pictures (cost=0.00..233.27 rows=4270 width=4) (actual time=0.022..8.227 rows=4270 loops=1)\n Total runtime: 10.369 ms\n\nBut still, it's a lot for 20 rows. I looked at other type of indexes,\nbut they seem to either not give beter perfs or be irrelevant. \n\nAny ideas, apart from more or less manually maintaining a list of\ndistinct owners in another table ?\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n", "msg_date": "Wed, 16 Mar 2005 18:58:35 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Speeding up select distinct" }, { "msg_contents": "\n\tTry :\n\n\tSELECT owner from pictures group by owner;\n\n> Any ideas, apart from more or less manually maintaining a list of\n> distinct owners in another table ?\n\n\tThat would be a good idea too for normalizing your database.\n\n", "msg_date": "Wed, 16 Mar 2005 19:07:21 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": "On Wed, 2005-03-16 at 18:58 +0100, Laurent Martelli wrote:\n> Consider this query:\n> \n> SELECT distinct owner from pictures; \n\nThe performance has nothing to do with the number of rows returned, but\nrather the complexity of calculations and amount of data to sift through\nin order to find it.\n\n> Any ideas, apart from more or less manually maintaining a list of\n> distinct owners in another table ?\n\nThis would be the proper thing to do, along with adding a foreign key\nfrom pictures to the new owner structure for integrity enforcement.\n-- \n\n", "msg_date": "Wed, 16 Mar 2005 13:10:23 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": "Wow, what a fast response !!!\n\n>>>>> \"PFC\" == PFC <[email protected]> writes:\n\n PFC> \tTry :\n\n PFC> \tSELECT owner from pictures group by owner;\n\nThat's a slight improvement, but there's still a seq scan on pictures:\n\n HashAggregate (cost=114.38..114.38 rows=21 width=4) (actual time=7.585..7.605 rows=21 loops=1)\n -> Seq Scan on pictures (cost=0.00..103.70 rows=4270 width=4) (actual time=0.015..3.272 rows=4270 loops=1)\n Total runtime: 7.719 ms\n\n\n\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n", "msg_date": "Wed, 16 Mar 2005 19:29:54 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": ">>>>> \"Rod\" == Rod Taylor <[email protected]> writes:\n\n Rod> On Wed, 2005-03-16 at 18:58 +0100, Laurent Martelli wrote:\n >> Consider this query:\n >> \n >> SELECT distinct owner from pictures;\n\n 
Rod> The performance has nothing to do with the number of rows\n Rod> returned, but rather the complexity of calculations and amount\n Rod> of data to sift through in order to find it.\n\nYes, but I thought that an index might be able to know what distinct\nvalues there are and help optime that query very much.\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n", "msg_date": "Wed, 16 Mar 2005 19:31:14 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": "On Wed, 2005-03-16 at 19:31 +0100, Laurent Martelli wrote:\n> >>>>> \"Rod\" == Rod Taylor <[email protected]> writes:\n> \n> Rod> On Wed, 2005-03-16 at 18:58 +0100, Laurent Martelli wrote:\n> >> Consider this query:\n> >> \n> >> SELECT distinct owner from pictures;\n> \n> Rod> The performance has nothing to do with the number of rows\n> Rod> returned, but rather the complexity of calculations and amount\n> Rod> of data to sift through in order to find it.\n> \n> Yes, but I thought that an index might be able to know what distinct\n> values there are and help optime that query very much.\n\nThe index does know. You just have to visit all of the pages within the\nindex to find out, which it does, and that's why you dropped 10ms.\n\nBut if you want a sub ms query, you're going to have to normalize the\nstructure.\n\n-- \n\n", "msg_date": "Wed, 16 Mar 2005 13:41:04 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": "Laurent Martelli <[email protected]> writes:\n\n> PFC> \tSELECT owner from pictures group by owner;\n> \n> That's a slight improvement, but there's still a seq scan on pictures:\n\nIt should be a sequential scan. An index will be slower.\n\n> HashAggregate (cost=114.38..114.38 rows=21 width=4) (actual time=7.585..7.605 rows=21 loops=1)\n> -> Seq Scan on pictures (cost=0.00..103.70 rows=4270 width=4) (actual time=0.015..3.272 rows=4270 loops=1)\n> Total runtime: 7.719 ms\n\nThat's the best plan for this query.\n\n-- \ngreg\n\n", "msg_date": "16 Mar 2005 21:29:13 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up select distinct" } ]
[ { "msg_contents": "> Consider this query:\n> \n> SELECT distinct owner from pictures;\n\n[...]\n> Any ideas, apart from more or less manually maintaining a list of\n> distinct owners in another table ?\n\nyou answered your own question. With a 20 row owners table, you should\nbe directing your efforts there group by is faster than distinct, but\nboth are very wasteful and essentially require s full seqscan of the\ndetail table. \n\nWith a little hacking, you can change 'manual maintenance' to 'automatic\nmaintenance'.\n\n1. create table owner as select distinct owner from pictures;\n2. alter table owner add constraint owner_pkey(owner);\n3. alter table pictures add constraint ri_picture_owner(owner)\nreferences owner;\n4. make a little append_ownder function which adds an owner to the owner\ntable if there is not already one there. Inline this to your insert\nstatement on pictures.\n\nVoila!\nMerlin\np.s. normalize your data always!\n", "msg_date": "Wed, 16 Mar 2005 13:19:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up select distinct" }, { "msg_contents": ">>>>> \"Merlin\" == Merlin Moncure <[email protected]> writes:\n\n >> Consider this query:\n >> \n >> SELECT distinct owner from pictures;\n\n Merlin> [...]\n >> Any ideas, apart from more or less manually maintaining a list of\n >> distinct owners in another table ?\n\n Merlin> you answered your own question. With a 20 row owners table,\n Merlin> you should be directing your efforts there group by is\n Merlin> faster than distinct, but both are very wasteful and\n Merlin> essentially require s full seqscan of the detail table.\n\n Merlin> With a little hacking, you can change 'manual maintenance'\n Merlin> to 'automatic maintenance'.\n\n Merlin> 1. create table owner as select distinct owner from\n Merlin> pictures; 2. alter table owner add constraint\n Merlin> owner_pkey(owner); 3. alter table pictures add constraint\n Merlin> ri_picture_owner(owner) references owner; 4. make a little\n Merlin> append_ownder function which adds an owner to the owner\n Merlin> table if there is not already one there. Inline this to your\n Merlin> insert statement on pictures.\n\nI just wished there was a means to fully automate all this and render\nit transparent to the user, just like an index.\n\n Merlin> Voila! Merlin p.s. normalize your data always!\n\nI have this:\n\npictures(\n PictureID serial PRIMARY KEY,\n Owner integer NOT NULL REFERENCES users,\n [...]);\nCREATE TABLE users (\n UserID serial PRIMARY KEY,\n Name character varying(255),\n [...]);\n\nIsn't it normalized ?\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n", "msg_date": "Wed, 16 Mar 2005 19:38:30 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up select distinct" } ]
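Step 4 of the recipe above (the little append_owner helper) can also live in a BEFORE INSERT trigger instead of being inlined into the application's INSERT statement. A rough, untested sketch, assuming the owner table built in steps 1 to 3 and that plpgsql is installed:

    CREATE OR REPLACE FUNCTION append_owner() RETURNS trigger AS '
    BEGIN
        -- Add the owner only if it is not already listed.  Two sessions
        -- inserting the same brand-new owner can still race each other;
        -- the primary key on owner turns that into an error, not a duplicate.
        PERFORM 1 FROM owner WHERE owner = NEW.owner;
        IF NOT FOUND THEN
            INSERT INTO owner (owner) VALUES (NEW.owner);
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER pictures_append_owner
        BEFORE INSERT OR UPDATE ON pictures
        FOR EACH ROW EXECUTE PROCEDURE append_owner();

With that in place the original question reduces to SELECT owner FROM owner, which only has to read the handful of distinct values.
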
[ { "msg_contents": "> I just wished there was a means to fully automate all this and render\n> it transparent to the user, just like an index.\n> \n> Merlin> Voila! Merlin p.s. normalize your data always!\n> \n> I have this:\n> \n> pictures(\n> PictureID serial PRIMARY KEY,\n> Owner integer NOT NULL REFERENCES users,\n> [...]);\n> CREATE TABLE users (\n> UserID serial PRIMARY KEY,\n> Name character varying(255),\n> [...]);\n> \n> Isn't it normalized ?\n\ntry:\nselect * from users where UserID in (select pictureId from pictures);\nselect * userid from users intersect select pictureid from pictures;\nselect distinct userid, [...] from users, pictures where user userid =\npictureid)\n\nif none of these give you what you want then you can solve this with a\nnew tble, picture_user using the instructions I gave previously.\n\nNot sure if your data is normalized, but ISTM you are over-using\nsurrogate keys. It may not be possible, but consider downgrading ID\ncolumns to unique and picking a natural key. Now you get better benefit\nof RI and you can sometimes remove joins from certain queries.\n\nRule: use natural keys when you can, surrogate keys when you have to.\nCorollary: use domains for fields used in referential integrity.\n\nMerlin\n\n", "msg_date": "Wed, 16 Mar 2005 13:56:07 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up select distinct" } ]
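A sketch of the natural-key-plus-domain corollary, with hypothetical definitions; whether a user's name is stable and unique enough to serve as the key is a design call that the schema in the thread does not settle:

    CREATE DOMAIN username AS varchar(255);

    CREATE TABLE users (
        name username PRIMARY KEY
        -- [...]
    );

    CREATE TABLE pictures (
        pictureid serial PRIMARY KEY,
        owner     username NOT NULL REFERENCES users
        -- [...]
    );

With the natural key stored in pictures.owner, a query such as SELECT DISTINCT owner FROM pictures already returns readable names without a join against users, though the earlier point still stands: a separately maintained owner list is what actually makes it cheap.
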
[ { "msg_contents": "Hi all,\n\nCould someone explain me when I joined tree tables the querys that took\nabout 1sec to finish, takes 17secs to complete when I put tree tables joined\n?\n\nIf I join movest/natope, it's fast, if I join movest/produt, it's fast too,\nbut when I put a third joined table, forget, it's very slow.\n\nAll tables are vacuumed by vacummdb --full --analyze, every night\nAll Indexes are reindexed every night\n\nTABLES:\n-------\n\nMovest: +- 2 milions rows, indexed\nNatope: 30 rows PK(natope_id)\nProdut: +- 1400 Rows PK(codpro)\n\nEXPLAINS:\n---------\nexplain analyze\nselect a.codpro, a.datmov, a.vlrtot\nfrom movest a, natope b\nwhere a.tipmov = 'S'\n and a.codpro = 629001\n and a.datmov between '2005-03-01' and '2005-03-31'\n and a.natope = b.natope_id\n\n\"Merge Join (cost=35.68..36.23 rows=1 width=25) (actual time=2.613..2.840\nrows=6 loops=1)\"\n\" Merge Cond: (\"outer\".natope = \"inner\".\"?column2?\")\"\n\" -> Sort (cost=32.02..32.04 rows=7 width=35) (actual time=1.296..1.314\nrows=10 loops=1)\"\n\" Sort Key: a.natope\"\n\" -> Index Scan using ix_movest_03 on movest a (cost=0.00..31.92\nrows=7 width=35) (actual time=0.507..1.215 rows=10 loops=1)\"\n\" Index Cond: ((codpro = 629001::numeric) AND (datmov >=\n'2005-03-01'::date) AND (datmov <= '2005-03-31'::date))\"\n\" Filter: (tipmov = 'S'::bpchar)\"\n\" -> Sort (cost=3.65..3.82 rows=66 width=4) (actual time=1.132..1.203\nrows=49 loops=1)\"\n\" Sort Key: (b.natope_id)::numeric\"\n\" -> Seq Scan on natope b (cost=0.00..1.66 rows=66 width=4) (actual\ntime=0.117..0.500 rows=66 loops=1)\"\n\"Total runtime: 3.077 ms\"\n\n\n---------------\nexplain analyze\nselect a.codpro, a.datmov, a.vlrtot\nfrom movest a, natope b, produt c\nwhere a.tipmov = 'S'\n and a.codpro = 629001\n and a.datmov between '2005-03-01' and '2005-03-31'\n and a.natope = b.natope_id\n and a.codpro = c.codpro\n\n\"Nested Loop (cost=35.68..144.57 rows=2 width=25) (actual\ntime=2838.121..17257.168 rows=6 loops=1)\"\n\" -> Merge Join (cost=35.68..36.23 rows=1 width=25) (actual\ntime=1.808..2.280 rows=6 loops=1)\"\n\" Merge Cond: (\"outer\".natope = \"inner\".\"?column2?\")\"\n\" -> Sort (cost=32.02..32.04 rows=7 width=35) (actual\ntime=0.485..0.504 rows=10 loops=1)\"\n\" Sort Key: a.natope\"\n\" -> Index Scan using ix_movest_03 on movest a\n(cost=0.00..31.92 rows=7 width=35) (actual time=0.135..0.390 rows=10\nloops=1)\"\n\" Index Cond: ((codpro = 629001::numeric) AND (datmov >=\n'2005-03-01'::date) AND (datmov <= '2005-03-31'::date))\"\n\" Filter: (tipmov = 'S'::bpchar)\"\n\" -> Sort (cost=3.65..3.82 rows=66 width=4) (actual\ntime=1.114..1.209 rows=49 loops=1)\"\n\" Sort Key: (b.natope_id)::numeric\"\n\" -> Seq Scan on natope b (cost=0.00..1.66 rows=66 width=4)\n(actual time=0.058..0.485 rows=66 loops=1)\"\n\" -> Seq Scan on produt c (cost=0.00..108.26 rows=8 width=4) (actual\ntime=2688.356..2875.743 rows=1 loops=6)\"\n\" Filter: ((codpro)::numeric = 629001::numeric)\"\n\"Total runtime: 17257.865 ms\"\n\nBest Regards\nRodrigo Moreno\n\n\n", "msg_date": "Wed, 16 Mar 2005 17:10:17 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help to find out problem with joined tables" }, { "msg_contents": "On Wed, Mar 16, 2005 at 05:10:17PM -0300, Rodrigo Moreno wrote:\n\n> If I join movest/natope, it's fast, if I join movest/produt, it's fast too,\n> but when I put a third joined table, forget, it's very slow.\n\nWhat version of PostgreSQL are you using?\n\n> All tables are vacuumed by vacummdb --full --analyze, 
every night\n> All Indexes are reindexed every night\n\nHow many updates/deletes do the tables see between vacuums?\n\n> Movest: +- 2 milions rows, indexed\n> Natope: 30 rows PK(natope_id)\n> Produt: +- 1400 Rows PK(codpro)\n\nCould you show the table definitions, or at least the definitions\nfor the relevant columns and indexes?\n\n> -> Seq Scan on produt c (cost=0.00..108.26 rows=8 width=4) (actual\n> time=2688.356..2875.743 rows=1 loops=6)\n> Filter: ((codpro)::numeric = 629001::numeric)\n\nWhat type is produt.codpro? You might be missing a potential index\nscan here due to mismatched types.\n\nThe times (2688.356..2875.743) here look odd, although I might be\noverlooking or misinterpreting something. I don't know what else\nmight cause that, but one thing that can is a lot of dead tuples\nin the table, hence my question about how much activity the tables\nsee between vacuums. Maybe somebody else can provide a better\nexplanation.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Wed, 16 Mar 2005 21:42:18 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help to find out problem with joined tables" }, { "msg_contents": "Hi,\n\nThanks for your reply.\n\nI have made this test without any user connect and after vacuum and all\nindex recteated and tables analyzed.\n\nWell, produt.codpro is SERIAL\nAnd movest.codpro is NUMBER(8) \n\n\nThanks\nRodrigo\n\n-----Mensagem original-----\nDe: Michael Fuhr [mailto:[email protected]] \nEnviada em: quinta-feira, 17 de março de 2005 01:42\nPara: Rodrigo Moreno\nCc: [email protected]\nAssunto: Re: [PERFORM] Help to find out problem with joined tables\n\nOn Wed, Mar 16, 2005 at 05:10:17PM -0300, Rodrigo Moreno wrote:\n\n> If I join movest/natope, it's fast, if I join movest/produt, it's fast \n> too, but when I put a third joined table, forget, it's very slow.\n\nWhat version of PostgreSQL are you using?\n\n> All tables are vacuumed by vacummdb --full --analyze, every night All \n> Indexes are reindexed every night\n\nHow many updates/deletes do the tables see between vacuums?\n\n> Movest: +- 2 milions rows, indexed\n> Natope: 30 rows PK(natope_id)\n> Produt: +- 1400 Rows PK(codpro)\n\nCould you show the table definitions, or at least the definitions for the\nrelevant columns and indexes?\n\n> -> Seq Scan on produt c (cost=0.00..108.26 rows=8 width=4) (actual\n> time=2688.356..2875.743 rows=1 loops=6)\n> Filter: ((codpro)::numeric = 629001::numeric)\n\nWhat type is produt.codpro? You might be missing a potential index scan\nhere due to mismatched types.\n\nThe times (2688.356..2875.743) here look odd, although I might be\noverlooking or misinterpreting something. I don't know what else might\ncause that, but one thing that can is a lot of dead tuples in the table,\nhence my question about how much activity the tables see between vacuums.\nMaybe somebody else can provide a better explanation.\n\n--\nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n\n\n\n", "msg_date": "Thu, 17 Mar 2005 14:10:14 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": true, "msg_subject": "RES: Help to find out problem with joined tables" } ]
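The "(codpro)::numeric" filter in the slow plan is the give-away Michael points at: produt.codpro is a SERIAL (integer) while movest.codpro is NUMERIC, so the comparison is carried out in numeric and the integer primary-key index on produt is never considered; worse, that seq scan is repeated once per outer row. Two untested ways around it, keeping the names from the thread (the per-scan time of nearly three seconds over roughly 1400 rows also suggests dead-tuple bloat worth checking, as noted above):

    -- 1. Index the expression the planner is actually filtering on
    --    (expression indexes are available from 7.4 on).
    CREATE INDEX produt_codpro_num ON produt ((codpro::numeric));
    ANALYZE produt;

    -- 2. Or make the two columns the same type so no cast is needed at
    --    all (assuming the stored values are whole numbers).  The ALTER
    --    below is 8.0 syntax; on 7.4 the column has to be rebuilt.
    ALTER TABLE movest ALTER COLUMN codpro TYPE integer;
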
[ { "msg_contents": "This apparently didn't go through the first time; so, I'm reposting...\n-------------------------------------------------------------------------------------\nHello! \n \nFirst off, I'm running 8.0.1 on Win2000 Server. Vacuum analyze is done\nevery night. Query cost parameters are standard, I've only bumped the\nestimated_cache_size up to 50000; shared_buffers=15000. \n \nI've been working on optimizing a query. In the process, I've been\nplaying around with Tow's method from \"SQL Tuning.\" He seems pretty\nenamored with nested loops over hash joins or merge joins. So, I\nthought\nI'd give that a try. Here's the explain analyze prior to my little\nadventure: \n \nQUERY PLAN\nSubquery Scan view_get_all_user_award2 (cost=1091.20..1116.09 rows=524\nwidth=493) (actual time=499.000..499.000 rows=368 loops=1)\n - Unique (cost=1091.20..1110.85 rows=524 width=114) (actual\ntime=499.000..499.000 rows=368 loops=1)\n - Sort (cost=1091.20..1092.51 rows=524 width=114) (actual\ntime=499.000..499.000 rows=1103 loops=1)\n Sort Key: c.job_id, g.person_id, c.job_no, b.deadline,\nc.name, bid_date(c.job_id), c.miscq, c.city, COALESCE(((c.city)::text ||\nCOALESCE((', '::text || (c.st)::text), ''::text)), (COALESCE(c.st,\n''::character varying))::text), c.st, CASE WHEN (c.file_loc = 0) THEN\n'No\nBid'::character varying WHEN (c.file_loc = -1) THEN 'Bid\nBoard'::character\nvarying WHEN (c.file_loc = -2) THEN 'Lost Job'::character varying WHEN\n(c.file_loc = -3) THEN 'See Job Notes'::character varying WHEN\n((c.file_loc < -3) OR (c.file_loc IS NULL)) THEN ''::character varying\nWHEN (h.initials IS NOT NULL) THEN h.initials ELSE 'Unknown\nperson'::character varying END, j.name, c.s_team, c.file_loc\n - Hash Join (cost=848.87..1067.53 rows=524 width=114)\n(actual time=171.000..484.000 rows=1103 loops=1)\n Hash Cond: (\"outer\".company_id = \"inner\".company_id)\n - Nested Loop (cost=805.21..1005.53 rows=524\nwidth=122) (actual time=156.000..314.000 rows=1103 loops=1)\n Join Filter: ((\"inner\".person_id =\n\"outer\".person_id) OR (\"position\"((\"inner\".s_team)::text,\n(\"outer\".initials)::text) 0))\n - Seq Scan on person g (cost=0.00..1.34\nrows=1 width=11) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: (person_id = 1)\n - Hash Join (cost=805.21..988.81 rows=879\nwidth=122) (actual time=156.000..251.000 rows=5484 loops=1)\n Hash Cond: (\"outer\".company_id =\n\"inner\".company_id)\n - Hash Join (cost=761.55..931.96\nrows=879 width=103) (actual time=156.000..188.000 rows=5484 loops=1)\n Hash Cond: (\"outer\".company_id =\n\"inner\".company_id)\n - Seq Scan on call_list f \n(cost=0.00..78.46 rows=4746 width=8) (actual time=0.000..16.000\nrows=4752\nloops=1)\n - Hash (cost=761.11..761.11\nrows=176 width=95) (actual time=156.000..156.000 rows=0 loops=1)\n - Hash Join \n(cost=505.90..761.11 rows=176 width=95) (actual time=94.000..140.000\nrows=1079 loops=1)\n Hash Cond:\n(\"outer\".job_id = \"inner\".job_id)\n Join Filter:\n(\"outer\".won_heat OR \"outer\".won_vent OR \"outer\".won_tc OR (\"inner\".heat\nAND \"outer\".bid_heat AND (\"outer\".won_heat IS NULL)) OR (\"inner\".vent\nAND\n\"outer\".bid_vent AND (\"outer\".won_vent IS NULL)) OR (\"inner\".tc AND\n\"outer\".bid_tc AND (\"outer\".won_tc IS NULL)))\n - Seq Scan on\nbuilder_list d (cost=0.00..212.26 rows=7687 width=14) (actual\ntime=0.000..16.000 rows=7758 loops=1)\n Filter: (role =\n'C'::bpchar)\n - Hash \n(cost=505.55..505.55 rows=138 width=102) (actual time=94.000..94.000\nrows=0 loops=1)\n - Hash Join \n(cost=268.41..505.55 
rows=138 width=102) (actual time=47.000..94.000\nrows=443 loops=1)\n Hash Cond:\n(\"outer\".job_id = \"inner\".job_id)\n - Seq\nScan on builder_list i (cost=0.00..212.26 rows=2335 width=8) (actual\ntime=0.000..47.000 rows=2245 loops=1)\n \nFilter: (role = 'E'::bpchar)\n - Hash \n(cost=268.06..268.06 rows=139 width=94) (actual time=47.000..47.000\nrows=0 loops=1)\n -\nHash Join (cost=156.15..268.06 rows=139 width=94) (actual\ntime=31.000..47.000 rows=451 loops=1)\n \n\nHash Cond: (\"outer\".file_loc = \"inner\".person_id)\n \n\n- Hash Join (cost=154.81..264.51 rows=166 width=87) (actual\ntime=31.000..47.000 rows=694 loops=1)\n \n\n Hash Cond: (\"outer\".job_id = \"inner\".job_id)\n \n\n - Seq Scan on job c (cost=0.00..78.57 rows=2357 width=79) (actual\ntime=0.000..0.000 rows=2302 loops=1)\n \n\n - Hash (cost=154.40..154.40 rows=166 width=8) (actual\ntime=31.000..31.000 rows=0 loops=1)\n \n\n - Hash Join (cost=1.18..154.40 rows=166 width=8) (actual\ntime=0.000..31.000 rows=694 loops=1)\n \n\n Hash Cond: (\"outer\".status_id = \"inner\".status_id)\n \n\n - Seq Scan on status_list b (cost=0.00..139.96\nrows=2320 width=12) (actual time=0.000..15.000 rows=2302 loops=1)\n \n\n Filter: active\n \n\n - Hash (cost=1.18..1.18 rows=1 width=4) (actual\ntime=0.000..0.000 rows=0 loops=1)\n \n\n - Seq Scan on status a (cost=0.00..1.18 rows=1\nwidth=4) (actual time=0.000..0.000 rows=1 loops=1)\n \n\n Filter: ((name)::text = 'Awaiting\nAward'::text)\n \n\n- Hash (cost=1.27..1.27 rows=27 width=11) (actual time=0.000..0.000\nrows=0 loops=1)\n \n\n - Seq Scan on person h (cost=0.00..1.27 rows=27 width=11) (actual\ntime=0.000..0.000 rows=27 loops=1)\n - Hash (cost=40.53..40.53 rows=1253\nwidth=27) (actual time=0.000..0.000 rows=0 loops=1)\n - Seq Scan on company j \n(cost=0.00..40.53 rows=1253 width=27) (actual time=0.000..0.000\nrows=1254\nloops=1)\n - Hash (cost=40.53..40.53 rows=1253 width=4)\n(actual time=15.000..15.000 rows=0 loops=1)\n - Seq Scan on company e (cost=0.00..40.53\nrows=1253 width=4) (actual time=0.000..0.000 rows=1254 loops=1)\nTotal runtime: 499.000 ms\n \n \nAs you can see, it's almost all hash joins and sequential scans. I\ntried\nexplain analyze again after setting enable_hashjoin=false and the\nplanner\nstarted using merge joins. So, I set enable_mergejoin=false and ran it\nagain. 
Here is the resulting explain analyze: \n \nQUERY PLAN\nSubquery Scan view_get_all_user_award2 (cost=9525.65..9550.54 rows=524\nwidth=493) (actual time=531.000..547.000 rows=368 loops=1)\n - Unique (cost=9525.65..9545.30 rows=524 width=114) (actual\ntime=531.000..547.000 rows=368 loops=1)\n - Sort (cost=9525.65..9526.96 rows=524 width=114) (actual\ntime=531.000..531.000 rows=1103 loops=1)\n Sort Key: c.job_id, g.person_id, c.job_no, b.deadline,\nc.name, bid_date(c.job_id), c.miscq, c.city, COALESCE(((c.city)::text ||\nCOALESCE((', '::text || (c.st)::text), ''::text)), (COALESCE(c.st,\n''::character varying))::text), c.st, CASE WHEN (c.file_loc = 0) THEN\n'No\nBid'::character varying WHEN (c.file_loc = -1) THEN 'Bid\nBoard'::character\nvarying WHEN (c.file_loc = -2) THEN 'Lost Job'::character varying WHEN\n(c.file_loc = -3) THEN 'See Job Notes'::character varying WHEN\n((c.file_loc < -3) OR (c.file_loc IS NULL)) THEN ''::character varying\nWHEN (h.initials IS NOT NULL) THEN h.initials ELSE 'Unknown\nperson'::character varying END, j.name, c.s_team, c.file_loc\n - Nested Loop (cost=1.30..9501.98 rows=524 width=114)\n(actual time=0.000..500.000 rows=1103 loops=1)\n Join Filter: ((\"inner\".person_id =\n\"outer\".person_id)\nOR (\"position\"((\"inner\".s_team)::text, (\"outer\".initials)::text) 0))\n - Seq Scan on person g (cost=0.00..1.34 rows=1\nwidth=11) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: (person_id = 1)\n - Nested Loop (cost=1.30..9464.56 rows=1463\nwidth=114) (actual time=0.000..360.000 rows=5484 loops=1)\n - Nested Loop (cost=1.30..5350.20 rows=174\nwidth=118) (actual time=0.000..268.000 rows=1079 loops=1)\n - Nested Loop (cost=1.30..4399.46\nrows=174 width=114) (actual time=0.000..237.000 rows=1079 loops=1)\n Join Filter: (\"inner\".won_heat OR\n\"inner\".won_vent OR \"inner\".won_tc OR (\"outer\".heat AND \"inner\".bid_heat\nAND (\"inner\".won_heat IS NULL)) OR (\"outer\".vent AND \"inner\".bid_vent\nAND\n(\"inner\".won_vent IS NULL)) OR (\"outer\".tc AND \"inner\".bid_tc AND\n(\"inner\".won_tc IS NULL)))\n - Nested Loop \n(cost=1.30..3207.87 rows=138 width=121) (actual time=0.000..221.000\nrows=443 loops=1)\n - Nested Loop \n(cost=1.30..2453.84 rows=138 width=102) (actual time=0.000..221.000\nrows=443 loops=1)\n - Nested Loop \n(cost=1.30..1258.83 rows=139 width=94) (actual time=0.000..189.000\nrows=451 loops=1)\n Join Filter:\n(\"outer\".file_loc = \"inner\".person_id)\n - Nested Loop \n(cost=0.00..1156.69 rows=166 width=87) (actual time=0.000..31.000\nrows=694 loops=1)\n - Nested\nLoop (cost=0.00..170.14 rows=166 width=8) (actual time=0.000..0.000\nrows=694 loops=1)\n Join\nFilter: (\"outer\".status_id = \"inner\".status_id)\n -\nSeq Scan on status a (cost=0.00..1.18 rows=1 width=4) (actual\ntime=0.000..0.000 rows=1 loops=1)\n \n\nFilter: ((name)::text = 'Awaiting Award'::text)\n -\nSeq Scan on status_list b (cost=0.00..139.96 rows=2320 width=12)\n(actual\ntime=0.000..0.000 rows=2302 loops=1)\n \n\nFilter: active\n - Index\nScan using job_pkey on job c (cost=0.00..5.93 rows=1 width=79) (actual\ntime=0.000..0.023 rows=1 loops=694)\n \nIndex\nCond: (c.job_id = \"outer\".job_id)\n - Materialize \n(cost=1.30..1.57 rows=27 width=11) (actual time=0.000..0.069 rows=27\nloops=694)\n - Seq\nScan on person h (cost=0.00..1.27 rows=27 width=11) (actual\ntime=0.000..0.000 rows=27 loops=1)\n - Index Scan using\nidx_builder_list_job_id on builder_list i (cost=0.00..8.57 rows=2\nwidth=8) (actual time=0.000..0.000 rows=1 loops=451)\n Index Cond:\n(i.job_id = \"outer\".job_id)\n 
Filter: (role =\n'E'::bpchar)\n - Index Scan using\ncompany_pkey on company j (cost=0.00..5.45 rows=1 width=27) (actual\ntime=0.000..0.000 rows=1 loops=443)\n Index Cond:\n(\"outer\".company_id = j.company_id)\n - Index Scan using\nidx_builder_list_job_id on builder_list d (cost=0.00..8.57 rows=5\nwidth=14) (actual time=0.000..0.036 rows=3 loops=443)\n Index Cond: (d.job_id =\n\"outer\".job_id)\n Filter: (role = 'C'::bpchar)\n - Index Scan using company_pkey on\ncompany e (cost=0.00..5.45 rows=1 width=4) (actual time=0.029..0.029\nrows=1 loops=1079)\n Index Cond: (\"outer\".company_id =\ne.company_id)\n - Index Scan using idx_company_id_call_list\non call_list f (cost=0.00..23.57 rows=6 width=8) (actual\ntime=0.014..0.057 rows=5 loops=1079)\n Index Cond: (f.company_id =\n\"outer\".company_id)\nTotal runtime: 547.000 ms\n \n \nThe total run times turn out to be anecdotally insignificant (due to\nvariations from one run to the next). I haven't had a chance to\nquantify\nthe variations, I plan on doing that soon. However, now the planner is\nchoosing index scans for almost all the tables. Granted that the\nplanner\nseems to have chosen a different plan, but if I compare the estimated\ncosts and actual time for the index scans vs. the sequential scans, it\nlooks like the planner should be choosing index scans, but it isn't. \n \nSo, it would seem like my optimal plan should have hash joins with index\nscans. How do I get the planner to the same conclusion? Should the\njoin\nmethod influence the scan method? These seem like they should be\nunrelated to me. \n \nOf course, all of this might be moot if my lack of knowledge is\nshowing... \n \nAny thoughts??\nMark \n \n\n", "msg_date": "Wed, 16 Mar 2005 23:16:24 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Join method influences scan method?" }, { "msg_contents": "[email protected] writes:\n> So, it would seem like my optimal plan should have hash joins with index\n> scans.\n\nNo. The thing you are looking at here is a nestloop join with inner\nindex scan, which has to be understood as a unit even though EXPLAIN\ndoesn't describe it that way. The inner indexscan is repeated once\nfor each outer row, using a join key from the outer row as part of the\nindex lookup. That's simply not relevant to the other kinds of joins,\nbecause they expect the inner and outer relations to be scanned\nindependently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Mar 2005 01:41:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join method influences scan method? " } ]
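To make the "nestloop join with inner index scan" unit concrete, this is the shape of plan being described, shown on a pair of hypothetical tables; the inner Index Cond consumes a value from the current outer row, which is why it cannot be costed as a free-standing scan the way the inputs of a hash or merge join can:

    EXPLAIN
    SELECT *
    FROM orders o
    JOIN order_lines l ON l.order_id = o.order_id
    WHERE o.order_id < 100;

    --  Nested Loop
    --    ->  Index Scan using orders_pkey on orders o
    --          Index Cond: (order_id < 100)
    --    ->  Index Scan using order_lines_order_id_idx on order_lines l
    --          Index Cond: (l.order_id = "outer".order_id)
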
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi all,\nI'm running a 7.4.x engine and I'm seeing in a explain analyze:\n\n- -> Hash (cost=4.00..4.00 rows=2 width=16) (actual time=30.542..30.542 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login ul (cost=0.00..4.00 rows=2 width=16) (actual time=30.482..30.490 rows=1 loops=1)\n Index Cond: ((login)::text = 'Zoneon'::text)\n\nwhy postgres perform an extimation of 2 rows knowing that column it's a primary key ?\n\nIf I do an explain analyze directly on the table I get:\n\n# explain analyze select * from user_login where login = 'Zoneon';\n QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using user_login_login_key on user_login (cost=0.00..4.00 rows=1 width=16) (actual time=0.050..0.052 rows=1 loops=1)\n Index Cond: ((login)::text = 'Zoneon'::text)\n Total runtime: 4.627 ms\n(3 rows)\n\nbtw, is it normal that cast ?\n\n\n\nRegards\nGaetano Mendola\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCOXHy7UpzwH2SGd4RAgwAAJ9SMJ3OfYjv03IhhTbJ9GSLby4nfwCg5ezu\nUOH8wXRsNAvWRni7GSKlMps=\n=6yFm\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 17 Mar 2005 13:02:58 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "2 rows expected on a primary key" } ]
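Two quick checks that often explain a surprising row estimate like the 2 above (a sketch, not a diagnosis): make sure the statistics are current, and look at what they actually contain. The (login)::text cast, on the other hand, is normal; the varchar column is simply compared through the text operators, and the plan shows the index is still used.

    ANALYZE user_login;

    SELECT null_frac, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = 'user_login' AND attname = 'login';
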
[ { "msg_contents": "Hi,\n is there some utilities for PG for tunnig database performance. To see \ntop 10 sql commands and so?\nThank you a lot.\n", "msg_date": "Thu, 17 Mar 2005 14:17:52 +0100", "msg_from": "Ales Vojacek <[email protected]>", "msg_from_op": true, "msg_subject": "TOP 10 SQL commands and more" }, { "msg_contents": "Ales,\n\n> is there some utilities for PG for tunnig database performance. To see\n> top 10 sql commands and so?\n\nLook up \"PQA\" on www.pgFoundry.org\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 17 Mar 2005 09:46:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TOP 10 SQL commands and more" } ]
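PQA works by parsing the server log, so statement logging has to be enabled first. A minimal sketch; the parameter exists in 7.4 and 8.0, logging everything carries some overhead, and the session-level form requires superuser rights:

    -- For the current session only:
    SET log_min_duration_statement = 0;   -- milliseconds; -1 disables
    -- Or in postgresql.conf, followed by a reload, to cover all backends:
    --   log_min_duration_statement = 0
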
[ { "msg_contents": "Hello all.\n \nI am having a couple of tables with couple of hundre millions records in\nthem. The tables contains a timestamp column. \nI am almost always interested in getting datas from a specific day or month.\nEach day contains aprox. 400.000 entries.\n \nWhen I do such queries as \" select ... from archive where m_date between\n'2005-01-01' and '2005-02-01' group by ... \" and so on. \nIt takes very long. I am having indexes that kicks in, but still it takes\nsometime.\n \nI have splitted the archive table in smaller monthly tables, it then goes a\nlot faster, but not fast enough. \n \nI know simular systems that uses Oracle and gains a lot on performance\nbecause of the partioning. That kind of anoyes me a bit :)\n \nDoes anyone of you have some good ideas on how speed up such queries on\nhuge tables?\n \nregards\nrune\n \n \n \n \n\n\n\nMelding\n\n\nHello \nall.\n \nI am having a couple \nof tables with couple of hundre millions records in them. The tables \ncontains a timestamp column. \nI am almost always \ninterested in getting datas from a specific day or month. Each day contains \naprox. 400.000 entries.\n \nWhen I do such \nqueries as \" select ... from archive where m_date between '2005-01-01' and \n'2005-02-01' group by ... \" and so on. \nIt takes very long. \nI am having \nindexes that kicks in, but still it takes sometime.\n \nI have splitted the \narchive table in smaller monthly tables, it then goes a lot faster, but not fast \nenough. \n \nI know simular \nsystems that uses Oracle and gains a lot on performance because of the \npartioning. That kind of anoyes me a bit :)\n \nDoes anyone of you \nhave some good ideas on how speed up  such queries on huge \ntables?\n \nregards\nrune", "msg_date": "Thu, 17 Mar 2005 15:01:43 +0100", "msg_from": "\"Lending, Rune\" <[email protected]>", "msg_from_op": true, "msg_subject": "queries on huge tables" }, { "msg_contents": "The most recent version of this thread starts here: \nhttp://archives.postgresql.org/pgsql-general/2005-03/msg00321.php . \nSearch the archives for \"table partition\", \"union view\" and \"partition\ninherits\" and you should find most relevant discussions.\n\nHope that helps!\n\nOn Thu, 17 Mar 2005 15:01:43 +0100, Lending, Rune\n<[email protected]> wrote:\n> \n> Hello all. \n> \n> I am having a couple of tables with couple of hundre millions records in\n> them. The tables contains a timestamp column. \n> I am almost always interested in getting datas from a specific day or month.\n> Each day contains aprox. 400.000 entries. \n> \n> When I do such queries as \" select ... from archive where m_date between\n> '2005-01-01' and '2005-02-01' group by ... \" and so on. \n> It takes very long. I am having indexes that kicks in, but still it takes\n> sometime. \n> \n> I have splitted the archive table in smaller monthly tables, it then goes a\n> lot faster, but not fast enough. \n> \n> I know simular systems that uses Oracle and gains a lot on performance\n> because of the partioning. That kind of anoyes me a bit :) \n> \n> Does anyone of you have some good ideas on how speed up such queries on\n> huge tables? \n> \n> regards \n> rune \n> \n> \n> \n> \n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Sat, 19 Mar 2005 13:16:42 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries on huge tables" } ]
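A bare-bones sketch of the inheritance-style monthly split those threads describe, assuming the parent table and timestamp column are named archive and m_date as in the post. There is no automatic partition pruning in 8.0, so queries either name the month's child table directly or go through a UNION ALL view:

    CREATE TABLE archive_2005_01 (
        CHECK (m_date >= DATE '2005-01-01' AND m_date < DATE '2005-02-01')
    ) INHERITS (archive);

    CREATE TABLE archive_2005_02 (
        CHECK (m_date >= DATE '2005-02-01' AND m_date < DATE '2005-03-01')
    ) INHERITS (archive);

    CREATE INDEX archive_2005_01_m_date_idx ON archive_2005_01 (m_date);
    CREATE INDEX archive_2005_02_m_date_idx ON archive_2005_02 (m_date);

    -- A monthly report then reads a single child table, e.g.
    --   SELECT ... FROM archive_2005_01 GROUP BY ...
    -- while SELECT ... FROM archive still sees every month's rows.
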
[ { "msg_contents": "Greetings everyone,\n\nI am about to migrate to Postgres from MySQL. My DB isn't enormous (<\n1gb), consists mostly of just text, but is accessed quite heavily.\nBecause size isn't a huge issue, but performance is, I am willing to\nnormalize as necessary.\n\nCurrently I have a table \"Entries\" containing 500k rows. The table\ncontains many text columns, and a few others:\nEntryID (unique, indexed)\nUserID (references \"Users\" table, indexed)\nPrivate (boolean. indexed)\n\nMost of my queries return rows based on UserID, and also only if\nPrivate is FALSE. Would it be in the interest of best performance to\nsplit this table into two tables: \"EntriesPrivate\",\n\"EntriesNotPrivate\" and remove the \"Private\" column?\n\nI appreciate any feedback. I'm certainly not a DB design expert. :)\n\nThanks,\nAlex\n", "msg_date": "Thu, 17 Mar 2005 10:56:10 -0500", "msg_from": "Alexander Ranaldi <[email protected]>", "msg_from_op": true, "msg_subject": "Building a DB with performance in mind" }, { "msg_contents": "On Thu, Mar 17, 2005 at 10:56:10AM -0500, Alexander Ranaldi wrote:\n> Most of my queries return rows based on UserID, and also only if\n> Private is FALSE. Would it be in the interest of best performance to\n> split this table into two tables: \"EntriesPrivate\",\n> \"EntriesNotPrivate\" and remove the \"Private\" column?\n\nYou could do a partial index if you'd like (ie. one only indexing rows where\nPrivate=FALSE), but I'm not sure if it's the best solution.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 17 Mar 2005 17:09:34 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building a DB with performance in mind" }, { "msg_contents": "Alexander Ranaldi wrote:\n\n>Greetings everyone,\n>\n>I am about to migrate to Postgres from MySQL. My DB isn't enormous (<\n>1gb), consists mostly of just text, but is accessed quite heavily.\n>Because size isn't a huge issue, but performance is, I am willing to\n>normalize as necessary.\n>\n>Currently I have a table \"Entries\" containing 500k rows. The table\n>contains many text columns, and a few others:\n>EntryID (unique, indexed)\n>UserID (references \"Users\" table, indexed)\n>Private (boolean. indexed)\n>\n>Most of my queries return rows based on UserID, and also only if\n>Private is FALSE. Would it be in the interest of best performance to\n>split this table into two tables: \"EntriesPrivate\",\n>\"EntriesNotPrivate\" and remove the \"Private\" column?\n>\n>\nPerhaps. You might also consider creating a multi-column index on\n(UserID, Private).\nHowever, in a more conceptual idea, separating the tables may help you\nwith preventing accidental queries. It's pretty easy to forget to add\n\"... AND Private = False\". It is much harder to accidentally add \"...\nJOIN EntriesPrivate ON ...\"\n\n>I appreciate any feedback. I'm certainly not a DB design expert. :)\n>\n>\n>\nIt shouldn't be very hard to test which one works better for you:\n\n\\timing\nCREATE INDEX entries_user_private_idx ON Entries(UserID, Private);\n\nSELECT * FROM Entries WHERE ... 
AND Private = False;\n\nCREATE TABLE EntriesPrivate AS SELECT * FROM Entries WHERE Private=True;\nCREATE TABLE EntriesPublic AS SELECT * FROM Entries WHERE Private=False;\nALTER TABLE EntriesPrivate DROP COLUMN Private;\nALTER TABLE EntriesPrivate ADD PRIMARY KEY (EntriesID);\nALTER TABLE EntriesPrivate ALTER COLUMN SET\nDEFAULT=nextval('Entries_...EntryId');\n-- Make sure you don't have duplicate entries. This could also be done\nwith a foreign key to some\n-- other entries table\nALTER TABLE EntriesPrivate ADD CONSTRAINT EntriesID NOT in (SELECT\nEntriesId FROM EntriesPublic);\nCREATE INDEX entriesprivate_userid_idx ON EntriesPrivate(UserID);\n\n-- Do the same thing for EntriesPublic\nALTER TABLE EntriesPublic DROP COLUMN Private;\n\nThese queries have not been tested, but they should give you a decent\nstarting point to creating 2 tables, and running a bunch of test queries\non them.\nI think the biggest difficulty is making sure that you don't get\nduplicate EntriesID values, assuming that is important to you.\nAlso, if you have foreign key references, this won't work. You'll have\nto create a new table (it can have just 1 column) containing EntriesID,\nand then you can reference that column from both of these tables.\n\nJohn\n=:->\n\n>Thanks,\n>Alex\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>", "msg_date": "Thu, 17 Mar 2005 10:16:32 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building a DB with performance in mind" } ]
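The partial index mentioned at the top of the thread, spelled out as an untested sketch with the column names from the original post; it covers only the public rows, so it stays small and lines up exactly with the common query:

    CREATE INDEX entries_userid_public_idx
        ON Entries (UserID)
        WHERE Private = FALSE;

    -- Matches queries of the form:
    --   SELECT ... FROM Entries WHERE UserID = $1 AND Private = FALSE;
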
[ { "msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 1552\nLogged by: Brian O'Reilly\nEmail address: [email protected]\nPostgreSQL version: 8.0.1\nOperating system: Linux 2.6.11\nDescription: massive performance hit between 7.4 and 8.0.1\nDetails: \n\nWhen doing a lot of inserts to an empty table with a foreign key to another\ntable, there is an incredible performance degredation issue on 8.0.1. I have\na program that is inserting rows in an iterative loop, and in this form it\ninserts about 110,000 rows. On postgresql 7.4 on a debian machine it takes a\nshade over 2 minutes to complete. On an amd64 box running gentoo, it takes\nover an hour and fourty minutes to complete. The query plan on the debian\nhost that completes quickly follows:\n\n \"Fast\" machine, Debian, PSQL 7.4:\n \n----------------------------------------------------------------------------\n----------------------------------------------------\n Index Scan using requirements_pkey on requirements (cost=0.00..4.82 rows=2\nwidth=0) (actual time=0.013..0.013 rows=0 loops=1)\n Index Cond: (reqid = 10::bigint)\n Total runtime: 0.134 ms\n(3 rows)\n\nand the query plan on the 'slow' machine:\n\n\n QUERY PLAN\n----------------------------------------------------------------------------\n--------------------------\n Seq Scan on requirements (cost=0.00..0.00 rows=1 width=0) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Filter: (reqid = 10::bigint)\n Total runtime: 0.040 ms\n(3 rows)\n\nThe script I am using to show this behaviour follows:\n\nCREATE TABLE packages\n (name text PRIMARY KEY);\nCREATE TABLE binary_packages\n (name text REFERENCES packages,\n version text,\n PRIMARY KEY(name, version));\nCREATE TABLE requirements\n (reqid bigint PRIMARY KEY,\n name text,\n version text,\n FOREIGN KEY (name, version) REFERENCES\nbinary_packages);\nCREATE TABLE constraints\n (constid bigint PRIMARY KEY,\n reqid bigint REFERENCES requirements,\n type text,\n name text REFERENCES packages,\n version text DEFAULT '',\n relation character(2));\n\nexplain analyze select 1 from only requirements where reqid='10';\n\nthe query optimiser seems to be setting a default strategy of doing\nsequential scans on an empty table, which is a fast strategy when the table\nis empty and not particularly full, but obviously on a large table the\nperformance is O(N^2). This is clearly a bug. Please let me know if I can\nprovide any more information.\n\nBrian O'Reilly\nSystem Architect.,\nDeepSky Media Resources\n", "msg_date": "Fri, 18 Mar 2005 23:21:02 +0000 (GMT)", "msg_from": "\"Brian O'Reilly\" <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "Have you tried an analyze after 1,000 or so inserts? Also, you should \nbe able to disable sequence scans for the duration of the connection \nusing SET enable_seqscan=false. \n\n-Zeki\n\nBrian O'Reilly wrote:\n\n>The following bug has been logged online:\n>\n>Bug reference: 1552\n>Logged by: Brian O'Reilly\n>Email address: [email protected]\n>PostgreSQL version: 8.0.1\n>Operating system: Linux 2.6.11\n>Description: massive performance hit between 7.4 and 8.0.1\n>Details: \n>\n>When doing a lot of inserts to an empty table with a foreign key to another\n>table, there is an incredible performance degredation issue on 8.0.1. I have\n>a program that is inserting rows in an iterative loop, and in this form it\n>inserts about 110,000 rows. 
On postgresql 7.4 on a debian machine it takes a\n>shade over 2 minutes to complete. On an amd64 box running gentoo, it takes\n>over an hour and fourty minutes to complete. The query plan on the debian\n>host that completes quickly follows:\n>\n> \"Fast\" machine, Debian, PSQL 7.4:\n> \n>----------------------------------------------------------------------------\n>----------------------------------------------------\n> Index Scan using requirements_pkey on requirements (cost=0.00..4.82 rows=2\n>width=0) (actual time=0.013..0.013 rows=0 loops=1)\n> Index Cond: (reqid = 10::bigint)\n> Total runtime: 0.134 ms\n>(3 rows)\n>\n>and the query plan on the 'slow' machine:\n>\n>\n> QUERY PLAN\n>----------------------------------------------------------------------------\n>--------------------------\n> Seq Scan on requirements (cost=0.00..0.00 rows=1 width=0) (actual\n>time=0.002..0.002 rows=0 loops=1)\n> Filter: (reqid = 10::bigint)\n> Total runtime: 0.040 ms\n>(3 rows)\n>\n>The script I am using to show this behaviour follows:\n>\n>CREATE TABLE packages\n> (name text PRIMARY KEY);\n>CREATE TABLE binary_packages\n> (name text REFERENCES packages,\n> version text,\n> PRIMARY KEY(name, version));\n>CREATE TABLE requirements\n> (reqid bigint PRIMARY KEY,\n> name text,\n> version text,\n> FOREIGN KEY (name, version) REFERENCES\n>binary_packages);\n>CREATE TABLE constraints\n> (constid bigint PRIMARY KEY,\n> reqid bigint REFERENCES requirements,\n> type text,\n> name text REFERENCES packages,\n> version text DEFAULT '',\n> relation character(2));\n>\n>explain analyze select 1 from only requirements where reqid='10';\n>\n>the query optimiser seems to be setting a default strategy of doing\n>sequential scans on an empty table, which is a fast strategy when the table\n>is empty and not particularly full, but obviously on a large table the\n>performance is O(N^2). This is clearly a bug. Please let me know if I can\n>provide any more information.\n>\n>Brian O'Reilly\n>System Architect.,\n>DeepSky Media Resources\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n> \n>\n\n", "msg_date": "Mon, 21 Mar 2005 10:27:17 -0500", "msg_from": "Zeki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "On Fri, 2005-03-18 at 23:21 +0000, Brian O'Reilly wrote:\n> The following bug has been logged online:\n> \n> Bug reference: 1552\n> Logged by: Brian O'Reilly\n> Email address: [email protected]\n> PostgreSQL version: 8.0.1\n> Operating system: Linux 2.6.11\n> Description: massive performance hit between 7.4 and 8.0.1\n> Details: \n> \n> When doing a lot of inserts to an empty table with a foreign key to another\n> table, there is an incredible performance degredation issue on 8.0.1. I have\n> a program that is inserting rows in an iterative loop, and in this form it\n> inserts about 110,000 rows. On postgresql 7.4 on a debian machine it takes a\n> shade over 2 minutes to complete. On an amd64 box running gentoo, it takes\n> over an hour and fourty minutes to complete. 
The query plan on the debian\n> host that completes quickly follows:\n> \n\nThis may be a bug, thanks for filing it.\n\nHowever, we can't tell at the moment from what you've said.\n\nThe EXPLAINs you've enclosed are for SELECTs, yet your bug report\ndescribes INSERTs as being the things that are slow.\n[You may find better performance from using COPY]\n\nAlso, your tests have compared two systems, so it might be that the\nhardware or configuration of one system is different from the other. \n\nIf you could repeat the test on one single system, then this would\nassist in the diagnosis of this bug report. Also, if you could describe\nthe workload that is giving you a problem more exactly, that would help.\nSpecifically, can you confirm that you have run ANALYZE on the tables,\nand also give us some idea of numbers of rows in each table at the time\nyou first run your programs.\n\n> the query optimiser seems to be setting a default strategy of doing\n> sequential scans on an empty table, which is a fast strategy when the table\n> is empty and not particularly full, but obviously on a large table the\n> performance is O(N^2). \n\n> This is clearly a bug. \n\nThere is clearly a problem, but it is not yet clearly a bug. If it is a\nbug, we're interested in solving it as much as you.\n\n> Please let me know if I can\n> provide any more information.\n\nYes, all of the above, plus more. \n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 23 Mar 2005 08:40:30 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "Simon Riggs wrote:\n\n> The EXPLAINs you've enclosed are for SELECTs, yet your bug report\n> describes INSERTs as being the things that are slow.\n> [You may find better performance from using COPY]\n\nSimon,\n\nBrian and I are working together on this problem.\n\nWe're starting with an empty database, creating four tables, and \npopulating those tables with a total of 180,000-200,000 rows. Each \ntable has a primary key, and several of the tables reference foreign \nkeys in other tables. We've written a Python script, using psycopg, \nwhich executes all the queries to create the tables and insert the rows. \n The database is running on the same machine where the script runs.\n\nI've seen similar performance when issuing a COMMIT after each \ninsertion, and also after batching insertions in blocks of 250 per \nCOMMIT, so batching the commits is not helping much. I've looked at the \npossibility of using COPY, but in our production environment it will be \nprohibitive to build a flat file with all this data. I'd rather \ngenerate it on the fly, as we've been able to do with PostgreSQL 7.4.\n\n> Also, your tests have compared two systems, so it might be that the\n> hardware or configuration of one system is different from the other. \n\nWhen running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine \nwith a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were \nable to insert all this data in 5-7 minutes. It's taken a while to \ninstall Postgres 8.0.1 on the same machine, but now I have, and it's \ntaking 40-45 minutes to run the same insert script. This is similar to \nthe performance we saw on another machine, a fast single-CPU AMD64 box \nrunning Gentoo.\n\nI don't think it's a hardware issue. 
I dug around a bit, and found \nsuggestions that this sort of problem could be worked around by breaking \nthe database connection and restarting it after the tables had been \npartially filled. I modified our script to break and re-establish the \ndatabase connection when each table first has 4,000 records inserted, \nand the performance is greatly improved; it now takes only about 3.5 \nminutes to insert 180,000+ rows.\n\nI've since modified this script to build and populate a fifth table with \nover 1.3 million rows. The fifth table has no primary key, but lists a \nforeign key into one of the first four tables. With the above \nmodification (break and re-build the DB connection after 4,000 rows have \nbeen inserted), the whole database can be populated in about 15 minutes. \n I wouldn't have dared try to build a one-million-plus-row table until \nI found this speed-up.\n\n> If you could repeat the test on one single system, then this would\n> assist in the diagnosis of this bug report. Also, if you could describe\n> the workload that is giving you a problem more exactly, that would help.\n> Specifically, can you confirm that you have run ANALYZE on the tables,\n> and also give us some idea of numbers of rows in each table at the time\n> you first run your programs.\n\nJust to see if it would help, I tried modifying the script to run an \nANALYZE against each table after 4,000 insertions, instead of breaking \nand re-establishing the DB connection. I still saw ~45-minute times to \ninsert 180,000 rows. I then tried running ANALYZE against each table \nafter *each* 4,000 rows inserted, and again, it took about 45 minutes to \nrun the insert.\n\nEach table is empty when I first run the program. I am dropping and \nre-creating the database for each test run.\n\n> There is clearly a problem, but it is not yet clearly a bug. If it is a\n> bug, we're interested in solving it as much as you.\n\nI'd be happy to run further tests or provide more details, if they'll \nhelp. We now have a workaround which is allowing us to proceed with our \nproject, but I'd like to know if there's another way to do this. While \nI understand that large or complex databases require careful tuning, I \nwas surprised to see a six- or seven-fold increase in run times between \nPostgreSQL 7.4 and 8.0.1 on the same hardware, on an operation which \nseems fairly straightforward: populating an empty table.\n\nOne other thing which puzzled me: as a test, I tried modifying our \nscript to spit out raw SQL statements instead of connecting to the \ndatabase and performing the inserts itself. Normally, our script \npopulates two tables in one pass, and then populates the third and \nfourth tables in a second pass. I massaged the SQL by hand to group the \ninserts together by table, so that the first table would be entirely \npopulated, then the second, etc. When I ran this SQL script by piping \nit straight into psql, it finished in about four minutes. 
This is \ncomparable to the time it takes to run my modified script which breaks \nand re-establishes the connection to the database.\n\nIt would appear that psql is doing something right here which we have \nhad to go out of our way to get with psycopg.\n\nKeith Browne\[email protected]\n", "msg_date": "Wed, 23 Mar 2005 14:22:07 -0500", "msg_from": "Keith Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "On 2005-03-23, Keith Browne <[email protected]> wrote:\n> One other thing which puzzled me: as a test, I tried modifying our \n> script to spit out raw SQL statements instead of connecting to the \n> database and performing the inserts itself. Normally, our script \n> populates two tables in one pass, and then populates the third and \n> fourth tables in a second pass. I massaged the SQL by hand to group the \n> inserts together by table, so that the first table would be entirely \n> populated, then the second, etc. When I ran this SQL script by piping \n> it straight into psql, it finished in about four minutes.\n\nChanging the order so that the referenced table is fully populated, or at\nleast populated with more than a handful of pages of rows, before doing\n_any_ insert on a referencing table in the same session will avoid the\nmisplan of the FK trigger queries, because when the first insert happens\non a referencing table, there will be no reason for the planner to prefer\na sequential scan. So this result is not surprising at all.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Wed, 23 Mar 2005 19:46:50 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "Andrew - Supernews <[email protected]> writes:\n> Changing the order so that the referenced table is fully populated, or at\n> least populated with more than a handful of pages of rows, before doing\n> _any_ insert on a referencing table in the same session will avoid the\n> misplan of the FK trigger queries, because when the first insert happens\n> on a referencing table, there will be no reason for the planner to prefer\n> a sequential scan. So this result is not surprising at all.\n\nI'm still looking for an example that demonstrates why this is a common\nproblem that we need to worry about. 
ISTM that if an FK reference is\nhit when there are still zero entries in the referenced table, that\ninsertion will fail anyway, and so people wouldn't try to load data in\nsuch an order.\n\nIn the long term it would be good to replan the FK plans when the\nreferenced tables have grown so much that the plan ought to change.\nOnce we have the plan invalidation machinery that Neil is working on,\nit might be fairly practical to do that; but no such thing is going\nto appear in existing release branches of course.\n\nWe could band-aid this in 8.0 as previously suggested (have the planner\nassume > 0 pages when it sees actually 0 pages) but without seeing a\nconcrete example I can't tell if that will fix the complaint or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Mar 2005 15:12:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1 " }, { "msg_contents": "Tom Lane wrote:\n\n> I'm still looking for an example that demonstrates why this is a common\n> problem that we need to worry about. ISTM that if an FK reference is\n> hit when there are still zero entries in the referenced table, that\n> insertion will fail anyway, and so people wouldn't try to load data in\n> such an order.\n\nTom,\n\nWe're filling pairs of tables with rows having nearly a one-to-one \nmapping; very rarely, the second table will have multiple rows \ncorresponding to one row in the first table. When we insert the first \nrow in the second table, therefore, we've just put the corresponding row \ninto the first table, so the foreign key constraint is satisfied.\n\nI can't say how common this sort of thing will be. It appears to me \nthat BUG #1541 is similar to what we're seeing, and a search of the \nmailing lists also turns up this message:\n\nhttp://archives.postgresql.org/pgsql-performance/2004-11/msg00416.php\n\nwhich also describes symptoms similar to what I'm seeing.\n\n> We could band-aid this in 8.0 as previously suggested (have the planner\n> assume > 0 pages when it sees actually 0 pages) but without seeing a\n> concrete example I can't tell if that will fix the complaint or not.\n\nIt sounds like this could work for us, if it would disable sequential \nsearches into a table which grows from 0 to >60,000 rows in one session. \n Is breaking and re-establishing the database session the best \nworkaround, or is there a better way to provide a hint to the planner?\n\nRegards,\n\nKeith Browne\[email protected]\n", "msg_date": "Wed, 23 Mar 2005 15:55:07 -0500", "msg_from": "Keith Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "On 2005-03-23, Tom Lane <[email protected]> wrote:\n> Andrew - Supernews <[email protected]> writes:\n>> Changing the order so that the referenced table is fully populated, or at\n>> least populated with more than a handful of pages of rows, before doing\n>> _any_ insert on a referencing table in the same session will avoid the\n>> misplan of the FK trigger queries, because when the first insert happens\n>> on a referencing table, there will be no reason for the planner to prefer\n>> a sequential scan. So this result is not surprising at all.\n>\n> I'm still looking for an example that demonstrates why this is a common\n> problem that we need to worry about. 
ISTM that if an FK reference is\n> hit when there are still zero entries in the referenced table, that\n> insertion will fail anyway, and so people wouldn't try to load data in\n> such an order.\n\nThink \"1 row\", not \"0 rows\".\n\nIt is not reasonable to assume that _all_ cases of data loading (other than\nperhaps the very largest) will be done by loading entire tables at a time,\nespecially when importing from external sources where the data is\ndifferently structured.\n\n> We could band-aid this in 8.0 as previously suggested (have the planner\n> assume > 0 pages when it sees actually 0 pages) but without seeing a\n> concrete example I can't tell if that will fix the complaint or not.\n\nIt won't; the problem is with 1 page, not 0. \n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Wed, 23 Mar 2005 21:26:55 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "Keith Browne <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm still looking for an example that demonstrates why this is a common\n>> problem that we need to worry about.\n\n> We're filling pairs of tables with rows having nearly a one-to-one \n> mapping; very rarely, the second table will have multiple rows \n> corresponding to one row in the first table. When we insert the first \n> row in the second table, therefore, we've just put the corresponding row \n> into the first table, so the foreign key constraint is satisfied.\n\nHmm ...\n\n>> We could band-aid this in 8.0 as previously suggested (have the planner\n>> assume > 0 pages when it sees actually 0 pages) but without seeing a\n>> concrete example I can't tell if that will fix the complaint or not.\n\n> It sounds like this could work for us,\n\nNo, it wouldn't, because by the time you do the first FK trigger you'd\nhave one row/one page in the referenced table, so it'd still look like a\nseqscan situation to the planner. The only way we could make that work\nis to effectively disable seqscans entirely, by *always* pretending the\ntable size is large enough to trigger an indexscan, even when the\nplanner can plainly see that it's not. This is not an acceptable answer\nIMHO.\n\n[ thinks for a bit... ] The reason 7.4 and before worked reasonably\nfor you is that they assumed the 10/1000 statistics for any\nnever-yet-vacuumed table, whether it is empty or not. (This worked fine\nfor your problem but shot a lot of other people in the foot, because\nthat's what the estimate would stay at even if the table grew vastly\nlarger, so long as it wasn't vacuuumed.) Maybe we could\nput in a hack that detects whether a table has yet been vacuumed, and\nsets 10/1000 as the minimum stats --- not fixed values, but minimum\nvalues that can be overridden when the table is actually larger --- \nuntil it has been vacuumed. I'm not sure if this is workable. It looks\nto me like we'd have to approximate the \"never vacuumed\" condition by\nchecking whether pg_class.reltuples and relpages are both zero, which\nis the initial condition all right but would also arise after a vacuum\nfinds nothing in the table. 
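For reference, the catalog values being described here can be inspected directly. With a hypothetical, never yet vacuumed table, the planner's inputs and the resulting plan look like this (the table name and key column are placeholders, not anything from the report above):

  -- Both columns read zero until the table has been vacuumed or analyzed.
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'parent';

  -- Roughly the shape of the single-row lookup an FK check performs;
  -- while the table is believed to be tiny, a sequential scan wins.
  EXPLAIN SELECT 1 FROM parent WHERE id = 42;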
So basically the planner would never\noptimize the entirely-empty-table condition properly, even after vacuum.\nMaybe this is the least bad alternative for 8.0.*.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Mar 2005 17:13:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1 " }, { "msg_contents": "On 2005-03-23, Tom Lane <[email protected]> wrote:\n> No, it wouldn't, because by the time you do the first FK trigger you'd\n> have one row/one page in the referenced table, so it'd still look like a\n> seqscan situation to the planner. The only way we could make that work\n> is to effectively disable seqscans entirely, by *always* pretending the\n> table size is large enough to trigger an indexscan, even when the\n> planner can plainly see that it's not. This is not an acceptable answer\n> IMHO.\n\nI'm not yet convinced the planner is right to _ever_ choose a seqscan for\nFK triggers. The idea that a seqscan is faster on small tables is\ntraditional, and it has some justification in the case where nothing is\nin the cache (since index scan will touch the disk twice in that case),\nbut I'm finding that for tables of the order of 50 rows (easily fitting in\none page) that index scans are as fast as or faster than seqscans for\ndoing simple one-row lookups provided the tables are in cache.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Thu, 24 Mar 2005 09:14:25 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "I wrote:\n> ... Maybe we could\n> put in a hack that detects whether a table has yet been vacuumed, and\n> sets 10/1000 as the minimum stats --- not fixed values, but minimum\n> values that can be overridden when the table is actually larger --- \n> until it has been vacuumed.\n\nFor lack of any better suggestions, I've done this in HEAD and 8.0\nbranches. It proved simplest to just limit the page estimate to be\nat least 10 pages when relpages == 0. The tuple estimate will be\nderived from that using pre-existing code that estimates the average\ntuple size.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Mar 2005 14:22:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1 " }, { "msg_contents": "On Wed, 2005-03-23 at 14:22 -0500, Keith Browne wrote:\n> Simon Riggs wrote:\n> \n> > The EXPLAINs you've enclosed are for SELECTs, yet your bug report\n> > describes INSERTs as being the things that are slow.\n> > [You may find better performance from using COPY]\n\n> We're starting with an empty database, creating four tables, and \n> populating those tables with a total of 180,000-200,000 rows. Each \n> table has a primary key, and several of the tables reference foreign \n> keys in other tables. We've written a Python script, using psycopg, \n> which executes all the queries to create the tables and insert the rows. \n> The database is running on the same machine where the script runs.\n> \n> I've seen similar performance when issuing a COMMIT after each \n> insertion, and also after batching insertions in blocks of 250 per \n> COMMIT, so batching the commits is not helping much. 
I've looked at the \n> possibility of using COPY, but in our production environment it will be \n> prohibitive to build a flat file with all this data. I'd rather \n> generate it on the fly, as we've been able to do with PostgreSQL 7.4.\n> \n> > Also, your tests have compared two systems, so it might be that the\n> > hardware or configuration of one system is different from the other. \n> \n> When running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine \n> with a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were \n> able to insert all this data in 5-7 minutes. It's taken a while to \n> install Postgres 8.0.1 on the same machine, but now I have, and it's \n> taking 40-45 minutes to run the same insert script. This is similar to \n> the performance we saw on another machine, a fast single-CPU AMD64 box \n> running Gentoo.\n> \n> I don't think it's a hardware issue. I dug around a bit, and found \n> suggestions that this sort of problem could be worked around by breaking \n> the database connection and restarting it after the tables had been \n> partially filled. I modified our script to break and re-establish the \n> database connection when each table first has 4,000 records inserted, \n> and the performance is greatly improved; it now takes only about 3.5 \n> minutes to insert 180,000+ rows.\n> \n> I've since modified this script to build and populate a fifth table with \n> over 1.3 million rows. The fifth table has no primary key, but lists a \n> foreign key into one of the first four tables. With the above \n> modification (break and re-build the DB connection after 4,000 rows have \n> been inserted), the whole database can be populated in about 15 minutes. \n> I wouldn't have dared try to build a one-million-plus-row table until \n> I found this speed-up.\n> \n> > If you could repeat the test on one single system, then this would\n> > assist in the diagnosis of this bug report. Also, if you could describe\n> > the workload that is giving you a problem more exactly, that would help.\n> > Specifically, can you confirm that you have run ANALYZE on the tables,\n> > and also give us some idea of numbers of rows in each table at the time\n> > you first run your programs.\n> \n> Just to see if it would help, I tried modifying the script to run an \n> ANALYZE against each table after 4,000 insertions, instead of breaking \n> and re-establishing the DB connection. I still saw ~45-minute times to \n> insert 180,000 rows. I then tried running ANALYZE against each table \n> after *each* 4,000 rows inserted, and again, it took about 45 minutes to \n> run the insert.\n> \n> Each table is empty when I first run the program. I am dropping and \n> re-creating the database for each test run.\n> \n> > There is clearly a problem, but it is not yet clearly a bug. If it is a\n> > bug, we're interested in solving it as much as you.\n> \n> I'd be happy to run further tests or provide more details, if they'll \n> help. We now have a workaround which is allowing us to proceed with our \n> project, but I'd like to know if there's another way to do this. 
While \n> I understand that large or complex databases require careful tuning, I \n> was surprised to see a six- or seven-fold increase in run times between \n> PostgreSQL 7.4 and 8.0.1 on the same hardware, on an operation which \n> seems fairly straightforward: populating an empty table.\n> \n> One other thing which puzzled me: as a test, I tried modifying our \n> script to spit out raw SQL statements instead of connecting to the \n> database and performing the inserts itself. Normally, our script \n> populates two tables in one pass, and then populates the third and \n> fourth tables in a second pass. I massaged the SQL by hand to group the \n> inserts together by table, so that the first table would be entirely \n> populated, then the second, etc. When I ran this SQL script by piping \n> it straight into psql, it finished in about four minutes. This is \n> comparable to the time it takes to run my modified script which breaks \n> and re-establishes the connection to the database.\n\nOK. Not-a-bug.\n\nYour situation is covered in the manual with some sage advice\nhttp://www.postgresql.org/docs/8.0/static/populate.html\nIt doesn't go into great lengths about all the reasons why those\nrecommendations are good ones - but they are clear.\n\nThere isn't anything in there (yet) that says, \"turn off Referential\nIntegrity too\" and perhaps it should...\n\nThe tables you are loading all refer to one another with referential\nconstraints? Possibly a master-detail relationship, or two major\nentities joined via an associative one. The plan is bad because your FKs\npoint to what are initially empty tables. The best thing to do would be\nto add the RI constraints after the tables are loaded, rather than\nadding them before.\n\nYour program is issuing a Prepare statement, then followed by thousands\nof Execute statements. This reduces much of the overhead of\noptimization, since the plan is cached early in that sequence of\nexecutes. The plan thus remains the same all the way through, though as\nyou observe, that isn't optimal. The initial plan saw an empty table,\nthough it didn't stay empty long. Breaking the connection and\nreattaching forces the plan to be reevaluated; when this is performed\nafter the point at which a more optimal plan will be generated, your\nfurther inserts use the better plan and work continues as fast as\nbefore.\n\npsql doesn't suffer from this problem because it doesn't use Prepared\nstatements. That means you pay the cost of compiling each SQL statement\nat execution time, though gain the benefit of an immediate plan change\nat the optimal moment.\n\nI think we should spawn a TODO item from this:\n\n* Coerce FK lookups to always use an available index\n\nbut that in itself isn't a certain fix and might cause other\ndifficulties elsewhere.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 25 Mar 2005 10:18:37 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1" }, { "msg_contents": "On Fri, 2005-03-25 at 10:18 +0000, Simon Riggs wrote:\n> > When running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine \n> > with a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were \n> > able to insert all this data in 5-7 minutes. It's taken a while to \n> > install Postgres 8.0.1 on the same machine, but now I have, and it's \n> > taking 40-45 minutes to run the same insert script. \n\n<snip>\n\n> OK. 
Not-a-bug.\n>\n> Your situation is covered in the manual with some sage advice\n> http://www.postgresql.org/docs/8.0/static/populate.html\n> It doesn't go into great lengths about all the reasons why those\n> recommendations are good ones - but they are clear.\n\n\nSimon, this begs the question: what changed from 7.4->8.0 to require he\nmodify his script?\n\n\nTIA,\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n", "msg_date": "Fri, 25 Mar 2005 03:50:36 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] BUG #1552: massive performance hit between\t7.4" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I think we should spawn a TODO item from this:\n> * Coerce FK lookups to always use an available index\n\nNo, we aren't doing that.\n\nThe correct TODO item is \"Replan cached plans when table size has\nchanged a lot\" which of course depends on having a framework to do\nreplanning at all. I intend to take a look at that once Neil has\ncreated such a framework ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 09:41:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1552: massive performance hit between 7.4 and 8.0.1 " }, { "msg_contents": "On Fri, 2005-03-25 at 03:50 -0700, Karim Nassar wrote:\n> On Fri, 2005-03-25 at 10:18 +0000, Simon Riggs wrote:\n> > > When running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine \n> > > with a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were \n> > > able to insert all this data in 5-7 minutes. It's taken a while to \n> > > install Postgres 8.0.1 on the same machine, but now I have, and it's \n> > > taking 40-45 minutes to run the same insert script. \n> \n> <snip>\n> \n> > OK. Not-a-bug.\n> >\n> > Your situation is covered in the manual with some sage advice\n> > http://www.postgresql.org/docs/8.0/static/populate.html\n> > It doesn't go into great lengths about all the reasons why those\n> > recommendations are good ones - but they are clear.\n\n> Simon, this begs the question: what changed from 7.4->8.0 to require he\n> modify his script?\n\nGood question. Clearly, some combination of stats-plus-index-selection\ncode changed but I suspect this is a case of more, not less accuracy,\naffecting us here.\n\nThe FK code literally generates SQL statements, then prepares them.\nAFAICS it should be possible to add more code to \nsrc/backend/utils/adt/ritrigger.c to force the prepare of FK code to\navoid seq scans by executing \"SET enable_seqscan = off;\"\nI'll have a play....\n\nBut, the wider point raised by this is whether Prepare should be more\nconservative in the plan it generates. When we Execute a single query,\nit is perfectly OK to go for the \"best\" plan, since it is being executed\nonly this once and we can tell, right now, which one the \"best\" is.\n\nWith a Prepared query, it is clearly going to be executed many times and\nso we should consider that the optimal plan may change over time. \n\nIndex access has more overhead for small tables, but increases by (I\nthink) only logN as the number of rows in a table, N, increases.\nSequential scan access varies by N. 
Thus, as N increases from zero,\nfirst of all Seq Scan is the best plan - but only marginally better than\nIndex access, then this changes at some value of N, then after that\nindex access is the best plan. As N increases, Seq Scan access clearly\ndiverges badly from Indexed access. \n\nThe conservative choice for unknown, or varying N would be index access,\nrather than the best plan available when the query is prepared.\n\nI propose a more general TODO item:\n\n* Make Prepared queries always use indexed access, if it is available\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 25 Mar 2005 15:38:25 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] BUG #1552: massive performance hit" } ]
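As a footnote to this thread, the practical advice given above (load the data first, attach the referential constraint afterwards, and prefer COPY to row-at-a-time INSERTs) can be put together as a sketch like the following; the table names and file paths are placeholders, not the reporters' actual schema:

  BEGIN;
  CREATE TABLE parent (id integer PRIMARY KEY, name text);
  CREATE TABLE child  (id integer PRIMARY KEY,
                       parent_id integer NOT NULL,
                       payload   text);

  -- Bulk-load both tables before any FK exists, so no per-row
  -- referential check (and no cached check plan) is involved.
  COPY parent FROM '/tmp/parent.dat';
  COPY child  FROM '/tmp/child.dat';

  -- Attach the constraint once, after the data is in place, then
  -- refresh statistics so later plans see realistic table sizes.
  ALTER TABLE child
      ADD CONSTRAINT child_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent (id);
  ANALYZE parent;
  ANALYZE child;
  COMMIT;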
[ { "msg_contents": "Folks,\n\nI may (or may not) soon have funding for implementing full table partitioning \nin PostgreSQL. I thought it would be a good idea to discuss with people here \nwho are already using pseudo-partitioning what things need to be added to \nPostgresql in order to make full paritioning a reality; that is, what do \nother databases do that we don't?\n\nImplementations are seperated into phases I and II, II being \nharder-and-optional-stuff that may get done later, I being essential \nfeatures.\n\nPh. I\n-- CREATE TABLE ... WITH PARTITION ON {expression}\n ---- should automatically create expression index on {expression}\n-- INSERT INTO should automatically create new partitions where necessary\n ---- new tables should automatically inherit all constraints, indexes,\n keys of \"parent\" table\n-- UPDATE should automatically move rows between partitions where applicable\n-- Query Planner/Executor should be improved to not always materialize \nparitioned tables used in subqueries and joins.\n\nPh. II\n-- Foreign Keys to/from partitioned tables should become possible\n-- Query Planner/Executor should be improved to only join partitions which are \ncompliant with the query's WHERE or JOIN clauses where reasonable\n-- DELETE FROM should automatically drop empty partitions\n-- setting of WITH PARTITION ON {expression} TABLESPACE should automatically \ncreate a new tablespace for each new partition and its indexes.\n-- It should be possible to create new, empty partitions via a CREATE TABLE \nPARTITION OF {table} ON {value} expression.\n\nAll syntax above is, of course, highly debatable.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 19 Mar 2005 12:02:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "What needs to be done for real Partitioning?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> -- CREATE TABLE ... WITH PARTITION ON {expression}\n\nI'd rather see the partition control stuff as ALTER TABLE commands,\nnot decoration on CREATE TABLE. See the WITH OIDS business we just went\nthrough: adding nonstandard decoration to a standard command isn't good.\n\n> -- INSERT INTO should automatically create new partitions where necessary\n> -- DELETE FROM should automatically drop empty partitions\n\nI am not sure I agree with either of those, and the reason is that they\nwould turn low-lock operations into high-lock operations. DELETE FROM\nwould be particularly bad. Furthermore, who wants to implement DROP\nPARTITION as a DELETE FROM? ISTM the whole point of partitioning is to\nbe able to load and unload whole partitions quickly, and having to\nDELETE all the rows in a partition isn't my idea of quick.\n\n> -- setting of WITH PARTITION ON {expression} TABLESPACE should automatically \n> create a new tablespace for each new partition and its indexes.\n\nThis is a bad idea. Where are you going to create these automatic\ntablespaces? What will they be named? Won't this require superuser\nprivileges? And what's the point anyway?\n\n> -- It should be possible to create new, empty partitions via a CREATE TABLE \n> PARTITION OF {table} ON {value} expression.\n\nHuh? ISTM this confuses establishment of a table's partition rule with\nthe act of pre-creating empty partitions for not-yet-used ranges of\npartition keys. Or are you trying to suggest that a table could be\npartitioned more than one way at a time? 
If so, how?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Mar 2005 15:49:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "\n\tThis is really great !\n\n\tThink about altering the partitioning (this is quite complex) : imagine a \ntable split in several partitions \"archive\" and \"current\" where a row is \nmoved from current to archive when it will not be updated anymore. \nSometimes you can partition on a simple numeric value, or even a boolean \nvalue in this case. Other times you'd have to partition on a date, \n(current month, current year, archive...). So, how to move the partition \nbetween the two tables so that the oldest rows in the current month table \nare moved to the current year table at the end of each month ?\n\n\tSome ideas :\n\thidden field (like oid was) to indicate in which partition the tuple is ?\n\n\nOn Sat, 19 Mar 2005 21:02:38 +0100, Josh Berkus <[email protected]> wrote:\n\n> Folks,\n>\n> I may (or may not) soon have funding for implementing full table \n> partitioning\n> in PostgreSQL. I thought it would be a good idea to discuss with people \n> here\n> who are already using pseudo-partitioning what things need to be added to\n> Postgresql in order to make full paritioning a reality; that is, what do\n> other databases do that we don't?\n>\n> Implementations are seperated into phases I and II, II being\n> harder-and-optional-stuff that may get done later, I being essential\n> features.\n>\n> Ph. I\n> -- CREATE TABLE ... WITH PARTITION ON {expression}\n> ---- should automatically create expression index on {expression}\n> -- INSERT INTO should automatically create new partitions where necessary\n> ---- new tables should automatically inherit all constraints, \n> indexes,\n> keys of \"parent\" table\n> -- UPDATE should automatically move rows between partitions where \n> applicable\n> -- Query Planner/Executor should be improved to not always materialize\n> paritioned tables used in subqueries and joins.\n>\n> Ph. II\n> -- Foreign Keys to/from partitioned tables should become possible\n> -- Query Planner/Executor should be improved to only join partitions \n> which are\n> compliant with the query's WHERE or JOIN clauses where reasonable\n> -- DELETE FROM should automatically drop empty partitions\n> -- setting of WITH PARTITION ON {expression} TABLESPACE should \n> automatically\n> create a new tablespace for each new partition and its indexes.\n> -- It should be possible to create new, empty partitions via a CREATE \n> TABLE\n> PARTITION OF {table} ON {value} expression.\n>\n> All syntax above is, of course, highly debatable.\n>\n\n\n", "msg_date": "Sat, 19 Mar 2005 23:24:39 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "From: \"Tom Lane\" <[email protected]>\n> Josh Berkus <[email protected]> writes:\n> > -- INSERT INTO should automatically create new partitions where\nnecessary\n> > -- DELETE FROM should automatically drop empty partitions\n>\n> I am not sure I agree with either of those, and the reason is that they\n> would turn low-lock operations into high-lock operations.\n\nI second this. 
We're current using an inheritance based partitioning scheme\nwith automatic partition creation in the application code, and have seen at\nleast one case of deadlock due to partition creation.\n\nOther phase II/III items might include:\n\n- Modify the partitioning scheme of a table. In the above example, adding a\n'200504' partition, and moving the '200502' orders into 'ARCHIVE'\n\n- The ability to place a partition in a tablespace. In the example above,\nit would be nice to put the 'ARCHIVE' partition would likely be placed on a\nslower set of disks than the most recent month's partition.\n\n- Global indexes (that is to say, an index spanning the the table rather\nthan an individual partition). This seems counterintuitive, but they've\ndramatically increased performance on one of our Oracle systems and should\nat least be worth considering.\n\n", "msg_date": "Sat, 19 Mar 2005 14:54:20 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "On Sat, Mar 19, 2005 at 11:24:39PM +0100, PFC wrote:\n\n> \tSome ideas :\n> \thidden field (like oid was) to indicate in which partition the tuple \n> \tis ?\n\nI think that to make partitioning really possible we need to have\nmulti-relfilenode tables.\n\nWe probably also need multi-table indexes. Implementing these would be\ngood for inheritance too.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\nDios hizo a Ad�n, pero fue Eva quien lo hizo hombre.\n", "msg_date": "Sat, 19 Mar 2005 18:56:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "PFC <[email protected]> writes:\n> \tSome ideas :\n> \thidden field (like oid was) to indicate in which partition the tuple is ?\n\ntableoid would accomplish that already, assuming that the \"partitioned\ntable\" is effectively a view on separate physical tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Mar 2005 18:02:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Tom, Stacy, Alvaro,\n\n> I'd rather see the partition control stuff as ALTER TABLE commands,\n> not decoration on CREATE TABLE. See the WITH OIDS business we just went\n> through: adding nonstandard decoration to a standard command isn't good.\n\nOK, sure.\n\n> > -- INSERT INTO should automatically create new partitions where necessary\n> > -- DELETE FROM should automatically drop empty partitions\n>\n> I am not sure I agree with either of those, and the reason is that they\n> would turn low-lock operations into high-lock operations. \n\nFor INSERT, I think that's a problem we need to work through. Partitioning \non any scheme where you have to depend on the middleware to create new \npartitions could never be more than a halfway implementation. For one thing, \nif we can't have 100% dependence on the idea that Table M, Partition 34 \ncontains index values Y-Z, then that form of advanced query rewriting (which \nis a huge performance gain on really large tables) becomes inaccessable.\n\nOr are you proposing, instead, that attempts to insert beyond the range raise \nan error?\n\n> DELETE FROM \n> would be particularly bad. Furthermore, who wants to implement DROP\n> PARTITION as a DELETE FROM? 
ISTM the whole point of partitioning is to\n> be able to load and unload whole partitions quickly, and having to\n> DELETE all the rows in a partition isn't my idea of quick.\n\nI mostly threw DELETE in for obvious symmetry. If it's complicated, we can \ndrop it. \n\nAnd you're right, I forgot DROP PARTITION.\n\n> This is a bad idea. Where are you going to create these automatic\n> tablespaces? What will they be named? Won't this require superuser\n> privileges? And what's the point anyway?\n\nStacy White suggests the more sensible version of this:\nALTER TABLE {table} CREATE PARTITION WITH VALUE {value} ON TABLESPACE \n{tablespacename}. Manually creating the partitions in the appropriate \nlocation probably makes the most sense.\n\nThe point, btw, is that if you have a 2TB table, you probably want to put its \npartitions on several seperate disk arrays.\n\n> Huh? ISTM this confuses establishment of a table's partition rule with\n> the act of pre-creating empty partitions for not-yet-used ranges of\n> partition keys. \n\nI don't understand why this would be confusing. If INSERT isn't creating \npartitions on new value breakpoint, then CREATE PARTITION needs to.\n\n> Or are you trying to suggest that a table could be \n> partitioned more than one way at a time? If so, how?\n\nNo.\n\n> - Modify the partitioning scheme of a table. In the above example, adding\n> a '200504' partition, and moving the '200502' orders into 'ARCHIVE'\n\nHmmm ... I don't see the point in automating this. Can you explain?\n\n> - Global indexes (that is to say, an index spanning the the table rather\n> than an individual partition). This seems counterintuitive, but they've\n> dramatically increased performance on one of our Oracle systems and should\n> at least be worth considering.\n\nHmmm, again can you detail this? Maybe some performance examples? It seems \nto me that global indexes might interfere with the maintenance advantages of \npartitioning.\n\n> We probably also need multi-table indexes. Implementing these would be\n> good for inheritance too.\n\nThey would be nice, but I don't see them as a requirement for making \npartitioning work.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 19 Mar 2005 15:29:51 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Sat, Mar 19, 2005 at 12:02:38PM -0800, Josh Berkus wrote:\n\n> Folks,\n> \n> I may (or may not) soon have funding for implementing full table partitioning \n> in PostgreSQL. I thought it would be a good idea to discuss with people here \n> who are already using pseudo-partitioning what things need to be added to \n> Postgresql in order to make full paritioning a reality; that is, what do \n> other databases do that we don't?\n> \n> Implementations are seperated into phases I and II, II being \n> harder-and-optional-stuff that may get done later, I being essential \n> features.\n> \n> Ph. I\n> -- CREATE TABLE ... 
WITH PARTITION ON {expression}\n> ---- should automatically create expression index on {expression}\n\nALTER TABLE might be cleaner, perhaps?\n\n> -- INSERT INTO should automatically create new partitions where necessary\n> ---- new tables should automatically inherit all constraints, indexes,\n> keys of \"parent\" table\n> -- UPDATE should automatically move rows between partitions where applicable\n> -- Query Planner/Executor should be improved to not always materialize \n> paritioned tables used in subqueries and joins.\n\nWould the SELECT also look at the parent table, if it weren't empty? I can\nthink of cases where that'd be useful, especially if an existing table\ncan be partitioned with an ALTER TABLE.\n\nThis covers almost everything I'd want from table partitioning in the\nshort term.\n\n> Ph. II\n> -- Foreign Keys to/from partitioned tables should become possible\n> -- Query Planner/Executor should be improved to only join partitions which are \n> compliant with the query's WHERE or JOIN clauses where reasonable\n> -- DELETE FROM should automatically drop empty partitions\n> -- setting of WITH PARTITION ON {expression} TABLESPACE should automatically \n> create a new tablespace for each new partition and its indexes.\n> -- It should be possible to create new, empty partitions via a CREATE TABLE \n> PARTITION OF {table} ON {value} expression.\n> \n> All syntax above is, of course, highly debatable.\n\nMulti-table indexes would be nice too, though that leads to some problems\nwhen a partition is truncated or dropped, I guess.\n\nCheers,\n Steve\n", "msg_date": "Sat, 19 Mar 2005 15:38:11 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "\n> tableoid would accomplish that already, assuming that the \"partitioned\n> table\" is effectively a view on separate physical tables.\n>\n> \t\t\tregards, tom lane\n\n\tVery good.\n\n\tAlso note the possibility to mark a partition READ ONLY. Or even a table.\n\tIt does not seem very useful but just think that for instance the \"1999\", \n\"2000\" ... \"2004\" partitions of a big archive probably never change. \nREADLONY means we're sure they never change, thus no need to backup them \nevery time. Keeping the example of some DB arranged by years / current \nyear / current month, Just backup the \"current month\" part every day and \nthe \"current year\" every month when you switch partitions.\n\tThis could be achieved also by storing the time of last modification of a \ntable somewhere.\n\n", "msg_date": "Sun, 20 Mar 2005 00:52:45 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>>> -- INSERT INTO should automatically create new partitions where necessary\n>>> -- DELETE FROM should automatically drop empty partitions\n>> \n>> I am not sure I agree with either of those, and the reason is that they\n>> would turn low-lock operations into high-lock operations. \n\n> For INSERT, I think that's a problem we need to work through.\n\nPossibly, but I'm concerned about locking and deadlock issues. The\nreason that this is iffy is you would start the operation with only\nan INSERT-grade lock, and then discover that you needed to add a\npartition, which is surely something that needs an exclusive-grade\nlock (consider two sessions trying to add the same partition at the\nsame time). 
So I don't see how to do it without lock upgrading,\nand lock upgrading is always a recipe for deadlocks.\n\nThe DELETE case is even worse because you can't physically release\nstorage until you're sure nothing in it is needed anymore by any open\ntransaction --- that introduces VACUUM-like issues as well as the\ndeadlock problem.\n\n> Or are you proposing, instead, that attempts to insert beyond the\n> range raise an error?\n\nThat was what I had in mind --- then adding partitions would require\na manual operation. This would certainly be good enough for \"phase I\"\nIMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Mar 2005 19:03:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> We probably also need multi-table indexes.\n\nAs Josh says, that seems antithetical to the main point of partitioning,\nwhich is to be able to rapidly remove (and add) partitions of a table.\nIf you have to do index cleaning before you can drop a partition, what's\nthe point of partitioning?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Mar 2005 19:05:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "On Sat, Mar 19, 2005 at 07:03:19PM -0500, Tom Lane wrote:\n> Possibly, but I'm concerned about locking and deadlock issues. The\n> reason that this is iffy is you would start the operation with only\n> an INSERT-grade lock, and then discover that you needed to add a\n> partition, which is surely something that needs an exclusive-grade\n> lock (consider two sessions trying to add the same partition at the\n> same time). So I don't see how to do it without lock upgrading,\n> and lock upgrading is always a recipe for deadlocks.\n\nWhat about letting something periodical (say, vacuum) do this?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 20 Mar 2005 01:16:23 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Sat, Mar 19, 2005 at 07:05:53PM -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > We probably also need multi-table indexes.\n> \n> As Josh says, that seems antithetical to the main point of partitioning,\n> which is to be able to rapidly remove (and add) partitions of a table.\n> If you have to do index cleaning before you can drop a partition, what's\n> the point of partitioning?\n\nHmm. You are right, but without that we won't be able to enforce\nuniqueness on the partitioned table (we could only enforce it on each\npartition, which would mean we can't partition on anything else than\nprimary keys if the tables have one). IMHO this is something to\nconsider.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n", "msg_date": "Sun, 20 Mar 2005 00:29:17 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" 
}, { "msg_contents": "On Sun, 2005-03-20 at 00:29 -0400, Alvaro Herrera wrote:\n> On Sat, Mar 19, 2005 at 07:05:53PM -0500, Tom Lane wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> > > We probably also need multi-table indexes.\n> > \n> > As Josh says, that seems antithetical to the main point of partitioning,\n> > which is to be able to rapidly remove (and add) partitions of a table.\n> > If you have to do index cleaning before you can drop a partition, what's\n> > the point of partitioning?\n> \n> Hmm. You are right, but without that we won't be able to enforce\n> uniqueness on the partitioned table (we could only enforce it on each\n> partition, which would mean we can't partition on anything else than\n> primary keys if the tables have one). IMHO this is something to\n> consider.\n\nCould uniqueness across partitions be checked for using a mechanism\nsimilar to what a deferred unique constraint would use (trigger / index\ncombination)?\n\n", "msg_date": "Sat, 19 Mar 2005 23:42:17 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Hmm. You are right, but without that we won't be able to enforce\n> uniqueness on the partitioned table (we could only enforce it on each\n> partition, which would mean we can't partition on anything else than\n> primary keys if the tables have one). IMHO this is something to\n> consider.\n\nWell, partitioning on the primary key would be Good Enough for 95% or\n99% of the real problems out there. I'm not excited about adding a\nlarge chunk of complexity to cover another few percent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Mar 2005 23:47:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "\nJosh Berkus <[email protected]> writes:\n\n> -- INSERT INTO should automatically create new partitions where necessary\n> ---- new tables should automatically inherit all constraints, indexes,\n> keys of \"parent\" table\n\nI think you're going about this backwards.\n\nPhase I should be an entirely manual system where you add and remove\npartitions manually and create and drop indexes you want manually. You need\nthese low level interfaces anyways for a complete system, it doesn't make\nsense to have everything automatic and then later try to wedge in a low level\ninterface. Only once you have that do you then start offering options to do\nthese things automatically.\n\nI also think there are a few other components mixed up in your proposal that\nare really not integral to partitioned tables. Tablespaces and expression\nindexes may well be useful features to use in combination with partitioned\ntables, but they shouldn't be required or automatic.\n\n From my experience with Oracle I think there's one big concept that makes the\nwhole system make a lot more sense: individual partitions are really tables.\nThe partitioned tables themselves are just meta-objects like views.\n\nOnce you get that concept the whole featureset makes a lot more sense. You can\npull a partition out of a partitioned table and it becomes a normal table. You\ncan take a normal table and put it into a partitioned table. 
Creating a new\npartition or altering a partition is just the same as creating or altering a\nnew table (except for the actual data definition part).\n\nGiven that understanding it's clear that tablespaces are an entirely\northogonal feature. One that happens to play well with partitioned tables, but\nnot one that partitioned tables need any special support for. When you create\na new partition or create a table intending to add it as a partition to a\npartitioned table you specify the tablespace just as you would normally do.\n\nIt's also clear that the last thing you want is an index on the partition key.\nA big part of the advantage of partitioned tables is precisely that you get\nthe advantage of an index on a column without the extra expense.\n\nIt would also be reasonable to allow clustering individual partitions;\ncreating table or column constraints on some partitions and not others; or\neven allow having indexes on some partitions and not others. In general the\nonly operations that you wouldn't be able to do on an individual partition\nwould be operations that make the column definitions incompatible with the\nparent table.\n\nThe $64 question is how to specify the partitioning rules. That is, the rule\nfor determining which partition an insert should go into and which partitions\nto look for records in. Oracle handles this by specifying a list of columns\nwhen creating the partitioned table and then specifying either a range or\nspecific values for each individual partition. I can imagine other approaches\nbut none that allow for the planner and optimizer to take as much advantage of\nthe information. \n\nSo I think Phase I should look like:\n\n An ALTER TABLE command to make an inherited table \"abstract\" in the object\n oriented sense. That is, no records can be inserted in the parent table. If\n you follow the oracle model this is also where you specify the partition\n key. There's no index associated with this partition key though.\n\n A command to create a new partition, essentially syntactic sugar for a\n CREATE TABLE with an implied INHERITS clause and a constraint on the\n partition key. If you follow the oracle model then you explicitly specify\n which range or specific value of the partition key this partition holds.\n\n A command to remove a partition from the partitioned table and turn it into\n a regular table.\n\n A command to take a regular table and turn it into a partition. Again here\n you specify the range or value of the partition key. There has to be some\n verification that the table really holds the correct data though. Perhaps\n this could be skipped by providing a table with a properly constructed\n constraint in place.\n\n Magic to make INSERT/UPDATE figure out the correct partition to insert the new\n record. (Normally I would have suggested that UPDATE wasn't really necessary\n but in Postgres it seems like it would fall out naturally from having INSERT.)\n\nPhase II would be planner and executor improvements to take advantage of the\ninformation to speed up queries and allow for individual partitions to be\nread-only or otherwise inaccessible without impeding queries that don't need\nthat partition.\n\nPhase III would be autopilot features like having new partitions automatically\ncreated and destroyed and being able to specify in advance rules for\ndetermining which tablespaces to use for these new partitions.\n\nI'm not sure whether to put global indexes under Phase II or III. Personally I\nthink there's no point to them at all. 
They defeat the whole point of\npartitioned tables. Once you have global indexes adding and removing\npartitions becomes a lot harder and slower. You may as well have kept\neverything in one table in the first place. But apparently some people find\nthem useful.\n\n-- \ngreg\n\n", "msg_date": "20 Mar 2005 01:14:01 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "\n\n> It would also be reasonable to allow clustering individual partitions;\n> creating table or column constraints on some partitions and not others;\n\n\tI have a session mamagement which works like that, using views now.\n\n\tsessions.online is a table of the online sessions. It has a UNIQUE on \nuser_id.\n\tsessions.archive contains all the closed sessions. Obviously it does not \nhave a UNIQUE on user_id.\n\n\n\n\n", "msg_date": "Sun, 20 Mar 2005 11:20:23 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> So I think Phase I should look like:\n\n> An ALTER TABLE command to make an inherited table \"abstract\" in the object\n> oriented sense. That is, no records can be inserted in the parent table. If\n> you follow the oracle model this is also where you specify the partition\n> key. There's no index associated with this partition key though.\n\nCheck.\n\n> A command to create a new partition, essentially syntactic sugar for a\n> CREATE TABLE with an implied INHERITS clause and a constraint on the\n> partition key. If you follow the oracle model then you explicitly specify\n> which range or specific value of the partition key this partition holds.\n\nCheck.\n\n> A command to remove a partition from the partitioned table and turn it into\n> a regular table.\n\nUgh. Why? You can access the table directly anyway.\n\n> A command to take a regular table and turn it into a partition.\n\nDouble ugh. Verifying that the table matches the partition scheme seems\nlike a lot of ugly, bug-prone, unnecessary code. What's the use case\nfor this anyway?\n\nThose last two are *certainly* not Phase I requirements, and I don't\nthink we need them at all ever.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Mar 2005 12:58:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "From: \"Tom Lane\" <[email protected]>\n> Alvaro Herrera <[email protected]> writes:\n> > We probably also need multi-table indexes.\n> As Josh says, that seems antithetical to the main point of partitioning,\n> which is to be able to rapidly remove (and add) partitions of a table.\n> If you have to do index cleaning before you can drop a partition, what's\n> the point of partitioning?\n\nGlobal indexes (as opposed to partition local indexes) are useful in cases\nwhere you have a large number of partitions, index columns different than\nthe partition key, and index values that limit the query to just a subset of\nthe partitions.\n\nThe two domains that I'm most familiar with are warehouse management, and\nthe film industry. In both these cases it's logical to partition on\nday/week/month, it's frequently important to keep a lot of history, and it's\ncommon to have products that only show activity for a few months. 
In one of\nour production systems we have 800 partitions (by week, with a lot of\nhistory), but a popular product might have only 20 weeks worth of activity.\nSelecting records for the product requires at least 800 random-access reads\nif you have local indexes on 'product_no', 780 of which just tell the\nexecutor that the partition doesn't include any information on the product.\n\nThis is definitely a phase II item, but as I said before it's worth\nconsidering since good DBAs can do a lot with global indexes.\n\nFWIW, we see large benefits from partitioning other than the ability to\neasily drop data, for example:\n\n- We can vacuum only the active portions of a table\n- Postgres automatically keeps related records clustered together on disk,\nwhich makes it more likely that the blocks used by common queries can be\nfound in cache\n- The query engine uses full table scans on the relevant sections of data,\nand quickly skips over the irrelevant sections\n- 'CLUSTER'ing a single partition is likely to be significantly more\nperformant than clustering a large table\n\nIn fact, we have yet to drop a partition on any of our Oracle or Postgres\nproduction systems.\n\n", "msg_date": "Sun, 20 Mar 2005 11:59:29 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Alvaro, Greg, Tom,\n\n> Hmm. You are right, but without that we won't be able to enforce\n> uniqueness on the partitioned table (we could only enforce it on each\n> partition, which would mean we can't partition on anything else than\n> primary keys if the tables have one). IMHO this is something to\n> consider.\n\nSure. However, for most partitioned use cases, the partition column will be \npart of the real key of the table (for example, for a security log, the real \nkey might be (timestamp, machine, application, event_type) with the partition \non extract(hour from timestamp)). As a result, there is no need to enforce \ninter-partition uniqueness; the paritioning scheme enforces it already.\n\nThe only need for inter-partition uniqueness is on surrogate integer keys. \nThis can already be enforced de-facto simply by using a sequence. While it \nwould be possible to create a uniqueness check that spans partitions, it \nwould be very expensive to do so, thus elminating some of the advantage of \npartitioning in the first place. I'm not saying that we won't want this \nsome day as an option, I just see it as a Phase III refinement.\n\nGreg, first of all, thanks for helping clean up my muddy thinking about \nimplementing partitions. Comments below:\n\n> Phase I should be an entirely manual system where you add and remove\n> partitions manually and create and drop indexes you want manually. You need\n> these low level interfaces anyways for a complete system, it doesn't make\n> sense to have everything automatic and then later try to wedge in a low\n> level interface. Only once you have that do you then start offering options\n> to do these things automatically.\n\nThis makes sense. Thanks!\n\n> whole system make a lot more sense: individual partitions are really\n> tables. The partitioned tables themselves are just meta-objects like views.\n\nSo, like the current pseudo-partitioning implementation, partitions would be \n\"full tables\" just with some special rules for query-rewriting when they are \npulled. 
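For readers who have not set up the pseudo-partitioning being referred to, a minimal sketch of the inheritance-plus-CHECK arrangement, with hypothetical names, looks like this:

  -- One parent plus one child table per month; each child declares,
  -- via CHECK, the slice of the key range it holds.
  CREATE TABLE events (
      event_time timestamptz NOT NULL,
      payload    text
  );

  CREATE TABLE events_2005_03 (
      CHECK (event_time >= '2005-03-01' AND event_time < '2005-04-01')
  ) INHERITS (events);

  CREATE INDEX events_2005_03_time_idx ON events_2005_03 (event_time);

  -- Rows go straight into the child; a query against the parent scans
  -- the parent and every child table.
  INSERT INTO events_2005_03 VALUES ('2005-03-19 12:00:00', 'example row');
  SELECT count(*) FROM events WHERE event_time >= '2005-03-01';

As things stand the planner visits every child regardless of the WHERE clause, which is exactly the query-rewriting gap being discussed here.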
This makes sense, I think I just got carried away in another \ndirection.\n\n> It's also clear that the last thing you want is an index on the partition\n> key. A big part of the advantage of partitioned tables is precisely that\n> you get the advantage of an index on a column without the extra expense.\n\nWell, you need it with the current pseudo-partitioning. What would allow us \nto eliminate indexing the partition key is special re-writing rules that only \npull the partitions compliant with the outer query. Until that step takes \nplace, the indexes are very much needed. So maybe the advanced planner \nrewriting is a Phase I item, not a Phase II item?\n\n> The $64 question is how to specify the partitioning rules. That is, the\n> rule for determining which partition an insert should go into and which\n> partitions to look for records in. Oracle handles this by specifying a list\n> of columns when creating the partitioned table and then specifying either a\n> range or specific values for each individual partition. I can imagine other\n> approaches but none that allow for the planner and optimizer to take as\n> much advantage of the information.\n\nWell, I would think that specifying an expression that defines a new partition \nat each change in value (like EXTRACT(day FROM timestamp) on a time-based \npartitioning) would cover 90% of implemenations and be a lot simpler to \nadminister. The Oracle approach has the advantage of allowing \"custom \nparitioning\" at the expense of greater complexity.\n\n> A command to remove a partition from the partitioned table and turn it\n> into a regular table.\n>\n> A command to take a regular table and turn it into a partition. Again\n> here you specify the range or value of the partition key. There has to be\n> some verification that the table really holds the correct data though.\n> Perhaps this could be skipped by providing a table with a properly\n> constructed constraint in place.\n\nLike Tom, I don't see the point in these. What do they do that CREATE TABLE \nAS and/or INSERT INTO do not?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 20 Mar 2005 12:03:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Sun, 20 Mar 2005, Josh Berkus wrote:\n\n>\n>> whole system make a lot more sense: individual partitions are really\n>> tables. The partitioned tables themselves are just meta-objects like views.\n\nIf partition is a table, so I could define different indices for them ?\nIn our prototype of scaled full text search we create another index\nwhich is optimized for \"archived\" (not changed) data - it's sort of\nstandard inverted index which is proven to be scaled, while tsearch2's index\nis good for \"online\" data. 
All interfaces ( dictionaries, parsers, ranking)\nare the same, so it's possible to combine search results.\nThis is rather easy to implement using table inheritance, but I'd like\nto do this with partitioning\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Sun, 20 Mar 2005 23:22:57 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> > A command to remove a partition from the partitioned table and turn it into\n> > a regular table.\n> \n> Ugh. Why? You can access the table directly anyway.\n> \n> > A command to take a regular table and turn it into a partition.\n> \n> Double ugh. Verifying that the table matches the partition scheme seems\n> like a lot of ugly, bug-prone, unnecessary code. What's the use case\n> for this anyway?\n> \n> Those last two are *certainly* not Phase I requirements, and I don't\n> think we need them at all ever.\n\nThese are effectively equivalent to \"ALTER TABLE RENAME\". Without these\ncommands you would be in pretty much the same position as a DBA without the\nability to rename tables.\n\nThe heart of partitioned tables is being able to load and unload entire\npartitions quickly. You have to have somewhere to \"unload\" them too. Most\npeople aren't happy just watching their data disappear entirely. They want to\nmove them other tables or even other databases. \n\nSimilarly, they have to have somewhere to load them from. They're usually not\nhappy loading data directly into their production data warehouse tables\nwithout manipulating the data, or doing things like clustering or indexing.\n\nYou could argue for some sort of setup where you could take a partition\n\"offline\" during which you could safely do things like export or manipulate\nthe data. But that's awfully limiting. What if I want to do things like add\ncolumns, or change data types, or any other manipulation that breaks the\nsymmetry with the production partitioned table.\n\nI don't think it's really hard at all to check that the table matches the\npartition scheme. You can just require that there be an existing table\nconstraint in place that matches the partitioning scheme. I think you can even\nbe fascist about the exact syntax of the constraint fitting precisely a\nspecified format.\n\n-- \ngreg\n\n", "msg_date": "20 Mar 2005 17:18:35 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" 
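To make the manual Phase I steps being discussed here concrete, a minimal sketch of how they map onto the existing inheritance machinery; every table and column name below is invented purely for illustration:

    -- the parent that applications query; today it can still receive
    -- rows itself, which is what the proposed "abstract" flag would prevent
    CREATE TABLE sales (
        sale_date  date    NOT NULL,
        product_no integer NOT NULL,
        amount     numeric
    );

    -- "add a partition": a child table whose CHECK constraint states
    -- exactly which slice of the partition key it holds
    CREATE TABLE sales_2005_03 (
        CHECK (sale_date >= DATE '2005-03-01'
           AND sale_date <  DATE '2005-04-01')
    ) INHERITS (sales);

    -- getting rid of an old slice is just
    DROP TABLE sales_2004_03;

The exchange-in/exchange-out commands argued for above would essentially formalise the same objects, with the CHECK constraint serving as the proof that a table fits the partitioning scheme.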
}, { "msg_contents": "\"Stacy White\" <[email protected]> writes:\n> FWIW, we see large benefits from partitioning other than the ability to\n> easily drop data, for example:\n\n> - We can vacuum only the active portions of a table\n> - Postgres automatically keeps related records clustered together on disk,\n> which makes it more likely that the blocks used by common queries can be\n> found in cache\n> - The query engine uses full table scans on the relevant sections of data,\n> and quickly skips over the irrelevant sections\n> - 'CLUSTER'ing a single partition is likely to be significantly more\n> performant than clustering a large table\n\nGlobal indexes would seriously reduce the performance of both vacuum and\ncluster for a single partition, and if you want seq scans you don't need\nan index for that at all. So the above doesn't strike me as a strong\nargument for global indexes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Mar 2005 18:01:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> You could argue for some sort of setup where you could take a partition\n> \"offline\" during which you could safely do things like export or manipulate\n> the data. But that's awfully limiting. What if I want to do things like add\n> columns, or change data types, or any other manipulation that breaks the\n> symmetry with the production partitioned table.\n\n[ scrapes eyebrows off ceiling... ] You don't really expect to be able\nto do that kind of thing to just one partition do you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Mar 2005 18:05:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Well, I would think that specifying an expression that defines a new partition \n> at each change in value (like EXTRACT(day FROM timestamp) on a time-based \n> partitioning) would cover 90% of implemenations and be a lot simpler to \n> administer. The Oracle approach has the advantage of allowing \"custom \n> paritioning\" at the expense of greater complexity.\n\nHm. This is where I might be less helpful. Once you're submersed in one way of\ndoing things it can be hard to think outside the box like this.\n\nBut I fear this scheme might be harder to actually take advantage of. If I do\na query like \n\n WHERE timestamp BETWEEN '2005-01-01 11:00' AND '2005-01-01 12:00'\n\nHow do you determine which partitions that range will cover?\n\nAlso, it seems like it would be inconvenient to try to construct expressions\nto handle things like \"start a new partition ever 1 million values\".\n\nAnd worse, how would you handle changing schemes with this? Like, say we want\nto switch from starting one partition per month to starting one partition per\nweek?\n\n\n\nI think some actual use cases might be helpful for you. I can contribute an\ninteresting one, though I have to be intentionally vague even though I don't\nwork on that system any more.\n\nWe had a table with a layout like:\n\ntxnid serial,\ngroupid integer,\ndata...\n\nEach day a cron job created 6 new groups (actually later that was changed to\nsome other number). It then added a new partition to handle the range of the\nnew day's groups. 
Later another cron job exchanged out the partition from a\nweek earlier and exported that table, transfered it to another machine and\nloaded it there.\n\ntxnid was a unique identifier but we couldn't have a unique constraint because\nthat would have required a global index. That didn't cause any problems since\nit was a sequence generated column anyways.\n\nWe did have a unique index on <groupid,txnid> which is a local index because\ngroupid was the partition key. In reality nothing in our system ever really\nneeded a txn without knowing which group it came from anyways, so it was easy\nto change our queries to take advantage of this.\n\nWe had a lot of jobs, some *extremely* performance sensitive that depended on\nbeing able to scan the entire list of txns for a given day or a given set of\ngroupids. The partitions meant it could do a full table scan which made these\nextremely fast.\n\nThis was with Oracle 8i. All partition keys in 8i were ranges. In 9 Oracle\nadded the ability to make partition reference specific id values. Sort of like\nhow you're describing having a key expression. We might have considered using\nthat scheme with groupid but then it would have meant adding a bunch of new\npartitions each day and having some queries that would involve scanning\nmultiple partitions.\n\n-- \nGreg\n\n", "msg_date": "20 Mar 2005 18:14:59 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > You could argue for some sort of setup where you could take a partition\n> > \"offline\" during which you could safely do things like export or manipulate\n> > the data. But that's awfully limiting. What if I want to do things like add\n> > columns, or change data types, or any other manipulation that breaks the\n> > symmetry with the production partitioned table.\n> \n> [ scrapes eyebrows off ceiling... ] You don't really expect to be able\n> to do that kind of thing to just one partition do you?\n\nWell no. That's exactly why I would want to pull the partition out of the\npartitioned table so that I can then do whatever work I need to archive it\nwithout affecting the partitioned table.\n\nTake an analogous situation. I have a huge log file I want to rotate. The\nquickest most efficient way to do this would be to move it aside, HUP the\ndaemon (or whatever else I have to do to get it to open a new file) then gzip\nand archive the old log files.\n\n-- \ngreg\n\n", "msg_date": "20 Mar 2005 22:33:12 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Global indexes would seriously reduce the performance of both vacuum and\n> cluster for a single partition, and if you want seq scans you don't need\n> an index for that at all. So the above doesn't strike me as a strong\n> argument for global indexes ...\n\nI think he means some sort of plan for queries like\n\n select * from invoices where customer_id = 1\n\nwhere customer 1 only did business with us for two years. One could imagine\nsome kind of very coarse grained bitmap index that just knows which partitions\ncustomer_id=1 appears in, and then does a sequential scan of those partitions.\n\nBut I think you can do nearly as well without using global indexes of any\ntype. 
Assuming you had local indexes on customer_id for each partition and\nseparate histograms for each partition the planner could conclude that it\nneeds sequential scans for some partitions and a quick index lookup expecting\n0 records for other partitions.\n\nNot as good as pruning partitions entirely but if you're doing a sequential\nscan the performance hit of a few index lookups isn't a problem.\n\n-- \ngreg\n\n", "msg_date": "20 Mar 2005 22:47:43 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "From: \"Tom Lane\" <[email protected]>\n> \"Stacy White\" <[email protected]> writes:\n> > FWIW, we see large benefits from partitioning other than the ability to\n> > easily drop data, for example:\n>\n> > - We can vacuum only the active portions of a table\n> > - Postgres automatically keeps related records clustered together on\ndisk,\n> > which makes it more likely that the blocks used by common queries can be\n> > found in cache\n> > - The query engine uses full table scans on the relevant sections of\ndata,\n> > and quickly skips over the irrelevant sections\n> > - 'CLUSTER'ing a single partition is likely to be significantly more\n> > performant than clustering a large table\n> Global indexes would seriously reduce the performance of both vacuum and\n> cluster for a single partition, and if you want seq scans you don't need\n> an index for that at all. So the above doesn't strike me as a strong\n> argument for global indexes ...\n\nTom, this list was in response to your question \"If you have to do index\ncleaning before you can drop a partition, what's the point of\npartitioning?\". I was trying to make the point that partioning isn't just\nabout being able to quickly drop data. The argument for global indexes came\nin the form of my war story and the description of the conditions under\nwhich global indexes will perform better than local indexes (see my original\nemail for details) . But, like I said, this would definitely be a phase\nII/III item.\n\n", "msg_date": "Sun, 20 Mar 2005 21:14:37 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning? " }, { "msg_contents": "From: \"Greg Stark\" <[email protected]>\n> Tom Lane <[email protected]> writes:\n> Not as good as pruning partitions entirely but if you're doing a\nsequential\n> scan the performance hit of a few index lookups isn't a problem.\n\nGreg, I think you've got the right idea. For large databases, though, it\nwon't be uncommon to have large numbers of partitions, in which case we're\nnot talking about a few index lookups. The database I used in my example\nwasn't huge, but the table in question had over 800 partitions. A larger\ndatabase could have thousands. I suppose the importance of global indexes\ndepends on the sizes of the databases your target audience is running.\n\nHere's some more detail on our real-world experience: The group made the\ndecision to partition some of the larger tables for better performance. 
The\nidea that global indexes aren't useful is pretty common in the database\nworld, and 2 or 3 good DBAs suggested that the 'product_no' index be local.\nBut with the local indexes, performance on some queries was bad enough that\nthe group actually made the decision to switch back to unpartitioned tables.\n(The performance problems came about because of the overhead involved in\nsearching >800 indices to find the relevant rows).\n\nLuckily they that had the chance to work with a truly fantastic DBA (the\nauthor of an Oracle Press performance tuning book even) before they could\nswitch back. He convinced them to make some of their indexes global.\nPerformance dramatically improved (compared with both the unpartitioned\nschema, and the partitioned-and-locally-indexed schema), and they've since\nstayed with partitioned tables and a mix of local and global indexes.\n\nBut once again, I think that global indexes aren't as important as the Phase\nI items in any of the Phase I/Phase II breakdowns that have been proposed in\nthis thread.\n\n", "msg_date": "Sun, 20 Mar 2005 21:40:10 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On L, 2005-03-19 at 23:47 -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Hmm. You are right, but without that we won't be able to enforce\n> > uniqueness on the partitioned table (we could only enforce it on each\n> > partition, which would mean we can't partition on anything else than\n> > primary keys if the tables have one). IMHO this is something to\n> > consider.\n> \n> Well, partitioning on the primary key would be Good Enough for 95% or\n> 99% of the real problems out there. I'm not excited about adding a\n> large chunk of complexity to cover another few percent.\n\nThat automatically means that partitioning expression has to be a range\nover PK. (you dont want to have every tuple in separate tabel :)\n\nAnd it also means that you have to automatically create new partitions.\n\nAre you sure that partitioning on anything else than PK would be\nsignificantly harder ?\n\nI have a case where I do manual partitioning over start_time\n(timestamp), but the PK is an id from a sequence. They are almost, but\nnot exactly in the same order. And I don't think that moving the PK to\nbe (start_time, id) just because of \"partitioning on PK only\" would be a\ngood design in any way.\n\nSo please don't design the system to partition on PK only.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 21 Mar 2005 18:07:25 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On L, 2005-03-19 at 12:02 -0800, Josh Berkus wrote:\n> Folks,\n> \n> I may (or may not) soon have funding for implementing full table partitioning \n> in PostgreSQL. 
\n\nIf you don't get it, contact me as there is a small possibility that I\nknow a company interested enough to fund (some) of it :)\n\n> I thought it would be a good idea to discuss with people here \n> who are already using pseudo-partitioning what things need to be added to \n> Postgresql in order to make full paritioning a reality; that is, what do \n> other databases do that we don't?\n\nAs these are already discussed in this thread, I'll try to outline a\nmethod of providing a global index (unique or not) in a way that will\nstill make it possible to quickly remove (and not-quite-so-quickly add)\na partition.\n\nThe structure is inspired by the current way of handling >1Gb tables.\n\nAs each tid consists of 32 bit page pointer we have pointerspace of\n35184372088832 bytes/index (4G of 8k pages). currently this is directly\npartitioned mapped to 1Gbyte/128kpage files, but we can, with minimal\nchanges to indexes, put a lookup table between index and page lookup.\n\nIn case of global index over partitions this table could point to 1G\nsubtables from different partition tables.\n\nThe drop partition table can also be fast - just record the pages in\nlookup table as deleted - means one change per 1G of dropped table.\nThe next vacuum should free pointers to deleted subfiles.\n\nAdding partitions is trickier - \n\nIf the added table forms part of partitioning index (say names from C to\nE), and there is a matching index on subtable, \n\nThen that part of btree can probably copied into the main btree index as\na tree btanch, which should be relatively fast (compared to building it\none tid at a time).\n\nElse adding the the index could probably also be sped up by some kind of\nindex merge - faster than building from scratch but slower than above.\n\n\nTo repeat - the global index over partitioned table should have te same\nstructure as our current b-tree index, only with added map of 128k index\npartitions to 1G subfiles of (possibly different) tables. This map will\nbe quite small - for 1Tb of data it will be only 1k entries - this will\nfit in cache on all modern processors and thus should add only tiny\nslowdown from current direct tid.page/128k method\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 21 Mar 2005 18:32:53 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Stacy,\n\n> Luckily they that had the chance to work with a truly fantastic DBA (the\n> author of an Oracle Press performance tuning book even) before they could\n> switch back. He convinced them to make some of their indexes global.\n> Performance dramatically improved (compared with both the unpartitioned\n> schema, and the partitioned-and-locally-indexed schema), and they've since\n> stayed with partitioned tables and a mix of local and global indexes.\n\nHmmm. Wouldn't Greg's suggestion of a bitmap index which holds information on \nwhat values are found in what partition also solve this? Without 1/2 of \nthe overhead imposed by global indexes?\n\nI can actually see such a bitmap as being universally useful to the \npartitioning concept ... for one, it would resolve the whole \"partition on \n{value}\" issue.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 21 Mar 2005 09:55:03 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What needs to be done for real Partitioning?" 
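For what it's worth, the coarse partition map Josh is describing can already be approximated by hand today with a small summary table; this is only a sketch, with invented names, to make the idea concrete:

    -- one row per (value, partition) pair; tiny next to the data itself
    CREATE TABLE sales_product_map (
        product_no  integer NOT NULL,
        child_table text    NOT NULL,
        PRIMARY KEY (product_no, child_table)
    );

    -- maintained by the load job, then consulted to decide which
    -- partitions are worth touching for a given product at all
    SELECT child_table FROM sales_product_map WHERE product_no = 1234;

A built-in bitmap of "values present per partition" would do the same job without the manual bookkeeping.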
}, { "msg_contents": "On L, 2005-03-19 at 19:03 -0500, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> >>> -- INSERT INTO should automatically create new partitions where necessary\n> >>> -- DELETE FROM should automatically drop empty partitions\n> >> \n> >> I am not sure I agree with either of those, and the reason is that they\n> >> would turn low-lock operations into high-lock operations. \n> \n> > For INSERT, I think that's a problem we need to work through.\n> \n> Possibly, but I'm concerned about locking and deadlock issues. The\n> reason that this is iffy is you would start the operation with only\n> an INSERT-grade lock, and then discover that you needed to add a\n> partition, which is surely something that needs an exclusive-grade\n> lock (consider two sessions trying to add the same partition at the\n> same time). So I don't see how to do it without lock upgrading,\n> and lock upgrading is always a recipe for deadlocks.\n> \n> The DELETE case is even worse because you can't physically release\n> storage until you're sure nothing in it is needed anymore by any open\n> transaction --- that introduces VACUUM-like issues as well as the\n> deadlock problem.\n> \n\nIf we go with my proposal (other post in this thread) of doing most of\nthe partitioning in the level between logical file and physikal 1Gb\nstorage files, then adding a partition should be nearly the same as\ncrossing the 1G boundary is now.\n\nremoving the partition would be just plain vacuum (if we can make pg\nshring each 1G subfile independently)\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 21 Mar 2005 20:23:15 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On P, 2005-03-20 at 00:52 +0100, PFC wrote:\n> > tableoid would accomplish that already, assuming that the \"partitioned\n> > table\" is effectively a view on separate physical tables.\n> >\n> > \t\t\tregards, tom lane\n> \n> \tVery good.\n> \n> \tAlso note the possibility to mark a partition READ ONLY. Or even a table.\n> \tIt does not seem very useful but just think that for instance the \"1999\", \n> \"2000\" ... \"2004\" partitions of a big archive probably never change. \n> READLONY means we're sure they never change, thus no need to backup them \n> every time. Keeping the example of some DB arranged by years / current \n> year / current month, Just backup the \"current month\" part every day and \n> the \"current year\" every month when you switch partitions.\n> \tThis could be achieved also by storing the time of last modification of a \n> table somewhere.\n\nWould we still need regular VACUUMing of read-only table to avoid \nOID-wraparound ?\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 21 Mar 2005 20:26:24 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On E, 2005-03-21 at 09:55 -0800, Josh Berkus wrote:\n> Stacy,\n> \n> > Luckily they that had the chance to work with a truly fantastic DBA (the\n> > author of an Oracle Press performance tuning book even) before they could\n> > switch back. 
He convinced them to make some of their indexes global.\n> > Performance dramatically improved (compared with both the unpartitioned\n> > schema, and the partitioned-and-locally-indexed schema), and they've since\n> > stayed with partitioned tables and a mix of local and global indexes.\n> \n> Hmmm. Wouldn't Greg's suggestion of a bitmap index which holds information on \n> what values are found in what partition also solve this? Without 1/2 of \n> the overhead imposed by global indexes?\n>\n> I can actually see such a bitmap as being universally useful to the \n> partitioning concept ... for one, it would resolve the whole \"partition on \n> {value}\" issue.\n\nI once (maybe about a year ago) tried to elaborate using bitmap \nindex(es) with page granularity as a tool for simultaneous clustering\nand lookup for data warehousing using postgres. the main idea was to\ndetermine storage location from AND of all \"clustered\" bitmap indexes\nand corresponding fast and clustered lookups.\n\nThis could/should/maybe :) possibly be combined with clustering as well.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 21 Mar 2005 20:31:58 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Mon, Mar 21, 2005 at 09:55:03AM -0800, Josh Berkus wrote:\n> Stacy,\n> \n> > Luckily they that had the chance to work with a truly fantastic DBA (the\n> > author of an Oracle Press performance tuning book even) before they could\n> > switch back. He convinced them to make some of their indexes global.\n> > Performance dramatically improved (compared with both the unpartitioned\n> > schema, and the partitioned-and-locally-indexed schema), and they've since\n> > stayed with partitioned tables and a mix of local and global indexes.\n> \n> Hmmm. Wouldn't Greg's suggestion of a bitmap index which holds information on \n> what values are found in what partition also solve this? Without 1/2 of \n> the overhead imposed by global indexes?\n> \n> I can actually see such a bitmap as being universally useful to the \n> partitioning concept ... for one, it would resolve the whole \"partition on \n> {value}\" issue.\n\nI suspect both will have their uses. I've read quite a bit about global\nv. local indexs in Oracle, and there are definately cases where global\nis much better than local. Granted, there's some things with how Oracle\nhandles their catalog, etc. that might make local indexes more expensive\nfor them than they would be for PostgreSQL. It's also not clear how much\na 'partition bitmap' index would help.\n\nAs for the 'seqscan individual partitions' argument, that's not going to\nwork well at all for a case where you need to hit a relatively small\npercentage of rows in a relatively large number of partitions. SELECT\n... WHERE customer_id = 1 would be a good example of such a query\n(assuming the table is partitioned on something like invoice_date).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 21 Mar 2005 16:07:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" 
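As things stand, the "local" indexes in this discussion are simply whatever indexes each child table happens to have, so they can differ per partition or be absent on old ones. A sketch with invented names:

    CREATE INDEX invoices_2004_cust_idx ON invoices_2004 (customer_id);
    CREATE INDEX invoices_2005_cust_idx ON invoices_2005 (customer_id);

    -- a query through the parent still has to probe every child,
    -- which is exactly the many-partition cost described above
    SELECT * FROM invoices WHERE customer_id = 1;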
}, { "msg_contents": "On Sat, Mar 19, 2005 at 07:05:53PM -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > We probably also need multi-table indexes.\n> \n> As Josh says, that seems antithetical to the main point of partitioning,\n> which is to be able to rapidly remove (and add) partitions of a table.\n> If you have to do index cleaning before you can drop a partition, what's\n> the point of partitioning?\n\nWhy would you need to do index cleaning first? Presumably the code that\ngoes to check a heap tuple that an index pointed at to ensure that it\nwas visible in the current transaction would be able to recognize if the\npartition that tuple was in had been removed, and just ignore that index\nentry. Granted, you'd need to clean the index up at some point\n(presumably via vacuum), but it doesn't need to occur at partition drop\ntime.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 21 Mar 2005 16:11:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "I think Greg's email did a good job of putting this on track. Phase 1\nshould be manual, low-level type of support. Oracle has had partitioning\nfor years now, and IF they've added automated partition management, it's\nonly happened in 10g which is pretty recent.\n\nFor inserts that don't currently have a defined partition to fit in, the\nOracle model might be better than tossing an error: a partitioned table\nin Oracle also contains a default partition. Any rows that don't match a\ndefined partition go into the default partition. For many cases you'll\nnever have anything in the default partition, but sometimes you'll have\nsome partition values that occur infrequenttly enough in the table so as\nnot to warrant their own partition.\n\nThere's also another partitioning application that I think is often\noverlooked. I have a table with about 130M rows that is\n'pseudo-partitioned' by project_id. Right now, there are 5 different\nproject IDs that account for the bulk of those 130M rows. Oracle\nprovides a means to partition on discreet values. When you do this,\nthere's not actually any reason to even store the partition field in the\npartition tables, since it will be the same for every row in the\npartition. In my case, since the smallint project ID is being aligned to\na 4 byte boundary, having this feature would save ~120M rows * 4 bytes =\n480MB in the table. Granted, 480MB isn't anything for today's disk\nsizes, but it makes a huge difference when you look at it from an I/O\nstandpoint. Knowing that a partition contains only one value of a field\nor set of fields also means you can drop those fields from local indexes\nwithout losing any effectiveness. In my case, I have 2 indexes I could\ndrop project_id from. Does each node in a B-tree index have the full\nindex key? If so, then there would be substantial I/O gains to be had\nthere, as well. Even if each node doesn't store the full key, there\ncould still be faster to handle a narrower index.\n\nI realize this might be a more difficult case to support. It probably\ncouldn't be done using inheritance, though I don't know if inheritence\nor a union view is better for partitioning. 
In either case, this case\nmight not be a good candidate for phase 1, but I think partitioning\nshould be designed with it in mind.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 21 Mar 2005 16:58:03 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Sun, 2005-03-20 at 01:14 -0500, Greg Stark wrote:\n> Josh Berkus <[email protected]> writes:\n> \n> > -- INSERT INTO should automatically create new partitions where necessary\n> > ---- new tables should automatically inherit all constraints, indexes,\n> > keys of \"parent\" table\n> \n> I think you're going about this backwards.\n\nCertainly, there are two schools of thought here. I have been in two\nminds about which those two designs previously, and indeed here which\none to support.\n\n> Phase I should be an entirely manual system where you add and remove\n> partitions manually and create and drop indexes you want manually. You need\n> these low level interfaces anyways for a complete system, it doesn't make\n> sense to have everything automatic and then later try to wedge in a low level\n> interface. Only once you have that do you then start offering options to do\n> these things automatically.\n\nMaybe its just me, but ISTM that implementing an automatic system is\nactually easier to begin with. No commands, no syntax etc. You're right,\nyou need the low level interfaces anyway...\n\n> From my experience with Oracle I think there's one big concept that makes the\n> whole system make a lot more sense: individual partitions are really tables.\n> The partitioned tables themselves are just meta-objects like views.\n\nHmmm. Oracle provides a very DBA-intensive implementation that as Stacy\npoints out, many people still do not understand. It does work, well. And\nhas many of the wrinkles ironed out, even if not all of them are easy to\nunderstand why they exist at first glance.\n\nI think it most likely that Phase I should be a simplified blend of both\nideas, with a clear view towards minimum impact and implementability,\notherwise it may not make the cut for 8.1\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 22 Mar 2005 08:34:22 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "On Mon, Mar 21, 2005 at 08:26:24PM +0200, Hannu Krosing wrote:\n> On P, 2005-03-20 at 00:52 +0100, PFC wrote:\n\n> > \tAlso note the possibility to mark a partition READ ONLY. Or even a table.\n\n> Would we still need regular VACUUMing of read-only table to avoid \n> OID-wraparound ?\n\nYou could VACUUM FREEZE the table or partition, so you wouldn't need to\nvacuum it again.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n", "msg_date": "Tue, 22 Mar 2005 09:10:00 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" 
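With the inheritance scheme that amounts to one statement per archived child table (name invented):

    VACUUM FREEZE sales_2004_12;
    -- after this the partition should not need routine vacuuming
    -- unless it is written to again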
}, { "msg_contents": "On T, 2005-03-22 at 09:10 -0400, Alvaro Herrera wrote:\n> On Mon, Mar 21, 2005 at 08:26:24PM +0200, Hannu Krosing wrote:\n> > On P, 2005-03-20 at 00:52 +0100, PFC wrote:\n> \n> > > \tAlso note the possibility to mark a partition READ ONLY. Or even a table.\n> \n> > Would we still need regular VACUUMing of read-only table to avoid \n> > OID-wraparound ?\n> \n> You could VACUUM FREEZE the table or partition, so you wouldn't need to\n> vacuum it again.\n\nBut when I do just VACUUM; will this know to avoid vacuuming VACUUM\nFREEZE'd partitions ? \n\nOr could this be somehow liked to READ ONLY + VACUUM FREEZE state ?\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Tue, 22 Mar 2005 17:59:16 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Hannu,\n\n> If you don't get it, contact me as there is a small possibility that I\n> know a company interested enough to fund (some) of it :)\n\nEnough people have been interested in this that if we get our acts together, \nwe may do it as multi-funded. Easier on our budget ...\n\n> As these are already discussed in this thread, I'll try to outline a\n> method of providing a global index (unique or not) in a way that will\n> still make it possible to quickly remove (and not-quite-so-quickly add)\n> a partition.\n<snip>\n> To repeat - the global index over partitioned table should have te same\n> structure as our current b-tree index, only with added map of 128k index\n> partitions to 1G subfiles of (possibly different) tables. This map will\n> be quite small - for 1Tb of data it will be only 1k entries - this will\n> fit in cache on all modern processors and thus should add only tiny\n> slowdown from current direct tid.page/128k method\n\nI think this is a cool idea. It would need to be linked to clustering, so \nthat each partition can be an iteration of the clustered index instead of a \nspecifc # of bytes. But it would give us the \"fully automated partitioning\" \nwhich is one fork of the two we want.\n\nPlus I'm keen on any idea that presents an alternative to aping Oracle.\n\nHow difficult would your proposal be to code?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 22 Mar 2005 09:01:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "\nAdded to TODO:\n\n* Support table partitioning that allows a single table to be stored\n in subtables that are partitioned based on the primary key or a WHERE\n clause\n\n\n---------------------------------------------------------------------------\n\nJosh Berkus wrote:\n> Hannu,\n> \n> > If you don't get it, contact me as there is a small possibility that I\n> > know a company interested enough to fund (some) of it :)\n> \n> Enough people have been interested in this that if we get our acts together, \n> we may do it as multi-funded. 
Easier on our budget ...\n> \n> > As these are already discussed in this thread, I'll try to outline a\n> > method of providing a global index (unique or not) in a way that will\n> > still make it possible to quickly remove (and not-quite-so-quickly add)\n> > a partition.\n> <snip>\n> > To repeat - the global index over partitioned table should have te same\n> > structure as our current b-tree index, only with added map of 128k index\n> > partitions to 1G subfiles of (possibly different) tables. This map will\n> > be quite small - for 1Tb of data it will be only 1k entries - this will\n> > fit in cache on all modern processors and thus should add only tiny\n> > slowdown from current direct tid.page/128k method\n> \n> I think this is a cool idea. It would need to be linked to clustering, so \n> that each partition can be an iteration of the clustered index instead of a \n> specifc # of bytes. But it would give us the \"fully automated partitioning\" \n> which is one fork of the two we want.\n> \n> Plus I'm keen on any idea that presents an alternative to aping Oracle.\n> \n> How difficult would your proposal be to code?\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Mar 2005 20:25:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 20, 2005 at 06:01:49PM -0500, Tom Lane wrote:\n> Global indexes would seriously reduce the performance of both vacuum and\n> cluster for a single partition, and if you want seq scans you don't need\n> an index for that at all. So the above doesn't strike me as a strong\n> argument for global indexes ...\n\nI'd like to describe a usecase where a global index is usefull.\n\nWe have a datawarehouse with invoices for a rolling window of a few\nyears. Each invoice has several positions so a uk is\n(invoice,position). Dur to the fact that most of the queries are only on\na few months or some quarters of a year, our pk starts with the\ntime-attribute (followed by the dimension ids) which is the partition\nkey (range). During the nightly update, we receive each updated invoice\nso we have to update that special (global unique) row which is resolved\nvery fast by using the uk.\n\nSo you can see, that there is a usefull case for providing a global\nindex while using partitining and local indexes as well.\n\nRegards,\nYann\n", "msg_date": "Wed, 27 Apr 2005 15:31:39 +0200", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What needs to be done for real Partitioning?" } ]
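Until something like that TODO item exists, the insert side has to be spelled out by hand as well, typically with one rule (or trigger branch) per partition; continuing the invented example tables from the sketch further up:

    CREATE RULE sales_insert_2005_03 AS
        ON INSERT TO sales
        WHERE NEW.sale_date >= DATE '2005-03-01'
          AND NEW.sale_date <  DATE '2005-04-01'
        DO INSTEAD
            INSERT INTO sales_2005_03 (sale_date, product_no, amount)
            VALUES (NEW.sale_date, NEW.product_no, NEW.amount);

Rows that match no rule simply land in the parent table, which is one reason the automatic-creation and "abstract parent" ideas keep coming up.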
[ { "msg_contents": "Hi All,\n\nI have been reading about set returning functions. What I would like to \nknow is is there a performance advantage in using SRFs versus querying a \nview. Assuming the underlying SQL is the same for the view vs the \nfunction except for the WHERE clause which of these would you expect to \nbe faster? Or does the planner realize all this...\n\nSELECT * FROM view_big_query WHERE column1 = 1234;\n\nSELECT * FROM func_bug_query(1234);\n\n-- \nKind Regards,\nKeith\n", "msg_date": "Sun, 20 Mar 2005 22:39:57 -0500", "msg_from": "Keith Worthington <[email protected]>", "msg_from_op": true, "msg_subject": "View vs function" }, { "msg_contents": "On Sun, Mar 20, 2005 at 22:39:57 -0500,\n Keith Worthington <[email protected]> wrote:\n> Hi All,\n> \n> I have been reading about set returning functions. What I would like to \n> know is is there a performance advantage in using SRFs versus querying a \n> view. Assuming the underlying SQL is the same for the view vs the \n> function except for the WHERE clause which of these would you expect to \n> be faster? Or does the planner realize all this...\n\nIn general you are going to be better off with a view, since the planner\nknows what the view is doing and there may be some optimizations it\ncan make. Functions are just black boxes to the planner.\n\n> \n> SELECT * FROM view_big_query WHERE column1 = 1234;\n> \n> SELECT * FROM func_bug_query(1234);\n> \n> -- \n> Kind Regards,\n> Keith\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Sun, 20 Mar 2005 22:27:20 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs function" }, { "msg_contents": "Bruno Wolff III wrote:\n> Functions are just black boxes to the planner.\n\n... unless the function is a SQL function that is trivial enough for the \nplanner to inline it into the plan of the invoking query. Currently, we \nwon't inline set-returning SQL functions that are used in the query's \nrangetable, though. This would be worth doing, I think -- I'm not sure \nhow much work it would be, though.\n\n-Neil\n", "msg_date": "Mon, 21 Mar 2005 16:13:09 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs function" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Bruno Wolff III wrote:\n>> Functions are just black boxes to the planner.\n\n> ... unless the function is a SQL function that is trivial enough for the \n> planner to inline it into the plan of the invoking query. Currently, we \n> won't inline set-returning SQL functions that are used in the query's \n> rangetable, though. This would be worth doing, I think -- I'm not sure \n> how much work it would be, though.\n\nYeah, I've been thinking the same. It seems like it shouldn't be unduly\ndifficult --- not harder than inlining scalar-valued SQL functions, just\ndifferent validity conditions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Mar 2005 01:40:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs function " } ]
[ { "msg_contents": "\nI was following the cpu_tuple_cost thread and wondering, if it could be\npossible to make PQA style utility to calculate configuration-specific\nvalues for planner cost constants. It could make use of output of\nlog_(statement|parser|planner|executor)_stats, tough I'm not sure if the\noutput contains anything useful for those purposes. \n\nOtherwise it could just collect statements, run EXPLAIN ANALYZE for all\nof them and then play with planner cost constants to get the estimated\nvalues as close as possible to actual values. Something like Goal Seek\nin Excel, if you pardon my reference to MS :).\n\nSomewhat similar project seems to be pgautotune from pgFoundry, but it\nonly considers shared_buffers, sort_mem and vacuum_mem. And it seems to\nuse synthetic data instead of actual database and actual statements from\nlog. And it has been inactive for a while.\n\n Tambet\n", "msg_date": "Mon, 21 Mar 2005 12:05:56 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "What about utility to calculate planner cost constants?" }, { "msg_contents": "Tambet,\n\n> I was following the cpu_tuple_cost thread and wondering, if it could be\n> possible to make PQA style utility to calculate configuration-specific\n> values for planner cost constants. It could make use of output of\n> log_(statement|parser|planner|executor)_stats, tough I'm not sure if the\n> output contains anything useful for those purposes.\n\nYeah, that's something I need to look at.\n\n> Otherwise it could just collect statements, run EXPLAIN ANALYZE for all\n> of them and then play with planner cost constants to get the estimated\n> values as close as possible to actual values. Something like Goal Seek\n> in Excel, if you pardon my reference to MS :).\n\nThat's not really practical. There are currently 5 major query tuning \nparameters, not counting the memory adjustments which really can't be left \nout. You can't realistically test all combinations of 6 variables.\n\n> Somewhat similar project seems to be pgautotune from pgFoundry, but it\n> only considers shared_buffers, sort_mem and vacuum_mem. And it seems to\n> use synthetic data instead of actual database and actual statements from\n> log. And it has been inactive for a while.\n\nYeah, pg_autotune is a dead project. Once we got OSDL able to run tests, we \ncame up with some rules-of-thumb which are more useful than autotune's \noutput. More importantly, the approach doesn't scale to the 15-20 GUCs which \nwe'd realistically want to test.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 21 Mar 2005 09:51:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "If by not practical you mean, \"no one has implemented a multivariable \ntesting approach,\" I'll agree with you. But multivariable testing is \ndefinitely a valid statistical approach to solving just such problems.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Mar 21, 2005, at 11:51 AM, Josh Berkus wrote:\n\n> That's not really practical. There are currently 5 major query tuning\n> parameters, not counting the memory adjustments which really can't be \n> left\n> out. 
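A small sketch of the comparison made in that thread, with an invented schema; the view exposes the underlying join to the planner, while the SQL set-returning function is (as of this discussion) treated as a black box:

    CREATE VIEW big_view AS
        SELECT o.id, o.customer_id, o.placed_on, c.name
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id;

    CREATE FUNCTION big_func(integer) RETURNS SETOF big_view AS '
        SELECT o.id, o.customer_id, o.placed_on, c.name
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id
        WHERE  o.customer_id = $1
    ' LANGUAGE sql STABLE;

    SELECT * FROM big_view WHERE customer_id = 1234;  -- plan built from the join itself
    SELECT * FROM big_func(1234);                     -- planner only sees a function scan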
You can't realistically test all combinations of 6 variables.\n\n", "msg_date": "Mon, 21 Mar 2005 12:18:17 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Thomas,\n\n> If by not practical you mean, \"no one has implemented a multivariable\n> testing approach,\" I'll agree with you. But multivariable testing is\n> definitely a valid statistical approach to solving just such problems.\n\nWell, not practical as in: \"would take either $10 million in equipment or \n10,000 hours or both\"\n\n--Josh\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 21 Mar 2005 14:59:56 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> > Otherwise it could just collect statements, run EXPLAIN ANALYZE for all\n> > of them and then play with planner cost constants to get the estimated\n> > values as close as possible to actual values. Something like Goal Seek\n> > in Excel, if you pardon my reference to MS :).\n> \n> That's not really practical. There are currently 5 major query tuning \n> parameters, not counting the memory adjustments which really can't be left \n> out. You can't realistically test all combinations of 6 variables.\n\nI don't think it would be very hard at all actually.\n\nIt's just a linear algebra problem with a bunch of independent variables and a\nsystem of equations. Solving for values for all of them is a straightforward\nproblem.\n\nOf course in reality these variables aren't actually independent because the\ncosting model isn't perfect. But that wouldn't be a problem, it would just\nreduce the accuracy of the results.\n\nWhat's needed is for the explain plan to total up the costing penalties\nindependently. So the result would be something like\n\n1000 * random_page_cost + 101 * sequential_page_cost + 2000 * index_tuple_cost\n+ ...\n\nIn other words a tuple like <1000,101,2000,...>\n\nAnd explain analyze would produce the above tuple along with the resulting\ntime.\n\nSome program would have to gather these values from the log or stats data and\ngather them up into a large linear system and solve for values that minimize\nthe divergence from the observed times.\n\n\n\n(costs penalties are currently normalized to sequential_page_cost being 1.\nThat could be maintained, or it could be changed to be normalized to an\nexpected 1ms.)\n\n(Also, currently explain analyze has overhead that makes this impractical.\nIdeally it could subtract out its overhead so the solutions would be accurate\nenough to be useful)\n\n-- \ngreg\n\n", "msg_date": "21 Mar 2005 23:42:50 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "On Mon, 21 Mar 2005 14:59:56 -0800, Josh Berkus <[email protected]> wrote:\n> > If by not practical you mean, \"no one has implemented a multivariable\n> > testing approach,\" I'll agree with you. 
But multivariable testing is\n> > definitely a valid statistical approach to solving just such problems.\n> Well, not practical as in: \"would take either $10 million in equipment or\n> 10,000 hours or both\"\n\nI think you don't need EXPLAIN ANALYZE each query with different GUCs,\nyou would only need EXPLAIN most of the times (which is way quicker).\nOnce you get 'near' actual values, you would do EXECUTE ANALYZE to\nverify the variables.\n\n Regards,\n Dawid\n", "msg_date": "Tue, 22 Mar 2005 10:58:43 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Greg Stark wrote:\n> Josh Berkus <[email protected]> writes:\n>>That's not really practical. There are currently 5 major query tuning \n>>parameters, not counting the memory adjustments which really can't be left \n>>out. You can't realistically test all combinations of 6 variables.\n> \n> I don't think it would be very hard at all actually.\n[snip]\n> What's needed is for the explain plan to total up the costing penalties\n> independently. So the result would be something like\n> \n> 1000 * random_page_cost + 101 * sequential_page_cost + 2000 * index_tuple_cost\n> + ...\n> \n> In other words a tuple like <1000,101,2000,...>\n >\n> And explain analyze would produce the above tuple along with the resulting\n> time.\n> \n> Some program would have to gather these values from the log or stats data and\n> gather them up into a large linear system and solve for values that minimize\n> the divergence from the observed times.\n\nYou'd only need to log them if they diverged from expected anyway. That \nshould result in fairly low activity pretty quickly (or we're wasting \nour time). Should they go to the stats collector rather than logs?\n\n> (Also, currently explain analyze has overhead that makes this impractical.\n> Ideally it could subtract out its overhead so the solutions would be accurate\n> enough to be useful)\n\nDon't we only need the top-level figures though? There's no need to \nrecord timings for each stage, just work completed.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 22 Mar 2005 11:56:24 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Greg Stark) wrote:\n> I don't think it would be very hard at all actually.\n>\n> It's just a linear algebra problem with a bunch of independent\n> variables and a system of equations. Solving for values for all of\n> them is a straightforward problem.\n>\n> Of course in reality these variables aren't actually independent\n> because the costing model isn't perfect. But that wouldn't be a\n> problem, it would just reduce the accuracy of the results.\n\nAre you certain it's a linear system? I'm not. If it was a matter of\nminimizing a linear expression subject to some set of linear\nequations, then we could model this as a Linear Program for which\nthere are some perfectly good solvers available. 
(Few with BSD-style\nlicenses, but we could probably get some insight out of running for a\nwhile with something that's there...)\n\nI think there's good reason to consider it to be distinctly\nNON-linear, which makes it way more challenging to solve the problem.\n\nThere might well be some results to be gotten out of a linear\napproximation; the Grand Challenge is to come up with the model in the\nfirst place...\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\nhttp://linuxdatabases.info/info/or.html\n\"Tom Christiansen asked me, \"Chip, is there anything that you like\nthat isn't big and complicated?\" C++, EMACS, Perl, Unix, English-no, I\nguess not.\" -- Chip Salzenberg, when commenting on Perl6/C++\n", "msg_date": "Tue, 22 Mar 2005 08:09:40 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "On Tue, Mar 22, 2005 at 08:09:40AM -0500, Christopher Browne wrote:\n> Martha Stewart called it a Good Thing when [email protected] (Greg Stark) wrote:\n> > I don't think it would be very hard at all actually.\n> >\n> > It's just a linear algebra problem with a bunch of independent\n> > variables and a system of equations. Solving for values for all of\n> > them is a straightforward problem.\n> >\n> > Of course in reality these variables aren't actually independent\n> > because the costing model isn't perfect. But that wouldn't be a\n> > problem, it would just reduce the accuracy of the results.\n> \n> Are you certain it's a linear system? I'm not. If it was a matter of\n> minimizing a linear expression subject to some set of linear\n> equations, then we could model this as a Linear Program for which\n> there are some perfectly good solvers available. (Few with BSD-style\n> licenses, but we could probably get some insight out of running for a\n> while with something that's there...)\n> \n> I think there's good reason to consider it to be distinctly\n> NON-linear, which makes it way more challenging to solve the problem.\n> \nNon-linear optimization works very well in many cases. Issues such\nas local minima can be addressed. In a sense, the planner output\ncan be treated as a blackbox function and the \"goodness\" of the\nsolution is how well it approximates the actual query times. In this\ncase, it will be imperative to constrain some of the values to prevent\n\"crazy\" configurations.\n\nKen\n", "msg_date": "Tue, 22 Mar 2005 08:00:50 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "On Tue, Mar 22, 2005 at 08:09:40 -0500,\n Christopher Browne <[email protected]> wrote:\n> \n> Are you certain it's a linear system? I'm not. If it was a matter of\n> minimizing a linear expression subject to some set of linear\n> equations, then we could model this as a Linear Program for which\n> there are some perfectly good solvers available. (Few with BSD-style\n> licenses, but we could probably get some insight out of running for a\n> while with something that's there...)\n\nFor less than 100 equations and 100 unknowns, you should be able to use\nnaive solvers. After that you don't get very accurate answers without\nbeing more careful. 
I still have my numerical analysis text books around\nand can look algorithms up for doing this without too much trouble.\n", "msg_date": "Tue, 22 Mar 2005 09:17:07 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n\n> You'd only need to log them if they diverged from expected anyway. That should\n> result in fairly low activity pretty quickly (or we're wasting our time).\n> Should they go to the stats collector rather than logs?\n\nI think you need to log them all. Otherwise when you go to analyze the numbers\nand come up with ideal values you're going to be basing your optimization on a\nskewed subset.\n\nI don't know whether the stats collector or the logs is better suited to this.\n\n> > (Also, currently explain analyze has overhead that makes this impractical.\n> > Ideally it could subtract out its overhead so the solutions would be accurate\n> > enough to be useful)\n> \n> Don't we only need the top-level figures though? There's no need to record\n> timings for each stage, just work completed.\n\nI guess you only need top level values. But you also might want some flag if\nthe row counts for any node were far off. In that case perhaps you would want\nto discard the data point.\n\n-- \ngreg\n\n", "msg_date": "22 Mar 2005 11:19:40 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "\nChristopher Browne <[email protected]> writes:\n\n> Are you certain it's a linear system? \n\nIf you just consider the guc parameters that tell postgres how long various\nreal world operations take (all the *_cost parameters) then it's a linear\nsystem. It has to be. The resulting time is just a sum of the times for some\nnumber of each of these real world operations.\n\nIf you include parameters like the geqo_* parameters or the hypothetical\nparameter that controls what selectivity to assume for clauses with unknown\nselectivity then no, it wouldn't be.\n\nBut if you assume the estimated row counts are correct and you're just trying\nto solve for the parameters to come up with the most accurate cost for the\ncurrent hardware then I think you're golden. \n\n> There might well be some results to be gotten out of a linear\n> approximation; the Grand Challenge is to come up with the model in the\n> first place...\n\nIndeed. The model's not perfect now of course, and it'll never really be\nperfect since some of the parameters represent operations that aren't always a\nconsistent cost. But you should be able to solve for the values that result in\nthe most accurate totals the most often. There may be some tradeoffs (and\ntherefore new guc variables :)\n\nPS\n\nIt occurs to me that there's no reason to use the unreliable EXPLAIN counts of\nthe costs. You may as well account accurately for them and use the actual\nvalues used in performing the query. This means there's no reason to discard\ninaccurately estimated data points.\n\nMoreover, the overhead issue a non-issue. Since you only need the total time,\nand the total costs. You would have the overhead of performing lots of\nincrements on those costs, but you only have to do two gettimeofdays. 
Once at\nthe beginning and once at the end.\n\n-- \ngreg\n\n", "msg_date": "22 Mar 2005 12:30:05 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Greg Stark wrote:\n> Richard Huxton <[email protected]> writes:\n> \n>>You'd only need to log them if they diverged from expected anyway. That should\n>>result in fairly low activity pretty quickly (or we're wasting our time).\n>>Should they go to the stats collector rather than logs?\n> \n> I think you need to log them all. Otherwise when you go to analyze the numbers\n> and come up with ideal values you're going to be basing your optimization on a\n> skewed subset.\n\nI can see your thinking, I must admit I was thinking of a more iterative \nprocess: estimate deltas, change config, check, repeat. I'm not \nconvinced there are \"ideal\" values with a changing workload - for \nexample, random_page_cost will presumably vary depending on how much \ncontention there is for random seeks. Certainly, effective_cache size \ncould vary.\n\n> I don't know whether the stats collector or the logs is better suited to this.\n> \n>>>(Also, currently explain analyze has overhead that makes this impractical.\n>>>Ideally it could subtract out its overhead so the solutions would be accurate\n>>>enough to be useful)\n>>\n>>Don't we only need the top-level figures though? There's no need to record\n>>timings for each stage, just work completed.\n> \n> I guess you only need top level values. But you also might want some flag if\n> the row counts for any node were far off. In that case perhaps you would want\n> to discard the data point.\n\nI think you'd need to adjust work-estimates by actual-rows / estimated-rows.\n\nI _was_ trying to think of a clever way of using row mis-estimates to \ncorrect statistics automatically. This was triggered by the discussion a \nfew weeks ago about hints to the planner and the recent talk about plan \ncacheing. Some sort of feedback loop so the planner would know its \nestimates were off should be a big win from an ease-of-use point of \nview. Didn't look easy to do though :-(\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 22 Mar 2005 17:53:05 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Christopher Browne <[email protected]> writes:\n> Martha Stewart called it a Good Thing when [email protected] (Greg Stark) wrote:\n>> It's just a linear algebra problem with a bunch of independent\n>> variables and a system of equations. Solving for values for all of\n>> them is a straightforward problem.\n\n> Are you certain it's a linear system? I'm not.\n\nI'm quite certain it isn't a linear system, because the planner's cost\nmodels include nonlinear equations.\n\nWhile I don't have a whole lot of hard evidence to back this up, my\nbelief is that our worst problems stem not from bad parameter values\nbut from wrong models. In particular we *know* that the cost model for\nnestloop-inner-indexscan joins is wrong, because it doesn't account for\ncacheing effects across repeated scans. There are some other obvious\nweak spots as well. 
It could be argued that we ought to allow the\nsystem to assume index cacheing even for standalone queries, on the\ngrounds that if you are doing a query often enough to care about it,\nthere was probably a recent use of the same query that pulled in the\nupper index levels. The current cost models all assume starting from\nground zero with empty caches for each query, and that is surely not\nreflective of many real-world cases.\n\nI've looked at fixing this a couple times, but so far my attempts\nto devise a more believable index access cost estimator have come\nout with numbers higher than the current estimates ... not the\ndirection we want it to go :-(\n\nAnyway, I see little point in trying to build an automatic parameter\noptimizer until we have cost models whose parameters are more stable\nthan the current ones.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2005 13:34:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants? " }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Christopher Browne <[email protected]> writes:\n>> Are you certain it's a linear system? \n\n> If you just consider the guc parameters that tell postgres how long various\n> real world operations take (all the *_cost parameters) then it's a linear\n> system. It has to be.\n\nNo it doesn't. Think caching effects for instance. We do have cache\neffects in the cost models, even though they are wrong as per my nearby\nrant...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2005 13:56:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants? " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Christopher Browne <[email protected]> writes:\n> > Martha Stewart called it a Good Thing when [email protected] (Greg Stark) wrote:\n> >> It's just a linear algebra problem with a bunch of independent\n> >> variables and a system of equations. Solving for values for all of\n> >> them is a straightforward problem.\n> \n> > Are you certain it's a linear system? I'm not.\n> \n> I'm quite certain it isn't a linear system, because the planner's cost\n> models include nonlinear equations.\n\nThe equations will all be linear for the *_cost variables. If they weren't\nthey would be meaningless, the units would be wrong. Things like caching are\njust going to be the linear factors that determine how many random page costs\nand sequential page costs to charge the query.\n\n> While I don't have a whole lot of hard evidence to back this up, my\n> belief is that our worst problems stem not from bad parameter values\n> but from wrong models. \n\nI think these are orthogonal issues. \n\nThe time spent in real-world operations like random page accesses, sequential\npage accesses, cpu operations, index lookups, etc, are all measurable\nquantities. They can be directly measured or approximated by looking at the\nresulting net times. 
Measuring these things instead of asking the user to\nprovide them is just a nicer user experience.\n\nSeparately, plugging these values into more and more accurate model will come\nup with better estimates for how many of these operations a query will need to\nperform.\n\n> Anyway, I see little point in trying to build an automatic parameter\n> optimizer until we have cost models whose parameters are more stable\n> than the current ones.\n\nWell what people currently do is tweak the physical values until the produce\nresults for their work load that match reality. It would be neat if postgres\ncould do this automatically.\n\nArguably the more accurate the cost model the less of a motivation for\nautomatic adjustments there is since you could easily plug in accurate values\nfrom the hardware specs. But actually I think it'll always be a nice feature.\n\n-- \ngreg\n\n", "msg_date": "22 Mar 2005 16:28:18 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> The time spent in real-world operations like random page accesses, sequential\n> page accesses, cpu operations, index lookups, etc, are all measurable\n> quantities. They can be directly measured or approximated by looking at the\n> resulting net times.\n\nThat's the theory, all right, and that's why I've been resistant to\nlowering random_page_cost just because \"it gives better answers\".\nTo the extent that you believe that is a real physical parameter with\na definable value (which is a bit debatable actually, but nevermind)\nit should be possible to measure it by experiment.\n\nThe difficulty with the notion of doing that measurement by timing\nPostgres operations is that it's a horribly bad experimental setup.\nYou have no way to isolate the effects of just one variable, or even\na small number of variables, which you really need to do if you want\nto estimate with any degree of precision. What's more, there are plenty\nof relevant factors that aren't in the model at all (such as the extent\nof other load on the machine), and so the noise in the measurements\nwill be enormous.\n\nAnd you can't just dismiss the issue of wrong cost models and say we can\nget numbers anyway. We see examples almost every day on this list where\nthe planner is so far off about indexscan vs seqscan costs that you'd\nhave to put random_page_cost far below 1 to make its numbers line up\nwith reality. That's not a matter of tuning the parameter, it's\nevidence that the cost model is wrong. If you try to solve for the\n\"right value\" of the parameter by comparing estimated and actual costs,\nyou'll get garbage, even without any issues about noisy measurements\nor numerical instability of your system of equations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2005 16:48:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants? 
" }, { "msg_contents": "Tom Lane wrote:\n> \n> And you can't just dismiss the issue of wrong cost models and say we can\n> get numbers anyway.\n\nIs there a way to see more details about the cost estimates.\nEXPLAIN ANALYZE seems to show the total time and rows; but not\ninformation like how many disk pages were accessed.\n\nI get the feeling that sometimes the number of rows is estimated\nvery well, but the amount of disk I/O is way off.\n\nSometimes the number of pages read/written is grossly\noverestimated (if tables lave a lot of locally clustered data)\nor underestimated if a sort barely exceeds sort_mem.\n\n\nPerhaps an EXPLAN ANALYZE VERBOSE that would add info like this:\n\n Index scan ([...]estimated 1000 pages read) (actual[...] 10 pages read)\n\nwould help track those down?\n\n", "msg_date": "Tue, 22 Mar 2005 14:59:44 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" } ]
[ { "msg_contents": "Hi everyone,\nI hope it is the correct newsletter for this question.\n\nCan I use an index on a varchar column to optimize the SELECT queries that \nuse \" column LIKE 'header%' \"?\nIf yes what is the best tree algotithm to use ?\n\nI don't care about optimising INSERT, DELETE and UPDATE queries, as they are \nonly done at night when the load is very low.\nThank you very much for any help,\nBenjamin Layet\n\n\n\n\n\n\n\n\nHi everyone,\nI hope it is the correct newsletter for this question.\n \nCan I use an index on a varchar column to optimize the SELECT queries that \nuse \" column LIKE 'header%'  \"?\nIf yes what is the best tree algotithm to use ?\n \nI don't care about optimising INSERT, DELETE and UPDATE queries, as they \nare only done at night when the load is very low.\nThank you very much for any help,\nBenjamin Layet", "msg_date": "Tue, 22 Mar 2005 18:22:24 +0900", "msg_from": "\"Layet Benjamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "best practices with index on varchar column" }, { "msg_contents": "On Tue, 22 Mar 2005 18:22:24 +0900, Layet Benjamin\n<[email protected]> wrote:\n> Can I use an index on a varchar column to optimize the SELECT queries that\n> use \" column LIKE 'header%' \"? \n> If yes what is the best tree algotithm to use ? \n\nYes, that is the correct place. The best tree algorithm is B-Tree,\nwhich is the default. So no need for giving 'USING ...' to CREATE INDEX.\n\nThe other types of indexes are either not trees (HASH), different\nand more complex (GiST, RTREE) kinds of trees which are there\nfor different kinds of data (spatial, full text, etc).\n\nRemember to VACUUM ANALYZE this table from time to time,\nso the planner can judge efficiently whether to use this new\nindex or not.\n\nUse EXPLAIN ANALYZE SELECT .... to see whether the index\nis really used.\n\n> I don't care about optimising INSERT, DELETE and UPDATE queries, as they are\n> only done at night when the load is very low. \n> Thank you very much for any help, \n\nOh, they can benefit from the index anyhow. :)\n\n Regards,\n Dawid\n", "msg_date": "Tue, 22 Mar 2005 10:49:25 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column" }, { "msg_contents": "\n> Can I use an index on a varchar column to optimize the SELECT queries \n> that\n> use \" column LIKE 'header%' \"?\n\n\tYes\n\n> If yes what is the best tree algotithm to use ?\n\n\tBtree\n\n\tNote that if you want case insensitive matching you need to make an index \non lower(column) and SELECT WHERE lower(column) LIKE 'header%'\n\n\tLocales may bite you.\n", "msg_date": "Tue, 22 Mar 2005 11:49:36 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column" }, { "msg_contents": "PFC <[email protected]> writes:\n>> Can I use an index on a varchar column to optimize the SELECT queries \n>> that use \" column LIKE 'header%' \"?\n\n> \tYes\n\n> \tNote that if you want case insensitive matching you need to make an index \n> on lower(column) and SELECT WHERE lower(column) LIKE 'header%'\n\n> \tLocales may bite you.\n\nYes. If your database locale is not \"C\" then the default btree index\nbehavior does not match up with what LIKE needs. 
In that case you need\na special index using the appropriate \"pattern_ops\" opclass, eg\n\nCREATE INDEX test_index ON test_table (col varchar_pattern_ops);\n\nor if you want case insensitive matching\n\nCREATE INDEX test_index ON test_table (lower(col) varchar_pattern_ops);\n\nand then write the queries with lower() as PFC illustrates. *Don't* use\nILIKE --- it basically can't use indexes at all.\n\nFor more info see\nhttp://www.postgresql.org/docs/8.0/static/indexes-opclass.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2005 13:08:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column " }, { "msg_contents": "\nI have an experience using LIKE in a VARCHAR column and select statement\nsuffers a lot so I decided to go back in CHAR \n\nNote: my database has about 50 millions records a b tree index \n\n\n\n> Can I use an index on a varchar column to optimize the SELECT queries \n> that\n> use \" column LIKE 'header%' \"?\n\n\tYes\n\n> If yes what is the best tree algotithm to use ?\n\n\tBtree\n\n\tNote that if you want case insensitive matching you need to make an\nindex \non lower(column) and SELECT WHERE lower(column) LIKE 'header%'\n\n\tLocales may bite you.\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n__________ NOD32 1.1023 (20050310) Information __________\n\nThis message was checked by NOD32 Antivirus System.\nhttp://www.nod32.com\n\n\n", "msg_date": "Wed, 23 Mar 2005 12:11:56 +0800", "msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column" }, { "msg_contents": "On Wed, 23 Mar 2005 12:11:56 +0800, Michael Ryan S. Puncia\n<[email protected]> wrote:\n> \n> I have an experience using LIKE in a VARCHAR column and select statement\n> suffers a lot so I decided to go back in CHAR\n> \n> Note: my database has about 50 millions records a b tree index\n\nStrange...\n\nAccording to the PostgreSQL's documentation:\n\n Tip: There are no performance differences between these three types,\napart from the increased storage size when using the blank-padded type.\nWhile character(n) has performance advantages in some other database\nsystems, it has no such advantages in PostgreSQL. In most situations text\nor character varying should be used instead.\n\n\nTo my best knowledge char and varchar are stored in a same way\n(4-byte length plus textual value), so using char should make tables\nbigger in your case. Then again, having each row exactly the same\nsize makes it easier to delete and then later insert a new row in\na same spot. Am I thinking correct? Is it a case where using char(n)\nmakes that table avoid hmm fragmentation of some sort?\n\n Regards,\n Dawid\n", "msg_date": "Wed, 23 Mar 2005 10:35:48 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column" }, { "msg_contents": "Dawid Kuroczko wrote:\n> On Wed, 23 Mar 2005 12:11:56 +0800, Michael Ryan S. 
Puncia\n> <[email protected]> wrote:\n> \n>>I have an experience using LIKE in a VARCHAR column and select statement\n>>suffers a lot so I decided to go back in CHAR\n\n> According to the PostgreSQL's documentation:\n> \n> Tip: There are no performance differences between these three types,\n> apart from the increased storage size when using the blank-padded type.\n> While character(n) has performance advantages in some other database\n> systems, it has no such advantages in PostgreSQL. In most situations text\n> or character varying should be used instead.\n> \n> \n> To my best knowledge char and varchar are stored in a same way\n> (4-byte length plus textual value), so using char should make tables\n> bigger in your case. Then again, having each row exactly the same\n> size makes it easier to delete and then later insert a new row in\n> a same spot. Am I thinking correct? Is it a case where using char(n)\n> makes that table avoid hmm fragmentation of some sort?\n\nThere aren't any noticeable differences between char and varchar. MVCC \ndoesn't overwrite rows anyway, so static size is irrelevant. In any \ncase, PG's toast setup splits out large text fields and compresses them \n- so it's not that simple.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 23 Mar 2005 11:31:28 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best practices with index on varchar column" } ]
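Pulling the advice in this thread together into one runnable sketch (table and column names are placeholders; the varchar_pattern_ops opclass is only needed when the database locale is not "C"):

    CREATE INDEX test_index ON test_table (lower(col) varchar_pattern_ops);
    VACUUM ANALYZE test_table;

    -- prefix search that can use the index; note lower() on both sides
    EXPLAIN ANALYZE
    SELECT * FROM test_table WHERE lower(col) LIKE 'header%';

For case-sensitive matching the same pattern works with a plain (col varchar_pattern_ops) index and "col LIKE 'header%'"; EXPLAIN ANALYZE confirms whether the index is actually chosen.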
[ { "msg_contents": "Hi,\n\nI'm looking for a *fast* solution to search thru ~ 4 million records of \nbook descriptions. I've installed PostgreSQL 8.0.1 on a dual opteron \nserver with 8G of memory, running Linux 2.6. I haven't done a lot of \ntuning on PostgreSQL itself, but here's the settings I have changed so far:\n\nshared_buffers = 2000 (anything much bigger says the kernel doesnt allow \n it, still have to look into that)\neffective_cache_size = 32768\n\nHere's my table:\n\nilab=# \\d books\n Table \"public.books\"\n Column | Type | Modifiers\n---------------+------------------------+----------------------------------------------------------\n recordnumber | integer | not null default \nnextval('books_recordnumber_seq'::text)\n membernumber | integer | not null default 0\n booknumber | character varying(20) | not null default \n''::character varying\n author | character varying(60) | not null default \n''::character varying\n titel | text | not null\n description | character varying(100) | not null default \n''::character varying\n descriprest | text | not null\n price | bigint | not null default 0::bigint\n keywords | character varying(100) | not null default \n''::character varying\n dollarprice | bigint | not null default 0::bigint\n countrynumber | smallint | not null default 0::smallint\n entrydate | date | not null\n status | smallint | not null default 0::smallint\n recordtype | smallint | not null default 0::smallint\n bookflags | smallint | not null default 0::smallint\n year | smallint | not null default 0::smallint\n firstedition | smallint | not null default 0::smallint\n dustwrapper | smallint | not null default 0::smallint\n signed | smallint | not null default 0::smallint\n cover | smallint | not null default 0::smallint\n specialfield | smallint | not null default 0::smallint\n idxfti | tsvector |\nIndexes:\n \"recordnumber_idx\" unique, btree (recordnumber)\n \"idxfti_idx\" gist (idxfti)\n\nidxfti is a tsvector of concatenated description and descriprest.\n\nilab=# select \navg(character_length(description)),avg(character_length(descriprest)) \nfrom books;\n avg | avg\n---------------------+----------------------\n 89.1596992873947218 | 133.0468689304200538\n\nQueries take forever to run. Right now we run a MySQL server, on which \nwe maintain our own indices (we split the description fields by word and \nhave different tables for words and the bookdescriptions they appear in).\n\nFor example, a query for the word 'terminology' on our MySQL search \ntakes 5.8 seconds and returns 375 results. The same query on postgresql \nusing the tsearch2 index takes 30802.105 ms and returns 298 results.\n\nHow do I speed this up? Should I change settings, add or change indexes \nor.. what?\n\nRick Jansen\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Tue, 22 Mar 2005 13:28:07 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Tsearch2 performance on big database" }, { "msg_contents": "On Tue, 22 Mar 2005, Rick Jansen wrote:\n\n> Hi,\n>\n> I'm looking for a *fast* solution to search thru ~ 4 million records of book \n> descriptions. I've installed PostgreSQL 8.0.1 on a dual opteron server with \n> 8G of memory, running Linux 2.6. 
I haven't done a lot of tuning on PostgreSQL \n> itself, but here's the settings I have changed so far:\n>\n> shared_buffers = 2000 (anything much bigger says the kernel doesnt allow it, \n> still have to look into that)\n\nuse something like \necho \"150000000\" > /proc/sys/kernel/shmmax\nto increase shared memory. In your case you could dedicate much more\nmemory.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 22 Mar 2005 15:36:11 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Tue, 22 Mar 2005 15:36:11 +0300 (MSK), Oleg Bartunov <[email protected]> wrote:\n> On Tue, 22 Mar 2005, Rick Jansen wrote:\n> \n> > Hi,\n> >\n> > I'm looking for a *fast* solution to search thru ~ 4 million records of book\n> > descriptions. I've installed PostgreSQL 8.0.1 on a dual opteron server with\n> > 8G of memory, running Linux 2.6. I haven't done a lot of tuning on PostgreSQL\n> > itself, but here's the settings I have changed so far:\n> >\n> > shared_buffers = 2000 (anything much bigger says the kernel doesnt allow it,\n> > still have to look into that)\n> \n> use something like\n> echo \"150000000\" > /proc/sys/kernel/shmmax\n> to increase shared memory. In your case you could dedicate much more\n> memory.\n> \n> Regards,\n> Oleg\n\nAnd Oleg should know. Unless I'm mistaken, he (co)wrote tsearch2. \nOther than shared buffers, I can't imagine what could be causing that\nkind of slowness. EXPLAIN ANALYZE, please?\n\nAs an example of what I think you *should* be seeing, I have a similar\nbox (4 procs, but that doesn't matter for one query) and I can search\na column with tens of millions of rows in around a second.\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Tue, 22 Mar 2005 12:48:06 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Tue, 22 Mar 2005, Mike Rylander wrote:\n\n>\n> And Oleg should know. Unless I'm mistaken, he (co)wrote tsearch2.\n\nYou're not mistaken :)\n\n> Other than shared buffers, I can't imagine what could be causing that\n> kind of slowness. EXPLAIN ANALYZE, please?\n>\n\ntsearch2 config's also are very important. I've seen a lot of \nmistakes in configs !\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 22 Mar 2005 16:24:16 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Mike Rylander wrote:\n> On Tue, 22 Mar 2005 15:36:11 +0300 (MSK), Oleg Bartunov <[email protected]> wrote:\n>>\n>>use something like\n>>echo \"150000000\" > /proc/sys/kernel/shmmax\n>>to increase shared memory. 
In your case you could dedicate much more\n>>memory.\n>>\n>> Regards,\n>> Oleg\n\n\nThanks, I'll check that out.\n\n> And Oleg should know. Unless I'm mistaken, he (co)wrote tsearch2. \n> Other than shared buffers, I can't imagine what could be causing that\n> kind of slowness. EXPLAIN ANALYZE, please?\n> \n\nilab=# explain analyze select count(titel) from books where idxfti @@ \nto_tsquery('default', 'buckingham | palace');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=35547.99..35547.99 rows=1 width=56) (actual \ntime=125968.119..125968.120 rows=1 loops=1)\n -> Index Scan using idxfti_idx on books (cost=0.00..35525.81 \nrows=8869 width=56) (actual time=0.394..125958.245 rows=3080 loops=1)\n Index Cond: (idxfti @@ '\\'buckingham\\' | \\'palac\\''::tsquery)\n Total runtime: 125968.212 ms\n(4 rows)\n\nTime: 125969.264 ms\nilab=#\n\n > As an example of what I think you *should* be seeing, I have a similar\n > box (4 procs, but that doesn't matter for one query) and I can search\n > a column with tens of millions of rows in around a second.\n >\n\nThat sounds very promising, I'd love to get those results.. could you \ntell me what your settings are, howmuch memory you have and such? Thanks.\n\nRick\n\n\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Tue, 22 Mar 2005 14:25:19 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Tue, 22 Mar 2005 14:25:19 +0100, Rick Jansen <[email protected]> wrote:\n> \n> ilab=# explain analyze select count(titel) from books where idxfti @@\n> to_tsquery('default', 'buckingham | palace');\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=35547.99..35547.99 rows=1 width=56) (actual\n> time=125968.119..125968.120 rows=1 loops=1)\n> -> Index Scan using idxfti_idx on books (cost=0.00..35525.81\n> rows=8869 width=56) (actual time=0.394..125958.245 rows=3080 loops=1)\n> Index Cond: (idxfti @@ '\\'buckingham\\' | \\'palac\\''::tsquery)\n> Total runtime: 125968.212 ms\n> (4 rows)\n> \n> Time: 125969.264 ms\n> ilab=#\n\nAhh... I should have qualified my claim. I am creating a google-esqe\nsearch interface and almost every query uses '&' as the term joiner. \n'AND' queries and one-term queries are orders of magnitude faster than\n'OR' queries, and fortunately are the expected default for most users.\n (Think, \"I typed in these words, therefore I want to match these\nwords\"...) An interesting test may be to time multiple queries\nindependently, one for each search term, and see if the combined cost\nis less than a single 'OR' search. If so, you could use UNION to join\nthe results.\n\nHowever, the example you originally gave ('terminology') should be\nvery fast. On a comparable query (\"select count(value) from\nmetabib.full_rec where index_vector @@ to_tsquery('default','jane');\")\nI get 12ms.\n\nOleg, do you see anything else on the surface here?\n\nTry:\n\nEXPLAIN ANALYZE\n SELECT titel FROM books WHERE idxfti @@\n to_tsquery('default', 'buckingham')\n UNION\n SELECT titel FROM books WHERE idxfti @@\n to_tsquery('default', 'palace');\n\nand see if using '&' instead of '|' where you can helps out. 
I\nimagine you'd be surprised by the speed of:\n\n SELECT titel FROM books WHERE idxfti @@\n to_tsquery('default', 'buckingham&palace');\n \n\n> \n> > As an example of what I think you *should* be seeing, I have a similar\n> > box (4 procs, but that doesn't matter for one query) and I can search\n> > a column with tens of millions of rows in around a second.\n> >\n> \n> That sounds very promising, I'd love to get those results.. could you\n> tell me what your settings are, howmuch memory you have and such? \n\n16G of RAM on a dedicated machine.\n\n\nshared_buffers = 15000 # min 16, at least max_connections*2, 8KB each\nwork_mem = 10240 # min 64, size in KB\nmaintenance_work_mem = 1000000 # min 1024, size in KB\n# big m_w_m for loading data...\n\nrandom_page_cost = 2.5 # units are one sequential page fetch cost\n# fast drives, and tons of RAM\n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Tue, 22 Mar 2005 10:30:03 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Mike,\n\nno comments before Rick post tsearch configs and increased buffers !\nUnion shouldn't be faster than (term1|term2).\ntsearch2 internals description might help you understanding tsearch2 limitations.\nSee http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\nAlso, don't miss my notes:\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n\nOleg\nOn Tue, 22 Mar 2005, Mike Rylander wrote:\n\n> On Tue, 22 Mar 2005 14:25:19 +0100, Rick Jansen <[email protected]> wrote:\n>>\n>> ilab=# explain analyze select count(titel) from books where idxfti @@\n>> to_tsquery('default', 'buckingham | palace');\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=35547.99..35547.99 rows=1 width=56) (actual\n>> time=125968.119..125968.120 rows=1 loops=1)\n>> -> Index Scan using idxfti_idx on books (cost=0.00..35525.81\n>> rows=8869 width=56) (actual time=0.394..125958.245 rows=3080 loops=1)\n>> Index Cond: (idxfti @@ '\\'buckingham\\' | \\'palac\\''::tsquery)\n>> Total runtime: 125968.212 ms\n>> (4 rows)\n>>\n>> Time: 125969.264 ms\n>> ilab=#\n>\n> Ahh... I should have qualified my claim. I am creating a google-esqe\n> search interface and almost every query uses '&' as the term joiner.\n> 'AND' queries and one-term queries are orders of magnitude faster than\n> 'OR' queries, and fortunately are the expected default for most users.\n> (Think, \"I typed in these words, therefore I want to match these\n> words\"...) An interesting test may be to time multiple queries\n> independently, one for each search term, and see if the combined cost\n> is less than a single 'OR' search. If so, you could use UNION to join\n> the results.\n>\n> However, the example you originally gave ('terminology') should be\n> very fast. On a comparable query (\"select count(value) from\n> metabib.full_rec where index_vector @@ to_tsquery('default','jane');\")\n> I get 12ms.\n>\n> Oleg, do you see anything else on the surface here?\n>\n> Try:\n>\n> EXPLAIN ANALYZE\n> SELECT titel FROM books WHERE idxfti @@\n> to_tsquery('default', 'buckingham')\n> UNION\n> SELECT titel FROM books WHERE idxfti @@\n> to_tsquery('default', 'palace');\n>\n> and see if using '&' instead of '|' where you can helps out. 
I\n> imagine you'd be surprised by the speed of:\n>\n> SELECT titel FROM books WHERE idxfti @@\n> to_tsquery('default', 'buckingham&palace');\n>\n>\n>>\n>> > As an example of what I think you *should* be seeing, I have a similar\n>> > box (4 procs, but that doesn't matter for one query) and I can search\n>> > a column with tens of millions of rows in around a second.\n>> >\n>>\n>> That sounds very promising, I'd love to get those results.. could you\n>> tell me what your settings are, howmuch memory you have and such?\n>\n> 16G of RAM on a dedicated machine.\n>\n>\n> shared_buffers = 15000 # min 16, at least max_connections*2, 8KB each\n> work_mem = 10240 # min 64, size in KB\n> maintenance_work_mem = 1000000 # min 1024, size in KB\n> # big m_w_m for loading data...\n>\n> random_page_cost = 2.5 # units are one sequential page fetch cost\n> # fast drives, and tons of RAM\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 22 Mar 2005 18:45:17 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Oleg Bartunov wrote:\n> Mike,\n> \n> no comments before Rick post tsearch configs and increased buffers !\n> Union shouldn't be faster than (term1|term2).\n> tsearch2 internals description might help you understanding tsearch2 \n> limitations.\n> See http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n> Also, don't miss my notes:\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n> \n> Oleg\n\nThanks Oleg, i've seen those pages before :) I've set shared_buffers to \n45000 now (yes thats probably very much, isn't it?) and it already seems \na lot quicker.\n\nHow do I find out what my tsearch config is? I followed the intro \n(http://www.sai.msu.su/~megera/oddmuse/index.cgi/tsearch-v2-intro) and \napplied it to our books table, thats all, didnt change anything else \nabout configs.\n\n\n> On Tue, 22 Mar 2005, Mike Rylander wrote:\n>> Ahh... I should have qualified my claim. I am creating a google-esqe\n>> search interface and almost every query uses '&' as the term joiner.\n>> 'AND' queries and one-term queries are orders of magnitude faster than\n>> 'OR' queries, and fortunately are the expected default for most users.\n>> (Think, \"I typed in these words, therefore I want to match these\n>> words\"...) An interesting test may be to time multiple queries\n>> independently, one for each search term, and see if the combined cost\n>> is less than a single 'OR' search. If so, you could use UNION to join\n>> the results.\n\nWell I just asked my colleges and OR queries arent used by us anyway, so \nI'll test for AND queries instead.\n\n>> However, the example you originally gave ('terminology') should be\n>> very fast. 
On a comparable query (\"select count(value) from\n>> metabib.full_rec where index_vector @@ to_tsquery('default','jane');\")\n>> I get 12ms.\n\nilab=# select count(*) from books where idxfti @@ to_tsquery('default', \n'jane');\n count\n-------\n 4093\n(1 row)\nTime: 217395.820 ms\n\n:(\n\nilab=# explain analyze select count(*) from books where idxfti @@ \nto_tsquery('default', 'jane');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=16591.95..16591.95 rows=1 width=0) (actual \ntime=4634.931..4634.932 rows=1 loops=1)\n -> Index Scan using idxfti_idx on books (cost=0.00..16581.69 \nrows=4102 width=0) (actual time=0.395..4631.454 rows=4093 loops=1)\n Index Cond: (idxfti @@ '\\'jane\\''::tsquery)\n Total runtime: 4635.023 ms\n(4 rows)\n\nTime: 4636.028 ms\nilab=#\n\n>> 16G of RAM on a dedicated machine.\n>>\n>>\n>> shared_buffers = 15000 # min 16, at least max_connections*2, \n>> 8KB each\n>> work_mem = 10240 # min 64, size in KB\n>> maintenance_work_mem = 1000000 # min 1024, size in KB\n>> # big m_w_m for loading data...\n>>\n>> random_page_cost = 2.5 # units are one sequential page fetch \n>> cost\n>> # fast drives, and tons of RAM\n>>\n\nRight.. well I'll try copying these settings, see how that works out, \nthanks :)\n\nRick\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Tue, 22 Mar 2005 17:05:55 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Tue, 22 Mar 2005, Rick Jansen wrote:\n\n> Oleg Bartunov wrote:\n>> Mike,\n>> \n>> no comments before Rick post tsearch configs and increased buffers !\n>> Union shouldn't be faster than (term1|term2).\n>> tsearch2 internals description might help you understanding tsearch2 \n>> limitations.\n>> See http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>> Also, don't miss my notes:\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n>> \n>> Oleg\n>\n> Thanks Oleg, i've seen those pages before :) I've set shared_buffers to 45000 \n> now (yes thats probably very much, isn't it?) and it already seems a lot \n> quicker.\n>\n> How do I find out what my tsearch config is? I followed the intro \n> (http://www.sai.msu.su/~megera/oddmuse/index.cgi/tsearch-v2-intro) and \n> applied it to our books table, thats all, didnt change anything else about \n> configs.\n\nHmm, default configuration is too eager, you index every lexem using \nsimple dictionary) ! Probably, it's too much. 
Here is what I have for my \nrussian configuration in dictionary database:\n\n default_russian | lword | {en_ispell,en_stem}\n default_russian | lpart_hword | {en_ispell,en_stem}\n default_russian | lhword | {en_ispell,en_stem}\n default_russian | nlword | {ru_ispell,ru_stem}\n default_russian | nlpart_hword | {ru_ispell,ru_stem}\n default_russian | nlhword | {ru_ispell,ru_stem}\n\nNotice, I index only russian and english words, no numbers, url, etc.\nYou may just delete unwanted rows in pg_ts_cfgmap for your configuration,\nbut I'd recommend just update them setting dict_name to NULL.\nFor example, to not indexing integers:\n\nupdate pg_ts_cfgmap set dict_name=NULL where ts_name='default_russian' \nand tok_alias='int';\n\nvoc=# select token,dict_name,tok_type,tsvector from ts_debug('Do you have +70000 bucks');\n token | dict_name | tok_type | tsvector \n--------+---------------------+----------+----------\n Do | {en_ispell,en_stem} | lword |\n you | {en_ispell,en_stem} | lword |\n have | {en_ispell,en_stem} | lword |\n +70000 | | int |\n bucks | {en_ispell,en_stem} | lword | 'buck'\n\nOnly 'bucks' gets indexed :)\nHmm, probably I should add this into documentation.\n\nWhat about word statistics (# of unique words, for example).\n\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 22 Mar 2005 19:38:05 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Oleg Bartunov wrote:\n> On Tue, 22 Mar 2005, Rick Jansen wrote:\n> \n> Hmm, default configuration is too eager, you index every lexem using \n> simple dictionary) ! Probably, it's too much. 
Here is what I have for my \n> russian configuration in dictionary database:\n> \n> default_russian | lword | {en_ispell,en_stem}\n> default_russian | lpart_hword | {en_ispell,en_stem}\n> default_russian | lhword | {en_ispell,en_stem}\n> default_russian | nlword | {ru_ispell,ru_stem}\n> default_russian | nlpart_hword | {ru_ispell,ru_stem}\n> default_russian | nlhword | {ru_ispell,ru_stem}\n> \n> Notice, I index only russian and english words, no numbers, url, etc.\n> You may just delete unwanted rows in pg_ts_cfgmap for your configuration,\n> but I'd recommend just update them setting dict_name to NULL.\n> For example, to not indexing integers:\n> \n> update pg_ts_cfgmap set dict_name=NULL where ts_name='default_russian' \n> and tok_alias='int';\n> \n> voc=# select token,dict_name,tok_type,tsvector from ts_debug('Do you \n> have +70000 bucks');\n> token | dict_name | tok_type | tsvector \n> --------+---------------------+----------+----------\n> Do | {en_ispell,en_stem} | lword |\n> you | {en_ispell,en_stem} | lword |\n> have | {en_ispell,en_stem} | lword |\n> +70000 | | int |\n> bucks | {en_ispell,en_stem} | lword | 'buck'\n> \n> Only 'bucks' gets indexed :)\n> Hmm, probably I should add this into documentation.\n> \n> What about word statistics (# of unique words, for example).\n> \n\nI'm now following the guide to add the ispell dictionary and I've \nupdated most of the rows setting dict_name to NULL:\n\n ts_name | tok_alias | dict_name\n-----------------+--------------+-----------\n default | lword | {en_stem}\n default | nlword | {simple}\n default | word | {simple}\n default | part_hword | {simple}\n default | nlpart_hword | {simple}\n default | lpart_hword | {en_stem}\n default | hword | {simple}\n default | lhword | {en_stem}\n default | nlhword | {simple}\n\nThese are left, but I have no idea what a 'hword' or 'nlhword' or any \nother of these tokens are.\n\nAnyway, how do I find out the number of unique words or other word \nstatistics?\n\nRick\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Wed, 23 Mar 2005 09:52:27 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Wed, 23 Mar 2005, Rick Jansen wrote:\n\n> Oleg Bartunov wrote:\n>> On Tue, 22 Mar 2005, Rick Jansen wrote:\n>> \n>> Hmm, default configuration is too eager, you index every lexem using simple \n>> dictionary) ! Probably, it's too much. 
Here is what I have for my russian \n>> configuration in dictionary database:\n>> \n>> default_russian | lword | {en_ispell,en_stem}\n>> default_russian | lpart_hword | {en_ispell,en_stem}\n>> default_russian | lhword | {en_ispell,en_stem}\n>> default_russian | nlword | {ru_ispell,ru_stem}\n>> default_russian | nlpart_hword | {ru_ispell,ru_stem}\n>> default_russian | nlhword | {ru_ispell,ru_stem}\n>> \n>> Notice, I index only russian and english words, no numbers, url, etc.\n>> You may just delete unwanted rows in pg_ts_cfgmap for your configuration,\n>> but I'd recommend just update them setting dict_name to NULL.\n>> For example, to not indexing integers:\n>> \n>> update pg_ts_cfgmap set dict_name=NULL where ts_name='default_russian' and \n>> tok_alias='int';\n>> \n>> voc=# select token,dict_name,tok_type,tsvector from ts_debug('Do you have \n>> +70000 bucks');\n>> token | dict_name | tok_type | tsvector \n>> --------+---------------------+----------+----------\n>> Do | {en_ispell,en_stem} | lword |\n>> you | {en_ispell,en_stem} | lword |\n>> have | {en_ispell,en_stem} | lword |\n>> +70000 | | int |\n>> bucks | {en_ispell,en_stem} | lword | 'buck'\n>> \n>> Only 'bucks' gets indexed :)\n>> Hmm, probably I should add this into documentation.\n>> \n>> What about word statistics (# of unique words, for example).\n>> \n>\n> I'm now following the guide to add the ispell dictionary and I've updated \n> most of the rows setting dict_name to NULL:\n>\n> ts_name | tok_alias | dict_name\n> -----------------+--------------+-----------\n> default | lword | {en_stem}\n> default | nlword | {simple}\n> default | word | {simple}\n> default | part_hword | {simple}\n> default | nlpart_hword | {simple}\n> default | lpart_hword | {en_stem}\n> default | hword | {simple}\n> default | lhword | {en_stem}\n> default | nlhword | {simple}\n>\n> These are left, but I have no idea what a 'hword' or 'nlhword' or any other \n> of these tokens are.\n\nfrom my notes http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n I've asked how to know token types supported by parser. Actually, there is function token_type(parser), so you just use:\n\n \tselect * from token_type();\n\n>\n> Anyway, how do I find out the number of unique words or other word \n> statistics?\n\n\nfrom my notes http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n\nIt's usefull to see words statistics, for example, to check how good your \ndictionaries work or how did you configure pg_ts_cfgmap. Also, you may notice \nprobable stop words relevant for your collection. \nTsearch provides stat() function:\n\n.......................\n\nDon't hesitate to read it and if you find some bugs or know better wording\nI'd be glad to improve my notes.\n\n>\n> Rick\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 23 Mar 2005 12:40:03 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Oleg Bartunov wrote:\n > from my notes\n > http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n >\n > It's usefull to see words statistics, for example, to check how good\n > your dictionaries work or how did you configure pg_ts_cfgmap. 
Also, you\n > may notice probable stop words relevant for your collection. Tsearch\n > provides stat() function:\n >\n > .......................\n >\n > Don't hesitate to read it and if you find some bugs or know better \nwording\n > I'd be glad to improve my notes.\n >\n\nThanks, but that stat() query takes way too long.. I let it run for like\n4 hours and still nothing. The database I am testing tsearch2 on is also\nthe production database (mysql) server so I have to be careful not to\nuse too many resources :o\n\nAnyway, here's my pg_ts_cfgmap now (well the relevant bits):\n\ndefault_english | lhword | {en_ispell,en_stem}\ndefault_english | lpart_hword | {en_ispell,en_stem}\ndefault_english | lword | {en_ispell,en_stem}\n\nIs it normal that queries for single words (or perhaps they are words\nthat are common) take a really long time? Like this:\n\nilab=# explain analyze select count(*) from books where description_fti \n@@ to_tsquery('default', 'hispanic');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=20369.81..20369.81 rows=1 width=0) (actual \ntime=261512.031..261512.031 rows=1 loops=1)\n -> Index Scan using idxfti_idx on books (cost=0.00..20349.70 \nrows=8041 width=0) (actual time=45777.760..261509.288 rows=674 loops=1)\n Index Cond: (description_fti @@ '\\'hispan\\''::tsquery)\n Total runtime: 261518.529 ms\n(4 rows)\n\nilab=# explain analyze select titel from books where description_fti @@ \nto_tsquery('default', 'buckingham & palace'); \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idxfti_idx on books (cost=0.00..20349.70 rows=8041 \nwidth=57) (actual time=18992.045..48863.385 rows=185 loops=1)\n Index Cond: (description_fti @@ '\\'buckingham\\' & \\'palac\\''::tsquery)\n Total runtime: 48863.874 ms\n(3 rows)\n\n\nI dont know what happened, these queries were a lot faster 2 days \nago..what the feck is going on?!\n\nRick\n\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Thu, 24 Mar 2005 11:41:04 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "On Thu, 24 Mar 2005, Rick Jansen wrote:\n\n> Oleg Bartunov wrote:\n>> from my notes\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n>>\n>> It's usefull to see words statistics, for example, to check how good\n>> your dictionaries work or how did you configure pg_ts_cfgmap. Also, you\n>> may notice probable stop words relevant for your collection. Tsearch\n>> provides stat() function:\n>>\n>> .......................\n>>\n>> Don't hesitate to read it and if you find some bugs or know better wording\n>> I'd be glad to improve my notes.\n>>\n>\n> Thanks, but that stat() query takes way too long.. I let it run for like\n> 4 hours and still nothing. 
The database I am testing tsearch2 on is also\n> the production database (mysql) server so I have to be careful not to\n> use too many resources :o\n\nstat() is indeed a bigdog, it was designed for developers needs,\nso we recommend to save results in table.\n\n>\n> Anyway, here's my pg_ts_cfgmap now (well the relevant bits):\n>\n> default_english | lhword | {en_ispell,en_stem}\n> default_english | lpart_hword | {en_ispell,en_stem}\n> default_english | lword | {en_ispell,en_stem}\n>\n> Is it normal that queries for single words (or perhaps they are words\n> that are common) take a really long time? Like this:\n>\n\n'hispanic' isn't common, I see you get only 674 rows and \n'buckingham & palace' returns 185 rows. Did you run 'vacuum analyze' ?\nI see a big discrepancy between estimated rows (8041) and actual rows.\n\n\n\n> ilab=# explain analyze select count(*) from books where description_fti @@ \n> to_tsquery('default', 'hispanic');\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=20369.81..20369.81 rows=1 width=0) (actual \n> time=261512.031..261512.031 rows=1 loops=1)\n> -> Index Scan using idxfti_idx on books (cost=0.00..20349.70 rows=8041 \n> width=0) (actual time=45777.760..261509.288 rows=674 loops=1)\n> Index Cond: (description_fti @@ '\\'hispan\\''::tsquery)\n> Total runtime: 261518.529 ms\n> (4 rows)\n>\n> ilab=# explain analyze select titel from books where description_fti @@ \n> to_tsquery('default', 'buckingham & palace'); \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idxfti_idx on books (cost=0.00..20349.70 rows=8041 \n> width=57) (actual time=18992.045..48863.385 rows=185 loops=1)\n> Index Cond: (description_fti @@ '\\'buckingham\\' & \\'palac\\''::tsquery)\n> Total runtime: 48863.874 ms\n> (3 rows)\n>\n>\n> I dont know what happened, these queries were a lot faster 2 days ago..what \n> the feck is going on?!\n>\n> Rick\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 24 Mar 2005 13:51:42 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 performance on big database" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> stat() is indeed a bigdog, it was designed for developers needs,\n> so we recommend to save results in table.\n> \n>>\n>> Anyway, here's my pg_ts_cfgmap now (well the relevant bits):\n>>\n>> default_english | lhword | {en_ispell,en_stem}\n>> default_english | lpart_hword | {en_ispell,en_stem}\n>> default_english | lword | {en_ispell,en_stem}\n>>\n>> Is it normal that queries for single words (or perhaps they are words\n>> that are common) take a really long time? Like this:\n>>\n> \n> 'hispanic' isn't common, I see you get only 674 rows and 'buckingham & \n> palace' returns 185 rows. Did you run 'vacuum analyze' ?\n> I see a big discrepancy between estimated rows (8041) and actual rows.\n> \n> \n\nYes, I did a vacuum analyze right before executing these queries.\n\nI'm going to recreate the gist index now, and do a vacuum full analyze \nafter that.. 
see if that makes a difference.\n\nRick\n\n-- \nSystems Administrator for Rockingstone IT\nhttp://www.rockingstone.com\nhttp://www.megabooksearch.com - Search many book listing sites at once\n", "msg_date": "Thu, 24 Mar 2005 11:58:04 +0100", "msg_from": "Rick Jansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 performance on big database" } ]
[ { "msg_contents": "How can I improve speed on my queries. For example this query takes one \nday executing itself and it has not finalized !!!\n\"create table tmp_partes as select * from partes where identificacion \nnot in (select cedula from sujetos)\"\n\npartes have 1888000 rows, an index on identificacion\nsujetos have 5500000 rows, an index on cedula\n\n\n\n\n", "msg_date": "Tue, 22 Mar 2005 08:23:07 -0600", "msg_from": "Sabio - PSQL <[email protected]>", "msg_from_op": true, "msg_subject": "Too slow" }, { "msg_contents": "Sabio - PSQL wrote:\n\n> How can I improve speed on my queries. For example this query takes \n> one day executing itself and it has not finalized !!!\n> \"create table tmp_partes as select * from partes where identificacion \n> not in (select cedula from sujetos)\"\n>\n> partes have 1888000 rows, an index on identificacion\n> sujetos have 5500000 rows, an index on cedula\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n>\ntry create table tmp_partes as select * from partes where not exists \n(select cedula from sujetos where cedula = partes.identificacion);\n\nThe \"not in (subselect)\" is very slow in postgresql.\n\nHTH,\n\nchris\n\n", "msg_date": "Tue, 22 Mar 2005 13:49:20 -0500", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow" }, { "msg_contents": "Please post the results of that query as run through EXPLAIN ANALYZE.\n\nAlso, I'm going to reply to this on pgsql-performance, which is \nprobably where it better belongs.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Mar 22, 2005, at 8:23 AM, Sabio - PSQL wrote:\n\n> How can I improve speed on my queries. For example this query takes \n> one day executing itself and it has not finalized !!!\n> \"create table tmp_partes as select * from partes where identificacion \n> not in (select cedula from sujetos)\"\n>\n> partes have 1888000 rows, an index on identificacion\n> sujetos have 5500000 rows, an index on cedula\n\n", "msg_date": "Tue, 22 Mar 2005 12:50:29 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Too slow" }, { "msg_contents": "WITH: select * from partes where cedula not in (select cedula from sujetos)\nSeq Scan on partes (cost=0.00..168063925339.69 rows=953831 width=109)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on sujetos (cost=0.00..162348.43 rows=5540143 width=15)\n\nWITH: select * from partes where not exists (select cedula from sujetos \nwhere cedula=partes.cedula)\nSeq Scan on partes (cost=0.00..7373076.94 rows=953831 width=109)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using sujetos_pkey on sujetos (cost=0.00..3.84 \nrows=1 width=15)\n Index Cond: ((cedula)::text = ($0)::text)\n\nThomas F. O'Connell wrote:\n\n> Please post the results of that query as run through EXPLAIN ANALYZE.\n>\n> Also, I'm going to reply to this on pgsql-performance, which is \n> probably where it better belongs.\n>\n> -tfo\n>\n> -- \n> Thomas F. O'Connell\n> Co-Founder, Information Architect\n> Sitening, LLC\n> http://www.sitening.com/\n> 110 30th Avenue North, Suite 6\n> Nashville, TN 37203-6320\n> 615-260-0005\n>\n> On Mar 22, 2005, at 8:23 AM, Sabio - PSQL wrote:\n>\n>> How can I improve speed on my queries. 
For example this query takes \n>> one day executing itself and it has not finalized !!!\n>> \"create table tmp_partes as select * from partes where identificacion \n>> not in (select cedula from sujetos)\"\n>>\n>> partes have 1888000 rows, an index on identificacion\n>> sujetos have 5500000 rows, an index on cedula\n>\n>\n>\n>\n\n\n", "msg_date": "Tue, 22 Mar 2005 13:10:08 -0600", "msg_from": "Sabio - PSQL <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too slow" }, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> The \"not in (subselect)\" is very slow in postgresql.\n\nIt's OK as long as the subselect result is small enough to hash, but\nwith 5500000 rows that's not going to happen :-(.\n\nAnother issue is that if there are any NULLs in the subselect then you\nwill probably not like the results. They are correct per spec but not\nvery intuitive.\n\nPersonally I'd try ye olde outer join trick:\n\nselect partes.*\n from partes left join sujetos on (identificacion = cedula)\n where cedula is null;\n\nA merge join on this would likely be the most effective solution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2005 14:14:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow " } ]
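A minimal SQL sketch of the two rewrites suggested in the thread above, using the table and column names from the original post (partes.identificacion, sujetos.cedula); it is an untested illustration of that advice, not output from the poster's system:

-- NOT EXISTS form (as suggested): the correlated subquery can use the index on sujetos.cedula.
CREATE TABLE tmp_partes AS
SELECT p.*
  FROM partes p
 WHERE NOT EXISTS (SELECT 1
                     FROM sujetos s
                    WHERE s.cedula = p.identificacion);

-- Outer-join form (the "ye olde outer join trick"): a merge join over the two
-- indexes can produce the anti-join in a single pass.
SELECT p.*
  FROM partes p
  LEFT JOIN sujetos s ON s.cedula = p.identificacion
 WHERE s.cedula IS NULL;

Both forms avoid re-evaluating the 5,500,000-row subselect for every row of partes, which is what makes the un-hashed NOT IN so slow here; as noted in the thread, NOT IN also behaves unintuitively if the subselect returns NULLs.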
[ { "msg_contents": "I get the following output from explain analyze on a certain subset of\na large query I'm doing.\n\n From the looks of it, I need to increase how often postgres uses an\nindex over a seq scan, but I'm not sure how to do that. I looked\nthrough the run-time configuration docs on the website, but didn't see\nanything pertaining to index selectivity.\n\nThanks,\n\nAlex Turner\nnetEconomist\n\n\ntrendmls=# explain analyze select listnum from propmain where\nlistprice<=300000 and listprice>=220000;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on propmain (cost=0.00..15556.05 rows=6228 width=4) (actual\ntime=0.093..506.730 rows=5671 loops=1)\n Filter: ((listprice <= 300000::numeric) AND (listprice >= 220000::numeric))\n Total runtime: 510.482 ms\n(3 rows)\n\ntrendmls=# explain analyze select listnum from propmain where\nlistprice<=300000 and listprice>=250000;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using propmain_listprice_i on propmain \n(cost=0.00..12578.65 rows=3486 width=4) (actual time=0.103..16.418\nrows=3440 loops=1)\n Index Cond: ((listprice <= 300000::numeric) AND (listprice >=\n250000::numeric))\n Total runtime: 18.528 ms\n(3 rows)\n", "msg_date": "Tue, 22 Mar 2005 09:56:02 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Planner issue" }, { "msg_contents": "Alex Turner wrote:\n\n>I get the following output from explain analyze on a certain subset of\n>a large query I'm doing.\n>\n> \n>\nTry increases the statistics on the listprice column with alter\ntable and then re-run analyze.\n\nalter table foo alter column set statistics <n>\n\nSincerely,\n\nJoshua D. Drake\n\n\n>>From the looks of it, I need to increase how often postgres uses an\n>index over a seq scan, but I'm not sure how to do that. 
I looked\n>through the run-time configuration docs on the website, but didn't see\n>anything pertaining to index selectivity.\n>\n>Thanks,\n>\n>Alex Turner\n>netEconomist\n>\n>\n>trendmls=# explain analyze select listnum from propmain where\n>listprice<=300000 and listprice>=220000;\n> QUERY PLAN \n>--------------------------------------------------------------------------------------------------------------\n> Seq Scan on propmain (cost=0.00..15556.05 rows=6228 width=4) (actual\n>time=0.093..506.730 rows=5671 loops=1)\n> Filter: ((listprice <= 300000::numeric) AND (listprice >= 220000::numeric))\n> Total runtime: 510.482 ms\n>(3 rows)\n>\n>trendmls=# explain analyze select listnum from propmain where\n>listprice<=300000 and listprice>=250000;\n> QUERY PLAN \n>------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using propmain_listprice_i on propmain \n>(cost=0.00..12578.65 rows=3486 width=4) (actual time=0.103..16.418\n>rows=3440 loops=1)\n> Index Cond: ((listprice <= 300000::numeric) AND (listprice >=\n>250000::numeric))\n> Total runtime: 18.528 ms\n>(3 rows)\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 22 Mar 2005 08:22:59 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue" }, { "msg_contents": "This helps a bit when I set it to 1000 - but it's still pretty bad:\n\nI will use an index 220-300, but not 200-300.\n\nAlex\n\ntrendmls=# explain analyze select listnum from propmain where\nlistprice<=300000 and listprice>=200000;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on propmain (cost=0.00..15517.56 rows=6842 width=4) (actual\ntime=0.039..239.760 rows=6847 loops=1)\n Filter: ((listprice <= 300000::numeric) AND (listprice >= 200000::numeric))\n Total runtime: 244.301 ms\n(3 rows)\n\ntrendmls=# set enable_seqscan=off;\nSET\ntrendmls=# explain analyze select listnum from propmain where\nlistprice<=300000 and listprice>=200000;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using propmain_listprice_i on propmain \n(cost=0.00..22395.95 rows=6842 width=4) (actual time=0.084..25.751\nrows=6847 loops=1)\n Index Cond: ((listprice <= 300000::numeric) AND (listprice >=\n200000::numeric))\n Total runtime: 30.193 ms\n(3 rows)\n\ntrendmls=#\n\n\n\nOn Tue, 22 Mar 2005 08:22:59 -0800, Joshua D. Drake\n<[email protected]> wrote:\n> Alex Turner wrote:\n> \n> >I get the following output from explain analyze on a certain subset of\n> >a large query I'm doing.\n> >\n> >\n> >\n> Try increases the statistics on the listprice column with alter\n> table and then re-run analyze.\n> \n> alter table foo alter column set statistics <n>\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> >>From the looks of it, I need to increase how often postgres uses an\n> >index over a seq scan, but I'm not sure how to do that. 
I looked\n> >through the run-time configuration docs on the website, but didn't see\n> >anything pertaining to index selectivity.\n> >\n> >Thanks,\n> >\n> >Alex Turner\n> >netEconomist\n> >\n> >\n> >trendmls=# explain analyze select listnum from propmain where\n> >listprice<=300000 and listprice>=220000;\n> > QUERY PLAN\n> >--------------------------------------------------------------------------------------------------------------\n> > Seq Scan on propmain (cost=0.00..15556.05 rows=6228 width=4) (actual\n> >time=0.093..506.730 rows=5671 loops=1)\n> > Filter: ((listprice <= 300000::numeric) AND (listprice >= 220000::numeric))\n> > Total runtime: 510.482 ms\n> >(3 rows)\n> >\n> >trendmls=# explain analyze select listnum from propmain where\n> >listprice<=300000 and listprice>=250000;\n> > QUERY PLAN\n> >------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using propmain_listprice_i on propmain\n> >(cost=0.00..12578.65 rows=3486 width=4) (actual time=0.103..16.418\n> >rows=3440 loops=1)\n> > Index Cond: ((listprice <= 300000::numeric) AND (listprice >=\n> >250000::numeric))\n> > Total runtime: 18.528 ms\n> >(3 rows)\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n> >\n> \n> --\n> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\n> Postgresql support, programming shared hosting and dedicated hosting.\n> +1-503-667-4564 - [email protected] - http://www.commandprompt.com\n> PostgreSQL Replicator -- production quality replication for PostgreSQL\n> \n> \n>\n", "msg_date": "Tue, 22 Mar 2005 14:36:46 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner issue" }, { "msg_contents": "On Tue, 2005-03-22 at 14:36 -0500, Alex Turner wrote:\n\n> I will use an index 220-300, but not 200-300.\n> ...\n> Seq Scan on propmain (cost=0.00..15517.56 rows=6842 width=4) (actual\n> time=0.039..239.760 rows=6847 loops=1)\n> ...\n> Index Scan using propmain_listprice_i on propmain \n> (cost=0.00..22395.95 rows=6842 width=4) (actual time=0.084..25.751\n> rows=6847 loops=1)\n\nthe rows estimates are accurate, so it is not a question of statistics\nanymore.\n\nfirst make sure effective_cache_size is correctly set, and then \nif that is not enough, you might try to lower random_page_cost a bit\n\n\ngnari\n\n\n", "msg_date": "Tue, 22 Mar 2005 20:09:03 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue" }, { "msg_contents": "\nI'm guessing your data is actually more \"clustered\" than the\n\"correlation\" stastic thinks it is.\n\nAlex Turner wrote:\n > trendmls=# explain analyze select listnum from propmain where\n > listprice<=300000 and listprice>=200000;\n\n\nIs that a database of properties like land/houses?\n\nIf your table is clustered geographically (by zip code, etc),\nthe index scan might do quite well because all houses in a\nneighborhood may have similar prices (and therefore live on\njust a few disk pages). However since high-priced neighborhoods\nare scattered across the country, the optimizer would see\na very low \"correlation\" and not notice this clustering.\n\n\nIf this is the cause, one thing you could do is\nCLUSTER your table on propmain_listprice_i. 
I'm quite\nconfident it'll fix this particular query - but might\nslow down other queries.\n\n\n", "msg_date": "Tue, 22 Mar 2005 15:08:39 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue" }, { "msg_contents": "\nI'm guessing your data is actually more \"clustered\" than the\n\"correlation\" statistic thinks it is.\n\nAlex Turner wrote:\n > trendmls=# explain analyze select listnum from propmain where\n > listprice<=300000 and listprice>=200000;\n\n\nIs that a database of properties like land/houses?\n\nIf your table is clustered geographically (by zip code, etc),\nthe index scan might do quite well because all houses in a\nneighborhood may have similar prices (and therefore live on\njust a few disk pages). However since high-priced neighborhoods\nare scattered across the country, the optimizer would see\na very low \"correlation\" and not notice this clustering.\n\n\nIf this is the cause, one thing you could do is\nCLUSTER your table on propmain_listprice_i. I'm quite\nconfident it'll fix this particular query - but might\nslow down other queries.\n\n\n", "msg_date": "Tue, 22 Mar 2005 15:09:35 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue" } ]
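A short sketch of the three knobs discussed in the thread above, using the poster's table and index names (propmain, listprice, propmain_listprice_i); the values are illustrative only, not tuned recommendations:

-- Better statistics on the filtered column, then refresh them:
ALTER TABLE propmain ALTER COLUMN listprice SET STATISTICS 1000;
ANALYZE propmain;

-- Planner cost settings can be tried per session before touching postgresql.conf:
SET effective_cache_size = 50000;  -- in 8kB pages; size it to the OS cache actually available
SET random_page_cost = 3;
EXPLAIN ANALYZE SELECT listnum FROM propmain
 WHERE listprice <= 300000 AND listprice >= 200000;

-- If the rows are physically scattered by price (low correlation), reordering
-- the heap makes the index scan cheap. CLUSTER rewrites the table under an
-- exclusive lock, and the ordering is not maintained for rows added later:
CLUSTER propmain_listprice_i ON propmain;
ANALYZE propmain;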
[ { "msg_contents": "Hi everyone,\n\nI'm developping a web decisonnal application based on \n-Red Hat 3 ES\n-Postgresql 8.0.1 \n-Dell poweredge 2850, Ram 2Gb, 2 procs, 3 Ghz, 1Mb cache and 4 disks ext3 10,000 r/mn\nI am alone in the box and there is not any crontab.\n\nI have 2 databases (A and B) with exactly the same schemas: \n-one main table called \"aggregate\" having no indexes and supporting only SELECT statements (loaded one time a month with a new bundle of datas). Row size # 200 bytes (50 columns of type char(x) or integer) \n-and several small 'reference' tables not shown by the following example for clarity reasons.\n-Database A : aggregate contains 2,300,000 records ( 500 Mb)\n-Database B : aggregate contains 9,000,000 records ( 2 Gb)\n\nThere is no index on the aggregate table since the criterias, their number and their scope are freely choosen by the customers.\n\nThe query :\n select sum(ca) \n from aggregate \n where (issue_date >= '2004-01' and issue_date <= '2004-02' );\ntakes 5s on database A ( 5mn30s* the first time, probably to fill the cache) \nand 21mn* on database B (whatever it is the first time or not).\n\nexplain shows sequential scan of course:\n Aggregate (cost=655711.85..655711.85 rows=1 width=4)\n -> Seq Scan on \"aggregate\" (cost=0.00..647411.70 rows=3320060 width=4)\n Filter: ((issue_date >= '2004-01'::bpchar) AND (issue_date <= '2004-02'::bpchar))\n\n*Here is the 'top' display for these response times:\n91 processes: 90 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 0,0% 0,0% 0,2% 0,1% 0,0% 48,6% 51,0%\n cpu00 0,0% 0,0% 0,0% 0,0% 0,0% 0,0% 100,0%\n cpu01 0,0% 0,0% 1,0% 0,0% 0,0% 99,0% 0,0%\n cpu02 0,0% 0,0% 0,0% 0,5% 0,0% 0,0% 99,5%\n cpu03 0,0% 0,0% 0,0% 0,0% 0,0% 95,5% 4,5%\nMem: 2061424k av, 2043944k used, 17480k free, 0k shrd, 6104k buff\n 1551692k actv, 172496k in_d, 30452k in_c\nSwap: 2096440k av, 0k used, 2096440k free 1792852k cached\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n21983 postgres 20 0 9312 9312 8272 D 0,2 0,4 0:00 1 postmaster\n 1 root 15 0 488 488 432 S 0,0 0,0 0:06 2 init\n 2 root RT 0 0 0 0 SW 0,0 0,0 0:00 0 migration/0\n\nFor the 5s response time, the 'top' command shows 0% iowait and 25% cpu.\n\n\n- I guess this is a cache issue but how can I manage/control it ? \nIs Postgres managing it's own cache or does it use the OS cache ?\n\n- Is using the cache is a good approach? \nIt does not seem to work for large databases : I tryed several different values for postgres.conf and /proc/sys/kernel/shmmax without detecting any response time enhancement (For example : shared_buffers = 190000 , sort_mem = 4096 , effective_cache_size = 37000 and kernel/shmmax=1200000000 )\nDo I have to upgrade the RAM to 6Gb or/and buy faster HD (of what type?) ?\nMoreover, a query on database B will destroy the cache previously build for database A, increasing the response time for the next query on database A. And I have in fact 15 databases !\n\n- In my case, what should be the best parameters combination between postgres.conf and /proc/sys/kernel/shmmax ?\n\n- is there a way to reduce the size of the \"aggregate\" table files (1Gb + 1Gb + 1 Gb + 0.8Gb = 3.8Gb for the \"aggregate\" table instead of 2Gb = 200 * 9,000,000 records) by playing with the data types or others parameters (fillfactor ?). 
\nVacuum (even full) seems to be useless since the aggregate table supports only 'copy aggregate from' and 'select'.\n\n- is it possible to define a sort of RAM filesystem (as it exists in DOS/Windows) which I could create and populate my databases into ? ...since the databases does not support updates for this application.\n\nSorry for my naive questions and my poor english but any help or advise will be greatly appreciated !\n\nPatrick Vedrines\n\nPS (maybe of interest for some users like me) : \nI created a partition on a new similar disk but on the last cylinders (near the periphery) and copied the database B into it: the response time is 25% faster (i.e. 15mn instead of 21mn). But 15 mn is still too long for my customers (5 mn would be nice).\n", "msg_date": "Tue, 22 Mar 2005 19:08:23 +0100", "msg_from": "\"Patrick Vedrines\" <[email protected]>", "msg_from_op": true, "msg_subject": "CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Hi Patrick,\n\n    How is configured your disk array? Do you have a Perc 4?\n\nTip: Use reiserfs instead ext3, raid 0+1 and deadline I/O scheduler in \nkernel linux 2.6\n\nAtenciosamente,\n\nGustavo Franklin Nóbrega\nInfra-Estrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3224-3066 Ramal 209\nwww.planae.com.br\n\n\n\nPatrick Vedrines wrote:\n\n> Hi everyone,\n> \n> I'm developping a web decisonnal application based on\n> -Red Hat 3 ES\n> -Postgresql 8.0.1\n> -Dell poweredge 2850, Ram 2Gb, 2 procs, 3 Ghz, 1Mb cache and 4 disks \n> ext3 10,000 r/mn\n> I am alone in the box and there is not any crontab.\n> \n> I have 2 databases (A and B) with exactly the same schemas:\n> -one main table called \"aggregate\" having no indexes and supporting \n> only SELECT statements (loaded one time a month with a new bundle of \n> datas). 
Row size # 200 bytes (50 columns of type char(x) or integer) \n> -and several small 'reference' tables not shown by the following \n> example for clarity reasons.\n> -Database A : aggregate contains 2,300,000 records ( 500 Mb)\n> -Database B : aggregate contains 9,000,000 records ( 2 Gb)\n> \n> There is no index on the aggregate table since the criterias, their \n> number and their scope are freely choosen by the customers.\n> \n> The query :\n> select sum(ca) \n> from aggregate \n> where (issue_date >= '2004-01' and issue_date <= '2004-02' );\n> takes 5s on database A ( 5mn30s* the first time, probably to fill the \n> cache)\n> and 21mn* on database B (whatever it is the first time or not).\n> \n> explain shows sequential scan of course:\n> Aggregate (cost=655711.85..655711.85 rows=1 width=4)\n> -> Seq Scan on \"aggregate\" (cost=0.00..647411.70 rows=3320060 \n> width=4)\n> Filter: ((issue_date >= '2004-01'::bpchar) AND (issue_date <= \n> '2004-02'::bpchar))\n> \n> *Here is the 'top' display for these response times:\n> 91 processes: 90 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 0,0% 0,0% 0,2% 0,1% 0,0% 48,6% 51,0%\n> cpu00 0,0% 0,0% 0,0% 0,0% 0,0% 0,0% 100,0%\n> cpu01 0,0% 0,0% 1,0% 0,0% 0,0% *99,0%* 0,0%\n> cpu02 0,0% 0,0% 0,0% 0,5% 0,0% 0,0% 99,5%\n> cpu03 0,0% 0,0% 0,0% 0,0% 0,0% *95,5%* 4,5%\n> Mem: 2061424k av, 2043944k used, 17480k free, 0k shrd, \n> 6104k buff\n> 1551692k actv, 172496k in_d, 30452k in_c\n> Swap: 2096440k av, 0k used, 2096440k free \n> 1792852k cached\n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n> 21983 postgres 20 0 9312 9312 8272 D *0,2* 0,4 0:00 1 \n> postmaster\n> 1 root 15 0 488 488 432 S 0,0 0,0 0:06 2 init\n> 2 root RT 0 0 0 0 SW 0,0 0,0 0:00 0 \n> migration/0\n> For the 5s response time, the 'top' command shows 0% iowait and 25% cpu.\n> \n> \n> - I guess this is a cache issue but how can I manage/control it ?\n> Is Postgres managing it's own cache or does it use the OS cache ?\n> \n> - Is using the cache is a good approach?\n> It does not seem to work for large databases : I tryed several \n> different values for postgres.conf and /proc/sys/kernel/shmmax without \n> detecting any response time enhancement (For example : shared_buffers \n> = 190000 , sort_mem = 4096 , effective_cache_size = 37000 \n> and kernel/shmmax=1200000000 )\n> Do I have to upgrade the RAM to 6Gb or/and buy faster HD (of what type?) ?\n> Moreover, a query on database B will destroy the cache previously \n> build for database A, increasing the response time for the next query \n> on database A. And I have in fact 15 databases !\n> \n> - In my case, what should be the best parameters combination between \n> postgres.conf and /proc/sys/kernel/shmmax ?\n> \n> - is there a way to reduce the size of the \"aggregate\" table files \n> (1Gb + 1Gb + 1 Gb + 0.8Gb = 3.8Gb for the \"aggregate\" table instead of \n> 2Gb = 200 * 9,000,000 records) by playing with the data types or \n> others parameters (fillfactor ?).\n> Vacuum (even full) seems to be useless since the aggregate table \n> supports only 'copy aggregate from' and 'select'.\n> \n> - is it possible to define a sort of RAM filesystem (as it exists in \n> DOS/Windows) which I could create and populate my databases into \n> ? 
...since the databases does not support updates for this application.\n> \n> Sorry for my naive questions and my poor english but any help or \n> advise will be greatly appreciated !\n> \n> Patrick Vedrines\n> \n> PS (maybe of interest for some users like me) :\n> I created a partition on a new similar disk but on the last cylinders \n> (near the periphery) and copied the database B into it: the response \n> time is 25% faster (i.e. 15mn instead of 21mn). But 15 mn is still too \n> long for my customers (5 mn would be nice).\n> \n> \n> \n> \n\n\n\n\n\n\n\n\nHi Patrick,\n\n    How is configured your disk array? Do you have a Perc 4?\n\nTip: Use reiserfs instead ext3, raid 0+1 and deadline I/O scheduler in\nkernel linux 2.6\nAtenciosamente,\n\nGustavo Franklin Nóbrega\nInfra-Estrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3224-3066 Ramal 209\nwww.planae.com.br\n\n\n\nPatrick Vedrines wrote:\n\n\n\n\nHi everyone,\n \nI'm developping a web decisonnal\napplication based on \n-Red Hat 3 ES\n-Postgresql 8.0.1 \n-Dell poweredge 2850, Ram 2Gb, 2\nprocs, 3 Ghz, 1Mb cache and 4 disks ext3 10,000 r/mn\nI am alone in the box and there is\nnot any crontab.\n \nI have 2 databases (A and B) with\nexactly the same schemas: \n-one main table called \"aggregate\"\nhaving no indexes and supporting only SELECT statements (loaded one\ntime a month with a new bundle of datas). Row size # 200 bytes (50\ncolumns of type char(x) or integer) \n-and several small 'reference'\ntables not shown by the following example for clarity reasons.\n-Database A : aggregate contains\n2,300,000 records ( 500 Mb)\n\n-Database B : aggregate contains\n9,000,000 records ( 2 Gb)\n\n \nThere is no index on the aggregate\ntable since the criterias, their number and their scope are freely\nchoosen by the customers.\n \nThe query :\n        select  sum(ca)  \n        from aggregate    \n        where  (issue_date >= '2004-01' and issue_date <=\n'2004-02' );\ntakes 5s on database A ( 5mn30s* the\nfirst time, probably to fill the cache) \nand  21mn* on database B (whatever it is the first time or not).\n \nexplain shows sequential scan of\ncourse:\n Aggregate \n(cost=655711.85..655711.85 rows=1 width=4)\n   ->  Seq Scan on \"aggregate\"  (cost=0.00..647411.70 rows=3320060\nwidth=4)\n         Filter: ((issue_date >= '2004-01'::bpchar) AND (issue_date\n<= '2004-02'::bpchar))\n \n*Here is the 'top' display for these\nresponse times:\n91 processes: 90 sleeping, 1\nrunning, 0 zombie, 0 stopped\nCPU states:  cpu    user    nice  system    irq  softirq  iowait    idle\n           total    0,0%    0,0%    0,2%   0,1%     0,0%   48,6%   51,0%\n           cpu00    0,0%    0,0%    0,0%   0,0%     0,0%    0,0%  100,0%\n           cpu01    0,0%    0,0%    1,0%   0,0%     0,0%   99,0%    0,0%\n           cpu02    0,0%    0,0%    0,0%   0,5%     0,0%    0,0%   99,5%\n           cpu03    0,0%    0,0%    0,0%   0,0%     0,0%   95,5%    4,5%\nMem:  2061424k av, 2043944k used,   17480k free,       0k shrd,   \n6104k buff\n                   1551692k actv,  172496k in_d,   30452k in_c\nSwap: 2096440k av,       0k used, 2096440k free                \n1792852k cached\n  PID USER     PRI  NI  SIZE  RSS\nSHARE STAT %CPU %MEM   TIME CPU COMMAND\n21983 postgres  20   0  9312 9312  8272 D     0,2  0,4   0:00   1 postmaster\n    1 root      15   0   488  488   432 S     0,0  0,0   0:06   2 init\n    2 root      RT   0     0    0     0 SW    0,0  0,0   0:00   0\nmigration/0\n\n\nFor the 5s response time, the 'top'\ncommand shows 0% iowait and 25% 
cpu.\n \n \n- I guess this is a cache issue but\nhow can I manage/control it ? \nIs Postgres managing it's own cache or does it use the OS cache ?\n \n- Is using the cache is a good approach? \nIt does not seem to work for large databases : I tryed several\ndifferent values for postgres.conf and /proc/sys/kernel/shmmax without\ndetecting any response time enhancement (For example : shared_buffers =\n190000 , sort_mem = 4096 , effective_cache_size = 37000\nand kernel/shmmax=1200000000 )\nDo I have to upgrade the RAM to 6Gb or/and buy faster HD (of\nwhat type?) ?\nMoreover, a query on database B will destroy the cache\npreviously build for database A, increasing the response time for the\nnext query on database A. And I have in fact 15 databases !\n \n\n- In my case, what should be the best parameters combination\nbetween postgres.conf and /proc/sys/kernel/shmmax ?\n\n \n- is there a way to reduce the size of the \"aggregate\" table\nfiles (1Gb + 1Gb + 1 Gb + 0.8Gb = 3.8Gb for the \"aggregate\" table\ninstead of 2Gb = 200 * 9,000,000 records) by playing with the data\ntypes or others parameters (fillfactor ?). \nVacuum (even full) seems to be useless since the aggregate table\nsupports only 'copy aggregate from' and 'select'.\n \n- is it possible to define a sort of RAM filesystem (as\nit exists in DOS/Windows) which I could create and populate my\ndatabases into ? ...since the databases does not support updates for\nthis application.\n \nSorry for my naive questions and my poor english but any help or\nadvise will be greatly appreciated !\n \nPatrick Vedrines\n \nPS (maybe of interest for some users like me) : \nI created a partition on a new similar disk but on the last\ncylinders (near the periphery) and copied the database B into it: the\nresponse time is 25% faster (i.e. 15mn instead of 21mn). But 15 mn is\nstill too long for my customers (5 mn would be nice).", "msg_date": "Tue, 22 Mar 2005 15:28:12 -0300", "msg_from": "Gustavo F Nobrega - Planae <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Patrick Vedrines wrote:\n> Hi everyone,\n> \n> I'm developping a web decisonnal application based on -Red Hat 3 ES \n> -Postgresql 8.0.1 -Dell poweredge 2850, Ram 2Gb, 2 procs, 3 Ghz, 1Mb\n> cache and 4 disks ext3 10,000 r/mn I am alone in the box and there is\n> not any crontab.\n> \n> I have 2 databases (A and B) with exactly the same schemas: -one main\n> table called \"aggregate\" having no indexes and supporting only SELECT\n> statements (loaded one time a month with a new bundle of datas).\n\nPerhaps look into clustering the tables.\n\n > Row\n> size # 200 bytes (50 columns of type char(x) or integer) -and several\n> small 'reference' tables not shown by the following example for\n> clarity reasons. -Database A : aggregate contains 2,300,000 records (\n> 500 Mb) -Database B : aggregate contains 9,000,000 records ( 2 Gb)\n> \n> There is no index on the aggregate table since the criterias, their\n> number and their scope are freely choosen by the customers.\n\nHmm... not convinced this is a good idea.\n\n> The query : select sum(ca) from aggregate where (issue_date >=\n> '2004-01' and issue_date <= '2004-02' ); takes 5s on database A (\n> 5mn30s* the first time, probably to fill the cache) and 21mn* on\n> database B (whatever it is the first time or not).\n\nBecause A fits in the cache and B doesn't.\n\n> - I guess this is a cache issue but how can I manage/control it ? 
Is\n> Postgres managing it's own cache or does it use the OS cache ?\n\nBoth\n\n> - Is using the cache is a good approach? It does not seem to work for\n> large databases : I tryed several different values for postgres.conf\n> and /proc/sys/kernel/shmmax without detecting any response time\n> enhancement (For example : shared_buffers = 190000 , sort_mem = 4096\n> , effective_cache_size = 37000 and kernel/shmmax=1200000000 ) Do I\n> have to upgrade the RAM to 6Gb or/and buy faster HD (of what type?) ?\n> Moreover, a query on database B will destroy the cache previously\n> build for database A, increasing the response time for the next query\n> on database A. And I have in fact 15 databases !\n\nIf you don't have any indexes and the table isn't clustered then PG has \nno choice but to scan the entire table for every query. As you note, \nthat's going to destroy your cache. You can increase the RAM but sooner \nor later, you'll get the same problem.\n\n> - In my case, what should be the best parameters combination between\n> postgres.conf and /proc/sys/kernel/shmmax ?\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nhttp://www.powerpostgresql.com/PerfList\n\n> - is there a way to reduce the size of the \"aggregate\" table files\n> (1Gb + 1Gb + 1 Gb + 0.8Gb = 3.8Gb for the \"aggregate\" table instead\n> of 2Gb = 200 * 9,000,000 records) by playing with the data types or\n> others parameters (fillfactor ?). Vacuum (even full) seems to be\n> useless since the aggregate table supports only 'copy aggregate from'\n> and 'select'.\n\nYou can replace int4 with int2 and so on (where possible) but that will \nonly delay problems.\n\n> - is it possible to define a sort of RAM filesystem (as it exists in\n> DOS/Windows) which I could create and populate my databases into ?\n> ...since the databases does not support updates for this application.\n\nWon't help - your cache is already doing that. Some things you can do \n(in order of effort)\n\n1. Cluster the large tables\n2. Analyse your customers' queries and try a couple of indexes - some \nchoices will be more common than others.\n3. Split your tables into two - common fields, uncommon fields, that way \nfiltering on the common fields might take less space.\n4. Split your tables by date, one table per month or year. Then re-write \nyour customers' queries on-the-fly to select from the right table. Will \nonly help with queries on date of course.\n5. Place each database on its own machine or virtual machine so they \ndon't interfere with each other.\n\nI'd start with items 1,2 and see if that helps though.\n\nPS - it might make sense to have an unusually large shared_mem for PG, \nbut I'm not familiar enough with the changes in the cache handling in \n8.0 to say for sure.\nPPS - there are more changes coming for 8.1, but I know even less about \nthose.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 22 Mar 2005 18:57:35 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "On Tue, 2005-03-22 at 19:08 +0100, Patrick Vedrines wrote:\n> I have 2 databases (A and B) with exactly the same schemas: \n> -one main table called \"aggregate\" having no indexes and supporting\n> only SELECT statements (loaded one time a month with a new bundle of\n> datas). 
Row size # 200 bytes (50 columns of type char(x) or integer) \n> -and several small 'reference' tables not shown by the following\n> example for clarity reasons.\n> -Database A : aggregate contains 2,300,000 records ( 500 Mb)\n> -Database B : aggregate contains 9,000,000 records ( 2 Gb)\n\n> (For example : shared_buffers = 190000 , sort_mem = 4096 ,\n> effective_cache_size = 37000 and kernel/shmmax=1200000000 )\n> Do I have to upgrade the RAM to 6Gb or/and buy faster HD (of what\n> type?) ?\n\nSetting shared_buffers that high will do you no good at all, as Richard\nsuggests.\n\nYou've got 1.5Gb of shared_buffers and > 2Gb data. In 8.0, the scan will\nhardly use the cache at all, nor will it ever, since the data is bigger\nthan the cache. Notably, the scan of B should NOT spoil the cache for\nA...\n\nPriming the cache is quite hard...but not impossible.\n\nWhat will kill you on a shared_buffers that big is the bgwriter, which\nyou should turn off by setting bgwriter_maxpages = 0\n\n> PS (maybe of interest for some users like me) : \n> I created a partition on a new similar disk but on the last cylinders\n> (near the periphery) and copied the database B into it: the response\n> time is 25% faster (i.e. 15mn instead of 21mn). But 15 mn is still too\n> long for my customers (5 mn would be nice).\n\nSounds like your disks/layout/something is pretty sick. You don't\nmention I/O bandwidth, controller or RAID, so you should look more into\nthose topics.\n\nOn the other hand...just go for more RAM, as you suggest...but you\nshould create a RAMdisk, rather than use too large\nshared_buffers....that way your data is always in RAM, rather than maybe\nin RAM.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 22 Mar 2005 22:18:11 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Hello Gustavo,\n\nYour question seems to say that you suspect a disk issue, and a few hours later, Simon told me \"Sounds like your disks/layout/something is pretty sick\".\nTo be clear in my own mind about it, I've just copyed (time cp) the \"aggregate\" table files (4 Gb) from one disk to an another one: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that Postgres is not the cause of this issue. \nI've just untrusted to my system engineer the analysis of my disks...\n\nIn case I would have to change my disks, do you have any performance figures related to the types you mentionned (reiserfs vs ext3) ?\nI don't use RAID since the security is not a concern.\n\nThank a lot for your help !\n\nPatrick\n\n Hi Patrick,\n\n How is configured your disk array? Do you have a Perc 4?\n\n Tip: Use reiserfs instead ext3, raid 0+1 and deadline I/O scheduler in kernel linux 2.6\n\n\n\n\n\n\n\n\n\nHello Gustavo,\n \nYour question seems to say that you suspect a disk \nissue, and a few hours later, Simon told me \"Sounds like your \ndisks/layout/something is pretty sick\".\nTo be clear in my own mind about it, I've just \ncopyed (time cp) the \"aggregate\" table files (4 Gb) from one disk to an another \none: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that Postgres is not the \ncause of this issue. 
\nI've just untrusted to my system engineer the \nanalysis of my disks...\n \nIn case I would have to change my disks, do you \nhave any performance figures related to the types you mentionned (reiserfs \nvs ext3) ?\nI don't use RAID since the security is not a \nconcern.\n \nThank a lot for your help !\n \nPatrick\n\n Hi \n Patrick,    How is configured your disk array? Do you \n have a Perc 4?Tip: Use reiserfs instead ext3, raid 0+1 and deadline \n I/O scheduler in kernel linux 2.6", "msg_date": "Thu, 24 Mar 2005 10:33:50 +0100", "msg_from": "\"Patrick Vedrines\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Hello Simon,\n\n> Sounds like your disks/layout/something is pretty sick. You don't\n> mention I/O bandwidth, controller or RAID, so you should look more into\n> those topics.\nWell seen ! (as we say in France).\nAs I said to Gustavo, your last suspicion took me into a simple disk test: I've just copyed (time cp) the \"aggregate\" table files (4 Gb) from one disk to an another one: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that Postgres is not the cause of this issue. \nI've just untrusted to my system engineer the analysis of my disks...\n\n> Setting shared_buffers that high will do you no good at all, as Richard\n> suggests.\nI (and my own tests) agree my you.\n\n> You've got 1.5Gb of shared_buffers and > 2Gb data. In 8.0, the scan will\n> hardly use the cache at all, nor will it ever, since the data is bigger\n> than the cache. Notably, the scan of B should NOT spoil the cache for A\nAre you sure of that ? Is Postgres able to say to OS: \"don't use the cache for this query\"?\n\n> Priming the cache is quite hard...but not impossible.\n> What will kill you on a shared_buffers that big is the bgwriter, which\n> you should turn off by setting bgwriter_maxpages = 0\nIs bgwriter concerned as my application applyes only SELECT ?\n\n> \n> On the other hand...just go for more RAM, as you suggest...but you\n> should create a RAMdisk, rather than use too large\n> shared_buffers....that way your data is always in RAM, rather than maybe\n> in RAM.\nI am not an Linux expert: Is it possible (and how) to create a RAMdisk ?\n\n\n\n\nThank a lot for your help !\n\nPatrick\n\n\n\n\n\n\n\n\nHello Simon,\n \n> Sounds like your disks/layout/something is \npretty sick. You don't> mention I/O bandwidth, controller or RAID, so you \nshould look more into> those topics.Well seen ! (as we say in \nFrance).\nAs I said to Gustavo, your last suspicion took me \ninto a simple disk test: I've just copyed (time cp) the \"aggregate\" table files \n(4 Gb) from one disk to an another one: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that Postgres is not the \ncause of this issue. \nI've just untrusted to my system engineer the \nanalysis of my disks...\n \n> Setting shared_buffers that high will do you \nno good at all, as Richard> suggests.I (and my own tests) agree my \nyou.\n> You've got 1.5Gb of shared_buffers and \n> 2Gb data. In 8.0, the scan will> hardly use the cache at all, nor \nwill it ever, since the data is bigger> than the cache. Notably, the scan \nof B should NOT spoil the cache for AAre you sure of that ? 
Is Postgres able \nto say to OS: \"don't use the cache for this query\"?\n> Priming the cache is quite hard...but not \nimpossible.> What will kill you on a shared_buffers that big is the \nbgwriter, which> you should turn off by setting bgwriter_maxpages = \n0\nIs bgwriter concerned as my application applyes \nonly SELECT ?\n> > On the other hand...just go for \nmore RAM, as you suggest...but you> should create a RAMdisk, rather than \nuse too large> shared_buffers....that way your data is always in RAM, \nrather than maybe> in RAM.I am not an Linux expert: Is it possible \n(and how) to create a RAMdisk ?\n \n \n\nThank a lot for your help !\n \nPatrick", "msg_date": "Thu, 24 Mar 2005 10:48:33 +0100", "msg_from": "\"Patrick Vedrines\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Hello Richard,\n\n> Perhaps look into clustering the tables.\nGood idea : I will try to go further into this way.\n\n> > There is no index on the aggregate table since the criterias, their\n> > number and their scope are freely choosen by the customers.\n>\n> Hmm... not convinced this is a good idea.\nLong days ago, when my application used Informix, I've try to index the\naggregate table: It was a nightmare to manage all these indexes (and their\nvolume) for a uncertain benefit.\n\n> If you don't have any indexes and the table isn't clustered then PG has\n> no choice but to scan the entire table for every query. As you note,\n> that's going to destroy your cache. You can increase the RAM but sooner\n> or later, you'll get the same problem.\nI agree with you : You remarks take me not to rely to the cache features.\n\n> 3. Split your tables into two - common fields, uncommon fields, that way\n> filtering on the common fields might take less space.\n> 4. Split your tables by date, one table per month or year. Then re-write\n> your customers' queries on-the-fly to select from the right table. Will\n> only help with queries on date of course.\nThat forces me to rewrite my query generator which is already a very complex\nprogram (in fact the heart of the system)\n\n> 5. Place each database on its own machine or virtual machine so they\n> don't interfere with each other.\nI'm afraid I don't have the money for that. As Simon and Gustavo suggested,\nI will check my SCSI disks first.\n\n\nThank a lot for your advises !\n\nAmicalement,\n\nPatrick\n\n\n", "msg_date": "Thu, 24 Mar 2005 11:08:55 +0100", "msg_from": "\"Patrick Vedrines\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Good day Patrick!\n \n I can help you to design you disk layout for better perform and \nsecurity. Please, tell me how many disks (and some specs, like capacity \nand RPM).\n\n If you want to know more, there is a very interesting article abou \nbenckmark filesystem ( http://linuxgazette.net/102/piszcz.html ). In \nthis article, ReiserFS 3.6, JFS and XFS are in the same level at top \ndepending your application, and ext3 is more slow than others. I believe \nthat version 4 of the ReiserFS is better that version 3.6, but I could \nnot still test it.\n\n Raid0 (striping, more at \nhttp://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel0-c.html) or \nRaid 0+1 (stripping + mirror, more at \nhttp://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html) is \nvery insteresting to postgresql. Raid0 provides only performance, and \nRaid 0+1 provides performance and security. 
Take a look at this articles \nand think about to use Raid (http://www.pcguide.com/ref/hdd/perf/raid/).\n\n I'm glad to help. Best regards!\n\nAtenciosamente,\n\nGustavo Franklin N�brega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informa��o\n(+55) 14 3106-3514\nhttp://www.planae.com.br\n\n\n\n\nPatrick Vedrines wrote:\n\n> Hello Gustavo,\n> \n> Your question seems to say that you suspect a disk issue, and a few \n> hours later, Simon told me \"Sounds like your disks/layout/something is \n> pretty sick\".\n> To be clear in my own mind about it, I've just copyed (time cp) the \n> \"aggregate\" table files (4 Gb) from one disk to an another one: it \n> takes 22 mn !(3 Mb/s).\n> That seems to demonstrate that Postgres is not the cause of this issue.\n> I've just untrusted to my system engineer the analysis of my disks...\n> \n> In case I would have to change my disks, do you have any performance \n> figures related to the types you mentionned (reiserfs vs ext3) ?\n> I don't use RAID since the security is not a concern.\n> \n> Thank a lot for your help !\n> \n> Patrick\n>\n> \n> Hi Patrick,\n>\n> How is configured your disk array? Do you have a Perc 4?\n>\n> Tip: Use reiserfs instead ext3, raid 0+1 and deadline I/O\n> scheduler in kernel linux 2.6\n>\n> \n>\n\n\n\n\n\n\n\nGood day Patrick!\n    \n    I can help you to design you disk layout for better perform and\nsecurity. Please, tell me how many disks (and some specs, like capacity\nand RPM).\n\n    If you want to know more, there is a very interesting article abou\nbenckmark filesystem ( http://linuxgazette.net/102/piszcz.html ). In\nthis article, ReiserFS 3.6, JFS and XFS are in the same level at top\ndepending your application, and ext3 is more slow than others. I\nbelieve that version 4 of the ReiserFS is better that version 3.6, but\nI could not still test it.\n\n    Raid0 (striping, more at\nhttp://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel0-c.html) or\nRaid 0+1 (stripping + mirror, more at\nhttp://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html) is\nvery insteresting to postgresql. Raid0 provides only performance, and\nRaid 0+1 provides performance and security. Take a look at this\narticles and think about to use Raid\n(http://www.pcguide.com/ref/hdd/perf/raid/).\n\n    I'm glad to help. Best regards!\nAtenciosamente,\n\nGustavo Franklin Nóbrega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3106-3514\nhttp://www.planae.com.br\n\n\n\n\nPatrick Vedrines wrote:\n\n\n\n\n\n\nHello Gustavo,\n \nYour question seems to say that you\nsuspect a disk issue, and a few hours later, Simon told me \"Sounds like\nyour disks/layout/something is pretty sick\".\nTo be clear in my own mind about it,\nI've just copyed (time cp) the \"aggregate\" table files (4 Gb) from one\ndisk to an another one: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that\nPostgres is not the cause of this issue. \nI've just untrusted to my system\nengineer the analysis of my disks...\n \nIn case I would have to change my\ndisks, do you have any performance figures related to the types you\nmentionned (reiserfs vs ext3) ?\nI don't use RAID since the security\nis not a concern.\n \nThank a lot for your help !\n \nPatrick\n\n\n \nHi Patrick,\n\n    How is configured your disk array? 
Do you have a Perc 4?\n\nTip: Use reiserfs instead ext3, raid 0+1 and deadline I/O scheduler in\nkernel linux 2.6", "msg_date": "Thu, 24 Mar 2005 08:52:15 -0300", "msg_from": "=?ISO-8859-1?Q?Gustavo_Franklin_N=F3brega_-_Planae?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Great !\n\nI'm not an expert but as far as I know, my 15 databases are spread over 4 SCSI RAID disks 73 GB 10K RPM mounted under ext3 mode. \nI remember that they where provided by DELL under RAID5 and I asked my system engineer for switching them to standard SCSI because I don't care about security but only about speed and capacity ( maybe this switch was not set properly at this time...).\n\nThank you for these interesting links: I 've sent them to my system engineer with my two hands !\n\nAmicalement\n\nPatrick\n ----- Original Message ----- \n From: Gustavo Franklin Nóbrega - Planae \n To: Patrick Vedrines \n Cc: performance pgsql \n Sent: Thursday, March 24, 2005 12:52 PM\n Subject: Re: [PERFORM] CPU 0.1% IOWAIT 99% for decisonnal queries\n\n\n Good day Patrick!\n \n I can help you to design you disk layout for better perform and security. Please, tell me how many disks (and some specs, like capacity and RPM).\n\n If you want to know more, there is a very interesting article abou benckmark filesystem ( http://linuxgazette.net/102/piszcz.html ). In this article, ReiserFS 3.6, JFS and XFS are in the same level at top depending your application, and ext3 is more slow than others. I believe that version 4 of the ReiserFS is better that version 3.6, but I could not still test it.\n\n Raid0 (striping, more at http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel0-c.html) or Raid 0+1 (stripping + mirror, more at http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html) is very insteresting to postgresql. Raid0 provides only performance, and Raid 0+1 provides performance and security. Take a look at this articles and think about to use Raid (http://www.pcguide.com/ref/hdd/perf/raid/).\n\n I'm glad to help. Best regards!\n\nAtenciosamente,\n\nGustavo Franklin Nóbrega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3106-3514\nhttp://www.planae.com.br\n\n\n\n Patrick Vedrines wrote: \n Hello Gustavo,\n\n Your question seems to say that you suspect a disk issue, and a few hours later, Simon told me \"Sounds like your disks/layout/something is pretty sick\".\n To be clear in my own mind about it, I've just copyed (time cp) the \"aggregate\" table files (4 Gb) from one disk to an another one: it takes 22 mn !(3 Mb/s).\n That seems to demonstrate that Postgres is not the cause of this issue. \n I've just untrusted to my system engineer the analysis of my disks...\n\n In case I would have to change my disks, do you have any performance figures related to the types you mentionned (reiserfs vs ext3) ?\n I don't use RAID since the security is not a concern.\n\n Thank a lot for your help !\n\n Patrick\n\n Hi Patrick,\n\n How is configured your disk array? Do you have a Perc 4?\n\n Tip: Use reiserfs instead ext3, raid 0+1 and deadline I/O scheduler in kernel linux 2.6\n\n \n\n\n\n\nGreat !\n \nI'm not an expert but as far as I know, my 15 \ndatabases are spread over 4 SCSI RAID disks 73 GB 10K RPM mounted under ext3 \nmode. 
\nI remember that they where provided by DELL under \nRAID5 and I asked my system engineer for switching them to standard SCSI \nbecause I don't care about security but only about speed and capacity ( maybe \nthis switch was not set properly at this time...).\n \nThank you for these interesting links: I 've sent \nthem to my system engineer with my two hands !\n \nAmicalement\n \nPatrick\n\n----- Original Message ----- \nFrom:\nGustavo \n Franklin Nóbrega - Planae \nTo: Patrick Vedrines \nCc: performance pgsql \nSent: Thursday, March 24, 2005 12:52 \n PM\nSubject: Re: [PERFORM] CPU 0.1% IOWAIT \n 99% for decisonnal queries\nGood day Patrick!        \n I can help you to design you disk layout for better perform and security. \n Please, tell me how many disks (and some specs, like capacity and \n RPM).    If you want to know more, there is a very \n interesting article abou benckmark filesystem ( http://linuxgazette.net/102/piszcz.html \n ). In this article, ReiserFS 3.6, JFS and XFS are in the same level at top \n depending your application, and ext3 is more slow than others. I believe that \n version 4 of the ReiserFS is better that version 3.6, but I could not still \n test it.    Raid0 (striping, more at http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel0-c.html) \n or Raid 0+1 (stripping + mirror, more at http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html) \n is very insteresting to postgresql. Raid0 provides only performance, and Raid \n 0+1 provides performance and security. Take a look at this articles and think \n about to use Raid (http://www.pcguide.com/ref/hdd/perf/raid/).    \n I'm glad to help. Best regards!Atenciosamente,\n\nGustavo Franklin Nóbrega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3106-3514\nhttp://www.planae.com.br\n\nPatrick Vedrines wrote: \n \n\n\n\nHello Gustavo,\n \nYour question seems to say that you suspect a \n disk issue, and a few hours later, Simon told me \"Sounds like your \n disks/layout/something is pretty sick\".\nTo be clear in my own mind about it, I've just \n copyed (time cp) the \"aggregate\" table files (4 Gb) from one disk to an \n another one: it takes 22 mn !(3 Mb/s).\nThat seems to demonstrate that Postgres is not \n the cause of this issue. \nI've just untrusted to my system \n engineer the analysis of my disks...\n \nIn case I would have to change my disks, do you \n have any performance figures related to the types you mentionned \n (reiserfs vs ext3) ?\nI don't use RAID since the security is not a \n concern.\n \nThank a lot for your help !\n \nPatrick\n\n Hi \n Patrick,    How is configured your disk array? Do \n you have a Perc 4?Tip: Use reiserfs instead ext3, raid 0+1 and \n deadline I/O scheduler in kernel linux 2.6", "msg_date": "Thu, 24 Mar 2005 15:04:08 +0100", "msg_from": "\"Patrick Vedrines\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "Hi!\n\n I have a Dell PowerEdge 2600 with a Perc 4/DI and 4 scsi disks 35GB. \nI have made a array raid 0+1 with 4 disks, because is mission critical \napplication. 
But for you, you can configure a raid0, which is faster \nthan raid5 for 4 disks.\n\n    Ask your system engineer which distribution of Linux is used, which \nkernel version, and whether he has done any kernel tuning for your hardware and \napplication.\n\nBest regards.\n\nAtenciosamente,\n\nGustavo Franklin Nóbrega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informação\n(+55) 14 3106-3514\nhttp://www.planae.com.br\n\n\n\n\nPatrick Vedrines wrote:\n\n> Great !\n> \n> I'm not an expert but as far as I know, my 15 databases are spread \n> over 4 SCSI RAID disks 73 GB 10K RPM mounted under ext3 mode. 
Do you have a Perc 4?\n>>\n>> Tip: Use reiserfs instead ext3, raid 0+1 and deadline I/O\n>> scheduler in kernel linux 2.6\n>>\n>> \n>>\n\n", "msg_date": "Thu, 24 Mar 2005 11:28:30 -0300", "msg_from": "=?ISO-8859-1?Q?Gustavo_Franklin_N=F3brega_-_Planae?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" }, { "msg_contents": "On Thu, 2005-03-24 at 10:48 +0100, Patrick Vedrines wrote:\n> > You've got 1.5Gb of shared_buffers and > 2Gb data. In 8.0, the scan\n> will\n> > hardly use the cache at all, nor will it ever, since the data is\n> bigger\n> > than the cache. Notably, the scan of B should NOT spoil the cache\n> for A\n> Are you sure of that ? Is Postgres able to say to OS: \"don't use the\n> cache for this query\"?\n\nPostgreSQL 8.0 has the ARC algorithm which prevents cache spoiling of\nthe shared_buffers, but has no direct influence over the OS cache.\n\n> > Priming the cache is quite hard...but not impossible.\n> > What will kill you on a shared_buffers that big is the bgwriter,\n> which\n> > you should turn off by setting bgwriter_maxpages = 0\n> Is bgwriter concerned as my application applyes only SELECT ?\n\nWith very large shared_buffers the bgwriter's default settings are a\nproblem. You don't need it, so I suggest turning it off.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 24 Mar 2005 21:55:56 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 0.1% IOWAIT 99% for decisonnal queries" } ]
[ { "msg_contents": "We've recently moved our pgsql installation and DBs to a Solaris 8\nmachine with striped and mirrored ufs filesystem that houses the DB\ndata. We are now seeing terrible performance and the bottleneck is no\ndoubt disk I/O.\n\nWe've tried modifying a tunables related to ufs, but it doesn't seem\nto be helping.\n\nIs there anything we should be looking at that is specifically related\nto ufs filesystems on Solaris 8 or possibly something in general that\nwould improve performance?\n\nThanks.\n\n-- \nBrandon\n", "msg_date": "Tue, 22 Mar 2005 14:44:10 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "Brandon Metcalf wrote:\n\n>We've recently moved our pgsql installation and DBs to a Solaris 8\n>machine with striped and mirrored ufs filesystem that houses the DB\n>data. We are now seeing terrible performance and the bottleneck is no\n>doubt disk I/O.\n>\n>We've tried modifying a tunables related to ufs, but it doesn't seem\n>to be helping.\n>\n>Is there anything we should be looking at that is specifically related\n>to ufs filesystems on Solaris 8 or possibly something in general that\n>would improve performance?\n> \n>\nWell, Solaris 8 is a bit old now, so I don't remember all the details. \nBut, if memory servers, Solaris 8 still has some \"high water\" and \"lo \nwater\" tunables related to the amount of IO can be outstanding to a \nsingle file.\n\nTry setting\nset ufs:ufs_WRITES=0\nin /etc/system and rebooting, which basically says \"any amount of disk \nIO can be outstanding\". There's a tunables doc on docs.sun.com that \nexplains this option.\n\nAlso, logging UFS might help with some of the metadata requirements of \nUFS as well. So, use \"mount -o logging\" or add the relevant entry in \n/etc/vfstab.\n\nOf course, the best thing is Solaris 9 or 10, which would be much better \nfor this sort of thing.\n\nHope this helps.\n\n-- Alan\n", "msg_date": "Tue, 22 Mar 2005 16:02:39 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "On Tue, 2005-03-22 at 14:44 -0600, Brandon Metcalf wrote:\n> We've recently moved our pgsql installation and DBs to a Solaris 8\n> machine with striped and mirrored ufs filesystem that houses the DB\n> data. We are now seeing terrible performance and the bottleneck is no\n> doubt disk I/O.\n> \n> We've tried modifying a tunables related to ufs, but it doesn't seem\n> to be helping.\n> \n> Is there anything we should be looking at that is specifically related\n> to ufs filesystems on Solaris 8 or possibly something in general that\n> would improve performance?\n> \n> Thanks.\n> \n\nWhat are you using to create your raid? You say it is \"no doubt disk\nI/O\" - does iostat confirm this? A lot of performance issues are related\nto the size of the stripe you chose for the striped portion of the\narray, the actual array configuration, etc. I am assuming you have\nlooked at system variables such as autoup and the likes? What tweaks\nhave you done?\n\nAlso, are your pg_xlog and data directories separated onto separate\nvolumes? Doing so will help immensely. 
What are you using to measure\nperformance?\n\nSven\n\n", "msg_date": "Tue, 22 Mar 2005 16:03:43 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "s == [email protected] writes:\n\n s> Try setting\n s> set ufs:ufs_WRITES=0\n s> in /etc/system and rebooting, which basically says \"any amount of disk\n s> IO can be outstanding\". There's a tunables doc on docs.sun.com that\n s> explains this option.\n\n s> Also, logging UFS might help with some of the metadata requirements of\n s> UFS as well. So, use \"mount -o logging\" or add the relevant entry in\n s> /etc/vfstab.\n\nOK. I'll try these out. We do have ufs_WRITES enabled with the\nfollowing parameters:\n\n set ncsize = 257024\n set autoup = 90\n set bufhwm = 15000\n set tune_t_fsflushr = 15\n set ufs:ufs_HW = 16777216\n set ufs:ufs_LW = 8388608\n\n\n-- \nBrandon\n", "msg_date": "Tue, 22 Mar 2005 15:06:40 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "s == [email protected] writes:\n\n s> What are you using to create your raid?\n\nHm. I didn't set this up. I'll have to check.\n\n s> You say it is \"no doubt disk\n s> I/O\" - does iostat confirm this? A lot of performance issues are related\n s> to the size of the stripe you chose for the striped portion of the\n s> array, the actual array configuration, etc. I am assuming you have\n s> looked at system variables such as autoup and the likes? What tweaks\n s> have you done?\n\nI've mainly been using Glance which shows a lot of queued requests for\nthe disks in question.\n\nHere's currently what we have in /etc/system related to ufs:\n\n set ncsize = 257024\n set autoup = 90\n set bufhwm = 15000\n set tune_t_fsflushr = 15\n set ufs:ufs_HW = 16777216\n set ufs:ufs_LW = 8388608\n\n s> Also, are your pg_xlog and data directories separated onto separate\n s> volumes? Doing so will help immensely.\n\nNo, they are on the same volume.\n\n s> What are you using to measure\n s> performance?\n\nNothing too scientific other than the fact that since we have moved\nthe DB, we consistenly see a large number of postmater processes\n(close to 100) where before we did not.\n\n\n-- \nBrandon\n", "msg_date": "Tue, 22 Mar 2005 15:23:18 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "Brandon Metcalf wrote:\n> We've recently moved our pgsql installation and DBs to a Solaris 8\n> machine with striped and mirrored ufs filesystem that houses the DB\n> data. We are now seeing terrible performance and the bottleneck is no\n> doubt disk I/O.\n> \n> We've tried modifying a tunables related to ufs, but it doesn't seem\n> to be helping.\n> \n> Is there anything we should be looking at that is specifically related\n> to ufs filesystems on Solaris 8 or possibly something in general that\n> would improve performance?\n> \n> Thanks.\n> \n\nI found that mounting the filesystem that contains the PGDATA directory \n(probably only the pg_xlog directory in fact) without logging improved \nthings a great deal (assuming you are using logging that is...).\n\nIn addition changing the postgresql.conf parameter wal_sync_method from \nthe default of open_datasync to fdatasync improved things a bit more. \nHowever I seem to recall a posting suggesting the opposite! 
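\n\nA rough sketch of that experiment, in case it helps (the value is only an example; check which sync methods your platform actually offers before trying it):\n\n  # postgresql.conf\n  wal_sync_method = fdatasync    # instead of the open_datasync default mentioned above\n\nAfter a restart you can confirm the active value with SHOW wal_sync_method; from psql.\n\n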
...so feel \nfree to experiment and let us know!\n\nMark\n\nP.s : original tests on Solaris 8, \nhttp://archives.postgresql.org/pgsql-performance/2003-12/msg00165.php\n", "msg_date": "Wed, 23 Mar 2005 09:26:16 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "On Tue, Mar 22, 2005 at 03:23:18PM -0600, Brandon Metcalf wrote:\n> s> What are you using to measure\n> s> performance?\n> \n> Nothing too scientific other than the fact that since we have moved\n> the DB, we consistenly see a large number of postmater processes\n> (close to 100) where before we did not.\n\nWhat did you move from? The Solaris ps (not in ucb, which is the\nBSD-style ps) shows the parent process name, so everything shows up\nas \"postmaster\" rather than \"postgres\". There's always one back end\nper connection.\n\nIf you are in fact using more connections, by the way, I can tell you\nthat Solaris 8, in my experience, is _very bad_ at managing context\nswitches. So you may not be merely I/O bound (although your other\nreports seem to indicate that you are).\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n", "msg_date": "Wed, 23 Mar 2005 11:58:07 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "a == [email protected] writes:\n\n a> On Tue, Mar 22, 2005 at 03:23:18PM -0600, Brandon Metcalf wrote:\n a> > s> What are you using to measure\n a> > s> performance?\n a> >\n a> > Nothing too scientific other than the fact that since we have moved\n a> > the DB, we consistenly see a large number of postmater processes\n a> > (close to 100) where before we did not.\n\n a> What did you move from? The Solaris ps (not in ucb, which is the\n a> BSD-style ps) shows the parent process name, so everything shows up\n a> as \"postmaster\" rather than \"postgres\". There's always one back end\n a> per connection.\n\n a> If you are in fact using more connections, by the way, I can tell you\n a> that Solaris 8, in my experience, is _very bad_ at managing context\n a> switches. So you may not be merely I/O bound (although your other\n a> reports seem to indicate that you are).\n\n\nWe moved from an HP-UX 10.20 box where the pgsql installation and data\nwere on a vxfs fileystem.\n\nAnd we're definitely seeing more connections at a time which indicates\nthat each process is taking longer to complete.\n\n\n-- \nBrandon\n", "msg_date": "Wed, 23 Mar 2005 11:16:29 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "On the context switching issue, we've found that this setting in /etc/system helps:\n\nset rechoose_interval=30\n\nthis sets the minimum time that a process is eligible to be switched to another cpu. (the default is 3).\n\nYou can monitor context switching with the cs column in vmstat. We've found that high context switching seems to be more a symptom,\nrather than a cause of problems -- for example we had an issue with column statistics and some really bad queries, and the cpu's start\ncontext switching like crazy. 
(20,000 - 50,000 or more in a 5 second period, normally < 5000 per 5 second period under heavy load.)\n\nBrandon Metcalf wrote:\n\n> a == [email protected] writes:\n> \n> a> On Tue, Mar 22, 2005 at 03:23:18PM -0600, Brandon Metcalf wrote:\n> a> > s> What are you using to measure\n> a> > s> performance?\n> a> >\n> a> > Nothing too scientific other than the fact that since we have moved\n> a> > the DB, we consistenly see a large number of postmater processes\n> a> > (close to 100) where before we did not.\n> \n> a> What did you move from? The Solaris ps (not in ucb, which is the\n> a> BSD-style ps) shows the parent process name, so everything shows up\n> a> as \"postmaster\" rather than \"postgres\". There's always one back end\n> a> per connection.\n> \n> a> If you are in fact using more connections, by the way, I can tell you\n> a> that Solaris 8, in my experience, is _very bad_ at managing context\n> a> switches. So you may not be merely I/O bound (although your other\n> a> reports seem to indicate that you are).\n> \n> \n> We moved from an HP-UX 10.20 box where the pgsql installation and data\n> were on a vxfs fileystem.\n> \n> And we're definitely seeing more connections at a time which indicates\n> that each process is taking longer to complete.\n> \n> \n", "msg_date": "Wed, 23 Mar 2005 09:32:07 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "On Wed, Mar 23, 2005 at 11:16:29AM -0600, Brandon Metcalf wrote:\n> \n> We moved from an HP-UX 10.20 box where the pgsql installation and data\n> were on a vxfs fileystem.\n\nMy best guess, then, is that ufs tuning really is your issue. We\nalways used vxfs for our Sun database servers (which was a nightmare\nall on its own, BTW, so I don't actually recommend this), so I don't\nhave any real ufs tuning advice. \n\nThe Packer Solaris database book (Packer, Allan N., _Configuring &\nTuning Databases on the Solaris Platform_. Palo Alto: Sun\nMicrosystems P, 2002. ISBN 0-13-083417-3) does suggest mounting the\nfilesystems with forcedirectio; I dimly recall using this for the wal\npartition on one test box, and STR that it helped. Also, you want to\nmake sure you use the right fsync method; if it's still set to\n\"fsync\" in the config file, you'll want to change that. I remember\nfinding that fsync was something like 3 times slower than everything\nelse. I don't have any more Solaris boxes to check, but I believe we\nwere using open_datasync as our method. You'll want to run some\ntests.\n\nYou also should enable priority paging, but expect that this will\ngive you really strange po numbers from vmstat and friends. Priority\npaging, I found, makes things look like you're swapping when you\naren't. Procmem is useful, but if you really want the goods on\nwhat's going on, you need the SE toolkit. Just be careful using it\nas root -- in some cases it'll modify kernel parameters behind the\nscenes. In my case, I didn't have superuser access, so there wasn't\na danger; but I've heard sysadmins complain about this. \n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. 
\n\t\t--Dennis Ritchie\n", "msg_date": "Wed, 23 Mar 2005 12:55:07 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "On Wed, Mar 23, 2005 at 09:32:07AM -0800, Tom Arthurs wrote:\n> found that high context switching seems to be more a symptom,\n\nYes, that's a good point. It usually _is_ a symptom; but it might be\na symptom that you've got an expensive query, and Solaris's foot-gun\napproach to handling such cases is a little painful. (We didn't give\nup on Solaris because of cs problems, BTW; but I have to say that AIX\nseems to be a little less prone to self-DOS on this front than\nSolaris was. If you really want to hear me rant, ask me some time\nabout ecache and Sun support.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Wed, 23 Mar 2005 12:58:59 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "Andrew,\n\n> The Packer Solaris database book (Packer, Allan N., _Configuring &\n> Tuning Databases on the Solaris Platform_.  Palo Alto: Sun\n> Microsystems P, 2002.  ISBN 0-13-083417-3) does suggest mounting the\n> filesystems with forcedirectio; I dimly recall using this for the wal\n> partition on one test box, and STR that it helped.\n\nThis is a good idea for a WAL partition, but is NOT a good idea for the \ndatabase.\n\nYou pay want to look into setting segmap_percent to 70% or something. On \nSolaris 10 at least, the OS by default only uses 10% of memory for disk \nbuffers. \n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 23 Mar 2005 15:39:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" }, { "msg_contents": "Hi,\n\nI am thinking about how to continuously monitor the performance of a\nPostgreSQL 8 database. I am interested in two things: (1) the growth of\ntables with TOAST and indexes; and (2) the respond time breakdown for a\nquery.\n\nIn Chapters 23 and 24 of the big manual, I found enough materials to\nteach me how to do the 1st job. But I have difficulty with the 2nd one.\nI found some script for Oracle\n(http://www.ixora.com.au/scripts/waits.htm).\n\nDo we have something that can do the same job for PostgreSQL 8? \n\nThanks.\n\n-Jack\n\n", "msg_date": "25 Mar 2005 10:12:05 -0500", "msg_from": "Jack Xue <[email protected]>", "msg_from_op": false, "msg_subject": "Script for getting a table of reponse-time breakdown" }, { "msg_contents": "Jack,\n\n> I am thinking about how to continuously monitor the performance of a\n> PostgreSQL 8 database. I am interested in two things: (1) the growth of\n> tables with TOAST and indexes;\n\nThis is queryable from the system tables, if you don't mind an approximate. \n\n> and (2) the respond time breakdown for a \n> query.\n\nThe what? You mean EXPLAIN ANALYZE?\n\n> In Chapters 23 and 24 of the big manual, I found enough materials to\n> teach me how to do the 1st job. But I have difficulty with the 2nd one.\n> I found some script for Oracle\n> (http://www.ixora.com.au/scripts/waits.htm).\n\nLife's too short for reading Oracle docs. 
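\n\nFor the growth side of it, the sort of thing I mean is a periodic query against pg_class -- only approximate, since relpages and reltuples are refreshed by VACUUM/ANALYZE, and this is just a sketch:\n\n  SELECT relname, relkind, relpages, reltuples\n  FROM pg_class\n  WHERE relkind IN ('r', 'i', 't')   -- tables, indexes, TOAST\n  ORDER BY relpages DESC\n  LIMIT 20;\n\nLog the output to a table on a schedule and you have a growth history.\n\n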
Can you just explain, in \nstep-by-step detail, what you want?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 25 Mar 2005 09:40:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Script for getting a table of reponse-time breakdown" }, { "msg_contents": "Josh,\n\nThe description of the Oracle script is:\n\nThis script can be used to focus tuning attention on the most important\nissues. It reports a breakdown of total foreground response time into\nfour major categories: CPU usage, disk I/O, resource waits, and routine\nlatencies. These categories are broken down further into sub-categories,\nand the component wait events are shown. \n\nThe 8.1.6 version of the script uses the ratio_to_report analytic\nfunction to calculate percentages. The 8.1.5 version can be used if\npercentages are not required. The 8.1.5 version of the script should\nwork on Oracle8 also, but has not yet been tested.\n\nThe print out of this script is:\nSQL> @response_time_breakdown\n\nMAJOR MINOR WAIT_EVENT SECONDS PCT \n-------- ------------- ---------------------------------------- -------- ------ \nCPU time parsing n/a 497 .57% \n reloads n/a 13 .01% \n execution n/a 52618 59.99% \n \ndisk I/O normal I/O db file sequential read 21980 25.06% \n full scans db file scattered read 9192 10.48% \n direct I/O direct path read 1484 1.69% \n direct path write 354 .40% \n other I/O log file sequential read 9 .01% \n db file parallel read 0 .00% \n control file sequential read 0 .00% \n control file parallel write 0 .00% \n \nwaits DBWn writes rdbms ipc reply 143 .16% \n free buffer waits 36 .04% \n checkpoint completed 31 .04% \n LGWR writes log file switch completion 698 .80% \n other locks latch free 496 .57% \n sort segment request 108 .12% \n \nlatency commits log file sync 6 .01% \n network SQL*Net break/reset to client 18 .02% \n SQL*Net more data to client 8 .01% \n SQL*Net message to client 7 .01% \n SQL*Net more data from client 3 .00% \n file ops file open 4 .01% \n file identify 1 .00% \n misc instance state change 0 .00% \n\n\nThe script is pretty long:\n\n-------------------------------------------------------------------------------\n--\n-- Script:\tresponse_time_breakdown.sql\n-- Purpose:\tto report the components of foreground response time in % terms\n-- For:\t\t8.0 to 8.1.5\n--\n-- Copyright:\t(c) Ixora Pty Ltd\n-- Author:\tSteve Adams\n--\n-------------------------------------------------------------------------------\n@save_sqlplus_settings\n\ncolumn major format a8\ncolumn wait_event format a40 trunc\ncolumn seconds format 9999999\ncolumn pct justify right\nbreak on major skip 1 on minor\n\nselect\n substr(n_major, 3) major,\n substr(n_minor, 3) minor,\n wait_event,\n round(time/100) seconds\nfrom\n (\n select /*+ ordered use_hash(b) */\n '1 CPU time' n_major,\n decode(t.ksusdnam,\n\t'redo size', '2 reloads',\n\t'parse time cpu', '1 parsing',\n\t'3 execution'\n ) n_minor,\n 'n/a' wait_event,\n decode(t.ksusdnam,\n\t'redo size', nvl(r.time, 0),\n\t'parse time cpu', t.ksusgstv - nvl(b.time, 0),\n\tt.ksusgstv - nvl(b.time, 0) - nvl(r.time, 0)\n ) time\n from\n sys.x_$ksusgsta t,\n (\n\tselect /*+ ordered use_nl(s) */\t\t-- star query: few rows from d and b\n\t s.ksusestn,\t\t\t\t-- statistic#\n\t sum(s.ksusestv) time\t\t\t-- time used by backgrounds\n\tfrom\n\t sys.x_$ksusd d,\t\t\t-- statname\n\t sys.x_$ksuse b,\t\t\t-- session\n\t sys.x_$ksbdp p,\t\t\t-- background process\n\t sys.x_$ksusesta s\t\t\t-- 
sesstat\n\twhere\n\t d.ksusdnam in (\n\t 'parse time cpu',\n\t 'CPU used when call started') and\n\t b.ksspaown = p.ksbdppro and\n\t s.ksusestn = d.indx and\n\t s.indx = b.indx\n\tgroup by\n\t s.ksusestn\n ) b,\n (\n\tselect /*+ no_merge */\n\t ksusgstv *\t\t\t\t-- parse cpu time *\n\t kglstrld /\t\t\t\t-- SQL AREA reloads /\n\t (1 + kglstget - kglstght)\t\t-- SQL AREA misses\n\t time\n\tfrom\n\t sys.x_$kglst k,\n\t sys.x_$ksusgsta g\n\twhere\n\t k.indx = 0 and\n\t g.ksusdnam = 'parse time cpu'\n ) r\n where\n t.ksusdnam in (\n\t'redo size',\t\t\t\t-- arbitrary: to get a row to replace\n\t'parse time cpu',\t\t\t-- with the 'reload cpu time'\n\t'CPU used when call started') and\n b.ksusestn (+) = t.indx\n union all\n select\n decode(n_minor,\n\t'1 normal I/O',\t\t'2 disk I/O',\n\t'2 full scans',\t\t'2 disk I/O',\n\t'3 direct I/O',\t\t'2 disk I/O',\n\t'4 BFILE reads',\t'2 disk I/O',\n\t'5 other I/O',\t\t'2 disk I/O',\n\t'1 DBWn writes',\t'3 waits',\n\t'2 LGWR writes',\t'3 waits',\n\t'3 ARCn writes',\t'3 waits',\n\t'4 enqueue locks',\t'3 waits',\n\t'5 PCM locks',\t\t'3 waits',\n\t'6 other locks',\t'3 waits',\n\t'1 commits',\t\t'4 latency',\n\t'2 network',\t\t'4 latency',\n\t'3 file ops',\t\t'4 latency',\n\t'4 process ctl',\t'4 latency',\n\t'5 global locks',\t'4 latency',\n\t'6 misc',\t\t'4 latency'\n ) n_major,\n n_minor,\n wait_event,\n time\n from\n (\n\tselect /*+ ordered use_hash(b) use_nl(d) */\n\t decode(\n\t d.kslednam,\n\t \t\t\t\t\t-- disk I/O\n\t 'db file sequential read',\t\t\t'1 normal I/O',\n\t 'db file scattered read',\t\t\t'2 full scans',\n\t 'BFILE read',\t\t\t\t'4 BFILE reads',\n\t 'KOLF: Register LFI read',\t\t\t'4 BFILE reads',\n\t 'log file sequential read',\t\t\t'5 other I/O',\n\t 'log file single write',\t\t\t'5 other I/O',\n\t\t\t\t\t\t-- resource waits\n\t 'checkpoint completed',\t\t\t'1 DBWn writes',\n\t 'free buffer waits',\t\t\t'1 DBWn writes',\n\t 'write complete waits',\t\t\t'1 DBWn writes',\n\t 'local write wait',\t\t\t\t'1 DBWn writes',\n\t 'log file switch (checkpoint incomplete)',\t'1 DBWn writes',\n\t 'rdbms ipc reply',\t\t\t\t'1 DBWn writes',\n\t 'log file switch (archiving needed)',\t'3 ARCn writes',\n\t 'enqueue',\t\t\t\t\t'4 enqueue locks',\n\t 'buffer busy due to global cache',\t\t'5 PCM locks',\n\t 'global cache cr request',\t\t\t'5 PCM locks',\n\t 'global cache lock cleanup',\t\t'5 PCM locks',\n\t 'global cache lock null to s',\t\t'5 PCM locks',\n\t 'global cache lock null to x',\t\t'5 PCM locks',\n\t 'global cache lock s to x',\t\t\t'5 PCM locks',\n\t 'lock element cleanup',\t\t\t'5 PCM locks',\n\t 'checkpoint range buffer not saved',\t'6 other locks',\n\t 'dupl. 
cluster key',\t\t\t'6 other locks',\n\t 'PX Deq Credit: free buffer',\t\t'6 other locks',\n\t 'PX Deq Credit: need buffer',\t\t'6 other locks',\n\t 'PX Deq Credit: send blkd',\t\t\t'6 other locks',\n\t 'PX qref latch',\t\t\t\t'6 other locks',\n\t 'Wait for credit - free buffer',\t\t'6 other locks',\n\t 'Wait for credit - need buffer to send',\t'6 other locks',\n\t 'Wait for credit - send blocked',\t\t'6 other locks',\n\t 'global cache freelist wait',\t\t'6 other locks',\n\t 'global cache lock busy',\t\t\t'6 other locks',\n\t 'index block split',\t\t\t'6 other locks',\n\t 'lock element waits',\t\t\t'6 other locks',\n\t 'parallel query qref latch',\t\t'6 other locks',\n\t 'pipe put',\t\t\t\t\t'6 other locks',\n\t 'rdbms ipc message block',\t\t\t'6 other locks',\n\t 'row cache lock',\t\t\t\t'6 other locks',\n\t 'sort segment request',\t\t\t'6 other locks',\n\t 'transaction',\t\t\t\t'6 other locks',\n\t 'unbound tx',\t\t\t\t'6 other locks',\n\t\t\t\t\t\t-- routine waits\n\t 'log file sync',\t\t\t\t'1 commits',\n\t 'name-service call wait',\t\t\t'2 network',\n\t 'Test if message present',\t\t\t'4 process ctl',\n\t 'process startup',\t\t\t\t'4 process ctl',\n\t 'read SCN lock',\t\t\t\t'5 global locks',\n\t decode(substr(d.kslednam, 1, instr(d.kslednam, ' ')),\n\t\t\t\t\t\t-- disk I/O\n\t 'direct ',\t\t\t\t'3 direct I/O',\n\t 'control ',\t\t\t\t'5 other I/O',\n\t 'db ',\t\t\t\t\t'5 other I/O',\n\t\t\t\t\t\t-- resource waits\n\t 'log ',\t\t\t\t\t'2 LGWR writes',\n\t 'buffer ',\t\t\t\t'6 other locks',\n\t 'free ',\t\t\t\t\t'6 other locks',\n\t 'latch ',\t\t\t\t\t'6 other locks',\n\t 'library ',\t\t\t\t'6 other locks',\n\t 'undo ',\t\t\t\t\t'6 other locks',\n\t\t\t\t\t\t-- routine waits\n\t 'SQL*Net ',\t\t\t\t'2 network',\n\t 'BFILE ',\t\t\t\t\t'3 file ops',\n\t 'KOLF: ',\t\t\t\t\t'3 file ops',\n\t 'file ',\t\t\t\t\t'3 file ops',\n\t 'KXFQ: ',\t\t\t\t\t'4 process ctl',\n\t 'KXFX: ',\t\t\t\t\t'4 process ctl',\n\t 'PX ',\t\t\t\t\t'4 process ctl',\n\t 'Wait ',\t\t\t\t\t'4 process ctl',\n\t 'inactive ',\t\t\t\t'4 process ctl',\n\t 'multiple ',\t\t\t\t'4 process ctl',\n\t 'parallel ',\t\t\t\t'4 process ctl',\n\t 'DFS ',\t\t\t\t\t'5 global locks',\n\t 'batched ',\t\t\t\t'5 global locks',\n\t 'on-going ',\t\t\t\t'5 global locks',\n\t 'global ',\t\t\t\t'5 global locks',\n\t 'wait ',\t\t\t\t\t'5 global locks',\n\t 'writes ',\t\t\t\t'5 global locks',\n\t \t\t\t\t\t\t'6 misc'\n\t )\n\t ) n_minor,\n\t d.kslednam wait_event,\t\t-- event name\n\t i.kslestim - nvl(b.time, 0) time\t-- non-background time\n\tfrom\n\t sys.x_$kslei i,\t\t\t-- system events\n\t (\n\t select /*+ ordered use_hash(e) */\t-- no fixed index on e\n\t e.kslesenm,\t\t\t-- event number\n\t sum(e.kslestim) time\t\t-- time waited by backgrounds\n\t from\n\t sys.x_$ksuse s,\t\t\t-- sessions\n\t sys.x_$ksbdp b,\t\t\t-- backgrounds\n\t sys.x_$ksles e\t\t\t-- session events\n\t where\n\t s.ksspaown = b.ksbdppro and\t-- background session\n\t e.kslessid = s.indx\n\t group by\n\t e.kslesenm\n\t having\n\t sum(e.kslestim) > 0\n\t ) b,\n\t sys.x_$ksled d\n\twhere\n\t i.kslestim > 0 and\n\t b.kslesenm (+) = i.indx and\n\t nvl(b.time, 0) < i.kslestim and\n\t d.indx = i.indx and\n\t d.kslednam not in (\n\t 'Null event',\n\t 'KXFQ: Dequeue Range Keys - Slave',\n\t 'KXFQ: Dequeuing samples',\n\t 'KXFQ: kxfqdeq - dequeue from specific qref',\n\t 'KXFQ: kxfqdeq - normal deqeue',\n\t 'KXFX: Execution Message Dequeue - Slave',\n\t 'KXFX: Parse Reply Dequeue - Query Coord',\n\t 'KXFX: Reply Message Dequeue - Query Coord',\n\t 'PAR RECOV : Dequeue msg - Slave',\n\t 
'PAR RECOV : Wait for reply - Query Coord',\n\t 'Parallel Query Idle Wait - Slaves',\n\t 'PL/SQL lock timer',\n\t 'PX Deq: Execute Reply',\n\t 'PX Deq: Execution Msg',\n\t 'PX Deq: Index Merge Execute',\n\t 'PX Deq: Index Merge Reply',\n\t 'PX Deq: Par Recov Change Vector',\n\t 'PX Deq: Par Recov Execute',\n\t 'PX Deq: Par Recov Reply',\n\t 'PX Deq: Parse Reply',\n\t 'PX Deq: Table Q Get Keys',\n\t 'PX Deq: Table Q Normal',\n\t 'PX Deq: Table Q Sample',\n\t 'PX Deq: Table Q qref',\n\t 'PX Deq: Txn Recovery Reply',\n\t 'PX Deq: Txn Recovery Start',\n\t 'PX Deque wait',\n\t 'PX Idle Wait',\n\t 'Replication Dequeue',\n\t 'Replication Dequeue ',\n\t 'SQL*Net message from client',\n\t 'SQL*Net message from dblink',\n\t 'debugger command',\n\t 'dispatcher timer',\n\t 'parallel query dequeue wait',\n\t 'pipe get',\n\t 'queue messages',\n\t 'rdbms ipc message',\n\t 'secondary event',\n\t 'single-task message',\n\t 'slave wait',\n\t 'virtual circuit status'\n\t ) and\n\t d.kslednam not like 'resmgr:%'\n )\n )\norder by\n n_major,\n n_minor,\n time desc\n/\n\n@restore_sqlplus_settings\n\nDo we have some similar for Postgres?\n\nThanks.\n\n-Jack\n\n\nOn Fri, 2005-03-25 at 12:40, Josh Berkus wrote:\n> Jack,\n> \n> > I am thinking about how to continuously monitor the performance of a\n> > PostgreSQL 8 database. I am interested in two things: (1) the growth of\n> > tables with TOAST and indexes;\n> \n> This is queryable from the system tables, if you don't mind an approximate. \n> \n> > and (2) the respond time breakdown for a \n> > query.\n> \n> The what? You mean EXPLAIN ANALYZE?\n> \n> > In Chapters 23 and 24 of the big manual, I found enough materials to\n> > teach me how to do the 1st job. But I have difficulty with the 2nd one.\n> > I found some script for Oracle\n> > (http://www.ixora.com.au/scripts/waits.htm).\n> \n> Life's too short for reading Oracle docs. Can you just explain, in \n> step-by-step detail, what you want?\n\n", "msg_date": "25 Mar 2005 13:00:06 -0500", "msg_from": "Jack Xue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Script for getting a table of reponse-time breakdown" }, { "msg_contents": "Jack,\n\n> This script can be used to focus tuning attention on the most important\n> issues. It reports a breakdown of total foreground response time into\n> four major categories: CPU usage, disk I/O, resource waits, and routine\n> latencies. These categories are broken down further into sub-categories,\n> and the component wait events are shown.\n\nThis would be very nice. And very, very hard to build.\n\nNo, we don't have anything similar. You can, of course, use profiling tools.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 25 Mar 2005 10:07:13 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Script for getting a table of reponse-time breakdown" }, { "msg_contents": "I just wanted to follow up and let everyone know that the biggest\nimprovement in performance came from moving the pg_xlog directory to\nanother filesystem (different set of disks) separate from the data\ndirectory.\n\nThanks for the suggestions.\n\n-- \nBrandon\n", "msg_date": "Wed, 30 Mar 2005 15:00:57 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on Solaris 8 and ufs" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Tuesday, March 22, 2005 3:48 PM\n> To: Greg Stark\n> Cc: Christopher Browne; [email protected]\n> Subject: Re: [PERFORM] What about utility to calculate planner cost\n> constants?\n> [...]\n> The difficulty with the notion of doing that measurement by timing\n> Postgres operations is that it's a horribly bad experimental setup.\n> You have no way to isolate the effects of just one variable, or even\n> a small number of variables, which you really need to do if you want\n> to estimate with any degree of precision. What's more, there \n> are plenty of relevant factors that aren't in the model at all (such\n> as the extent of other load on the machine), and so the noise in the\n> measurements will be enormous.\n> \n> And you can't just dismiss the issue of wrong cost models and say we\n> can get numbers anyway. We see examples almost every day on this \n> list where the planner is so far off about indexscan vs seqscan costs\n> that you'd have to put random_page_cost far below 1 to make its numbers\n> line up with reality. That's not a matter of tuning the parameter,\n> it's evidence that the cost model is wrong. If you try to solve for\n> the \"right value\" of the parameter by comparing estimated and actual\n> costs, you'll get garbage, even without any issues about noisy\n> measurements or numerical instability of your system of equations.\n\nThen instead of building a fixed cost model, why not evolve an adaptive\nmodel using an ANN or GA? I can't say that I'm remotely familiar with\nhow the planner does its business, but perhaps we should throw away all\nthese tunable cost parameters and let a neural network create them\nimplicitly, if they really exist in any meaningful form. I suppose the\ninputs to the network would be the available scan types, the actual and\nestimated rows, correlations, etc. The outputs would be query plans, is\nthat right? So we pick several representative data points in the query\nspace and train the network on those, to \"bootstrap\" it. With any luck,\nthe network will generalize the training inputs and do a halfway decent\njob on real-world values. If a user is unhappy with the way the network\nis operating, they can switch on a training mode whereby the network \ntries some different plans for a given query and uses the execution time \nto judge which plans worked the best. The alternative plans can be\nsuggested by built-in heuristics or perhaps built randomly. Of course,\nsuch training would only be practical for smaller data sets, but perhaps\nthere would be a way to let the network perform a query on a subset of\nthe data and then extrapolate the behavior of a plan over the full data\nset.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Tue, 22 Mar 2005 16:16:17 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What about utility to calculate planner cost constants?" 
}, { "msg_contents": "[email protected] (\"Dave Held\") writes:\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Tuesday, March 22, 2005 3:48 PM\n>> To: Greg Stark\n>> Cc: Christopher Browne; [email protected]\n>> Subject: Re: [PERFORM] What about utility to calculate planner cost\n>> constants?\n>> [...]\n>> The difficulty with the notion of doing that measurement by timing\n>> Postgres operations is that it's a horribly bad experimental setup.\n>> You have no way to isolate the effects of just one variable, or even\n>> a small number of variables, which you really need to do if you want\n>> to estimate with any degree of precision. What's more, there \n>> are plenty of relevant factors that aren't in the model at all (such\n>> as the extent of other load on the machine), and so the noise in the\n>> measurements will be enormous.\n>> \n>> And you can't just dismiss the issue of wrong cost models and say we\n>> can get numbers anyway. We see examples almost every day on this \n>> list where the planner is so far off about indexscan vs seqscan costs\n>> that you'd have to put random_page_cost far below 1 to make its numbers\n>> line up with reality. That's not a matter of tuning the parameter,\n>> it's evidence that the cost model is wrong. If you try to solve for\n>> the \"right value\" of the parameter by comparing estimated and actual\n>> costs, you'll get garbage, even without any issues about noisy\n>> measurements or numerical instability of your system of equations.\n>\n> Then instead of building a fixed cost model, why not evolve an adaptive\n> model using an ANN or GA? I can't say that I'm remotely familiar with\n> how the planner does its business, but perhaps we should throw away all\n> these tunable cost parameters and let a neural network create them\n> implicitly, if they really exist in any meaningful form. I suppose the\n> inputs to the network would be the available scan types, the actual and\n> estimated rows, correlations, etc. The outputs would be query plans, is\n> that right? So we pick several representative data points in the query\n> space and train the network on those, to \"bootstrap\" it. With any luck,\n> the network will generalize the training inputs and do a halfway decent\n> job on real-world values. If a user is unhappy with the way the network\n> is operating, they can switch on a training mode whereby the network \n> tries some different plans for a given query and uses the execution time \n> to judge which plans worked the best. The alternative plans can be\n> suggested by built-in heuristics or perhaps built randomly. Of course,\n> such training would only be practical for smaller data sets, but perhaps\n> there would be a way to let the network perform a query on a subset of\n> the data and then extrapolate the behavior of a plan over the full data\n> set.\n\nThis strikes me as an interesting approach for trying to determine the\nproper shape of the cost model. I'd also want to consider simulated\nannealing (SA) (e.g. 
- perhaps Lester Ingber's ASA code...).\n\nWe take such a network, perhaps assuming some small degree polynomial\nset of parameters, train it based on some reasonably sizable set of\nqueries, and then see which of those parameters wind up being treated\nas strong/not.\n\nThat would provide results that would allow improving the model.\n\nI wouldn't assume that an untuned ANN/GA/SA would provide useful results\nin general.\n\nIt would certainly be pretty cool if we could get this approach into a\nproduction query optimizer; we would hope that this would, in effect,\ntune itself, over time.\n\nAnd I suppose that actually is somewhat plausible... Each query plan\nthat comes thru winds up having some \"expected cost\", and after\nexecuting that plan, we have an \"actual cost\" which could be used as\nfeedback to the effect that the estimate was either pretty right or\npretty wrong.\n\nWe'd presumably start by taking our \"traditional cost estimate\" as\nbeing the way to go; when it gets sufficiently clear that the ANN/GA\nnetwork is providing a more accurate cost, it would make sense to make\nthe jump...\n\nWhat would also be very interesting would be to see the degree to\nwhich analytical results could be taken out of this. For instance,\nsome cost factors might turn out to be pretty universally true, and we\nmight discover that most of the benefit can come from using a pretty\nstatic network of parameters.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/or.html\n\"The present need for security products far exceeds the number of\nindividuals capable of designing secure systems. Consequently,\nindustry has resorted to employing folks and purchasing \"solutions\"\nfrom vendors that shouldn't be let near a project involving securing a\nsystem.\" -- Lucky Green\n", "msg_date": "Tue, 22 Mar 2005 18:30:14 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What about utility to calculate planner cost constants?" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Dave Held \n> Sent: Tuesday, March 22, 2005 4:16 PM\n> To: Tom Lane\n> Cc: [email protected]\n> Subject: Re: [PERFORM] What about utility to calculate planner cost\n> constants?\n> [...]\n> Then instead of building a fixed cost model, why not evolve \n> an adaptive model using an ANN or GA?\n> [...]\n\nAnother advantage to an adaptive planner is that for those who\ncan afford to duplicate/replicate their hardware/db, they can\nperhaps dedicate a backup server to plan optimization where the \ndb just runs continuously in a learning mode trying out different\nplans for a core set of important queries. Then those network\nparameters can get replicated back to the primary server(s), \nhopefully improving query planning on the production dbs. And\nperhaps people could make those parameters public, with the hope\nthat someone with a similar setup could benefit from a pre-\nlearned network.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n", "msg_date": "Tue, 22 Mar 2005 16:33:41 -0600", "msg_from": "\"Dave Held\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What about utility to calculate planner cost constants?" } ]
[ { "msg_contents": "Hi guys,\nWe are in the process of buying a new dell server.\nHere is what we need to be able to do:\n- we need to be able to do queries on tables that has 10-20 millions\nof records (around 40-60 bytes each row) in less than 5-7 seconds.\nWe also need the hardware to be able to handle up to 50 millions\nrecords on a few tables (5 tables in the DB).\n\nHere is what we are thinking:\n- Dual Xeon 2.8 Ghz\n- 4GB DDR2 400 Mhz Dual Ranked DIMMS (is dual ranked or single ranked\nmakes any differences in terms of performance?). Do you guys think 4GB\nis reasonably enough?\n- 73 GB 15k RPM Ultra 320 SCSI Hard Drive\n- Dual on-board NICS (is this enough, or Gigabit network adapter will help?)\n\nAny input or suggestions is greatly appreciated.\nThank you,\n\n\nJun\n", "msg_date": "Tue, 22 Mar 2005 17:32:02 -0800", "msg_from": "Junaili Lie <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware questions" }, { "msg_contents": "On Tue, 2005-03-22 at 17:32 -0800, Junaili Lie wrote:\n> Here is what we are thinking:\n> - Dual Xeon 2.8 Ghz\n> - 4GB DDR2 400 Mhz Dual Ranked DIMMS (is dual ranked or single ranked\n> makes any differences in terms of performance?). Do you guys think 4GB\n> is reasonably enough?\n> - 73 GB 15k RPM Ultra 320 SCSI Hard Drive\n> - Dual on-board NICS (is this enough, or Gigabit network adapter will help?)\n\nPurely based on price alone, you could get a Sun V20z with similar\nconfig for $400 (list) less... but I don't know what discounts etc you\nget.\n\nAMD's processor/memory architecture has a higher throughput, and the\nsize of the data and speeds you are asking about will need it.\n\nThere is still some talk about the context-switching issue with\nmulti-xeons. I am under the impression that it still gets some people.\n\nYou will likely want more disks as well.\n\nSome recent threads on this topic:\nhttp://archives.postgresql.org/pgsql-performance/2005-03/msg00177.php\nhttp://archives.postgresql.org/pgsql-performance/2005-03/msg00238.php\nhttp://archives.postgresql.org/pgsql-performance/2005-03/msg00406.php\n\nHTH,\n\n-- \nKarim Nassar\nDepartment of Computer Science\nBox 15600, College of Engineering and Natural Sciences\nNorthern Arizona University, Flagstaff, Arizona 86011\nOffice: (928) 523-5868 -=- Mobile: (928) 699-9221\n\n", "msg_date": "Tue, 22 Mar 2005 19:21:06 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware questions" }, { "msg_contents": "Junaili,\n\nI'd suggest you don't buy a dell. The aren't particularly good performers.\nDave\n\nJunaili Lie wrote:\n\n>Hi guys,\n>We are in the process of buying a new dell server.\n>Here is what we need to be able to do:\n>- we need to be able to do queries on tables that has 10-20 millions\n>of records (around 40-60 bytes each row) in less than 5-7 seconds.\n>We also need the hardware to be able to handle up to 50 millions\n>records on a few tables (5 tables in the DB).\n>\n>Here is what we are thinking:\n>- Dual Xeon 2.8 Ghz\n>- 4GB DDR2 400 Mhz Dual Ranked DIMMS (is dual ranked or single ranked\n>makes any differences in terms of performance?). 
Do you guys think 4GB\n>is reasonably enough?\n>- 73 GB 15k RPM Ultra 320 SCSI Hard Drive\n>- Dual on-board NICS (is this enough, or Gigabit network adapter will help?)\n>\n>Any input or suggestions is greatly appreciated.\n>Thank you,\n>\n>\n>Jun\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n> \n>\n", "msg_date": "Wed, 23 Mar 2005 07:53:10 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware questions" } ]
[ { "msg_contents": "I observed slowdowns when I declared SQL function as strict. There were\nno slowdowns, when I implmented the same function in plpgsql, in fact it\ngot faster with strict, if parameters where NULL. Could it be\nside-effect of SQL function inlining? Is there CASE added around the\nfunction to not calculate it, when one of the parameters is NULL?\n\nThe functions:\n\ncreate or replace function keskmine_omahind(kogus, raha) returns raha\n language sql\n immutable\n strict\nas '\n SELECT CASE WHEN $1 > 0 THEN $2 / $1 ELSE NULL END::raha;\n';\n\ncreate or replace function keskmine_omahind2(kogus, raha) returns raha\n language plpgsql\n immutable\n strict\nas '\nBEGIN\n RETURN CASE WHEN $1 > 0 THEN $2 / $1 ELSE NULL END::raha;\nEND;\n';\n\nWith strict:\n\nepos=# select count(keskmine_omahind(laokogus, laosumma)) from kaubad;\n count\n-------\n 9866\n(1 row)\n\nTime: 860,495 ms\n\nepos=# select count(keskmine_omahind2(laokogus, laosumma)) from kaubad;\n count\n-------\n 9866\n(1 row)\n\nTime: 178,922 ms\n\nWithout strict:\n\nepos=# select count(keskmine_omahind(laokogus, laosumma)) from kaubad;\n count\n-------\n 9866\n(1 row)\n\nTime: 88,151 ms\n\nepos=# select count(keskmine_omahind2(laokogus, laosumma)) from kaubad;\n count\n-------\n 9866\n(1 row)\n\nTime: 178,383 ms\n\nepos=# select version();\n version\n------------------------------------------------------------------------\n------------------------------\n PostgreSQL 7.4.5 on i386-pc-linux-gnu, compiled by GCC i386-linux-gcc\n(GCC) 3.3.4 (Debian 1:3.3.4-9)\n\n Tambet\n\n> -----Original Message-----\n> From: Neil Conway [mailto:[email protected]] \n> Sent: Monday, March 21, 2005 7:13 AM\n> To: Bruno Wolff III\n> Cc: Keith Worthington; [email protected]\n> Subject: Re: View vs function\n> \n> \n> Bruno Wolff III wrote:\n> > Functions are just black boxes to the planner.\n> \n> ... unless the function is a SQL function that is trivial \n> enough for the \n> planner to inline it into the plan of the invoking query. \n> Currently, we \n> won't inline set-returning SQL functions that are used in the query's \n> rangetable, though. This would be worth doing, I think -- I'm \n> not sure \n> how much work it would be, though.\n> \n> -Neil\n> \n", "msg_date": "Wed, 23 Mar 2005 12:03:26 +0200", "msg_from": "\"Tambet Matiisen\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL function inlining (was: View vs function)" }, { "msg_contents": "\"Tambet Matiisen\" <[email protected]> writes:\n> I observed slowdowns when I declared SQL function as strict. There were\n> no slowdowns, when I implmented the same function in plpgsql, in fact it\n> got faster with strict, if parameters where NULL. Could it be\n> side-effect of SQL function inlining? Is there CASE added around the\n> function to not calculate it, when one of the parameters is NULL?\n\nIIRC we will not inline a STRICT SQL function if the resulting\nexpression would not behave strict-ly. This is clearly a necessary rule\nbecause inlining would change the behavior otherwise. But the test for\nit is pretty simplistic: CASEs are considered not strict, period. 
So I\nthink you are measuring the difference between inlined and not-inlined.\n\nI'd suggest just leaving off the STRICT if you are writing a SQL\nfunction you hope to have inlined.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Mar 2005 11:55:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL function inlining (was: View vs function) " }, { "msg_contents": "* Tom Lane <[email protected]> wrote:\n\n<big_snip>\n\nBTW: is it possible to explicitly clear the cache for immutable \nfunctions ?\n\nI'd like to use immutable functions for really often lookups like \nfetching a username by uid and vice versa. The queried tables \nchange very rarely, but when they change is quite unpredictable.\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n", "msg_date": "Thu, 24 Mar 2005 14:32:48 +0100", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "clear function cache (WAS: SQL function inlining)" }, { "msg_contents": "On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:\n\n> BTW: is it possible to explicitly clear the cache for immutable \n> functions ?\n\nWhat cache? There is no caching of function results.\n\n> I'd like to use immutable functions for really often lookups like \n> fetching a username by uid and vice versa. The queried tables \n> change very rarely, but when they change is quite unpredictable.\n\nMaybe you should use a stable function if you fear we'll having function\nresult caching without you noticing.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Aprende a avergonzarte m�s ante ti que ante los dem�s\" (Dem�crito)\n", "msg_date": "Thu, 24 Mar 2005 09:42:40 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear function cache (WAS: SQL function inlining)" }, { "msg_contents": "* Alvaro Herrera <[email protected]> wrote:\n> On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:\n> \n> > BTW: is it possible to explicitly clear the cache for immutable \n> > functions ?\n> \n> What cache? There is no caching of function results.\n\nNot ? So what's immutable for ?\n\n<snip>\n> > I'd like to use immutable functions for really often lookups like \n> > fetching a username by uid and vice versa. The queried tables \n> > change very rarely, but when they change is quite unpredictable.\n> \n> Maybe you should use a stable function if you fear we'll having function\n> result caching without you noticing.\n\nhmm, this makes more real evaluations necessary than w/ immuatable.\nAFAIK stable functions have to be evaluated once per query, and the \nresults are not cached between several queries.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. 
-- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n", "msg_date": "Thu, 24 Mar 2005 15:12:33 +0100", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear function cache (WAS: SQL function inlining)" }, { "msg_contents": "\nOn Thu, 24 Mar 2005, Enrico Weigelt wrote:\n\n> * Alvaro Herrera <[email protected]> wrote:\n> > On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:\n> >\n> > > BTW: is it possible to explicitly clear the cache for immutable\n> > > functions ?\n> >\n> > What cache? There is no caching of function results.\n>\n> Not ? So what's immutable for ?\n\nFor knowing that you can do things like use it in a functional index and\nI think for things like constant folding in a prepared plan.\n", "msg_date": "Thu, 24 Mar 2005 06:58:34 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear function cache (WAS: SQL function inlining)" }, { "msg_contents": "* Stephan Szabo <[email protected]> wrote:\n> \n> On Thu, 24 Mar 2005, Enrico Weigelt wrote:\n> \n> > * Alvaro Herrera <[email protected]> wrote:\n> > > On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:\n> > >\n> > > > BTW: is it possible to explicitly clear the cache for immutable\n> > > > functions ?\n> > >\n> > > What cache? There is no caching of function results.\n> >\n> > Not ? So what's immutable for ?\n> \n> For knowing that you can do things like use it in a functional index and\n> I think for things like constant folding in a prepared plan.\n\nSo when can I expect the function to be reevaluated ? \nNext query ? Next session ? Random time ?\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n", "msg_date": "Fri, 15 Apr 2005 22:40:55 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear function cache (WAS: SQL function inlining)" } ]
[ { "msg_contents": "The query and the corresponding EXPLAIN is at\n\nhttp://hannes.imos.net/query.txt\n\nI'd like to use the column q.replaced_serials for multiple calculations\nin the SELECT clause, but every time it is referenced there in some way\nthe whole query in the FROM clause returning q is executed again.\n\nThis doesn't make sense to me at all and eats performance.\n\nIf this wasn't clear enough, for every\n\nq.replaced_serials <insert_random_calculation> AS some_column\n\nin the SELECT clause there is new block of\n\n---------------------------------------------------------------\n-> Aggregate (cost=884.23..884.23 rows=1 width=0)\n -> Nested Loop (cost=0.00..884.23 rows=1 width=0)\n -> Index Scan using ix_rma_ticket_serials_replace on \n\n rma_ticket_serials rts (cost=0.00..122.35\n rows=190 width=4)\n Index Cond: (\"replace\" = false)\n -> Index Scan using pk_serials on serials s\n (cost=0.00..3.51 rows=1 width=4)\n Index Cond: (s.serial_id = \"outer\".serial_id)\n Filter: ((article_no = $0) AND (delivery_id = $1))\n---------------------------------------------------------------\n\nin the EXPLAIN result.\n\nFor those who wonder why I do this FROM (SELECT...). I was searching for\na way to use the result of an subselect for multiple calculations in the\nSELECT clause and return that calculation results as individual columns.\n\nI tested a bit further and found out that PG behaves the same in case q\nis a view. This makes me wonder how efficient the optimizer can work\nwith views - or even worse - nested views.\n\nTested and reproduced on PG 7.4.1 linux and 8.0.0 win32.\n\n\nThanks in advance,\nHannes Dorbath\n", "msg_date": "Thu, 24 Mar 2005 11:31:11 +0100", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Query Optimizer Failure / Possible Bug" }, { "msg_contents": "hm, a few days and not a single reply :|\n\nany more information needed? test data? simplified test case? anything?\n\n\nthanks\n\n\nHannes Dorbath wrote:\n> The query and the corresponding EXPLAIN is at\n> \n> http://hannes.imos.net/query.txt\n> \n> I'd like to use the column q.replaced_serials for multiple calculations\n> in the SELECT clause, but every time it is referenced there in some way\n> the whole query in the FROM clause returning q is executed again.\n> \n> This doesn't make sense to me at all and eats performance.\n> \n> If this wasn't clear enough, for every\n> \n> q.replaced_serials <insert_random_calculation> AS some_column\n> \n> in the SELECT clause there is new block of\n> \n> ---------------------------------------------------------------\n> -> Aggregate (cost=884.23..884.23 rows=1 width=0)\n> -> Nested Loop (cost=0.00..884.23 rows=1 width=0)\n> -> Index Scan using ix_rma_ticket_serials_replace on\n> rma_ticket_serials rts (cost=0.00..122.35\n> rows=190 width=4)\n> Index Cond: (\"replace\" = false)\n> -> Index Scan using pk_serials on serials s\n> (cost=0.00..3.51 rows=1 width=4)\n> Index Cond: (s.serial_id = \"outer\".serial_id)\n> Filter: ((article_no = $0) AND (delivery_id = $1))\n> ---------------------------------------------------------------\n> \n> in the EXPLAIN result.\n> \n> For those who wonder why I do this FROM (SELECT...). I was searching for\n> a way to use the result of an subselect for multiple calculations in the\n> SELECT clause and return that calculation results as individual columns.\n> \n> I tested a bit further and found out that PG behaves the same in case q\n> is a view. 
This makes me wonder how efficient the optimizer can work\n> with views - or even worse - nested views.\n> \n> Tested and reproduced on PG 7.4.1 linux and 8.0.0 win32.\n> \n> \n> Thanks in advance,\n> Hannes Dorbath\n\n-- \nimos Gesellschaft fuer Internet-Marketing und Online-Services mbH\nAlfons-Feifel-Str. 9 // D-73037 Goeppingen // Stauferpark Ost\nTel: 07161 93339-14 // Fax: 07161 93339-99 // Internet: www.imos.net\n", "msg_date": "Mon, 28 Mar 2005 16:14:44 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" }, { "msg_contents": "Hannes,\n\n> The query and the corresponding EXPLAIN is at\n>\n> http://hannes.imos.net/query.txt\n\nThe problem is that you're using a complex corellated sub-select in the SELECT \nclause:\n\n SELECT\n d.delivery_id,\n da.article_no,\n da.amount,\n (\n SELECT\n COUNT(*)\n FROM\n serials s\n INNER JOIN rma_ticket_serials rts ON (\n s.serial_id = rts.serial_id\n )\n WHERE\n s.article_no = da.article_no AND\n s.delivery_id = d.delivery_id AND\n rts.replace = FALSE\n ) AS replaced_serials\n\nThis means that the planner pretty much has to iterate over the subquery, \nrunning it once for each row in the result set. If you want the optimizer \nto use a JOIN structure instead, put the subselect in the FROM clause.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 28 Mar 2005 11:51:17 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" }, { "msg_contents": "Thank you very much for your reply. I'll try to modify it.\n\n\nJosh Berkus wrote:\n> Hannes,\n> \n> \n>>The query and the corresponding EXPLAIN is at\n>>\n>>http://hannes.imos.net/query.txt\n> \n> \n> The problem is that you're using a complex corellated sub-select in the SELECT \n> clause:\n> \n> SELECT\n> d.delivery_id,\n> da.article_no,\n> da.amount,\n> (\n> SELECT\n> COUNT(*)\n> FROM\n> serials s\n> INNER JOIN rma_ticket_serials rts ON (\n> s.serial_id = rts.serial_id\n> )\n> WHERE\n> s.article_no = da.article_no AND\n> s.delivery_id = d.delivery_id AND\n> rts.replace = FALSE\n> ) AS replaced_serials\n> \n> This means that the planner pretty much has to iterate over the subquery, \n> running it once for each row in the result set. If you want the optimizer \n> to use a JOIN structure instead, put the subselect in the FROM clause.\n> \n\n-- \nimos Gesellschaft fuer Internet-Marketing und Online-Services mbH\nAlfons-Feifel-Str. 9 // D-73037 Goeppingen // Stauferpark Ost\nTel: 07161 93339-14 // Fax: 07161 93339-99 // Internet: www.imos.net\n", "msg_date": "Tue, 29 Mar 2005 02:15:01 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" }, { "msg_contents": "\n\tNoticed this problem,too.\n\tYou can always make the calculation you want done once inside a set \nreturning function so it'll behave like a table, but that's ugly.\n\nOn Mon, 28 Mar 2005 16:14:44 +0200, Hannes Dorbath \n<[email protected]> wrote:\n\n> hm, a few days and not a single reply :|\n>\n> any more information needed? test data? simplified test case? 
anything?\n>\n>\n> thanks\n>\n>\n> Hannes Dorbath wrote:\n>> The query and the corresponding EXPLAIN is at\n>> http://hannes.imos.net/query.txt\n>> I'd like to use the column q.replaced_serials for multiple calculations\n>> in the SELECT clause, but every time it is referenced there in some way\n>> the whole query in the FROM clause returning q is executed again.\n>> This doesn't make sense to me at all and eats performance.\n>> If this wasn't clear enough, for every\n>> q.replaced_serials <insert_random_calculation> AS some_column\n>> in the SELECT clause there is new block of\n>> ---------------------------------------------------------------\n>> -> Aggregate (cost=884.23..884.23 rows=1 width=0)\n>> -> Nested Loop (cost=0.00..884.23 rows=1 width=0)\n>> -> Index Scan using ix_rma_ticket_serials_replace on\n>> rma_ticket_serials rts (cost=0.00..122.35\n>> rows=190 width=4)\n>> Index Cond: (\"replace\" = false)\n>> -> Index Scan using pk_serials on serials s\n>> (cost=0.00..3.51 rows=1 width=4)\n>> Index Cond: (s.serial_id = \"outer\".serial_id)\n>> Filter: ((article_no = $0) AND (delivery_id = $1))\n>> ---------------------------------------------------------------\n>> in the EXPLAIN result.\n>> For those who wonder why I do this FROM (SELECT...). I was searching \n>> for\n>> a way to use the result of an subselect for multiple calculations in the\n>> SELECT clause and return that calculation results as individual columns.\n>> I tested a bit further and found out that PG behaves the same in case q\n>> is a view. This makes me wonder how efficient the optimizer can work\n>> with views - or even worse - nested views.\n>> Tested and reproduced on PG 7.4.1 linux and 8.0.0 win32.\n>> Thanks in advance,\n>> Hannes Dorbath\n>\n\n\n", "msg_date": "Sun, 03 Apr 2005 10:01:13 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" }, { "msg_contents": "Mhh. I have no clue about the internals of PostgreSQL and query planing, \nbut to me as user this should really be a thing the optimizer has to \nwork out..\n\n\nOn 03.04.2005 10:01, PFC wrote:\n> \n> Noticed this problem,too.\n> You can always make the calculation you want done once inside a set \n> returning function so it'll behave like a table, but that's ugly.\n> \n> On Mon, 28 Mar 2005 16:14:44 +0200, Hannes Dorbath \n> <[email protected]> wrote:\n> \n>> hm, a few days and not a single reply :|\n>>\n>> any more information needed? test data? simplified test case? 
anything?\n>>\n>>\n>> thanks\n>>\n>>\n>> Hannes Dorbath wrote:\n>>\n>>> The query and the corresponding EXPLAIN is at\n>>> http://hannes.imos.net/query.txt\n>>> I'd like to use the column q.replaced_serials for multiple calculations\n>>> in the SELECT clause, but every time it is referenced there in some way\n>>> the whole query in the FROM clause returning q is executed again.\n>>> This doesn't make sense to me at all and eats performance.\n>>> If this wasn't clear enough, for every\n>>> q.replaced_serials <insert_random_calculation> AS some_column\n>>> in the SELECT clause there is new block of\n>>> ---------------------------------------------------------------\n>>> -> Aggregate (cost=884.23..884.23 rows=1 width=0)\n>>> -> Nested Loop (cost=0.00..884.23 rows=1 width=0)\n>>> -> Index Scan using ix_rma_ticket_serials_replace on\n>>> rma_ticket_serials rts (cost=0.00..122.35\n>>> rows=190 width=4)\n>>> Index Cond: (\"replace\" = false)\n>>> -> Index Scan using pk_serials on serials s\n>>> (cost=0.00..3.51 rows=1 width=4)\n>>> Index Cond: (s.serial_id = \"outer\".serial_id)\n>>> Filter: ((article_no = $0) AND (delivery_id = $1))\n>>> ---------------------------------------------------------------\n>>> in the EXPLAIN result.\n>>> For those who wonder why I do this FROM (SELECT...). I was \n>>> searching for\n>>> a way to use the result of an subselect for multiple calculations in the\n>>> SELECT clause and return that calculation results as individual columns.\n>>> I tested a bit further and found out that PG behaves the same in case q\n>>> is a view. This makes me wonder how efficient the optimizer can work\n>>> with views - or even worse - nested views.\n>>> Tested and reproduced on PG 7.4.1 linux and 8.0.0 win32.\n>>> Thanks in advance,\n>>> Hannes Dorbath\n>>\n>>\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Mon, 04 Apr 2005 17:18:24 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" }, { "msg_contents": "Some people on the #postgresql irc channel pointed out that it's a known \nissue.\n\nhttp://www.qaix.com/postgresql-database-development/246-557-select-based-on-function-result-read.shtml\n\nA more simple testcase is below. 
Adding OFFSET 0 to the inner query does \nindeed fix it in my case.\n\n\nSELECT\n tmp.user_id AS foo,\n tmp.user_id AS bar,\n tmp.user_id AS baz\nFROM\n (\n SELECT\n u.user_id\n FROM\n users u\n ) AS tmp;\n\n\n\nSeq Scan on users (cost=0.00..1.53 rows=53 width=4) (actual \ntime=0.230..0.233 rows=1 loops=1)\nTotal runtime: 0.272 ms\n\n\n---------------------------\n\n\nSELECT\n tmp.user_id AS foo,\n tmp.user_id AS bar,\n tmp.user_id AS baz\nFROM\n (\n SELECT\n (SELECT 1) AS user_id\n FROM\n users u\n ) AS tmp;\n\n\n\n Seq Scan on users u (cost=0.03..1.56 rows=53 width=0) (actual \ntime=0.216..0.219 rows=1 loops=1)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.004..0.006 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.002..0.004 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.002..0.003 rows=1 loops=1)\nTotal runtime: 0.270 ms\n\n\n---------------------------\n\n\nSELECT\n tmp.user_id AS foo,\n tmp.user_id AS bar,\n tmp.user_id AS baz\nFROM\n (\n SELECT\n (SELECT 1) AS user_id\n FROM\n users u\n OFFSET 0\n ) AS tmp;\n\n\nSubquery Scan tmp (cost=0.01..1.03 rows=1 width=4) (actual \ntime=0.032..0.042 rows=1 loops=1)\n -> Limit (cost=0.01..1.02 rows=1 width=0) (actual time=0.026..0.033 \nrows=1 loops=1)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.003..0.004 rows=1 loops=1)\n -> Seq Scan on users u (cost=0.00..1.01 rows=1 width=0) \n(actual time=0.022..0.027 rows=1 loops=1)\nTotal runtime: 0.090 ms\n\n\n\n\nOn 04.04.2005 17:18, Hannes Dorbath wrote:\n> Mhh. I have no clue about the internals of PostgreSQL and query planing, \n> but to me as user this should really be a thing the optimizer has to \n> work out..\n> \n> \n> On 03.04.2005 10:01, PFC wrote:\n> \n>>\n>> Noticed this problem,too.\n>> You can always make the calculation you want done once inside a \n>> set returning function so it'll behave like a table, but that's ugly.\n>>\n>> On Mon, 28 Mar 2005 16:14:44 +0200, Hannes Dorbath \n>> <[email protected]> wrote:\n>>\n>>> hm, a few days and not a single reply :|\n>>>\n>>> any more information needed? test data? simplified test case? anything?\n>>>\n>>>\n>>> thanks\n>>>\n>>>\n>>> Hannes Dorbath wrote:\n>>>\n>>>> The query and the corresponding EXPLAIN is at\n>>>> http://hannes.imos.net/query.txt\n>>>> I'd like to use the column q.replaced_serials for multiple \n>>>> calculations\n>>>> in the SELECT clause, but every time it is referenced there in some way\n>>>> the whole query in the FROM clause returning q is executed again.\n>>>> This doesn't make sense to me at all and eats performance.\n>>>> If this wasn't clear enough, for every\n>>>> q.replaced_serials <insert_random_calculation> AS some_column\n>>>> in the SELECT clause there is new block of\n>>>> ---------------------------------------------------------------\n>>>> -> Aggregate (cost=884.23..884.23 rows=1 width=0)\n>>>> -> Nested Loop (cost=0.00..884.23 rows=1 width=0)\n>>>> -> Index Scan using ix_rma_ticket_serials_replace on\n>>>> rma_ticket_serials rts (cost=0.00..122.35\n>>>> rows=190 width=4)\n>>>> Index Cond: (\"replace\" = false)\n>>>> -> Index Scan using pk_serials on serials s\n>>>> (cost=0.00..3.51 rows=1 width=4)\n>>>> Index Cond: (s.serial_id = \"outer\".serial_id)\n>>>> Filter: ((article_no = $0) AND (delivery_id = $1))\n>>>> ---------------------------------------------------------------\n>>>> in the EXPLAIN result.\n>>>> For those who wonder why I do this FROM (SELECT...). 
I was \n>>>> searching for\n>>>> a way to use the result of an subselect for multiple calculations in \n>>>> the\n>>>> SELECT clause and return that calculation results as individual \n>>>> columns.\n>>>> I tested a bit further and found out that PG behaves the same in \n>>>> case q\n>>>> is a view. This makes me wonder how efficient the optimizer can work\n>>>> with views - or even worse - nested views.\n>>>> Tested and reproduced on PG 7.4.1 linux and 8.0.0 win32.\n>>>> Thanks in advance,\n>>>> Hannes Dorbath\n>>>\n>>>\n>>>\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n", "msg_date": "Sat, 16 Apr 2005 12:45:30 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Optimizer Failure / Possible Bug" } ]
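For reference, a sketch of the two workarounds discussed in this thread, written against the tables named in the original query; the relation names behind the aliases d and da are not shown in the thread, so deliveries and delivery_articles are placeholders, and the second output column is just an invented extra use of the count:

    -- Hoist the correlated count into the FROM clause as a grouped
    -- derived table, so the aggregate runs once instead of once per
    -- referencing column in the SELECT list.
    SELECT d.delivery_id,
           da.article_no,
           da.amount,
           COALESCE(r.replaced, 0)             AS replaced_serials,
           da.amount - COALESCE(r.replaced, 0) AS remaining_serials
    FROM deliveries d
    JOIN delivery_articles da ON da.delivery_id = d.delivery_id
    LEFT JOIN (
        SELECT s.delivery_id, s.article_no, count(*) AS replaced
        FROM serials s
        JOIN rma_ticket_serials rts ON rts.serial_id = s.serial_id
        WHERE rts."replace" = false
        GROUP BY s.delivery_id, s.article_no
    ) r ON r.delivery_id = d.delivery_id
       AND r.article_no  = da.article_no;

Alternatively, as the simplified testcase above shows, keeping the derived table and adding OFFSET 0 inside it stops the planner from flattening it, so the inner query is evaluated once per row rather than once for every column that references it.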
[ { "msg_contents": "Hello !\n\nI'm running pg_autovacuum on a 1GHz, 80Gig, 512Mhz machine. The database is\nabout 30MB tarred. We have about 50000 Updates/Inserts/Deletes per day. It\nruns beautifully for ~4 days. Then the HDD activity and the Postmaster CPU\nusage goes up ALOT. Even though I have plenty (?) of FSM (2 million) pages.\nI perform a vacuum and everything is back to normal for another 4 days. I\ncould schedule a manual vacuum each day but the util is not called\npg_SemiAutoVacuum so I'm hoping this is not necessary. The same user that\nran the manual vacuum is running pg_autovacuum. The normal CPU usage is\nabout 10% w/ little HD activity.\n\nIm running autovacuum with the following flags -d 3 -v 300 -V 0.1 -s 180 -S\n0.1 -a 200 -A 0.1\n\nBelow are some snipplets regarding vacuuming from the busiest table\n\nThis is the last VACUUM ANALYZE performed by pg_autovacuum before I ran the\nmanual vacuum\n\n[2005-03-24 02:05:43 EST] DEBUG: Performing: VACUUM ANALYZE\n\"public\".\"file_92\"\n[2005-03-24 02:05:52 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-24 02:05:52 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-24 02:05:52 EST] INFO: reltuples: 106228.000000; relpages:\n9131\n[2005-03-24 02:05:52 EST] INFO: curr_analyze_count: 629121;\ncurr_vacuum_count: 471336\n[2005-03-24 02:05:52 EST] INFO: last_analyze_count: 629121;\nlast_vacuum_count: 471336\n[2005-03-24 02:05:52 EST] INFO: analyze_threshold: 10822;\nvacuum_threshold: 10922\n\nThis is the last pg_autovacuum debug output before I ran the manual vacuum\n\n[2005-03-24 09:18:44 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-24 09:18:44 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-24 09:18:44 EST] INFO: reltuples: 106228.000000; relpages:\n9131\n[2005-03-24 09:18:44 EST] INFO: curr_analyze_count: 634119;\ncurr_vacuum_count: 476095\n[2005-03-24 09:18:44 EST] INFO: last_analyze_count: 629121;\nlast_vacuum_count: 471336\n[2005-03-24 09:18:44 EST] INFO: analyze_threshold: 10822;\nvacuum_threshold: 10922\n\nfile_92 had about 10000 Inserts/Deletes between 02:05 and 9:20\n\nThen i Ran a vacuum verbose\n\n23 Mar 05 - 9:20 AM\nINFO: vacuuming \"public.file_92\"\nINFO: index \"file_92_record_number_key\" now contains 94 row versions in\n2720 pages\nDETAIL: 107860 index row versions were removed.\n2712 index pages have been deleted, 2060 are currently reusable.\nCPU 0.22s/0.64u sec elapsed 8.45 sec.\nINFO: \"file_92\": removed 107860 row versions in 9131 pages\nDETAIL: CPU 1.13s/4.27u sec elapsed 11.75 sec.\nINFO: \"file_92\": found 107860 removable, 92 nonremovable row versions in\n9131 pages\nDETAIL: 91 dead row versions cannot be removed yet.\nThere were 303086 unused item pointers.\n0 pages are entirely empty.\nCPU 1.55s/5.00u sec elapsed 20.86 sec.\nINFO: \"file_92\": truncated 9131 to 8423 pages\nDETAIL: CPU 0.65s/0.03u sec elapsed 5.80 sec.\nINFO: free space map: 57 relations, 34892 pages stored; 34464 total pages\nneeded\nDETAIL: Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB\nshared memory.\n\nAlso, file_92 is just a temporary storage area, for records waiting to be\nprocessed. Records are in there typically ~10 sec.\n\nOver 100'000 Index Rows removed, 300'000 unused item pointers ? How could\nautovacuum let this happen ? I would estimate the table had about 10000\ninserts/deletes between the last pg_autovacuum \"Vacuum analyze\" and my\nmanual vacuum verbose.\n\nIt is like the suction is not strong enough ;)\n\nAny ideas ? 
It would be greatly appreciated as this is taking me one step\ncloser to the looney bin.\n\nThanks\n\n/Otto Blomqvist\n\n\n", "msg_date": "Thu, 24 Mar 2005 10:17:06 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_autovacuum not having enough suction ?" }, { "msg_contents": "\"Otto Blomqvist\" <[email protected]> writes:\n> Over 100'000 Index Rows removed, 300'000 unused item pointers ? How could\n> autovacuum let this happen ?\n\nWhat PG version is this?\n\n(The earlier autovacuum releases had some bugs with large tables, thus\nthe question...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Mar 2005 14:32:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ? " }, { "msg_contents": "Well the simple answer is that pg_autovacuum didn't see 10,000 inserts \nupdates or deletes.\npg_autovacuum saw: 476095 - 471336 = 4759 U/D's relevant for \nvacuuming and\n 634119 - 629121 = 4998 I/U/D's relevant for performing analyze.\n\nThe tough question is why is pg_autovacuum not seeing all the updates. \nSince autovacuum depends on the stats system for it's numbers, the most \nlikely answer is that the stats system is not able to keep up with the \nworkload, and is ignoring some of the updates. Would you check to see \nwhat the stats system is reporting for numbers of I/U/D's for the \nfile_92 table? The query pg_autovacuum uses is:\n\nselect a.oid,a.relname,a.relnamespace,a.relpages,a.relisshared,a.reltuples,\n b.schemaname,b.n_tup_ins,b.n_tup_upd,b.n_tup_del\nfrom pg_class a, pg_stat_all_tables b\nwhere a.oid=b.relid and a.relkind = 'r'\n\nTake a look at the n_tup_ins, upd, del numbers before and see if they \nare keeping up with the actual number if I/U/D's that you are \nperforming. If they are, then it's a pg_autovacuum problem that I will \nlook into further, if they are not, then it's a stats system problem \nthat I can't really help with.\n\nGood luck,\n\nMatthew\n\n\nOtto Blomqvist wrote:\n\n>Hello !\n>\n>I'm running pg_autovacuum on a 1GHz, 80Gig, 512Mhz machine. The database is\n>about 30MB tarred. We have about 50000 Updates/Inserts/Deletes per day. It\n>runs beautifully for ~4 days. Then the HDD activity and the Postmaster CPU\n>usage goes up ALOT. Even though I have plenty (?) of FSM (2 million) pages.\n>I perform a vacuum and everything is back to normal for another 4 days. I\n>could schedule a manual vacuum each day but the util is not called\n>pg_SemiAutoVacuum so I'm hoping this is not necessary. The same user that\n>ran the manual vacuum is running pg_autovacuum. 
The normal CPU usage is\n>about 10% w/ little HD activity.\n>\n>Im running autovacuum with the following flags -d 3 -v 300 -V 0.1 -s 180 -S\n>0.1 -a 200 -A 0.1\n>\n>Below are some snipplets regarding vacuuming from the busiest table\n>\n>This is the last VACUUM ANALYZE performed by pg_autovacuum before I ran the\n>manual vacuum\n>\n>[2005-03-24 02:05:43 EST] DEBUG: Performing: VACUUM ANALYZE\n>\"public\".\"file_92\"\n>[2005-03-24 02:05:52 EST] INFO: table name: secom.\"public\".\"file_92\"\n>[2005-03-24 02:05:52 EST] INFO: relid: 9384219; relisshared: 0\n>[2005-03-24 02:05:52 EST] INFO: reltuples: 106228.000000; relpages:\n>9131\n>[2005-03-24 02:05:52 EST] INFO: curr_analyze_count: 629121;\n>curr_vacuum_count: 471336\n>[2005-03-24 02:05:52 EST] INFO: last_analyze_count: 629121;\n>last_vacuum_count: 471336\n>[2005-03-24 02:05:52 EST] INFO: analyze_threshold: 10822;\n>vacuum_threshold: 10922\n>\n>This is the last pg_autovacuum debug output before I ran the manual vacuum\n>\n>[2005-03-24 09:18:44 EST] INFO: table name: secom.\"public\".\"file_92\"\n>[2005-03-24 09:18:44 EST] INFO: relid: 9384219; relisshared: 0\n>[2005-03-24 09:18:44 EST] INFO: reltuples: 106228.000000; relpages:\n>9131\n>[2005-03-24 09:18:44 EST] INFO: curr_analyze_count: 634119;\n>curr_vacuum_count: 476095\n>[2005-03-24 09:18:44 EST] INFO: last_analyze_count: 629121;\n>last_vacuum_count: 471336\n>[2005-03-24 09:18:44 EST] INFO: analyze_threshold: 10822;\n>vacuum_threshold: 10922\n>\n>file_92 had about 10000 Inserts/Deletes between 02:05 and 9:20\n>\n>Then i Ran a vacuum verbose\n>\n>23 Mar 05 - 9:20 AM\n>INFO: vacuuming \"public.file_92\"\n>INFO: index \"file_92_record_number_key\" now contains 94 row versions in\n>2720 pages\n>DETAIL: 107860 index row versions were removed.\n>2712 index pages have been deleted, 2060 are currently reusable.\n>CPU 0.22s/0.64u sec elapsed 8.45 sec.\n>INFO: \"file_92\": removed 107860 row versions in 9131 pages\n>DETAIL: CPU 1.13s/4.27u sec elapsed 11.75 sec.\n>INFO: \"file_92\": found 107860 removable, 92 nonremovable row versions in\n>9131 pages\n>DETAIL: 91 dead row versions cannot be removed yet.\n>There were 303086 unused item pointers.\n>0 pages are entirely empty.\n>CPU 1.55s/5.00u sec elapsed 20.86 sec.\n>INFO: \"file_92\": truncated 9131 to 8423 pages\n>DETAIL: CPU 0.65s/0.03u sec elapsed 5.80 sec.\n>INFO: free space map: 57 relations, 34892 pages stored; 34464 total pages\n>needed\n>DETAIL: Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB\n>shared memory.\n>\n>Also, file_92 is just a temporary storage area, for records waiting to be\n>processed. Records are in there typically ~10 sec.\n>\n>Over 100'000 Index Rows removed, 300'000 unused item pointers ? How could\n>autovacuum let this happen ? I would estimate the table had about 10000\n>inserts/deletes between the last pg_autovacuum \"Vacuum analyze\" and my\n>manual vacuum verbose.\n>\n>It is like the suction is not strong enough ;)\n>\n>Any ideas ? It would be greatly appreciated as this is taking me one step\n>closer to the looney bin.\n>\n>Thanks\n>\n>/Otto Blomqvist\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> \n>\n\n-- \nMatthew O'Connor\nV.P. of Operations\nTerrie O'Connor Realtors\n201-934-4900 x27\n\n", "msg_date": "Thu, 24 Mar 2005 15:25:48 -0500", "msg_from": "\"Matthew T. 
O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "Sorry about that. I'm Running 8.0.0 on Linux Redhat 8.0\n\n\n\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> \"Otto Blomqvist\" <[email protected]> writes:\n> > Over 100'000 Index Rows removed, 300'000 unused item pointers ? How\ncould\n> > autovacuum let this happen ?\n>\n> What PG version is this?\n>\n> (The earlier autovacuum releases had some bugs with large tables, thus\n> the question...)\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n", "msg_date": "Thu, 24 Mar 2005 15:04:24 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "The version that shipped with 8.0 should be fine. The only version that \nhad the problem Tom referred to are in the early 7.4.x releases. \n\nDid you get my other message about information from the stats system \n(I'm not sure why my other post has yet to show up on the performance \nlist).\n\nMatthew\n\n\nOtto Blomqvist wrote:\n\n>Sorry about that. I'm Running 8.0.0 on Linux Redhat 8.0\n>\n>\n>\"Tom Lane\" <[email protected]> wrote in message\n>news:[email protected]...\n> \n>\n>>\"Otto Blomqvist\" <[email protected]> writes:\n>> \n>>\n>>>Over 100'000 Index Rows removed, 300'000 unused item pointers ? How\n>>> \n>>>\n>could\n> \n>\n>>>autovacuum let this happen ?\n>>> \n>>>\n>>What PG version is this?\n>>\n>>(The earlier autovacuum releases had some bugs with large tables, thus\n>>the question...)\n>>\n>>regards, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>> \n>>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n> \n>\n", "msg_date": "Thu, 24 Mar 2005 18:40:44 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "I would rather keep this on list since other people can chime in.\n\nOtto Blomqvist wrote:\n\n>It does not seem to be a Stats collector problem.\n>\n> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>---------+---------+--------------+----------+-------------+-----------+----\n>--------+-----------+-----------+-----------\n> 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>public | 158176 | 318527 | 158176\n>(1 row)\n>\n>I insert 50000 records\n>\n>secom=# select createfile_92records(1, 50000); <--- this is a pg script\n>that inserts records 1 threw 50000.\n> createfile_92records\n>----------------------\n> 0\n>\n>\n> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>---------+---------+--------------+----------+-------------+-----------+----\n>--------+-----------+-----------+-----------\n> 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>public | 208179 | 318932 | 158377\n>(1 row)\n>\n>reltuples does not change ? Hmm. 
n_tup_ins looks fine.\n> \n>\n\nThat is expected, reltuples only gets updated by a vacuum or an analyze.\n\n>This table is basically a queue full of records waiting to get transfered\n>over from our 68030 system to the PG database. The records are then moved\n>into folders (using a trigger) like file_92_myy depending on what month the\n>record was created on the 68030. During normal operations there should not\n>be more than 10 records at a time in the table, although during the course\n>of a day a normal system will get about 50k records. I create 50000 records\n>to simulate incoming traffic, since we don't have much traffic in the test\n>lab.\n>\n>After a few hours we have\n>\n>secom=# select count(*) from file_92;\n> count\n>-------\n> 42072\n>\n>So we have sent over approx 8000 Records.\n>\n> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>---------+---------+--------------+----------+-------------+-----------+----\n>--------+-----------+-----------+-----------\n> 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>public | 208218 | 334521 | 166152\n>(1 row)\n>\n>\n>n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close. (Each record\n>gets updated twice, then moved)\n>n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there are also\n>minor background traffic going on)\n>\n>\n>I could send over the full vacuum verbose capture as well as the autovacuum\n>capture if that is of interest.\n>\n\nThat might be helpful. I don't see a stats system problem here, but I \nalso haven't heard of any autovac problems recently, so this might be \nsomething new.\n\nThanks,\n\nMatthew O'Connor\n\n\n", "msg_date": "Thu, 24 Mar 2005 18:58:00 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "It looks like the reltuples-values are screwed up. Even though rows are\nconstantly being removed from the table the reltuples keep going up. If I\nunderstand correctly that also makes the Vacuum threshold go up and we end\nup in a vicious circle. 
Right after pg_autovacuum performed a vacuum analyze\non the table it actually had 31000 records, but reltuples reports over 100k.\nI'm not sure if this means anything But i thought i would pass it along.\n\nPG version 8.0.0, 31MB tarred DB.\n\n[2005-03-25 09:16:14 EST] INFO: dbname: testing\n[2005-03-25 09:16:14 EST] INFO: oid: 9383816\n[2005-03-25 09:16:14 EST] INFO: username: (null)\n[2005-03-25 09:16:14 EST] INFO: password: (null)\n[2005-03-25 09:16:14 EST] INFO: conn is null, (not connected)\n[2005-03-25 09:16:14 EST] INFO: default_analyze_threshold: 1000\n[2005-03-25 09:16:14 EST] INFO: default_vacuum_threshold: 500\n\n\n[2005-03-25 09:05:12 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-25 09:05:12 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-25 09:05:12 EST] INFO: reltuples: 49185.000000; relpages:\n8423\n[2005-03-25 09:05:12 EST] INFO: curr_analyze_count: 919274;\ncurr_vacuum_count: 658176\n[2005-03-25 09:05:12 EST] INFO: last_analyze_count: 899272;\nlast_vacuum_count: 560541\n[2005-03-25 09:05:12 EST] INFO: analyze_threshold: 49685;\nvacuum_threshold: 100674\n\n\n[2005-03-25 09:10:12 EST] DEBUG: Performing: VACUUM ANALYZE\n\"public\".\"file_92\"\n[2005-03-25 09:10:33 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-25 09:10:33 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-25 09:10:33 EST] INFO: reltuples: 113082.000000; relpages:\n6624\n[2005-03-25 09:10:33 EST] INFO: curr_analyze_count: 923820;\ncurr_vacuum_count: 662699\n[2005-03-25 09:10:33 EST] INFO: last_analyze_count: 923820;\nlast_vacuum_count: 662699\n[2005-03-25 09:10:33 EST] INFO: analyze_threshold: 113582;\nvacuum_threshold: 227164\n\n\n[2005-03-25 09:16:14 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-25 09:16:14 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-25 09:16:14 EST] INFO: reltuples: 113082.000000; relpages:\n6624 <-- Actually has 31k rows\n[2005-03-25 09:16:14 EST] INFO: curr_analyze_count: 923820;\ncurr_vacuum_count: 662699\n[2005-03-25 09:16:14 EST] INFO: last_analyze_count: 923820;\nlast_vacuum_count: 662699\n[2005-03-25 09:16:14 EST] INFO: analyze_threshold: 113582;\nvacuum_threshold: 227164\n\nDETAIL: Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB\nshared memory.\n\n\n\n\n----- Original Message -----\nFrom: \"Matthew T. 
O'Connor\" <[email protected]>\nTo: \"Otto Blomqvist\" <[email protected]>;\n<[email protected]>\nSent: Thursday, March 24, 2005 3:58 PM\nSubject: Re: [PERFORM] pg_autovacuum not having enough suction ?\n\n\n> I would rather keep this on list since other people can chime in.\n>\n> Otto Blomqvist wrote:\n>\n> >It does not seem to be a Stats collector problem.\n> >\n> > oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>\n>---------+---------+--------------+----------+-------------+-----------+---\n-\n> >--------+-----------+-----------+-----------\n> > 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >public | 158176 | 318527 | 158176\n> >(1 row)\n> >\n> >I insert 50000 records\n> >\n> >secom=# select createfile_92records(1, 50000); <--- this is a pg\nscript\n> >that inserts records 1 threw 50000.\n> > createfile_92records\n> >----------------------\n> > 0\n> >\n> >\n> > oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>\n>---------+---------+--------------+----------+-------------+-----------+---\n-\n> >--------+-----------+-----------+-----------\n> > 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >public | 208179 | 318932 | 158377\n> >(1 row)\n> >\n> >reltuples does not change ? Hmm. n_tup_ins looks fine.\n> >\n> >\n>\n> That is expected, reltuples only gets updated by a vacuum or an analyze.\n>\n> >This table is basically a queue full of records waiting to get transfered\n> >over from our 68030 system to the PG database. The records are then moved\n> >into folders (using a trigger) like file_92_myy depending on what month\nthe\n> >record was created on the 68030. During normal operations there should\nnot\n> >be more than 10 records at a time in the table, although during the\ncourse\n> >of a day a normal system will get about 50k records. I create 50000\nrecords\n> >to simulate incoming traffic, since we don't have much traffic in the\ntest\n> >lab.\n> >\n> >After a few hours we have\n> >\n> >secom=# select count(*) from file_92;\n> > count\n> >-------\n> > 42072\n> >\n> >So we have sent over approx 8000 Records.\n> >\n> > oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>\n>---------+---------+--------------+----------+-------------+-----------+---\n-\n> >--------+-----------+-----------+-----------\n> > 9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >public | 208218 | 334521 | 166152\n> >(1 row)\n> >\n> >\n> >n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close. (Each record\n> >gets updated twice, then moved)\n> >n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there are also\n> >minor background traffic going on)\n> >\n> >\n> >I could send over the full vacuum verbose capture as well as the\nautovacuum\n> >capture if that is of interest.\n> >\n>\n> That might be helpful. I don't see a stats system problem here, but I\n> also haven't heard of any autovac problems recently, so this might be\n> something new.\n>\n> Thanks,\n>\n> Matthew O'Connor\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Fri, 25 Mar 2005 10:29:30 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "hmm.... 
the value in reltuples should be accurate after a vacuum (or \nvacuum analyze) if it's not it's a vacuum bug or something is going on \nthat isn't understood. If you or pg_autovacuum are running plain \nanalyze commands, that could explain the invalid reltules numbers.\n\nWas reltuples = 113082 correct right after the vacuum? \n\nMatthew\n\n\nOtto Blomqvist wrote:\n\n>It looks like the reltuples-values are screwed up. Even though rows are\n>constantly being removed from the table the reltuples keep going up. If I\n>understand correctly that also makes the Vacuum threshold go up and we end\n>up in a vicious circle. Right after pg_autovacuum performed a vacuum analyze\n>on the table it actually had 31000 records, but reltuples reports over 100k.\n>I'm not sure if this means anything But i thought i would pass it along.\n>\n>PG version 8.0.0, 31MB tarred DB.\n>\n>[2005-03-25 09:16:14 EST] INFO: dbname: testing\n>[2005-03-25 09:16:14 EST] INFO: oid: 9383816\n>[2005-03-25 09:16:14 EST] INFO: username: (null)\n>[2005-03-25 09:16:14 EST] INFO: password: (null)\n>[2005-03-25 09:16:14 EST] INFO: conn is null, (not connected)\n>[2005-03-25 09:16:14 EST] INFO: default_analyze_threshold: 1000\n>[2005-03-25 09:16:14 EST] INFO: default_vacuum_threshold: 500\n>\n>\n>[2005-03-25 09:05:12 EST] INFO: table name: secom.\"public\".\"file_92\"\n>[2005-03-25 09:05:12 EST] INFO: relid: 9384219; relisshared: 0\n>[2005-03-25 09:05:12 EST] INFO: reltuples: 49185.000000; relpages:\n>8423\n>[2005-03-25 09:05:12 EST] INFO: curr_analyze_count: 919274;\n>curr_vacuum_count: 658176\n>[2005-03-25 09:05:12 EST] INFO: last_analyze_count: 899272;\n>last_vacuum_count: 560541\n>[2005-03-25 09:05:12 EST] INFO: analyze_threshold: 49685;\n>vacuum_threshold: 100674\n>\n>\n>[2005-03-25 09:10:12 EST] DEBUG: Performing: VACUUM ANALYZE\n>\"public\".\"file_92\"\n>[2005-03-25 09:10:33 EST] INFO: table name: secom.\"public\".\"file_92\"\n>[2005-03-25 09:10:33 EST] INFO: relid: 9384219; relisshared: 0\n>[2005-03-25 09:10:33 EST] INFO: reltuples: 113082.000000; relpages:\n>6624\n>[2005-03-25 09:10:33 EST] INFO: curr_analyze_count: 923820;\n>curr_vacuum_count: 662699\n>[2005-03-25 09:10:33 EST] INFO: last_analyze_count: 923820;\n>last_vacuum_count: 662699\n>[2005-03-25 09:10:33 EST] INFO: analyze_threshold: 113582;\n>vacuum_threshold: 227164\n>\n>\n>[2005-03-25 09:16:14 EST] INFO: table name: secom.\"public\".\"file_92\"\n>[2005-03-25 09:16:14 EST] INFO: relid: 9384219; relisshared: 0\n>[2005-03-25 09:16:14 EST] INFO: reltuples: 113082.000000; relpages:\n>6624 <-- Actually has 31k rows\n>[2005-03-25 09:16:14 EST] INFO: curr_analyze_count: 923820;\n>curr_vacuum_count: 662699\n>[2005-03-25 09:16:14 EST] INFO: last_analyze_count: 923820;\n>last_vacuum_count: 662699\n>[2005-03-25 09:16:14 EST] INFO: analyze_threshold: 113582;\n>vacuum_threshold: 227164\n>\n>DETAIL: Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB\n>shared memory.\n>\n>\n>\n>\n>----- Original Message -----\n>From: \"Matthew T. 
O'Connor\" <[email protected]>\n>To: \"Otto Blomqvist\" <[email protected]>;\n><[email protected]>\n>Sent: Thursday, March 24, 2005 3:58 PM\n>Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?\n>\n>\n> \n>\n>>I would rather keep this on list since other people can chime in.\n>>\n>>Otto Blomqvist wrote:\n>>\n>> \n>>\n>>>It does not seem to be a Stats collector problem.\n>>>\n>>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>>> \n>>>\n>>---------+---------+--------------+----------+-------------+-----------+---\n>> \n>>\n>-\n> \n>\n>>>--------+-----------+-----------+-----------\n>>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>>>public | 158176 | 318527 | 158176\n>>>(1 row)\n>>>\n>>>I insert 50000 records\n>>>\n>>>secom=# select createfile_92records(1, 50000); <--- this is a pg\n>>> \n>>>\n>script\n> \n>\n>>>that inserts records 1 threw 50000.\n>>>createfile_92records\n>>>----------------------\n>>> 0\n>>>\n>>>\n>>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>>> \n>>>\n>>---------+---------+--------------+----------+-------------+-----------+---\n>> \n>>\n>-\n> \n>\n>>>--------+-----------+-----------+-----------\n>>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>>>public | 208179 | 318932 | 158377\n>>>(1 row)\n>>>\n>>>reltuples does not change ? Hmm. n_tup_ins looks fine.\n>>>\n>>>\n>>> \n>>>\n>>That is expected, reltuples only gets updated by a vacuum or an analyze.\n>>\n>> \n>>\n>>>This table is basically a queue full of records waiting to get transfered\n>>>over from our 68030 system to the PG database. The records are then moved\n>>>into folders (using a trigger) like file_92_myy depending on what month\n>>> \n>>>\n>the\n> \n>\n>>>record was created on the 68030. During normal operations there should\n>>> \n>>>\n>not\n> \n>\n>>>be more than 10 records at a time in the table, although during the\n>>> \n>>>\n>course\n> \n>\n>>>of a day a normal system will get about 50k records. I create 50000\n>>> \n>>>\n>records\n> \n>\n>>>to simulate incoming traffic, since we don't have much traffic in the\n>>> \n>>>\n>test\n> \n>\n>>>lab.\n>>>\n>>>After a few hours we have\n>>>\n>>>secom=# select count(*) from file_92;\n>>>count\n>>>-------\n>>>42072\n>>>\n>>>So we have sent over approx 8000 Records.\n>>>\n>>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n>>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n>>> \n>>>\n>>---------+---------+--------------+----------+-------------+-----------+---\n>> \n>>\n>-\n> \n>\n>>>--------+-----------+-----------+-----------\n>>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n>>>public | 208218 | 334521 | 166152\n>>>(1 row)\n>>>\n>>>\n>>>n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close. (Each record\n>>>gets updated twice, then moved)\n>>>n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there are also\n>>>minor background traffic going on)\n>>>\n>>>\n>>>I could send over the full vacuum verbose capture as well as the\n>>> \n>>>\n>autovacuum\n> \n>\n>>>capture if that is of interest.\n>>>\n>>> \n>>>\n>>That might be helpful. 
I don't see a stats system problem here, but I\n>>also haven't heard of any autovac problems recently, so this might be\n>>something new.\n>>\n>>Thanks,\n>>\n>>Matthew O'Connor\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>> \n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n> \n>\n\n", "msg_date": "Fri, 25 Mar 2005 14:45:42 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> hmm.... the value in reltuples should be accurate after a vacuum (or \n> vacuum analyze) if it's not it's a vacuum bug or something is going on \n> that isn't understood. If you or pg_autovacuum are running plain \n> analyze commands, that could explain the invalid reltules numbers.\n\n> Was reltuples = 113082 correct right after the vacuum? \n\nAnother thing to check is whether the reltuples (and relpages!) that\nautovacuum is reporting are the same as what's actually in the pg_class\nrow for the relation. I'm wondering if this could be a similar issue\nto the old autovac bug where it wasn't reading the value correctly.\n\nIf they are the same then it seems like it must be a backend issue.\n\nOne thing that is possibly relevant here is that in 8.0 a plain VACUUM\ndoesn't set reltuples to the exactly correct number, but to an\ninterpolated value that reflects our estimate of the \"steady state\"\naverage between vacuums. I wonder if that code is wrong, or if it's\noperating as designed but is confusing autovac.\n\nCan autovac be told to run the vacuums in VERBOSE mode? It would be\nuseful to compare what VERBOSE has to say to the changes in\nreltuples/relpages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 14:55:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ? " }, { "msg_contents": "I wrote:\n> One thing that is possibly relevant here is that in 8.0 a plain VACUUM\n> doesn't set reltuples to the exactly correct number, but to an\n> interpolated value that reflects our estimate of the \"steady state\"\n> average between vacuums. I wonder if that code is wrong, or if it's\n> operating as designed but is confusing autovac.\n\nNow that I think it over, I'm thinking that I must have been suffering\nsevere brain fade the day I wrote lazy_update_relstats() (see\nvacuumlazy.c). The numbers that that routine is averaging are the pre-\nand post-vacuum physical tuple counts. But the difference between them\nconsists of known-dead tuples, and we shouldn't be factoring dead tuples\ninto reltuples. The planner has always considered reltuples to count\nonly live tuples, and I think this is correct on two grounds:\n\n1. The numbers of tuples estimated to be returned by scans certainly\nshouldn't count dead ones.\n\n2. Dead tuples don't have that much influence on scan costs either, at\nleast not once they are marked as known-dead. 
Certainly they shouldn't\nbe charged at full freight.\n\nIt's possible that there'd be some value in adding a column to pg_class\nto record dead tuple count, but given what we have now, the calculation\nin lazy_update_relstats is totally wrong.\n\nThe idea I was trying to capture is that the tuple density is at a\nminimum right after VACUUM, and will increase as free space is filled\nin until the next VACUUM, so that recording the exact tuple count\nunderestimates the number of tuples that will be seen on-the-average.\nBut I'm not sure that idea really holds water. The only way that a\ntable can be at \"steady state\" over a long period is if the number of\nlive tuples remains roughly constant (ie, inserts balance deletes).\nWhat actually increases and decreases over a VACUUM cycle is the density\nof *dead* tuples ... but per the above arguments this isn't something\nwe should adjust reltuples for.\n\nSo I'm thinking lazy_update_relstats should be ripped out and we should\ngo back to recording just the actual stats.\n\nSound reasonable? Or was I right the first time and suffering brain\nfade today?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 15:22:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "lazy_update_relstats considered harmful (was Re: [PERFORM]\n\tpg_autovacuum not having enough suction ?)" }, { "msg_contents": "> Was reltuples = 113082 correct right after the vacuum?\n\nNo, There where about 31000 rows after the vacuum. I'm no expert but tuples\n= rows, right ?\n\nThis is not a \"normal\" table though, in the sence that it is only a\ntemporary holding ground as I explained earlier. I create 50000 records and\nthese get sent over from our custom 68030 system, to tables like\nfile_92_myy, depending on the date of the record. A pl/pgsql script is used\nas a trigger to move the records after they get data from the 68030. Don't\nknow if that is of interest or not. I could post the trigger if you'd like.\n\n\n\"\"Matthew T. O'Connor\"\" <[email protected]> wrote in message\nnews:[email protected]...\n> hmm.... the value in reltuples should be accurate after a vacuum (or\n> vacuum analyze) if it's not it's a vacuum bug or something is going on\n> that isn't understood. If you or pg_autovacuum are running plain\n> analyze commands, that could explain the invalid reltules numbers.\n>\n> Was reltuples = 113082 correct right after the vacuum?\n>\n> Matthew\n>\n>\n> Otto Blomqvist wrote:\n>\n> >It looks like the reltuples-values are screwed up. Even though rows are\n> >constantly being removed from the table the reltuples keep going up. If I\n> >understand correctly that also makes the Vacuum threshold go up and we\nend\n> >up in a vicious circle. 
Right after pg_autovacuum performed a vacuum\nanalyze\n> >on the table it actually had 31000 records, but reltuples reports over\n100k.\n> >I'm not sure if this means anything But i thought i would pass it along.\n> >\n> >PG version 8.0.0, 31MB tarred DB.\n> >\n> >[2005-03-25 09:16:14 EST] INFO: dbname: testing\n> >[2005-03-25 09:16:14 EST] INFO: oid: 9383816\n> >[2005-03-25 09:16:14 EST] INFO: username: (null)\n> >[2005-03-25 09:16:14 EST] INFO: password: (null)\n> >[2005-03-25 09:16:14 EST] INFO: conn is null, (not connected)\n> >[2005-03-25 09:16:14 EST] INFO: default_analyze_threshold: 1000\n> >[2005-03-25 09:16:14 EST] INFO: default_vacuum_threshold: 500\n> >\n> >\n> >[2005-03-25 09:05:12 EST] INFO: table name: secom.\"public\".\"file_92\"\n> >[2005-03-25 09:05:12 EST] INFO: relid: 9384219; relisshared: 0\n> >[2005-03-25 09:05:12 EST] INFO: reltuples: 49185.000000;\nrelpages:\n> >8423\n> >[2005-03-25 09:05:12 EST] INFO: curr_analyze_count: 919274;\n> >curr_vacuum_count: 658176\n> >[2005-03-25 09:05:12 EST] INFO: last_analyze_count: 899272;\n> >last_vacuum_count: 560541\n> >[2005-03-25 09:05:12 EST] INFO: analyze_threshold: 49685;\n> >vacuum_threshold: 100674\n> >\n> >\n> >[2005-03-25 09:10:12 EST] DEBUG: Performing: VACUUM ANALYZE\n> >\"public\".\"file_92\"\n> >[2005-03-25 09:10:33 EST] INFO: table name: secom.\"public\".\"file_92\"\n> >[2005-03-25 09:10:33 EST] INFO: relid: 9384219; relisshared: 0\n> >[2005-03-25 09:10:33 EST] INFO: reltuples: 113082.000000;\nrelpages:\n> >6624\n> >[2005-03-25 09:10:33 EST] INFO: curr_analyze_count: 923820;\n> >curr_vacuum_count: 662699\n> >[2005-03-25 09:10:33 EST] INFO: last_analyze_count: 923820;\n> >last_vacuum_count: 662699\n> >[2005-03-25 09:10:33 EST] INFO: analyze_threshold: 113582;\n> >vacuum_threshold: 227164\n> >\n> >\n> >[2005-03-25 09:16:14 EST] INFO: table name: secom.\"public\".\"file_92\"\n> >[2005-03-25 09:16:14 EST] INFO: relid: 9384219; relisshared: 0\n> >[2005-03-25 09:16:14 EST] INFO: reltuples: 113082.000000;\nrelpages:\n> >6624 <-- Actually has 31k rows\n> >[2005-03-25 09:16:14 EST] INFO: curr_analyze_count: 923820;\n> >curr_vacuum_count: 662699\n> >[2005-03-25 09:16:14 EST] INFO: last_analyze_count: 923820;\n> >last_vacuum_count: 662699\n> >[2005-03-25 09:16:14 EST] INFO: analyze_threshold: 113582;\n> >vacuum_threshold: 227164\n> >\n> >DETAIL: Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB\n> >shared memory.\n> >\n> >\n> >\n> >\n> >----- Original Message -----\n> >From: \"Matthew T. 
O'Connor\" <[email protected]>\n> >To: \"Otto Blomqvist\" <[email protected]>;\n> ><[email protected]>\n> >Sent: Thursday, March 24, 2005 3:58 PM\n> >Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?\n> >\n> >\n> >\n> >\n> >>I would rather keep this on list since other people can chime in.\n> >>\n> >>Otto Blomqvist wrote:\n> >>\n> >>\n> >>\n> >>>It does not seem to be a Stats collector problem.\n> >>>\n> >>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n> >>>\n> >>>\n>\n>>---------+---------+--------------+----------+-------------+-----------+--\n-\n> >>\n> >>\n> >-\n> >\n> >\n> >>>--------+-----------+-----------+-----------\n> >>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >>>public | 158176 | 318527 | 158176\n> >>>(1 row)\n> >>>\n> >>>I insert 50000 records\n> >>>\n> >>>secom=# select createfile_92records(1, 50000); <--- this is a pg\n> >>>\n> >>>\n> >script\n> >\n> >\n> >>>that inserts records 1 threw 50000.\n> >>>createfile_92records\n> >>>----------------------\n> >>> 0\n> >>>\n> >>>\n> >>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n> >>>\n> >>>\n>\n>>---------+---------+--------------+----------+-------------+-----------+--\n-\n> >>\n> >>\n> >-\n> >\n> >\n> >>>--------+-----------+-----------+-----------\n> >>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >>>public | 208179 | 318932 | 158377\n> >>>(1 row)\n> >>>\n> >>>reltuples does not change ? Hmm. n_tup_ins looks fine.\n> >>>\n> >>>\n> >>>\n> >>>\n> >>That is expected, reltuples only gets updated by a vacuum or an analyze.\n> >>\n> >>\n> >>\n> >>>This table is basically a queue full of records waiting to get\ntransfered\n> >>>over from our 68030 system to the PG database. The records are then\nmoved\n> >>>into folders (using a trigger) like file_92_myy depending on what month\n> >>>\n> >>>\n> >the\n> >\n> >\n> >>>record was created on the 68030. During normal operations there should\n> >>>\n> >>>\n> >not\n> >\n> >\n> >>>be more than 10 records at a time in the table, although during the\n> >>>\n> >>>\n> >course\n> >\n> >\n> >>>of a day a normal system will get about 50k records. I create 50000\n> >>>\n> >>>\n> >records\n> >\n> >\n> >>>to simulate incoming traffic, since we don't have much traffic in the\n> >>>\n> >>>\n> >test\n> >\n> >\n> >>>lab.\n> >>>\n> >>>After a few hours we have\n> >>>\n> >>>secom=# select count(*) from file_92;\n> >>>count\n> >>>-------\n> >>>42072\n> >>>\n> >>>So we have sent over approx 8000 Records.\n> >>>\n> >>> oid | relname | relnamespace | relpages | relisshared | reltuples |\n> >>>schemaname | n_tup_ins | n_tup_upd | n_tup_del\n> >>>\n> >>>\n>\n>>---------+---------+--------------+----------+-------------+-----------+--\n-\n> >>\n> >>\n> >-\n> >\n> >\n> >>>--------+-----------+-----------+-----------\n> >>>9384219 | file_92 | 2200 | 8423 | f | 49837 |\n> >>>public | 208218 | 334521 | 166152\n> >>>(1 row)\n> >>>\n> >>>\n> >>>n_tup_upd: 318932 + (50000-42072)*2 = 334788 pretty close. (Each\nrecord\n> >>>gets updated twice, then moved)\n> >>>n_tup_del: 158377 + (50000-42072) = 166305 pretty close. (there are\nalso\n> >>>minor background traffic going on)\n> >>>\n> >>>\n> >>>I could send over the full vacuum verbose capture as well as the\n> >>>\n> >>>\n> >autovacuum\n> >\n> >\n> >>>capture if that is of interest.\n> >>>\n> >>>\n> >>>\n> >>That might be helpful. 
I don't see a stats system problem here, but I\n> >>also haven't heard of any autovac problems recently, so this might be\n> >>something new.\n> >>\n> >>Thanks,\n> >>\n> >>Matthew O'Connor\n> >>\n> >>\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 6: Have you searched our list archives?\n> >>\n> >> http://archives.postgresql.org\n> >>\n> >>\n> >>\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 8: explain analyze is your friend\n> >\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n\n", "msg_date": "Fri, 25 Mar 2005 12:24:51 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "\n> Another thing to check is whether the reltuples (and relpages!) that\n> autovacuum is reporting are the same as what's actually in the pg_class\n> row for the relation. I'm wondering if this could be a similar issue\n> to the old autovac bug where it wasn't reading the value correctly.\n\nThese values where extracted at roughly the same time.\n\n relname | relnamespace | reltype | relowner | relam | relfilenode |\nreltablespace | relpages | reltuples | reltoastrelid | reltoastidxid |\nrelhasindex | relisshared | relkind | relnatts | relchecks | reltriggers |\nrelukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules |\nrelhassubclass | relacl\n---------+--------------+---------+----------+-------+-------------+--------\n-------+----------+-----------+---------------+---------------+-------------\n+-------------+---------+----------+-----------+-------------+----------+---\n-------+---------+------------+------------+-------------+----------------+-\n-------\n file_92 | 2200 | 9384220 | 100 | 0 | 9384219 |\n0 | 6624 | 113082 | 0 | 0 | t | f\n| r | 23 | 0 | 1 | 0 | 0 |\n0 | t | f | f | f |\n(1 row)\n\nsecom=# select count(*) from file_92;\n count\n-------\n 17579\n(1 row)\n\n[2005-03-25 12:16:32 EST] INFO: table name: secom.\"public\".\"file_92\"\n[2005-03-25 12:16:32 EST] INFO: relid: 9384219; relisshared: 0\n[2005-03-25 12:16:32 EST] INFO: reltuples: 113082.000000; relpages:\n6624\n[2005-03-25 12:16:32 EST] INFO: curr_analyze_count: 993780;\ncurr_vacuum_count: 732470\n[2005-03-25 12:16:32 EST] INFO: last_analyze_count: 923820;\nlast_vacuum_count: 662699\n[2005-03-25 12:16:32 EST] INFO: analyze_threshold: 113582;\nvacuum_threshold: 227164\n\n\nHope this helps, if there is anything else I can do please let me know.\n\n\n> If they are the same then it seems like it must be a backend issue.\n>\n> One thing that is possibly relevant here is that in 8.0 a plain VACUUM\n> doesn't set reltuples to the exactly correct number, but to an\n> interpolated value that reflects our estimate of the \"steady state\"\n> average between vacuums. I wonder if that code is wrong, or if it's\n> operating as designed but is confusing autovac.\n\n\nThis average steady state value might be hard to interpolete in this case\nsince this is only a temporary holding place for the records ..? Normaly the\ntable has < 10 records in it at the same time. In the lab we create a\n\"lump-traffic\" by sending over 50000 Records. 
It takes about 20 hours to\ntransfer over all of the 50k records.\n\n\n\n\n", "msg_date": "Fri, 25 Mar 2005 12:35:00 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Matthew T. O'Connor\" <[email protected]> writes:\n> \n>\n>>hmm.... the value in reltuples should be accurate after a vacuum (or \n>>vacuum analyze) if it's not it's a vacuum bug or something is going on \n>>that isn't understood. If you or pg_autovacuum are running plain \n>>analyze commands, that could explain the invalid reltules numbers.\n>> \n>>\n>>Was reltuples = 113082 correct right after the vacuum? \n>> \n>>\n>\n>Another thing to check is whether the reltuples (and relpages!) that\n>autovacuum is reporting are the same as what's actually in the pg_class\n>row for the relation. I'm wondering if this could be a similar issue\n>to the old autovac bug where it wasn't reading the value correctly.\n> \n>\n\nI don't think so, as he did some manual selects from pg_class and \npg_stat_all in one of the emails he sent that were showing similar \nnumbers to what autovac was reporting.\n\n>If they are the same then it seems like it must be a backend issue.\n>\n>One thing that is possibly relevant here is that in 8.0 a plain VACUUM\n>doesn't set reltuples to the exactly correct number, but to an\n>interpolated value that reflects our estimate of the \"steady state\"\n>average between vacuums. I wonder if that code is wrong, or if it's\n>operating as designed but is confusing autovac.\n> \n>\n\nAhh.... Now that you mention it, I do remember the discussion during \n8.0 development. This sounds very much like the cause of the problem. \nAutovac is not vacuuming often enough for this table because reltuples \nis telling autovac that there are alot more tuples in this table than \nthere really are. \n\nReally this is just another case of the more general problem with \nautovac as it stands now. That is, you can't set vacuum thresholds on a \nper table basis, and databases like this can't survive with a one size \nfits all threshold. I would suggest that Otto perform regular cron \nbased vacuums of this one table in addition to autovac, that is what \nseveral people I have heard from in the field are doing.\n\nCome hell or high water I'm gonna get autovac integrated into 8.1, at \nwhich point per table thresholds would be easy todo.\n\n>Can autovac be told to run the vacuums in VERBOSE mode? It would be\n>useful to compare what VERBOSE has to say to the changes in\n>reltuples/relpages.\n>\nNot as it stands now. That would be an interesting feature for \ndebugging purposes though.\n\n", "msg_date": "Fri, 25 Mar 2005 16:04:25 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "Tom Lane wrote:\n\n>I wrote:\n> \n>\n>>One thing that is possibly relevant here is that in 8.0 a plain VACUUM\n>>doesn't set reltuples to the exactly correct number, but to an\n>>interpolated value that reflects our estimate of the \"steady state\"\n>>average between vacuums. I wonder if that code is wrong, or if it's\n>>operating as designed but is confusing autovac.\n>> \n>>\n>\n>Now that I think it over, I'm thinking that I must have been suffering\n>severe brain fade the day I wrote lazy_update_relstats() (see\n>vacuumlazy.c). 
The numbers that that routine is averaging are the pre-\n>and post-vacuum physical tuple counts. But the difference between them\n>consists of known-dead tuples, and we shouldn't be factoring dead tuples\n>into reltuples. The planner has always considered reltuples to count\n>only live tuples, and I think this is correct on two grounds:\n>\n>1. The numbers of tuples estimated to be returned by scans certainly\n>shouldn't count dead ones.\n>\n>2. Dead tuples don't have that much influence on scan costs either, at\n>least not once they are marked as known-dead. Certainly they shouldn't\n>be charged at full freight.\n>\n>It's possible that there'd be some value in adding a column to pg_class\n>to record dead tuple count, but given what we have now, the calculation\n>in lazy_update_relstats is totally wrong.\n>\n>The idea I was trying to capture is that the tuple density is at a\n>minimum right after VACUUM, and will increase as free space is filled\n>in until the next VACUUM, so that recording the exact tuple count\n>underestimates the number of tuples that will be seen on-the-average.\n>But I'm not sure that idea really holds water. The only way that a\n>table can be at \"steady state\" over a long period is if the number of\n>live tuples remains roughly constant (ie, inserts balance deletes).\n>What actually increases and decreases over a VACUUM cycle is the density\n>of *dead* tuples ... but per the above arguments this isn't something\n>we should adjust reltuples for.\n>\n>So I'm thinking lazy_update_relstats should be ripped out and we should\n>go back to recording just the actual stats.\n>\n>Sound reasonable? Or was I right the first time and suffering brain\n>fade today?\n>\n\n", "msg_date": "Fri, 25 Mar 2005 16:12:27 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lazy_update_relstats considered harmful (was Re: [PERFORM]" }, { "msg_contents": "On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:\n> 2. Dead tuples don't have that much influence on scan costs either, at\n> least not once they are marked as known-dead. Certainly they shouldn't\n> be charged at full freight.\n\nYes, minor additional CPU time, but the main issue is when the dead\ntuples force additional I/O.\n\n> It's possible that there'd be some value in adding a column to pg_class\n> to record dead tuple count, but given what we have now, the calculation\n> in lazy_update_relstats is totally wrong.\n\nYes, thats the way. We can record the (averaged?) dead tuple count, but\nalso record the actual row count in reltuples.\n\nWe definitely need to record the physical and logical tuple counts,\nsince each of them have different contributions to run-times.\n\nFor comparing seq scan v index, we need to look at the physical tuples\ncount * avg row size, whereas when we calculate number of rows returned\nwe should look at fractions of the logical row count.\n\n> The idea I was trying to capture is that the tuple density is at a\n> minimum right after VACUUM, and will increase as free space is filled\n> in until the next VACUUM, so that recording the exact tuple count\n> underestimates the number of tuples that will be seen on-the-average.\n> But I'm not sure that idea really holds water. The only way that a\n> table can be at \"steady state\" over a long period is if the number of\n> live tuples remains roughly constant (ie, inserts balance deletes).\n> What actually increases and decreases over a VACUUM cycle is the density\n> of *dead* tuples ... 
but per the above arguments this isn't something\n> we should adjust reltuples for.\n> \n> So I'm thinking lazy_update_relstats should be ripped out and we should\n> go back to recording just the actual stats.\n> \n> Sound reasonable? Or was I right the first time and suffering brain\n> fade today?\n\nWell, I think the original idea had some validity, but clearly\nlazy_update_relstats isn't the way to do it even though we thought so at\nthe time.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Fri, 25 Mar 2005 22:20:23 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lazy_update_relstats considered harmful (was Re: [PERFORM]" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:\n>> 2. Dead tuples don't have that much influence on scan costs either, at\n>> least not once they are marked as known-dead. Certainly they shouldn't\n>> be charged at full freight.\n\n> Yes, minor additional CPU time, but the main issue is when the dead\n> tuples force additional I/O.\n\nI/O costs are mostly estimated off relpages, though, not reltuples.\nThe only time you really pay through the nose for a dead tuple is when\nan indexscan visits it, but with the known-dead marking we now do in\nbtree indexes, I'm pretty sure that path is seldom taken.\n\n>> It's possible that there'd be some value in adding a column to pg_class\n>> to record dead tuple count, but given what we have now, the calculation\n>> in lazy_update_relstats is totally wrong.\n\n> Yes, thats the way. We can record the (averaged?) dead tuple count, but\n> also record the actual row count in reltuples.\n\nWhat I'd be inclined to record is the actual number of dead rows removed\nby the most recent VACUUM. Any math on that is best done in the\nplanner, since we can change the logic more easily than the database\ncontents. It'd probably be reasonable to take half of that number as\nthe estimate of the average number of dead tuples.\n\nBut in any case, that's for the future; we can't have it in 8.0.*, and\nright at the moment I'm focusing on what to push out for 8.0.2.\n\n> We definitely need to record the physical and logical tuple counts,\n> since each of them have different contributions to run-times.\n\nThere isn't any difference, if you are talking about fully dead tuples.\nIt would be possible for VACUUM to also count the number of\nnot-committed-but-not-removable tuples (ie, new from still-open\ntransactions, plus dead-but-still-visible-to-somebody), but I'm not sure\nthat it would be useful to do so, because that sort of count is hugely\ntransient. The stat would be irrelevant moments after it was taken.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 17:35:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lazy_update_relstats considered harmful (was Re: [PERFORM]\n\tpg_autovacuum not having enough suction ?)" }, { "msg_contents": "> Otto Blomqvist wrote:\n>> This table is basically a queue full of records waiting to get transfered\n>> over from our 68030 system to the PG database. The records are then moved\n>> into folders (using a trigger) like file_92_myy depending on what month the\n>> record was created on the 68030. During normal operations there should not\n>> be more than 10 records at a time in the table, although during the course\n>> of a day a normal system will get about 50k records. 
I create 50000 records\n>> to simulate incoming traffic, since we don't have much traffic in the test\n>> lab.\n\nReally the right way to do housekeeping for a table like that is to\nVACUUM FULL (or better yet, TRUNCATE, if possible) immediately after\ndiscarding a batch of records. The VACUUM FULL will take very little\ntime if it only has to repack <10 records. Plain VACUUM is likely to\nleave the table nearly empty but physically sizable, which is bad news\nfrom a statistical point of view: as the table fills up again, it won't\nget physically larger, thereby giving the planner no clue that it\ndoesn't still have <10 records. This means the queries that process\nthe 50K-record patch are going to get horrible plans :-(\n\nI'm not sure if autovacuum could be taught to do that --- it could\nperhaps launch a vacuum as soon as it notices a large fraction of the\ntable got deleted, but do we really want to authorize it to launch\nVACUUM FULL? It'd be better to issue the vacuum synchronously\nas part of the batch updating script, I feel.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 18:06:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ? " }, { "msg_contents": "Tom Lane wrote:\n> > Otto Blomqvist wrote:\n> >> This table is basically a queue full of records waiting to get transfered\n> >> over from our 68030 system to the PG database. The records are then moved\n> >> into folders (using a trigger) like file_92_myy depending on what month the\n> >> record was created on the 68030. During normal operations there should not\n> >> be more than 10 records at a time in the table, although during the course\n> >> of a day a normal system will get about 50k records. I create 50000 records\n> >> to simulate incoming traffic, since we don't have much traffic in the test\n> >> lab.\n> \n> Really the right way to do housekeeping for a table like that is to\n> VACUUM FULL (or better yet, TRUNCATE, if possible) immediately after\n> discarding a batch of records. The VACUUM FULL will take very little\n> time if it only has to repack <10 records. Plain VACUUM is likely to\n> leave the table nearly empty but physically sizable, which is bad news\n> from a statistical point of view: as the table fills up again, it won't\n> get physically larger, thereby giving the planner no clue that it\n> doesn't still have <10 records. This means the queries that process\n> the 50K-record patch are going to get horrible plans :-(\n> \n> I'm not sure if autovacuum could be taught to do that --- it could\n> perhaps launch a vacuum as soon as it notices a large fraction of the\n> table got deleted, but do we really want to authorize it to launch\n> VACUUM FULL? It'd be better to issue the vacuum synchronously\n> as part of the batch updating script, I feel.\n\nI added this to the TODO section for autovacuum:\n\n o Do VACUUM FULL if table is nearly empty?\n\nI don't think autovacuum is every going to be smart enough to recycle\nduring the delete, especially since the rows can't be reused until the\ntransaction completes.\n\nOne problem with VACUUM FULL would be autovacuum waiting for an\nexclusive lock on the table. Anyway, it is documented now as a possible\nissue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 25 Mar 2005 18:13:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "\n> > I'm not sure if autovacuum could be taught to do that --- it could\n> > perhaps launch a vacuum as soon as it notices a large fraction of the\n> > table got deleted, but do we really want to authorize it to launch\n> > VACUUM FULL? It'd be better to issue the vacuum synchronously\n> > as part of the batch updating script, I feel.\n> \n> I added this to the TODO section for autovacuum:\n> \n> o Do VACUUM FULL if table is nearly empty?\n\nWe should never automatically launch a vacuum full. That seems like a\nreally bad idea.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> I don't think autovacuum is every going to be smart enough to recycle\n> during the delete, especially since the rows can't be reused until the\n> transaction completes.\n> \n> One problem with VACUUM FULL would be autovacuum waiting for an\n> exclusive lock on the table. Anyway, it is documented now as a possible\n> issue.\n> \n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n", "msg_date": "Fri, 25 Mar 2005 15:17:06 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm not sure if autovacuum could be taught to do that --- it could\n>> perhaps launch a vacuum as soon as it notices a large fraction of the\n>> table got deleted, but do we really want to authorize it to launch\n>> VACUUM FULL?\n\n> One problem with VACUUM FULL would be autovacuum waiting for an\n> exclusive lock on the table. Anyway, it is documented now as a possible\n> issue.\n\nI don't care too much about autovacuum waiting awhile to get a lock.\nI do care about other processes getting queued up behind it, though.\n\nPerhaps it would be possible to alter the normal lock queuing semantics\nfor this case, so that autovacuum's request doesn't block later\narrivals, and it can only get the lock when no one is interested in the\ntable. Of course, that might never happen, or by the time it does\nthere's no point in VACUUM FULL anymore :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2005 18:18:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I'm not sure if autovacuum could be taught to do that --- it could\n> >> perhaps launch a vacuum as soon as it notices a large fraction of the\n> >> table got deleted, but do we really want to authorize it to launch\n> >> VACUUM FULL?\n> \n> > One problem with VACUUM FULL would be autovacuum waiting for an\n> > exclusive lock on the table. 
Anyway, it is documented now as a possible\n> > issue.\n> \n> I don't care too much about autovacuum waiting awhile to get a lock.\n> I do care about other processes getting queued up behind it, though.\n> \n> Perhaps it would be possible to alter the normal lock queuing semantics\n> for this case, so that autovacuum's request doesn't block later\n> arrivals, and it can only get the lock when no one is interested in the\n> table. Of course, that might never happen, or by the time it does\n> there's no point in VACUUM FULL anymore :-(\n\nCan we issue a LOCK TABLE with a statement_timeout, and only do the\nVACUUM FULL if we can get a lock quickly? That seems like a plan.\n\nThe only problem is that you can't VACUUM FULL in a transaction:\n\n\ttest=> create table test (x int);\n\tCREATE TABLE\n\ttest=> insert into test values (1);\n\tINSERT 0 1\n\ttest=> begin;\n\tBEGIN\n\ttest=> lock table test;\n\tLOCK TABLE\n\ttest=> vacuum full;\n\tERROR: VACUUM cannot run inside a transaction block\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 25 Mar 2005 18:21:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "ok, Thanks a lot for your time guys ! I guess my table is pretty unusual\nand thats why this problem has not surfaced until now. Better late then\nnever ;) I'll cron a \"manual\" vacuum full on the table.\n\n\n\n\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> > Otto Blomqvist wrote:\n> >> This table is basically a queue full of records waiting to get\ntransfered\n> >> over from our 68030 system to the PG database. The records are then\nmoved\n> >> into folders (using a trigger) like file_92_myy depending on what month\nthe\n> >> record was created on the 68030. During normal operations there should\nnot\n> >> be more than 10 records at a time in the table, although during the\ncourse\n> >> of a day a normal system will get about 50k records. I create 50000\nrecords\n> >> to simulate incoming traffic, since we don't have much traffic in the\ntest\n> >> lab.\n>\n> Really the right way to do housekeeping for a table like that is to\n> VACUUM FULL (or better yet, TRUNCATE, if possible) immediately after\n> discarding a batch of records. The VACUUM FULL will take very little\n> time if it only has to repack <10 records. Plain VACUUM is likely to\n> leave the table nearly empty but physically sizable, which is bad news\n> from a statistical point of view: as the table fills up again, it won't\n> get physically larger, thereby giving the planner no clue that it\n> doesn't still have <10 records. This means the queries that process\n> the 50K-record patch are going to get horrible plans :-(\n>\n> I'm not sure if autovacuum could be taught to do that --- it could\n> perhaps launch a vacuum as soon as it notices a large fraction of the\n> table got deleted, but do we really want to authorize it to launch\n> VACUUM FULL? 
It'd be better to issue the vacuum synchronously\n> as part of the batch updating script, I feel.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Fri, 25 Mar 2005 15:34:00 -0800", "msg_from": "\"Otto Blomqvist\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_autovacuum not having enough suction ?" }, { "msg_contents": "On Fri, Mar 25, 2005 at 06:21:24PM -0500, Bruce Momjian wrote:\n> \n> Can we issue a LOCK TABLE with a statement_timeout, and only do the\n> VACUUM FULL if we can get a lock quickly? That seems like a plan.\n\nI think someone else's remark in this thread is important, though:\nautovacuum shouldn't ever block other transactions, and this approach\nwill definitely run that risk.\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n", "msg_date": "Thu, 31 Mar 2005 17:13:35 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autovacuum not having enough suction ?" } ]