[ { "msg_contents": "Is this advisable? The disks are rather fast (15k iirc) but somehow I \ndon't think they are covered in whatever magic fairy dust it would \nrequire for a sequential read to be as fast as a random one. However \nI could be wrong, are there any circumstances when this is actually \ngoing to help performance?\n", "msg_date": "Thu, 9 Jun 2005 17:37:25 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "random_page_cost = 1?" }, { "msg_contents": "Alex Stapleton <[email protected]> writes:\n> Is this advisable?\n\nOnly if your database is small enough that you expect it to remain fully\ncached in RAM. In that case random_page_cost = 1 does in fact describe\nthe performance you expect Postgres to see.\n\nPeople occasionally use values for random_page_cost that are much\nsmaller than physical reality would suggest, but I think this is mainly\na workaround for deficiencies elsewhere in the planner cost models.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Jun 2005 12:55:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost = 1? " } ]
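Tom's point is easy to check per session, without touching postgresql.conf. A minimal sketch, in which some_table and some_indexed_col are placeholder names rather than anything from this thread:

SHOW random_page_cost;       -- 4 is the shipped default
SET random_page_cost = 1;    -- tell the planner a random page read costs the same as a sequential one
EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_indexed_col = 42;
RESET random_page_cost;      -- back to the configured value for the rest of the session

Running the EXPLAIN ANALYZE once before and once after the SET shows whether the lower estimate actually flips the plan from a sequential scan to an index scan, which is the only case where the change is doing anything useful.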
[ { "msg_contents": "Hi All,\n\nNot sure if this is correct fix or not, but a bit of research :\nhttp://archives.postgresql.org/pgsql-hackers/2001-04/msg01129.php\nAnd offical doco's from postgres :\nhttp://www.postgresql.org/docs/7.4/static/wal-configuration.html\nLead me to try :\nwal_sync_method = open_sync\nAnd this has increased the speed on my Redhat 8 servers my 20X !\n\nSteve\n-----Original Message-----\nFrom: Steve Pollard \nSent: Thursday, 9 June 2005 1:27 PM\nTo: Steve Pollard; [email protected]\nSubject: RE: [PERFORM] Importing from pg_dump slow, low Disk IO\n\nAs a follow up to this ive installed on another test Rehat 8 machine\nwith\n7.3.4 and slow inserts are present, however on another machine with ES3\nthe same 15,000 inserts is about 20 times faster, anyone know of a\nchange that would effect this, kernel or rehat release ?\n\nSteve\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Steve\nPollard\nSent: Wednesday, 8 June 2005 6:39 PM\nTo: [email protected]\nSubject: [PERFORM] Importing from pg_dump slow, low Disk IO\n\n\nHi Everyone,\n\nIm having a performance issue with version 7.3.4 which i first thought\nwas Disk IO related, however now it seems like the problem is caused by\nreally slow commits, this is running on Redhat 8.\n\nBasically im taking a .sql file with insert of about 15,000 lines and\n<'ing straight into psql DATABASENAME, the Disk writes never gets over\nabout 2000 on this machine with a RAID5 SCSI setup, this happens in my\nPROD and DEV environment.\n\nIve installed the latest version on RedHat ES3 and copied the configs\nacross however the inserts are really really fast..\n\nWas there a performce change from 7.3.4 to current to turn of\nautocommits by default or is buffering handled differently ?\n\nI have ruled out Disk IO issues as a siple 'cp' exceeds Disk writes to\n60000 (using vmstat)\n\nIf i do this with a BEGIN; and COMMIT; its really fast, however not\npractical as im setting up a cold-standby server for automation.\n\nHave been trying to debug for a few days now and see nothing.. here is\nsome info :\n\n::::::::::::::\n/proc/sys/kernel/shmall\n::::::::::::::\n2097152\n::::::::::::::\n/proc/sys/kernel/shmmax\n::::::::::::::\n134217728\n::::::::::::::\n/proc/sys/kernel/shmmni\n::::::::::::::\n4096\n\n\nshared_buffers = 51200\nmax_fsm_relations = 1000\nmax_fsm_pages = 10000\nmax_locks_per_transaction = 64\nwal_buffers = 64\neffective_cache_size = 65536\n\nMemTotal: 1547608 kB\nMemFree: 47076 kB\nMemShared: 0 kB\nBuffers: 134084 kB\nCached: 1186596 kB\nSwapCached: 544 kB\nActive: 357048 kB\nActiveAnon: 105832 kB\nActiveCache: 251216 kB\nInact_dirty: 321020 kB\nInact_laundry: 719492 kB\nInact_clean: 28956 kB\nInact_target: 285300 kB\nHighTotal: 655336 kB\nHighFree: 1024 kB\nLowTotal: 892272 kB\nLowFree: 46052 kB\nSwapTotal: 1534056 kB\nSwapFree: 1526460 kB\n\nThis is a real doosey for me, please provide any advise possible.\n\nSteve\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Fri, 10 Jun 2005 15:33:17 +0930", "msg_from": "\"Steve Pollard\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Importing from pg_dump slow, low Disk IO" } ]
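Since Steve's own test showed the same 15,000 INSERTs run quickly once wrapped in BEGIN/COMMIT, the dump file can be loaded as a single transaction from psql without editing the file itself, and independently of any wal_sync_method change. A sketch, where dumpfile.sql is a placeholder path and the file is assumed to contain only the INSERT statements:

BEGIN;
\i dumpfile.sql    -- psql runs every statement from the file inside the open transaction
COMMIT;

The trade-off is that a single failed statement aborts the transaction and rolls back the whole load, so this suits clean, known-good dump files best.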
[ { "msg_contents": "Hi,\n\ni'm trying this too :). My Dump (IN) is about 84 minutes. Now\ni'm testing how much time takes it with open_sync :). I'm \nanxious about the new results :).\n\nbest regards,\n\npingufreak\n\n\nAm Freitag, den 10.06.2005, 15:33 +0930 schrieb Steve Pollard:\n> Hi All,\n> \n> Not sure if this is correct fix or not, but a bit of research :\n> http://archives.postgresql.org/pgsql-hackers/2001-04/msg01129.php\n> And offical doco's from postgres :\n> http://www.postgresql.org/docs/7.4/static/wal-configuration.html\n> Lead me to try :\n> wal_sync_method = open_sync\n> And this has increased the speed on my Redhat 8 servers my 20X !\n> \n> Steve\n> -----Original Message-----\n> From: Steve Pollard \n> Sent: Thursday, 9 June 2005 1:27 PM\n> To: Steve Pollard; [email protected]\n> Subject: RE: [PERFORM] Importing from pg_dump slow, low Disk IO\n> \n> As a follow up to this ive installed on another test Rehat 8 machine\n> with\n> 7.3.4 and slow inserts are present, however on another machine with ES3\n> the same 15,000 inserts is about 20 times faster, anyone know of a\n> change that would effect this, kernel or rehat release ?\n> \n> Steve\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Steve\n> Pollard\n> Sent: Wednesday, 8 June 2005 6:39 PM\n> To: [email protected]\n> Subject: [PERFORM] Importing from pg_dump slow, low Disk IO\n> \n> \n> Hi Everyone,\n> \n> Im having a performance issue with version 7.3.4 which i first thought\n> was Disk IO related, however now it seems like the problem is caused by\n> really slow commits, this is running on Redhat 8.\n> \n> Basically im taking a .sql file with insert of about 15,000 lines and\n> <'ing straight into psql DATABASENAME, the Disk writes never gets over\n> about 2000 on this machine with a RAID5 SCSI setup, this happens in my\n> PROD and DEV environment.\n> \n> Ive installed the latest version on RedHat ES3 and copied the configs\n> across however the inserts are really really fast..\n> \n> Was there a performce change from 7.3.4 to current to turn of\n> autocommits by default or is buffering handled differently ?\n> \n> I have ruled out Disk IO issues as a siple 'cp' exceeds Disk writes to\n> 60000 (using vmstat)\n> \n> If i do this with a BEGIN; and COMMIT; its really fast, however not\n> practical as im setting up a cold-standby server for automation.\n> \n> Have been trying to debug for a few days now and see nothing.. 
here is\n> some info :\n> \n> ::::::::::::::\n> /proc/sys/kernel/shmall\n> ::::::::::::::\n> 2097152\n> ::::::::::::::\n> /proc/sys/kernel/shmmax\n> ::::::::::::::\n> 134217728\n> ::::::::::::::\n> /proc/sys/kernel/shmmni\n> ::::::::::::::\n> 4096\n> \n> \n> shared_buffers = 51200\n> max_fsm_relations = 1000\n> max_fsm_pages = 10000\n> max_locks_per_transaction = 64\n> wal_buffers = 64\n> effective_cache_size = 65536\n> \n> MemTotal: 1547608 kB\n> MemFree: 47076 kB\n> MemShared: 0 kB\n> Buffers: 134084 kB\n> Cached: 1186596 kB\n> SwapCached: 544 kB\n> Active: 357048 kB\n> ActiveAnon: 105832 kB\n> ActiveCache: 251216 kB\n> Inact_dirty: 321020 kB\n> Inact_laundry: 719492 kB\n> Inact_clean: 28956 kB\n> Inact_target: 285300 kB\n> HighTotal: 655336 kB\n> HighFree: 1024 kB\n> LowTotal: 892272 kB\n> LowFree: 46052 kB\n> SwapTotal: 1534056 kB\n> SwapFree: 1526460 kB\n> \n> This is a real doosey for me, please provide any advise possible.\n> \n> Steve\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Fri, 10 Jun 2005 08:46:16 +0200", "msg_from": "\"Martin Fandel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Importing from pg_dump slow, low Disk IO" }, { "msg_contents": "Hmmm. In my configuration there are not much more performance:\n\nThe Dump-size is 6-7GB on a PIV-3Ghz, 2GB-RAM, 4x10k disks on raid 10\nfor the db and 2x10k disks raid 1 for the system and the wal-logs.\n\nopen_sync:\nreal 79m1.980s\nuser 25m25.285s\nsys 1m20.112s\n\nfsync:\nreal 75m23.792s\nuser 27m3.693s\nsys 1m26.538s\n\nbest regards,\nmartin\n\nAm Freitag, den 10.06.2005, 15:33 +0930 schrieb Steve Pollard:\n> Hi All,\n> \n> Not sure if this is correct fix or not, but a bit of research :\n> http://archives.postgresql.org/pgsql-hackers/2001-04/msg01129.php\n> And offical doco's from postgres :\n> http://www.postgresql.org/docs/7.4/static/wal-configuration.html\n> Lead me to try :\n> wal_sync_method = open_sync\n> And this has increased the speed on my Redhat 8 servers my 20X !\n> \n> Steve\n> -----Original Message-----\n> From: Steve Pollard \n> Sent: Thursday, 9 June 2005 1:27 PM\n> To: Steve Pollard; [email protected]\n> Subject: RE: [PERFORM] Importing from pg_dump slow, low Disk IO\n> \n> As a follow up to this ive installed on another test Rehat 8 machine\n> with\n> 7.3.4 and slow inserts are present, however on another machine with ES3\n> the same 15,000 inserts is about 20 times faster, anyone know of a\n> change that would effect this, kernel or rehat release ?\n> \n> Steve\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Steve\n> Pollard\n> Sent: Wednesday, 8 June 2005 6:39 PM\n> To: [email protected]\n> Subject: [PERFORM] Importing from pg_dump slow, low Disk IO\n> \n> \n> Hi Everyone,\n> \n> Im having a performance issue with version 7.3.4 which i first thought\n> was Disk IO related, however now it seems like the problem is caused by\n> really slow commits, this is running on Redhat 8.\n> \n> Basically im taking a .sql file with insert of about 15,000 lines and\n> <'ing straight into psql DATABASENAME, the Disk writes never gets over\n> about 2000 on this machine with a RAID5 SCSI setup, this 
happens in my\n> PROD and DEV environment.\n> \n> Ive installed the latest version on RedHat ES3 and copied the configs\n> across however the inserts are really really fast..\n> \n> Was there a performce change from 7.3.4 to current to turn of\n> autocommits by default or is buffering handled differently ?\n> \n> I have ruled out Disk IO issues as a siple 'cp' exceeds Disk writes to\n> 60000 (using vmstat)\n> \n> If i do this with a BEGIN; and COMMIT; its really fast, however not\n> practical as im setting up a cold-standby server for automation.\n> \n> Have been trying to debug for a few days now and see nothing.. here is\n> some info :\n> \n> ::::::::::::::\n> /proc/sys/kernel/shmall\n> ::::::::::::::\n> 2097152\n> ::::::::::::::\n> /proc/sys/kernel/shmmax\n> ::::::::::::::\n> 134217728\n> ::::::::::::::\n> /proc/sys/kernel/shmmni\n> ::::::::::::::\n> 4096\n> \n> \n> shared_buffers = 51200\n> max_fsm_relations = 1000\n> max_fsm_pages = 10000\n> max_locks_per_transaction = 64\n> wal_buffers = 64\n> effective_cache_size = 65536\n> \n> MemTotal: 1547608 kB\n> MemFree: 47076 kB\n> MemShared: 0 kB\n> Buffers: 134084 kB\n> Cached: 1186596 kB\n> SwapCached: 544 kB\n> Active: 357048 kB\n> ActiveAnon: 105832 kB\n> ActiveCache: 251216 kB\n> Inact_dirty: 321020 kB\n> Inact_laundry: 719492 kB\n> Inact_clean: 28956 kB\n> Inact_target: 285300 kB\n> HighTotal: 655336 kB\n> HighFree: 1024 kB\n> LowTotal: 892272 kB\n> LowFree: 46052 kB\n> SwapTotal: 1534056 kB\n> SwapFree: 1526460 kB\n> \n> This is a real doosey for me, please provide any advise possible.\n> \n> Steve\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Fri, 10 Jun 2005 12:14:36 +0200", "msg_from": "\"Martin Fandel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Importing from pg_dump slow, low Disk IO" } ]
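One detail worth checking when comparing results like Steve's and Martin's: wal_sync_method normally cannot be changed with a plain SET in a session on these releases; it has to go into postgresql.conf, and the postmaster has to be reloaded or restarted before it takes effect. The values actually in force can then be confirmed from any session:

SHOW wal_sync_method;   -- should report open_sync once the new configuration is active
SHOW fsync;             -- worth confirming this is still on; disabling it is a different, riskier trade-off

If SHOW still reports the old method, the benchmark is comparing identical settings and the timings will naturally come out the same.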
[ { "msg_contents": "I managed, by extensive usage of temporary tables, to totally bloat \npg_attribute. It currently has about 40000 pages with just 3000 tuples. \nThe question is, how to restore it to it's former beauty? With ordinary \ntable I'd just CLUSTER it, but alas! I cannot do that with system \ncatalog. I always get:\n\ndb=# cluster pg_attribute_relid_attnam_index on pg_attribute;\nERROR: \"pg_attribute\" is a system catalog\n\nThe only thing I could think of is VACUUM FULL, but from my former \nexperience I guess it'll take maybe over an hour, effectively rendering \nthe server unusable, because of the exclusive lock. It is a live 24/7 \nsystem, so I'd really prefer something less drastic than dumping and \nreloading the database (though it's still shorter downtime than with the \nvacuum.)\n\nIsn't there a way to somehow go around the above mentioned limitation \nand CLUSTER the table?\n\nThanks for your ideas.\n\n-- \nMichal Tďż˝borskďż˝\nCTO, Internet Mall, a.s.\n\nInternet Mall - obchody, kterďż˝ si oblďż˝bďż˝te\n<http://www.MALL.cz>\n", "msg_date": "Fri, 10 Jun 2005 11:05:45 +0200", "msg_from": "Michal Taborsky <[email protected]>", "msg_from_op": true, "msg_subject": "Cleaning bloated pg_attribute" }, { "msg_contents": "Michal Taborsky wrote:\n> I managed, by extensive usage of temporary tables, to totally bloat \n> pg_attribute. It currently has about 40000 pages with just 3000 tuples. \n\n> The only thing I could think of is VACUUM FULL, but from my former \n> experience I guess it'll take maybe over an hour, effectively rendering \n> the server unusable, because of the exclusive lock.\n\nYou can vacuum full a single table - shouldn't take an hour for just the \none table. Unless your disk I/O is *constantly* running flat-out.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 10 Jun 2005 11:07:17 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning bloated pg_attribute" } ]
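A sketch of Richard's suggestion, scoped to the single catalog. VACUUM FULL still takes an exclusive lock on pg_attribute while it runs, so sessions that need to read the catalog can stall for the duration, but it avoids locking every table the way a database-wide VACUUM FULL would. Whether the follow-up REINDEX of a system catalog is allowed in a normal session depends on the server version, so treat that line as optional:

VACUUM FULL VERBOSE pg_attribute;   -- compacts the ~40000 pages down to what the ~3000 tuples need
REINDEX TABLE pg_attribute;         -- old-style VACUUM FULL tends to leave the catalog's indexes bloated

Running it in a quiet period keeps the stall window short, and the VERBOSE output shows how many pages were actually reclaimed.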
[ { "msg_contents": "Richard, \n\nthanks for info. \n\n\"...the RH supplied Postgres binary has issues...\"\n\nWould you have the time to provide a bit more info?\nVersion of PG? Nature of issues? Methods that resolved?\n\nThanks again, \n\n-- Ross\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Richard Rowell\nSent: Friday, June 10, 2005 8:34 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Whence the Opterons?\n\n\nI will second the nod to Penguin computing. We have a bit of Penguin hardware here (though the majority is Dell). We did have issues with one machine a couple of years ago, but Penguin was very pro-active in addressing that.\n\nWe recently picked up a Dual Opteron system from them and have been very pleased with it so far. \n\nI would be careful of the RHES that it ships with though. We had machine lockups immediately after the suggested kernel update (had to down grade manually). Also, the RH supplied Postgres binary has issues, so you would need to compile Postgres yourself until the next RH update.\n\nOn Fri, 2005-05-06 at 14:39 -0700, Mischa Sandberg wrote:\n> After reading the comparisons between Opteron and Xeon processors for \n> Linux, I'd like to add an Opteron box to our stable of Dells and \n> Sparcs, for comparison.\n> \n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay. \n> Anyone care to suggest any other vendors/distributors? Looking for \n> names with national support, so that we can recommend as much to our \n> customers.\n> \n> Many thanks in advance.\n-- \n--\nRichard Rowell\[email protected]\nBowman Systems\n(318) 213-8780\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Fri, 10 Jun 2005 14:59:33 -0000", "msg_from": "\"Mohan, Ross\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Whence the Opterons?" } ]
[ { "msg_contents": "[[email protected] - Fri at 12:10:19PM -0400]\n> tle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM\n> file_info_7;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Seq Scan on file_info_7 (cost=0.00..11028.35 rows=294035 width=118)\n> (actual time=0.122..2707.764 rows=294035 loops=1)\n> Total runtime: 3717.862 ms\n> (2 rows)\n> \n\nAs far as I can see, you are selecting everything from the table without any\nsort order. The only rational thing to do then is a sequential scan, it's\nno point in an index scan.\n\n-- \nTobias Brox, +47-91700050\n\n", "msg_date": "Fri, 10 Jun 2005 18:55:35 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Hi,\n\nAt 18:10 10/06/2005, [email protected] wrote:\n>tle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM\n>file_info_7;\n\nWhat could the index be used for? Unless you have some WHERE or (in some \ncases) ORDER BY clause, there's absolutely no need for an index, since you \nare just asking for all rows from the table...\n\nJacques.\n\n\n", "msg_date": "Fri, 10 Jun 2005 17:57:01 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Hi all,\n\n I have an index on a table that doesn't seem to want to be used. I'm\nhopig someone might be able to help point me in the right direction.\n\nMy index is (typed, not copied):\n\ntle-bu=> \\d file_info_7_display_idx;\n Index \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+----------------------\n file_type | character varying(2)\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM\nfile_info_7;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on file_info_7 (cost=0.00..11028.35 rows=294035 width=118)\n(actual time=0.122..2707.764 rows=294035 loops=1)\n Total runtime: 3717.862 ms\n(2 rows)\n\n Can anyone see what's wrong? Should I post the table schema? Thanks all!\n\nMadison\n", "msg_date": "Fri, 10 Jun 2005 12:10:19 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Index ot being used" }, { "msg_contents": "Tobias Brox wrote:\n> [[email protected] - Fri at 12:10:19PM -0400]\n> \n>>tle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM\n>>file_info_7;\n>> QUERY PLAN\n>>----------------------------------------------------------------------------------------------------------------------\n>> Seq Scan on file_info_7 (cost=0.00..11028.35 rows=294035 width=118)\n>>(actual time=0.122..2707.764 rows=294035 loops=1)\n>> Total runtime: 3717.862 ms\n>>(2 rows)\n>>\n> \n> \n> As far as I can see, you are selecting everything from the table without any\n> sort order. The only rational thing to do then is a sequential scan, it's\n> no point in an index scan.\n> \n\n Thanks for replying, Tobias and Jacques!\n\n Doh! This is a case of over simplification, I think. I was trying to \nsimplify my query as much as I could and then work it out to the actual \nquery I want. It would seem I don't understand how to use indexes quite \nright. 
Do you think you might be able to help me with a useful index?\n\n Here is the 'file_info_7' schema, my query and the 'explain analyze' \nresults:\n\ntle-bu=> \\d file_info_7\n Table \"public.file_info_7\"\n Column | Type | Modifiers\n----------------------+----------------------+-----------------------------------------\n file_group_name | text |\n file_group_uid | bigint | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | text | not null\n file_size | bigint | not null\n file_type | character varying(2) | not null default \n'f'::character varying\n file_user_name | text |\n file_user_uid | bigint | not null\n file_backup | boolean | not null default true\n file_display | boolean | not null default false\n file_restore_display | boolean | not null default false\n file_restore | boolean | not null default false\nIndexes:\n \"file_info_7_display_idx\" btree (file_type, file_parent_dir, file_name)\n\n Here is my full query:\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \nFROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \nfile_name ASC;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=14541.24..14603.48 rows=24895 width=118) (actual \ntime=15751.804..15967.591 rows=25795 loops=1)\n Sort Key: file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11763.44 rows=24895 \nwidth=118) (actual time=19.289..3840.845 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 16043.075 ms\n(5 rows)\n\n This is my index (which I guess is wrong):\n\ntle-bu=> \\d file_info_7_display_idx\n Index \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+----------------------\n file_type | character varying(2)\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\n Those are the three columns I am using in my restrictions so I \nthought that would create an index this query would use. Do I need to do \nsomething different because of the 'ORDER BY...'?\n\n Thanks again for the replies!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sun, 12 Jun 2005 10:12:27 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Madison Kelly <[email protected]> writes:\n> Here is my full query:\n\n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> file_name ASC;\n\n> This is my index (which I guess is wrong):\n\n> tle-bu=> \\d file_info_7_display_idx\n> Index \"public.file_info_7_display_idx\"\n> Column | Type\n> -----------------+----------------------\n> file_type | character varying(2)\n> file_parent_dir | text\n> file_name | text\n> btree, for table \"public.file_info_7\"\n\nThe index is fine, but you need to phrase the query as\n\n\t... ORDER BY file_type, file_parent_dir, file_name;\n\n(Whether you use ASC or not doesn't matter.) 
Otherwise the planner\nwon't make the connection to the sort ordering of the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 12 Jun 2005 11:56:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used " }, { "msg_contents": "On Sun, Jun 12, 2005 at 10:12:27 -0400,\n Madison Kelly <[email protected]> wrote:\n> Indexes:\n> \"file_info_7_display_idx\" btree (file_type, file_parent_dir, file_name)\n\n> Here is my full query:\n> \n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> file_name ASC;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n\nThis is a case where postgres's planner can't make a deduction needed for\nit to realize that the index can be used. Try rewriting the query as:\n\nSELECT file_name, file_parent_dir, file_type \n FROM file_info_7 WHERE file_type='d'\n ORDER BY file_type ASC, file_parent_dir ASC, file_name ASC;\n", "msg_date": "Sun, 12 Jun 2005 12:37:32 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Tom Lane wrote:\n> Madison Kelly <[email protected]> writes:\n> \n>> Here is my full query:\n> \n> \n>>tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \n>>FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n>>file_name ASC;\n> \n> \n>> This is my index (which I guess is wrong):\n> \n> \n>>tle-bu=> \\d file_info_7_display_idx\n>> Index \"public.file_info_7_display_idx\"\n>> Column | Type\n>>-----------------+----------------------\n>> file_type | character varying(2)\n>> file_parent_dir | text\n>> file_name | text\n>>btree, for table \"public.file_info_7\"\n> \n> \n> The index is fine, but you need to phrase the query as\n> \n> \t... ORDER BY file_type, file_parent_dir, file_name;\n> \n> (Whether you use ASC or not doesn't matter.) Otherwise the planner\n> won't make the connection to the sort ordering of the index.\n> \n> \t\t\tregards, tom lane\n\nHi Tom and Bruno,\n\n After sending that email I kept plucking away and in the course of \ndoing so decided that I didn't need to return the 'file_type' column. \nOther than that, it would see my query now matches what you two have \nrecommended in the 'ORDER BY...' front but I still can't get an index \nsearch.\n\n Here is the latest query and the new index:\n\ntle-bu=> \\d file_info_7_display_idx;\nIndex \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+------\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_parent_dir, file_name, file_display \nFROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \nfile_name ASC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=14509.53..14571.76 rows=24895 width=114) (actual \ntime=19995.250..20123.874 rows=25795 loops=1)\n Sort Key: file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11762.44 rows=24895 \nwidth=114) (actual time=0.123..3228.446 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 20213.443 ms\n\n The 'Sort' is taking 20 seconds on my pentium III 1GHz (not great, \nbut...). 
If I follow you right, my index is 'file_parent_dir' first and \n'file_name' second (does order matter?). So I figured the query:\n\nSELECT file_parent_dir, file_name, file_display\nFROM file_info_7\nWHERE file_type='d'\nORDER BY file_parent_dir ASC, file_name ASC;\n\n Would hit the index for the sort. Is there any other way other than \n'EXPLAIN ANALYZE...' to get a better understanding of what is happening \nin there? For what it's worth, there is a little under 300,000 entries \nin this table of which, as you can see above, 25,795 are being returned.\n\n Yet again, thank you both!! I'm off to keep trying to figure this out...\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sun, 12 Jun 2005 18:52:05 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Sun, Jun 12, 2005 at 18:52:05 -0400,\n Madison Kelly <[email protected]> wrote:\n> \n> After sending that email I kept plucking away and in the course of \n> doing so decided that I didn't need to return the 'file_type' column. \n> Other than that, it would see my query now matches what you two have \n> recommended in the 'ORDER BY...' front but I still can't get an index \n> search.\n\nNo it doesn't. Even if you don't return file_type you still need it\nin the order by clause if you want postgres to consider using your\nindex.\n\nIs there some reason you didn't actually try out our suggestion, but are\nnow asking for more advice?\n\n> \n> Here is the latest query and the new index:\n> \n> tle-bu=> \\d file_info_7_display_idx;\n> Index \"public.file_info_7_display_idx\"\n> Column | Type\n> -----------------+------\n> file_parent_dir | text\n> file_name | text\n> btree, for table \"public.file_info_7\"\n> \n> tle-bu=> EXPLAIN ANALYZE SELECT file_parent_dir, file_name, file_display \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> file_name ASC;\n", "msg_date": "Sun, 12 Jun 2005 22:00:01 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Sun, Jun 12, 2005 at 22:00:01 -0500,\n Bruno Wolff III <[email protected]> wrote:\n> On Sun, Jun 12, 2005 at 18:52:05 -0400,\n> Madison Kelly <[email protected]> wrote:\n> > \n> > After sending that email I kept plucking away and in the course of \n> > doing so decided that I didn't need to return the 'file_type' column. \n> > Other than that, it would see my query now matches what you two have \n> > recommended in the 'ORDER BY...' front but I still can't get an index \n> > search.\n> \n> No it doesn't. Even if you don't return file_type you still need it\n> in the order by clause if you want postgres to consider using your\n> index.\n\nI didn't notice that you had changed the index. 
The reason this index\ndoesn't help is that you can't use it to select on records with the\ndesired file_type.\n\n> \n> Is there some reason you didn't actually try out our suggestion, but are\n> now asking for more advice?\n> \n> > \n> > Here is the latest query and the new index:\n> > \n> > tle-bu=> \\d file_info_7_display_idx;\n> > Index \"public.file_info_7_display_idx\"\n> > Column | Type\n> > -----------------+------\n> > file_parent_dir | text\n> > file_name | text\n> > btree, for table \"public.file_info_7\"\n> > \n> > tle-bu=> EXPLAIN ANALYZE SELECT file_parent_dir, file_name, file_display \n> > FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> > file_name ASC;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Sun, 12 Jun 2005 22:13:17 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Sun, Jun 12, 2005 at 18:52:05 -0400,\n> Madison Kelly <[email protected]> wrote:\n> \n>> After sending that email I kept plucking away and in the course of \n>>doing so decided that I didn't need to return the 'file_type' column. \n>>Other than that, it would see my query now matches what you two have \n>>recommended in the 'ORDER BY...' front but I still can't get an index \n>>search.\n> \n> \n> No it doesn't. Even if you don't return file_type you still need it\n> in the order by clause if you want postgres to consider using your\n> index.\n> \n> Is there some reason you didn't actually try out our suggestion, but are\n> now asking for more advice?\n\nNo good excuse.\n\nI'll recreate the index and test out your suggestion...\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \nFROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \nfile_parent_dir ASC, file_name ASC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=14789.92..14857.06 rows=26856 width=117) (actual \ntime=16865.473..16989.104 rows=25795 loops=1)\n Sort Key: file_type, file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11762.44 rows=26856 \nwidth=117) (actual time=0.178..1920.413 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 17102.925 ms\n(5 rows)\n\ntle-bu=> \\d file_info_7_display_idx Index \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+----------------------\n file_type | character varying(2)\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\n I'm still getting the sequential scan.\n\nMadison\n\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sun, 12 Jun 2005 23:35:10 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Sun, Jun 12, 2005 at 22:00:01 -0500,\n> Bruno Wolff III <[email protected]> wrote:\n> \n>>On Sun, Jun 12, 2005 at 18:52:05 -0400,\n>> Madison Kelly <[email protected]> wrote:\n>>\n>>> After sending that email I kept plucking away and in the course of \n>>>doing so decided that I didn't need to return the 'file_type' column. 
\n>>>Other than that, it would see my query now matches what you two have \n>>>recommended in the 'ORDER BY...' front but I still can't get an index \n>>>search.\n>>\n>>No it doesn't. Even if you don't return file_type you still need it\n>>in the order by clause if you want postgres to consider using your\n>>index.\n> \n> \n> I didn't notice that you had changed the index. The reason this index\n> doesn't help is that you can't use it to select on records with the\n> desired file_type.\n\nAs you probably saw in my last reply, I went back to the old index and \ntried the query you and Tom Lane recommended. Should this not have \ncaught the index?\n\nAt any rate, I am re-reading the documents on indexing for 7.4.x on \npostgresql.org... This is kind of flustering. Thanks again though for \nsom much help!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sun, 12 Jun 2005 23:42:05 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Sun, Jun 12, 2005 at 23:42:05 -0400,\n Madison Kelly <[email protected]> wrote:\n> \n> As you probably saw in my last reply, I went back to the old index and \n> tried the query you and Tom Lane recommended. Should this not have \n> caught the index?\n\nProbably, but there might be some other reason the planner thought it\nwas better to not use it. Using indexes is not always faster.\n\nIt would help to see your latest definition of the table and indexes,\nthe exact query you used and explain analyze output.\n", "msg_date": "Sun, 12 Jun 2005 22:53:46 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Sun, Jun 12, 2005 at 23:42:05 -0400,\n> Madison Kelly <[email protected]> wrote:\n> \n>>As you probably saw in my last reply, I went back to the old index and \n>>tried the query you and Tom Lane recommended. Should this not have \n>>caught the index?\n> \n> \n> Probably, but there might be some other reason the planner thought it\n> was better to not use it. 
Using indexes is not always faster.\n> \n> It would help to see your latest definition of the table and indexes,\n> the exact query you used and explain analyze output.\n> \n\nOkay, here's what I have at the moment:\n\ntle-bu=> \\d file_info_7 Table \n\"public.file_info_7\"\n Column | Type | Modifiers\n----------------------+----------------------+-----------------------------------------\n file_group_name | text |\n file_group_uid | bigint | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | text | not null\n file_size | bigint | not null\n file_type | character varying(2) | not null default \n'f'::character varying\n file_user_name | text |\n file_user_uid | bigint | not null\n file_backup | boolean | not null default true\n file_display | boolean | not null default false\n file_restore_display | boolean | not null default false\n file_restore | boolean | not null default false\nIndexes:\n \"file_info_7_display_idx\" btree (file_parent_dir, file_name)\n\n\ntle-bu=> \\d file_info_7_display_idx\nIndex \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+------\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \nFROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \nfile_name ASC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=15091.53..15165.29 rows=29502 width=114) (actual \ntime=12834.933..12955.136 rows=25795 loops=1)\n Sort Key: file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11762.44 rows=29502 \nwidth=114) (actual time=0.244..2533.388 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 13042.421 ms\n(5 rows)\n\n\n Since my last post I went back to a query closer to what I actually \nwant. What is most important to me is that 'file_parent_dir, file_name, \nfile_display' are returned and that the results are sorted by \n'file_parent_dir, file_name' and the results are restricted to where \n'file_info='d''.\n\n Basically what I am trying to do is display a directory tree in a \nfile browser. I had this working before but it was far, far too slow \nonce the number of directories to display got much higher than 1,000. \nThat is what 'file_display' is, by the way.\n\n Again, thank you!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 00:29:08 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Mon, Jun 13, 2005 at 00:29:08 -0400,\n Madison Kelly <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> >On Sun, Jun 12, 2005 at 23:42:05 -0400,\n> > Madison Kelly <[email protected]> wrote:\n> >\n> >>As you probably saw in my last reply, I went back to the old index and \n> >>tried the query you and Tom Lane recommended. Should this not have \n> >>caught the index?\n> >\n> >\n> >Probably, but there might be some other reason the planner thought it\n> >was better to not use it. 
Using indexes is not always faster.\n> >\n> >It would help to see your latest definition of the table and indexes,\n> >the exact query you used and explain analyze output.\n> >\n> \n> Okay, here's what I have at the moment:\n> \n> tle-bu=> \\d file_info_7 Table \n> \"public.file_info_7\"\n> Column | Type | Modifiers\n> ----------------------+----------------------+-----------------------------------------\n> file_group_name | text |\n> file_group_uid | bigint | not null\n> file_mod_time | bigint | not null\n> file_name | text | not null\n> file_parent_dir | text | not null\n> file_perm | text | not null\n> file_size | bigint | not null\n> file_type | character varying(2) | not null default \n> 'f'::character varying\n> file_user_name | text |\n> file_user_uid | bigint | not null\n> file_backup | boolean | not null default true\n> file_display | boolean | not null default false\n> file_restore_display | boolean | not null default false\n> file_restore | boolean | not null default false\n> Indexes:\n> \"file_info_7_display_idx\" btree (file_parent_dir, file_name)\n> \n> \n> tle-bu=> \\d file_info_7_display_idx\n> Index \"public.file_info_7_display_idx\"\n> Column | Type\n> -----------------+------\n> file_parent_dir | text\n> file_name | text\n> btree, for table \"public.file_info_7\"\n> \n> \n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> file_name ASC;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=15091.53..15165.29 rows=29502 width=114) (actual \n> time=12834.933..12955.136 rows=25795 loops=1)\n> Sort Key: file_parent_dir, file_name\n> -> Seq Scan on file_info_7 (cost=0.00..11762.44 rows=29502 \n> width=114) (actual time=0.244..2533.388 rows=25795 loops=1)\n> Filter: ((file_type)::text = 'd'::text)\n> Total runtime: 13042.421 ms\n> (5 rows)\n> \n> \n> Since my last post I went back to a query closer to what I actually \n> want. What is most important to me is that 'file_parent_dir, file_name, \n> file_display' are returned and that the results are sorted by \n> 'file_parent_dir, file_name' and the results are restricted to where \n> 'file_info='d''.\n\nI am guessing you mean 'file_type' instead of 'file_info'.\n\nTo do this efficiently you want an index on (file_type, file_parent_dir,\nfile_name). Currently you only have an index on (file_parent_dir, file_name)\nwhich won't help for this query. You also need to order by file_type\neven though it will be constant for all of the returned rows in order\nto help out the planner. This will allow an index scan over the desired\nrows that returns them in the desired order.\n\nPlease actually try this before changing anything else.\n", "msg_date": "Mon, 13 Jun 2005 08:28:52 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Bruno Wolff III wrote:\n> I am guessing you mean 'file_type' instead of 'file_info'.\n> \n> To do this efficiently you want an index on (file_type, file_parent_dir,\n> file_name). Currently you only have an index on (file_parent_dir, file_name)\n> which won't help for this query. You also need to order by file_type\n> even though it will be constant for all of the returned rows in order\n> to help out the planner. 
This will allow an index scan over the desired\n> rows that returns them in the desired order.\n> \n> Please actually try this before changing anything else.\n\n If I follow then I tried it but still got the sequential scan. Here's \nthe index and query (copied from the 'psql' shell):\n\n\ntle-bu=> \\d file_info_7_display_idx Index \"public.file_info_7_display_idx\"\n Column | Type\n-----------------+----------------------\n file_type | character varying(2)\n file_parent_dir | text\n file_name | text\nbtree, for table \"public.file_info_7\"\n\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \nFROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \nfile_parent_dir ASC, file_name ASC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=14810.92..14874.65 rows=25490 width=119) (actual \ntime=15523.767..15731.136 rows=25795 loops=1)\n Sort Key: file_type, file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11956.84 rows=25490 \nwidth=119) (actual time=0.132..2164.757 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 15884.188 ms\n(5 rows)\n\n\n If I follow all three 'ORDER BY...' items match the three columns in \nthe index.\n\n Again, thanks!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 13:50:51 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Madison Kelly <[email protected]> writes:\n> Bruno Wolff III wrote:\n>> Please actually try this before changing anything else.\n\n> If I follow then I tried it but still got the sequential scan.\n\nGiven the fairly large number of rows being selected, it seems likely\nthat the planner thinks this is faster than an indexscan. It could\nbe right, too. Have you tried \"set enable_seqscan = off\" to see if\nthe index is used then? If so, is it faster or slower? Comparing\nEXPLAIN ANALYZE results with enable_seqscan on and off would be useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 14:13:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used " }, { "msg_contents": "Tom Lane wrote:\n> Madison Kelly <[email protected]> writes:\n> \n>>Bruno Wolff III wrote:\n>>\n>>>Please actually try this before changing anything else.\n> \n> \n>> If I follow then I tried it but still got the sequential scan.\n> \n> \n> Given the fairly large number of rows being selected, it seems likely\n> that the planner thinks this is faster than an indexscan. It could\n> be right, too. Have you tried \"set enable_seqscan = off\" to see if\n> the index is used then? If so, is it faster or slower? Comparing\n> EXPLAIN ANALYZE results with enable_seqscan on and off would be useful.\n\nWow!\n\nWith the sequence scan off my query took less than 2sec. 
When I turned \nit back on the time jumped back up to just under 14sec.\n\n\ntle-bu=> set enable_seqscan = off; SET\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \nFROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \nfile_parent_dir ASC, file_name ASC;\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using file_info_7_display_idx on file_info_7 \n(cost=0.00..83171.78 rows=25490 width=119) (actual \ntime=141.405..1700.459 rows=25795 loops=1)\n Index Cond: ((file_type)::text = 'd'::text)\n Total runtime: 1851.366 ms\n(3 rows)\n\n\ntle-bu=> set enable_seqscan = on; SET\ntle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \nFROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \nfile_parent_dir ASC, file_name ASC;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=14810.92..14874.65 rows=25490 width=119) (actual \ntime=13605.185..13728.436 rows=25795 loops=1)\n Sort Key: file_type, file_parent_dir, file_name\n -> Seq Scan on file_info_7 (cost=0.00..11956.84 rows=25490 \nwidth=119) (actual time=0.048..2018.996 rows=25795 loops=1)\n Filter: ((file_type)::text = 'd'::text)\n Total runtime: 13865.830 ms\n(5 rows)\n\n So the index obiously provides a major performance boost! I just need \nto figure out how to tell the planner how to use it...\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n", "msg_date": "Mon, 13 Jun 2005 15:05:00 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Mon, Jun 13, 2005 at 15:05:00 -0400,\n Madison Kelly <[email protected]> wrote:\n> Wow!\n> \n> With the sequence scan off my query took less than 2sec. 
When I turned \n> it back on the time jumped back up to just under 14sec.\n> \n> \n> tle-bu=> set enable_seqscan = off; SET\n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \n> file_parent_dir ASC, file_name ASC;\n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using file_info_7_display_idx on file_info_7 \n> (cost=0.00..83171.78 rows=25490 width=119) (actual \n> time=141.405..1700.459 rows=25795 loops=1)\n> Index Cond: ((file_type)::text = 'd'::text)\n> Total runtime: 1851.366 ms\n> (3 rows)\n> \n> \n> tle-bu=> set enable_seqscan = on; SET\n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \n> file_parent_dir ASC, file_name ASC;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=14810.92..14874.65 rows=25490 width=119) (actual \n> time=13605.185..13728.436 rows=25795 loops=1)\n> Sort Key: file_type, file_parent_dir, file_name\n> -> Seq Scan on file_info_7 (cost=0.00..11956.84 rows=25490 \n> width=119) (actual time=0.048..2018.996 rows=25795 loops=1)\n> Filter: ((file_type)::text = 'd'::text)\n> Total runtime: 13865.830 ms\n> (5 rows)\n> \n> So the index obiously provides a major performance boost! I just need \n> to figure out how to tell the planner how to use it...\n\nThe two things you probably want to look at are (in postgresql.conf):\neffective_cache_size = 10000 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page fetch cost\n\nIncreasing effective cache size and decreasing the penalty for random\ndisk fetches will favor using index scans. People have reported that\ndropping random_page_cost from the default of 4 to 2 works well.\nEffective cache size should be set to some reasonable estimate of\nthe memory available on your system to postgres, not counting that\nset aside for shared buffers.\n\nHowever, since the planner thought the index scan plan was going to be 6 times\nslower than the sequential scan plan, I don't know if tweaking these values\nenough to switch the plan choice won't cause problems for other queries.\n", "msg_date": "Mon, 13 Jun 2005 15:45:59 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Madison Kelly <[email protected]> writes:\n\n> So the index obiously provides a major performance boost! I just need to\n> figure out how to tell the planner how to use it...\n\nBe careful extrapolating too much from a single query in a single context.\nNotably you might want to test the same query after not touching this table\nfor a little while. The index is probably benefiting disproportionately from\nhaving you repeatedly running this one query and having the entire table in\ncache.\n\nThat said, you should look at lowering random_page_cost. The default is 4 but\nif this query is representative of your system's performance then much of your\ndatabase is in cache and the effective value will be closer to 1. 
Try 2 or\neven 1.5 or 1.2.\n\nBut like I said, test other queries and test under more representative\nconditions other than repeating a single query over and over.\n\n-- \ngreg\n\n", "msg_date": "13 Jun 2005 16:53:51 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Madison Kelly <[email protected]> writes:\n> So the index obiously provides a major performance boost! I just need \n> to figure out how to tell the planner how to use it...\n\nSimple division shows that the planner's cost estimate ratio between the\nseqscan and the indexscan (11956.84 vs 83171.78) is off by a factor of\nmore than 8 compared to reality (2018.996 vs 1700.459). Also the cost of\nthe sort seems to be drastically underestimated.\n\nI suspect this may be a combination of random_page_cost being too high\n(since your test case, at least, is no doubt fully cached in RAM) and\ncpu_operator_cost being too low. I'm wondering if text comparisons\nare really slow on your machine --- possibly due to strcoll being\ninefficient in the locale you are using, which you didn't say. That\nwould account for both the seqscan being slower than expected and the\nsort taking a long time.\n\nIt'd be interesting to look at the actual runtimes of this seqscan vs\none that is doing a simple integer comparison over the same number of\nrows (and, preferably, returning about the same number of rows as this).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 17:00:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used " }, { "msg_contents": "Bruno Wolff III wrote:\n> On Mon, Jun 13, 2005 at 15:05:00 -0400,\n> Madison Kelly <[email protected]> wrote:\n> \n>>Wow!\n>>\n>>With the sequence scan off my query took less than 2sec. When I turned \n>>it back on the time jumped back up to just under 14sec.\n>>\n>>\n>>tle-bu=> set enable_seqscan = off; SET\n>>tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \n>>FROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \n>>file_parent_dir ASC, file_name ASC;\n>>\n>>QUERY PLAN\n>>--------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Index Scan using file_info_7_display_idx on file_info_7 \n>>(cost=0.00..83171.78 rows=25490 width=119) (actual \n>>time=141.405..1700.459 rows=25795 loops=1)\n>> Index Cond: ((file_type)::text = 'd'::text)\n>> Total runtime: 1851.366 ms\n>>(3 rows)\n>>\n>>\n>>tle-bu=> set enable_seqscan = on; SET\n>>tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_display \n>>FROM file_info_7 WHERE file_type='d' ORDER BY file_type ASC, \n>>file_parent_dir ASC, file_name ASC;\n>> QUERY PLAN\n>>----------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=14810.92..14874.65 rows=25490 width=119) (actual \n>>time=13605.185..13728.436 rows=25795 loops=1)\n>> Sort Key: file_type, file_parent_dir, file_name\n>> -> Seq Scan on file_info_7 (cost=0.00..11956.84 rows=25490 \n>>width=119) (actual time=0.048..2018.996 rows=25795 loops=1)\n>> Filter: ((file_type)::text = 'd'::text)\n>> Total runtime: 13865.830 ms\n>>(5 rows)\n>>\n>> So the index obiously provides a major performance boost! 
I just need \n>>to figure out how to tell the planner how to use it...\n> \n> \n> The two things you probably want to look at are (in postgresql.conf):\n> effective_cache_size = 10000 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n> \n> Increasing effective cache size and decreasing the penalty for random\n> disk fetches will favor using index scans. People have reported that\n> dropping random_page_cost from the default of 4 to 2 works well.\n> Effective cache size should be set to some reasonable estimate of\n> the memory available on your system to postgres, not counting that\n> set aside for shared buffers.\n> \n> However, since the planner thought the index scan plan was going to be 6 times\n> slower than the sequential scan plan, I don't know if tweaking these values\n> enough to switch the plan choice won't cause problems for other queries.\n\nHmm,\n\n In this case I am trying to avoid modifying 'postgres.conf' and am \ntrying to handle any performance tweaks within my program through SQL \ncalls. This is because (I hope) my program will be installed by many \nusers and I don't want to expect them to be able/comfortable playing \nwith 'postgres.conf'. I do plan later though to create a section in the \ndocs with extra tweaks for more advanced users and in that case I will \ncome back to this and try/record just that.\n\n In the mean time Tom's recommendation works from perl by calling:\n\n$DB->do(\"SET ENABLE_SEQSCAN TO OFF\") || die...\n<query...>\n$DB->do(\"SET ENABLE_SEQSCAN TO ON\") || die...\n\n Forces the index to be used. It isn't clean but it works for now and \nI don't need to do anything outside my program.\n\n Lacking any other ideas, thank you very, very much for sticking with \nthis and helping me out!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 17:18:51 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Pseudo-Solved was: (Re: Index ot being used)" }, { "msg_contents": "Tom Lane wrote:\n> Madison Kelly <[email protected]> writes:\n> \n>> So the index obiously provides a major performance boost! I just need \n>>to figure out how to tell the planner how to use it...\n> \n> \n> Simple division shows that the planner's cost estimate ratio between the\n> seqscan and the indexscan (11956.84 vs 83171.78) is off by a factor of\n> more than 8 compared to reality (2018.996 vs 1700.459). Also the cost of\n> the sort seems to be drastically underestimated.\n> \n> I suspect this may be a combination of random_page_cost being too high\n> (since your test case, at least, is no doubt fully cached in RAM) and\n> cpu_operator_cost being too low. I'm wondering if text comparisons\n> are really slow on your machine --- possibly due to strcoll being\n> inefficient in the locale you are using, which you didn't say. 
That\n> would account for both the seqscan being slower than expected and the\n> sort taking a long time.\n> \n> It'd be interesting to look at the actual runtimes of this seqscan vs\n> one that is doing a simple integer comparison over the same number of\n> rows (and, preferably, returning about the same number of rows as this).\n> \n> \t\t\tregards, tom lane\n\n This is where I should mention that though 'n00b' might be a little \nharsh, I am still somewhat of a beginner (only been using postgres or \nprogramming at all for a little over a year).\n\n What is, and how do I check, 'strcoll'? Is there a way that I can \nclear the psql cache to make the tests more accurate to real-world \nsituations? For what it's worth, the program is working (I am doing \nstress-testing and optimizing now) and the data in this table is actual \ndata, not a construct.\n\n As I mentioned to Bruno in my reply to him, I am trying to keep as \nmany tweaks as I can inside my program. The reason for this is that this \nis a backup program that I am trying to aim to more mainstream users or \nwhere a techy would set it up and then it would be used by mainstream \nusers. At this point I want to avoid, as best I can, any changes from \ndefault to the 'postgres.conf' file or other external files. Later \nthough, once I finish this testing phase, I plan to write a section of \nexternal tweaking where I will test these changes out and note my \nsuccess for mre advanced users who feel more comfortable playing with \npostgres (and web server, rsync, etc) configs.\n\n If there is any way that I can make changes like this similar from \ninside my (perl) program I would prefer that. For example, I implemented \nthe 'enable_seqscan' via:\n\n$DB->do(\"SET ENABLE_SEQSCAN TO OFF\") || die...\n...\n$DB->do(\"SET ENABLE_SEQSCAN TO ON\") || die...\n\n Thank you very kindly! You and Bruno are wonderfully helpful! (as are \nthe other's who have replied ^_^;)\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 17:30:32 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Mon, 2005-06-13 at 17:30 -0400, Madison Kelly wrote:\n> As I mentioned to Bruno in my reply to him, I am trying to keep as \n> many tweaks as I can inside my program. The reason for this is that this \n> is a backup program that I am trying to aim to more mainstream users or \n> where a techy would set it up and then it would be used by mainstream \n> users. At this point I want to avoid, as best I can, any changes from \n> default to the 'postgres.conf' file or other external files. Later \n> though, once I finish this testing phase, I plan to write a section of \n> external tweaking where I will test these changes out and note my \n> success for mre advanced users who feel more comfortable playing with \n> postgres (and web server, rsync, etc) configs.\n> \n> If there is any way that I can make changes like this similar from \n> inside my (perl) program I would prefer that. For example, I implemented \n> the 'enable_seqscan' via:\n> \n> $DB->do(\"SET ENABLE_SEQSCAN TO OFF\") || die...\n> ...\n> $DB->do(\"SET ENABLE_SEQSCAN TO ON\") || die...\n\nYour goal is admirable. However, many people tweak their postgresql.conf\nfiles, and your program can't know whether or not this has happened. 
It\nmight be a good idea to have a var $do_db_optimization, which defaults\nto on. Then, if your users have trouble or are advanced admins they can\nturn it off. My personal opinion is that there are too many\narchitectures and configurations for you to accurately optimize inside\nyour program, and this gives you and your users an easy out.\n\nif ($do_db_optimization == 1) {\n $DB->do(\"SET ENABLE_SEQSCAN TO OFF\") || die...\n} else {\n # do nothing -- postgresql will figure it out\n}\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Wed, 15 Jun 2005 09:52:02 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Karim Nassar wrote:\n> Your goal is admirable. However, many people tweak their postgresql.conf\n> files, and your program can't know whether or not this has happened. It\n> might be a good idea to have a var $do_db_optimization, which defaults\n> to on. Then, if your users have trouble or are advanced admins they can\n> turn it off. My personal opinion is that there are too many\n> architectures and configurations for you to accurately optimize inside\n> your program, and this gives you and your users an easy out.\n> \n> if ($do_db_optimization == 1) {\n> $DB->do(\"SET ENABLE_SEQSCAN TO OFF\") || die...\n> } else {\n> # do nothing -- postgresql will figure it out\n> }\n\nThat is a wonderful idea and I already have the foundation in place to \neasily implement this. Thanks!!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Wed, 15 Jun 2005 13:33:20 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" } ]
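The per-connection tweaks discussed in this thread can also be issued as plain SQL, so nothing in postgresql.conf has to change. The sketch below is not from the thread: it reuses the file_info_7 query shown above, swaps in SET LOCAL (a standard PostgreSQL feature, so the override dies with the transaction instead of depending on a matching "ON" call), and borrows the random_page_cost / effective_cache_size values Bruno suggested.

-- Minimal sketch: force the index for just this one transaction.
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT file_name, file_parent_dir, file_display
  FROM file_info_7
 WHERE file_type = 'd'
 ORDER BY file_type ASC, file_parent_dir ASC, file_name ASC;
COMMIT;  -- enable_seqscan reverts to the server default here

-- Gentler, session-wide alternative; both settings vanish when the
-- connection closes, so the installed postgresql.conf is never touched.
SET random_page_cost = 2;
SET effective_cache_size = 10000;  -- measured in 8 kB pages, roughly 80 MB

Either form is visible only to the connection that issues it, which fits Karim's point that a packaged application cannot know how the local DBA has already tuned the server.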
[ { "msg_contents": "Hi-\n\nWould someone please enlighten me as\nto why I'm not seeing a faster execution\ntime on the simple scenario below?\n\nthere are 412,485 rows in the table and the\nquery matches on 132,528 rows, taking\nalmost a minute to execute. vaccuum\nanalyze was just run.\n\nThanks!\nClark\n\n test\n-------------------------\n id | integer\n partnumber | character varying(32)\n productlistid | integer\n typeid | integer\n\n\nIndexes:\n\"test_id\" btree (id)\n\"test_plid\" btree (productlistid)\n\"test_typeid\" btree (typeid)\n\"test_plidtypeid\" btree (productlistid, typeid)\n\n\nexplain analyze select * from test where productlistid=3 and typeid=9 \norder by partnumber limit 15;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Limit (cost=201073.76..201073.79 rows=15 width=722) (actual \ntime=58092.477..58092.518 rows=15 loops=1)\n -> Sort (cost=201073.76..201451.76 rows=151200 width=722) (actual \ntime=58092.470..58092.505 rows=15 loops=1)\n Sort Key: partnumber\n -> Seq Scan on test (cost=0.00..96458.27 rows=151200 width=722) \n(actual time=2.515..40201.275 rows=132528 loops=1)\n Filter: ((productlistid = 3) AND (typeid = 9))\n Total runtime: 59664.765 ms\n(6 rows)\n\n\nSystem specs:\nPostgreSQL 7.4.2 on RedHat 9\ndual AMD Athlon 2GHz processors\n1 gig memory\nmirrored 7200 RPM IDE disks\n\n", "msg_date": "Fri, 10 Jun 2005 13:45:05 -0400 (EDT)", "msg_from": "Clark Slater <[email protected]>", "msg_from_op": true, "msg_subject": "faster search" }, { "msg_contents": "On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:\n> Indexes:\n> \"test_id\" btree (id)\n> \"test_plid\" btree (productlistid)\n> \"test_typeid\" btree (typeid)\n> \"test_plidtypeid\" btree (productlistid, typeid)\n> \n> \n> explain analyze select * from test where productlistid=3 and typeid=9 \n> order by partnumber limit 15;\n\nYou do not have an index on partnumber. Try adding one.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 10 Jun 2005 19:51:32 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> Hi-\n> \n> Would someone please enlighten me as\n> to why I'm not seeing a faster execution\n> time on the simple scenario below?\n> \n> there are 412,485 rows in the table and the\n> query matches on 132,528 rows, taking\n> almost a minute to execute. vaccuum\n> analyze was just run.\n\nWell, if you are matching 130k out of 400k rows, then a sequential scan\nis certainly prefered to an index scan. And then you have to sort those\n130k rows by partnumber. This *might* be spilling to disk depending on\nwhat your workmem/sortmem is set to.\n\nI would also say that what you would really want is some way to get the\nwhole thing from an index. And I think the way to do that is:\n\nCREATE INDEX test_partnum_listid_typeid_idx ON\n\ttest(partnumber, productlistid, typeid);\n\nVACUUM ANALYZE test;\n\nEXPLAIN ANALYZE SELECT * FROM test\n\tWHERE productlistid=3 AND typeid=9\n\tORDER BY partnumber, productlistid, typeid\n\tLIMIT 15\n;\n\nThe trick is that you have to match the order by exactly with the index,\nso the planner realizes it can do an indexed lookup to get the information.\n\nYou could also just create an index on partnumber, and see how that\naffects your original query. I think the planner could use an index\nlookup on partnumber to get the ordering correct. 
But it will have to do\nfiltering after the fact based on productlistid and typeid.\nWith my extended index, I think the planner can be smarter and lookup\nall 3 by the index.\n\n> \n> Thanks!\n> Clark\n\nGood luck,\nJohn\n=:->", "msg_date": "Fri, 10 Jun 2005 13:00:54 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "[Clark Slater - Fri at 01:45:05PM -0400]\n> Would someone please enlighten me as\n> to why I'm not seeing a faster execution\n> time on the simple scenario below?\n\nJust some thoughts from a novice PG-DBA .. :-)\n\nMy general experience is that PG usually prefers sequal scans to indices if\na large portion of the table is to be selected, because it is faster to do a\nseqscan than to follow an index and constantly seek between different\npositions on the hard disk.\n\nHowever, most of the time is spent sorting on partnumber, and you only want\n15 rows, so of course you should have an index on partnumber! Picking up 15\nrows will be ligtning fast with that index.\n\nIf you may want to select significantly more than 15 rows, you can also try\nto make a partial index:\n\ncreate index test_pli3_ti9_by_part on test (partnumber) where\nproductlistid=3 and typeid=9;\n\nIf 3 and 9 are not constants in the query, try to make a three-key index\n(it's important with partnumber because a lot of time is spent sorting):\n\ncreate index test_pli_type_part on test (productslistid,typeid,partnumber);\n\nTo get pg to recognize the index, you will probably have to help it a bit:\n\nselect * from test where productlistid=3 and typeid=9 order by\nproductlistid,typeid,partnumber limit 15;\n\n-- \nTobias Brox, +47-91700050\n\n", "msg_date": "Fri, 10 Jun 2005 21:05:35 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:\n> Hi-\n> \n> Would someone please enlighten me as\n> to why I'm not seeing a faster execution\n> time on the simple scenario below?\n\nBecause you need to extract a huge number of rows via a seqscan, sort\nthem and then throw them away, I think.\n\n> explain analyze select * from test where productlistid=3 and typeid=9 \n> order by partnumber limit 15;\n\nCreate an index on (productlistid, typeid, partnumber) then\n\n select * from test where productlistid=3 and typeid=9\n order by productlistid, typeid, partnumber LIMIT 15;\n\n?\n\nCheers,\n Steve\n", "msg_date": "Fri, 10 Jun 2005 13:12:40 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "hmm, i'm baffled. 
i simplified the query\nand it is still taking forever...\n\n\n test\n-------------------------\n id | integer\n partnumber | character varying(32)\n productlistid | integer\n typeid | integer\n\n\nIndexes:\n\"test_productlistid\" btree (productlistid)\n\"test_typeid\" btree (typeid)\n\"test_productlistid_typeid\" btree (productlistid, typeid)\n\n\nexplain analyze select * from test where (productlistid=3 and typeid=9);\n\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\ntime=516.459..41930.250 rows=132528 loops=1)\n Filter: ((productlistid = 3) AND (typeid = 9))\n Total runtime: 41975.154 ms\n(3 rows)\n\n\nSystem specs:\nPostgreSQL 7.4.2 on RedHat 9\ndual AMD Athlon 2GHz processors\n1 gig memory\nmirrored 7200 RPM IDE disks\n\n\nOn Fri, 10 Jun 2005, John A Meinel wrote:\n\n> Clark Slater wrote:\n>> Hi-\n>>\n>> Would someone please enlighten me as\n>> to why I'm not seeing a faster execution\n>> time on the simple scenario below?\n>>\n>> there are 412,485 rows in the table and the\n>> query matches on 132,528 rows, taking\n>> almost a minute to execute. vaccuum\n>> analyze was just run.\n>\n> Well, if you are matching 130k out of 400k rows, then a sequential scan\n> is certainly prefered to an index scan. And then you have to sort those\n> 130k rows by partnumber. This *might* be spilling to disk depending on\n> what your workmem/sortmem is set to.\n>\n> I would also say that what you would really want is some way to get the\n> whole thing from an index. And I think the way to do that is:\n>\n> CREATE INDEX test_partnum_listid_typeid_idx ON\n> \ttest(partnumber, productlistid, typeid);\n>\n> VACUUM ANALYZE test;\n>\n> EXPLAIN ANALYZE SELECT * FROM test\n> \tWHERE productlistid=3 AND typeid=9\n> \tORDER BY partnumber, productlistid, typeid\n> \tLIMIT 15\n> ;\n>\n> The trick is that you have to match the order by exactly with the index,\n> so the planner realizes it can do an indexed lookup to get the information.\n>\n> You could also just create an index on partnumber, and see how that\n> affects your original query. I think the planner could use an index\n> lookup on partnumber to get the ordering correct. But it will have to do\n> filtering after the fact based on productlistid and typeid.\n> With my extended index, I think the planner can be smarter and lookup\n> all 3 by the index.\n>\n>>\n>> Thanks!\n>> Clark\n>\n> Good luck,\n> John\n> =:->\n>\n", "msg_date": "Fri, 10 Jun 2005 20:07:57 -0400 (EDT)", "msg_from": "Clark Slater <[email protected]>", "msg_from_op": true, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> hmm, i'm baffled. i simplified the query\n> and it is still taking forever...\n\nWhat happens if you:\n\nalter table test alter column productlistid set statistics 150;\nalter table test alter column typeid set statistics 150;\nexplain analyze select * from test where (productlistid=3 and typeid=9);\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n> \n> \n> test\n> -------------------------\n> id | integer\n> partnumber | character varying(32)\n> productlistid | integer\n> typeid | integer\n> \n> \n> Indexes:\n> \"test_productlistid\" btree (productlistid)\n> \"test_typeid\" btree (typeid)\n> \"test_productlistid_typeid\" btree (productlistid, typeid)\n> \n> \n> explain analyze select * from test where (productlistid=3 and typeid=9);\n> \n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n> time=516.459..41930.250 rows=132528 loops=1)\n> Filter: ((productlistid = 3) AND (typeid = 9))\n> Total runtime: 41975.154 ms\n> (3 rows)\n> \n> \n> System specs:\n> PostgreSQL 7.4.2 on RedHat 9\n> dual AMD Athlon 2GHz processors\n> 1 gig memory\n> mirrored 7200 RPM IDE disks\n> \n> \n> On Fri, 10 Jun 2005, John A Meinel wrote:\n> \n>> Clark Slater wrote:\n>>\n>>> Hi-\n>>>\n>>> Would someone please enlighten me as\n>>> to why I'm not seeing a faster execution\n>>> time on the simple scenario below?\n>>>\n>>> there are 412,485 rows in the table and the\n>>> query matches on 132,528 rows, taking\n>>> almost a minute to execute. vaccuum\n>>> analyze was just run.\n>>\n>>\n>> Well, if you are matching 130k out of 400k rows, then a sequential scan\n>> is certainly prefered to an index scan. And then you have to sort those\n>> 130k rows by partnumber. This *might* be spilling to disk depending on\n>> what your workmem/sortmem is set to.\n>>\n>> I would also say that what you would really want is some way to get the\n>> whole thing from an index. And I think the way to do that is:\n>>\n>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>> test(partnumber, productlistid, typeid);\n>>\n>> VACUUM ANALYZE test;\n>>\n>> EXPLAIN ANALYZE SELECT * FROM test\n>> WHERE productlistid=3 AND typeid=9\n>> ORDER BY partnumber, productlistid, typeid\n>> LIMIT 15\n>> ;\n>>\n>> The trick is that you have to match the order by exactly with the index,\n>> so the planner realizes it can do an indexed lookup to get the \n>> information.\n>>\n>> You could also just create an index on partnumber, and see how that\n>> affects your original query. I think the planner could use an index\n>> lookup on partnumber to get the ordering correct. But it will have to do\n>> filtering after the fact based on productlistid and typeid.\n>> With my extended index, I think the planner can be smarter and lookup\n>> all 3 by the index.\n>>\n>>>\n>>> Thanks!\n>>> Clark\n>>\n>>\n>> Good luck,\n>> John\n>> =:->\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 10 Jun 2005 17:14:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "thanks for your suggestion.\na small improvement. 
still pretty slow...\n\nvbp=# alter table test alter column productlistid set statistics 150;\nALTER TABLE\nvbp=# alter table test alter column typeid set statistics 150;\nALTER TABLE\nvbp=# explain analyze select * from test where (productlistid=3 and typeid=9);\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual \ntime=525.617..36802.556 rows=132528 loops=1)\n Filter: ((productlistid = 3) AND (typeid = 9))\n Total runtime: 36847.754 ms\n(3 rows)\n\nTime: 36850.719 ms\n\n\nOn Fri, 10 Jun 2005, Joshua D. Drake wrote:\n\n> Clark Slater wrote:\n>> hmm, i'm baffled. i simplified the query\n>> and it is still taking forever...\n>\n> What happens if you:\n>\n> alter table test alter column productlistid set statistics 150;\n> alter table test alter column typeid set statistics 150;\n> explain analyze select * from test where (productlistid=3 and typeid=9);\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>> \n>> \n>> test\n>> -------------------------\n>> id | integer\n>> partnumber | character varying(32)\n>> productlistid | integer\n>> typeid | integer\n>> \n>> \n>> Indexes:\n>> \"test_productlistid\" btree (productlistid)\n>> \"test_typeid\" btree (typeid)\n>> \"test_productlistid_typeid\" btree (productlistid, typeid)\n>> \n>> \n>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>> \n>> QUERY PLAN\n>> -----------------------------------------------------------------------\n>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n>> time=516.459..41930.250 rows=132528 loops=1)\n>> Filter: ((productlistid = 3) AND (typeid = 9))\n>> Total runtime: 41975.154 ms\n>> (3 rows)\n>> \n>> \n>> System specs:\n>> PostgreSQL 7.4.2 on RedHat 9\n>> dual AMD Athlon 2GHz processors\n>> 1 gig memory\n>> mirrored 7200 RPM IDE disks\n>> \n>> \n>> On Fri, 10 Jun 2005, John A Meinel wrote:\n>> \n>>> Clark Slater wrote:\n>>> \n>>>> Hi-\n>>>> \n>>>> Would someone please enlighten me as\n>>>> to why I'm not seeing a faster execution\n>>>> time on the simple scenario below?\n>>>> \n>>>> there are 412,485 rows in the table and the\n>>>> query matches on 132,528 rows, taking\n>>>> almost a minute to execute. vaccuum\n>>>> analyze was just run.\n>>> \n>>> \n>>> Well, if you are matching 130k out of 400k rows, then a sequential scan\n>>> is certainly prefered to an index scan. And then you have to sort those\n>>> 130k rows by partnumber. This *might* be spilling to disk depending on\n>>> what your workmem/sortmem is set to.\n>>> \n>>> I would also say that what you would really want is some way to get the\n>>> whole thing from an index. And I think the way to do that is:\n>>> \n>>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>>> test(partnumber, productlistid, typeid);\n>>> \n>>> VACUUM ANALYZE test;\n>>> \n>>> EXPLAIN ANALYZE SELECT * FROM test\n>>> WHERE productlistid=3 AND typeid=9\n>>> ORDER BY partnumber, productlistid, typeid\n>>> LIMIT 15\n>>> ;\n>>> \n>>> The trick is that you have to match the order by exactly with the index,\n>>> so the planner realizes it can do an indexed lookup to get the \n>>> information.\n>>> \n>>> You could also just create an index on partnumber, and see how that\n>>> affects your original query. I think the planner could use an index\n>>> lookup on partnumber to get the ordering correct. 
But it will have to do\n>>> filtering after the fact based on productlistid and typeid.\n>>> With my extended index, I think the planner can be smarter and lookup\n>>> all 3 by the index.\n>>> \n>>>> \n>>>> Thanks!\n>>>> Clark\n>>> \n>>> \n>>> Good luck,\n>>> John\n>>> =:->\n>>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n> -- \n> Your PostgreSQL solutions provider, Command Prompt, Inc.\n> 24x7 support - 1.800.492.2240, programming, and consulting\n> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n> http://www.commandprompt.com / http://www.postgresql.org\n>\n", "msg_date": "Fri, 10 Jun 2005 20:20:25 -0400 (EDT)", "msg_from": "Clark Slater <[email protected]>", "msg_from_op": true, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> thanks for your suggestion.\n> a small improvement. still pretty slow...\n> \n> vbp=# alter table test alter column productlistid set statistics 150;\n> ALTER TABLE\n> vbp=# alter table test alter column typeid set statistics 150;\n> ALTER TABLE\n> vbp=# explain analyze select * from test where (productlistid=3 and \n> typeid=9);\n> QUERY PLAN\n> ------------------------------------------------------------------------------ \n> \n> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual \n> time=525.617..36802.556 rows=132528 loops=1)\n> Filter: ((productlistid = 3) AND (typeid = 9))\n> Total runtime: 36847.754 ms\n> (3 rows)\n> \n> Time: 36850.719 ms\n> \n> \n> On Fri, 10 Jun 2005, Joshua D. Drake wrote:\n> \n>> Clark Slater wrote:\n>>\n>>> hmm, i'm baffled. i simplified the query\n>>> and it is still taking forever...\n>>\n>>\n>> What happens if you:\n>>\n>> alter table test alter column productlistid set statistics 150;\n>> alter table test alter column typeid set statistics 150;\n>> explain analyze select * from test where (productlistid=3 and typeid=9);\n\nHow many rows should it return?\n\n>>\n>> Sincerely,\n>>\n>> Joshua D. Drake\n>>\n>>\n>>>\n>>>\n>>> test\n>>> -------------------------\n>>> id | integer\n>>> partnumber | character varying(32)\n>>> productlistid | integer\n>>> typeid | integer\n>>>\n>>>\n>>> Indexes:\n>>> \"test_productlistid\" btree (productlistid)\n>>> \"test_typeid\" btree (typeid)\n>>> \"test_productlistid_typeid\" btree (productlistid, typeid)\n>>>\n>>>\n>>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>>>\n>>> QUERY PLAN\n>>> -----------------------------------------------------------------------\n>>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n>>> time=516.459..41930.250 rows=132528 loops=1)\n>>> Filter: ((productlistid = 3) AND (typeid = 9))\n>>> Total runtime: 41975.154 ms\n>>> (3 rows)\n>>>\n>>>\n>>> System specs:\n>>> PostgreSQL 7.4.2 on RedHat 9\n>>> dual AMD Athlon 2GHz processors\n>>> 1 gig memory\n>>> mirrored 7200 RPM IDE disks\n>>>\n>>>\n>>> On Fri, 10 Jun 2005, John A Meinel wrote:\n>>>\n>>>> Clark Slater wrote:\n>>>>\n>>>>> Hi-\n>>>>>\n>>>>> Would someone please enlighten me as\n>>>>> to why I'm not seeing a faster execution\n>>>>> time on the simple scenario below?\n>>>>>\n>>>>> there are 412,485 rows in the table and the\n>>>>> query matches on 132,528 rows, taking\n>>>>> almost a minute to execute. 
vaccuum\n>>>>> analyze was just run.\n>>>>\n>>>>\n>>>>\n>>>> Well, if you are matching 130k out of 400k rows, then a sequential scan\n>>>> is certainly prefered to an index scan. And then you have to sort those\n>>>> 130k rows by partnumber. This *might* be spilling to disk depending on\n>>>> what your workmem/sortmem is set to.\n>>>>\n>>>> I would also say that what you would really want is some way to get the\n>>>> whole thing from an index. And I think the way to do that is:\n>>>>\n>>>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>>>> test(partnumber, productlistid, typeid);\n>>>>\n>>>> VACUUM ANALYZE test;\n>>>>\n>>>> EXPLAIN ANALYZE SELECT * FROM test\n>>>> WHERE productlistid=3 AND typeid=9\n>>>> ORDER BY partnumber, productlistid, typeid\n>>>> LIMIT 15\n>>>> ;\n>>>>\n>>>> The trick is that you have to match the order by exactly with the \n>>>> index,\n>>>> so the planner realizes it can do an indexed lookup to get the \n>>>> information.\n>>>>\n>>>> You could also just create an index on partnumber, and see how that\n>>>> affects your original query. I think the planner could use an index\n>>>> lookup on partnumber to get the ordering correct. But it will have \n>>>> to do\n>>>> filtering after the fact based on productlistid and typeid.\n>>>> With my extended index, I think the planner can be smarter and lookup\n>>>> all 3 by the index.\n>>>>\n>>>>>\n>>>>> Thanks!\n>>>>> Clark\n>>>>\n>>>>\n>>>>\n>>>> Good luck,\n>>>> John\n>>>> =:->\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n>>\n>>\n>> -- \n>> Your PostgreSQL solutions provider, Command Prompt, Inc.\n>> 24x7 support - 1.800.492.2240, programming, and consulting\n>> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n>> http://www.commandprompt.com / http://www.postgresql.org\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 10 Jun 2005 17:46:53 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> thanks for your suggestion.\n> a small improvement. still pretty slow...\n> \n> vbp=# alter table test alter column productlistid set statistics 150;\n> ALTER TABLE\n> vbp=# alter table test alter column typeid set statistics 150;\n> ALTER TABLE\n> vbp=# explain analyze select * from test where (productlistid=3 and \n\nHello,\n\nAlso what happens if you:\n\nset enable_seqscan = false;\nexplain analyze query....\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> typeid=9);\n> QUERY PLAN\n> ------------------------------------------------------------------------------ \n> \n> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual \n> time=525.617..36802.556 rows=132528 loops=1)\n> Filter: ((productlistid = 3) AND (typeid = 9))\n> Total runtime: 36847.754 ms\n> (3 rows)\n> \n> Time: 36850.719 ms\n> \n> \n> On Fri, 10 Jun 2005, Joshua D. Drake wrote:\n> \n>> Clark Slater wrote:\n>>\n>>> hmm, i'm baffled. 
i simplified the query\n>>> and it is still taking forever...\n>>\n>>\n>> What happens if you:\n>>\n>> alter table test alter column productlistid set statistics 150;\n>> alter table test alter column typeid set statistics 150;\n>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>>\n>> Sincerely,\n>>\n>> Joshua D. Drake\n>>\n>>\n>>>\n>>>\n>>> test\n>>> -------------------------\n>>> id | integer\n>>> partnumber | character varying(32)\n>>> productlistid | integer\n>>> typeid | integer\n>>>\n>>>\n>>> Indexes:\n>>> \"test_productlistid\" btree (productlistid)\n>>> \"test_typeid\" btree (typeid)\n>>> \"test_productlistid_typeid\" btree (productlistid, typeid)\n>>>\n>>>\n>>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>>>\n>>> QUERY PLAN\n>>> -----------------------------------------------------------------------\n>>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n>>> time=516.459..41930.250 rows=132528 loops=1)\n>>> Filter: ((productlistid = 3) AND (typeid = 9))\n>>> Total runtime: 41975.154 ms\n>>> (3 rows)\n>>>\n>>>\n>>> System specs:\n>>> PostgreSQL 7.4.2 on RedHat 9\n>>> dual AMD Athlon 2GHz processors\n>>> 1 gig memory\n>>> mirrored 7200 RPM IDE disks\n>>>\n>>>\n>>> On Fri, 10 Jun 2005, John A Meinel wrote:\n>>>\n>>>> Clark Slater wrote:\n>>>>\n>>>>> Hi-\n>>>>>\n>>>>> Would someone please enlighten me as\n>>>>> to why I'm not seeing a faster execution\n>>>>> time on the simple scenario below?\n>>>>>\n>>>>> there are 412,485 rows in the table and the\n>>>>> query matches on 132,528 rows, taking\n>>>>> almost a minute to execute. vaccuum\n>>>>> analyze was just run.\n>>>>\n>>>>\n>>>>\n>>>> Well, if you are matching 130k out of 400k rows, then a sequential scan\n>>>> is certainly prefered to an index scan. And then you have to sort those\n>>>> 130k rows by partnumber. This *might* be spilling to disk depending on\n>>>> what your workmem/sortmem is set to.\n>>>>\n>>>> I would also say that what you would really want is some way to get the\n>>>> whole thing from an index. And I think the way to do that is:\n>>>>\n>>>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>>>> test(partnumber, productlistid, typeid);\n>>>>\n>>>> VACUUM ANALYZE test;\n>>>>\n>>>> EXPLAIN ANALYZE SELECT * FROM test\n>>>> WHERE productlistid=3 AND typeid=9\n>>>> ORDER BY partnumber, productlistid, typeid\n>>>> LIMIT 15\n>>>> ;\n>>>>\n>>>> The trick is that you have to match the order by exactly with the \n>>>> index,\n>>>> so the planner realizes it can do an indexed lookup to get the \n>>>> information.\n>>>>\n>>>> You could also just create an index on partnumber, and see how that\n>>>> affects your original query. I think the planner could use an index\n>>>> lookup on partnumber to get the ordering correct. 
But it will have \n>>>> to do\n>>>> filtering after the fact based on productlistid and typeid.\n>>>> With my extended index, I think the planner can be smarter and lookup\n>>>> all 3 by the index.\n>>>>\n>>>>>\n>>>>> Thanks!\n>>>>> Clark\n>>>>\n>>>>\n>>>>\n>>>> Good luck,\n>>>> John\n>>>> =:->\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n>>\n>>\n>> -- \n>> Your PostgreSQL solutions provider, Command Prompt, Inc.\n>> 24x7 support - 1.800.492.2240, programming, and consulting\n>> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n>> http://www.commandprompt.com / http://www.postgresql.org\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 10 Jun 2005 17:48:39 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> hmm, i'm baffled. i simplified the query\n> and it is still taking forever...\n> \n> \n> test\n> -------------------------\n> id | integer\n> partnumber | character varying(32)\n> productlistid | integer\n> typeid | integer\n> \n> \n> Indexes:\n> \"test_productlistid\" btree (productlistid)\n> \"test_typeid\" btree (typeid)\n> \"test_productlistid_typeid\" btree (productlistid, typeid)\n> \n> \n> explain analyze select * from test where (productlistid=3 and typeid=9);\n> \n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n> time=516.459..41930.250 rows=132528 loops=1)\n> Filter: ((productlistid = 3) AND (typeid = 9))\n> Total runtime: 41975.154 ms\n> (3 rows)\n> \n> \n\nThis query is still going to take a long time, because you have to scan\nthe whole table. Your WHERE clause is not very specific (it takes 25% of\nthe table). Convention says that any time you want > 5-10% of a table, a\nsequential scan is better, because it does it in order.\n\nNow if you did:\n\nexplain analyze select * from test where (productlistid=3 and typeid=9)\nlimit 15;\n\nI think that would be very fast.\n\nI am a little surprised that it is taking 40s to scan only 400k rows,\nthough. On an older machine of mine (with only 256M ram and dual 450MHz\nCelerons), I have a table with 74k rows which takes about .5 sec. 
At\nthose numbers it should take more like 4s not 40.\n\nJohn\n=:->", "msg_date": "Fri, 10 Jun 2005 19:51:36 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "Query should return 132,528 rows.\n\nvbp=# set enable_seqscan = false;\nSET\nvbp=# explain analyze select * from test where (productlistid=3 and typeid=9);\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using test_typeid on test (cost=0.00..137223.89 rows=156194 \nwidth=725) (actual time=25.999..25708.478 rows=132528\n loops=1)\n Index Cond: (typeid = 9)\n Filter: (productlistid = 3)\n Total runtime: 25757.679 ms\n(4 rows)\n\n\nOn Fri, 10 Jun 2005, Joshua D. Drake wrote:\n\n> Clark Slater wrote:\n>> thanks for your suggestion.\n>> a small improvement. still pretty slow...\n>> \n>> vbp=# alter table test alter column productlistid set statistics 150;\n>> ALTER TABLE\n>> vbp=# alter table test alter column typeid set statistics 150;\n>> ALTER TABLE\n>> vbp=# explain analyze select * from test where (productlistid=3 and \n>\n> Hello,\n>\n> Also what happens if you:\n>\n> set enable_seqscan = false;\n> explain analyze query....\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>\n>> typeid=9);\n>> QUERY PLAN\n>> \n>> ------------------------------------------------------------------------------ \n>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual \n>> time=525.617..36802.556 rows=132528 loops=1)\n>> Filter: ((productlistid = 3) AND (typeid = 9))\n>> Total runtime: 36847.754 ms\n>> (3 rows)\n>> \n>> Time: 36850.719 ms\n>> \n>> \n>> On Fri, 10 Jun 2005, Joshua D. Drake wrote:\n>> \n>>> Clark Slater wrote:\n>>> \n>>>> hmm, i'm baffled. i simplified the query\n>>>> and it is still taking forever...\n>>> \n>>> \n>>> What happens if you:\n>>> \n>>> alter table test alter column productlistid set statistics 150;\n>>> alter table test alter column typeid set statistics 150;\n>>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>>> \n>>> Sincerely,\n>>> \n>>> Joshua D. Drake\n>>> \n>>> \n>>>> \n>>>> \n>>>> test\n>>>> -------------------------\n>>>> id | integer\n>>>> partnumber | character varying(32)\n>>>> productlistid | integer\n>>>> typeid | integer\n>>>> \n>>>> \n>>>> Indexes:\n>>>> \"test_productlistid\" btree (productlistid)\n>>>> \"test_typeid\" btree (typeid)\n>>>> \"test_productlistid_typeid\" btree (productlistid, typeid)\n>>>> \n>>>> \n>>>> explain analyze select * from test where (productlistid=3 and typeid=9);\n>>>> \n>>>> QUERY PLAN\n>>>> -----------------------------------------------------------------------\n>>>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n>>>> time=516.459..41930.250 rows=132528 loops=1)\n>>>> Filter: ((productlistid = 3) AND (typeid = 9))\n>>>> Total runtime: 41975.154 ms\n>>>> (3 rows)\n>>>> \n>>>> \n>>>> System specs:\n>>>> PostgreSQL 7.4.2 on RedHat 9\n>>>> dual AMD Athlon 2GHz processors\n>>>> 1 gig memory\n>>>> mirrored 7200 RPM IDE disks\n>>>> \n>>>> \n>>>> On Fri, 10 Jun 2005, John A Meinel wrote:\n>>>> \n>>>>> Clark Slater wrote:\n>>>>> \n>>>>>> Hi-\n>>>>>> \n>>>>>> Would someone please enlighten me as\n>>>>>> to why I'm not seeing a faster execution\n>>>>>> time on the simple scenario below?\n>>>>>> \n>>>>>> there are 412,485 rows in the table and the\n>>>>>> query matches on 132,528 rows, taking\n>>>>>> almost a minute to execute. 
vaccuum\n>>>>>> analyze was just run.\n>>>>> \n>>>>> \n>>>>> \n>>>>> Well, if you are matching 130k out of 400k rows, then a sequential scan\n>>>>> is certainly prefered to an index scan. And then you have to sort those\n>>>>> 130k rows by partnumber. This *might* be spilling to disk depending on\n>>>>> what your workmem/sortmem is set to.\n>>>>> \n>>>>> I would also say that what you would really want is some way to get the\n>>>>> whole thing from an index. And I think the way to do that is:\n>>>>> \n>>>>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>>>>> test(partnumber, productlistid, typeid);\n>>>>> \n>>>>> VACUUM ANALYZE test;\n>>>>> \n>>>>> EXPLAIN ANALYZE SELECT * FROM test\n>>>>> WHERE productlistid=3 AND typeid=9\n>>>>> ORDER BY partnumber, productlistid, typeid\n>>>>> LIMIT 15\n>>>>> ;\n>>>>> \n>>>>> The trick is that you have to match the order by exactly with the index,\n>>>>> so the planner realizes it can do an indexed lookup to get the \n>>>>> information.\n>>>>> \n>>>>> You could also just create an index on partnumber, and see how that\n>>>>> affects your original query. I think the planner could use an index\n>>>>> lookup on partnumber to get the ordering correct. But it will have to do\n>>>>> filtering after the fact based on productlistid and typeid.\n>>>>> With my extended index, I think the planner can be smarter and lookup\n>>>>> all 3 by the index.\n>>>>> \n>>>>>> \n>>>>>> Thanks!\n>>>>>> Clark\n>>>>> \n>>>>> \n>>>>> \n>>>>> Good luck,\n>>>>> John\n>>>>> =:->\n>>>>> \n>>>> \n>>>> ---------------------------(end of broadcast)---------------------------\n>>>> TIP 2: you can get off all lists at once with the unregister command\n>>>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>> \n>>> \n>>> \n>>> -- \n>>> Your PostgreSQL solutions provider, Command Prompt, Inc.\n>>> 24x7 support - 1.800.492.2240, programming, and consulting\n>>> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n>>> http://www.commandprompt.com / http://www.postgresql.org\n>>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n> -- \n> Your PostgreSQL solutions provider, Command Prompt, Inc.\n> 24x7 support - 1.800.492.2240, programming, and consulting\n> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n> http://www.commandprompt.com / http://www.postgresql.org\n>\n", "msg_date": "Fri, 10 Jun 2005 20:52:17 -0400 (EDT)", "msg_from": "Clark Slater <[email protected]>", "msg_from_op": true, "msg_subject": "Re: faster search" }, { "msg_contents": "Clark Slater wrote:\n> Query should return 132,528 rows.\n\nO.k. then the planner is doing fine it looks like. The problem is you \nare pulling 132,528 rows. I would suggest moving to a cursor which will\nallow you to fetch in smaller chunks much quicker.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> vbp=# set enable_seqscan = false;\n> SET\n> vbp=# explain analyze select * from test where (productlistid=3 and \n> typeid=9);\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Index Scan using test_typeid on test (cost=0.00..137223.89 rows=156194 \n> width=725) (actual time=25.999..25708.478 rows=132528\n> loops=1)\n> Index Cond: (typeid = 9)\n> Filter: (productlistid = 3)\n> Total runtime: 25757.679 ms\n> (4 rows)\n> \n> \n> On Fri, 10 Jun 2005, Joshua D. Drake wrote:\n> \n>> Clark Slater wrote:\n>>\n>>> thanks for your suggestion.\n>>> a small improvement. 
still pretty slow...\n>>>\n>>> vbp=# alter table test alter column productlistid set statistics 150;\n>>> ALTER TABLE\n>>> vbp=# alter table test alter column typeid set statistics 150;\n>>> ALTER TABLE\n>>> vbp=# explain analyze select * from test where (productlistid=3 and \n>>\n>>\n>> Hello,\n>>\n>> Also what happens if you:\n>>\n>> set enable_seqscan = false;\n>> explain analyze query....\n>>\n>> Sincerely,\n>>\n>> Joshua D. Drake\n>>\n>>\n>>\n>>> typeid=9);\n>>> QUERY PLAN\n>>>\n>>> ------------------------------------------------------------------------------ \n>>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) \n>>> (actual time=525.617..36802.556 rows=132528 loops=1)\n>>> Filter: ((productlistid = 3) AND (typeid = 9))\n>>> Total runtime: 36847.754 ms\n>>> (3 rows)\n>>>\n>>> Time: 36850.719 ms\n>>>\n>>>\n>>> On Fri, 10 Jun 2005, Joshua D. Drake wrote:\n>>>\n>>>> Clark Slater wrote:\n>>>>\n>>>>> hmm, i'm baffled. i simplified the query\n>>>>> and it is still taking forever...\n>>>>\n>>>>\n>>>>\n>>>> What happens if you:\n>>>>\n>>>> alter table test alter column productlistid set statistics 150;\n>>>> alter table test alter column typeid set statistics 150;\n>>>> explain analyze select * from test where (productlistid=3 and \n>>>> typeid=9);\n>>>>\n>>>> Sincerely,\n>>>>\n>>>> Joshua D. Drake\n>>>>\n>>>>\n>>>>>\n>>>>>\n>>>>> test\n>>>>> -------------------------\n>>>>> id | integer\n>>>>> partnumber | character varying(32)\n>>>>> productlistid | integer\n>>>>> typeid | integer\n>>>>>\n>>>>>\n>>>>> Indexes:\n>>>>> \"test_productlistid\" btree (productlistid)\n>>>>> \"test_typeid\" btree (typeid)\n>>>>> \"test_productlistid_typeid\" btree (productlistid, typeid)\n>>>>>\n>>>>>\n>>>>> explain analyze select * from test where (productlistid=3 and \n>>>>> typeid=9);\n>>>>>\n>>>>> QUERY PLAN\n>>>>> ----------------------------------------------------------------------- \n>>>>>\n>>>>> Seq Scan on test (cost=0.00..96458.27 rows=156194 width=725) (actual\n>>>>> time=516.459..41930.250 rows=132528 loops=1)\n>>>>> Filter: ((productlistid = 3) AND (typeid = 9))\n>>>>> Total runtime: 41975.154 ms\n>>>>> (3 rows)\n>>>>>\n>>>>>\n>>>>> System specs:\n>>>>> PostgreSQL 7.4.2 on RedHat 9\n>>>>> dual AMD Athlon 2GHz processors\n>>>>> 1 gig memory\n>>>>> mirrored 7200 RPM IDE disks\n>>>>>\n>>>>>\n>>>>> On Fri, 10 Jun 2005, John A Meinel wrote:\n>>>>>\n>>>>>> Clark Slater wrote:\n>>>>>>\n>>>>>>> Hi-\n>>>>>>>\n>>>>>>> Would someone please enlighten me as\n>>>>>>> to why I'm not seeing a faster execution\n>>>>>>> time on the simple scenario below?\n>>>>>>>\n>>>>>>> there are 412,485 rows in the table and the\n>>>>>>> query matches on 132,528 rows, taking\n>>>>>>> almost a minute to execute. vaccuum\n>>>>>>> analyze was just run.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> Well, if you are matching 130k out of 400k rows, then a sequential \n>>>>>> scan\n>>>>>> is certainly prefered to an index scan. And then you have to sort \n>>>>>> those\n>>>>>> 130k rows by partnumber. This *might* be spilling to disk \n>>>>>> depending on\n>>>>>> what your workmem/sortmem is set to.\n>>>>>>\n>>>>>> I would also say that what you would really want is some way to \n>>>>>> get the\n>>>>>> whole thing from an index. 
And I think the way to do that is:\n>>>>>>\n>>>>>> CREATE INDEX test_partnum_listid_typeid_idx ON\n>>>>>> test(partnumber, productlistid, typeid);\n>>>>>>\n>>>>>> VACUUM ANALYZE test;\n>>>>>>\n>>>>>> EXPLAIN ANALYZE SELECT * FROM test\n>>>>>> WHERE productlistid=3 AND typeid=9\n>>>>>> ORDER BY partnumber, productlistid, typeid\n>>>>>> LIMIT 15\n>>>>>> ;\n>>>>>>\n>>>>>> The trick is that you have to match the order by exactly with the \n>>>>>> index,\n>>>>>> so the planner realizes it can do an indexed lookup to get the \n>>>>>> information.\n>>>>>>\n>>>>>> You could also just create an index on partnumber, and see how that\n>>>>>> affects your original query. I think the planner could use an index\n>>>>>> lookup on partnumber to get the ordering correct. But it will have \n>>>>>> to do\n>>>>>> filtering after the fact based on productlistid and typeid.\n>>>>>> With my extended index, I think the planner can be smarter and lookup\n>>>>>> all 3 by the index.\n>>>>>>\n>>>>>>>\n>>>>>>> Thanks!\n>>>>>>> Clark\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> Good luck,\n>>>>>> John\n>>>>>> =:->\n>>>>>>\n>>>>>\n>>>>> ---------------------------(end of \n>>>>> broadcast)---------------------------\n>>>>> TIP 2: you can get off all lists at once with the unregister command\n>>>>> (send \"unregister YourEmailAddressHere\" to \n>>>>> [email protected])\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> -- \n>>>> Your PostgreSQL solutions provider, Command Prompt, Inc.\n>>>> 24x7 support - 1.800.492.2240, programming, and consulting\n>>>> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n>>>> http://www.commandprompt.com / http://www.postgresql.org\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>>\n>>\n>>\n>> -- \n>> Your PostgreSQL solutions provider, Command Prompt, Inc.\n>> 24x7 support - 1.800.492.2240, programming, and consulting\n>> Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n>> http://www.commandprompt.com / http://www.postgresql.org\n>>\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 10 Jun 2005 17:55:51 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "Steve Atkins wrote:\n\n> On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:\n> \n>>Hi-\n>>\n>>Would someone please enlighten me as\n>>to why I'm not seeing a faster execution\n>>time on the simple scenario below?\n> \n > [...]\n >\n> Create an index on (productlistid, typeid, partnumber) then\n> \n> select * from test where productlistid=3 and typeid=9\n> order by productlistid, typeid, partnumber LIMIT 15;\n> \n\nClark, try also adding (just for testing) partnumber to your\nwhere clause, like this:\n\n select * from test where productlistid=3 and typeid=9\n and partnumber='foo' order by productlistid,\n typeid, partnumber;\n\nand check output of explain analyze.\n\nI had experiences of planner \"bad\" use of indexes when attribute\ntypes were integer and cardinality was low (a single attribute\nvalue, like \"typeid=9\" selects one or few rows).\nHowever, this was on 7.1.3, and probably is not relevant to your case.\n\n-- \nCosimo\n\n", "msg_date": "Sat, 11 Jun 2005 13:05:55 +0200", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> I am a little surprised that it is taking 40s to scan only 400k rows,\n> though.\n\nYeah, that seemed high to me too. Table bloat maybe? It would be\ninteresting to look at the output of \"vacuum verbose test\" to see\nhow much dead space there is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 11 Jun 2005 12:49:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster search " } ]
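Pulling the suggestions from this thread together — John's composite index, an ORDER BY that matches it, and Joshua's cursor — a rough, untested sketch against the test table might look like the following; the index name, cursor name and batch size are invented for illustration.

-- Index whose leading columns match the WHERE clause and whose full order
-- matches the ORDER BY; VERBOSE also reports the dead-tuple bloat Tom
-- asked about.
CREATE INDEX test_plid_typeid_part ON test (productlistid, typeid, partnumber);
VACUUM VERBOSE ANALYZE test;

-- The planner can now walk the index in order and stop after 15 rows
-- instead of scanning and sorting ~132k rows.
SELECT *
  FROM test
 WHERE productlistid = 3 AND typeid = 9
 ORDER BY productlistid, typeid, partnumber
 LIMIT 15;

-- If the application really needs all 132,528 matching rows, stream them
-- in batches with a cursor rather than materializing the whole result.
BEGIN;
DECLARE part_cur CURSOR FOR
    SELECT * FROM test
     WHERE productlistid = 3 AND typeid = 9
     ORDER BY productlistid, typeid, partnumber;
FETCH FORWARD 1000 FROM part_cur;  -- repeat until no rows come back
CLOSE part_cur;
COMMIT;

The cursor keeps its position between fetches, so unlike a LIMIT/OFFSET loop it does not have to re-scan the skipped rows for every page.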
[ { "msg_contents": "With your current (apparently well-normalized) schema, I don't see how\nyou can get a better query plan than that. There may be something you\ncan do in terms of memory configuration to get it to execute somewhat\nfaster, but the only way to make it really fast is to de-normalize. \nThis is something which is often necessary for performance.\n \nIf you add a column to the person table for \"last_food_id\" and triggers\nto maintain it when the food table is modified, voila! You have a\nsimple and fast way to get the results you want.\n \n-Kevin\n \n \n>>> Junaili Lie <[email protected]> 06/09/05 8:30 PM >>>\nHi Kevin,\nThanks for the reply.\nI tried that query. It definately faster, but not fast enough (took\naround 50 second to complete).\nI have around 2.5 million on food and 1000 on person.\nHere is the query plan:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..11662257.52 rows=1441579 width=16)\n Merge Cond: (\"outer\".id = \"inner\".p_id)\n -> Index Scan using person_pkey on person p (cost=0.00..25.17\nrows=569 width=8)\n -> Index Scan using p_id_food_index on food f \n(cost=0.00..11644211.28 rows=1441579 width=16)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using p_id_food_index on food f2 \n(cost=0.00..11288.47 rows=2835 width=177)\n Index Cond: (p_id = $0)\n Filter: (id > $1)\n(9 rows)\n\nI appreciate if you have further ideas to troubleshoot this issue.\nThank you!\n\nOn 6/8/05, Kevin Grittner <[email protected]> wrote:\n> This is a pattern which I've seen many of times. I call it a \"best\n> choice\" query -- you can easily match a row from one table against any\n> of a number of rows in another, the trick is to pick the one that\n> matters most. I've generally found that I want the query results to\n> show more than the columns used for making the choice (and there can\nbe\n> many), which rules out the min/max technique. What works in a pretty\n> straitforward way, and generally optimizes at least as well as the\n> alternatives, is to join to the set of candidate rows and add a \"not\n> exists\" test to eliminate all but the best choice.\n> \n> For your example, I've taken some liberties and added hypothetical\n> columns from both tables to the result set, to demonstrate how that\n> works. Feel free to drop them or substitute actual columns as you see\n> fit. This will work best if there is an index for the food table on\n> p_id and id. Please let me know whether this works for you.\n> \n> select p.id as p_id, p.fullname, f.id, f.foodtype, f.ts\n> from food f join person p\n> on f.p_id = p.id\n> and not exists (select * from food f2 where f2.p_id = f.p_id and f2.id\n>\n> f.id)\n> order by p_id\n> \n> Note that this construct works for inner or outer joins and works\n> regardless of how complex the logic for picking the best choice is. I\n> think one reason this tends to optimize well is that an EXISTS test\ncan\n> finish as soon as it finds one matching row.\n> \n> -Kevin\n> \n> \n> >>> Junaili Lie <[email protected]> 06/08/05 2:34 PM >>>\n> Hi,\n> I have the following table:\n> person - primary key id, and some attributes\n> food - primary key id, foreign key p_id reference to table person.\n> \n> table food store all the food that a person is eating. 
The more recent\n> food is indicated by the higher food.id.\n> \n> I need to find what is the most recent food a person ate for every\n> person.\n> The query:\n> select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> by f.p_id will work.\n> But I understand this is not the most efficient way. Is there another\n> way to rewrite this query? (maybe one that involves order by desc\n> limit 1)\n> \n> Thank you in advance.\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\n> your\n> joining column's datatypes do not match\n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 10 Jun 2005 14:49:57 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with rewriting query" }, { "msg_contents": "[Kevin Grittner - Fri at 02:49:57PM -0500]\n> If you add a column to the person table for \"last_food_id\" and triggers\n> to maintain it when the food table is modified, voila! You have a\n> simple and fast way to get the results you want.\n\nReminds me about the way the precursor software of our product was made,\nwhenever it was needed to check the balance of a customer, it was needed to\nscan the whole transaction table and sum up all transactions. This\noperation eventually took 3-4 seconds before we released the new software,\nand the customers balance was supposed to show up at several web pages :-)\n\nBy now we have the updated balance both in the customer table and as\n\"post_balance\" in the transaction table. Sometimes redundancy is good.\nMuch easier to solve inconsistency problems as well :-)\n\n-- \nTobias Brox, +47-91700050\n", "msg_date": "Sat, 11 Jun 2005 12:59:09 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with rewriting query" } ]
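Kevin's denormalization suggestion ("last_food_id" on person, maintained by triggers on food) might be sketched as below. This is untested and the function and trigger names are invented; it assumes only the id/p_id columns from the thread, handles INSERTs (deletes or corrections on food would need similar treatment), and requires the plpgsql language to be installed in the database.

-- Cache the newest food.id on each person row.
ALTER TABLE person ADD COLUMN last_food_id integer;

-- One-time backfill from the existing food rows.
UPDATE person
   SET last_food_id = (SELECT max(f.id) FROM food f WHERE f.p_id = person.id);

CREATE FUNCTION food_set_last() RETURNS trigger AS '
BEGIN
    UPDATE person
       SET last_food_id = NEW.id
     WHERE id = NEW.p_id
       AND (last_food_id IS NULL OR last_food_id < NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER food_set_last_trg
    AFTER INSERT ON food
    FOR EACH ROW EXECUTE PROCEDURE food_set_last();

-- The original "most recent food per person" question is now a plain join:
SELECT p.id AS p_id, f.*
  FROM person p
  JOIN food f ON f.id = p.last_food_id;

As Tobias' balance example illustrates, the price is a little redundancy and one extra UPDATE per insert into food, in exchange for replacing a 50-second aggregate with an indexed lookup.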
[ { "msg_contents": "Hi,\n\nI'm trying to update a table that has about 600.000 records.\nThe update query is very simple : update mytable set pagesdesc = - \npages ;\n\n(I use pagesdesc to avoid problems with sort that have one field in \nascending order and one in descending order. That was a problem I had \na week ago)\n\nThe query takes about half an hour to an hour to execute. I have tried \na lot of things.\nThis is my setup\n\nLinux Slackware 10.1\nPostgres 8.0.1\nMy filesystem has EXT2 filesystem so I don't have journaling.\nMy partition is mounted in fstab with the noatime option.\n\nI have tried to change some settings in $PGDATA/postgresql.conf. But \nthat does not seem to matter a lot.\nI'm not even sure that file is being used. I ran KSysGuard when \nexecuting my query and I don't see my processor being used more than \n20%\nThe memory increases for the cache, but not for the app itself.\n\nMy testsystem is an Asus portable, P4 with 1 Gig of RAM.\nDisk is speedy. All runs fine except for the update queries.\n\nI would appreciate some help or a document to point me to the settings \nI must change.\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Sun, 12 Jun 2005 19:40:29 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Updates on large tables are extremely slow" }, { "msg_contents": "Hi,\n\nAt 19:40 12/06/2005, Yves Vindevogel wrote:\n>Hi,\n>\n>I'm trying to update a table that has about 600.000 records.\n>The update query is very simple : update mytable set pagesdesc = - \n>pages ;\n>\n>(I use pagesdesc to avoid problems with sort that have one field in \n>ascending order and one in descending order. That was a problem I had a \n>week ago)\n\nAn index on (-pages) would probably do exactly what you want without having \nto add another column.\n\n>The query takes about half an hour to an hour to execute.\n\nDepending on the total size of the table and associated indexes and on your \nexact setup (especially your hardare), this could be quite normal: the \nexuctor goes through all rows in the table, and for each, creates a copy \nwith the additional column, updates indexes, and logs to WAL. You might \nwant to look into moving your WAL files (pg_xlog) to a separate disk, \nincrease WAL and checkpoint buffers, add more RAM, add more disks...\n\nBut as I said, you might not even need to do that, just use an index on an \nexpression...\n\nJacques.\n\n\n", "msg_date": "Sun, 12 Jun 2005 19:57:54 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "Yves Vindevogel wrote:\n> \n> I'm trying to update a table that has about 600.000 records.\n> The update query is very simple : update mytable set pagesdesc = - pages ;\n> \n> The query takes about half an hour to an hour to execute. 
I have tried a \n> lot of things.\n> \n\nHalf an hour seem a bit long - I would expect less than 5 minutes on \nreasonable hardware.\n\nYou may have dead tuple bloat - can you post the output of 'ANALYZE \nVERBOSE mytable' ?\n\nCheers\n\nMark\n", "msg_date": "Mon, 13 Jun 2005 14:43:41 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "Apologies - I should have said output of 'VACUUM VERBOSE mytable'.\n\n(been using 8.1, which displays dead tuple info in ANALYZE...).\n\nMark\n\nYves Vindevogel wrote:\n> rvponp=# analyze verbose tblPrintjobs ;\n> INFO: analyzing \"public.tblprintjobs\"\n> INFO: \"tblprintjobs\": 19076 pages, 3000 rows sampled, 588209 estimated \n> total rows\n> ANALYZE\n> \n> \n> On 13 Jun 2005, at 04:43, Mark Kirkwood wrote:\n> \n> Yves Vindevogel wrote:\n> \n> I'm trying to update a table that has about 600.000 records.\n> The update query is very simple : update mytable set pagesdesc =\n> - pages ;\n> The query takes about half an hour to an hour to execute. I have\n> tried a lot of things.\n> \n> \n> Half an hour seem a bit long - I would expect less than 5 minutes on\n> reasonable hardware.\n> \n> You may have dead tuple bloat - can you post the output of 'ANALYZE\n> VERBOSE mytable' ?\n", "msg_date": "Mon, 13 Jun 2005 20:54:23 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "rvponp=# vacuum verbose tblPrintjobs ;\nINFO: vacuuming \"public.tblprintjobs\"\nINFO: index \"pkprintjobs\" now contains 622972 row versions in 8410 \npages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.60s/0.31u sec elapsed 31.68 sec.\nINFO: index \"uxprintjobs\" now contains 622972 row versions in 3978 \npages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.15s/0.48u sec elapsed 3.59 sec.\nINFO: index \"ixprintjobsipaddress\" now contains 622972 row versions in \n2542 pages\nDETAIL: 9526 index row versions were removed.\n49 index pages have been deleted, 0 are currently reusable.\nCPU 0.13s/0.24u sec elapsed 2.57 sec.\nINFO: index \"ixprintjobshostname\" now contains 622972 row versions in \n2038 pages\nDETAIL: 9526 index row versions were removed.\n35 index pages have been deleted, 0 are currently reusable.\nCPU 0.09s/0.30u sec elapsed 1.14 sec.\nINFO: index \"ixprintjobsrecordnumber\" now contains 622972 row versions \nin 1850 pages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.28u sec elapsed 1.51 sec.\nINFO: index \"ixprintjobseventdate\" now contains 622972 row versions in \n1408 pages\nDETAIL: 9526 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.24u sec elapsed 2.61 sec.\nINFO: index \"ixprintjobseventtime\" now contains 622972 row versions in \n1711 pages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.12s/0.53u sec elapsed 11.66 sec.\nINFO: index \"ixprintjobseventcomputer\" now contains 622972 row \nversions in 2039 pages\nDETAIL: 9526 index row versions were removed.\n36 index pages have been deleted, 0 are currently reusable.\nCPU 0.12s/0.23u sec elapsed 1.27 sec.\nINFO: index \"ixprintjobseventuser\" now contains 622972 row versions 
in \n2523 pages\nDETAIL: 9526 index row versions were removed.\n19 index pages have been deleted, 0 are currently reusable.\nCPU 0.14s/0.24u sec elapsed 1.74 sec.\nINFO: index \"ixprintjobsloginuser\" now contains 622972 row versions in \n2114 pages\nDETAIL: 9526 index row versions were removed.\n13 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.32u sec elapsed 4.29 sec.\nINFO: index \"ixprintjobsprintqueue\" now contains 622972 row versions \nin 2201 pages\nDETAIL: 9526 index row versions were removed.\n30 index pages have been deleted, 0 are currently reusable.\nCPU 0.10s/0.34u sec elapsed 1.92 sec.\nINFO: index \"ixprintjobsprintport\" now contains 622972 row versions in \n3040 pages\nDETAIL: 9526 index row versions were removed.\n40 index pages have been deleted, 0 are currently reusable.\nCPU 0.18s/0.27u sec elapsed 2.63 sec.\nINFO: index \"ixprintjobssize\" now contains 622972 row versions in 1733 \npages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.16s/0.43u sec elapsed 4.07 sec.\nINFO: index \"ixprintjobspages\" now contains 622972 row versions in \n1746 pages\nDETAIL: 9526 index row versions were removed.\n24 index pages have been deleted, 0 are currently reusable.\nCPU 0.13s/0.22u sec elapsed 1.58 sec.\nINFO: index \"ixprintjobsapplicationtype\" now contains 622972 row \nversions in 1395 pages\nDETAIL: 9526 index row versions were removed.\n27 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.29u sec elapsed 1.20 sec.\nINFO: index \"ixprintjobsusertype\" now contains 622972 row versions in \n1393 pages\nDETAIL: 9526 index row versions were removed.\n24 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.22u sec elapsed 0.82 sec.\nINFO: index \"ixprintjobsdocumentname\" now contains 622972 row versions \nin 4539 pages\nDETAIL: 9526 index row versions were removed.\n6 index pages have been deleted, 0 are currently reusable.\nCPU 0.24s/0.38u sec elapsed 5.83 sec.\nINFO: index \"ixprintjobsdesceventdate\" now contains 622972 row \nversions in 1757 pages\nDETAIL: 9526 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.08s/0.25u sec elapsed 1.16 sec.\nINFO: index \"ixprintjobsdesceventtime\" now contains 622972 row \nversions in 1711 pages\nDETAIL: 9526 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.18s/0.52u sec elapsed 9.44 sec.\nINFO: index \"ixprintjobsdescpages\" now contains 622972 row versions in \n1748 pages\nDETAIL: 9526 index row versions were removed.\n24 index pages have been deleted, 0 are currently reusable.\nCPU 0.06s/0.26u sec elapsed 0.94 sec.\nINFO: index \"ixprintjobspagesperjob\" now contains 622972 row versions \nin 5259 pages\nDETAIL: 9526 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.31s/0.36u sec elapsed 5.47 sec.\nINFO: \"tblprintjobs\": removed 9526 row versions in 307 pages\nDETAIL: CPU 0.00s/0.06u sec elapsed 0.23 sec.\nINFO: \"tblprintjobs\": found 9526 removable, 622972 nonremovable row \nversions in 19382 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 75443 unused item pointers.\n0 pages are entirely empty.\nCPU 3.43s/6.83u sec elapsed 97.86 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2169880\"\nINFO: index \"pg_toast_2169880_index\" now contains 0 row versions in 1 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 
0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_2169880\": found 0 removable, 0 nonremovable row \nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nrvponp=#\n\n\n\nOn 13 Jun 2005, at 10:54, Mark Kirkwood wrote:\n\n> Apologies - I should have said output of 'VACUUM VERBOSE mytable'.\n>\n> (been using 8.1, which displays dead tuple info in ANALYZE...).\n>\n> Mark\n>\n> Yves Vindevogel wrote:\n>> rvponp=# analyze verbose tblPrintjobs ;\n>> INFO: analyzing \"public.tblprintjobs\"\n>> INFO: \"tblprintjobs\": 19076 pages, 3000 rows sampled, 588209 \n>> estimated total rows\n>> ANALYZE\n>> On 13 Jun 2005, at 04:43, Mark Kirkwood wrote:\n>> Yves Vindevogel wrote:\n>> I'm trying to update a table that has about 600.000 records.\n>> The update query is very simple : update mytable set \n>> pagesdesc =\n>> - pages ;\n>> The query takes about half an hour to an hour to execute. I \n>> have\n>> tried a lot of things.\n>> Half an hour seem a bit long - I would expect less than 5 minutes \n>> on\n>> reasonable hardware.\n>> You may have dead tuple bloat - can you post the output of \n>> 'ANALYZE\n>> VERBOSE mytable' ?\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 11:02:04 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "Hi there\nI have a query (please refer to \nhttp://213.173.234.215:8080/get_content_plan.htm for the query as well \nas query plan) that is slow when it's run the first time and fast(ish) \non all successive runs within a reasonable time period.\nThat is, if the query is not run for like 30 min, execution time returns \nto the initial time.\n\nThis leads me to suspect that when the query is first run, all used data \nhave to be fetched from the disk where as once it has been run all data \nis available in the OS's disk cache.\nComparing the execution times we're talking roughly a factor 35 in time \ndifference, thus optimization would be handy.\nIs there anway to either enhance the chance that the data can be found \nin the disk cache or allowing the database to fetch the data faster?\nIs this what the CLUSTER command is for, if so, which tables would I \nneed to cluster?\nOr is my only option to de-normalize the table structure around this \nquery to speed it up?\n\nFurthermore, it seems the database spends the majority of its time in \nthe loop marked with italic in the initial plan, any idea what it spends \nits time on there?\n\nDatabase is PG 7.3.9 on RH ES 3.0, with Dual XEON 1.9GHz processors and \n2GB of RAM.\neffective_cache_size = 100k\nshared_buffers = 14k\nrandom_page_cost = 3\ndefault_statistics_target = 50\nVACUUM ANALYZE runs every few hours, so statistics should be up to date.\n\nAppreciate any input here.\n\nCheers\nJona\n", "msg_date": "Mon, 13 Jun 2005 15:04:15 +0200", "msg_from": "Jona <[email protected]>", "msg_from_op": false, "msg_subject": "How to enhance the chance that data is in disk cache" }, { "msg_contents": "Yves Vindevogel <[email protected]> writes:\n> rvponp=3D# 
vacuum verbose tblPrintjobs ;\n> INFO: vacuuming \"public.tblprintjobs\"\n> [ twenty-one different indexes on one table ]\n\nWell, there's your problem. You think updating all those indexes is\nfree? It's *expensive*. Heed the manual's advice: avoid creating\nindexes you are not certain you need for identifiable commonly-used\nqueries.\n\n(The reason delete is fast is it doesn't have to touch the indexes ...\nthe necessary work is left to be done by VACUUM.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 10:32:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow " }, { "msg_contents": "Jona <[email protected]> writes:\n> I have a query (please refer to \n> http://213.173.234.215:8080/get_content_plan.htm for the query as well \n> as query plan) that is slow when it's run the first time and fast(ish) \n> on all successive runs within a reasonable time period.\n\n> This leads me to suspect that when the query is first run, all used data \n> have to be fetched from the disk where as once it has been run all data \n> is available in the OS's disk cache.\n\nSounds like that to me too.\n\n> Is there anway to either enhance the chance that the data can be found \n> in the disk cache or allowing the database to fetch the data faster?\n\nRun the query more often?\n\nAlso, that pile of INNER JOINs is forcing a probably-bad join order;\nyou need to think carefully about the order you want things joined in,\nor else convert the query to non-JOIN syntax. See the \"Performance\nTips\" chapter of the manual.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 10:51:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to enhance the chance that data is in disk cache " }, { "msg_contents": "Thank you for the response Tom, I bet you get a lot of mails with \n\"trivial\" solutions (mine likely being one of them)\nI for one however truly appreciate you taking the time to answer them.\n\n>Run the query more often?\n> \n>\nThe query is dynamically constructed from user input, although the total \nnumber of different queries that can be run is limited (around 10k \ndifferent combinations I suspect) it seems rather pointless to run all \nof them (or even the most common) more often just to keep the data in \nthe disk cache.\nIs there a way to make the data more accessible on the disk?\n\n>Also, that pile of INNER JOINs is forcing a probably-bad join order;\n>you need to think carefully about the order you want things joined in,\n>or else convert the query to non-JOIN syntax. 
See the \"Performance\n>Tips\" chapter of the manual.\n> \n>\nYou're probably right here, the join order must be bad though it just \nflattening the join and letting the planner decide on what would be best \nmakes the plan change for every execution.\nHave query cost variering from from 1350 to 4500.\nI wager it ends up using GEQO due to the number of possiblities for a \njoin order that the query has and thus just decides on a \"good\" plan out \nof those it examined.\nIn any case, the \"right\" way to do this is definning a good explicit \njoin order, no?\nOn top of my head I'm not sure how to re-write it proberly, suppose \ntrial and errors is the only way....\n From the plan it appears that the following part is where the cost \ndramatically increases (although the time does not??):\n-> Nested Loop (cost=0.00..1207.19 rows=75 width=32) (actual \ntime=0.28..18.47 rows=164 loops=1) \n -> Nested Loop (cost=0.00..868.23 rows=58 width=20) (actual \ntime=0.16..13.91 rows=164 loops=1) \n -> Index Scan using subcat_uq on sct2subcattype_tbl \n(cost=0.00..479.90 rows=82 width=8) (actual time=0.11..9.47 rows=164 \nloops=1)\n Index Cond: (subcattpid = 50) \n Filter: (NOT (subplan)) \n SubPlan \n -> Seq Scan on aff2sct2subcattype_tbl (cost=0.00..1.92 \nrows=1 width=4) (actual time=0.05..0.05 rows=0 loops=164) \n Filter: ((affid = 8) AND ($0 = sctid)) \n -> Index Scan using aff_price_uq on price_tbl (cost=0.00..4.72 \nrows=1 width=12) (actual time=0.02..0.02 rows=1 loops=164) \n Index Cond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \nouter\".sctid))\" \n -> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..5.86 \nrows=1 width=12) (actual time=0.02..0.02 rows=1 loops=164) \n Index Cond: ((statcon_tbl.sctid = outer\".sctid) AND \n(statcon_tbl.ctpid = 1))\" \nEspecially the index scan on subcat_uq seems rather expensive, but is \npretty fast.\nCan there be drawn a relation between estimated cost and execution time?\nAny other pointers in the right direction would be very much appreciated.\n\nFor the full query and query plan, please refer to: \nhttp://213.173.234.215:8080/get_content_plan.htm\n\nCheers\nJona\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>I have a query (please refer to \n>>http://213.173.234.215:8080/get_content_plan.htm for the query as well \n>>as query plan) that is slow when it's run the first time and fast(ish) \n>>on all successive runs within a reasonable time period.\n>> \n>>\n>\n> \n>\n>>This leads me to suspect that when the query is first run, all used data \n>>have to be fetched from the disk where as once it has been run all data \n>>is available in the OS's disk cache.\n>> \n>>\n>\n>Sounds like that to me too.\n>\n> \n>\n>>Is there anway to either enhance the chance that the data can be found \n>>in the disk cache or allowing the database to fetch the data faster?\n>> \n>>\n>\n> \n>\n\n>Run the query more often?\n> \n>\nThe query is dynamically constructed from user input, although the total \nnumber of different queries that can be run is limited (around 10k \ndifferent combinations I suspect) it seems rather pointless to run all \nof them (or even the most common) more often just to keep the data in \nthe disk cache.\nIs there a way to make the data more accessible on the disk?\n\n>Also, that pile of INNER JOINs is forcing a probably-bad join order;\n>you need to think carefully about the order you want things joined in,\n>or else convert the query to non-JOIN syntax. 
See the \"Performance\n>Tips\" chapter of the manual.\n> \n>\nYou're probably right herem though I'm not sure I can\n\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n\n\n\n\n\nThank you for the response Tom, I bet you get a lot of mails with\n\"trivial\" solutions (mine likely being one of them)\nI for one however truly appreciate you taking the time to answer them.\n\n\nRun the query more often?\n \n\nThe query is dynamically constructed from user input, although the\ntotal number of different queries that can be run is limited (around\n10k different combinations I suspect) it seems rather pointless to run\nall of them (or even the most common) more often just to keep the data\nin the disk cache.\nIs there a way to make the data more accessible on the disk?\n\n\nAlso, that pile of INNER JOINs is forcing a probably-bad join order;\nyou need to think carefully about the order you want things joined in,\nor else convert the query to non-JOIN syntax. See the \"Performance\nTips\" chapter of the manual.\n \n\nYou're probably right here, the join order must be bad though it just\nflattening the join and letting the planner decide on what would be\nbest makes the plan change for every execution.\nHave query cost variering from from 1350 to 4500.\nI wager it ends up using GEQO due to the number of possiblities for a\njoin order that the query has and thus just decides on a \"good\" plan\nout of those it examined.\nIn any case, the \"right\" way to do this is definning a good explicit\njoin order, no?\nOn top of my head I'm not sure how to re-write it proberly, suppose\ntrial and errors is the only way....\n>From the plan it appears that the following part is where the cost\ndramatically increases (although the time does not??):\n->  Nested Loop  (cost=0.00..1207.19 rows=75 width=32) (actual\ntime=0.28..18.47 rows=164 loops=1)      \n    ->  Nested Loop  (cost=0.00..868.23 rows=58 width=20) (actual\ntime=0.16..13.91 rows=164 loops=1)     \n        ->  Index Scan using subcat_uq on sct2subcattype_tbl \n(cost=0.00..479.90 rows=82 width=8) (actual time=0.11..9.47 rows=164\nloops=1)\n              Index Cond: (subcattpid = 50)     \n              Filter: (NOT (subplan))     \n              SubPlan     \n              ->  Seq Scan on aff2sct2subcattype_tbl \n(cost=0.00..1.92 rows=1 width=4) (actual time=0.05..0.05 rows=0\nloops=164)     \n                    Filter: ((affid = 8) AND ($0 = sctid))     \n        ->  Index Scan using aff_price_uq on price_tbl \n(cost=0.00..4.72 rows=1 width=12) (actual time=0.02..0.02 rows=1\nloops=164)     \n              Index Cond: ((price_tbl.affid = 8) AND (price_tbl.sctid =\nouter\".sctid))\"     \n    ->  Index Scan using ctp_statcon on statcon_tbl \n(cost=0.00..5.86 rows=1 width=12) (actual time=0.02..0.02 rows=1\nloops=164)     \n          Index Cond: ((statcon_tbl.sctid = outer\".sctid) AND\n(statcon_tbl.ctpid = 1))\"     \nEspecially the index scan on subcat_uq seems rather expensive, but is\npretty fast.\nCan there be drawn a relation between estimated cost and execution time?\nAny other pointers in the right direction would be very much\nappreciated.\n\nFor the full query and query plan, please refer to:\nhttp://213.173.234.215:8080/get_content_plan.htm\n\nCheers\nJona\n\nTom Lane wrote:\n\nJona <[email protected]> writes:\n \n\nI have a query (please refer to \nhttp://213.173.234.215:8080/get_content_plan.htm for the query as well \nas query plan) that 
is slow when it's run the first time and fast(ish) \non all successive runs within a reasonable time period.\n \n\n\n \n\nThis leads me to suspect that when the query is first run, all used data \nhave to be fetched from the disk where as once it has been run all data \nis available in the OS's disk cache.\n \n\n\nSounds like that to me too.\n\n \n\nIs there anway to either enhance the chance that the data can be found \nin the disk cache or allowing the database to fetch the data faster?\n \n\n\n \n\n\n\nRun the query more often?\n \n\nThe query is dynamically constructed from user input, although the\ntotal number of different queries that can be run is limited (around\n10k different combinations I suspect) it seems rather pointless to run\nall of them (or even the most common) more often just to keep the data\nin the disk cache.\nIs there a way to make the data more accessible on the disk?\n\n\nAlso, that pile of INNER JOINs is forcing a probably-bad join order;\nyou need to think carefully about the order you want things joined in,\nor else convert the query to non-JOIN syntax. See the \"Performance\nTips\" chapter of the manual.\n \n\nYou're probably right herem though I'm not sure I can \n\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster", "msg_date": "Mon, 13 Jun 2005 19:10:32 +0200", "msg_from": "Jona <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to enhance the chance that data is in disk cache" } ]
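(To illustrate the "non-JOIN syntax" point with the fragment that is visible in the posted plan: before 7.4 the planner takes explicit INNER JOIN syntax as a fixed join order, whereas a plain FROM list with WHERE conditions leaves the order up to the planner. The full select list is not shown in the plan, so a single known column stands in for it, and the NOT EXISTS is only a guess at how the NOT (subplan) filter was originally written.)

    SELECT s.sctid
      FROM sct2subcattype_tbl s,
           price_tbl          p,
           statcon_tbl        sc
     WHERE s.subcattpid = 50
       AND p.affid      = 8
       AND p.sctid      = s.sctid
       AND sc.ctpid     = 1
       AND sc.sctid     = s.sctid
       AND NOT EXISTS (SELECT 1
                         FROM aff2sct2subcattype_tbl a
                        WHERE a.affid = 8
                          AND a.sctid = s.sctid);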
[ { "msg_contents": "I've got a list of old resource requirements.\nI want to know how far off they are and if anything\ncrucial is missing. My usual recommendation is\n\"as much as you can afford\" so I don't usually deal\nwith real numbers :)\n\nRAM:\n Number of connections * 2MB\nDisk:\n Program and Manual 8-15MB\n Regression Tests 30MB\n Compiled Source 60-160MB\n Storage for user data ( as much as you can afford :)\n\nPlease copy me since I'm not officially on this list.\n\nThanks,\n\nElein\n\n============================================================\[email protected] Varlena, LLC www.varlena.com\n\n PostgreSQL Consulting, Support & Training \n\nPostgreSQL General Bits http://www.varlena.com/GeneralBits/\n=============================================================\nI have always depended on the [QA] of strangers.\n\n", "msg_date": "Sun, 12 Jun 2005 17:30:49 -0700", "msg_from": "[email protected] (elein)", "msg_from_op": true, "msg_subject": "Resource Requirements" }, { "msg_contents": "Elein,\n\n> I've got a list of old resource requirements.\n> I want to know how far off they are and if anything\n> crucial is missing. My usual recommendation is\n> \"as much as you can afford\" so I don't usually deal\n> with real numbers :)\n\nThese look very approximate.\n\n> RAM:\n> Number of connections * 2MB\n\nThat's not a bad recommendation, but not an actual requirement. It really \ndepends on how much sort_mem you need. Could vary from 0.5mb to as much \nas 256mb per connection, depending on your application.\n\n> Disk:\n> Program and Manual 8-15MB\n> Regression Tests 30MB\n> Compiled Source 60-160MB\n\nWell, my compiled source takes up 87mb, and the installed PostgreSQL seems \nto be about 41mb including WAL. Not sure how much the regression tests \nare.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 13 Jun 2005 15:06:13 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Resource Requirements" } ]
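(To put rough numbers on the RAM rule of thumb above: with, say, 100 max_connections and sort_mem left at a modest 4096 -- i.e. 4 MB -- the per-backend budget works out to roughly 100 x (2 MB + 4 MB) = 600 MB, before shared_buffers and the OS cache are counted. A single complex query can use several sort_mem allocations at once, so these illustrative figures are a floor rather than a ceiling.)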
[ { "msg_contents": "Hi,\n\nI have a view that has something like this: select x, y, z from tbl \norder by x, y\nI have created a special index on x + y\nI have run analyze\n\nStill, when I use explain, pg says it will first sort my tables instead \nof using my index\nHow is that possible ?\n\nWhen I do explain select x,y,z from tbl order by x, y, it works like \nI want it to work\n\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 08:54:21 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "View not using index" }, { "msg_contents": "On Mon, 13 Jun 2005 04:54 pm, Yves Vindevogel wrote:\n> Still, when I use explain, pg says it will first sort my tables instead \n> of using my index\n> How is that possible ?\n\nCan we see the output of the explain analyze?\nThe definition of the view?\n\nRegards\n\nRussell Smith\n", "msg_date": "Mon, 13 Jun 2005 17:05:35 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View not using index" }, { "msg_contents": "rvponp=# explain select * from vw_document_pagesperjob ;\n QUERY PLAN\n------------------------------------------------------------------------ \n----------------\n Subquery Scan vw_document_pagesperjob (cost=82796.59..90149.20 \nrows=588209 width=706)\n -> Sort (cost=82796.59..84267.11 rows=588209 width=74)\n Sort Key: tblprintjobs.descpages, tblprintjobs.documentname\n -> Seq Scan on tblprintjobs (cost=0.00..26428.61 rows=588209 \nwidth=74)\n(4 rows)\n\nrvponp=# explain select * from vw_document_pagesperjob limit 10 ;\n QUERY PLAN\n------------------------------------------------------------------------ \n----------------------\n Limit (cost=82796.59..82796.72 rows=10 width=706)\n -> Subquery Scan vw_document_pagesperjob (cost=82796.59..90149.20 \nrows=588209 width=706)\n -> Sort (cost=82796.59..84267.11 rows=588209 width=74)\n Sort Key: tblprintjobs.descpages, \ntblprintjobs.documentname\n -> Seq Scan on tblprintjobs (cost=0.00..26428.61 \nrows=588209 width=74)\n(5 rows)\n\nrvponp=# explain select documentname, eventdate, eventtime, loginuser, \npages from tblPrintjobs order\nby descpages, documentname ;\n QUERY PLAN\n------------------------------------------------------------------------ \n----\n Sort (cost=81326.07..82796.59 rows=588209 width=74)\n Sort Key: descpages, documentname\n -> Seq Scan on tblprintjobs (cost=0.00..24958.09 rows=588209 \nwidth=74)\n(3 rows)\n\nrvponp=# explain select documentname, eventdate, eventtime, loginuser, \npages from tblPrintjobs order\nby descpages, documentname limit 10 ;\n QUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------\n Limit (cost=0.00..33.14 rows=10 width=74)\n -> Index Scan using ixprintjobspagesperjob on tblprintjobs \n(cost=0.00..1949116.68 rows=588209 width=74)\n(2 rows)\n\n\ncreate or replace view vw_document_pagesperjob as\n\tselect documentname, eventdate, eventtime, loginuser,\n\t\tfnFormatInt(pages) as pages\n\tfrom tblPrintjobs\n\torder by descpages, documentname ;\n\n\n\n\n\n\nOn 13 Jun 2005, at 09:05, Russell Smith wrote:\n\n> On Mon, 13 Jun 2005 04:54 pm, Yves Vindevogel wrote:\n>> Still, when I 
use explain, pg says it will first sort my tables \n>> instead\n>> of using my index\n>> How is that possible ?\n>\n> Can we see the output of the explain analyze?\n> The definition of the view?\n>\n> Regards\n>\n> Russell Smith\n>\n>\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 09:18:50 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View not using index" }, { "msg_contents": "Please CC the list.\n\nOn Mon, 13 Jun 2005 05:11 pm, Yves Vindevogel wrote:\n> create or replace view vw_document_pagesperjob as\n> select documentname, eventdate, eventtime, loginuser,\n> fnFormatInt(pages) as pages\n> from tblPrintjobs\n> order by descpages, documentname ;\n> \n> rvponp=# explain select documentname, eventdate, eventtime, loginuser, \n> pages from tblPrintjobs order\n> by descpages, documentname ;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ----\n> Sort (cost=81326.07..82796.59 rows=588209 width=74)\n> Sort Key: descpages, documentname\n> -> Seq Scan on tblprintjobs (cost=0.00..24958.09 rows=588209 \n> width=74)\n> (3 rows)\n> \nPostgresql must scan the entire heap anyway, so ordering in memory will be faster,\nand you don't have to load the pages from disk in a random order.\n\n> rvponp=# explain select documentname, eventdate, eventtime, loginuser, \n> pages from tblPrintjobs order\n> by descpages, documentname limit 10 ;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> -------------------------------------\n> Limit (cost=0.00..33.14 rows=10 width=74)\n> -> Index Scan using ixprintjobspagesperjob on tblprintjobs \n> (cost=0.00..1949116.68 rows=588209 width=74)\n> (2 rows)\n> \nThat's because an index scan is only useful if you are scanning a small\npercentage of the table. Which you are doing when you have the limit clause.\n\n> Strange thing is, when I immediately add the limit clause, it runs like \n> I want it to run.\n\nI am not sure of the usefulness of the first query anyway, it returns a lot of data.\nHow do you expect it not to scan the whole table when you want all the data form\nthe table?\n\n\n> Problem is that I run this from Cocoon. Cocoon adds the limit clause \n> itself.\n> Maybe I need to rewrite everything in functions instead of views.\n> \nFunctions, views. It will make not difference. 
The issue is the amount of data returned\nrelative to the amount of data in the table.\n\nRegards\n\nRussell Smith\n", "msg_date": "Mon, 13 Jun 2005 17:18:59 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View not using index" }, { "msg_contents": "Note the last query below (prev post)\nThere it does use the index\n\n\nrvponp=# create type tpJobsPerDay as\nrvponp-# ( documentname varchar(1000),\nrvponp(# eventdate date,\nrvponp(# eventtime time,\nrvponp(# loginuser varchar(255),\nrvponp(# pages varchar(20)\nrvponp(# ) ;\nCREATE TYPE\nrvponp=# create function fnJobsPerDay (bigint, bigint) returns setof \ntpJobsPerDay as\nrvponp-# '\nrvponp'# select documentname, eventdate, eventtime, loginuser, \nfnFormatInt(pages) as pages\nrvponp'# from tblPrintjobs order by descpages, documentname\nrvponp'# offset $1 limit $2 ;\nrvponp'# ' language 'sql' ;\nCREATE FUNCTION\n\nrvponp=# analyze ;\nANALYZE\nrvponp=# explain select * from fnJobsperday (1, 10) ;\n QUERY PLAN\n-----------------------------------------------------------------------\n Function Scan on fnjobsperday (cost=0.00..12.50 rows=1000 width=697)\n(1 row)\n\n\nWith the function, it still is very slow. I can't see anything in the \nexplain here, but it seems to be using a table scan.\n\nOn 13 Jun 2005, at 09:18, Yves Vindevogel wrote:\n\n> rvponp=# explain select * from vw_document_pagesperjob ;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -----------------\n> Subquery Scan vw_document_pagesperjob (cost=82796.59..90149.20 \n> rows=588209 width=706)\n> -> Sort (cost=82796.59..84267.11 rows=588209 width=74)\n> Sort Key: tblprintjobs.descpages, tblprintjobs.documentname\n> -> Seq Scan on tblprintjobs (cost=0.00..26428.61 \n> rows=588209 width=74)\n> (4 rows)\n>\n> rvponp=# explain select * from vw_document_pagesperjob limit 10 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -----------------------\n> Limit (cost=82796.59..82796.72 rows=10 width=706)\n> -> Subquery Scan vw_document_pagesperjob (cost=82796.59..90149.20 \n> rows=588209 width=706)\n> -> Sort (cost=82796.59..84267.11 rows=588209 width=74)\n> Sort Key: tblprintjobs.descpages, \n> tblprintjobs.documentname\n> -> Seq Scan on tblprintjobs (cost=0.00..26428.61 \n> rows=588209 width=74)\n> (5 rows)\n>\n> rvponp=# explain select documentname, eventdate, eventtime, loginuser, \n> pages from tblPrintjobs order\n> by descpages, documentname ;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -----\n> Sort (cost=81326.07..82796.59 rows=588209 width=74)\n> Sort Key: descpages, documentname\n> -> Seq Scan on tblprintjobs (cost=0.00..24958.09 rows=588209 \n> width=74)\n> (3 rows)\n>\n> rvponp=# explain select documentname, eventdate, eventtime, loginuser, \n> pages from tblPrintjobs order\n> by descpages, documentname limit 10 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> --------------------------------------\n> Limit (cost=0.00..33.14 rows=10 width=74)\n> -> Index Scan using ixprintjobspagesperjob on tblprintjobs \n> (cost=0.00..1949116.68 rows=588209 width=74)\n> (2 rows)\n>\n>\n> create or replace view vw_document_pagesperjob as\n> \tselect documentname, eventdate, eventtime, loginuser,\n> \t\tfnFormatInt(pages) as pages\n> \tfrom tblPrintjobs\n> \torder by descpages, documentname ;\n>\n>\n>\n>\n>\n>\n> On 13 Jun 2005, at 09:05, Russell Smith wrote:\n>\n>> 
On Mon, 13 Jun 2005 04:54 pm, Yves Vindevogel wrote:\n>>> Still, when I use explain, pg says it will first sort my tables \n>>> instead\n>>> of using my index\n>>> How is that possible ?\n>>\n>> Can we see the output of the explain analyze?\n>> The definition of the view?\n>>\n>> Regards\n>>\n>> Russell Smith\n>>\n>>\n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 09:35:47 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View not using index" }, { "msg_contents": "Yves Vindevogel <[email protected]> writes:\n> rvponp=# explain select * from vw_document_pagesperjob limit 10 ;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ----------------------\n> Limit (cost=82796.59..82796.72 rows=10 width=706)\n> -> Subquery Scan vw_document_pagesperjob (cost=82796.59..90149.20 \n> rows=588209 width=706)\n> -> Sort (cost=82796.59..84267.11 rows=588209 width=74)\n> Sort Key: tblprintjobs.descpages, \n> tblprintjobs.documentname\n> -> Seq Scan on tblprintjobs (cost=0.00..26428.61 \n> rows=588209 width=74)\n> (5 rows)\n\nIn general, putting an ORDER BY inside a view isn't a great idea ---\nit's not legal per SQL spec (hence not portable), and it defeats most\nforms of optimization of the view.\n\nCVS tip is actually able to do what you wish with the above case, but no\nexisting release will optimize the view's ORDER BY in light of a LIMIT\nthat's outside the view.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 10:18:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View not using index " } ]
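(A sketch of what that advice looks like for the view above: take the ORDER BY out of the view and let the caller supply ORDER BY plus LIMIT/OFFSET, so the existing ixprintjobspagesperjob index can be used. Exposing descpages as an extra output column is an assumption made here so the ordering can be expressed from outside the view; because the column list changes, the view has to be dropped and recreated rather than replaced.)

    DROP VIEW vw_document_pagesperjob;

    CREATE VIEW vw_document_pagesperjob AS
        SELECT documentname, eventdate, eventtime, loginuser,
               fnFormatInt(pages) AS pages,
               descpages
          FROM tblPrintjobs;

    SELECT documentname, eventdate, eventtime, loginuser, pages
      FROM vw_document_pagesperjob
     ORDER BY descpages, documentname
     LIMIT 10 OFFSET 0;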
[ { "msg_contents": ">\n> I have started this on my testmachine at 11h20. It's still running \n> and here it's 13h40.\n>\n> Setup:\n> Intel P4 2Ghz, 1 Gb ram\n> ReiserFS 3 (with atime in fstab, which is not optimal)\n> Slackware 10\n> PG 7.4\n>\n> I have the same problems on my OSX and other test machines.\n>\n> It's frustrating. Even Microsoft Access is faster !!\n>\n> On 13 Jun 2005, at 11:02, Yves Vindevogel wrote:\n>\n>> rvponp=# vacuum verbose tblPrintjobs ;\n>> INFO: vacuuming \"public.tblprintjobs\"\n>> INFO: index \"pkprintjobs\" now contains 622972 row versions in 8410 \n>> pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.60s/0.31u sec elapsed 31.68 sec.\n>> INFO: index \"uxprintjobs\" now contains 622972 row versions in 3978 \n>> pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.15s/0.48u sec elapsed 3.59 sec.\n>> INFO: index \"ixprintjobsipaddress\" now contains 622972 row versions \n>> in 2542 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 49 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.13s/0.24u sec elapsed 2.57 sec.\n>> INFO: index \"ixprintjobshostname\" now contains 622972 row versions \n>> in 2038 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 35 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.09s/0.30u sec elapsed 1.14 sec.\n>> INFO: index \"ixprintjobsrecordnumber\" now contains 622972 row \n>> versions in 1850 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.07s/0.28u sec elapsed 1.51 sec.\n>> INFO: index \"ixprintjobseventdate\" now contains 622972 row versions \n>> in 1408 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 4 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.05s/0.24u sec elapsed 2.61 sec.\n>> INFO: index \"ixprintjobseventtime\" now contains 622972 row versions \n>> in 1711 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.12s/0.53u sec elapsed 11.66 sec.\n>> INFO: index \"ixprintjobseventcomputer\" now contains 622972 row \n>> versions in 2039 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 36 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.12s/0.23u sec elapsed 1.27 sec.\n>> INFO: index \"ixprintjobseventuser\" now contains 622972 row versions \n>> in 2523 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 19 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.14s/0.24u sec elapsed 1.74 sec.\n>> INFO: index \"ixprintjobsloginuser\" now contains 622972 row versions \n>> in 2114 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 13 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.07s/0.32u sec elapsed 4.29 sec.\n>> INFO: index \"ixprintjobsprintqueue\" now contains 622972 row versions \n>> in 2201 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 30 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.10s/0.34u sec elapsed 1.92 sec.\n>> INFO: index \"ixprintjobsprintport\" now contains 622972 row versions \n>> in 3040 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 40 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.18s/0.27u sec elapsed 2.63 sec.\n>> INFO: index \"ixprintjobssize\" now contains 622972 row versions 
in \n>> 1733 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.16s/0.43u sec elapsed 4.07 sec.\n>> INFO: index \"ixprintjobspages\" now contains 622972 row versions in \n>> 1746 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 24 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.13s/0.22u sec elapsed 1.58 sec.\n>> INFO: index \"ixprintjobsapplicationtype\" now contains 622972 row \n>> versions in 1395 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 27 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.07s/0.29u sec elapsed 1.20 sec.\n>> INFO: index \"ixprintjobsusertype\" now contains 622972 row versions \n>> in 1393 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 24 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.07s/0.22u sec elapsed 0.82 sec.\n>> INFO: index \"ixprintjobsdocumentname\" now contains 622972 row \n>> versions in 4539 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 6 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.24s/0.38u sec elapsed 5.83 sec.\n>> INFO: index \"ixprintjobsdesceventdate\" now contains 622972 row \n>> versions in 1757 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 4 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.08s/0.25u sec elapsed 1.16 sec.\n>> INFO: index \"ixprintjobsdesceventtime\" now contains 622972 row \n>> versions in 1711 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.18s/0.52u sec elapsed 9.44 sec.\n>> INFO: index \"ixprintjobsdescpages\" now contains 622972 row versions \n>> in 1748 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 24 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.06s/0.26u sec elapsed 0.94 sec.\n>> INFO: index \"ixprintjobspagesperjob\" now contains 622972 row \n>> versions in 5259 pages\n>> DETAIL: 9526 index row versions were removed.\n>> 4 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.31s/0.36u sec elapsed 5.47 sec.\n>> INFO: \"tblprintjobs\": removed 9526 row versions in 307 pages\n>> DETAIL: CPU 0.00s/0.06u sec elapsed 0.23 sec.\n>> INFO: \"tblprintjobs\": found 9526 removable, 622972 nonremovable row \n>> versions in 19382 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 75443 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 3.43s/6.83u sec elapsed 97.86 sec.\n>> INFO: vacuuming \"pg_toast.pg_toast_2169880\"\n>> INFO: index \"pg_toast_2169880_index\" now contains 0 row versions in \n>> 1 pages\n>> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> INFO: \"pg_toast_2169880\": found 0 removable, 0 nonremovable row \n>> versions in 0 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 0 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> VACUUM\n>> rvponp=#\n>>\n>>\n>>\n>> On 13 Jun 2005, at 10:54, Mark Kirkwood wrote:\n>>\n>>> Apologies - I should have said output of 'VACUUM VERBOSE mytable'.\n>>>\n>>> (been using 8.1, which displays dead tuple info in ANALYZE...).\n>>>\n>>> Mark\n>>>\n>>> Yves Vindevogel wrote:\n>>>> rvponp=# analyze verbose tblPrintjobs ;\n>>>> INFO: analyzing \"public.tblprintjobs\"\n>>>> INFO: \"tblprintjobs\": 19076 pages, 3000 rows sampled, 588209 \n>>>> estimated total 
rows\n>>>> ANALYZE\n>>>> On 13 Jun 2005, at 04:43, Mark Kirkwood wrote:\n>>>> Yves Vindevogel wrote:\n>>>> I'm trying to update a table that has about 600.000 records.\n>>>> The update query is very simple : update mytable set \n>>>> pagesdesc =\n>>>> - pages ;\n>>>> The query takes about half an hour to an hour to execute. I \n>>>> have\n>>>> tried a lot of things.\n>>>> Half an hour seem a bit long - I would expect less than 5 \n>>>> minutes on\n>>>> reasonable hardware.\n>>>> You may have dead tuple bloat - can you post the output of \n>>>> 'ANALYZE\n>>>> VERBOSE mytable' ?\n>>>\n>>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> Yves Vindevogel\n>> Implements\n>>\n>> <Pasted Graphic 2.tiff>\n>>\n>> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>>\n>> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>>\n>> Web: http://www.implements.be\n>>\n>> First they ignore you. Then they laugh at you. Then they fight you. \n>> Then you win.\n>> Mahatma Ghandi.\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 7: don't forget to increase your free space map settings\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 13:51:41 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Updates on large tables are extremely slow" }, { "msg_contents": "What else I don't understand is that an update is so slow, whereas this\n\nrvponp=# insert into tblTest (id, descpages) select oid, -pages from \ntblPrintjobs ;\nINSERT 0 622972\nrvponp=# delete from tblTest ;\nDELETE 622972\nrvponp=#\n\ntakes about 1 minute for the insert, and 5 seconds for the delete.\n\n\nOn 13 Jun 2005, at 13:51, Yves Vindevogel wrote:\n\n>>\n>> I have started this on my testmachine at 11h20. It's still running \n>> and here it's 13h40.\n>>\n>> Setup:\n>> Intel P4 2Ghz, 1 Gb ram\n>> ReiserFS 3 (with atime in fstab, which is not optimal)\n>> Slackware 10\n>> PG 7.4\n>>\n>> I have the same problems on my OSX and other test machines.\n>>\n>> It's frustrating. 
Even Microsoft Access is faster !!\n>>\n>> On 13 Jun 2005, at 11:02, Yves Vindevogel wrote:\n>>\n>>> rvponp=# vacuum verbose tblPrintjobs ;\n>>> INFO: vacuuming \"public.tblprintjobs\"\n>>> INFO: index \"pkprintjobs\" now contains 622972 row versions in 8410 \n>>> pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.60s/0.31u sec elapsed 31.68 sec.\n>>> INFO: index \"uxprintjobs\" now contains 622972 row versions in 3978 \n>>> pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.15s/0.48u sec elapsed 3.59 sec.\n>>> INFO: index \"ixprintjobsipaddress\" now contains 622972 row versions \n>>> in 2542 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 49 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.13s/0.24u sec elapsed 2.57 sec.\n>>> INFO: index \"ixprintjobshostname\" now contains 622972 row versions \n>>> in 2038 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 35 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.09s/0.30u sec elapsed 1.14 sec.\n>>> INFO: index \"ixprintjobsrecordnumber\" now contains 622972 row \n>>> versions in 1850 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.07s/0.28u sec elapsed 1.51 sec.\n>>> INFO: index \"ixprintjobseventdate\" now contains 622972 row versions \n>>> in 1408 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 4 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.05s/0.24u sec elapsed 2.61 sec.\n>>> INFO: index \"ixprintjobseventtime\" now contains 622972 row versions \n>>> in 1711 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.12s/0.53u sec elapsed 11.66 sec.\n>>> INFO: index \"ixprintjobseventcomputer\" now contains 622972 row \n>>> versions in 2039 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 36 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.12s/0.23u sec elapsed 1.27 sec.\n>>> INFO: index \"ixprintjobseventuser\" now contains 622972 row versions \n>>> in 2523 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 19 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.14s/0.24u sec elapsed 1.74 sec.\n>>> INFO: index \"ixprintjobsloginuser\" now contains 622972 row versions \n>>> in 2114 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 13 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.07s/0.32u sec elapsed 4.29 sec.\n>>> INFO: index \"ixprintjobsprintqueue\" now contains 622972 row \n>>> versions in 2201 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 30 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.10s/0.34u sec elapsed 1.92 sec.\n>>> INFO: index \"ixprintjobsprintport\" now contains 622972 row versions \n>>> in 3040 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 40 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.18s/0.27u sec elapsed 2.63 sec.\n>>> INFO: index \"ixprintjobssize\" now contains 622972 row versions in \n>>> 1733 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.16s/0.43u sec elapsed 4.07 sec.\n>>> INFO: index \"ixprintjobspages\" now contains 622972 row versions in \n>>> 1746 
pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 24 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.13s/0.22u sec elapsed 1.58 sec.\n>>> INFO: index \"ixprintjobsapplicationtype\" now contains 622972 row \n>>> versions in 1395 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 27 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.07s/0.29u sec elapsed 1.20 sec.\n>>> INFO: index \"ixprintjobsusertype\" now contains 622972 row versions \n>>> in 1393 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 24 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.07s/0.22u sec elapsed 0.82 sec.\n>>> INFO: index \"ixprintjobsdocumentname\" now contains 622972 row \n>>> versions in 4539 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 6 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.24s/0.38u sec elapsed 5.83 sec.\n>>> INFO: index \"ixprintjobsdesceventdate\" now contains 622972 row \n>>> versions in 1757 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 4 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.08s/0.25u sec elapsed 1.16 sec.\n>>> INFO: index \"ixprintjobsdesceventtime\" now contains 622972 row \n>>> versions in 1711 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.18s/0.52u sec elapsed 9.44 sec.\n>>> INFO: index \"ixprintjobsdescpages\" now contains 622972 row versions \n>>> in 1748 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 24 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.06s/0.26u sec elapsed 0.94 sec.\n>>> INFO: index \"ixprintjobspagesperjob\" now contains 622972 row \n>>> versions in 5259 pages\n>>> DETAIL: 9526 index row versions were removed.\n>>> 4 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.31s/0.36u sec elapsed 5.47 sec.\n>>> INFO: \"tblprintjobs\": removed 9526 row versions in 307 pages\n>>> DETAIL: CPU 0.00s/0.06u sec elapsed 0.23 sec.\n>>> INFO: \"tblprintjobs\": found 9526 removable, 622972 nonremovable row \n>>> versions in 19382 pages\n>>> DETAIL: 0 dead row versions cannot be removed yet.\n>>> There were 75443 unused item pointers.\n>>> 0 pages are entirely empty.\n>>> CPU 3.43s/6.83u sec elapsed 97.86 sec.\n>>> INFO: vacuuming \"pg_toast.pg_toast_2169880\"\n>>> INFO: index \"pg_toast_2169880_index\" now contains 0 row versions in \n>>> 1 pages\n>>> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>>> INFO: \"pg_toast_2169880\": found 0 removable, 0 nonremovable row \n>>> versions in 0 pages\n>>> DETAIL: 0 dead row versions cannot be removed yet.\n>>> There were 0 unused item pointers.\n>>> 0 pages are entirely empty.\n>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>>> VACUUM\n>>> rvponp=#\n>>>\n>>>\n>>>\n>>> On 13 Jun 2005, at 10:54, Mark Kirkwood wrote:\n>>>\n>>>> Apologies - I should have said output of 'VACUUM VERBOSE mytable'.\n>>>>\n>>>> (been using 8.1, which displays dead tuple info in ANALYZE...).\n>>>>\n>>>> Mark\n>>>>\n>>>> Yves Vindevogel wrote:\n>>>>> rvponp=# analyze verbose tblPrintjobs ;\n>>>>> INFO: analyzing \"public.tblprintjobs\"\n>>>>> INFO: \"tblprintjobs\": 19076 pages, 3000 rows sampled, 588209 \n>>>>> estimated total rows\n>>>>> ANALYZE\n>>>>> On 13 Jun 2005, at 04:43, Mark Kirkwood wrote:\n>>>>> Yves Vindevogel wrote:\n>>>>> I'm trying to update a table that has about 600.000 \n>>>>> 
records.\n>>>>> The update query is very simple : update mytable set \n>>>>> pagesdesc =\n>>>>> - pages ;\n>>>>> The query takes about half an hour to an hour to execute. \n>>>>> I have\n>>>>> tried a lot of things.\n>>>>> Half an hour seem a bit long - I would expect less than 5 \n>>>>> minutes on\n>>>>> reasonable hardware.\n>>>>> You may have dead tuple bloat - can you post the output of \n>>>>> 'ANALYZE\n>>>>> VERBOSE mytable' ?\n>>>>\n>>>>\n>>> Met vriendelijke groeten,\n>>> Bien à vous,\n>>> Kind regards,\n>>>\n>>> Yves Vindevogel\n>>> Implements\n>>>\n>>> <Pasted Graphic 2.tiff>\n>>>\n>>> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>>>\n>>> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>>>\n>>> Web: http://www.implements.be\n>>>\n>>> First they ignore you. Then they laugh at you. Then they fight \n>>> you. Then you win.\n>>> Mahatma Ghandi.\n>>>\n>>>\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 7: don't forget to increase your free space map settings\n>>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> Yves Vindevogel\n>> Implements\n>>\n> <Pasted Graphic 2.tiff>\n>>\n>> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>>\n>> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>>\n>> Web: http://www.implements.be\n>>\n>> First they ignore you. Then they laugh at you. Then they fight you. \n>> Then you win.\n>> Mahatma Ghandi.\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 14:17:35 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updates on large tables are extremely slow" } ]
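(As an aside: Jacques Caron's earlier suggestion of an index on (-pages) would sidestep this UPDATE entirely -- no pagesdesc column to maintain across 600k rows and twenty-odd indexes. A sketch for 7.4, with a made-up index name; the mixed ascending/descending sort is then expressed against the indexed expression:)

    CREATE INDEX ixprintjobspagesdescdoc ON tblPrintjobs ((-pages), documentname);

    SELECT documentname, pages
      FROM tblPrintjobs
     ORDER BY (-pages), documentname
     LIMIT 10;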
[ { "msg_contents": "We have two index's like so\n\nl1_historical=# \\d \"N_intra_time_idx\"\n Index \"N_intra_time_idx\"\nColumn | Type\n--------+-----------------------------\ntime | timestamp without time zone\nbtree\n\n\nl1_historical=# \\d \"N_intra_pkey\"\n Index \"N_intra_pkey\"\nColumn | Type\n--------+-----------------------------\nsymbol | text\ntime | timestamp without time zone\nunique btree (primary key)\n\nand on queries like this\n\nselect * from \"N_intra\" where symbol='SOMETHING WHICH DOESNT EXIST' \norder by time desc limit 1;\n\nPostgreSQL takes a very long time to complete, as it effectively \nscans the entire table, backwards. And the table is huge, about 450 \nmillion rows. (btw, there are no triggers or any other exciting \nthings like that on our tables in this db.)\n\nbut on things where the symbol does exist in the table, it's more or \nless fine, and nice and fast.\n\nWhilst the option the planner has taken might be faster most of the \ntime, the worst case scenario is unacceptable for obvious reasons. \nI've googled for trying to force the use of a specific index, but \ncan't find anything relevant. Does anyone have any suggestions on \ngetting it to use an index which hopefully will have better worst \ncase performance?\n", "msg_date": "Mon, 13 Jun 2005 14:02:30 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL using the wrong Index" }, { "msg_contents": "Oh, we are running 7.4.2 btw. And our random_page_cost = 1\n\nOn 13 Jun 2005, at 14:02, Alex Stapleton wrote:\n\n> We have two index's like so\n>\n> l1_historical=# \\d \"N_intra_time_idx\"\n> Index \"N_intra_time_idx\"\n> Column | Type\n> --------+-----------------------------\n> time | timestamp without time zone\n> btree\n>\n>\n> l1_historical=# \\d \"N_intra_pkey\"\n> Index \"N_intra_pkey\"\n> Column | Type\n> --------+-----------------------------\n> symbol | text\n> time | timestamp without time zone\n> unique btree (primary key)\n>\n> and on queries like this\n>\n> select * from \"N_intra\" where symbol='SOMETHING WHICH DOESNT EXIST' \n> order by time desc limit 1;\n>\n> PostgreSQL takes a very long time to complete, as it effectively \n> scans the entire table, backwards. And the table is huge, about 450 \n> million rows. (btw, there are no triggers or any other exciting \n> things like that on our tables in this db.)\n>\n> but on things where the symbol does exist in the table, it's more \n> or less fine, and nice and fast.\n>\n> Whilst the option the planner has taken might be faster most of the \n> time, the worst case scenario is unacceptable for obvious reasons. \n> I've googled for trying to force the use of a specific index, but \n> can't find anything relevant. Does anyone have any suggestions on \n> getting it to use an index which hopefully will have better worst \n> case performance?\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n>\n\n", "msg_date": "Mon, 13 Jun 2005 14:08:08 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL using the wrong Index" }, { "msg_contents": "Alex Stapleton wrote:\n\n> Oh, we are running 7.4.2 btw. And our random_page_cost = 1\n>\nWhich is only correct if your entire db fits into memory. 
Also, try\nupdating to a later 7.4 version if at all possible.\n\n> On 13 Jun 2005, at 14:02, Alex Stapleton wrote:\n>\n>> We have two index's like so\n>>\n>> l1_historical=# \\d \"N_intra_time_idx\"\n>> Index \"N_intra_time_idx\"\n>> Column | Type\n>> --------+-----------------------------\n>> time | timestamp without time zone\n>> btree\n>>\nJust so you are aware, writing this as: \"We have an index on\nN_intra(time) and one on N_Intra(symbol, time)\" is a lot more succinct.\n\n>>\n>> l1_historical=# \\d \"N_intra_pkey\"\n>> Index \"N_intra_pkey\"\n>> Column | Type\n>> --------+-----------------------------\n>> symbol | text\n>> time | timestamp without time zone\n>> unique btree (primary key)\n>>\n>> and on queries like this\n>>\n>> select * from \"N_intra\" where symbol='SOMETHING WHICH DOESNT EXIST'\n>> order by time desc limit 1;\n>>\n>> PostgreSQL takes a very long time to complete, as it effectively\n>> scans the entire table, backwards. And the table is huge, about 450\n>> million rows. (btw, there are no triggers or any other exciting\n>> things like that on our tables in this db.)\n>>\n>> but on things where the symbol does exist in the table, it's more or\n>> less fine, and nice and fast.\n>\nWhat happens if you do:\nSELECT * FROM \"N_intra\" WHERE symbol='doesnt exist' ORDER BY symbol,\ntime DESC LIMIT 1;\n\nYes, symbol is constant, but it frequently helps the planner realize it\ncan use an index scan if you include all terms in the index in the ORDER\nBY clause.\n\n>>\n>> Whilst the option the planner has taken might be faster most of the\n>> time, the worst case scenario is unacceptable for obvious reasons.\n>> I've googled for trying to force the use of a specific index, but\n>> can't find anything relevant. Does anyone have any suggestions on\n>> getting it to use an index which hopefully will have better worst\n>> case performance?\n>\nTry the above first. You could also create a new index on symbol\n CREATE INDEX \"N_intra_symbol_idx\" ON \"N_intra\"(symbol);\n\nThen the WHERE clause should use the symbol index, which means it can\nknow quickly that an entry doesn't exist. I'm not sure how many entries\nyou have per symbol, though, so this might cause problems in the ORDER\nBY time portion.\n\nI'm guessing what you really want is to just do the ORDER BY symbol, time.\n\nJohn\n=:->", "msg_date": "Mon, 13 Jun 2005 09:47:10 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL using the wrong Index" }, { "msg_contents": "Alex Stapleton <[email protected]> writes:\n> l1_historical=# \\d \"N_intra_pkey\"\n> Index \"N_intra_pkey\"\n> Column | Type\n> --------+-----------------------------\n> symbol | text\n> time | timestamp without time zone\n> unique btree (primary key)\n\n> and on queries like this\n\n> select * from \"N_intra\" where symbol='SOMETHING WHICH DOESNT EXIST' \n> order by time desc limit 1;\n\nThis was just covered in excruciating detail yesterday ...\n\nYou need to write\n\torder by symbol desc, time desc limit 1\nto get the planner to recognize the connection to the sort order\nof this index. Since you're only selecting one value of symbol,\nthe actual output doesn't change.\n\n> Oh, we are running 7.4.2 btw. 
And our random_page_cost = 1\n\nI'll bet lunch that that is a bad selection of random_page_cost,\nunless your database is so small that it all fits in RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 10:54:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL using the wrong Index " }, { "msg_contents": "\nOn 13 Jun 2005, at 15:47, John A Meinel wrote:\n\n> Alex Stapleton wrote:\n>\n>\n>> Oh, we are running 7.4.2 btw. And our random_page_cost = 1\n>>\n>>\n> Which is only correct if your entire db fits into memory. Also, try\n> updating to a later 7.4 version if at all possible.\n>\n\nI am aware of this, I didn't configure this machine though \nunfortuantely.\n\n>> On 13 Jun 2005, at 14:02, Alex Stapleton wrote:\n>>\n>>\n>>> We have two index's like so\n>>>\n>>> l1_historical=# \\d \"N_intra_time_idx\"\n>>> Index \"N_intra_time_idx\"\n>>> Column | Type\n>>> --------+-----------------------------\n>>> time | timestamp without time zone\n>>> btree\n>>>\n>>>\n> Just so you are aware, writing this as: \"We have an index on\n> N_intra(time) and one on N_Intra(symbol, time)\" is a lot more \n> succinct.\n>\n\nSorry, I happened to have them there in my clipboard at the time so I \njust blindly pasted them in.\n\n>>>\n>>> l1_historical=# \\d \"N_intra_pkey\"\n>>> Index \"N_intra_pkey\"\n>>> Column | Type\n>>> --------+-----------------------------\n>>> symbol | text\n>>> time | timestamp without time zone\n>>> unique btree (primary key)\n>>>\n>>> and on queries like this\n>>>\n>>> select * from \"N_intra\" where symbol='SOMETHING WHICH DOESNT EXIST'\n>>> order by time desc limit 1;\n>>>\n>>> PostgreSQL takes a very long time to complete, as it effectively\n>>> scans the entire table, backwards. And the table is huge, about 450\n>>> million rows. (btw, there are no triggers or any other exciting\n>>> things like that on our tables in this db.)\n>>>\n>>> but on things where the symbol does exist in the table, it's \n>>> more or\n>>> less fine, and nice and fast.\n>>>\n>>\n>>\n> What happens if you do:\n> SELECT * FROM \"N_intra\" WHERE symbol='doesnt exist' ORDER BY symbol,\n> time DESC LIMIT 1;\n\nHurrah! I should of thought of this, considering i've done it in the \npast :) Thanks a lot, that's great.\n\n> Yes, symbol is constant, but it frequently helps the planner \n> realize it\n> can use an index scan if you include all terms in the index in the \n> ORDER\n> BY clause.\n\n\n\n>\n>>>\n>>> Whilst the option the planner has taken might be faster most of the\n>>> time, the worst case scenario is unacceptable for obvious reasons.\n>>> I've googled for trying to force the use of a specific index, but\n>>> can't find anything relevant. Does anyone have any suggestions on\n>>> getting it to use an index which hopefully will have better worst\n>>> case performance?\n>>>\n>>\n>>\n> Try the above first. You could also create a new index on symbol\n> CREATE INDEX \"N_intra_symbol_idx\" ON \"N_intra\"(symbol);\n>\n> Then the WHERE clause should use the symbol index, which means it can\n> know quickly that an entry doesn't exist. 
I'm not sure how many \n> entries\n> you have per symbol, though, so this might cause problems in the ORDER\n> BY time portion.\n>\n> I'm guessing what you really want is to just do the ORDER BY \n> symbol, time.\n>\n> John\n> =:->\n>\n>\n\n", "msg_date": "Mon, 13 Jun 2005 16:20:19 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL using the wrong Index" }, { "msg_contents": "Tom Lane wrote:\n> \n> \n> This was just covered in excruciating detail yesterday ...\n> \n> You need to write\n> \torder by symbol desc, time desc limit 1\n> to get the planner to recognize the connection to the sort order\n> of this index. Since you're only selecting one value of symbol,\n> the actual output doesn't change.\n> \nIs this the right behavior (not a bug)? Is postgresql planning on changing \nthis soon?\n\n\nThanks\n\nWei\n", "msg_date": "Mon, 13 Jun 2005 11:47:30 -0400", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL using the wrong Index" } ]
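To summarise the fix that settled this thread, here is the rewritten query with the table and index names quoted above; the EXPLAIN ANALYZE wrapper is only there to confirm that the backward scan on the primary key gets used.

-- "N_intra_pkey" is on (symbol, time), so naming both columns in the ORDER BY
-- lets the planner walk that index backwards and stop after one row:
EXPLAIN ANALYZE
SELECT *
FROM "N_intra"
WHERE symbol = 'SOMETHING WHICH DOESNT EXIST'
ORDER BY symbol DESC, time DESC
LIMIT 1;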
[ { "msg_contents": "I've done a lot of work with a bookkeeping system where we have such\nredundancy built in. The auditors, however, need to be able to generate\nlists of the financial transaction detail to support balances. These\nreports are among the most demanding in the system. I shudder to think\nhow unacceptable performance would be without the redundancy.\n \nAlso, due to multiple media failures, and backup process problems (on\nanother database product), a large database was badly mangled. The\nredundancies allowed us to reconstruct much data, and to at least\nidentify what was missing for the rest.\n \nThere is, of course, some cost for the redundancy. Up front, someone\nneeds to code routines to maintain it. It needs to be checked against\nthe underlying detail periodically, to prevent \"drift\". And there is a\ncost, usually pretty minimal, for the software to do the work.\n \nI strongly recommend that some form of trigger (either native to the\ndatabase or, if portability is an issue, within a middle tier framework)\ndo the work of maintaining the redundant data. If you rely on\napplication code to maintain it, you can expect that sooner or later it\nwill get missed.\n \n \n>>> Tobias Brox <[email protected]> 06/11/05 4:59 AM >>>\n[\nReminds me about the way the precursor software of our product was made,\nwhenever it was needed to check the balance of a customer, it was needed\nto\nscan the whole transaction table and sum up all transactions. This\noperation eventually took 3-4 seconds before we released the new\nsoftware,\nand the customers balance was supposed to show up at several web pages\n:-)\n\nBy now we have the updated balance both in the customer table and as\n\"post_balance\" in the transaction table. Sometimes redundancy is good.\nMuch easier to solve inconsistency problems as well :-)\n\n", "msg_date": "Mon, 13 Jun 2005 09:21:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with rewriting query" } ]
[ { "msg_contents": "It sure would be nice if the optimizer would consider that it had the\nleeway to add any column which was restricted to a single value to any\npoint in the ORDER BY clause. Without that, the application programmer\nhas to know what indexes are on the table, rather than being able to\njust worry about the set of data they want. Obviously, if a column can\nhave only one value in the result set, adding to any point in the ORDER\nBY can't change anything but performance. That sure sounds like\nsomething which should fall within the scope of an optimizer.\n \nIt really should be a DBA function to add or drop indexes to tune the\nperformance of queries, without requiring application programmers to\nmodify the queries for every DBA adjustment. (When you have a database\nwith over 350 tables and thousands of queries, you really begin to\nappreciate the importance of this.)\n \n>>> Tom Lane <[email protected]> 06/12/05 10:56 AM >>>\nMadison Kelly <[email protected]> writes:\n> Here is my full query:\n\n> tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type \n> FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC, \n> file_name ASC;\n\n> This is my index (which I guess is wrong):\n\n> tle-bu=> \\d file_info_7_display_idx\n> Index \"public.file_info_7_display_idx\"\n> Column | Type\n> -----------------+----------------------\n> file_type | character varying(2)\n> file_parent_dir | text\n> file_name | text\n> btree, for table \"public.file_info_7\"\n\nThe index is fine, but you need to phrase the query as\n\n\t... ORDER BY file_type, file_parent_dir, file_name;\n\n(Whether you use ASC or not doesn't matter.) Otherwise the planner\nwon't make the connection to the sort ordering of the index.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if\nyour\n joining column's datatypes do not match\n\n", "msg_date": "Mon, 13 Jun 2005 09:34:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Kevin Grittner wrote:\n\n>It sure would be nice if the optimizer would consider that it had the\n>leeway to add any column which was restricted to a single value to any\n>point in the ORDER BY clause. Without that, the application programmer\n>has to know what indexes are on the table, rather than being able to\n>just worry about the set of data they want. Obviously, if a column can\n>have only one value in the result set, adding to any point in the ORDER\n>BY can't change anything but performance. That sure sounds like\n>something which should fall within the scope of an optimizer.\n>\n>It really should be a DBA function to add or drop indexes to tune the\n>performance of queries, without requiring application programmers to\n>modify the queries for every DBA adjustment. (When you have a database\n>with over 350 tables and thousands of queries, you really begin to\n>appreciate the importance of this.)\n>\n>\nI agree that having a smarter optimizer, which can recognize when an\nindex can be used for ORDER BY would be useful.\n\nI don't know if there are specific reasons why not, other than just not\nbeing implemented yet. It might be tricky to get it correct (for\ninstance, how do you know which columns can be added, which ones will be\nconstant) Perhaps you could just potentially add the WHERE items if they\nhave an equality constraint with a constant. 
But I'm guessing there are\nmore cases than that where the optimization could be performed.\n\nAlso, the more options you give the planner, the longer it takes on\naverage to plan any single query. Yes, it is beneficial for this use\ncase, but does that balance out slowing down all the other queries by a\ntiny bit.\n\nI'm guessing the optimization wasn't as important as some of the others\nthat have been done, so it hasn't been implemented yet.\n\nJohn\n=:->", "msg_date": "Mon, 13 Jun 2005 09:51:57 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "On Mon, Jun 13, 2005 at 09:51:57 -0500,\n John A Meinel <[email protected]> wrote:\n> \n> I don't know if there are specific reasons why not, other than just not\n> being implemented yet. It might be tricky to get it correct (for\n> instance, how do you know which columns can be added, which ones will be\n> constant) Perhaps you could just potentially add the WHERE items if they\n> have an equality constraint with a constant. But I'm guessing there are\n> more cases than that where the optimization could be performed.\n\nI think there is already some intelligence about which expressions are\nconstant in particular parts of a plan.\n\nI think you need to be able to do two things. One is to drop constant\nexpressions from order by lists. The other is when looking for an index\nto produce a specific ordering, to ingore leading constant expressions\nwhen comparing to the order by expressions.\n\n> Also, the more options you give the planner, the longer it takes on\n> average to plan any single query. Yes, it is beneficial for this use\n> case, but does that balance out slowing down all the other queries by a\n> tiny bit.\n\nBut there aren't that many possible indexes, so I don't expect this will\nslow things down much more than the current check for potentially useful\nindexes.\n", "msg_date": "Mon, 13 Jun 2005 10:48:34 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "> John A Meinel <[email protected]> wrote:\n>> I don't know if there are specific reasons why not, other than just not\n>> being implemented yet. It might be tricky to get it correct\n\nNot so much tricky to get correct, as potentially expensive to test for;\nit'd be quite easy to waste a lot of cycles trying to match ORDER BY\nkeys in multiple ways to completely-irrelevant indexes. Since this\nwill only be helpful for a minority of queries but the costs would be\npaid on almost everything with an ORDER BY, that consideration has been\nlooming large in my mind.\n\nBruno Wolff III <[email protected]> writes:\n> I think you need to be able to do two things. One is to drop constant\n> expressions from order by lists. The other is when looking for an index\n> to produce a specific ordering, to ingore leading constant expressions\n> when comparing to the order by expressions.\n\nI've been thinking about this some more this morning, and I think I see\nhow it could be relatively inexpensive to recognize x=constant\nrestrictions that allow ordering columns of an index to be ignored. We\nare already doing 90% of the work for that just as a byproduct of trying\nto match the x=constant clause to the index in the first place, so it's\nmostly a matter of refactoring the code to allow that work to be reused.\n\nI don't, however, see an equally inexpensive way to ignore ORDER BY\ncolumns. 
That would imply associating the '=' operator of the\nrestriction clause with the '<' or '>' operator of the ORDER BY clause,\nwhich means searching for a btree opclass that has them in common, which\nis not cheap since there's no indexing on pg_amop that would allow us to\nfind it easily. (There are various places where we do in fact do that\nsort of thing, but they aren't so performance-critical.) This doesn't\ncome up in the other case because we already know the relevant opclass\nfrom the index.\n\nI don't think the use-case has been shown that justifies doing this much\nwork to ignore useless ORDER BY clauses. The examples that have come up\nin the past all suggest ignoring index columns not the other way 'round.\nCan you make a case that we need to do that part of it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jun 2005 12:22:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used " }, { "msg_contents": "On Mon, Jun 13, 2005 at 12:22:14 -0400,\n Tom Lane <[email protected]> wrote:\n> \n> I don't think the use-case has been shown that justifies doing this much\n> work to ignore useless ORDER BY clauses. The examples that have come up\n> in the past all suggest ignoring index columns not the other way 'round.\n> Can you make a case that we need to do that part of it?\n\nI don't think so. I don't think people are likely to order by constant\nexpressions except by adding them to the front to help optimization.\nWhen I was thinking about this I was looking at what equivalences could\nbe used and didn't look back to see which ones would be useful in the\nnormal case. And I think it is a lot more likely people will leave out\ncolumns they know not to be relevant than to include them.\n", "msg_date": "Mon, 13 Jun 2005 11:49:29 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" }, { "msg_contents": "Kevin Grittner wrote:\n>>tle-bu=> \\d file_info_7_display_idx\n>> Index \"public.file_info_7_display_idx\"\n>> Column | Type\n>>-----------------+----------------------\n>> file_type | character varying(2)\n>> file_parent_dir | text\n>> file_name | text\n>>btree, for table \"public.file_info_7\"\n> \n> \n> The index is fine, but you need to phrase the query as\n> \n> \t... ORDER BY file_type, file_parent_dir, file_name;\n> \n> (Whether you use ASC or not doesn't matter.) Otherwise the planner\n> won't make the connection to the sort ordering of the index.\n> \n> \t\t\tregards, tom lane\n\nWith Bruno's help I've gone back and tried just this with no luck. I've \nre-written the query to include all three items in the 'ORDER BY...' \ncolumn in the same order but the sort still takes a long time and a \nsequential scan is being done instead of using the index.\n\nFor what it's worth, and being somewhat of a n00b, I agree with the idea \nof a smarter, more flexible planner. I guess the trade off is the added \noverhead neaded versus the size of the average query.\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 13:57:31 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index ot being used" } ]
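For reference, the phrasing recommended earlier in this thread, spelled out against the index actually defined on the table; whether the planner then prefers the index to a sequential scan still depends on how selective file_type = 'd' is and on the cost settings.

-- file_info_7_display_idx is on (file_type, file_parent_dir, file_name),
-- so the ORDER BY must list the columns in that same order:
EXPLAIN ANALYZE
SELECT file_name, file_parent_dir, file_type
FROM file_info_7
WHERE file_type = 'd'
ORDER BY file_type, file_parent_dir, file_name;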
[ { "msg_contents": "I forgot cc\n\nBegin forwarded message:\n\n> From: Yves Vindevogel <[email protected]>\n> Date: Mon 13 Jun 2005 17:45:19 CEST\n> To: Tom Lane <[email protected]>\n> Subject: Re: [PERFORM] Updates on large tables are extremely slow\n>\n> Yes, but if I update one column, why should PG update 21 indexes ?\n> There's only one index affected !\n>\n> On 13 Jun 2005, at 16:32, Tom Lane wrote:\n>\n>> Yves Vindevogel <[email protected]> writes:\n>>> rvponp=3D# vacuum verbose tblPrintjobs ;\n>>> INFO: vacuuming \"public.tblprintjobs\"\n>>> [ twenty-one different indexes on one table ]\n>>\n>> Well, there's your problem. You think updating all those indexes is\n>> free? It's *expensive*. Heed the manual's advice: avoid creating\n>> indexes you are not certain you need for identifiable commonly-used\n>> queries.\n>>\n>> (The reason delete is fast is it doesn't have to touch the indexes ...\n>> the necessary work is left to be done by VACUUM.)\n>>\n>> \t\t\tregards, tom lane\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 17:49:47 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Updates on large tables are extremely slow " }, { "msg_contents": "Yves Vindevogel wrote:\n> I forgot cc\n> \n> Begin forwarded message:\n> \n>> From: Yves Vindevogel <[email protected]>\n>> Date: Mon 13 Jun 2005 17:45:19 CEST\n>> To: Tom Lane <[email protected]>\n>> Subject: Re: [PERFORM] Updates on large tables are extremely slow\n>>\n>> Yes, but if I update one column, why should PG update 21 indexes ?\n>> There's only one index affected !\n\nNo - all 21 are affected. MVCC creates a new row on disk.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 13 Jun 2005 17:02:24 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Updates on large tables are extremely slow" }, { "msg_contents": "Ok, if all 21 are affected, I can understand the problem.\nBut allow me to say that this is a \"functional error\"\n\nOn 13 Jun 2005, at 18:02, Richard Huxton wrote:\n\n> Yves Vindevogel wrote:\n>> I forgot cc\n>> Begin forwarded message:\n>>> From: Yves Vindevogel <[email protected]>\n>>> Date: Mon 13 Jun 2005 17:45:19 CEST\n>>> To: Tom Lane <[email protected]>\n>>> Subject: Re: [PERFORM] Updates on large tables are extremely slow\n>>>\n>>> Yes, but if I update one column, why should PG update 21 indexes ?\n>>> There's only one index affected !\n>\n> No - all 21 are affected. 
MVCC creates a new row on disk.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 18:45:59 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "\n> Ok, if all 21 are affected, I can understand the problem.\n> But allow me to say that this is a \"functional error\"\n\nIt's a choice between total throughput on a high load, high connection\nbasis (MVCC dramatically wins here), versus a single user, low load\nscenario (MS Access is designed for this).\n\nBelieve me when I say that a lot of people have spent a lot of time\nexplicitly making the system work that way.\n\n> On 13 Jun 2005, at 18:02, Richard Huxton wrote:\n> \n> Yves Vindevogel wrote:\n> I forgot cc\n> Begin forwarded message:\n> From: Yves Vindevogel\n> <[email protected]>\n> Date: Mon 13 Jun 2005 17:45:19 CEST\n> To: Tom Lane <[email protected]>\n> Subject: Re: [PERFORM] Updates on large tables\n> are extremely slow\n> \n> Yes, but if I update one column, why should PG\n> update 21 indexes ?\n> There's only one index affected !\n> \n> No - all 21 are affected. MVCC creates a new row on disk.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n> \n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n> \n> Yves Vindevogel\n> Implements\n> \n> \n> \n> ______________________________________________________________________\n> \n> \n> \n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n> \n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n> \n> Web: http://www.implements.be\n> \n> First they ignore you. Then they laugh at you. Then they fight you.\n> Then you win.\n> Mahatma Ghandi.\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n-- \n\n", "msg_date": "Mon, 13 Jun 2005 12:53:19 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "I just dropped 19 of the 21 indexes. I just left the primary key \nconstraint and my unique index on 3 fields ...\nI did a vacuum full and an analyse .... I just ran the query again \nsome 20 minutes ago.\n\nGuess what .... 
It's still running !!\n\nSo it's not that much faster for the moment.\nI just want to update a single field in one table with a simple value \n(negative value of another field)\nThat can not be that hard ...\n\nOr is it the MVCC that is responsible for this ?\n\nIt can't be indexes on other tables, right ?\nThat would be absolutely sick\n\nOn 13 Jun 2005, at 18:45, Yves Vindevogel wrote:\n\n> Ok, if all 21 are affected, I can understand the problem.\n> But allow me to say that this is a \"functional error\"\n>\n> On 13 Jun 2005, at 18:02, Richard Huxton wrote:\n>\n>> Yves Vindevogel wrote:\n>>> I forgot cc\n>>> Begin forwarded message:\n>>>> From: Yves Vindevogel <[email protected]>\n>>>> Date: Mon 13 Jun 2005 17:45:19 CEST\n>>>> To: Tom Lane <[email protected]>\n>>>> Subject: Re: [PERFORM] Updates on large tables are extremely slow\n>>>>\n>>>> Yes, but if I update one column, why should PG update 21 indexes ?\n>>>> There's only one index affected !\n>>\n>> No - all 21 are affected. MVCC creates a new row on disk.\n>>\n>> -- \n>> Richard Huxton\n>> Archonet Ltd\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 13 Jun 2005 19:22:07 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "Hi,\n\nAt 19:22 13/06/2005, Yves Vindevogel wrote:\n>It can't be indexes on other tables, right ?\n\nIt could be foreign keys from that table referencing other tables or \nforeign keys from other tables referencing that table, especially if you \ndon't have the matching indexes...\n\nJacques.\n\n\n", "msg_date": "Mon, 13 Jun 2005 19:36:45 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" }, { "msg_contents": "> Ok, if all 21 are affected, I can understand the problem.\n> But allow me to say that this is a \"functional error\"\n\nNo, it's normal MVCC design...\n\n", "msg_date": "Tue, 14 Jun 2005 09:29:28 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updates on large tables are extremely slow" } ]
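Two quick catalog checks that follow from the advice in this thread; they use only the standard pg_indexes view and the pg_constraint catalog, with the poster's table name folded to lower case as PostgreSQL stores it.

-- Every index listed here must be maintained for each new row version an UPDATE writes:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'tblprintjobs';

-- Foreign keys on or referencing the table can also slow updates if the
-- matching columns are not indexed:
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'tblprintjobs'::regclass
   OR confrelid = 'tblprintjobs'::regclass;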
[ { "msg_contents": "I agree that ignoring useless columns in an ORDER BY clause is less\nimportant than ignoring index columns where the value is fixed. There\nis one use case for ignoring useless ORDER BY columns that leaps to\nmind, however -- a column is added to the ORDER BY clause of a query to\nhelp out the optimizer, then the indexes are modified such that that\ncolumn is no longer useful. Whether this merits the programming effort\nand performance hit you describe seems highly questionable, though.\n \n-Kevin\n \n \n>>> Tom Lane <[email protected]> 06/13/05 11:22 AM >>>\n\nI don't think the use-case has been shown that justifies doing this much\nwork to ignore useless ORDER BY clauses. The examples that have come up\nin the past all suggest ignoring index columns not the other way 'round.\nCan you make a case that we need to do that part of it?\n\n", "msg_date": "Mon, 13 Jun 2005 11:46:46 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index ot being used" } ]
[ { "msg_contents": "On Mon, Jun 13, 2005 at 11:46:46 -0500,\n Kevin Grittner <[email protected]> wrote:\n> I agree that ignoring useless columns in an ORDER BY clause is less\n> important than ignoring index columns where the value is fixed. There\n> is one use case for ignoring useless ORDER BY columns that leaps to\n> mind, however -- a column is added to the ORDER BY clause of a query to\n> help out the optimizer, then the indexes are modified such that that\n> column is no longer useful. Whether this merits the programming effort\n> and performance hit you describe seems highly questionable, though.\n\nI suspect that this isn't a big deal. There was a question like that\nthat has been going back and forth over the last couple of days.\n\nIf you remove the constant expression from the index, you aren't likely\ngoing to use the index anyway, but will instead sort the output rows\nfrom either a sequential scan or an index scan based on an index\nthat does use the constant expression.\n", "msg_date": "Mon, 13 Jun 2005 11:53:55 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index ot being used" } ]
[ { "msg_contents": "Hi All,\n\nWe are looking to upgrade to 8.0 from 7.3.2 on production server. The current production system we are using is \n\n----------------------------------------\n\n2 x 2.4 Ghz Intel Xeon CPU with HT(4 virtual CPUs)\n\nRAM - 1GB\n\nHDD - 34GB SCSI\n\n-------------------------------------\n\nProduction DB size: 10.89 GB\n\nNumber of tables: 253\n\nWe are planning to get a new server/system and upgrade to 8.0 on it. What is the recommended system requirement for Postgres 8.0?\n\nPlease give me your inputs on this.\n\nThanks\n\nSaranya\n\n\n\t\t\n---------------------------------\nYahoo! Mail Mobile\n Take Yahoo! Mail with you! Check email on your mobile phone.\n\nHi All,\nWe are looking to upgrade to 8.0 from 7.3.2 on production server. The current production system we are using is \n----------------------------------------\n2 x 2.4 Ghz Intel Xeon CPU with HT(4 virtual CPUs)\nRAM - 1GB\nHDD - 34GB SCSI\n-------------------------------------\nProduction DB size: 10.89 GB\nNumber of tables: 253\nWe are planning to get a new server/system and upgrade to 8.0 on it. What is the recommended system requirement for Postgres 8.0?\nPlease give me your inputs on this.\nThanks\nSaranya\nYahoo! Mail Mobile\nTake Yahoo! Mail with you! Check email on your mobile phone.", "msg_date": "Mon, 13 Jun 2005 12:56:37 -0700 (PDT)", "msg_from": "Saranya Sivakumar <[email protected]>", "msg_from_op": true, "msg_subject": "System Requirement" }, { "msg_contents": "Saranya Sivakumar wrote:\n> Hi All,\n> \n> We are looking to upgrade to 8.0 from 7.3.2 on production server. The \n> current production system we are using is\n> \n> ----------------------------------------\n> \n> 2 x 2.4 Ghz Intel Xeon CPU with HT(4 virtual CPUs)\n> \n> RAM - 1GB\n> \n> HDD - 34GB SCSI\n> \n> -------------------------------------\n> \n> Production DB size: 10.89 GB\n> \n> Number of tables: 253\n> \n> We are planning to get a new server/system and upgrade to 8.0 on it. \n> What is the recommended system requirement for Postgres 8.0?\n> \n> Please give me your inputs on this.\n> \n> Thanks\n> \n> Saranya\n\nHi,\n\n Let me be the first to recommend RAM. From what little I know so far \nI think it is still important to know more about what your database \nlooks like and how is it used/accessed. Can you post some more \ninformation on the details of your database? Is it a few users with \nlarge datasets (like a research project) or many users with small data \nsets (like a website)?\n\nhttp://www.postgresql.org/docs/8.0/interactive/kernel-resources.html\n\nSee if that helps a bit. My first suggestion would be to simply increase \nyour RAM to at least 2GB. Anything more would be beneficial up to the \npoint of being able to load your entire DB into RAM (16GB RAM should \nallow for that plus other OS overhead).\n\nWell, I'm relatively new so defer to others but this is my suggestion.\n\nBest of luck!\n\nMadison\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Mon, 13 Jun 2005 16:10:20 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System Requirement" }, { "msg_contents": "On 6/13/05, Saranya Sivakumar <[email protected]> wrote:\n> 2 x 2.4 Ghz Intel Xeon CPU with HT(4 virtual CPUs) \n\nswitch to amd opteron (dual cpu). 
for the same price you get 2x\nperformance - comparing to xeon boxes.\n\n> RAM - 1GB \n\nyou'd definitelly could use more ram. the more the better.\n\n> HDD - 34GB SCSI \n\nis it one drive of 34G? if yes, buy another one and setup raid1 over\nthem. should boost performance as well.\n\n> Production DB size: 10.89 GB \n\nnot much. you could even consider buing 12 or 16g of ram to make it\nfit in memory.\n\ndepesz\n", "msg_date": "Tue, 14 Jun 2005 10:32:06 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System Requirement" }, { "msg_contents": "Hi,\n \nThanks for the advice on increasing RAM and switching to AMD processors.\n \nTo tell more about our DB: Our DB is transaction intensive. Interaction to the DB is through a web based application. Typically at any instant there are over 100 users using the application and we have customers worldwide. The DB size is 10.89 GB with 250+ tables.\n\nAlso, regarding upgrading to 8.0, it is better to first upgrade to 7.4 (from 7.3.2--current version on production) and then upgrade to 8.0. Am I right?\n \nThanks,\n \nSaranya", "msg_date": "Tue, 14 Jun 2005 06:04:20 -0700 (PDT)", "msg_from": "Saranya Sivakumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System Requirement" } ]
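One more rough check relevant to the sizing question above, using only pg_class statistics; relpages is measured in 8 kB blocks and is only as current as the last VACUUM or ANALYZE.

-- The largest tables, in 8 kB pages, to judge how much of the 10.89 GB
-- database the new RAM could keep cached:
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;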
[ { "msg_contents": "I search for particular strings using regular expressions (e.g. where\ncolumn ~* $query) through a text data type column which contains notes\n(some html code like <b>bold</b> is included).\n\nIt works but my question is whether there would be a way to speed up\nsearches?\n\n From my limited investigation, I see the features \"CREATE INDEX\" and\n\"tsearch2\" but I'm not clear on how these work, whether they would be\nappropriate, and whether there would be a better approach.\n\nI'd appreciate being pointed in the right direction.\n\nPierre\n\n", "msg_date": "Tue, 14 Jun 2005 12:40:31 -0400 (EDT)", "msg_from": "\"Pierre A. Fortier\" <[email protected]>", "msg_from_op": true, "msg_subject": "regular expression search" }, { "msg_contents": "Just read the docs in contrib/tsearch2 in the PostgreSQL distribution.\n\nPierre A. Fortier wrote:\n> I search for particular strings using regular expressions (e.g. where\n> column ~* $query) through a text data type column which contains notes\n> (some html code like <b>bold</b> is included).\n> \n> It works but my question is whether there would be a way to speed up\n> searches?\n> \n>>From my limited investigation, I see the features \"CREATE INDEX\" and\n> \"tsearch2\" but I'm not clear on how these work, whether they would be\n> appropriate, and whether there would be a better approach.\n> \n> I'd appreciate being pointed in the right direction.\n> \n> Pierre\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Wed, 15 Jun 2005 09:21:06 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: regular expression search" } ]
[ { "msg_contents": "Hi All,\n\nI previously posted the following as a sequel to my SELECT DISTINCT \nPerformance Issue question. We would most appreciate any clue or \nsuggestions on how to overcome this show-stopping issue. We are using 8.0.3 \non Windows.\n\nIs it a known limitation when using a view with SELECT ... LIMIT 1?\n\nWould the forthcoming performance enhancement with MAX help when used \nwithin a view, as in:\n\ncreate or replace view VCurPlayer as select * from Player a\nwhere a.AtDate = (select Max(b.AtDate) from Player b where a.PlayerID = \nb.PlayerID);\n\nselect PlayerID,AtDate from VCurPlayer where PlayerID='22220';\n\nThanks and regards,\nKC.\n\n---------\n\nAt 19:45 05/06/06, PFC wrote:\n\n>>Previously, we have also tried to use LIMIT 1 instead of DISTINCT, but\n>>the performance was no better:\n>>select PlayerID,AtDate from Player where PlayerID='22220' order by\n>>PlayerID desc, AtDate desc LIMIT 1\n>\n> The DISTINCT query will pull out all the rows and keep only one, \n> so the\n>one with LIMIT should be faster. Can you post explain analyze of the LIMIT \n>query ?\n\nActually the problem with LIMIT 1 query is when we use views with the LIMIT \n1 construct. The direct SQL is ok:\n\nesdt=> explain analyze select PlayerID,AtDate from Player where \nPlayerID='22220'\n order by PlayerID desc, AtDate desc LIMIT 1;\n\n Limit (cost=0.00..1.37 rows=1 width=23) (actual time=0.000..0.000 rows=1 \nloops=1)\n -> Index Scan Backward using pk_player on player (cost=0.00..16074.23 \nrows=11770 width=23) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 0.000 ms\n\nesdt=> create or replace view VCurPlayer3 as select * from Player a\nwhere AtDate = (select b.AtDate from Player b where a.PlayerID = b.PlayerID\norder by b.PlayerID desc, b.AtDate desc LIMIT 1);\n\nesdt=> explain analyze select PlayerID,AtDate,version from VCurPlayer3 \nwhere PlayerID='22220';\n Index Scan using pk_player on player a (cost=0.00..33072.78 rows=59 \nwidth=27)\n(actual time=235.000..235.000 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = ((subplan))::text)\n SubPlan\n -> Limit (cost=0.00..1.44 rows=1 width=23) (actual \ntime=0.117..0.117 rows=1 loops=1743)\n -> Index Scan Backward using pk_player on player \nb (cost=0.00..14023.67 rows=9727 width=23) (actual time=0.108..0.108 \nrows=1 loops=1743)\n Index Cond: (($0)::text = (playerid)::text)\n Total runtime: 235.000 ms\n\nThe problem appears to be in the loops=1743 scanning all 1743 data records \nfor that player.\n\nRegards, KC.\n\n\n", "msg_date": "Wed, 15 Jun 2005 10:46:56 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "Hi All,\n\nInvestigating further on this problem I brought up in June, the following \nquery with pg 8.0.3 on Windows scans all 1743 data records for a player:\n\nesdt=> explain analyze select PlayerID,AtDate from Player a\n where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate desc \nLIMIT 1);\n\n Index Scan using pk_player on player a (cost=0.00..2789.07 rows=9 \nwidth=23) (a\nctual time=51.046..51.049 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = ((subplan))::text)\n SubPlan\n -> Limit (cost=0.00..0.83 rows=1 width=23) (actual \ntime=0.016..0.017 rows\n=1 loops=1743)\n -> Index Scan Backward using pk_player on 
player \nb (cost=0.00..970.\n53 rows=1166 width=23) (actual time=0.011..0.011 rows=1 loops=1743)\n Index Cond: ((playerid)::text = ($0)::text)\n Total runtime: 51.133 ms\n\nUsing a static value in the subquery produces the desired result below, but \nsince we use views for our queries (see last part of this email), we cannot \npush the static value into the subquery:\n\nesdt=> explain analyze select PlayerID,AtDate from Player a\n where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc LIMIT 1);\n\n Index Scan using pk_player on player a (cost=0.75..4.26 rows=1 width=23) \n(actu\nal time=0.054..0.058 rows=1 loops=1)\n Index Cond: (((playerid)::text = '22220'::text) AND ((atdate)::text = \n($0)::t\next))\n InitPlan\n -> Limit (cost=0.00..0.75 rows=1 width=23) (actual \ntime=0.028..0.029 rows\n=1 loops=1)\n -> Index Scan Backward using pk_player on player \nb (cost=0.00..1323\n.05 rows=1756 width=23) (actual time=0.023..0.023 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 0.149 ms\n\nThe Player table has a primary key on PlayerID, AtDate. Is there a way to \nstop the inner-most index scan looping all 1743 data records for that \nplayer? Is that a bug or known issue?\n\nBTW, I tried using 8.1 beta2 on Windows and its performance is similar, I \nhave also tried other variants such as MAX and DISTINCT but with no success.\n\nAny help is most appreciated.\n\nBest regards,\nKC.\n\n\nAt 10:46 05/06/15, K C Lau wrote:\n>Hi All,\n>\n>I previously posted the following as a sequel to my SELECT DISTINCT \n>Performance Issue question. We would most appreciate any clue or \n>suggestions on how to overcome this show-stopping issue. We are using \n>8.0.3 on Windows.\n>\n>Is it a known limitation when using a view with SELECT ... LIMIT 1?\n>\n>Would the forthcoming performance enhancement with MAX help when used \n>within a view, as in:\n>\n>create or replace view VCurPlayer as select * from Player a\n>where a.AtDate = (select Max(b.AtDate) from Player b where a.PlayerID = \n>b.PlayerID);\n>\n>select PlayerID,AtDate from VCurPlayer where PlayerID='22220';\n>\n>Thanks and regards,\n>KC.\n>\n>---------\n>\n>Actually the problem with LIMIT 1 query is when we use views with the \n>LIMIT 1 construct. 
The direct SQL is ok:\n>\n>esdt=> explain analyze select PlayerID,AtDate from Player where \n>PlayerID='22220'\n> order by PlayerID desc, AtDate desc LIMIT 1;\n>\n> Limit (cost=0.00..1.37 rows=1 width=23) (actual time=0.000..0.000 \n> rows=1 loops=1)\n> -> Index Scan Backward using pk_player on \n> player (cost=0.00..16074.23 rows=11770 width=23) (actual \n> time=0.000..0.000 rows=1 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Total runtime: 0.000 ms\n>\n>esdt=> create or replace view VCurPlayer3 as select * from Player a\n>where AtDate = (select b.AtDate from Player b where a.PlayerID = b.PlayerID\n>order by b.PlayerID desc, b.AtDate desc LIMIT 1);\n>\n>esdt=> explain analyze select PlayerID,AtDate,version from VCurPlayer3 \n>where PlayerID='22220';\n> Index Scan using pk_player on player a (cost=0.00..33072.78 rows=59 \n> width=27)\n>(actual time=235.000..235.000 rows=1 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Filter: ((atdate)::text = ((subplan))::text)\n> SubPlan\n> -> Limit (cost=0.00..1.44 rows=1 width=23) (actual \n> time=0.117..0.117 rows=1 loops=1743)\n> -> Index Scan Backward using pk_player on player \n> b (cost=0.00..14023.67 rows=9727 width=23) (actual time=0.108..0.108 \n> rows=1 loops=1743)\n> Index Cond: (($0)::text = (playerid)::text)\n> Total runtime: 235.000 ms\n\n", "msg_date": "Thu, 22 Sep 2005 12:21:18 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Thu, 2005-09-22 at 12:21 +0800, K C Lau wrote:\n\n> Investigating further on this problem I brought up in June, the following \n> query with pg 8.0.3 on Windows scans all 1743 data records for a player:\n> \n> esdt=> explain analyze select PlayerID,AtDate from Player a\n> where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate desc \n> LIMIT 1);\n> \n\n> Total runtime: 51.133 ms\n> \n> Using a static value in the subquery produces the desired result below, but \n> since we use views for our queries (see last part of this email), we cannot \n> push the static value into the subquery:\n> \n> esdt=> explain analyze select PlayerID,AtDate from Player a\n> where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc LIMIT 1);\n\n> Total runtime: 0.149 ms\n> \n> The Player table has a primary key on PlayerID, AtDate. Is there a way to \n> stop the inner-most index scan looping all 1743 data records for that \n> player? Is that a bug or known issue?\n\nCurrently the planner can't tell whether a subquery is correlated or not\nuntil it has planned the query. So it is unable to push down the\nqualification automatically in the way you have achieved manually. The\nnew min() optimisation doesn't yet work with GROUP BY which is what you\nwould use to reformulate the query that way, so no luck that way either.\n\nIf you don't want to do this in a view, calculate the values for all\nplayers at once and store the values in a summary table for when you\nneed them.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Thu, 22 Sep 2005 09:40:56 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "We use similar views as base views throughout our OLTP system to get the \nlatest time-based record(s). 
So it is quite impossible to use summary \ntables etc. Are there other ways to do it?\n\nThe subquery would pinpoint the record(s) with the composite primary key. \nBoth MS Sql and Oracle do not have such performance problem. So this \nproblem is effectively stopping us from migrating to PostgreSQL.\n\nAny suggestions would be most appreciated.\n\nBest regards,\nKC.\n\nAt 16:40 05/09/22, Simon Riggs wrote:\n>On Thu, 2005-09-22 at 12:21 +0800, K C Lau wrote:\n>\n> > Investigating further on this problem I brought up in June, the following\n> > query with pg 8.0.3 on Windows scans all 1743 data records for a player:\n> >\n> > esdt=> explain analyze select PlayerID,AtDate from Player a\n> > where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> > where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate desc\n> > LIMIT 1);\n> >\n>\n> > Total runtime: 51.133 ms\n> >\n> > Using a static value in the subquery produces the desired result below, \n> but\n> > since we use views for our queries (see last part of this email), we \n> cannot\n> > push the static value into the subquery:\n> >\n> > esdt=> explain analyze select PlayerID,AtDate from Player a\n> > where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> > where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc \n> LIMIT 1);\n>\n> > Total runtime: 0.149 ms\n> >\n> > The Player table has a primary key on PlayerID, AtDate. Is there a way to\n> > stop the inner-most index scan looping all 1743 data records for that\n> > player? Is that a bug or known issue?\n>\n>Currently the planner can't tell whether a subquery is correlated or not\n>until it has planned the query. So it is unable to push down the\n>qualification automatically in the way you have achieved manually. The\n>new min() optimisation doesn't yet work with GROUP BY which is what you\n>would use to reformulate the query that way, so no luck that way either.\n>\n>If you don't want to do this in a view, calculate the values for all\n>players at once and store the values in a summary table for when you\n>need them.\n>\n>Best Regards, Simon Riggs\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n\n", "msg_date": "Thu, 22 Sep 2005 18:40:46 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Thu, 2005-09-22 at 18:40 +0800, K C Lau wrote:\n> We use similar views as base views throughout our OLTP system to get the \n> latest time-based record(s). So it is quite impossible to use summary \n> tables etc. Are there other ways to do it?\n> \n> The subquery would pinpoint the record(s) with the composite primary key. \n> Both MS Sql and Oracle do not have such performance problem. 
So this \n> problem is effectively stopping us from migrating to PostgreSQL.\n> \n> Any suggestions would be most appreciated.\n\nEven if this were fixed for 8.1, which seems unlikely, would you be able\nto move to that release immediately?\n\nISTM you have two choices, in priority, complexity and time/cost order\n1) custom mods to your app\n2) custom mods to PostgreSQL\n\nMaybe its possible to reconstruct your query with sub-sub-selects so\nthat you have a correlated query with manually pushed down clauses,\nwhich also references a more constant base view?\n\nIs a 51ms query really such a problem for you?\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Thu, 22 Sep 2005 13:48:04 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "At 20:48 05/09/22, Simon Riggs wrote:\n>On Thu, 2005-09-22 at 18:40 +0800, K C Lau wrote:\n> > We use similar views as base views throughout our OLTP system to get the\n> > latest time-based record(s). So it is quite impossible to use summary\n> > tables etc. Are there other ways to do it?\n> >\n> > The subquery would pinpoint the record(s) with the composite primary key.\n> > Both MS Sql and Oracle do not have such performance problem. So this\n> > problem is effectively stopping us from migrating to PostgreSQL.\n> >\n> > Any suggestions would be most appreciated.\n>\n>Even if this were fixed for 8.1, which seems unlikely, would you be able\n>to move to that release immediately?\n\nYes. In fact when we first developed our system a few years ago, we tested \non MS7.0, Oracle 8 and PG 7.1.1 and we did not hit that problem. When we \ntry again with PG 8.0, the performance becomes unbearable, but other areas \nappear ok and other queries are often faster than MS Sql2k.\n\n>Maybe its possible to reconstruct your query with sub-sub-selects so\n>that you have a correlated query with manually pushed down clauses,\n>which also references a more constant base view?\n\nWe would be most happy to try them if we have some example views or pointers.\n\n>Is a 51ms query really such a problem for you?\n\nUnfortunately yes, as our target performance is in the high hundreds of \ntransactions per sec. And 51 ms is already the best case for a single \nselect, with everything cached in memory immediately after the same select \nwhich took 390 ms on a quiet system.\n\n>Best Regards, Simon Riggs\n\nBest regards,\nKC. \n\n", "msg_date": "Thu, 22 Sep 2005 22:39:29 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "K C Lau <[email protected]> writes:\n> At 20:48 05/09/22, Simon Riggs wrote:\n>> Even if this were fixed for 8.1, which seems unlikely, would you be able\n>> to move to that release immediately?\n\n> Yes. In fact when we first developed our system a few years ago, we tested \n> on MS7.0, Oracle 8 and PG 7.1.1 and we did not hit that problem.\n\nIt's really not credible that PG 7.1 did any better with this than\ncurrent sources do. The subplan mechanism hasn't changed materially\nsince about 6.5. 
It could be that 7.1's performance was simply so\nbad across the board that you didn't notice ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Sep 2005 11:41:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue " }, { "msg_contents": "On Thu, 2005-09-22 at 22:39 +0800, K C Lau wrote:\n> >Is a 51ms query really such a problem for you?\n> \n> Unfortunately yes, as our target performance is in the high hundreds of \n> transactions per sec. And 51 ms is already the best case for a single \n> select, with everything cached in memory immediately after the same select \n> which took 390 ms on a quiet system.\n\nIf the current value is used so often, use two tables - one with a\ncurrent view only of the row maintained using UPDATE. Different\nperformance issues maybe, but at least not correlated subquery ones.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Thu, 22 Sep 2005 17:16:23 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Thu, 2005-09-22 at 18:40 +0800, K C Lau wrote: \n> > > esdt=> explain analyze select PlayerID,AtDate from Player a\n> > > where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> > > where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate desc\n> > > LIMIT 1);\n\nI think you should try:\n\nselect distinct on (PlayerID) PlayerID,AtDate from Player a\nwhere PlayerID='22220' order by PlayerId, AtDate Desc;\n\nDoes that work for you?\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 23 Sep 2005 12:15:41 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "At 19:15 05/09/23, Simon Riggs wrote:\n>select distinct on (PlayerID) PlayerID,AtDate from Player a\n>where PlayerID='22220' order by PlayerId, AtDate Desc;\n>\n>Does that work for you?\n>\n>Best Regards, Simon Riggs\n\nesdt=> explain analyze select distinct on (PlayerID) PlayerID,AtDate from \nPlayer a where PlayerID='22220' order by PlayerId, AtDate Desc;\n Unique (cost=1417.69..1426.47 rows=2 width=23) (actual \ntime=31.231..36.609 rows=1 loops=1)\n -> Sort (cost=1417.69..1422.08 rows=1756 width=23) (actual \ntime=31.129..32.473 rows=1743 loops=1)\n Sort Key: playerid, atdate\n -> Index Scan using pk_player on player a (cost=0.00..1323.05 \nrows=1756 width=23) (actual time=0.035..6.575 rows=1743 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 36.943 ms\n\nThe sort was eliminated with: order by PlayerId Desc, AtDate Desc:\n\nesdt=> explain analyze select distinct on (PlayerID) PlayerID,AtDate from \nPlayer a where PlayerID='22220' order by PlayerId Desc, AtDate Desc;\n Unique (cost=0.00..1327.44 rows=2 width=23) (actual time=0.027..8.438 \nrows=1 loops=1)\n -> Index Scan Backward using pk_player on player \na (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.022..4.950 \nrows=1743 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 8.499 ms\n\nThat is the fastest of all queries looping the 1743 rows.\nI do get the desired result by adding LIMIT 1:\n\nesdt=> explain analyze select distinct on (PlayerID) PlayerID,AtDate from \nPlayer a where PlayerID='22220' order by PlayerId Desc, AtDate Desc LIMIT 1;\n\n Limit (cost=0.00..663.72 rows=1 width=23) (actual time=0.032..0.033 \nrows=1 loops=1)\n -> Unique (cost=0.00..1327.44 rows=2 width=23) 
(actual \ntime=0.028..0.028 rows=1 loops=1)\n -> Index Scan Backward using pk_player on player \na (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.022..0.022 rows=1 \nloops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 0.094 ms\n\nHowever, when I use that within a function in a view, it is slow again:\n\nesdt=> create or replace function player_max_atdate (varchar(32)) returns \nvarchar(32) as $$\nesdt$> select distinct on (PlayerID) AtDate from player where PlayerID= $1 \norder by PlayerID desc, AtDate desc limit 1;\nesdt$> $$ language sql immutable;\nCREATE FUNCTION\nesdt=> create or replace view VCurPlayer3 as select * from Player where \nAtDate = player_max_atdate(PlayerID);\nCREATE VIEW\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n\n Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \nwidth=23) (actual time=76.660..76.664 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n Total runtime: 76.716 ms\n\nWhy wouldn't the function get the row as quickly as the direct sql does?\n\nBest regards, KC.\n\n\n", "msg_date": "Fri, 23 Sep 2005 20:17:03 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "At 20:17 05/09/23, K C Lau wrote:\n>At 19:15 05/09/23, Simon Riggs wrote:\n>>select distinct on (PlayerID) PlayerID,AtDate from Player a\n>>where PlayerID='22220' order by PlayerId, AtDate Desc;\n>>\n>>Does that work for you?\n>>\n>>Best Regards, Simon Riggs\n>\n>esdt=> explain analyze select distinct on (PlayerID) PlayerID,AtDate from \n>Player a where PlayerID='22220' order by PlayerId Desc, AtDate Desc;\n> Unique (cost=0.00..1327.44 rows=2 width=23) (actual time=0.027..8.438 \n> rows=1 loops=1)\n> -> Index Scan Backward using pk_player on player \n> a (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.022..4.950 \n> rows=1743 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Total runtime: 8.499 ms\n>\n>That is the fastest of all queries looping the 1743 rows.\n>I do get the desired result by adding LIMIT 1:\n>\n>esdt=> explain analyze select distinct on (PlayerID) PlayerID,AtDate from \n>Player a where PlayerID='22220' order by PlayerId Desc, AtDate Desc LIMIT 1;\n>\n> Limit (cost=0.00..663.72 rows=1 width=23) (actual time=0.032..0.033 \n> rows=1 loops=1)\n> -> Unique (cost=0.00..1327.44 rows=2 width=23) (actual \n> time=0.028..0.028 rows=1 loops=1)\n> -> Index Scan Backward using pk_player on player \n> a (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.022..0.022 \n> rows=1 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Total runtime: 0.094 ms\n>\n>However, when I use that within a function in a view, it is slow again:\n>\n>esdt=> create or replace function player_max_atdate (varchar(32)) returns \n>varchar(32) as $$\n>esdt$> select distinct on (PlayerID) AtDate from player where PlayerID= \n>$1 order by PlayerID desc, AtDate desc limit 1;\n>esdt$> $$ language sql immutable;\n>CREATE FUNCTION\n>esdt=> create or replace view VCurPlayer3 as select * from Player where \n>AtDate = player_max_atdate(PlayerID);\n>CREATE VIEW\n>esdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \n>PlayerID='22220';\n>\n> Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \n> width=23) (actual time=76.660..76.664 rows=1 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> 
Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n> Total runtime: 76.716 ms\n>\n>Why wouldn't the function get the row as quickly as the direct sql does?\n\nResults from the following query suggests that the explain analyze output \nabove only tells half the story, and that the function is in fact called \n1743 times:\n\nesdt=> create or replace view VCurPlayer3 as select distinct on (PlayerID) \n* from Player a where OID = (select distinct on (PlayerID) OID from Player \nb where b.PlayerID = a.PlayerID and b.AtDate = \nplayer_max_atdate(b.PlayerID) order by PlayerID desc, AtDate desc limit 1) \norder by PlayerId Desc, AtDate desc;\nCREATE VIEW\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n Subquery Scan vcurplayer3 (cost=0.00..1715846.91 rows=1 width=68) \n(actual time=0.640..119.124 rows=1 loops=1)\n -> Unique (cost=0.00..1715846.90 rows=1 width=776) (actual \ntime=0.633..119.112 rows=1 loops=1)\n -> Index Scan Backward using pk_player on player \na (cost=0.00..1715846.88 rows=9 width=776) (actual time=0.628..119.104 \nrows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: (oid = (subplan))\n SubPlan\n -> Limit (cost=0.00..976.38 rows=1 width=27) (actual \ntime=0.057..0.058 rows=1 loops=1743)\n -> Unique (cost=0.00..976.38 rows=1 width=27) \n(actual time=0.052..0.052 rows=1 loops=1743)\n -> Index Scan Backward using pk_player on \nplayer b (cost=0.00..976.36 rows=6 width=27) (actual time=0.047..0.047 \nrows=1 loops=1743)\n Index Cond: ((playerid)::text = ($0)::text)\n Filter: ((atdate)::text = \n(player_max_atdate(playerid))::text)\n Total runtime: 119.357 ms\n\nIt would also explain the very long time taken by the pl/pgsql function I \nposted a bit earlier.\n\nSo I guess it all comes back to the basic question:\n\nFor the query select distinct on (PlayerID) * from Player a where \nPlayerID='22220' order by PlayerId Desc, AtDate Desc;\ncan the optimizer recognise the fact the query is selecting by the primary \nkey (PlayerID,AtDate), so it can skip the remaining rows for that PlayerID, \nas if LIMIT 1 is implied?\n\nBest regards, KC.\n\n\n", "msg_date": "Mon, 26 Sep 2005 15:46:30 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Fri, Sep 23, 2005 at 08:17:03PM +0800, K C Lau wrote:\n> esdt=> create or replace function player_max_atdate (varchar(32)) returns \n> varchar(32) as $$\n> esdt$> select distinct on (PlayerID) AtDate from player where PlayerID= $1 \n> order by PlayerID desc, AtDate desc limit 1;\n> esdt$> $$ language sql immutable;\n> CREATE FUNCTION\n\nThat function is not immutable, it should be defined as stable.\n\n> esdt=> create or replace view VCurPlayer3 as select * from Player where \n> AtDate = player_max_atdate(PlayerID);\n> CREATE VIEW\n> esdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \n> PlayerID='22220';\n> \n> Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \n> width=23) (actual time=76.660..76.664 rows=1 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n> Total runtime: 76.716 ms\n> \n> Why wouldn't the function get the row as quickly as the direct sql does?\n\nPostgreSQL doesn't pre-compile functions, at least not until 8.1 (and\nI'm not sure how much those are pre-compiled, though they are\nsyntax-checked at creation). 
Do you get the same result time when you\nrun it a second time? What time do you get from running just the\nfunction versus the SQL in the function?\n\nAlso, remember that every layer you add to the cake means more work for\nthe database. If speed is that highly critical you'll probably want to\nnot wrap things in functions, and possibly not use views either.\n\nAlso, keep in mind that getting below 1ms doesn't automatically mean\nyou'll be able to scale to 1000TPS. Things will definately change when\nyou load the system down, so if performance is that critical you should\nstart testing with the system under load if you're not already.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 4 Oct 2005 16:15:41 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Tue, Oct 04, 2005 at 04:15:41PM -0500, Jim C. Nasby wrote:\n> > Index Cond: ((playerid)::text = '22220'::text)\n\nAlso, why is playerid a text field? Comparing ints will certainly be\nfaster...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 4 Oct 2005 17:19:12 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" } ]
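Simon's suggestion above, keeping the current row per player in its own table maintained by UPDATE, never got spelled out in the thread, so here is a minimal sketch. It reuses the Player(PlayerID, AtDate) columns discussed above, but the PlayerCurrent table, trigger and function names are invented for illustration; it assumes AtDate sorts correctly as text (as the casts in the plans suggest), uses 8.0 dollar quoting like the rest of the thread, and ignores concurrent inserts for the same player.

create table PlayerCurrent (
    PlayerID varchar(32) primary key,
    AtDate   varchar(32) not null
);

create or replace function player_current_upd() returns trigger as $$
begin
    -- push the current pointer forward when a newer row arrives
    update PlayerCurrent
       set AtDate = new.AtDate
     where PlayerID = new.PlayerID
       and AtDate < new.AtDate;
    if not found then
        -- either no row yet for this player, or the existing one is newer
        perform 1 from PlayerCurrent where PlayerID = new.PlayerID;
        if not found then
            insert into PlayerCurrent (PlayerID, AtDate)
            values (new.PlayerID, new.AtDate);
        end if;
    end if;
    return new;
end;
$$ language plpgsql;

create trigger player_current_trig
    after insert on Player
    for each row execute procedure player_current_upd();

The point query then reduces to a primary-key lookup on PlayerCurrent, or a join back to Player on (PlayerID, AtDate) when the whole row is needed, at the price of one extra write per insert.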
[ { "msg_contents": "I deeply apologize if this has been covered with some similar topic \nbefore, but I need a little guidance in the optimization department. \nWe use Postgres as our database and we're having some issues dealing \nwith customers who are, shall we say, \"thrifty\" when it comes to \nbuying RAM.\n\nWe tell them to buy at least 1GB, but there's always the bargain \nchaser who thinks 256MB of RAM \"is more than enough. So here's what I \nneed--in layman's terms 'cause I'll need to forward this message on \nto them to prove what I'm saying (don't ya love customers?).\n\n1. Our database has a total of 35 tables and maybe 300 variables\n2. There are five primary tables and only two of these are written to \nevery minute, sometimes up to a menial 1500 transactions per minute.\n3. Our customers usually buy RAM in 256MB, 512MB, 1GB or 2GB. We've \ntried to come up with a optimization scheme based on what we've been \nable to discern from lists like this, but we don't have a lot of \nconfidence. Using the default settings seems to work best with 1GB, \nbut we need help with the other RAM sizes.\n\nWhat's the problem? The sucker gets s-l-o-w on relatively simple \nqueries. For example, simply listing all of the users online at one \ntime takes 30-45 seconds if we're talking about 800 users. We've \nadjusted the time period for vacuuming the tables to the point where \nit occurs once an hour, but we're getting only a 25% performance gain \nfrom that. We're looking at the system settings now to see how those \ncan be tweaked.\n\nSo, what I need is to be pointed to (or told) what are the best \nsettings for our database given these memory configurations. What \nshould we do?\n\nThanks\n\nTodd\n\nDon't know if this will help, but here's the result of show all:\n\nNOTICE: enable_seqscan is on\nNOTICE: enable_indexscan is on\nNOTICE: enable_tidscan is on\nNOTICE: enable_sort is on\nNOTICE: enable_nestloop is on\nNOTICE: enable_mergejoin is on\nNOTICE: enable_hashjoin is on\nNOTICE: ksqo is off\nNOTICE: geqo is on\nNOTICE: tcpip_socket is on\nNOTICE: ssl is off\nNOTICE: fsync is on\nNOTICE: silent_mode is off\nNOTICE: log_connections is off\nNOTICE: log_timestamp is off\nNOTICE: log_pid is off\nNOTICE: debug_print_query is off\nNOTICE: debug_print_parse is off\nNOTICE: debug_print_rewritten is off\nNOTICE: debug_print_plan is off\nNOTICE: debug_pretty_print is off\nNOTICE: show_parser_stats is off\nNOTICE: show_planner_stats is off\nNOTICE: show_executor_stats is off\nNOTICE: show_query_stats is off\nNOTICE: stats_start_collector is on\nNOTICE: stats_reset_on_server_start is on\nNOTICE: stats_command_string is off\nNOTICE: stats_row_level is off\nNOTICE: stats_block_level is off\nNOTICE: trace_notify is off\nNOTICE: hostname_lookup is off\nNOTICE: show_source_port is off\nNOTICE: sql_inheritance is on\nNOTICE: australian_timezones is off\nNOTICE: fixbtree is on\nNOTICE: password_encryption is off\nNOTICE: transform_null_equals is off\nNOTICE: geqo_threshold is 20\nNOTICE: geqo_pool_size is 0\nNOTICE: geqo_effort is 1\nNOTICE: geqo_generations is 0\nNOTICE: geqo_random_seed is -1\nNOTICE: deadlock_timeout is 1000\nNOTICE: syslog is 0\nNOTICE: max_connections is 64\nNOTICE: shared_buffers is 256\nNOTICE: port is 5432\nNOTICE: unix_socket_permissions is 511\nNOTICE: sort_mem is 2048\nNOTICE: vacuum_mem is 126622\nNOTICE: max_files_per_process is 1000\nNOTICE: debug_level is 0\nNOTICE: max_expr_depth is 10000\nNOTICE: max_fsm_relations is 500\nNOTICE: max_fsm_pages is 10000\nNOTICE: 
max_locks_per_transaction is 64\nNOTICE: authentication_timeout is 60\nNOTICE: pre_auth_delay is 0\nNOTICE: checkpoint_segments is 3\nNOTICE: checkpoint_timeout is 300\nNOTICE: wal_buffers is 8\nNOTICE: wal_files is 0\nNOTICE: wal_debug is 0\nNOTICE: commit_delay is 0\nNOTICE: commit_siblings is 5\nNOTICE: effective_cache_size is 79350\nNOTICE: random_page_cost is 2\nNOTICE: cpu_tuple_cost is 0.01\nNOTICE: cpu_index_tuple_cost is 0.001\nNOTICE: cpu_operator_cost is 0.0025\nNOTICE: geqo_selection_bias is 2\nNOTICE: default_transaction_isolation is read committed\nNOTICE: dynamic_library_path is $libdir\nNOTICE: krb_server_keyfile is FILE:/etc/pgsql/krb5.keytab\nNOTICE: syslog_facility is LOCAL0\nNOTICE: syslog_ident is postgres\nNOTICE: unix_socket_group is unset\nNOTICE: unix_socket_directory is unset\nNOTICE: virtual_host is unset\nNOTICE: wal_sync_method is fdatasync\nNOTICE: DateStyle is ISO with US (NonEuropean) conventions\nNOTICE: Time zone is unset\nNOTICE: TRANSACTION ISOLATION LEVEL is READ COMMITTED\nNOTICE: Current client encoding is 'SQL_ASCII'\nNOTICE: Current server encoding is 'SQL_ASCII'\nNOTICE: Seed for random number generator is unavailable\n", "msg_date": "Wed, 15 Jun 2005 02:06:27 -0700", "msg_from": "Todd Landfried <[email protected]>", "msg_from_op": true, "msg_subject": "Needed: Simplified guide to optimal memory configuration" }, { "msg_contents": "On Wed, 15 Jun 2005, Todd Landfried wrote:\n\n> So, what I need is to be pointed to (or told) what are the best \n> settings for our database given these memory configurations. What \n> should we do?\n\nMaybe this will help:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n> NOTICE: shared_buffers is 256\n\nThis looks like it's way too low. Try something like 2048.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Wed, 15 Jun 2005 11:25:37 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration" }, { "msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> On Wed, 15 Jun 2005, Todd Landfried wrote:\n>> NOTICE: shared_buffers is 256\n\n> This looks like it's way too low. Try something like 2048.\n\nIt also is evidently PG 7.2 or before; SHOW's output hasn't looked like\nthat in years. Try a more recent release --- there's usually nontrivial\nperformance improvements in each major release.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Jun 2005 10:18:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration " }, { "msg_contents": "On Wed, Jun 15, 2005 at 02:06:27 -0700,\n Todd Landfried <[email protected]> wrote:\n> \n> What's the problem? The sucker gets s-l-o-w on relatively simple \n> queries. For example, simply listing all of the users online at one \n> time takes 30-45 seconds if we're talking about 800 users. We've \n> adjusted the time period for vacuuming the tables to the point where \n> it occurs once an hour, but we're getting only a 25% performance gain \n> from that. 
We're looking at the system settings now to see how those \n> can be tweaked.\n\nIt might be useful to see example slow queries and the corresponding\nexplain analyze output.\n", "msg_date": "Wed, 15 Jun 2005 09:19:49 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration" }, { "msg_contents": "Dennis,\n\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> > NOTICE: shared_buffers is 256\n\nFor everyone's info, the current (8.0) version is at:\nhttp://www.powerpostgresql.com/PerfList\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 15 Jun 2005 11:34:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration" }, { "msg_contents": "Yes, it is 7.2. Why? because an older version of our software runs on \nRH7.3 and that was the latest supported release of Postgresql for \nRH7.3 (that we can find). We're currently ported to 8, but we still \nhave a large installed base with the other version.\n\n\nOn Jun 15, 2005, at 7:18 AM, Tom Lane wrote:\n\n> Dennis Bjorklund <[email protected]> writes:\n>\n>> On Wed, 15 Jun 2005, Todd Landfried wrote:\n>>\n>>> NOTICE: shared_buffers is 256\n>>>\n>\n>\n>> This looks like it's way too low. Try something like 2048.\n>>\n>\n> It also is evidently PG 7.2 or before; SHOW's output hasn't looked \n> like\n> that in years. Try a more recent release --- there's usually \n> nontrivial\n> performance improvements in each major release.\n>\n> regards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n>\n\n", "msg_date": "Thu, 16 Jun 2005 07:46:45 -0700", "msg_from": "Todd Landfried <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration " }, { "msg_contents": "We run the RPM's for RH 7.3 on our 7.2 install base with no problems.\nRPM's as recent as for PostgreSQL 7.4.2 are available here:\nftp://ftp10.us.postgresql.org/pub/postgresql/binary/v7.4.2/redhat/redhat-7.3/\n\nOr you can always compile from source. There isn't any such thing as a\n'supported' package for RH7.2 anyway.\n\n-- Mark Lewis\n\n\nOn Thu, 2005-06-16 at 07:46 -0700, Todd Landfried wrote:\n> Yes, it is 7.2. Why? because an older version of our software runs on \n> RH7.3 and that was the latest supported release of Postgresql for \n> RH7.3 (that we can find). We're currently ported to 8, but we still \n> have a large installed base with the other version.\n> \n> \n> On Jun 15, 2005, at 7:18 AM, Tom Lane wrote:\n> \n> > Dennis Bjorklund <[email protected]> writes:\n> >\n> >> On Wed, 15 Jun 2005, Todd Landfried wrote:\n> >>\n> >>> NOTICE: shared_buffers is 256\n> >>>\n> >\n> >\n> >> This looks like it's way too low. Try something like 2048.\n> >>\n> >\n> > It also is evidently PG 7.2 or before; SHOW's output hasn't looked \n> > like\n> > that in years. 
Try a more recent release --- there's usually \n> > nontrivial\n> > performance improvements in each major release.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to \n> > [email protected])\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n", "msg_date": "Thu, 16 Jun 2005 10:01:00 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory" }, { "msg_contents": "On Thu, Jun 16, 2005 at 07:46:45 -0700,\n Todd Landfried <[email protected]> wrote:\n> Yes, it is 7.2. Why? because an older version of our software runs on \n> RH7.3 and that was the latest supported release of Postgresql for \n> RH7.3 (that we can find). We're currently ported to 8, but we still \n> have a large installed base with the other version.\n\nYou can build it from source. I run 8.0 stable from CVS on a RH 6.1 box.\n", "msg_date": "Thu, 16 Jun 2005 12:03:10 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration" }, { "msg_contents": "Thanks for the link. I'll look into those.\n\nI'm going only on what my engineers are telling me, but they say \nupgrading breaks a lot of source code with some SQL commands that are \na pain to hunt down and kill. Not sure if that's true, but that's \nwhat I'm told.\n\nTodd\n\nOn Jun 16, 2005, at 10:01 AM, Mark Lewis wrote:\n\n\n> We run the RPM's for RH 7.3 on our 7.2 install base with no problems.\n> RPM's as recent as for PostgreSQL 7.4.2 are available here:\n> ftp://ftp10.us.postgresql.org/pub/postgresql/binary/v7.4.2/redhat/ \n> redhat-7.3/\n>\n> Or you can always compile from source. There isn't any such thing \n> as a\n> 'supported' package for RH7.2 anyway.\n>\n> -- Mark Lewis\n>\n>\n> On Thu, 2005-06-16 at 07:46 -0700, Todd Landfried wrote:\n>\n>\n>> Yes, it is 7.2. Why? because an older version of our software runs on\n>> RH7.3 and that was the latest supported release of Postgresql for\n>> RH7.3 (that we can find). We're currently ported to 8, but we still\n>> have a large installed base with the other version.\n>>\n>>\n>> On Jun 15, 2005, at 7:18 AM, Tom Lane wrote:\n>>\n>>\n>>\n>>> Dennis Bjorklund <[email protected]> writes:\n>>>\n>>>\n>>>\n>>>> On Wed, 15 Jun 2005, Todd Landfried wrote:\n>>>>\n>>>>\n>>>>\n>>>>> NOTICE: shared_buffers is 256\n>>>>>\n>>>>>\n>>>>>\n>>>\n>>>\n>>>\n>>>\n>>>> This looks like it's way too low. Try something like 2048.\n>>>>\n>>>>\n>>>>\n>>>\n>>> It also is evidently PG 7.2 or before; SHOW's output hasn't looked\n>>> like\n>>> that in years. 
Try a more recent release --- there's usually\n>>> nontrivial\n>>> performance improvements in each major release.\n>>>\n>>> regards, tom lane\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to\n>>> [email protected])\n>>>\n>>>\n>>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 9: the planner will ignore your desire to choose an index scan \n>> if your\n>> joining column's datatypes do not match\n>>\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n\n", "msg_date": "Thu, 16 Jun 2005 19:15:08 -0700", "msg_from": "Todd Landfried <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Needed: Simplified guide to optimal memory" }, { "msg_contents": "On Thu, Jun 16, 2005 at 07:15:08PM -0700, Todd Landfried wrote:\n> Thanks for the link. I'll look into those.\n> \n> I'm going only on what my engineers are telling me, but they say \n> upgrading breaks a lot of source code with some SQL commands that are \n> a pain to hunt down and kill. Not sure if that's true, but that's \n> what I'm told.\n\nThis is true. Migrating to a newer version is not a one-day thing. But\nincreasing shared_buffers is trivially done, would get you lots of\nbenefit, and it's very unlikely to break anything. (Migrating one\nversion can be painful already -- migrating three versions on one shot\nmight be a nightmare. OTOH it's much better to pay the cost of\nmigration once rather than three times ...)\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n", "msg_date": "Fri, 17 Jun 2005 00:46:43 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory" }, { "msg_contents": "Todd,\n\n> I'm going only on what my engineers are telling me, but they say\n> upgrading breaks a lot of source code with some SQL commands that are\n> a pain to hunt down and kill. Not sure if that's true, but that's\n> what I'm told.\n\nDepends on your app, but certainly that can be true. Oddly, 7.2 -> 8.0 is \nless trouble than 7.2 -> 7.4 because of some type casting issues which were \nresolved.\n\nMind you, in the past a quick \"sed\" script has been adequate for me to fix \ncompatibility issues.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 17 Jun 2005 09:42:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Needed: Simplified guide to optimal memory" }, { "msg_contents": "For those who provided some guidance, I say \"thank you.\" You comments \nhelped out a lot. All of our customers who are using the older \nrelease are now very pleased with the performance of the database now \nthat we were able to give them meaningful configuration settings. I'm \nalso pleased to see that Frank WIles has taken upon himself the \neffort to write this guidance down for folks like me.\n\nKudos to you all. Thanks again.\n\nTodd\n\n\nOn Jun 15, 2005, at 2:06 AM, Todd Landfried wrote:\n\n> I deeply apologize if this has been covered with some similar topic \n> before, but I need a little guidance in the optimization \n> department. 
We use Postgres as our database and we're having some \n> issues dealing with customers who are, shall we say, \"thrifty\" when \n> it comes to buying RAM.\n>\n> We tell them to buy at least 1GB, but there's always the bargain \n> chaser who thinks 256MB of RAM \"is more than enough. So here's what \n> I need--in layman's terms 'cause I'll need to forward this message \n> on to them to prove what I'm saying (don't ya love customers?).\n>\n> 1. Our database has a total of 35 tables and maybe 300 variables\n> 2. There are five primary tables and only two of these are written \n> to every minute, sometimes up to a menial 1500 transactions per \n> minute.\n> 3. Our customers usually buy RAM in 256MB, 512MB, 1GB or 2GB. We've \n> tried to come up with a optimization scheme based on what we've \n> been able to discern from lists like this, but we don't have a lot \n> of confidence. Using the default settings seems to work best with \n> 1GB, but we need help with the other RAM sizes.\n>\n> What's the problem? The sucker gets s-l-o-w on relatively simple \n> queries. For example, simply listing all of the users online at one \n> time takes 30-45 seconds if we're talking about 800 users. We've \n> adjusted the time period for vacuuming the tables to the point \n> where it occurs once an hour, but we're getting only a 25% \n> performance gain from that. We're looking at the system settings \n> now to see how those can be tweaked.\n>\n> So, what I need is to be pointed to (or told) what are the best \n> settings for our database given these memory configurations. What \n> should we do?\n>\n> Thanks\n>\n> Todd\n>\n> Don't know if this will help, but here's the result of show all:\n>\n> NOTICE: enable_seqscan is on\n> NOTICE: enable_indexscan is on\n> NOTICE: enable_tidscan is on\n> NOTICE: enable_sort is on\n> NOTICE: enable_nestloop is on\n> NOTICE: enable_mergejoin is on\n> NOTICE: enable_hashjoin is on\n> NOTICE: ksqo is off\n> NOTICE: geqo is on\n> NOTICE: tcpip_socket is on\n> NOTICE: ssl is off\n> NOTICE: fsync is on\n> NOTICE: silent_mode is off\n> NOTICE: log_connections is off\n> NOTICE: log_timestamp is off\n> NOTICE: log_pid is off\n> NOTICE: debug_print_query is off\n> NOTICE: debug_print_parse is off\n> NOTICE: debug_print_rewritten is off\n> NOTICE: debug_print_plan is off\n> NOTICE: debug_pretty_print is off\n> NOTICE: show_parser_stats is off\n> NOTICE: show_planner_stats is off\n> NOTICE: show_executor_stats is off\n> NOTICE: show_query_stats is off\n> NOTICE: stats_start_collector is on\n> NOTICE: stats_reset_on_server_start is on\n> NOTICE: stats_command_string is off\n> NOTICE: stats_row_level is off\n> NOTICE: stats_block_level is off\n> NOTICE: trace_notify is off\n> NOTICE: hostname_lookup is off\n> NOTICE: show_source_port is off\n> NOTICE: sql_inheritance is on\n> NOTICE: australian_timezones is off\n> NOTICE: fixbtree is on\n> NOTICE: password_encryption is off\n> NOTICE: transform_null_equals is off\n> NOTICE: geqo_threshold is 20\n> NOTICE: geqo_pool_size is 0\n> NOTICE: geqo_effort is 1\n> NOTICE: geqo_generations is 0\n> NOTICE: geqo_random_seed is -1\n> NOTICE: deadlock_timeout is 1000\n> NOTICE: syslog is 0\n> NOTICE: max_connections is 64\n> NOTICE: shared_buffers is 256\n> NOTICE: port is 5432\n> NOTICE: unix_socket_permissions is 511\n> NOTICE: sort_mem is 2048\n> NOTICE: vacuum_mem is 126622\n> NOTICE: max_files_per_process is 1000\n> NOTICE: debug_level is 0\n> NOTICE: max_expr_depth is 10000\n> NOTICE: max_fsm_relations is 500\n> NOTICE: max_fsm_pages is 10000\n> 
NOTICE: max_locks_per_transaction is 64\n> NOTICE: authentication_timeout is 60\n> NOTICE: pre_auth_delay is 0\n> NOTICE: checkpoint_segments is 3\n> NOTICE: checkpoint_timeout is 300\n> NOTICE: wal_buffers is 8\n> NOTICE: wal_files is 0\n> NOTICE: wal_debug is 0\n> NOTICE: commit_delay is 0\n> NOTICE: commit_siblings is 5\n> NOTICE: effective_cache_size is 79350\n> NOTICE: random_page_cost is 2\n> NOTICE: cpu_tuple_cost is 0.01\n> NOTICE: cpu_index_tuple_cost is 0.001\n> NOTICE: cpu_operator_cost is 0.0025\n> NOTICE: geqo_selection_bias is 2\n> NOTICE: default_transaction_isolation is read committed\n> NOTICE: dynamic_library_path is $libdir\n> NOTICE: krb_server_keyfile is FILE:/etc/pgsql/krb5.keytab\n> NOTICE: syslog_facility is LOCAL0\n> NOTICE: syslog_ident is postgres\n> NOTICE: unix_socket_group is unset\n> NOTICE: unix_socket_directory is unset\n> NOTICE: virtual_host is unset\n> NOTICE: wal_sync_method is fdatasync\n> NOTICE: DateStyle is ISO with US (NonEuropean) conventions\n> NOTICE: Time zone is unset\n> NOTICE: TRANSACTION ISOLATION LEVEL is READ COMMITTED\n> NOTICE: Current client encoding is 'SQL_ASCII'\n> NOTICE: Current server encoding is 'SQL_ASCII'\n> NOTICE: Seed for random number generator is unavailable\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n>\n\n", "msg_date": "Fri, 24 Jun 2005 14:18:24 -0700", "msg_from": "Todd Landfried <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Needed: Simplified guide to optimal memory configuration" } ]
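For readers looking for the concrete numbers this thread circles around, here is an illustrative postgresql.conf sketch for a dedicated box with about 512 MB of RAM on the 7.x releases most of the installed base here runs (8.0 renames sort_mem and vacuum_mem to work_mem and maintenance_work_mem). These are starting points to scale roughly with memory, not benchmarked recommendations; the annotated guides linked above remain the real reference.

# shared_buffers is counted in 8 kB buffers; around 10% of RAM is a sane start
# (changing it needs a restart and a large enough kernel SHMMAX)
shared_buffers = 6000            # about 48 MB
sort_mem = 4096                  # kB per sort/hash; keep modest with many connections
vacuum_mem = 32768               # kB available to VACUUM
effective_cache_size = 32000     # 8 kB pages the OS is expected to cache (~256 MB)
max_fsm_pages = 50000            # raise if VACUUM VERBOSE reports more dead pages

For a 256 MB machine roughly halve these, for 1-2 GB scale them up, and in every case keep the regular vacuuming discussed above.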
[ { "msg_contents": "Hi All,\n\nI have an app that updates a PostgreSQL db in a batch fashion. After\neach batch (or several batches), it issues VACUUM and ANALYZE calls on\nthe updated tables. Now I want to cluster some tables for better\nperformance. I understand that doing a VACUUM and a CLUSTER on a table\nis wasteful as the CLUSTER makes the VACUUM superfluous. The app does\nnot have a built-in list of the tables and whether each is clustered or\nnot. It looks to me as if the only way to determine whether to issue a\nVACUUM (on a non-clustered table) or a CLUSTER (on a clustered table) is\nto query the table \"pg_index\", much like view \"pg_indexes\" does, for the\ncolumn \"indisclustered\". Is this right?\n\nAlso, how expensive is CLUSTER compared to VACUUM? Does CLUSTER read in\nthe whole table, sort it, and write it back out? Or write out a\ncompletely new file? Is the time for a CLUSTER the same whether one row\nis out of place or the table is completely disordered?\n\nThanks,\nKen\n\n\n", "msg_date": "Wed, 15 Jun 2005 11:34:18 -0400", "msg_from": "\"Ken Shaw\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to determine whether to VACUUM or CLUSTER" }, { "msg_contents": "On Wed, Jun 15, 2005 at 11:34:18AM -0400, Ken Shaw wrote:\n> Hi All,\n> \n> I have an app that updates a PostgreSQL db in a batch fashion. After\n> each batch (or several batches), it issues VACUUM and ANALYZE calls on\n> the updated tables. Now I want to cluster some tables for better\n> performance. I understand that doing a VACUUM and a CLUSTER on a table\n> is wasteful as the CLUSTER makes the VACUUM superfluous. The app does\n> not have a built-in list of the tables and whether each is clustered or\n> not. It looks to me as if the only way to determine whether to issue a\n> VACUUM (on a non-clustered table) or a CLUSTER (on a clustered table) is\n> to query the table \"pg_index\", much like view \"pg_indexes\" does, for the\n> column \"indisclustered\". Is this right?\n\nI don't think that's what you want. 'indisclustered' only indicates if\nthe last time the table was clustered was on that index. The best thing\nthat comes to mind is looking at the correlation of the first field in\nthe index for the table. You'll find this info in pg_stats.\n\n> Also, how expensive is CLUSTER compared to VACUUM? Does CLUSTER read in\n> the whole table, sort it, and write it back out? Or write out a\n> completely new file? Is the time for a CLUSTER the same whether one row\n> is out of place or the table is completely disordered?\n\nAFAIK, cluster completely re-creates the table from scratch, then\nrebuilds all the indexes. It's basically the most expensive operation\nyou can perform on a table. There probably will be some increased\nperformance from the sort if the table is already mostly in the right\norder though.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 20 Jun 2005 01:53:49 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine whether to VACUUM or CLUSTER" } ]
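Jim's two checks can be turned into catalog queries directly; a sketch (the table name is a placeholder) that an application could run instead of hard-coding its own list of clustered tables:

-- which tables were last CLUSTERed, and on which index
select c.relname as table_name,
       i.relname as clustered_index
  from pg_index x
  join pg_class c on c.oid = x.indrelid
  join pg_class i on i.oid = x.indexrelid
 where x.indisclustered;

-- how well the physical row order still tracks a column's logical order
-- (1.0 or -1.0 means perfectly ordered, values near 0 mean scattered)
select tablename, attname, correlation
  from pg_stats
 where tablename = 'mytable';

A high absolute correlation on the leading column of the clustered index suggests plain VACUUM/ANALYZE is enough for now; as it drifts toward zero the table becomes a candidate for re-CLUSTERing.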
[ { "msg_contents": "Hi All,\n\nI have an app that updates a PostgreSQL db in a batch fashion. After each batch (or several batches), it issues VACUUM and ANALYZE calls on the updated tables. Now I want to cluster some tables for better performance. I understand that doing a VACUUM and a CLUSTER on a table is wasteful as the CLUSTER makes the VACUUM superfluous. The app does not have a built-in list of the tables and whether each is clustered or not. It looks to me as if the only way to determine whether to issue a VACUUM (on a non-clustered table) or a CLUSTER (on a clustered table) is to query the table \"pg_index\", much like view \"pg_indexes\" does, for the column \"indisclustered\". Is this right?\n\nAlso, how expensive is CLUSTER compared to VACUUM? Does CLUSTER read in the whole table, sort it, and write it back out? Or write out a completely new file? Is the time for a CLUSTER the same whether one row is out of place or the table is completely disordered?\n\nThanks,\n\nKen\n\n\n\t\t\n---------------------------------\nDiscover Yahoo!\n Find restaurants, movies, travel & more fun for the weekend. Check it out!\n\nHi All,\nI have an app that updates a PostgreSQL db in a batch fashion. After each batch (or several batches), it issues VACUUM and ANALYZE calls on the updated tables. Now I want to cluster some tables for better performance. I understand that doing a VACUUM and a CLUSTER on a table is wasteful as the CLUSTER makes the VACUUM superfluous. The app does not have a built-in list of the tables and whether each is clustered or not. It looks to me as if the only way to determine whether to issue a VACUUM (on a non-clustered table) or a CLUSTER (on a clustered table) is to query the table \"pg_index\", much like view \"pg_indexes\" does, for the column \"indisclustered\". Is this right?\nAlso, how expensive is CLUSTER compared to VACUUM? Does CLUSTER read in the whole table, sort it, and write it back out? Or write out a completely new file? Is the time for a CLUSTER the same whether one row is out of place or the table is completely disordered?\nThanks,\nKen\nDiscover Yahoo! \nFind restaurants, movies, travel & more fun for the weekend. Check it out!", "msg_date": "Thu, 16 Jun 2005 11:04:41 -0700 (PDT)", "msg_from": "ken shaw <[email protected]>", "msg_from_op": true, "msg_subject": "How to determine whether to VACUUM or CLUSTER" }, { "msg_contents": "ken shaw <[email protected]> writes:\n> It looks to me as if the only way to determine whether to issue a\n> VACUUM (on a non-clustered table) or a CLUSTER (on a clustered table)\n> is to query the table \"pg_index\", much like view \"pg_indexes\" does,\n> for the column \"indisclustered\". Is this right?\n\nindisclustered is certainly the ground truth here, and [ ... digs around\nin the source code ... ] it doesn't look like there are any views that\npresent the information in a different fashion. So yup, that's what\nyou gotta do.\n\n> Also, how expensive is CLUSTER compared to VACUUM?\n\nWell, it's definitely expensive compared to plain VACUUM, but compared\nto VACUUM FULL the case is not clear-cut. 
I would say that if you had\na seriously bloated table (where VACUUM FULL would have to move all or\nmost of the live tuples in order to compact the table completely) then\nCLUSTER will be faster --- not to mention any possible future benefits\nfrom having the table more or less in order with respect to the index.\n\nAs near as I can tell, VACUUM FULL was designed to work nicely when you\nhad maybe 10%-25% free space in the table and you want it all compacted\nout. In a scenario where it has to move all the tuples it is certainly\nnot faster than CLUSTER; plus the end result is much worse as far as the\nstate of the indexes goes, because VACUUM FULL does *nothing* for\ncompacting indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jun 2005 23:47:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine whether to VACUUM or CLUSTER " } ]
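As a concrete sketch of the maintenance cycle Tom describes (table and index names are placeholders, syntax as of 7.3 through 8.0):

-- first time: choose the ordering index; this takes an exclusive lock and
-- rewrites both the heap and all the indexes
cluster mytable_pkey on mytable;

-- later runs can simply reuse the index recorded in pg_index.indisclustered
cluster mytable;

-- CLUSTER does not refresh planner statistics, so follow up with
analyze mytable;

A batch job would substitute this for the VACUUM step on tables it finds flagged as clustered, exactly as discussed earlier in the thread.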
[ { "msg_contents": "Hey,\n\nHow does Postgres (8.0.x) buffer changes to a database within a \ntransaction? I need to insert/update more than a thousand rows (mayde \neven more than 10000 rows, ~100 bytes/row) in a table but the changes \nmust not be visible to other users/transactions before every row is \nupdated. One way of doing this that I thought of was start a \ntransaction, delete everything and then just dump new data in (copy \nperhaps). The old data would be usable to other transactions until I \ncommit my insert. This would be the fastest way, but how much memory \nwould this use? Will this cause performance issues on a heavily loaded \nserver with too little memory even to begin with :)\n\n\n-veikko\n", "msg_date": "Thu, 16 Jun 2005 22:28:30 +0300", "msg_from": "=?ISO-8859-1?Q?Veikko_M=E4kinen?= <[email protected]>", "msg_from_op": true, "msg_subject": "How does the transaction buffer work?" }, { "msg_contents": "Veikko Mäkinen wrote:\n\n> Hey,\n>\n> How does Postgres (8.0.x) buffer changes to a database within a \n> transaction? I need to insert/update more than a thousand rows (mayde \n> even more than 10000 rows, ~100 bytes/row) in a table but the changes \n> must not be visible to other users/transactions before every row is \n> updated. One way of doing this that I thought of was start a \n> transaction, delete everything and then just dump new data in (copy \n> perhaps). The old data would be usable to other transactions until I \n> commit my insert. This would be the fastest way, but how much memory \n> would this use? Will this cause performance issues on a heavily loaded \n> server with too little memory even to begin with :)\n>\nPostgres does indeed keep track of who can see what. Such that changes \nwon't be seen until a final commit.\nIf you are trying to insert bulk data, definitely consider COPY.\n\nBut UPDATE should also be invisible until the commit. So if you are only \nchanging data, there really isn't any reason to do a DELETE and INSERT. \nEspecially since you'll have problems with foreign keys at the DELETE stage.\n\nJohn\n=:->\n\n>\n> -veikko", "msg_date": "Thu, 16 Jun 2005 14:36:07 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the transaction buffer work?" }, { "msg_contents": "\n>> transaction, delete everything and then just dump new data in (copy\n>> perhaps). The old data would be usable to other transactions until I\n>> commit my insert. This would be the fastest way, but how much memory\n>> would this use? Will this cause performance issues on a heavily loaded\n>> server with too little memory even to begin with :)\n\n\tWell.\n\n\tIf you DELETE everything in your table and then COPY in new rows, it will \nbe fast, old rows will still be visible until the COMMIT. I hope you \nhaven't anything referencing this table with ON DELETE CASCADE on it, or \nelse you might delete more stuff than you think.\n\n\tAlso you have to consider locking.\n\n\tYou could TRUNCATE the table instead of deleting, but then anyone trying \nto SELECT from it will block until the updater transaction is finished.\n\n\tIf you DELETE you could also vacuum afterwards.\n\n\tYou could also COPY your rows to a temporary table and use a Joined \nUpdate to update your table in place. 
This might well be the more elegant \nsolution, and the only one if the updated table has foreign key references \npointing to it.\n\n", "msg_date": "Thu, 16 Jun 2005 22:12:59 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the transaction buffer work?" }, { "msg_contents": "Veikko,\n\n> One way of doing this that I thought of was start a\n> transaction, delete everything and then just dump new data in (copy\n> perhaps). The old data would be usable to other transactions until I\n> commit my insert. This would be the fastest way, but how much memory\n> would this use?\n\nStarting a transaction doesn't use any more memory than without one. \nUnlike Some Other Databases, PostgreSQL's transactions occur in WAL and on \ndata pages, not in RAM.\n\n> Will this cause performance issues on a heavily loaded \n> server with too little memory even to begin with :)\n\nQuite possibly, but the visibility issue won't be the problem.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 16 Jun 2005 14:53:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the transaction buffer work?" }, { "msg_contents": "=?ISO-8859-1?Q?Veikko_M=E4kinen?= <[email protected]> writes:\n> How does Postgres (8.0.x) buffer changes to a database within a \n> transaction? I need to insert/update more than a thousand rows (mayde \n> even more than 10000 rows, ~100 bytes/row) in a table but the changes \n> must not be visible to other users/transactions before every row is \n> updated.\n\nThere were some other responses already, but I wanted to add this:\nthere isn't any \"transaction buffer\" in Postgres. The above scenario\nwon't cause us any noticeable problem, because what we do is mark\neach row with its originating transaction ID, and then readers compare\nthat to the set of transaction IDs that they think are \"in the past\".\nThe number of rows affected by a transaction is not really a factor\nat all.\n\nNow of course this isn't Nirvana, you must pay somewhere ;-) and our\nweak spot is the need for VACUUM. But you have no need to fear large\nindividual transactions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jun 2005 00:08:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the transaction buffer work? " }, { "msg_contents": "> Now of course this isn't Nirvana, you must pay somewhere ;-) and our\n> weak spot is the need for VACUUM. But you have no need to fear large\n> individual transactions.\n\nNo need to fear long running transactions other than their ability to\nstop VACUUM from doing what it's supposed to be doing, thus possibly\nimpacting performance.\n-- \n\n", "msg_date": "Fri, 17 Jun 2005 01:05:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How does the transaction buffer work?" } ]
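To make PFC's staging-table variant concrete, here is a sketch of loading a batch so that other sessions only ever see either the old rows or the complete new set. The target and staging table definitions, column names and the COPY source are invented for illustration.

begin;

-- stage the incoming batch where nobody else can see it
create temp table staging (id integer, payload text) on commit drop;
copy staging from '/tmp/new_rows.copy';   -- placeholder path; \copy in psql
                                          -- avoids server-side COPY's superuser need

-- joined update for rows that already exist in the target
update target
   set payload = s.payload
  from staging s
 where target.id = s.id;

-- and insert the genuinely new ones
insert into target (id, payload)
select s.id, s.payload
  from staging s
 where not exists (select 1 from target where target.id = s.id);

commit;

Until the COMMIT other transactions keep reading the old versions, as Tom explains above; the only deferred cost is the dead-row cleanup that VACUUM performs later.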
[ { "msg_contents": "I have 6 Windows PC in a test environment accessing a very small Postgres DB\non a 2003 Server. The PC's access the database with a cobol app via ODBC.\n3 of the PC's operate very efficiently and quickly. 3 of them do not. The\n3 that do not are all new Dell XP Pro with SP2. They all produce the error\nin the log file as below:\n \n2005-06-16 16:17:30 LOG: could not send data to client: No connection could\nbe made because the target machine actively refused it.\n \n2005-06-16 16:17:30 LOG: could not receive data from client: No connection\ncould be made because the target machine actively refused it.\n \n2005-06-16 16:17:30 LOG: unexpected EOF on client connection\n \nThanks,\n \nJustin Davis\nRapid Systems, Inc.\n800.356.8952\n \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.323 / Virus Database: 267.7.3/15 - Release Date: 6/14/2005\n \n\n\n\n\n\nI have 6 Windows PC \nin a test environment accessing a very small Postgres DB on a 2003 Server.  \nThe PC's access the database with a cobol app via ODBC.  3 of the PC's \noperate very efficiently and quickly.  3 of them do not.  The 3 that \ndo not are all new Dell XP Pro with SP2.  They all produce the error in the \nlog file as below:\n \n2005-06-16 16:17:30 \nLOG:  could not send data to client: No connection could be made because \nthe target machine actively refused it.\n \n2005-06-16 16:17:30 \nLOG:  could not receive data from client: No connection could be made \nbecause the target machine actively refused it.\n \n2005-06-16 16:17:30 LOG:  unexpected EOF on client \nconnection\n \nThanks,\n \nJustin Davis\nRapid Systems, Inc.\n800.356.8952", "msg_date": "Thu, 16 Jun 2005 16:53:44 -0400", "msg_from": "\"Justin Davis\" <[email protected]>", "msg_from_op": true, "msg_subject": "could not send data to client:" } ]
[ { "msg_contents": "Justin wrote:\nI have 6 Windows PC in a test environment accessing a very small Postgres DB on a 2003 Server.  The PC's access the database with a cobol app via ODBC.  3 of the PC's operate very efficiently and quickly.  3 of them do not.  The 3 that do not are all new Dell XP Pro with SP2.  They all produce the error in the log file as below:\n \n2005-06-16 16:17:30 LOG:  could not send data to client: No connection could be made because the target machine actively refused it.\n \n2005-06-16 16:17:30 LOG:  could not receive data from client: No connection could be made because the target machine actively refused it.\n \n2005-06-16 16:17:30 LOG:  unexpected EOF on client connection\n\n[...]\n\nHave you tried other ODBC app (excel, etc) to connect to the database from the machines?\n\nIf so and it works,\n1. what version odbc driver\n2. what cobol compiler \n3. what technology to map cobol i/o to sql (Acu4GL for example)\n\nThis is probably more appropriate on pgsql-odbc and plain text is preferred for these mailing lists.\n\nMerlin\n", "msg_date": "Fri, 17 Jun 2005 08:49:15 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: could not send data to client:" } ]
[ { "msg_contents": "Hi,\n\nWe are looking to build a new machine for a big PG database.\nWe were wondering if a machine with 5 scsi-disks would perform better \nif we use a hardware raid 5 controller or if we would go for the \nclustering in PG.\nIf we cluster in PG, do we have redundancy on the data like in a RAID 5 \n?\n\nFirst concern is performance, not redundancy (we can do that a \ndifferent way because all data comes from upload files)\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 17 Jun 2005 21:34:19 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "On Jun 17, 2005, at 3:34 PM, Yves Vindevogel wrote:\n\n> We are looking to build a new machine for a big PG database.\n> We were wondering if a machine with 5 scsi-disks would perform \n> better if we use a hardware raid 5 controller or if we would go for \n> the clustering in PG.\n> If we cluster in PG, do we have redundancy on the data like in a \n> RAID 5 ?\n>\n\nI'd recommend 4 disks in a hardware RAID10 plus a hot spare, or use \nthe 5th disk as boot + OS if you're feeling lucky.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n\nOn Jun 17, 2005, at 3:34 PM, Yves Vindevogel wrote:We are looking to build a new machine for a big PG database. We were wondering if a machine with 5 scsi-disks would perform better if we use a hardware raid 5 controller or if we would go for the clustering in PG. If we cluster in PG, do we have redundancy on the data like in a RAID 5 ? I'd recommend 4 disks in a hardware RAID10 plus a hot spare, or use the 5th disk as boot + OS if you're feeling lucky. Vivek Khera, Ph.D. +1-301-869-4449 x806", "msg_date": "Fri, 17 Jun 2005 16:21:51 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "If you truly do not care about data protection -- either from drive loss or from\nsudden power failure, or anything else -- and just want to get the fastest\npossible performance, then do RAID 0 (striping). It may be faster to do that\nwith software RAID on the host than with a special RAID controller. And turn\noff fsyncing the write ahead log in postgresql.conf (fsync = false).\n\nBut be prepared to replace your whole database from scratch (or backup or\nwhatever) if you lose a single hard drive. And if you have a sudden power loss\nor other type of unclean system shutdown (kernel panic or something) then your\ndata integrity will be at risk as well.\n\nTo squeeze evena little bit more performance, put your operating system, swap\nand PostgreSQL binaries on a cheap IDE or SATA drive--and only your data on the\n5 striped SCSI drives.\n\nI do not know what clustering would do for you. 
But striping will provide a\nhigh level of assurance that each of your hard drives will process equivalent\namounts of IO operations.\n\nQuoting Yves Vindevogel <[email protected]>:\n\n> Hi,\n> \n> We are looking to build a new machine for a big PG database.\n> We were wondering if a machine with 5 scsi-disks would perform better \n> if we use a hardware raid 5 controller or if we would go for the \n> clustering in PG.\n> If we cluster in PG, do we have redundancy on the data like in a RAID 5 \n> ?\n> \n> First concern is performance, not redundancy (we can do that a \n> different way because all data comes from upload files)\n> \n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n> \n> Yves Vindevogel\n> Implements\n> \n> \n\n\n", "msg_date": "Fri, 17 Jun 2005 13:38:41 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "\n\n> I do not know what clustering would do for you. But striping will \n> provide a\n> high level of assurance that each of your hard drives will process \n> equivalent\n> amounts of IO operations.\n\n\tI don't know what I'm talking about, but wouldn't mirorring be faster \nthan striping for random reads like you often get on a database ? (ie. the \nreads can be dispatched to any disk) ? (or course, not for writes, but if \nyou won't use fsync, random writes should be reduced no ?)\n\n\t\n", "msg_date": "Sat, 18 Jun 2005 18:00:14 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "Hi,\n\nAt 18:00 18/06/2005, PFC wrote:\n> I don't know what I'm talking about, but wouldn't mirorring be \n> faster\n>than striping for random reads like you often get on a database ? (ie. the\n>reads can be dispatched to any disk) ? (or course, not for writes, but if\n>you won't use fsync, random writes should be reduced no ?)\n\nRoughly, for random reads, the performance (in terms of operations/s) \ncompared to a single disk setup, with N being the number of drives, is:\n\nRAID 0 (striping):\n- read = N\n- write = N\n- capacity = N\n- redundancy = 0\n\nRAID 1 (mirroring, N=2):\n- read = N\n- write = 1\n- capacity = 1\n- redundancy = 1\n\nRAID 5 (striping + parity, N>=3)\n- read = N-1\n- write = 1/2\n- capacity = N-1\n- redundancy = 1\n\nRAID 10 (mirroring + striping, N=2n, N>=4)\n- read = N\n- write = N/2\n- capacity = N/2\n- redundancy < N/2\n\nSo depending on your app, i.e. your read/write ratio, how much data can be \ncached, whether the data is important or not, how much data you have, etc, \none or the other option might be better.\n\nJacques.\n\n\n", "msg_date": "Sat, 18 Jun 2005 18:24:21 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "Of course these numbers are not true as soon as you exceed the stripe size \nfor a read operation, which is often only 128k. Typically a stripe of \nmirrors will not read from seperate halves of the mirrors either, so RAID 10 \nis only N/2 best case in my experience, Raid 0+1 is a mirror of stripes and \nwill read from independant halves, but gives worse redundancy.\n\nAlex Turner\nNetEconomist\n\nOn 6/18/05, Jacques Caron <[email protected]> wrote:\n> \n> Hi,\n> \n> At 18:00 18/06/2005, PFC wrote:\n> > I don't know what I'm talking about, but wouldn't mirorring be\n> > faster\n> >than striping for random reads like you often get on a database ? (ie. 
\n> the\n> >reads can be dispatched to any disk) ? (or course, not for writes, but if\n> >you won't use fsync, random writes should be reduced no ?)\n> \n> Roughly, for random reads, the performance (in terms of operations/s)\n> compared to a single disk setup, with N being the number of drives, is:\n> \n> RAID 0 (striping):\n> - read = N\n> - write = N\n> - capacity = N\n> - redundancy = 0\n> \n> RAID 1 (mirroring, N=2):\n> - read = N\n> - write = 1\n> - capacity = 1\n> - redundancy = 1\n> \n> RAID 5 (striping + parity, N>=3)\n> - read = N-1\n> - write = 1/2\n> - capacity = N-1\n> - redundancy = 1\n> \n> RAID 10 (mirroring + striping, N=2n, N>=4)\n> - read = N\n> - write = N/2\n> - capacity = N/2\n> - redundancy < N/2\n> \n> So depending on your app, i.e. your read/write ratio, how much data can be\n> cached, whether the data is important or not, how much data you have, etc,\n> one or the other option might be better.\n> \n> Jacques.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\nOf course these numbers are not true as soon as you exceed the stripe\nsize for a read operation, which is often only 128k.  Typically a\nstripe of mirrors will not read from seperate halves of the mirrors\neither, so RAID 10 is only N/2 best case in my experience, Raid 0+1 is\na mirror of stripes and will read from independant halves, but gives\nworse redundancy.\n\nAlex Turner\nNetEconomistOn 6/18/05, Jacques Caron <[email protected]> wrote:\nHi,At 18:00 18/06/2005, PFC wrote:>         I don't know what I'm talking about, but wouldn't mirorring be> faster>than striping for random reads like you often get on a database ? (ie. the\n>reads can be dispatched to any disk) ? (or course, not for writes, but if>you won't use fsync, random writes should be reduced no ?)Roughly, for random reads, the performance (in terms of operations/s)\ncompared to a single disk setup, with N being the number of drives, is:RAID 0 (striping):- read = N- write = N- capacity = N- redundancy = 0RAID 1 (mirroring, N=2):- read = N- write = 1\n- capacity = 1- redundancy = 1RAID 5 (striping + parity, N>=3)- read = N-1- write = 1/2- capacity = N-1- redundancy = 1RAID 10 (mirroring + striping, N=2n, N>=4)- read = N\n- write = N/2- capacity = N/2- redundancy < N/2So depending on your app, i.e. your read/write ratio, how much data can becached, whether the data is important or not, how much data you have, etc,\none or the other option might be better.Jacques.---------------------------(end of broadcast)---------------------------TIP 9: the planner will ignore your desire to choose an index scan if your\n      joining column's datatypes do not match", "msg_date": "Sun, 19 Jun 2005 00:58:47 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple disks: RAID 5 or PG Cluster" } ]
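Plugging the original five-disk question into Jacques's rough per-operation figures (random I/O, relative to a single disk, and bearing in mind Alex's caveat that striped mirrors often read closer to N/2 in practice):

RAID 0, 5 disks: read ~5, write ~5, capacity 5 disks, no redundancy
RAID 5, 5 disks: read ~4, write ~1/2, capacity 4 disks, survives one failure
RAID 10, 4 disks (5th for OS or hot spare): read ~4 (often nearer 2), write ~2, capacity 2 disks, survives one failure per mirror pair

Which of these wins then depends on how the 500.000 inserts a day weigh against the reporting reads, which is what the rest of the thread argues about.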
[ { "msg_contents": "Ok, I will hate that day, but it's only 6 months\n\nBegin forwarded message:\n\n> From: Vivek Khera <[email protected]>\n> Date: Fri 17 Jun 2005 23:26:43 CEST\n> To: Yves Vindevogel <[email protected]>\n> Subject: Re: [PERFORM] Multiple disks: RAID 5 or PG Cluster\n>\n>\n> On Jun 17, 2005, at 5:24 PM, Yves Vindevogel wrote:\n>\n>> That means that only 2 / 5 of my discs are actual storage. That's a \n>> bit low, imho.\n>>\n>> Maybe I can ask my question again:\n>> Would I go for RAID 5, RAID 0 or PG clustering\n>>\n>> On 17 Jun 2005, at 22:21, Vivek Khera wrote:\n> If you're allergic to RAID10, then do RAID5.  but you'll sacrifice \n> performance.  You'll hate life the day you blow a disk and have to \n> rebuild everything, even if it is all easily restored.\n>\n>\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 17 Jun 2005 23:30:27 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Multiple disks: RAID 5 or PG Cluster" } ]
[ { "msg_contents": "BTW, tnx for the opinion ...\nI forgot to cc list ...\n\n\nBegin forwarded message:\n\n> From: Yves Vindevogel <[email protected]>\n> Date: Fri 17 Jun 2005 23:29:32 CEST\n> To: [email protected]\n> Subject: Re: [PERFORM] Multiple disks: RAID 5 or PG Cluster\n>\n> Ok, striping is a good option ...\n>\n> I'll tell you why I don't care about dataloss\n>\n> 1) The database will run 6 months, no more.\n> 2) The database is fed with upload files. So, if I have a backup each \n> day, plus my files of that day, I can restore pretty quickly.\n> 3) Power failure is out of the question: battery backup (UPS), disk \n> failure is minimal change: new server, new discs, 6 months ...\n>\n> We do have about 500.000 new records each day in that database, so \n> that's why I want performance\n> Records are uploaded in one major table and then denormalised into \n> several others.\n>\n> But, I would like to hear somebody about the clustering method. Isn't \n> that much used ?\n> Or isn't it used in a single machine ?\n>\n> On 17 Jun 2005, at 22:38, [email protected] wrote:\n>\n>> If you truly do not care about data protection -- either from drive \n>> loss or from\n>> sudden power failure, or anything else -- and just want to get the \n>> fastest\n>> possible performance, then do RAID 0 (striping). It may be faster to \n>> do that\n>> with software RAID on the host than with a special RAID controller. \n>> And turn\n>> off fsyncing the write ahead log in postgresql.conf (fsync = false).\n>>\n>> But be prepared to replace your whole database from scratch (or \n>> backup or\n>> whatever) if you lose a single hard drive. And if you have a sudden \n>> power loss\n>> or other type of unclean system shutdown (kernel panic or something) \n>> then your\n>> data integrity will be at risk as well.\n>>\n>> To squeeze evena little bit more performance, put your operating \n>> system, swap\n>> and PostgreSQL binaries on a cheap IDE or SATA drive--and only your \n>> data on the\n>> 5 striped SCSI drives.\n>>\n>> I do not know what clustering would do for you. But striping will \n>> provide a\n>> high level of assurance that each of your hard drives will process \n>> equivalent\n>> amounts of IO operations.\n>>\n>> Quoting Yves Vindevogel <[email protected]>:\n>>\n>>> Hi,\n>>>\n>>> We are looking to build a new machine for a big PG database.\n>>> We were wondering if a machine with 5 scsi-disks would perform better\n>>> if we use a hardware raid 5 controller or if we would go for the\n>>> clustering in PG.\n>>> If we cluster in PG, do we have redundancy on the data like in a \n>>> RAID 5\n>>> ?\n>>>\n>>> First concern is performance, not redundancy (we can do that a\n>>> different way because all data comes from upload files)\n>>>\n>>> Met vriendelijke groeten,\n>>> Bien à vous,\n>>> Kind regards,\n>>>\n>>> Yves Vindevogel\n>>> Implements\n>>>\n>>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 8: explain analyze is your friend\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. 
\n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 17 Jun 2005 23:31:00 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Multiple disks: RAID 5 or PG Cluster" } ]
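A minimal postgresql.conf sketch of the speed-over-safety setup described in this thread (RAID 0 plus no WAL fsync). The specific values are assumptions for illustration only; with fsync off, an unclean shutdown can leave the cluster unrecoverable, which is exactly the trade-off the poster accepts.

# postgresql.conf -- throughput over durability, per the thread above.
# Values are illustrative assumptions, not recommendations.
fsync = false               # no WAL flushes; crash or power loss risks the cluster
wal_buffers = 64            # larger WAL buffer for heavy bulk inserts
checkpoint_segments = 32    # spread checkpoints out during bulk loads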
[ { "msg_contents": "cc ...\n\nBegin forwarded message:\n\n> From: Yves Vindevogel <[email protected]>\n> Date: Sat 18 Jun 2005 18:18:53 CEST\n> To: PFC <[email protected]>\n> Subject: Re: [PERFORM] Multiple disks: RAID 5 or PG Cluster\n>\n> There's a basic difference between striping (raid 0) and mirroring \n> (raid 1)\n>\n> With striping, each file is distributed over several disks, making the \n> physical write faster because several disks can do the work. Same for \n> reading, multiple disks return a part of the file.\n> Striping can not be used for safety/backup, if one disk fails, your \n> file is lost (if it is partly on that failing disk). With mirroring \n> you do not lose any disk space.\n>\n> Mirroring is a technique for avoiding disasters when you have a disk \n> failure. Every file is written twice, each time to a different disk, \n> which is a mirror of the first one.\n> You effectively lose half of your diskspace to that mirror. But when \n> a disk fails, you don't lose anything, since you can rely on the other \n> mirrored disk.\n>\n> Raid 10, which is the combination of that, has both. You have \n> multiple disks that form your first part of the raid and you have an \n> equal amount of disks for the mirror.\n> On each part of the mirror, striping is used to spread the files like \n> in a raid 0. This is a very costly operation. You need a minimum of \n> 4 disks, and you lose 50% of your capacity.\n>\n> BTW: mirroring is always slower than striping.\n>\n> On 18 Jun 2005, at 18:00, PFC wrote:\n>\n>>\n>>\n>>> I do not know what clustering would do for you. But striping will \n>>> provide a\n>>> high level of assurance that each of your hard drives will process \n>>> equivalent\n>>> amounts of IO operations.\n>>\n>> \tI don't know what I'm talking about, but wouldn't mirorring be \n>> faster than striping for random reads like you often get on a \n>> database ? (ie. the reads can be dispatched to any disk) ? (or \n>> course, not for writes, but if you won't use fsync, random writes \n>> should be reduced no ?)\n>>\n>> \t\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Sat, 18 Jun 2005 18:42:27 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "On Sat, Jun 18, 2005 at 06:42:27PM +0200, Yves Vindevogel wrote:\n>>With striping, each file is distributed over several disks, making \n>>the physical write faster because several disks can do the work. \n>>Same for reading, multiple disks return a part of the file. 
\n\nA mirror behaves almost exactly the same for reads, with a caveat: for a\nlarge enough file, multiple disks *must* be accessed in a striped\nconfiguration, while in a mirrored configuration the controller may\naccess either one or more disks to read any file.\n\n>>BTW: mirroring is always slower than striping. \n\nThat's simply not true. Striping speeds up writes but has no advantage\nover a simlarly sized mirror for reading. In fact, the mirror will be\nfaster for pathological cases in which the reads are aligned in such a\nway that they would all be have to be read from the same stripe of a\nstriped array. The striped configuration has an advantage when more than\ntwo disks are used, but that derives from the number of spindles, not\nfrom the striping; it is possible to have a mirror of more than two\ndisks (which would have the same read advantage as the striped\nconfiguration with the same number of disks) but this is rarely seen\nbecause it is expensive.\n\nMike Stone\n", "msg_date": "Sat, 18 Jun 2005 18:57:49 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Multiple disks: RAID 5 or PG Cluster" }, { "msg_contents": "\nMichael Stone <[email protected]> writes:\n\n> it is possible to have a mirror of more than two disks (which would have the\n> same read advantage as the striped configuration with the same number of\n> disks) but this is rarely seen because it is expensive.\n\nActually three-way mirrors are quite common for backup purposes. To take a\nbackup you break the mirror by taking one of the three copies out. Back it up\nat your leisure, then just resync it in time for the next backup.\n\n-- \ngreg\n\n", "msg_date": "19 Jun 2005 01:53:50 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Multiple disks: RAID 5 or PG Cluster" } ]
[ { "msg_contents": "Hi, i'm trying to optimise our autovacuum configuration so that it \nvacuums / analyzes some of our larger tables better. It has been set \nto the default settings for quite some time. We never delete \nanything (well not often, and not much) from the tables, so I am not \nso worried about the VACUUM status, but I am wary of XID wraparound \nnuking us at some point if we don't sort vacuuming out so we VACUUM \nat least once every year ;) However not running ANALYZE for such huge \nperiods of time is probably impacting the statistics accuracy \nsomewhat, and I have seen some unusually slow queries at times. \nAnyway, does anyone think we might benefit from a more aggressive \nautovacuum configuration? \n", "msg_date": "Mon, 20 Jun 2005 15:44:08 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "Hi,\n\nAt 16:44 20/06/2005, Alex Stapleton wrote:\n>We never delete\n>anything (well not often, and not much) from the tables, so I am not\n>so worried about the VACUUM status\n\nDELETEs are not the only reason you might need to VACUUM. UPDATEs are \nimportant as well, if not more. Tables that are constantly updated \n(statistics, session data, queues...) really need to be VACUUMed a lot.\n\n>but I am wary of XID wraparound\n>nuking us at some point if we don't sort vacuuming out so we VACUUM\n>at least once every year ;)\n\nThat would give you a maximum average of 31 transactions/sec... Don't know \nif that's high or low for you.\n\n> However not running ANALYZE for such huge\n>periods of time is probably impacting the statistics accuracy\n>somewhat, and I have seen some unusually slow queries at times.\n>Anyway, does anyone think we might benefit from a more aggressive\n>autovacuum configuration?\n\nANALYZE is not a very expensive operation, however VACUUM can definitely be \na big strain and take a looooong time on big tables, depending on your \nsetup. I've found that partitioning tables (at the application level) can \nbe quite helpful if you manage to keep each partition to a reasonable size \n(under or close to available memory), especially if the partitioning scheme \nis somehow time-related. YMMV.\n\nJacques.\n\n\n", "msg_date": "Mon, 20 Jun 2005 16:59:29 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row" }, { "msg_contents": "\nOn 20 Jun 2005, at 15:59, Jacques Caron wrote:\n\n> Hi,\n>\n> At 16:44 20/06/2005, Alex Stapleton wrote:\n>\n>> We never delete\n>> anything (well not often, and not much) from the tables, so I am not\n>> so worried about the VACUUM status\n>>\n>\n> DELETEs are not the only reason you might need to VACUUM. UPDATEs \n> are important as well, if not more. Tables that are constantly \n> updated (statistics, session data, queues...) really need to be \n> VACUUMed a lot.\n\nWe UPDATE it even less often.\n\n>\n>> but I am wary of XID wraparound\n>> nuking us at some point if we don't sort vacuuming out so we VACUUM\n>> at least once every year ;)\n>>\n>\n> That would give you a maximum average of 31 transactions/sec... \n> Don't know if that's high or low for you.\n\nIt's high as far as inserts go for us. 
It does them all at the end of \neach minute.\n\n>\n>> However not running ANALYZE for such huge\n>> periods of time is probably impacting the statistics accuracy\n>> somewhat, and I have seen some unusually slow queries at times.\n>> Anyway, does anyone think we might benefit from a more aggressive\n>> autovacuum configuration?\n>>\n>\n> ANALYZE is not a very expensive operation, however VACUUM can \n> definitely be a big strain and take a looooong time on big tables, \n> depending on your setup. I've found that partitioning tables (at \n> the application level) can be quite helpful if you manage to keep \n> each partition to a reasonable size (under or close to available \n> memory), especially if the partitioning scheme is somehow time- \n> related. YMMV.\n>\n> Jacques.\n\nThat's not currently an option as it would require a pretty large \namount of work to implement. I think we will have to keep that in \nmind though.\n\n", "msg_date": "Mon, 20 Jun 2005 16:05:56 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "Alex Stapleton wrote:\n\n>\n> On 20 Jun 2005, at 15:59, Jacques Caron wrote:\n>\n...\n\n>> ANALYZE is not a very expensive operation, however VACUUM can\n>> definitely be a big strain and take a looooong time on big tables,\n>> depending on your setup. I've found that partitioning tables (at the\n>> application level) can be quite helpful if you manage to keep each\n>> partition to a reasonable size (under or close to available memory),\n>> especially if the partitioning scheme is somehow time- related. YMMV.\n>>\n>> Jacques.\n>\n>\n> That's not currently an option as it would require a pretty large\n> amount of work to implement. I think we will have to keep that in\n> mind though.\n\nRemember, you can fake it with a low-level set of tables, and then wrap\nthem into a UNION ALL view.\nSo you get something like:\n\nCREATE VIEW orig_table AS\n SELECT * FROM table_2005_04\n UNION ALL SELECT * FROM table_2005_05\n UNION ALL SELECT * FROM table_2005_06\n...\n;\n\nThen at least your individual operations are fast. As you insert, you\ncan create a rule that on insert into orig_table do instead ... insert\ninto table_2005_07 (or whatever the current table is).\nIt takes a little bit of maintenance on the DB admin's part, since every\nmonth they have to create a new table, and then update all of the views\nand triggers. But it is pretty straightforward.\nIf you are doing append-only inserting, then you have the nice feature\nthat only the last table is ever modified, which means that the older\ntables don't really need to be vacuumed or analyzed.\nAnd even if you have to have each table modified as you go, you still\ncan break up a VACUUM into only doing one of the sub tables at a time.\n\nI don't know you db schema, but I thought I would mention that true\npartitioning isn't implemented yet, you can still get something very\nsimilar with views, triggers and rules.\n\nJohn\n=:->", "msg_date": "Mon, 20 Jun 2005 10:20:37 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "Alex,\n\n> Hi, i'm trying to optimise our autovacuum configuration so that it\n> vacuums / analyzes some of our larger tables better. It has been set\n> to the default settings for quite some time. 
We never delete\n> anything (well not often, and not much) from the tables, so I am not\n> so worried about the VACUUM status, but I am wary of XID wraparound\n> nuking us at some point if we don't sort vacuuming out so we VACUUM\n> at least once every year ;) \n\nI personally don't use autovaccuum on very large databases. For DW, \nvacuuming is far better tied to ETL operations or a clock schedule of \ndowntime.\n\nXID wraparound may be further away than you think. Try checking \npg_controldata, which will give you the current XID, and you can calculate \nhow long you are away from wraparound. I just tested a 200G data warehouse \nand figured out that we are 800 months away from wraparound, despite hourly \nETL.\n\n> However not running ANALYZE for such huge \n> periods of time is probably impacting the statistics accuracy\n> somewhat, and I have seen some unusually slow queries at times.\n> Anyway, does anyone think we might benefit from a more aggressive\n> autovacuum configuration?\n\nHmmm, good point, you could use autovacuum for ANALYZE only. Just set the \nVACUUM settings preposterously high (like 10x) so it never runs. Then it'll \nrun ANALYZE only. I generally threshold 200, multiple 0.1x for analyze; \nthat is, re-analyze after 200+10% of rows have changed.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 20 Jun 2005 10:46:41 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "\nOn 20 Jun 2005, at 18:46, Josh Berkus wrote:\n\n\n> Alex,\n>\n>\n>\n>> Hi, i'm trying to optimise our autovacuum configuration so that it\n>> vacuums / analyzes some of our larger tables better. It has been set\n>> to the default settings for quite some time. We never delete\n>> anything (well not often, and not much) from the tables, so I am not\n>> so worried about the VACUUM status, but I am wary of XID wraparound\n>> nuking us at some point if we don't sort vacuuming out so we VACUUM\n>> at least once every year ;)\n>>\n>>\n>\n> I personally don't use autovaccuum on very large databases. For DW,\n> vacuuming is far better tied to ETL operations or a clock schedule of\n> downtime.\n>\n\nDowntime is something I'd rather avoid if possible. Do you think we \nwill need to run VACUUM FULL occasionally? I'd rather not lock tables \nup unless I cant avoid it. We can probably squeeze an automated \nvacuum tied to our data inserters every now and then though.\n\n\n> XID wraparound may be further away than you think. Try checking\n> pg_controldata, which will give you the current XID, and you can \n> calculate\n> how long you are away from wraparound. I just tested a 200G data \n> warehouse\n> and figured out that we are 800 months away from wraparound, \n> despite hourly\n> ETL.\n>\n\nIs this an 8.0 thing? I don't have a pg_controldata from what I can \nsee. Thats nice to hear though.\n\n\n>\n>\n>\n>> However not running ANALYZE for such huge\n>> periods of time is probably impacting the statistics accuracy\n>> somewhat, and I have seen some unusually slow queries at times.\n>> Anyway, does anyone think we might benefit from a more aggressive\n>> autovacuum configuration?\n>>\n>>\n>\n> Hmmm, good point, you could use autovacuum for ANALYZE only. Just \n> set the\n> VACUUM settings preposterously high (like 10x) so it never runs. \n> Then it'll\n> run ANALYZE only. 
I generally threshold 200, multiple 0.1x for \n> analyze;\n> that is, re-analyze after 200+10% of rows have changed.\n>\n\nI will try those settings out, that sounds good to me though.\n\n\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n>\n\n\n", "msg_date": "Tue, 21 Jun 2005 11:04:45 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "Alex,\n\n> Downtime is something I'd rather avoid if possible. Do you think we\n> will need to run VACUUM FULL occasionally? I'd rather not lock tables\n> up unless I cant avoid it. We can probably squeeze an automated\n> vacuum tied to our data inserters every now and then though.\n\nAs long as your update/deletes are less than 10% of the table for all time, \nyou should never have to vacuum, pending XID wraparound.\n\n> Is this an 8.0 thing? I don't have a pg_controldata from what I can\n> see. Thats nice to hear though.\n\n'fraid so, yes.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Jun 2005 10:13:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "\nOn 21 Jun 2005, at 18:13, Josh Berkus wrote:\n\n> Alex,\n>\n>\n>> Downtime is something I'd rather avoid if possible. Do you think we\n>> will need to run VACUUM FULL occasionally? I'd rather not lock tables\n>> up unless I cant avoid it. We can probably squeeze an automated\n>> vacuum tied to our data inserters every now and then though.\n>>\n>\n> As long as your update/deletes are less than 10% of the table for \n> all time,\n> you should never have to vacuum, pending XID wraparound.\n>\n\nHmm, I guess as we have hundreds of millions of rows, and when we do \ndelete things, it's only a few thousand, and rarely. VACUUMing \nshouldn't need to happen too often. Thats good. Thanks a lot for the \nadvice.\n\n>> Is this an 8.0 thing? I don't have a pg_controldata from what I can\n>> see. Thats nice to hear though.\n>>\n>\n> 'fraid so, yes.\n\nBloody Debian stable. I might have to experiment with building from \nsource or using alien on debian to convert the rpms. Fun. Oh well.\n\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n>\n\n", "msg_date": "Tue, 21 Jun 2005 23:08:43 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" }, { "msg_contents": "On Tue, Jun 21, 2005 at 11:08:43PM +0100, Alex Stapleton wrote:\n> Bloody Debian stable. I might have to experiment with building from \n> source or using alien on debian to convert the rpms. Fun. Oh well.\n\nOr just pull in postgresql-8.0 from unstable; sid is close enough to sarge\nfor it to work quite well in practice, AFAIK.\n\nYou'll lose the security support, though, but you will with building from\nsource or using alien anyhow :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 22 Jun 2005 00:23:00 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum suggestions for 500,000,000+ row tables?" } ]
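A minimal sketch of the view-plus-rule partitioning John describes above, in case it helps to see it end to end. Table and column names are hypothetical, and the rule has to be recreated each month when a new partition table is added.

-- Hypothetical monthly partitions unified behind a view
CREATE TABLE stats_2005_06 (ts timestamp, value integer);
CREATE TABLE stats_2005_07 (ts timestamp, value integer);

CREATE VIEW stats AS
    SELECT * FROM stats_2005_06
    UNION ALL SELECT * FROM stats_2005_07;

-- Route inserts on the view to the current month's table
CREATE RULE stats_insert AS ON INSERT TO stats
    DO INSTEAD INSERT INTO stats_2005_07 VALUES (NEW.ts, NEW.value);

-- VACUUM / ANALYZE can then be limited to the active partition
VACUUM ANALYZE stats_2005_07;

Older, append-only partitions never need vacuuming again, which is the main point of the exercise.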
[ { "msg_contents": "I've got some queries generated by my application that will, for some \nreason, run forever until I kill the pid. Yet, when I run the \nqueries manually to check them out, they usually work fine. To get \nmore information about these queries, I'm writing a utility to take \nsnapshots of pg_stat_activity every 5 minutes. If it finds a query \nthat runs for longer than 15 minutes, it will trap the query so I can \nrun 'explain analyze' on it and see where the weakness is.\n\nHowever, the problem I have is that pg_stat_activity only returns the \nfirst n (255?) characters of the SQL as \"current_query\", so it gets \nchopped off at the end. I would very much like to find out how I can \nget the *entire* query that is active. Is this possible?\n\nAlso, I'm sure some people will respond with \"turn on query \nlogging\".. I've explored that option and the formatting of the log \nfile and the fact that EVERY query is logged is not what I'm after \nfor this project. The \"infinite-running\" queries are unpredictable \nand may only happen once a week. Logging 24/7 in anticipation of one \nof these occurrences is not something I'd like to do.\n\nThanks,\n\nDan Harris\n", "msg_date": "Mon, 20 Jun 2005 11:55:59 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "investigating slow queries through pg_stat_activity" }, { "msg_contents": "Hi,\n\nAt 19:55 20/06/2005, Dan Harris wrote:\n>Also, I'm sure some people will respond with \"turn on query\n>logging\".. I've explored that option and the formatting of the log\n>file and the fact that EVERY query is logged is not what I'm after\n>for this project.\n\nYou can log just those queries that take \"a little bit too much time\". See \nlog_min_duration_statement in postgresql.conf. Set it really high, and \nyou'll only get those queries you're after.\n\nJacques.\n\n\n", "msg_date": "Mon, 20 Jun 2005 20:45:54 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: investigating slow queries through" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> However, the problem I have is that pg_stat_activity only returns the \n> first n (255?) characters of the SQL as \"current_query\", so it gets \n> chopped off at the end. I would very much like to find out how I can \n> get the *entire* query that is active. Is this possible?\n\nI think the limit is ~1000 characters in 8.0 and later. However, you\ncan't realistically have \"unlimited\" because of constraints of the stats\nmessaging mechanism.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Jun 2005 15:11:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: investigating slow queries through pg_stat_activity " }, { "msg_contents": "On 6/20/05, Dan Harris <[email protected]> wrote:\n> Also, I'm sure some people will respond with \"turn on query\n> logging\".. I've explored that option and the formatting of the log\n> file and the fact that EVERY query is logged is not what I'm after\n> for this project. \n\nYou don't have to log every query. You can set\nlog_min_duration_statement in postgresql.conf to log only the queries\nthat exceed a certain amount of time.\n\n From the manual at\nhttp://www.postgresql.org/docs/8.0/static/runtime-config.html:\n\nlog_min_duration_statement (integer)\n\n Sets a minimum statement execution time (in milliseconds) that\ncauses a statement to be logged. 
All SQL statements that run for the\ntime specified or longer will be logged with their duration. Setting\nthis to zero will print all queries and their durations. Minus-one\n(the default) disables the feature. For example, if you set it to 250\nthen all SQL statements that run 250ms or longer will be logged.\nEnabling this option can be useful in tracking down unoptimized\nqueries in your applications. Only superusers can change this setting.\n\nGeorge Essig\n", "msg_date": "Mon, 20 Jun 2005 15:30:38 -0500", "msg_from": "George Essig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: investigating slow queries through pg_stat_activity" }, { "msg_contents": "* Dan Harris <[email protected]> wrote:\n\nHi,\n\n> I've got some queries generated by my application that will, for some \n> reason, run forever until I kill the pid. Yet, when I run the \n> queries manually to check them out, they usually work fine. \n\nIf you can change your application, you could try to encapsulate the \nqueries into views - this makes logging and tracking down problems \nmuch easier. \n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 00:28:30 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: investigating slow queries through pg_stat_activity" } ]
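A sketch of the polling query discussed above. The column names (procpid, usename, query_start, current_query) are as of 8.0, current_query is only populated when stats_command_string is enabled, and older releases may differ, so treat the exact names as assumptions. Combined with log_min_duration_statement this catches the occasional runaway query without logging everything.

-- Queries that have been running for more than 15 minutes
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>'
   AND query_start < now() - interval '15 minutes'
 ORDER BY query_start;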
[ { "msg_contents": "Hi,\n \nI have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index as it would yield better performance. Is there any way to enforce the ordering for the indices? How does the query planner decide which index to use when a particular query is fired? 'Explain Analyze' showed the total index being used in a situation that could be fulfiled by the partial index.\n \nThanks,\nRohit\n\n\n\n\n\n\t\t\n---------------------------------\nHow much free photo storage do you get? Store your friends n family photos for FREE with Yahoo! Photos. \n http://in.photos.yahoo.com\n\n\n\n\nHi,\n \nI have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index as it would yield better performance. Is there any way to enforce the ordering for the indices? How does the query planner decide which index to use when a particular query is fired?  'Explain Analyze' showed the total index being used in a situation that could be fulfiled by the partial index.\n \nThanks,\nRohit\nHow much free photo storage do you get? Store your friends n family photos for FREE with Yahoo! Photos. http://in.photos.yahoo.com", "msg_date": "Tue, 21 Jun 2005 08:42:23 +0100 (BST)", "msg_from": "Rohit Gaddi <[email protected]>", "msg_from_op": true, "msg_subject": "index selection by query planner" } ]
[ { "msg_contents": "Hi,\n \nI have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index as it would yield better performance. Is there any way to enforce the ordering for the indices? How does the query planner decide which index to use when a particular query is fired? 'Explain Analyze' showed the total index being used in a situation that could be fulfiled by the partial index.\n \nThanks,\nRohit\n\n\n\t\t\n---------------------------------\nHow much free photo storage do you get? Store your friends n family photos for FREE with Yahoo! Photos. \n http://in.photos.yahoo.com\n\nHi,\n \nI have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index as it would yield better performance. Is there any way to enforce the ordering for the indices? How does the query planner decide which index to use when a particular query is fired?  'Explain Analyze' showed the total index being used in a situation that could be fulfiled by the partial index.\n \nThanks,\nRohit\nHow much free photo storage do you get? Store your friends n family photos for FREE with Yahoo! Photos. http://in.photos.yahoo.com", "msg_date": "Tue, 21 Jun 2005 08:48:50 +0100 (BST)", "msg_from": "Rohit Gaddi <[email protected]>", "msg_from_op": true, "msg_subject": "index selection by query planner" } ]
[ { "msg_contents": "Hi all,\n\nI have like a repository table with is very very huge with atleast a few\nhundreds of millions, may be over that. The information is stored in form of\nrows in these tables. I need to make that information wide based on some\ngrouping and display them as columns on the screen.\n\nI am thinking of having a solution where I create views for each screen,\nwhich are just read only.\n\nHowever, I donot know if the query that creates the view is executed\neverytime I select something from the view. Because if that is the case,\nthen I think my queries will again be slow. But if that is the way views\nwork, then what would be the point in creating them ..\n\nAny suggestions, helps --\n\n(Please pardon if this question should not be on performance forum)\n\nThanks,\nAmit\n\n", "msg_date": "Tue, 21 Jun 2005 10:01:21 -0400", "msg_from": "Amit V Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Do Views execute underlying query everytime ??" }, { "msg_contents": "On 6/21/05, Amit V Shah <[email protected]> wrote:\n> Hi all,\n...\n> I am thinking of having a solution where I create views for each screen,\n> which are just read only.\n> \n> However, I donot know if the query that creates the view is executed\n> everytime I select something from the view. Because if that is the case,\n> then I think my queries will again be slow. But if that is the way views\n> work, then what would be the point in creating them ..\n> \n> Any suggestions, helps --\n\nThey do get executed every time. I have a similar issue, but my data\ndoes not change very frequently, so instead of using a view, I create\nlookup tables to hold the data. So once a day I do something like\nthis:\ndrop lookup_table_1;\ncreate table lookup_table_1 as SELECT ...;\n\nIn my case, rows are not deleted or updated, so I don't actually do a\n\"drop table...\" I merely add new records to the existing table, but if\nyour data changes, the drop table technique can be faster than doing a\ndelete or update.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 21 Jun 2005 09:45:21 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" }, { "msg_contents": "Amit V Shah wrote:\n> Hi all,\n> \n> I have like a repository table with is very very huge with atleast a few\n> hundreds of millions, may be over that. The information is stored in form of\n> rows in these tables. I need to make that information wide based on some\n> grouping and display them as columns on the screen.\n> \n> I am thinking of having a solution where I create views for each screen,\n> which are just read only.\n> \n> However, I donot know if the query that creates the view is executed\n> everytime I select something from the view. Because if that is the case,\n> then I think my queries will again be slow. But if that is the way views\n> work, then what would be the point in creating them ..\n\nThat's exactly how they work. You'd still want them because they let you \nsimplify access control (user A can only see some rows, user B can see \nall rows) or just make your queries simpler.\n\nSounds like you want what is known as a \"materialised view\" which is \nbasically a summary table that is kept up to date by triggers. You query \nthe table instead of actually recalculating every time. 
Perhaps google \nfor \"postgresql materialized view\" (you might want a \"z\" or \"s\" in \nmaterialised).\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 21 Jun 2005 15:48:39 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" }, { "msg_contents": "\n\n> However, I donot know if the query that creates the view is executed\n> everytime I select something from the view. Because if that is the case,\n> then I think my queries will again be slow. But if that is the way views\n> work, then what would be the point in creating them ..\n\n\tViews are more for when you have a query which keeps coming a zillion \ntime in your application like :\n\nSELECT p.*, pd.* FROM products p, products_names pd WHERE p.id=pd.id AND \npd.language=...\n\n\tYou create a view like :\n\nCREATE VIEW products_with_name AS SELECT p.*, pd.* FROM products p, \nproducts_names pd WHERE p.id=pd.id\n\n\tAnd then you :\n\nSELECT * FROM products_with_name WHERE id=... AND language=...\n\n\tIt saves a lot of headache and typing over and over again the same thing, \nand you can tell your ORM library to use them, too.\n\n\tBut for your application, they're useless, You should create a \n\"materialized view\"... which is just a table and update it from a CRON job.\n\tYou can still use a view to fill your table, and as a way to hold your \nquery, so the cron job doesn't have to issue real queries, just filling \ntables from views :\n\nCREATE VIEW cached_stuff_view AS ...\n\nAnd once in while :\n\nBEGIN;\nDROP TABLE cached_stuff;\nCREATE TABLE cached_stuff AS SELECT * FROM cached_stuff_view;\nCREATE INDEX ... ON cached_stuff( ... )\nCOMMIT;\nANALYZE cached_stuff;\n\nOr :\nBEGIN;\nTRUNCATE cached_stuff;\nINSERT INTO cached_stuff SELECT * FROM cached_stuff_view;\nCOMMIT;\nANALYZE cached_stuff;\n\nIf you update your entire table it's faster to just junk it or truncate it \nthen recreate it, but maybe you'd prefer TRUNCATE which saves you from \nhaving to re-create of indexes... but it'll be faster if you drop the \nindexes and re-create them afterwards anyway instead of them being updated \nfor each row inserted. So I'd say DROP TABLE.\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 21 Jun 2005 17:00:10 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" } ]
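A minimal sketch of the trigger-maintained summary table ("materialized view") Richard mentions. All names are hypothetical, only INSERTs are handled to keep it short, and it assumes the plpgsql language is installed in the database.

CREATE TABLE orders (id serial, customer integer, amount numeric);
CREATE TABLE order_totals (customer integer PRIMARY KEY, total numeric);

CREATE OR REPLACE FUNCTION order_totals_upd() RETURNS trigger AS '
BEGIN
    UPDATE order_totals SET total = total + NEW.amount
     WHERE customer = NEW.customer;
    IF NOT FOUND THEN
        INSERT INTO order_totals (customer, total)
        VALUES (NEW.customer, NEW.amount);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER order_totals_trg AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE order_totals_upd();

-- Readers hit the small pre-aggregated table instead of re-running the query
SELECT total FROM order_totals WHERE customer = 42;

UPDATE and DELETE triggers would be needed as well if the base table is ever modified; for an insert-only repository like the one described, the single trigger is enough.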
[ { "msg_contents": "Hi,\n\nI have a very simple query on a big table. When I issue a \"limit\" \nand/or \"offset\" clause, the query is not using the index.\nCan anyone explain me this ?\n\nrvponp=# explain select * from tblprintjobs order by loginuser, \ndesceventdate, desceventtime offset 25 limit 25 ;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------\n Limit (cost=349860.62..349860.68 rows=25 width=206)\n -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n Sort Key: loginuser, desceventdate, desceventtime\n -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 \nwidth=206)\n(4 rows)\n\nrvponp=# explain select * from tblprintjobs order by loginuser, \ndesceventdate, desceventtime ;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----\n Sort (cost=349860.56..351416.15 rows=622236 width=206)\n Sort Key: loginuser, desceventdate, desceventtime\n -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 \nwidth=206)\n(3 rows)\n\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 16:33:55 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Limit clause not using index" }, { "msg_contents": "Yves Vindevogel wrote:\n> Hi, \n> \n> rvponp=# explain select * from tblprintjobs order by loginuser,\n> desceventdate, desceventtime offset 25 limit 25 ;\n>\n> I have a very simple query on a big table. When I issue a \"limit\"\n> and/or \"offset\" clause, the query is not using the index. \n> Can anyone explain me this ? \n\nDo you have an index on (loginuser,desceventdate,desceventtime)?\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Tue, 21 Jun 2005 07:40:34 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "Yves Vindevogel wrote:\n\n> Hi,\n>\n> I have a very simple query on a big table. When I issue a \"limit\" \n> and/or \"offset\" clause, the query is not using the index.\n> Can anyone explain me this ?\n\nYou didn't give enough information. What does you index look like that \nyou are expecting it to use?\nGenerally, you want to have matching columns. 
So you would want\nCREATE INDEX blah ON tblprintjobs(loginuser, desceventdate, desceventtime);\n\nNext, you should post EXPLAIN ANALYZE instead of regular explain, so we \ncan have an idea if the planner is actually making correct estimations.\n\nJohn\n=:->\n\n>\n> rvponp=# explain select * from tblprintjobs order by loginuser, \n> desceventdate, desceventtime offset 25 limit 25 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------- \n>\n> Limit (cost=349860.62..349860.68 rows=25 width=206)\n> -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n> Sort Key: loginuser, desceventdate, desceventtime\n> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 width=206)\n> (4 rows)\n>\n> rvponp=# explain select * from tblprintjobs order by loginuser, \n> desceventdate, desceventtime ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------- \n>\n> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n> Sort Key: loginuser, desceventdate, desceventtime\n> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 width=206)\n> (3 rows)\n>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> *Yves Vindevogel*\n> *Implements*", "msg_date": "Tue, 21 Jun 2005 09:42:36 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "Yves Vindevogel <[email protected]> writes:\n> Can anyone explain me this ?\n\n> rvponp=# explain select * from tblprintjobs order by loginuser, \n> desceventdate, desceventtime offset 25 limit 25 ;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> -----------\n> Limit (cost=349860.62..349860.68 rows=25 width=206)\n> -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n> Sort Key: loginuser, desceventdate, desceventtime\n> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 width=206)\n> (4 rows)\n\n\nDo you have an index matching that sort key? I'd certainly expect the\nabove to use it if it were there. 
For the full table case it's not so\nclear --- an indexscan isn't always better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Jun 2005 10:42:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index " }, { "msg_contents": "These are my indexes\n\n\tcreate index ixprintjobsapplicationtype on tblPrintjobs \n(applicationtype);\n\tcreate index ixprintjobsdesceventdate on tblPrintjobs (desceventdate);\n\tcreate index ixprintjobsdesceventtime on tblPrintjobs (desceventtime);\n\tcreate index ixprintjobsdescpages on tblPrintjobs (descpages);\n\tcreate index ixprintjobsdocumentname on tblPrintjobs (documentname) ;\n\tcreate index ixprintjobseventcomputer on tblPrintjobs (eventcomputer);\n\tcreate index ixprintjobseventdate on tblPrintjobs (eventdate);\n\tcreate index ixprintjobseventtime on tblPrintjobs (eventtime);\n\tcreate index ixprintjobseventuser on tblPrintjobs (eventuser);\n\tcreate index ixprintjobshostname on tblPrintjobs (hostname) ;\n\tcreate index ixprintjobsipaddress on tblPrintjobs (ipaddress) ;\n\tcreate index ixprintjobsloginuser on tblPrintjobs (loginuser) ;\n\tcreate index ixprintjobspages on tblPrintjobs (pages) ;\n\tcreate index ixprintjobsprintport on tblPrintjobs (printport) ;\n\tcreate index ixprintjobsprintqueue on tblPrintjobs (printqueue) ;\n\tcreate index ixprintjobsrecordnumber on tblPrintjobs (recordnumber) ;\n\tcreate index ixprintjobssize on tblPrintjobs (size) ;\n\tcreate index ixprintjobsusertype on tblPrintjobs (usertype) ;\n\tcreate index ixPrintjobsDescpagesDocumentname on tblPrintjobs \n(descpages, documentname) ;\n\tcreate index ixPrintjobsHostnamePrintqueueDesceventdateDesceventtime \non tblPrintjobs (hostname, printqueue, desceventdate, desceventtime) ;\n\tcreate index ixPrintjobsLoginDescEventdateDesceventtime on \ntblPrintjobs (loginuser, desceventdate, desceventtime) ;\n\n\nOn 21 Jun 2005, at 16:42, Tom Lane wrote:\n\n> Yves Vindevogel <[email protected]> writes:\n>> Can anyone explain me this ?\n>\n>> rvponp=# explain select * from tblprintjobs order by loginuser,\n>> desceventdate, desceventtime offset 25 limit 25 ;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> --\n>> -----------\n>> Limit (cost=349860.62..349860.68 rows=25 width=206)\n>> -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n>> Sort Key: loginuser, desceventdate, desceventtime\n>> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 \n>> rows=622236 width=206)\n>> (4 rows)\n>\n>\n> Do you have an index matching that sort key? I'd certainly expect the\n> above to use it if it were there. For the full table case it's not so\n> clear --- an indexscan isn't always better.\n>\n> \t\t\tregards, tom lane\n>\n>\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. 
\nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 16:57:51 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit clause not using index " }, { "msg_contents": "rvponp=# explain analyze select * from tblPrintjobs order by loginuser, \ndesceventdate, desceventtime ;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------------------------\n Sort (cost=345699.06..347256.49 rows=622972 width=203) (actual \ntime=259438.952..268885.586 rows=622972 loops=1)\n Sort Key: loginuser, desceventdate, desceventtime\n -> Seq Scan on tblprintjobs (cost=0.00..25596.72 rows=622972 \nwidth=203) (actual time=21.155..8713.810 rows=622972 loops=1)\n Total runtime: 271583.422 ms\n(4 rows)\n\nOn 21 Jun 2005, at 16:42, John A Meinel wrote:\n\n> Yves Vindevogel wrote:\n>\n>> Hi,\n>>\n>> I have a very simple query on a big table. When I issue a \"limit\" \n>> and/or \"offset\" clause, the query is not using the index.\n>> Can anyone explain me this ?\n>\n> You didn't give enough information. What does you index look like that \n> you are expecting it to use?\n> Generally, you want to have matching columns. So you would want\n> CREATE INDEX blah ON tblprintjobs(loginuser, desceventdate, \n> desceventtime);\n>\n> Next, you should post EXPLAIN ANALYZE instead of regular explain, so \n> we can have an idea if the planner is actually making correct \n> estimations.\n>\n> John\n> =:->\n>\n>>\n>> rvponp=# explain select * from tblprintjobs order by loginuser, \n>> desceventdate, desceventtime offset 25 limit 25 ;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> -------------\n>> Limit (cost=349860.62..349860.68 rows=25 width=206)\n>> -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n>> Sort Key: loginuser, desceventdate, desceventtime\n>> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 \n>> width=206)\n>> (4 rows)\n>>\n>> rvponp=# explain select * from tblprintjobs order by loginuser, \n>> desceventdate, desceventtime ;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> -------\n>> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n>> Sort Key: loginuser, desceventdate, desceventtime\n>> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 rows=622236 \n>> width=206)\n>> (3 rows)\n>>\n>> Met vriendelijke groeten,\n>> Bien � vous,\n>> Kind regards,\n>>\n>> *Yves Vindevogel*\n>> *Implements*\n>\n>\n>\n>\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. 
\nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 17:07:54 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "Nevermind guys ....\nThere's an error in a function that is creating these indexes.\nThe function never completed succesfully so the index is not there\n\nVery sorry about this !!\n\n\nOn 21 Jun 2005, at 16:57, Yves Vindevogel wrote:\n\n> These are my indexes\n>\n> \tcreate index ixprintjobsapplicationtype on tblPrintjobs \n> (applicationtype);\n> \tcreate index ixprintjobsdesceventdate on tblPrintjobs (desceventdate);\n> \tcreate index ixprintjobsdesceventtime on tblPrintjobs (desceventtime);\n> \tcreate index ixprintjobsdescpages on tblPrintjobs (descpages);\n> \tcreate index ixprintjobsdocumentname on tblPrintjobs (documentname) ;\n> \tcreate index ixprintjobseventcomputer on tblPrintjobs (eventcomputer);\n> \tcreate index ixprintjobseventdate on tblPrintjobs (eventdate);\n> \tcreate index ixprintjobseventtime on tblPrintjobs (eventtime);\n> \tcreate index ixprintjobseventuser on tblPrintjobs (eventuser);\n> \tcreate index ixprintjobshostname on tblPrintjobs (hostname) ;\n> \tcreate index ixprintjobsipaddress on tblPrintjobs (ipaddress) ;\n> \tcreate index ixprintjobsloginuser on tblPrintjobs (loginuser) ;\n> \tcreate index ixprintjobspages on tblPrintjobs (pages) ;\n> \tcreate index ixprintjobsprintport on tblPrintjobs (printport) ;\n> \tcreate index ixprintjobsprintqueue on tblPrintjobs (printqueue) ;\n> \tcreate index ixprintjobsrecordnumber on tblPrintjobs (recordnumber) ;\n> \tcreate index ixprintjobssize on tblPrintjobs (size) ;\n> \tcreate index ixprintjobsusertype on tblPrintjobs (usertype) ;\n> \tcreate index ixPrintjobsDescpagesDocumentname on tblPrintjobs \n> (descpages, documentname) ;\n> \tcreate index ixPrintjobsHostnamePrintqueueDesceventdateDesceventtime \n> on tblPrintjobs (hostname, printqueue, desceventdate, desceventtime) ;\n> \tcreate index ixPrintjobsLoginDescEventdateDesceventtime on \n> tblPrintjobs (loginuser, desceventdate, desceventtime) ;\n>\n>\n> On 21 Jun 2005, at 16:42, Tom Lane wrote:\n>\n>> Yves Vindevogel <[email protected]> writes:\n>>> Can anyone explain me this ?\n>>\n>>> rvponp=# explain select * from tblprintjobs order by loginuser,\n>>> desceventdate, desceventtime offset 25 limit 25 ;\n>>> QUERY PLAN\n>>> --------------------------------------------------------------------- \n>>> ---\n>>> -----------\n>>> Limit (cost=349860.62..349860.68 rows=25 width=206)\n>>> -> Sort (cost=349860.56..351416.15 rows=622236 width=206)\n>>> Sort Key: loginuser, desceventdate, desceventtime\n>>> -> Seq Scan on tblprintjobs (cost=0.00..25589.36 \n>>> rows=622236 width=206)\n>>> (4 rows)\n>>\n>>\n>> Do you have an index matching that sort key? I'd certainly expect the\n>> above to use it if it were there. For the full table case it's not so\n>> clear --- an indexscan isn't always better.\n>>\n>> \t\t\tregards, tom lane\n>>\n>>\n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. 
\n> Then you win.\n> Mahatma Ghandi.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 17:10:23 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit clause not using index " }, { "msg_contents": "Yves Vindevogel wrote:\n\n> rvponp=# explain analyze select * from tblPrintjobs order by\n> loginuser, desceventdate, desceventtime ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=345699.06..347256.49 rows=622972 width=203) (actual\n> time=259438.952..268885.586 rows=622972 loops=1)\n> Sort Key: loginuser, desceventdate, desceventtime\n> -> Seq Scan on tblprintjobs (cost=0.00..25596.72 rows=622972\n> width=203) (actual time=21.155..8713.810 rows=622972 loops=1)\n> Total runtime: 271583.422 ms\n> (4 rows)\n\n\nCan you post it with the limit? I realize the query takes a long time,\nbut that is the more important query to look at.\n\nAlso, just as a test, if you can, try dropping most of the indexes\nexcept for the important one. It might be that the planner is having a\nhard time because there are too many permutations to try.\nI believe if you drop the indexes inside a transaction, they will still\nbe there for other queries, and if you rollback instead of commit, you\nwon't lose anything.\n\nBEGIN;\nDROP INDEX ...\nEXPLAIN ANALYZE SELECT *...\nROLLBACK;\n\nJohn\n=:->", "msg_date": "Tue, 21 Jun 2005 10:14:24 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "Yves Vindevogel <[email protected]> writes:\n> \tcreate index ixPrintjobsLoginDescEventdateDesceventtime on \n> tblPrintjobs (loginuser, desceventdate, desceventtime) ;\n\nHmm, that certainly looks like it should match the query. 
What happens\nto the EXPLAIN output if you do \"set enable_sort = false\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Jun 2005 11:30:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index " }, { "msg_contents": "[John A Meinel - Tue at 10:14:24AM -0500]\n> I believe if you drop the indexes inside a transaction, they will still\n> be there for other queries, and if you rollback instead of commit, you\n> won't lose anything.\n\nHas anyone tested this?\n\n(sorry, I only have the production database to play with at the moment,\nand I don't think I should play with it ;-)\n\n-- \nTobias Brox, Beijing\n\n", "msg_date": "Tue, 21 Jun 2005 21:46:39 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "On Tue, Jun 21, 2005 at 09:46:39PM +0200, Tobias Brox wrote:\n> [John A Meinel - Tue at 10:14:24AM -0500]\n> > I believe if you drop the indexes inside a transaction, they will still\n> > be there for other queries, and if you rollback instead of commit, you\n> > won't lose anything.\n> \n> Has anyone tested this?\n\nObservations from tests with 8.0.3:\n\nDROP INDEX acquires an AccessExclusiveLock on the table and on the\nindex. This will cause the transaction executing the DROP INDEX\nto block until no other transaction holds any kind of lock on either,\nand once the locks are acquired, no other transaction will be able\nto access the table or the index until the transaction doing the\nDROP INDEX commits or rolls back. Rolling back leaves the index\nin place.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Tue, 21 Jun 2005 15:08:19 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" }, { "msg_contents": "Tobias Brox <[email protected]> writes:\n> [John A Meinel - Tue at 10:14:24AM -0500]\n>> I believe if you drop the indexes inside a transaction, they will still\n>> be there for other queries, and if you rollback instead of commit, you\n>> won't lose anything.\n\n> Has anyone tested this?\n\nCertainly. Bear in mind though that DROP INDEX will acquire exclusive\nlock on the index's table, so until you roll back, no other transaction\nwill be able to touch the table at all. So the whole thing may be a\nnonstarter in a production database anyway :-(. You can probably get\naway with\n\tBEGIN;\n\tDROP INDEX ...\n\tEXPLAIN ...\n\tROLLBACK;\nif you fire it from a script rather than by hand --- but EXPLAIN\nANALYZE might be a bad idea ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Jun 2005 17:20:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index " }, { "msg_contents": "[Tom Lane - Tue at 05:20:07PM -0400]\n> \n> Certainly. Bear in mind though that DROP INDEX will acquire exclusive\n> lock on the index's table, so until you roll back, no other transaction\n> will be able to touch the table at all. So the whole thing may be a\n> nonstarter in a production database anyway :-(.\n\nThat's what I was afraid of. 
I was running psql at the production DB\nwithout starting a transaction (bad habit, I know) and tried to drop an\nindex there, but I had to cancel the transaction, it took forever and\nin the same time blocking all the revenue-generating activity.\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Wed, 22 Jun 2005 11:54:55 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit clause not using index" } ]
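The BEGIN / DROP INDEX / EXPLAIN / ROLLBACK experiment described in this thread, written out as a minimal sketch. The dropped index name is hypothetical (one of the "extra" indexes, not the three-column one), and the caveat above still applies: DROP INDEX holds an exclusive lock on the table until the ROLLBACK, so run it on a test copy or a quiet system, and use plain EXPLAIN rather than EXPLAIN ANALYZE so nothing slow executes while the lock is held.

BEGIN;
-- Hide one of the other indexes from the planner for the duration of this test only.
DROP INDEX ixprintjobs_loginuser;   -- hypothetical index name
-- Plain EXPLAIN: we only want the plan, not a long execution under the exclusive lock.
EXPLAIN SELECT * FROM tblprintjobs
ORDER BY loginuser, desceventdate, desceventtime
LIMIT 100;
-- Undo the drop; the index comes back exactly as it was.
ROLLBACK;

Firing this from a script rather than typing it interactively keeps the lock window as short as possible, which is the point made above.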
[ { "msg_contents": "After I sent out this email, I found this article from google\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nLooks like we can control as to when the views refresh... I am still kind of\nconfused, and would appreciate help !!\n\nThe create/drop table does sound a solution that can work, but the thing is\nI want to get manual intervention out, and besides, my work flow is very\ncomplex so this might not be an option for me :-(\n\nThanks,\nAmit\n\n-----Original Message-----\nFrom: Matthew Nuzum [mailto:[email protected]]\nSent: Tuesday, June 21, 2005 10:45 AM\nTo: Amit V Shah\nCc: [email protected]\nSubject: Re: [PERFORM] Do Views execute underlying query everytime ??\n\n\nOn 6/21/05, Amit V Shah <[email protected]> wrote:\n> Hi all,\n...\n> I am thinking of having a solution where I create views for each screen,\n> which are just read only.\n> \n> However, I donot know if the query that creates the view is executed\n> everytime I select something from the view. Because if that is the case,\n> then I think my queries will again be slow. But if that is the way views\n> work, then what would be the point in creating them ..\n> \n> Any suggestions, helps --\n\nThey do get executed every time. I have a similar issue, but my data\ndoes not change very frequently, so instead of using a view, I create\nlookup tables to hold the data. So once a day I do something like\nthis:\ndrop lookup_table_1;\ncreate table lookup_table_1 as SELECT ...;\n\nIn my case, rows are not deleted or updated, so I don't actually do a\n\"drop table...\" I merely add new records to the existing table, but if\nyour data changes, the drop table technique can be faster than doing a\ndelete or update.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n\n", "msg_date": "Tue, 21 Jun 2005 10:49:28 -0400", "msg_from": "Amit V Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Do Views execute underlying query everytime ??" }, { "msg_contents": "Amit V Shah wrote:\n\n>After I sent out this email, I found this article from google\n>\n>http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n>\n>Looks like we can control as to when the views refresh... I am still kind of\n>confused, and would appreciate help !!\n>\n>The create/drop table does sound a solution that can work, but the thing is\n>I want to get manual intervention out, and besides, my work flow is very\n>complex so this might not be an option for me :-(\n>\n>Thanks,\n>Amit\n>\n\nJust to make it clear, a view is not the same as a materialized view.\nA view is just a set of rules to the planner so that it can simplify\ninteractions with the database. A materialized view is a query which has\nbeen saved into a table.\n\nTo set it up properly, really depends on what your needs are.\n\n 1. How much time can elapse between an update to the system, and an\n update to the materialized views?\n 2. How many updates / (sec, min, hour, month) do you expect. Is\n insert performance critical, or secondary.\n\nFor instance, if you get a lot of updates, but you can have a 1 hour lag\nbetween the time a new row is inserted and the view is updated, you can\njust create a cron job that runs every hour to regenerate the\nmaterialized view.\n\nIf you don't get many updates, but you need them to show up right away,\nthen you can add triggers to the affected tables, such that\ninserting/updating to a specific table causes an update to the\nmaterialized view.\n\nThere are quite a few potential tradeoffs. 
Rather than doing a\nmaterialized view, you could just improve your filters. If you are doing\na query to show people the results, you generally have some sort of\nupper bound on how much data you can display. Humans don't like reading\nmore than 100 or 1000 rows. So create your normal query, and just take\non a LIMIT 100 at the end. If you structure your query properly, and\nhave appropriate indexes, you should be able to make the LIMIT count,\nand allow you to save a lot of overhead of generating rows that you\ndon't use.\n\nI would probably start by posting the queries you are currently using,\nalong with an EXPLAIN ANALYZE, and a description of what you actually\nneed from the query. Then this list can be quite helpful in\nrestructuring your query to make it faster.\n\nJohn\n=:->", "msg_date": "Tue, 21 Jun 2005 10:01:13 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" } ]
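To make the distinction above concrete, here is a minimal sketch using invented names (stats_raw, stat_name) rather than the original poster's schema: a plain view re-runs its defining SELECT on every access, while the cached table stores the computed result and only changes when it is rebuilt.

-- A view is just a stored query; every SELECT against it re-runs the aggregate.
CREATE VIEW stats_summary_v AS
    SELECT stat_name, count(*) AS n
    FROM stats_raw
    GROUP BY stat_name;

-- The cached ("materialized") equivalent: compute once, store it, index it.
CREATE TABLE stats_summary AS
    SELECT stat_name, count(*) AS n
    FROM stats_raw
    GROUP BY stat_name;
CREATE INDEX stats_summary_name_idx ON stats_summary (stat_name);

-- Reads now hit the small precomputed table instead of aggregating the big one.
SELECT n FROM stats_summary WHERE stat_name = 'example';

Which refresh strategy to pair with the cached table (a scheduled rebuild versus triggers) is exactly the tradeoff discussed above.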
[ { "msg_contents": "Hi,\n\nI have another question regarding indexes.\n\nI have a table with a lot of indexes on it. Those are needed to \nperform my searches.\nOnce a day, a bunch of records is inserted in my table.\n\nSay, my table has 1.000.000 records and I add 10.000 records (1% new)\nWhat would be faster.\n\n1) Dropping my indexes and recreating them after the inserts\n2) Just inserting it and have PG manage the indexes\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 17:17:51 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Another question on indexes (drop and recreate)" }, { "msg_contents": "Yves Vindevogel wrote:\n\n> Hi,\n>\n> I have another question regarding indexes.\n>\n> I have a table with a lot of indexes on it. Those are needed to \n> perform my searches.\n> Once a day, a bunch of records is inserted in my table.\n>\n> Say, my table has 1.000.000 records and I add 10.000 records (1% new)\n> What would be faster.\n>\n> 1) Dropping my indexes and recreating them after the inserts\n> 2) Just inserting it and have PG manage the indexes\n>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> *Yves Vindevogel*\n> *Implements*\n\n\nI'm guessing for 1% new that (2) would be faster.\nJohn\n=:->", "msg_date": "Tue, 21 Jun 2005 10:22:27 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another question on indexes (drop and recreate)" }, { "msg_contents": "And, after let's say a week, would that index still be optimal or would \nit be a good idea to drop it in the weekend and recreate it.\n\nOn 21 Jun 2005, at 17:22, John A Meinel wrote:\n\n> Yves Vindevogel wrote:\n>\n>> Hi,\n>>\n>> I have another question regarding indexes.\n>>\n>> I have a table with a lot of indexes on it. Those are needed to \n>> perform my searches.\n>> Once a day, a bunch of records is inserted in my table.\n>>\n>> Say, my table has 1.000.000 records and I add 10.000 records (1% new)\n>> What would be faster.\n>>\n>> 1) Dropping my indexes and recreating them after the inserts\n>> 2) Just inserting it and have PG manage the indexes\n>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> *Yves Vindevogel*\n>> *Implements*\n>\n>\n> I'm guessing for 1% new that (2) would be faster.\n> John\n> =:->\n>\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 17:28:20 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another question on indexes (drop and recreate)" }, { "msg_contents": "Yves Vindevogel wrote:\n\n> And, after let's say a week, would that index still be optimal or\n> would it be a good idea to drop it in the weekend and recreate it.\n\n\nIt depends a little bit on the postgres version you are using. 
If you\nare only ever adding to the table, and you are not updating it or\ndeleting from it, I think the index is always optimal.\nOnce you start deleting from it there are a few cases where older\nversions would not properly re-use the empty entries, requiring a\nREINDEX. (Deleting low numbers and always adding high numbers was one of\nthe cases)\n\nHowever, I believe that as long as you vacuum often enough, so that the\nsystem knows where the unused entries are, you don't ever have to drop\nand re-create the index.\n\nJohn\n=:->", "msg_date": "Tue, 21 Jun 2005 10:49:28 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another question on indexes (drop and recreate)" }, { "msg_contents": "I only add records, and most of the values are \"random\"\nExcept the columns for dates, ....\n\nOn 21 Jun 2005, at 17:49, John A Meinel wrote:\n\n> Yves Vindevogel wrote:\n>\n>> And, after let's say a week, would that index still be optimal or\n>> would it be a good idea to drop it in the weekend and recreate it.\n>\n>\n> It depends a little bit on the postgres version you are using. If you\n> are only ever adding to the table, and you are not updating it or\n> deleting from it, I think the index is always optimal.\n> Once you start deleting from it there are a few cases where older\n> versions would not properly re-use the empty entries, requiring a\n> REINDEX. (Deleting low numbers and always adding high numbers was one \n> of\n> the cases)\n>\n> However, I believe that as long as you vacuum often enough, so that the\n> system knows where the unused entries are, you don't ever have to drop\n> and re-create the index.\n>\n> John\n> =:->\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 18:43:58 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another question on indexes (drop and recreate)" }, { "msg_contents": "Yves Vindevogel wrote:\n\n> I only add records, and most of the values are \"random\"\n> Except the columns for dates, ....\n\nI doubt that you would need to recreate indexes. That really only needs\nto be done in pathological cases, most of which have been fixed in the\nlatest postgres.\n\nIf you are only inserting (never updating or deleting), the index can\nnever bloat, since you are only adding new stuff.\n(You cannot get dead items to bloat your index if you never delete\nanything.)\n\nJohn\n=:->", "msg_date": "Tue, 21 Jun 2005 11:54:55 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another question on indexes (drop and recreate)" }, { "msg_contents": "Ok, tnx !!\n\nOn 21 Jun 2005, at 18:54, John A Meinel wrote:\n\n> Yves Vindevogel wrote:\n>\n>> I only add records, and most of the values are \"random\"\n>> Except the columns for dates, ....\n>\n> I doubt that you would need to recreate indexes. 
That really only needs\n> to be done in pathological cases, most of which have been fixed in the\n> latest postgres.\n>\n> If you are only inserting (never updating or deleting), the index can\n> never bloat, since you are only adding new stuff.\n> (You cannot get dead items to bloat your index if you never delete\n> anything.)\n>\n> John\n> =:->\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 21 Jun 2005 21:30:50 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another question on indexes (drop and recreate)" } ]
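For anyone who would rather measure the two options than guess, a sketch with hypothetical names (table t with a couple of indexed columns, and t_staging holding the day's 10,000 new rows). The two forms are alternatives, so time one or the other, not both; at roughly 1% new rows the second form is normally the winner, as suggested above.

-- Option 1: drop the indexes, load, rebuild (tends to pay off only for much larger loads).
BEGIN;
DROP INDEX t_col1_idx;
DROP INDEX t_col2_idx;
INSERT INTO t SELECT * FROM t_staging;
CREATE INDEX t_col1_idx ON t (col1);
CREATE INDEX t_col2_idx ON t (col2);
COMMIT;

-- Option 2: just insert and let the existing indexes be maintained row by row,
-- then refresh the planner statistics.
INSERT INTO t SELECT * FROM t_staging;
ANALYZE t;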
[ { "msg_contents": "First of all, thanks to everyone for helping me !\n\nLooks like materialized views will be my answer.\n\nLet me explain my situation a little better. \n\nThe repository table looks like this -\n\ncreate table repository (statName varchar(45), statValue varchar(45),\nmetaData varchar(45));\n\nMetaData is a foreign key to other tables. \n\nThe screens show something like following -\n\nScreen 1 -\nStat1 Stat2 Stat3\nValue Value Value\nValue Value Value\n\n\n\nScreen 2 -\nStat3 Stat1 Stat5\nValue Value Value\nValue Value Value\n\n\netc. etc.\n\nThe data is grouped based on metaData. \n\nUpdates will only occur nightly and can be controlled. But selects occur\n9-5.\n\nOne of the compelling reasons I feel is that to create such tables out of\nrepository tables, the query would be very complicated. If I have a\nmaterialized view, I think the information will be \"cached\". \n\nAnother concern I have is load. If I have lot of simultaneous users creating\nsuch \"wide tables\" out of one \"long table\", that would generate substantial\nload on the servers. ??\n\nI like the materialized view solution better than having other tables for\neach screen. (Would be nice if someone can comment on that)\n\nSo that is my situation. \n\nAgain, thanks everyone for helping\nAmit\n\n-----Original Message-----\nFrom: John A Meinel [mailto:[email protected]]\nSent: Tuesday, June 21, 2005 11:01 AM\nTo: Amit V Shah\nCc: '[email protected]'; [email protected]\nSubject: Re: [PERFORM] Do Views execute underlying query everytime ??\n\n\nAmit V Shah wrote:\n\n>After I sent out this email, I found this article from google\n>\n>http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n>\n>Looks like we can control as to when the views refresh... I am still kind\nof\n>confused, and would appreciate help !!\n>\n>The create/drop table does sound a solution that can work, but the thing is\n>I want to get manual intervention out, and besides, my work flow is very\n>complex so this might not be an option for me :-(\n>\n>Thanks,\n>Amit\n>\n\nJust to make it clear, a view is not the same as a materialized view.\nA view is just a set of rules to the planner so that it can simplify\ninteractions with the database. A materialized view is a query which has\nbeen saved into a table.\n\nTo set it up properly, really depends on what your needs are.\n\n 1. How much time can elapse between an update to the system, and an\n update to the materialized views?\n 2. How many updates / (sec, min, hour, month) do you expect. Is\n insert performance critical, or secondary.\n\nFor instance, if you get a lot of updates, but you can have a 1 hour lag\nbetween the time a new row is inserted and the view is updated, you can\njust create a cron job that runs every hour to regenerate the\nmaterialized view.\n\nIf you don't get many updates, but you need them to show up right away,\nthen you can add triggers to the affected tables, such that\ninserting/updating to a specific table causes an update to the\nmaterialized view.\n\nThere are quite a few potential tradeoffs. Rather than doing a\nmaterialized view, you could just improve your filters. If you are doing\na query to show people the results, you generally have some sort of\nupper bound on how much data you can display. Humans don't like reading\nmore than 100 or 1000 rows. So create your normal query, and just take\non a LIMIT 100 at the end. 
If you structure your query properly, and\nhave appropriate indexes, you should be able to make the LIMIT count,\nand allow you to save a lot of overhead of generating rows that you\ndon't use.\n\nI would probably start by posting the queries you are currently using,\nalong with an EXPLAIN ANALYZE, and a description of what you actually\nneed from the query. Then this list can be quite helpful in\nrestructuring your query to make it faster.\n\nJohn\n=:->\n\n", "msg_date": "Tue, 21 Jun 2005 11:46:59 -0400", "msg_from": "Amit V Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Do Views execute underlying query everytime ??" }, { "msg_contents": "\n From what you say I understand that you have a huge table like this :\n\n( name, value, id )\n\nAnd you want to make statistics on (value) according to (name,id).\n\n***************************************************\n\nFirst of all a \"materialized view\" doen't exist in postgres, it's just a \nword to name \"a table automatically updated by triggers\".\nAn example would be like this :\n\ntable orders (order_id, ...)\ntable ordered_products (order_id, product_id, quantity, ...)\n\nIf you want to optimize the slow request :\n\"SELECT product_id, sum(quantity) as total_quantity_ordered\n FROM ordered_products GROUP BY product_id\"\n\nYou would create a cache table like this :\ntable ordered_products_cache (product_id, quantity)\n\nAnd add triggers ON UPDATE/INSERT/DELETE on table ordered_products to \nupdate ordered_products_cache accordingly.\n\nOf course in this case everytime someone touches ordered_products, an \nupdate is issued to ordered_products_cache.\n\n***************************************************\n\nIn your case I don't think that is the solution, because you do big \nupdates. With triggers this would mean issuing one update of your \nmaterialized view per row in your big update. This could be slow.\n\nIn this case you might want to update the cache table in one request \nrather than doing an awful lot of updates.\n\nSo you have two solutions :\n\n1- Junk it all and rebuild it from scratch (this can be faster than it \nseems)\n2- Put the rows to be added in a temporary table, update the cache table \nconsidering the difference between this temporary table and your big \ntable, then insert the rows in the big table.\n\nThis is the fastest solution but it requires a bit more coding (not THAT \nmuch though).\n\n***************************************************\n\nAs for the structure of your cache table, you want :\n\n\nScreen 1 -\nStat1 Stat2 Stat3\nValue Value Value\nValue Value Value\n\n\n\nScreen 2 -\nStat3 Stat1 Stat5\nValue Value Value\nValue Value Value\n\nYou have several lines, so what is that ? is it grouped by date ? I'll \npresume it is.\n\nSo your screens basically show a subset of :\n\nSELECT date, name, sum(value) FROM table GROUP BY name, date\n\nThis is what you should put in your summary table.\nThen index it on (date,name) and build your screens with :\n\nSELECT * FROM summary WHERE (date BETWEEN .. AND ..) AND name IN (Stat3, \nStat1, Stat5)\n\nThat should be pretty easy ; you get a list of (name,date,value) that you \njust have to format accordingly on your screen.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 21 Jun 2005 18:59:48 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" 
}, { "msg_contents": "On 6/21/05, PFC <[email protected]> wrote:\n...\n> In your case I don't think that is the solution, because you do big\n> updates. With triggers this would mean issuing one update of your\n> materialized view per row in your big update. This could be slow.\n> \n> In this case you might want to update the cache table in one request\n> rather than doing an awful lot of updates.\n> \n> So you have two solutions :\n> \n> 1- Junk it all and rebuild it from scratch (this can be faster than it\n> seems)\n> 2- Put the rows to be added in a temporary table, update the cache table\n> considering the difference between this temporary table and your big\n> table, then insert the rows in the big table.\n> \n> This is the fastest solution but it requires a bit more coding (not THAT\n> much though).\n> \nAmit,\n\nI understand your desire to not need any manual intervention...\n\nI don't know what OS you use, but here are two practical techniques\nyou can use to achieve the above solution suggested by PFC:\n\na: If you are on a Unix like OS such as Linux of Free BSD you have the\nbeautiful cron program that will run commands nightly.\n\nb: If you are on Windows you have to do something else. The simplest\nsolution I've found is called \"pycron\" (easily locatable by google)\nand is a service that emulates Unix cron on windows (bypassing a lot\nof the windows scheduler hassle).\n\nNow, using either of those solutions, let's say at 6:00 am you want to\ndo your batch query.\n\n1. Put the queries you want into a text file EXACTLY as you would type\nthem using psql and save the text file. For example, the file may be\nnamed \"create_mat_view.txt\".\n2. Test them by doing this from a command prompt: psql dbname <\ncreate_mat_view.txt\n3. Create a cron entry to run the command once a day, it might look like this:\n0 6 * * * /usr/bin/psql dbname < /home/admin/create_mat_view.txt\nor maybe like this:\n0 6 * * * \"C:\\Program Files\\PostgreSQL\\8.0\\psql.exe\" dbname <\n\"C:\\create_mat_view.txt\"\n\nI hope this helps,\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 21 Jun 2005 12:57:17 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views execute underlying query everytime ??" } ]
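Pulling the suggestions in this thread together, the file the nightly cron job feeds to psql could look roughly like the sketch below, built on the repository table described above. The summary table name and the particular aggregate (a count per metadata/statname pair) are only illustrations of the pattern, not the poster's actual screen queries.

-- create_mat_view.sql, run nightly via:  psql dbname < create_mat_view.sql
BEGIN;
DROP TABLE repository_summary;      -- recreated immediately below
CREATE TABLE repository_summary AS
    SELECT metadata, statname, count(*) AS n
    FROM repository
    GROUP BY metadata, statname;
CREATE INDEX repository_summary_idx ON repository_summary (metadata, statname);
COMMIT;
ANALYZE repository_summary;

Wrapping the drop and re-create in one transaction means readers never see a half-built summary, at the cost of blocking them until the rebuild commits.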
[ { "msg_contents": "My Dual Core Opteron server came in last week. I tried to do some \nbenchmarks with pgbench to get some numbers on the difference between \n1x1 -> 2x1 -> 2x2 but no matter what I did, I kept getting the same TPS \non all systems. Any hints on what the pgbench parameters I should be using?\n\nIn terms of production use, it definitely can handle more load. \nPreviously, Apache/Perl had to run on a separate server to avoid a ~50% \npenalty. Now, the numbers are +15% performance even with Apache/Perl \nrunning on the same box as PostgreSQL. How much more load of course is \nwhat I'd like to quantify.\n", "msg_date": "Tue, 21 Jun 2005 09:04:48 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Trying to figure out pgbench" } ]
[ { "msg_contents": "I had a similar experience. \n\nregardless of scaling, etc, I got same results. almost like flags\nare not active. \n\ndid\n\npgbench -I template1\nand\npgbench -c 10 -t 50 -v -d 1 \n\nand played around from there....\n\nThis is on IBM pSeries, AIX5.3, PG8.0.2\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of William Yu\nSent: Tuesday, June 21, 2005 12:05 PM\nTo: [email protected]\nSubject: [PERFORM] Trying to figure out pgbench\n\n\nMy Dual Core Opteron server came in last week. I tried to do some \nbenchmarks with pgbench to get some numbers on the difference between \n1x1 -> 2x1 -> 2x2 but no matter what I did, I kept getting the same TPS \non all systems. Any hints on what the pgbench parameters I should be using?\n\nIn terms of production use, it definitely can handle more load. \nPreviously, Apache/Perl had to run on a separate server to avoid a ~50% \npenalty. Now, the numbers are +15% performance even with Apache/Perl \nrunning on the same box as PostgreSQL. How much more load of course is \nwhat I'd like to quantify.\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n", "msg_date": "Tue, 21 Jun 2005 16:11:37 -0000", "msg_from": "\"Mohan, Ross\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying to figure out pgbench" }, { "msg_contents": "I wonder if the -c parameter is truly submitting everything in parallel. \nHaving 2 telnet sessions up -- 1 doing -c 1 and another doing -c 100 -- \nI don't see much different in the display speed messages. Perhaps it's \nan issue with the telnet console display limiting the command speed. I \nthought about piping the output to /dev/null but then the final TPS \nresults are also piped there. I can try piping output to a file on a \nramdisk maybe.\n\n\n\nMohan, Ross wrote:\n> I had a similar experience. \n> \n> regardless of scaling, etc, I got same results. almost like flags\n> are not active. \n> \n> did\n> \n> pgbench -I template1\n> and\n> pgbench -c 10 -t 50 -v -d 1 \n> \n> and played around from there....\n> \n> This is on IBM pSeries, AIX5.3, PG8.0.2\n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of William Yu\n> Sent: Tuesday, June 21, 2005 12:05 PM\n> To: [email protected]\n> Subject: [PERFORM] Trying to figure out pgbench\n> \n> \n> My Dual Core Opteron server came in last week. I tried to do some \n> benchmarks with pgbench to get some numbers on the difference between \n> 1x1 -> 2x1 -> 2x2 but no matter what I did, I kept getting the same TPS \n> on all systems. Any hints on what the pgbench parameters I should be using?\n> \n> In terms of production use, it definitely can handle more load. \n> Previously, Apache/Perl had to run on a separate server to avoid a ~50% \n> penalty. Now, the numbers are +15% performance even with Apache/Perl \n> running on the same box as PostgreSQL. 
How much more load of course is \n> what I'd like to quantify.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n", "msg_date": "Tue, 21 Jun 2005 14:53:52 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to figure out pgbench" } ]
[ { "msg_contents": "Folks,\n\nOK, I've checked in my first code module and the configurator project is \nofficially launched. Come join us at \nwww.pgfoundry.org/projects/configurator\n\nFurther communications will be on the Configurator mailing list only.\n\nfrom the spec:\n\nWhat is the Configurator, and Why do We Need It?\n-------------------------------------------------\n\nThe Configurator is a set of Perl scripts and modules which allow users and\ninstallation programs to write a reasonable postgresql.conf for PostgreSQL\nperformance based on the answers to some relatively simple questions. Its\npurpose is to provide an option between the poor-performing default\nconfiguration, and the in-depth knowledge required for hand-tuning.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Jun 2005 11:24:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Configurator project launched" } ]
[ { "msg_contents": "Hello!\n\nI use FreeBSD 4.11 with PostGreSQL 7.3.8.\n\nI got a huge database with roughly 15 million records. There is just one\ntable, with a time field, a few ints and a few strings.\n\n\ntable test\nfields time (timestamp), source (string), destination (string), p1 (int),\np2 (int)\n\n\nI have run VACUUM ANALYZE ;\n\nI have created indexes on every field, but for some reason my postgre\nserver wants to use a seqscan, even tho i know a indexed scan would be\nmuch faster.\n\n\ncreate index test_time_idx on test (time) ;\ncreate index test_source_idx on test (source) ;\ncreate index test_destination_idx on test (destination) ;\ncreate index test_p1_idx on test (p1) ;\ncreate index test_p2_idx on test (p2) ;\n\n\n\nWhat is really strange, is that when i query a count(*) on one of the int\nfields (p1), which has a very low count, postgre uses seqscan. In another\ncount on the same int field (p1), i know he is giving about 2.2 million\nhits, but then he suddenly uses seqscan, instead of a indexed one. Isn't\nthe whole idea of indexing to increase performance in large queries.. To\nmake sort of a phonebook for the values, to make it faster to look up what\never you need... This just seems opposite..\n\nHere is a EXPLAIN of my query\n\ndatabase=> explain select date_trunc('hour', time),count(*) as total from\ntest where p1=53 and time > now() - interval '24 hours' group by\ndate_trunc order by date_trunc ;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Aggregate (cost=727622.61..733143.23 rows=73608 width=8)\n -> Group (cost=727622.61..731303.02 rows=736083 width=8)\n -> Sort (cost=727622.61..729462.81 rows=736083 width=8)\n Sort Key: date_trunc('hour'::text, \"time\")\n -> Seq Scan on test (cost=0.00..631133.12 rows=736083\nwidth=8)\n Filter: ((p1 = 53) AND (\"time\" > (now() - '1\nday'::interval)))\n(6 rows)\n\n\n\ndatabase=> drop INDEX test_<TABULATOR>\ntest_source_idx test_destination_idx test_p1_idx \ntest_p2_idx test_time_idx\n\n\nAfter all this, i tried to set enable_seqscan to off and\nenable_nestedloops to on. This didnt help much either. The time to run the\nquery is still in minutes. My results are the number of elements for each\nhour, and it gives about 1000-2000 hits per hour. I have read somewhere,\nabout PostGreSQL, that it can easily handle 100-200million records. And\nwith the right tuned system, have a great performance.. I would like to\nlearn how :)\n\nI also found an article on a page\n(http://techdocs.postgresql.org/techdocs/pgsqladventuresep3.php):\n\nTip #11: Don't bother indexing columns with huge numbers of records and a\nsmall range of values, such as BOOLEAN columns.\n\nThis tip, regretfully, is perhaps the only tip where I cannot provide a\ngood, real-world example from my work. So I'll give you a hypothetical\nsituation instead:\n\nImagine that you have a database table with a list of every establishment\nvending ice cream in the US. A simple example might look like:\n\nWhere there were almost 1 million rows, but due to simplistic data entry,\nonly three possible values for type (1-SUPERMARKET, 2-BOUTIQUE, and\n3-OTHER) which are relatively evenly distributed. In this hypothetical\nsituation, you might find (with testing using EXPLAIN) that an index on\ntype is ignored and the parser uses a \"seq scan\" (or table scan) instead. \nThis is because a table scan can actually be faster than an index scan in\nthis situation. 
Thus, any index on type should be dropped.\n\nCertainly, the boolean column (active) requires no indexing as it has only\ntwo possible values and no index will be faster than a table scan.\n\n\nThen I ask, what is useful with indexing, when I can't use it on a VERY\nlarge database? It is on my 15 million record database it takes for ever\nto do seqscans over and over again... This is probably why, as i mentioned\nearlier, the reason (read the quote) why he chooses a full scan and not a\nindexed one...\n\nSo what do I do? :confused:\n\nI'v used SQL for years, but never in such a big scale. Thus, not having to\nlearn how to deal with large number of records. Usually a maximum of 1000\nrecords. Now, with millions, I need to learn a way to make my sucky\nqueries better.\n\nIm trying to learn more about tuning my system, makeing better queries and\nsuch. I'v found some documents on the Internet, but far from the best.\n\nFeedback most appreciated!\n\nRegards,\na learning PostGreSQL user\n\n\n\n\n", "msg_date": "Tue, 21 Jun 2005 20:33:58 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Too slow querying a table of 15 million records" }, { "msg_contents": "[[email protected] - Tue at 08:33:58PM +0200]\n> I use FreeBSD 4.11 with PostGreSQL 7.3.8.\n(...)\n> database=> explain select date_trunc('hour', time),count(*) as total from\n> test where p1=53 and time > now() - interval '24 hours' group by\n> date_trunc order by date_trunc ;\n\nI haven't looked through all your email yet, but this phenomena have been up\nat the list a couple of times. Try replacing \"now() - interval '24 hours'\"\nwith a fixed time stamp, and see if it helps.\n\npg7 will plan the query without knowledge of what \"now() - interval '24\nhours'\" will compute to. This should be fixed in pg8.\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Tue, 28 Jun 2005 14:03:33 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow querying a table of 15 million records" }, { "msg_contents": "Tobias Brox wrote:\n\n>[[email protected] - Tue at 08:33:58PM +0200]\n>\n>\n>>I use FreeBSD 4.11 with PostGreSQL 7.3.8.\n>>\n>>\n>(...)\n>\n>\n>>database=> explain select date_trunc('hour', time),count(*) as total from\n>>test where p1=53 and time > now() - interval '24 hours' group by\n>>date_trunc order by date_trunc ;\n>>\n>>\n>\n>I haven't looked through all your email yet, but this phenomena have been up\n>at the list a couple of times. Try replacing \"now() - interval '24 hours'\"\n>with a fixed time stamp, and see if it helps.\n>\n>pg7 will plan the query without knowledge of what \"now() - interval '24\n>hours'\" will compute to. This should be fixed in pg8.\n>\n>\n>\nThe grandparent was a mailing list double send. Notice the date is 1\nweek ago. It has already been answered (though your answer is still\ncorrect).\n\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 01:35:07 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow querying a table of 15 million records" }, { "msg_contents": "> database=> explain select date_trunc('hour', time),count(*) as total from\n> test where p1=53 and time > now() - interval '24 hours' group by\n> date_trunc order by date_trunc ;\n\nTry going:\n\ntime > '2005-06-28 15:34:00'\n\nie. 
put in the time 24 hours ago as a literal constant.\n\nChris\n\n", "msg_date": "Tue, 28 Jun 2005 16:50:42 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow querying a table of 15 million records" }, { "msg_contents": "\n\n> database=> explain select date_trunc('hour', time),count(*) as total from\n> test where p1=53 and time > now() - interval '24 hours' group by\n> date_trunc order by date_trunc ;\n\n\t1. Use CURRENT_TIMESTAMP (which is considered a constant by the planner) \ninstead of now()\n\t2. Create a multicolumn index on (p1,time) or (time,p1) whichever works \nbetter\n", "msg_date": "Tue, 28 Jun 2005 19:10:05 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow querying a table of 15 million records" }, { "msg_contents": "PFC <[email protected]> writes:\n> \t1. Use CURRENT_TIMESTAMP (which is considered a constant by the planner) \n> instead of now()\n\nOh?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jun 2005 14:14:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow querying a table of 15 million records " } ]
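The two suggestions that close this thread — a two-column index and a cutoff supplied as a literal instead of being computed with now() — combined into one sketch against the test table described above. The index name and the timestamp shown are just examples; the client application would substitute the real cutoff value each time.

-- One index covering both WHERE conditions; try (p1, time) and (time, p1) and keep the better one.
CREATE INDEX test_p1_time_idx ON test (p1, "time");

-- With a literal cutoff, the 7.3 planner can compare it against the column statistics:
SELECT date_trunc('hour', "time") AS hour_bucket, count(*) AS total
FROM test
WHERE p1 = 53
  AND "time" > '2005-06-27 16:00:00+02'   -- "now minus 24 hours", computed by the client
GROUP BY date_trunc('hour', "time")
ORDER BY 1;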
[ { "msg_contents": "Hello!\n\nI use FreeBSD 4.11 with PostGreSQL 7.3.8.\n\nI got a huge database with roughly 19 million records. There is just one\ntable, with a time field, a few ints and a few strings.\n\n\ntable test\nfields time (timestamp), source (string), destination (string), p1 (int),\np2 (int)\n\n\nI have run VACUUM ANALYZE ;\n\nI have created indexes on every field, but for some reason my postgre\nserver wants to use a seqscan, even tho i know a indexed scan would be\nmuch faster.\n\n\ncreate index test_time_idx on test (time) ;\ncreate index test_source_idx on test (source) ;\ncreate index test_destination_idx on test (destination) ;\ncreate index test_p1_idx on test (p1) ;\ncreate index test_p2_idx on test (p2) ;\n\n\n\nWhat is really strange, is that when i query a count(*) on one of the int\nfields (p1), which has a very low count, postgre uses seqscan. In another\ncount on the same int field (p1), i know he is giving about 2.2 million\nhits, but then he suddenly uses seqscan, instead of a indexed one. Isn't\nthe whole idea of indexing to increase performance in large queries.. To\nmake sort of a phonebook for the values, to make it faster to look up what\never you need... This just seems opposite..\n\nHere is a EXPLAIN of my query\n\ndatabase=> explain select date_trunc('hour', time),count(*) as total from\ntest where p1=53 and time > now() - interval '24 hours' group by\ndate_trunc order by date_trunc ;\nQUERY PLAN\n------------------------------------------------------------------------------------------\nAggregate (cost=727622.61..733143.23 rows=73608 width=8)\n-> Group (cost=727622.61..731303.02 rows=736083 width=8)\n-> Sort (cost=727622.61..729462.81 rows=736083 width=8)\nSort Key: date_trunc('hour'::text, \"time\")\n-> Seq Scan on test (cost=0.00..631133.12 rows=736083\nwidth=8)\nFilter: ((p1 = 53) AND (\"time\" > (now() - '1\nday'::interval)))\n(6 rows)\n\n\n\n\ndatabase=> drop INDEX test_<TABULATOR>\ntest_source_idx test_destination_idx test_p1_idx \ntest_p2_idx test_time_idx\n\n\nAfter all this, i tried to set enable_seqscan to off and\nenable_nestedloops to on. This didnt help much either. The time to run the\nquery is still in minutes. My results are the number of elements for each\nhour, and it gives about 1000-2000 hits per hour. I have read somewhere,\nabout PostGreSQL, that it can easily handle 100-200million records. And\nwith the right tuned system, have a great performance.. I would like to\nlearn how :)\n\nI also found an article on a page\n(http://techdocs.postgresql.org/techdocs/pgsqladventuresep3.php):\nTip #11: Don't bother indexing columns with huge numbers of records and a\nsmall range of values, such as BOOLEAN columns.\n\nThis tip, regretfully, is perhaps the only tip where I cannot provide a\ngood, real-world example from my work. So I'll give you a hypothetical\nsituation instead:\n\nImagine that you have a database table with a list of every establishment\nvending ice cream in the US. A simple example might look like:\n\nWhere there were almost 1 million rows, but due to simplistic data entry,\nonly three possible values for type (1-SUPERMARKET, 2-BOUTIQUE, and\n3-OTHER) which are relatively evenly distributed. In this hypothetical\nsituation, you might find (with testing using EXPLAIN) that an index on\ntype is ignored and the parser uses a \"seq scan\" (or table scan) instead. \nThis is because a table scan can actually be faster than an index scan in\nthis situation. 
Thus, any index on type should be dropped.\n\nCertainly, the boolean column (active) requires no indexing as it has only\ntwo possible values and no index will be faster than a table scan.\n\n\nThen I ask, what is useful with indexing, when I can't use it on a VERY\nlarge database? It is on my 15 million record database it takes for ever\nto do seqscans over and over again... This is probably why, as i mentioned\nearlier, the reason (read the quote) why he chooses a full scan and not a\nindexed one...\n\nSo what do I do? :confused:\n\nI'v used SQL for years, but never in such a big scale. Thus, not having to\nlearn how to deal with large number of records. Usually a maximum of 1000\nrecords. Now, with millions, I need to learn a way to make my sucky\nqueries better.\n\nIm trying to learn more about tuning my system, makeing better queries and\nsuch. I'v found some documents on the Internet, but far from the best.\n\nFeedback most appreciated!\n\nRegards,\na learning PostGreSQL user\n\nHello!I use FreeBSD 4.11 with PostGreSQL 7.3.8.I got a huge database with roughly 19 million records. There is just onetable, with a time field, a few ints and a few strings.table testfields time (timestamp), source (string), destination (string), p1 (int),\np2 (int)I have run VACUUM ANALYZE ;I have created indexes on every field, but for some reason my postgreserver wants to use a seqscan, even tho i know a indexed scan would bemuch faster.\ncreate index test_time_idx on test (time) ;create index test_source_idx on test (source) ;create index test_destination_idx on test (destination) ;create index test_p1_idx on test (p1) ;create index test_p2_idx on test (p2) ;\nWhat is really strange, is that when i query a count(*) on one of the intfields (p1), which has a very low count, postgre uses seqscan. In anothercount on the same int field (p1), i know he is giving about \n2.2 millionhits, but then he suddenly uses seqscan, instead of a indexed one. Isn'tthe whole idea of indexing to increase performance in large queries.. Tomake sort of a phonebook for the values, to make it faster to look up what\never you need... This just seems opposite..Here is a EXPLAIN of my querydatabase=> explain select date_trunc('hour', time),count(*) as total fromtest where p1=53 and time > now() - interval '24 hours' group by\ndate_trunc order by date_trunc ;                                        QUERY PLAN------------------------------------------------------------------------------------------Aggregate  (cost=727622.61..733143.23\n rows=73608 width=8)   ->  Group  (cost=727622.61..731303.02 rows=736083 width=8)         ->  Sort  (cost=727622.61..729462.81 rows=736083 width=8)               Sort Key: date_trunc('hour'::text, \"time\")\n               ->  Seq Scan on test  (cost=0.00..631133.12 rows=736083width=8)                     Filter: ((p1 = 53) AND (\"time\" > (now() - '1day'::interval)))(6 rows)\ndatabase=> drop INDEX test_<TABULATOR>test_source_idx         test_destination_idx        test_p1_idx        test_p2_idx       test_time_idxAfter all this, i tried to set enable_seqscan to off and\nenable_nestedloops to on. This didnt help much either. The time to run thequery is still in minutes. My results are the number of elements for eachhour, and it gives about 1000-2000 hits per hour. I have read somewhere,\nabout PostGreSQL, that it can easily handle 100-200million records. Andwith the right tuned system, have a great performance.. 
I would like tolearn how :)I also found an article on a page(\nhttp://techdocs.postgresql.org/techdocs/pgsqladventuresep3.php):Tip #11:  Don't bother indexing columns with huge numbers of records and asmall range of values, such as BOOLEAN columns.This tip, regretfully, is perhaps the only tip where I cannot provide a\ngood, real-world example from my work.  So I'll give you a hypotheticalsituation instead:Imagine that you have a database table with a list of every establishmentvending ice cream in the US.  A simple example might look like:\nWhere there were almost 1 million rows, but due to simplistic data entry,only three possible values for type (1-SUPERMARKET, 2-BOUTIQUE, and3-OTHER) which are relatively evenly distributed.  In this hypothetical\nsituation, you might find (with testing using EXPLAIN) that an index ontype is ignored and the parser uses a \"seq scan\" (or table scan) instead. This is because a table scan can actually be faster than an index scan in\nthis situation.  Thus, any index on type should be dropped.Certainly, the boolean column (active) requires no indexing as it has onlytwo possible values and no index will be faster than a table scan.\nThen I ask, what is useful with indexing, when I can't use it on a VERYlarge database? It is on my 15 million record database it takes for everto do seqscans over and over again... This is probably why, as i mentioned\nearlier, the reason (read the quote) why he chooses a full scan and not aindexed one...So what do I do? :confused:I'v used SQL for years, but never in such a big scale. Thus, not having tolearn how to deal with large number of records. Usually a maximum of 1000\nrecords. Now, with millions, I need to learn a way to make my suckyqueries better.Im trying to learn more about tuning my system, makeing better queries andsuch. I'v found some documents on the Internet, but far from the best.\nFeedback most appreciated!Regards,a learning PostGreSQL user", "msg_date": "Tue, 21 Jun 2005 21:09:48 +0200", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Querying 19million records very slowly" }, { "msg_contents": "Some tips:\n\n- EXPLAIN ANALYZE provides a more useful analysis of a slow query, \nbecause it gives both the estimate and actual times/rows for each step \nin the plan.\n\n- The documentation is right: rows with little variation are pretty \nuseless to index. Indexing is about \"selectivity\", reducing the amount \nof stuff the database has to read off the the disk.\n\n- You only have two things in your WHERE clause, so that is where the \nmost important indexes reside. How many of your rows have p1=53? How \nmany of your rows have happened in the last day? If your answer is \"a \nlot\" then the indexes are not going to help: PostgreSQL will be more \nefficient scanning every tuple than it will be jumping around the index \nstructure for a large number of tuples.\n\n- If neither time nor p1 are particularly selective individually, but \nthey are selective when taken together, try a multi-key index on them both.\n\nPaul\n\nKjell Tore Fossbakk wrote:\n\n> Hello!\n> \n> I use FreeBSD 4.11 with PostGreSQL 7.3.8.\n> \n> I got a huge database with roughly 19 million records. 
There is just one\n> table, with a time field, a few ints and a few strings.\n> \n> \n> table test\n> fields time (timestamp), source (string), destination (string), p1 (int),\n> p2 (int)\n> \n> \n> I have run VACUUM ANALYZE ;\n> \n> I have created indexes on every field, but for some reason my postgre\n> server wants to use a seqscan, even tho i know a indexed scan would be\n> much faster.\n> \n> \n> create index test_time_idx on test (time) ;\n> create index test_source_idx on test (source) ;\n> create index test_destination_idx on test (destination) ;\n> create index test_p1_idx on test (p1) ;\n> create index test_p2_idx on test (p2) ;\n> \n> \n> \n> What is really strange, is that when i query a count(*) on one of the int\n> fields (p1), which has a very low count, postgre uses seqscan. In another\n> count on the same int field (p1), i know he is giving about 2.2 million\n> hits, but then he suddenly uses seqscan, instead of a indexed one. Isn't\n> the whole idea of indexing to increase performance in large queries.. To\n> make sort of a phonebook for the values, to make it faster to look up what\n> ever you need... This just seems opposite..\n> \n> Here is a EXPLAIN of my query\n> \n> database=> explain select date_trunc('hour', time),count(*) as total from\n> test where p1=53 and time > now() - interval '24 hours' group by\n> date_trunc order by date_trunc ;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Aggregate (cost=727622.61..733143.23 rows=73608 width=8)\n> -> Group (cost=727622.61..731303.02 rows=736083 width=8)\n> -> Sort (cost=727622.61..729462.81 rows=736083 width=8)\n> Sort Key: date_trunc('hour'::text, \"time\")\n> -> Seq Scan on test (cost=0.00..631133.12 rows=736083\n> width=8)\n> Filter: ((p1 = 53) AND (\"time\" > (now() - '1\n> day'::interval)))\n> (6 rows)\n> \n> \n> \n> \n> database=> drop INDEX test_<TABULATOR>\n> test_source_idx test_destination_idx test_p1_idx \n> test_p2_idx test_time_idx\n> \n> \n> After all this, i tried to set enable_seqscan to off and\n> enable_nestedloops to on. This didnt help much either. The time to run the\n> query is still in minutes. My results are the number of elements for each\n> hour, and it gives about 1000-2000 hits per hour. I have read somewhere,\n> about PostGreSQL, that it can easily handle 100-200million records. And\n> with the right tuned system, have a great performance.. I would like to\n> learn how :)\n> \n> I also found an article on a page\n> ( http://techdocs.postgresql.org/techdocs/pgsqladventuresep3.php):\n> Tip #11: Don't bother indexing columns with huge numbers of records and a\n> small range of values, such as BOOLEAN columns.\n> \n> This tip, regretfully, is perhaps the only tip where I cannot provide a\n> good, real-world example from my work. So I'll give you a hypothetical\n> situation instead:\n> \n> Imagine that you have a database table with a list of every establishment\n> vending ice cream in the US. A simple example might look like:\n> \n> Where there were almost 1 million rows, but due to simplistic data entry,\n> only three possible values for type (1-SUPERMARKET, 2-BOUTIQUE, and\n> 3-OTHER) which are relatively evenly distributed. In this hypothetical\n> situation, you might find (with testing using EXPLAIN) that an index on\n> type is ignored and the parser uses a \"seq scan\" (or table scan) instead.\n> This is because a table scan can actually be faster than an index scan in\n> this situation. 
Thus, any index on type should be dropped.\n> \n> Certainly, the boolean column (active) requires no indexing as it has only\n> two possible values and no index will be faster than a table scan.\n> \n> \n> Then I ask, what is useful with indexing, when I can't use it on a VERY\n> large database? It is on my 15 million record database it takes for ever\n> to do seqscans over and over again... This is probably why, as i mentioned\n> earlier, the reason (read the quote) why he chooses a full scan and not a\n> indexed one...\n> \n> So what do I do? :confused:\n> \n> I'v used SQL for years, but never in such a big scale. Thus, not having to\n> learn how to deal with large number of records. Usually a maximum of 1000\n> records. Now, with millions, I need to learn a way to make my sucky\n> queries better.\n> \n> Im trying to learn more about tuning my system, makeing better queries and\n> such. I'v found some documents on the Internet, but far from the best.\n> \n> Feedback most appreciated!\n> \n> Regards,\n> a learning PostGreSQL user\n> \n\n", "msg_date": "Tue, 21 Jun 2005 12:59:22 -0700", "msg_from": "Paul Ramsey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "\nuse CURRENT_TIME which is a constant instead of now() which is not \nconsidered constant... (I think)\n", "msg_date": "Tue, 21 Jun 2005 23:33:30 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "database=> set enable_seqscan to on;\nSET\nTime: 0.34 ms\n\n\n\ndatabase=> explain analyze select count(*) from test where p1=53;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=522824.50..522824.50 rows=1 width=0) (actual\ntime=56380.72..56380.72 rows=1 loops=1)\n -> Seq Scan on test (cost=0.00..517383.30 rows=2176479 width=0)\n(actual time=9.61..47677.48 rows=2220746 loops=1)\n Filter: (p1 = 53)\n Total runtime: 56380.79 msec\n(4 rows)\n\nTime: 56381.40 ms\n\n\n\ndatabase=> explain analyze select count(*) from test where p1=53 and\ntime > now() - interval '24 hours' ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=661969.01..661969.01 rows=1 width=0) (actual\ntime=45787.02..45787.02 rows=1 loops=1)\n -> Seq Scan on test (cost=0.00..660155.28 rows=725493 width=0)\n(actual time=37799.32..45613.58 rows=42424 loops=1)\n Filter: ((p1 = 53) AND (\"time\" > (now() - '1 day'::interval)))\n Total runtime: 45787.09 msec\n(4 rows)\n\nTime: 45787.79 ms\n\n\n\ndatabase=> explain analyze select date_trunc('hour', time),count(*) as\ntotal from test where p1=53 and time>now()-interval '24 hours' group\nby date_trunc order by date_trunc;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=755116.97..760558.17 rows=72549 width=8) (actual\ntime=46040.63..46717.61 rows=23 loops=1)\n -> Group (cost=755116.97..758744.44 rows=725493 width=8) (actual\ntime=46022.06..46548.84 rows=42407 loops=1)\n -> Sort (cost=755116.97..756930.70 rows=725493 width=8)\n(actual time=46022.04..46198.94 rows=42407 loops=1)\n Sort Key: date_trunc('hour'::text, \"time\")\n -> Seq Scan on test (cost=0.00..660155.28 rows=725493\nwidth=8) (actual time=37784.91..45690.88 rows=42407 loops=1)\n 
Filter: ((p1 = 53) AND (\"time\" > (now() - '1\nday'::interval)))\n Total runtime: 46718.43 msec\n(7 rows)\n\nTime: 46719.44 ms\n\n\n\ndatabase=> create index test_time_p1_idx on test(time,p1) ;\nCREATE INDEX\nTime: 178926.02 ms\n\ndatabase=> vacuum analyze test ;\nVACUUM\nTime: 73058.33 ms\n\ndatabase=> \\d test\n Table \"public.test\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------\n time | timestamp with time zone |\n source | inet |\n destination | inet |\n p1 | integer |\n p2 | integer |\n\n\n\ndatabase=> \\di\n public | test_time_idx | index | database | test\n public | test_source_idx | index | database | test\n public | test_destination_idx | index | database | test\n public | test_p1_idx | index | database | test\n public | test_p2_idx | index | database | test\n public | test_time_p1_idx | index | database | test\n\n\n\ndatabase=> set enable_seqscan to off ;\nSET\nTime: 0.28 ms\n\n\n\ndatabase=> explain analyze select date_trunc('hour', time),count(*) as\ntotal from test where p1=53 and time>now()-interval '24 hours' group\nby date_trunc order by date_trunc;\n Aggregate (cost=2315252.66..2320767.17 rows=73527 width=8) (actual\ntime=2081.15..2720.44 rows=23 loops=1)\n -> Group (cost=2315252.66..2318929.00 rows=735268 width=8)\n(actual time=2079.76..2564.22 rows=41366 loops=1)\n -> Sort (cost=2315252.66..2317090.83 rows=735268 width=8)\n(actual time=2079.74..2243.32 rows=41366 loops=1)\n Sort Key: date_trunc('hour'::text, \"time\")\n -> Index Scan using test_time_p1_idx on test \n(cost=0.00..2218878.46 rows=735268 width=8) (actual\ntime=29.50..1774.52 rows=41366 loops=1)\n Index Cond: ((\"time\" > (now() - '1\nday'::interval)) AND (p1 = 53))\n Total runtime: 2735.42 msec\n\nTime: 2736.48 ms\n\n\n\ndatabase=> explain analyze select date_trunc('hour', time),count(*) as\ntotal from test where p1=80 and time>now()-interval '24 hours' group\nby date_trunc order by date_trunc;\n Aggregate (cost=1071732.15..1074305.59 rows=34313 width=8) (actual\ntime=6353.93..7321.99 rows=22 loops=1)\n -> Group (cost=1071732.15..1073447.77 rows=343125 width=8)\n(actual time=6323.76..7078.10 rows=64267 loops=1)\n -> Sort (cost=1071732.15..1072589.96 rows=343125 width=8)\n(actual time=6323.75..6579.42 rows=64267 loops=1)\n Sort Key: date_trunc('hour'::text, \"time\")\n -> Index Scan using test_time_p1_idx on test \n(cost=0.00..1035479.58 rows=343125 width=8) (actual time=0.20..5858.67\nrows=64267 loops=1)\n Index Cond: ((\"time\" > (now() - '1\nday'::interval)) AND (p1 = 80))\n Total runtime: 7322.82 msec\n\nTime: 7323.90 ms\n\n\n\ndatabase=> explain analyze select date_trunc('hour', time),count(*) as\ntotal from test where p1=139 and time>now()-interval '24 hours' group\nby date_trunc order by date_trunc;\n Aggregate (cost=701562.34..703250.12 rows=22504 width=8) (actual\ntime=2448.41..3033.80 rows=22 loops=1)\n -> Group (cost=701562.34..702687.53 rows=225037 width=8) (actual\ntime=2417.39..2884.25 rows=36637 loops=1)\n -> Sort (cost=701562.34..702124.94 rows=225037 width=8)\n(actual time=2417.38..2574.19 rows=36637 loops=1)\n Sort Key: date_trunc('hour'::text, \"time\")\n -> Index Scan using test_time_p1_idx on test \n(cost=0.00..679115.34 rows=225037 width=8) (actual time=8.47..2156.18\nrows=36637 loops=1)\n Index Cond: ((\"time\" > (now() - '1\nday'::interval)) AND (p1 = 139))\n Total runtime: 3034.57 msec\n\nTime: 3035.70 ms\n\n\n\nNow, this query gives me all the hours in a day, with the count of all\np1=53 for each hour. 
Pg uses 46.7 seconds to run with seqscan, while\n2.7 seconds indexing on (time,p1). I think I turned \"set\nenable_seqscan to on;\" again, and then the planner used seqscan, and\nnot index.\n- Why does Pg not see the benefits of using index?\n- and how can i tune the optimisation fields in postgresql.conf to help him?\n\nSo now my PG uses a reasonable amout of time on these queries (with\nenable_seqscan turned off)\n\nThe next place which seems to slow my queries, is probably my\nconnection to PHP. I got a bash script running in cron on my server\n(freebsd 4.11), which runs php on a php file. To force PG to not use\nseqscans, I have modifies the postgresql.conf:\n\n..\nenable_seqscan = false\nenable_indexscan = true\n..\neffective_cache_size = 10000\nrandom_page_cost = 2\n..\n\nI save the file, type 'pg_crl reload' then enter 'psql database'.\n\ndatabase=> show enable_seqscan ;\n enable_seqscan\n----------------\n on\n(1 row)\n\n\nargus=> show effective_cache_size ;\n effective_cache_size\n----------------------\n 1000\n(1 row)\n\nI have used the manual pages on postgresql, postmaster, and so on, but\nI cant find anywhere to specify which config file Pg is to use. I'm\nnot entirely sure if he uses the one im editing\n(/usr/local/etc/postgresql.conf).\n\nAny hints, tips or help is most appreciated!\n\nKjell Tore.\n\n\n\n\n\nOn 6/21/05, PFC <[email protected]> wrote:\n> \n> use CURRENT_TIME which is a constant instead of now() which is not \n> considered constant... (I think)\n>\n", "msg_date": "Wed, 22 Jun 2005 09:45:22 +0200", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "[Kjell Tore Fossbakk - Wed at 09:45:22AM +0200]\n> database=> explain analyze select count(*) from test where p1=53 and\n> time > now() - interval '24 hours' ;\n\nSorry to say that I have not followed the entire thread neither read the\nentire email I'm replying to, but I have a quick hint on this one (ref my\nearlier thread about timestamp indices) - the postgresql planner will\ngenerally behave smarter when using a fixed timestamp (typically generated\nby the app server) than logics based on now().\n\nOne of my colleagues also claimed that he found the usage of\nlocaltimestamp faster than now().\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Wed, 22 Jun 2005 16:03:57 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "Appreciate your time, Mr Brox.\n\nI'll test the use of current_timestamp, rather than now(). I am not\nsure if Pg can do a match between a fixed timestamp and a datetime?\n\ntime > current_timestamp - interval '24 hours',\nwhen time is yyyy-mm-dd hh-mm-ss+02, like 2005-06-22 16:00:00+02.\n\nIf Pg cant do it, and current_time is faster, i could possibly convert\nthe time field in my database to timestamp, and insert all rows as\ntimestamp rather than a timedate. 
But that is some script to work over\n19 mill rows, so I need to know if that will give me any more speed..\n\nKjell Tore.\n\nOn 6/22/05, Tobias Brox <[email protected]> wrote:\n> [Kjell Tore Fossbakk - Wed at 09:45:22AM +0200]\n> > database=> explain analyze select count(*) from test where p1=53 and\n> > time > now() - interval '24 hours' ;\n> \n> Sorry to say that I have not followed the entire thread neither read the\n> entire email I'm replying to, but I have a quick hint on this one (ref my\n> earlier thread about timestamp indices) - the postgresql planner will\n> generally behave smarter when using a fixed timestamp (typically generated\n> by the app server) than logics based on now().\n> \n> One of my colleagues also claimed that he found the usage of\n> localtimestamp faster than now().\n> \n> -- \n> Tobias Brox, +86-13521622905\n> Nordicbet, IT dept\n>\n", "msg_date": "Wed, 22 Jun 2005 10:18:30 +0200", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "[Kjell Tore Fossbakk - Wed at 10:18:30AM +0200]\n> I'll test the use of current_timestamp, rather than now(). I am not\n> sure if Pg can do a match between a fixed timestamp and a datetime?\n\nI have almost all my experience with timestamps wo timezones, but ... isn't\nthat almost the same as the timedate type?\n\n> time > current_timestamp - interval '24 hours',\n> when time is yyyy-mm-dd hh-mm-ss+02, like 2005-06-22 16:00:00+02.\n\nTry to type in '2005-06-21 16:36:22+08' directly in the query, and see if it\nmakes changes. Or probably '2005-06-21 10:36:22+02' in your case ;-)\n\n(btw, does postgresql really handles timezones? '+02' is quite different\nfrom 'CET', which will be obvious sometime in the late autoumn...)\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Wed, 22 Jun 2005 16:39:21 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "\nOn Jun 22, 2005, at 5:39 PM, Tobias Brox wrote:\n\n> (btw, does postgresql really handles timezones? '+02' is quite \n> different\n> from 'CET', which will be obvious sometime in the late autoumn...)\n\nYes, it does. It doesn't (currently) record the time zone name, but \nrather only the offset from UTC. If a time zone name (rather than UTC \noffset) is given, it is converted to the UTC offset *at that \ntimestamptz* when it is stored. For time zones that take into account \nDST, their UTC offset changes during the year, and PostgreSQL records \nthe equivalent UTC offset for the appropriate timestamptz values.\n\nThere has been discussion in the past on storing the time zone name \nwith the timestamptz as well, though no one has implemented this yet.\n\nMichael Glaesemann\ngrzm myrealbox com\n\n", "msg_date": "Wed, 22 Jun 2005 17:55:35 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "> Try to type in '2005-06-21 16:36:22+08' directly in the query, and see if it\n> makes changes. Or probably '2005-06-21 10:36:22+02' in your case ;-)\n\nWhich one does Pg read fastes? Does he convert datetime in the table,\nthen my where clause and check, for each row? How does he compare a\ndatetime with a datetime? 
Timestamp are easy, large number bigger than\nanother large number..\n\ntime (datetime) > '2005-06-21 10:36:22+02'\n\nor \n\ntime (timestamp) > 'some timestamp pointing to yesterday'\n\nHmm.. I cant find any doc that describes this very good.\n\n\nOn 6/22/05, Michael Glaesemann <[email protected]> wrote:\n> \n> On Jun 22, 2005, at 5:39 PM, Tobias Brox wrote:\n> \n> > (btw, does postgresql really handles timezones? '+02' is quite \n> > different\n> > from 'CET', which will be obvious sometime in the late autoumn...)\n> \n> Yes, it does. It doesn't (currently) record the time zone name, but \n> rather only the offset from UTC. If a time zone name (rather than UTC \n> offset) is given, it is converted to the UTC offset *at that \n> timestamptz* when it is stored. For time zones that take into account \n> DST, their UTC offset changes during the year, and PostgreSQL records \n> the equivalent UTC offset for the appropriate timestamptz values.\n> \n> There has been discussion in the past on storing the time zone name \n> with the timestamptz as well, though no one has implemented this yet.\n> \n> Michael Glaesemann\n> grzm myrealbox com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n", "msg_date": "Wed, 22 Jun 2005 11:10:42 +0200", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "[Kjell Tore Fossbakk - Wed at 11:10:42AM +0200]\n> Which one does Pg read fastes? Does he convert datetime in the table,\n> then my where clause and check, for each row? How does he compare a\n> datetime with a datetime? Timestamp are easy, large number bigger than\n> another large number..\n> \n> time (datetime) > '2005-06-21 10:36:22+02'\n> \n> or \n> \n> time (timestamp) > 'some timestamp pointing to yesterday'\n\nIf I have understood it correctly, the planner will recognize the timestamp\nand compare it with the statistics in the first example but not in the\nsecond, and thus it will be more likely to use index scan on the first one\nand seqscan on the second.\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Wed, 22 Jun 2005 17:36:44 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "Tobias Brox <[email protected]> writes:\n>> time (datetime) > '2005-06-21 10:36:22+02'\n>> or \n>> time (timestamp) > 'some timestamp pointing to yesterday'\n\n> If I have understood it correctly, the planner will recognize the timestamp\n> and compare it with the statistics in the first example but not in the\n> second, and thus it will be more likely to use index scan on the first one\n> and seqscan on the second.\n\nThat statement is true for releases before 8.0. Kjell has not at any\npoint told us what PG version he is running, unless I missed it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jun 2005 09:50:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly " }, { "msg_contents": "OK, so the planner is in fact making a mistake (I think). Try turning \ndown your random_page_cost a little. It defaults at 4.0, see if 2.0 \nworks \"right\". (Careful, move these things around too much for one \nquery, you will wreck others.) 
4.0 is a little large for almost all \nmodern hardware, so see if moving it down a little makes things \nsomewhat smarter.\n\nP\n\nOn Wednesday, June 22, 2005, at 12:45 AM, Kjell Tore Fossbakk wrote:\n\n> database=> set enable_seqscan to on;\n> SET\n> Time: 0.34 ms\n>\n>\n>\n> database=> explain analyze select count(*) from test where p1=53;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> ------------------------------------------------\n> Aggregate (cost=522824.50..522824.50 rows=1 width=0) (actual\n> time=56380.72..56380.72 rows=1 loops=1)\n> -> Seq Scan on test (cost=0.00..517383.30 rows=2176479 width=0)\n> (actual time=9.61..47677.48 rows=2220746 loops=1)\n> Filter: (p1 = 53)\n> Total runtime: 56380.79 msec\n> (4 rows)\n>\n> Time: 56381.40 ms\n>\n>\n>\n> database=> explain analyze select count(*) from test where p1=53 and\n> time > now() - interval '24 hours' ;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -------------------------------------------------\n> Aggregate (cost=661969.01..661969.01 rows=1 width=0) (actual\n> time=45787.02..45787.02 rows=1 loops=1)\n> -> Seq Scan on test (cost=0.00..660155.28 rows=725493 width=0)\n> (actual time=37799.32..45613.58 rows=42424 loops=1)\n> Filter: ((p1 = 53) AND (\"time\" > (now() - '1 day'::interval)))\n> Total runtime: 45787.09 msec\n> (4 rows)\n>\n> Time: 45787.79 ms\n>\n>\n>\n> database=> explain analyze select date_trunc('hour', time),count(*) as\n> total from test where p1=53 and time>now()-interval '24 hours' group\n> by date_trunc order by date_trunc;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------- \n> -------------------------------------------------------------\n> Aggregate (cost=755116.97..760558.17 rows=72549 width=8) (actual\n> time=46040.63..46717.61 rows=23 loops=1)\n> -> Group (cost=755116.97..758744.44 rows=725493 width=8) (actual\n> time=46022.06..46548.84 rows=42407 loops=1)\n> -> Sort (cost=755116.97..756930.70 rows=725493 width=8)\n> (actual time=46022.04..46198.94 rows=42407 loops=1)\n> Sort Key: date_trunc('hour'::text, \"time\")\n> -> Seq Scan on test (cost=0.00..660155.28 rows=725493\n> width=8) (actual time=37784.91..45690.88 rows=42407 loops=1)\n> Filter: ((p1 = 53) AND (\"time\" > (now() - '1\n> day'::interval)))\n> Total runtime: 46718.43 msec\n> (7 rows)\n>\n> Time: 46719.44 ms\n>\n>\n>\n> database=> create index test_time_p1_idx on test(time,p1) ;\n> CREATE INDEX\n> Time: 178926.02 ms\n>\n> database=> vacuum analyze test ;\n> VACUUM\n> Time: 73058.33 ms\n>\n> database=> \\d test\n> Table \"public.test\"\n> Column | Type | Modifiers\n> -------------+--------------------------+-----------\n> time | timestamp with time zone |\n> source | inet |\n> destination | inet |\n> p1 | integer |\n> p2 | integer |\n>\n>\n>\n> database=> \\di\n> public | test_time_idx | index | database | test\n> public | test_source_idx | index | database | test\n> public | test_destination_idx | index | database | test\n> public | test_p1_idx | index | database | test\n> public | test_p2_idx | index | database | test\n> public | test_time_p1_idx | index | database | test\n>\n>\n>\n> database=> set enable_seqscan to off ;\n> SET\n> Time: 0.28 ms\n>\n>\n>\n> database=> explain analyze select date_trunc('hour', time),count(*) as\n> total from test where p1=53 and time>now()-interval '24 hours' group\n> by date_trunc order by date_trunc;\n> Aggregate (cost=2315252.66..2320767.17 rows=73527 width=8) (actual\n> 
time=2081.15..2720.44 rows=23 loops=1)\n> -> Group (cost=2315252.66..2318929.00 rows=735268 width=8)\n> (actual time=2079.76..2564.22 rows=41366 loops=1)\n> -> Sort (cost=2315252.66..2317090.83 rows=735268 width=8)\n> (actual time=2079.74..2243.32 rows=41366 loops=1)\n> Sort Key: date_trunc('hour'::text, \"time\")\n> -> Index Scan using test_time_p1_idx on test\n> (cost=0.00..2218878.46 rows=735268 width=8) (actual\n> time=29.50..1774.52 rows=41366 loops=1)\n> Index Cond: ((\"time\" > (now() - '1\n> day'::interval)) AND (p1 = 53))\n> Total runtime: 2735.42 msec\n>\n> Time: 2736.48 ms\n>\n>\n>\n> database=> explain analyze select date_trunc('hour', time),count(*) as\n> total from test where p1=80 and time>now()-interval '24 hours' group\n> by date_trunc order by date_trunc;\n> Aggregate (cost=1071732.15..1074305.59 rows=34313 width=8) (actual\n> time=6353.93..7321.99 rows=22 loops=1)\n> -> Group (cost=1071732.15..1073447.77 rows=343125 width=8)\n> (actual time=6323.76..7078.10 rows=64267 loops=1)\n> -> Sort (cost=1071732.15..1072589.96 rows=343125 width=8)\n> (actual time=6323.75..6579.42 rows=64267 loops=1)\n> Sort Key: date_trunc('hour'::text, \"time\")\n> -> Index Scan using test_time_p1_idx on test\n> (cost=0.00..1035479.58 rows=343125 width=8) (actual time=0.20..5858.67\n> rows=64267 loops=1)\n> Index Cond: ((\"time\" > (now() - '1\n> day'::interval)) AND (p1 = 80))\n> Total runtime: 7322.82 msec\n>\n> Time: 7323.90 ms\n>\n>\n>\n> database=> explain analyze select date_trunc('hour', time),count(*) as\n> total from test where p1=139 and time>now()-interval '24 hours' group\n> by date_trunc order by date_trunc;\n> Aggregate (cost=701562.34..703250.12 rows=22504 width=8) (actual\n> time=2448.41..3033.80 rows=22 loops=1)\n> -> Group (cost=701562.34..702687.53 rows=225037 width=8) (actual\n> time=2417.39..2884.25 rows=36637 loops=1)\n> -> Sort (cost=701562.34..702124.94 rows=225037 width=8)\n> (actual time=2417.38..2574.19 rows=36637 loops=1)\n> Sort Key: date_trunc('hour'::text, \"time\")\n> -> Index Scan using test_time_p1_idx on test\n> (cost=0.00..679115.34 rows=225037 width=8) (actual time=8.47..2156.18\n> rows=36637 loops=1)\n> Index Cond: ((\"time\" > (now() - '1\n> day'::interval)) AND (p1 = 139))\n> Total runtime: 3034.57 msec\n>\n> Time: 3035.70 ms\n>\n>\n>\n> Now, this query gives me all the hours in a day, with the count of all\n> p1=53 for each hour. Pg uses 46.7 seconds to run with seqscan, while\n> 2.7 seconds indexing on (time,p1). I think I turned \"set\n> enable_seqscan to on;\" again, and then the planner used seqscan, and\n> not index.\n> - Why does Pg not see the benefits of using index?\n> - and how can i tune the optimisation fields in postgresql.conf to \n> help him?\n>\n> So now my PG uses a reasonable amout of time on these queries (with\n> enable_seqscan turned off)\n>\n> The next place which seems to slow my queries, is probably my\n> connection to PHP. I got a bash script running in cron on my server\n> (freebsd 4.11), which runs php on a php file. 
To force PG to not use\n> seqscans, I have modifies the postgresql.conf:\n>\n> ..\n> enable_seqscan = false\n> enable_indexscan = true\n> ..\n> effective_cache_size = 10000\n> random_page_cost = 2\n> ..\n>\n> I save the file, type 'pg_crl reload' then enter 'psql database'.\n>\n> database=> show enable_seqscan ;\n> enable_seqscan\n> ----------------\n> on\n> (1 row)\n>\n>\n> argus=> show effective_cache_size ;\n> effective_cache_size\n> ----------------------\n> 1000\n> (1 row)\n>\n> I have used the manual pages on postgresql, postmaster, and so on, but\n> I cant find anywhere to specify which config file Pg is to use. I'm\n> not entirely sure if he uses the one im editing\n> (/usr/local/etc/postgresql.conf).\n>\n> Any hints, tips or help is most appreciated!\n>\n> Kjell Tore.\n>\n>\n>\n>\n>\n> On 6/21/05, PFC <[email protected]> wrote:\n>>\n>> use CURRENT_TIME which is a constant instead of now() which is not\n>> considered constant... (I think)\n>>\n>>\n Paul Ramsey\n Refractions Research\n Email: [email protected]\n Phone: (250) 885-0632\n\n", "msg_date": "Wed, 22 Jun 2005 07:20:13 -0700", "msg_from": "Paul Ramsey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "I cant get the config file to load into my postgres. that's the\nproblem. I want to set it to 10k, but it is only still at 1000... I\nsave the file and restart the service..\n\nyes, i ment 'pg_ctl reload', sry about that one.\n\nkjell tore\n\nOn 6/22/05, Bricklen Anderson <[email protected]> wrote:\n> >> enable_seqscan = false\n> >> enable_indexscan = true\n> >> ..\n> >> effective_cache_size = 10000\n> >> random_page_cost = 2\n> >> ..\n> >>\n> >> I save the file, type 'pg_crl reload' then enter 'psql database'.\n> >>\n> >> argus=> show effective_cache_size ;\n> >> effective_cache_size\n> >> ----------------------\n> >> 1000\n> >> (1 row)\n> \n> I assume that 'pg_crl' is a typo? That should read 'pg_ctl reload'\n> Also, you said that your effective_cache_size = 10000, yet when you SHOW\n> it,\n> it's only 1000. A cut 'n paste error, or maybe your erroneous \"pg_crl\"\n> didn't\n> trigger the reload?\n> \n> -- \n> _______________________________\n> \n> This e-mail may be privileged and/or confidential, and the sender does\n> not waive any related rights and obligations. Any distribution, use or\n> copying of this e-mail or the information it contains by other than an\n> intended recipient is unauthorized. If you received this e-mail in\n> error, please advise me (by return e-mail or otherwise) immediately.\n> _______________________________\n>\n", "msg_date": "Wed, 22 Jun 2005 07:41:54 -0700", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "On 2005-06-22 10:55, Michael Glaesemann wrote:\n> There has been discussion in the past on storing the time zone name \n> with the timestamptz as well, though no one has implemented this yet.\n\nThe reason for this may be that time zone names (abbreviations) are not\nunique. 
For example, \"ECT\" can mean \"Ecuador Time\" (offset -05) or\n\"Eastern Caribbean Time\" (offset -04).\n\nhttp://www.worldtimezone.com/wtz-names/timezonenames.html\n\ncheers,\nstefan\n", "msg_date": "Wed, 22 Jun 2005 17:23:41 +0200", "msg_from": "Stefan Weiss <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "[Kjell Tore Fossbakk - Wed at 07:41:54AM -0700]\n> I cant get the config file to load into my postgres. that's the\n> problem. I want to set it to 10k, but it is only still at 1000... I\n> save the file and restart the service..\n> \n> yes, i ment 'pg_ctl reload', sry about that one.\n\nClassical problem, a bit depending on the distro you are using.\n\nThe \"master\" file usually resides in /etc/postgresql while the actual file\nused usually resides in /var/lib/postgres/data ... or something. Some\ndistros copies over the file (such that one always should edit the file in\n/etc) others don't (thus you either have to do that your self, or edit the\nfile in the database data directory.\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Thu, 23 Jun 2005 00:28:44 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "-I also changed now() to current_time, which increased performance quite \ngood. I need to make further tests, before I'll post any results.\n-I tried to change now()- interval 'x hours' to like 2005-06-22 16:00:00+02. \nThis also increased the performance.\n changing to time > '2005-06-22 16:00:00+02' (or what ever date is 24 hours \nback) or time > current_time - interval '24 hours' will be used.\n I'm running FreeBSD 4.11, and im editing the file in \n/usr/local/etc/postgresql.conf, but it doesnt help. When i start up \"psql \ndatabase\", none of the options are changed (with a restart of the \npostmaster). I cant find a '--configuration=path/file' option for the \npostmaster either...\n Kjell Tore\n On 6/22/05, Tobias Brox <[email protected]> wrote: \n> \n> [Kjell Tore Fossbakk - Wed at 07:41:54AM -0700]\n> > I cant get the config file to load into my postgres. that's the\n> > problem. I want to set it to 10k, but it is only still at 1000... I\n> > save the file and restart the service..\n> >\n> > yes, i ment 'pg_ctl reload', sry about that one.\n> \n> Classical problem, a bit depending on the distro you are using.\n> \n> The \"master\" file usually resides in /etc/postgresql while the actual file\n> used usually resides in /var/lib/postgres/data ... or something. Some\n> distros copies over the file (such that one always should edit the file in\n> /etc) others don't (thus you either have to do that your self, or edit the\n> file in the database data directory.\n> \n> --\n> Tobias Brox, +86-13521622905\n> Nordicbet, IT dept\n>\n\n-I also changed now() to current_time, which increased performance quite good. I need to make further tests, before I'll post any results.\n-I tried to change now()- interval 'x hours' to like 2005-06-22 16:00:00+02. This also increased the performance.\n \nchanging to time > '2005-06-22 16:00:00+02' (or what ever date is 24 hours back) or time > current_time - interval '24 hours' will be used.\n \nI'm running FreeBSD 4.11, and im editing the file in /usr/local/etc/postgresql.conf, but it doesnt help. When i start up \"psql database\", none of the options are changed (with a restart of the postmaster). 
I cant find a '--configuration=path/file' option for the postmaster either...\n\n \nKjell Tore\n \nOn 6/22/05, Tobias Brox <[email protected]> wrote:\n[Kjell Tore Fossbakk - Wed at 07:41:54AM -0700]> I cant get the config file to load into my postgres. that's the\n> problem. I want to set it to 10k, but it is only still at 1000... I> save the file and restart the service..>> yes, i ment 'pg_ctl reload', sry about that one.Classical problem, a bit depending on the distro you are using.\nThe \"master\" file usually resides in /etc/postgresql while the actual fileused usually resides in /var/lib/postgres/data ... or something.  Somedistros copies over the file (such that one always should edit the file in\n/etc) others don't (thus you either have to do that your self, or edit thefile in the database data directory.--Tobias Brox, +86-13521622905Nordicbet, IT dept", "msg_date": "Thu, 23 Jun 2005 15:31:13 +0200", "msg_from": "Kjell Tore Fossbakk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Querying 19million records very slowly" }, { "msg_contents": "> I'm running FreeBSD 4.11, and im editing the file in \n> /usr/local/etc/postgresql.conf, but it doesnt help.\n\nOn my system the 'live' config file resides in\n/var/lib/postgresql/data/postgresql.conf - maybe you have them in\n/usr/local/var/lib ...\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Thu, 23 Jun 2005 23:31:12 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Querying 19million records very slowly" } ]
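A minimal sketch pulling together the advice from the thread above, using the table and query from the original postings. The literal cutoff timestamp is a hypothetical stand-in for "24 hours ago" computed by the calling script, as suggested, instead of an expression on now():

create index test_time_p1_idx on test (time, p1);

explain analyze
select date_trunc('hour', time), count(*) as total
  from test
 where p1 = 53
   and time > '2005-06-21 10:00:00+02'
 group by date_trunc
 order by date_trunc;

-- quick check that the postgresql.conf being edited is the one the server actually loaded
show effective_cache_size;
show random_page_cost;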
[ { "msg_contents": "I'm hoping someone can offer some advice here.\n I have a large perl script that employs prepared statements to do all its \nqueries. I'm looking at using stored procedures to improve performance times \nfor the script. Would making a stored procedure to replace each prepared \nstatement be worthwhile? If not, when could I use stored procedures to \nimprove performance?\n Thanks in advance.\n\nI'm hoping someone can offer some advice here.\n \nI have a large perl script that employs prepared statements to do all its queries. I'm looking at using stored procedures to improve performance times for the script. Would making a stored procedure to replace each prepared statement be worthwhile? If not, when could I use stored procedures to improve performance?\n\n \nThanks in advance.", "msg_date": "Tue, 21 Jun 2005 15:46:03 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Prepared statements vs. Stored Procedures" }, { "msg_contents": "[Oliver Crosby - Tue at 03:46:03PM -0400]\n> I'm hoping someone can offer some advice here.\n> I have a large perl script that employs prepared statements to do all its \n> queries. I'm looking at using stored procedures to improve performance times \n> for the script. Would making a stored procedure to replace each prepared \n> statement be worthwhile? If not, when could I use stored procedures to \n> improve performance?\n> Thanks in advance.\n\nMy gut feeling says that if you are only doing read-operations there are\nnone or almost none benefits with stored procedures.\n\nOne argument we used for not looking much into stored procedures, was\nthat we expect the database to become the bottleneck if we get too much\nactivity. At the application side, we can always expand by adding more\nboxes, but the database, beeing the hub of the system, cannot easily be\nexpanded (we can tweak and tune and upgrade the whole box, and\neventually at some point we believe we will need to put old data at a\nseparate database, and also make a replica for heavy report queries)\n\nIf you have loads of data going from the database to the application, a\nlittle bit of light processing done on the data, and then data going\nback to the database server, then I guess stored procedures would be\nbetter.\n", "msg_date": "Tue, 21 Jun 2005 22:40:57 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statements vs. Stored Procedures" }, { "msg_contents": "\n> I'm hoping someone can offer some advice here.\n> I have a large perl script that employs prepared statements to do all its\n> queries. I'm looking at using stored procedures to improve performance\n> times\n> for the script. Would making a stored procedure to replace each prepared\n> statement be worthwhile? If not, when could I use stored procedures to\n> improve performance?\n> Thanks in advance.\n>\n\nYou'll definitely gain some performance if you manage to group several\noperations that are executed in a sequence - into a stored procedure. The\nprinciple here is that you'd be reducing the number of round-trips to the\ndatabase server.\nAs an example assume you start a transaction, lock several rows in\ndifferent tables for update (thereof), update fields and then commit. If\nthis is done in a sequencial manner - whether this is perl or java/jdbc or\nlibpq - you'll require several round-trips to the server and also fetch\nseveral bits and pieces to the application. 
If this can be rewritten as a\nstored procedure that receives the data/parameters it needs in order to\ncomplete its work and does the whole thing in one go you'll definitely see\nan improvement as ther will be a single call to the database and you'll\nmove (much) less data between the server and the application.\nOn the other hand if you're mostly fetching data I doubt you'll be able to\ngain anything from changing to stored procedures.\nI believe a good rule of thumb is this: change data, several related\noperations, very simple processing involved -> stored procedure. Read data\nas in a reporting scenario -> prepared statements. Obviously if you're\nreading data in several steps and then aggregate it in the application\nthen perhaps you need to make better use of SQL :)\n\nI hope this helps,\nRegards,\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243\n", "msg_date": "Wed, 22 Jun 2005 10:24:15 +0300 (EEST)", "msg_from": "\"Radu-Adrian Popescu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statements vs. Stored Procedures" } ]
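A hedged illustration of the round-trip argument above. The function, tables and columns here are hypothetical, and plpgsql is assumed to be installed in the database; the point is only that the lock-update-log sequence becomes a single statement from the Perl script's point of view:

create or replace function apply_payment(acct integer, amount numeric) returns void as '
declare
    bal numeric;
begin
    select balance into bal from accounts where id = acct for update;
    update accounts set balance = bal - amount where id = acct;
    insert into account_log (account_id, delta) values (acct, -amount);
    return;
end;
' language plpgsql;

-- one round trip instead of three:
-- select apply_payment(42, 19.95);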
[ { "msg_contents": "I just tried this on 8.0.3. A query which runs very fast through an\nindex on a 25 million row table blocked when I dropped the index within\na database transaction. As soon as I rolled back the database\ntransactiton, the query completed, the index appears fine, and the query\nruns fast, as usual.\n \nSo, it looks like this is right except for the assertion that the index\nis still available for other queries.\n \n-Kevin\n \n \n>>> Tobias Brox <[email protected]> 06/21/05 2:46 PM >>>\n[John A Meinel - Tue at 10:14:24AM -0500]\n> I believe if you drop the indexes inside a transaction, they will\nstill\n> be there for other queries, and if you rollback instead of commit, you\n> won't lose anything.\n\nHas anyone tested this?\n\n(sorry, I only have the production database to play with at the moment,\nand I don't think I should play with it ;-)\n\n-- \nTobias Brox, Beijing\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 21 Jun 2005 16:11:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit clause not using index" } ]
[ { "msg_contents": "Greg,\n\n> Not sure how far along you are, but I've been writing some really nifty\n> extensions to DBD::Pg that allow easy querying of all the current\n> run-time settings. Could be very useful to this project, seems to me. If\n> you're interested in possibly using it, let me know, I can bump it up on\n> my todo list.\n\nUm, can't we just get that from pg_settings?\n\nAnyway, I'll be deriving settings from the .conf file, since most of the \ntime the Configurator will be run on a new installation.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Jun 2005 16:14:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configurator project launched" }, { "msg_contents": "Josh Berkus wrote:\n> Greg,\n> \n> \n>>Not sure how far along you are, but I've been writing some really nifty\n>>extensions to DBD::Pg that allow easy querying of all the current\n>>run-time settings. Could be very useful to this project, seems to me. If\n>>you're interested in possibly using it, let me know, I can bump it up on\n>>my todo list.\n> \n> \n> Um, can't we just get that from pg_settings?\n> \n> Anyway, I'll be deriving settings from the .conf file, since most of the \n> time the Configurator will be run on a new installation.\n\nAren't most of the settings all kept in the SHOW variables anyway?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Tue, 21 Jun 2005 16:30:47 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configurator project launched" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n>> Um, can't we just get that from pg_settings?\n>>\n>> Anyway, I'll be deriving settings from the .conf file, since most of the\n>> time the Configurator will be run on a new installation.\n>\n> Aren't most of the settings all kept in the SHOW variables anyway?\n\nAs I said, it may not be applicable to this project, but thought I would\noffer. One gotcha the module makes transparent is that in older versions of\nPG, the variables are returned in a different way (via RAISE). My module\nwill allow you to get the configuration for any connected database, for\nany configuration file, and the defaults for any known version, and do\nquick comparisons between them all. So you could use it to see what has\nchanged between a particular server and its conf file, or the differences\nbetween two conf files, or the differences between two databases, or even show\nwhat has changed in the default conf file from 7.4.7 and 8.0.1. 
It will also\nallow you to rewrite the conf files in a standard way.\n\nI'm hoping to roll this into 1.44 or 1.45 or DBD::Pg.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200506212046\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCuLWDvJuQZxSWSsgRAjUVAJ42oeveZBuutFo1G3Cs/3dRZWjKggCfS1Yf\nTv5RWiG9s8Ucv/t/2HZ4/R8=\n=1eap\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 22 Jun 2005 00:50:44 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configurator project launched" }, { "msg_contents": "Greg Sabino Mullane wrote:\n> \n> \n>>>>Um, can't we just get that from pg_settings?\n>>>>\n>>>>Anyway, I'll be deriving settings from the .conf file, since most of the\n>>>>time the Configurator will be run on a new installation.\n>>>\n>>>Aren't most of the settings all kept in the SHOW variables anyway?\n> \n> \n> As I said, it may not be applicable to this project, but thought I would\n> offer. One gotcha the module makes transparent is that in older versions of\n> PG, the variables are returned in a different way (via RAISE). My module\n> will allow you to get the configuration for any connected database, for\n> any configuration file, and the defaults for any known version, and do\n> quick comparisons between them all. So you could use it to see what has\n> changed between a particular server and its conf file, or the differences\n> between two conf files, or the differences between two databases, or even show\n> what has changed in the default conf file from 7.4.7 and 8.0.1. It will also\n> allow you to rewrite the conf files in a standard way.\n\nSounds a little similar to what's in pgAdmin CVS right now. The \nconfiguration editor can retrieve the config file and display configured \nand active setting concurrently, together with explanations taken from \npg_settings (when not run against a pgsql server but a file current \nsettings are missing, comments are taken from a pg_settings csv dump). \nThere's the infrastructure to give hints about all settings, with very \nfew currently implemented.\n\nI wonder if this could be combined with the configurator somehow. \nCurrently, integration won't work with Perl, so maybe C for the core and \nPerl for the interactive part would be better.\n\nRegards,\nAndreas\n", "msg_date": "Wed, 22 Jun 2005 08:30:51 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configurator project launched" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Sounds a little similar to what's in pgAdmin CVS right now. The\n> configuration editor can retrieve the config file and display configured\n> and active setting concurrently, together with explanations taken from\n> pg_settings (when not run against a pgsql server but a file current\n> settings are missing, comments are taken from a pg_settings csv dump).\n> There's the infrastructure to give hints about all settings, with very\n> few currently implemented.\n>\n> I wonder if this could be combined with the configurator somehow.\n> Currently, integration won't work with Perl, so maybe C for the core and\n> Perl for the interactive part would be better.\n\nProbably so. Seems there is a bit of convergent evolution going on. When I\nget a moment of free time, I'll check out the pgAdmin code. Can someone\nshoot me a URL to the files in question? 
(assuming a web cvs interface).\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200506242107\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFCvK6AvJuQZxSWSsgRApFcAKDVQ5OdVgVc2PmY/p719teJ3BqNjQCgrgyx\n+w+w8GCGXUFO+5dxi5RPwKo=\n=eG7M\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Sat, 25 Jun 2005 01:09:29 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configurator project launched" }, { "msg_contents": "Greg Sabino Mullane wrote:\n\n>>I wonder if this could be combined with the configurator somehow.\n>>Currently, integration won't work with Perl, so maybe C for the core and\n>>Perl for the interactive part would be better.\n>> \n>>\n>\n>Probably so. Seems there is a bit of convergent evolution going on. When I\n>get a moment of free time, I'll check out the pgAdmin code. Can someone\n>shoot me a URL to the files in question? (assuming a web cvs interface).\n>\n> \n>\nhttp://svn.pgadmin.org/cgi-bin/viewcvs.cgi/trunk/pgadmin3/src/frm/frmMainConfig.cpp?rev=4317&view=markup\n\nThis is an editor only, an expert mode (e.g. recommendations for \ndifferent sizes/loads) would be nice.\n\nRegards,\nAndreas\n\n", "msg_date": "Sat, 25 Jun 2005 09:46:18 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configurator project launched" } ]
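For reference, two hedged ways of pulling the current settings straight from a connected database, which is what the comparisons discussed above rely on (the source column of pg_settings is assumed to be available, i.e. 8.0 or later):

show all;  -- every current run-time setting with a short description

select name, setting, source
  from pg_settings
 where source = 'configuration file'
 order by name;  -- only the values the active postgresql.conf actually set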
[ { "msg_contents": "Hi there,\n\nI'm doing an update of ~30,000 rows and she takes about 15mins on\npretty good hardware, even just after a vacuum analyze.\nI was hoping some kind soul could offer some performance advice. Do I\njust have too many indexes? Or am I missing some trick with the nulls?\n\n\nMY QUERY\n========\nupdate bob.product_price set thru_date = '2005-06-22 22:08:49.957'\nwhere thru_date is null;\n\n\nMY TABLE\n=========\n Table \"bob.product_price\"\n Column | Type | Modifiers \n-----------------------------+--------------------------+-----------\n product_id | character varying(20) | not null\n product_price_type_id | character varying(20) | not null\n currency_uom_id | character varying(20) | not null\n product_store_id | character varying(20) | not null\n from_date | timestamp with time zone | not null\n thru_date | timestamp with time zone | \n price | numeric(18,2) | \n created_date | timestamp with time zone | \n created_by_user_login | character varying(255) | \n last_modified_date | timestamp with time zone | \n last_modified_by_user_login | character varying(255) | \n last_updated_stamp | timestamp with time zone | \n last_updated_tx_stamp | timestamp with time zone | \n created_stamp | timestamp with time zone | \n created_tx_stamp | timestamp with time zone | \n\nIndexes:\n---------\npk_product_price primary key btree\n (product_id, product_price_type_id, currency_uom_id,\nproduct_store_id, from_date),\nprdct_prc_txcrts btree (created_tx_stamp),\nprdct_prc_txstmp btree (last_updated_tx_stamp),\nprod_price_cbul btree (created_by_user_login),\nprod_price_cuom btree (currency_uom_id),\nprod_price_lmbul btree (last_modified_by_user_login),\nprod_price_prod btree (product_id),\nprod_price_pst btree (product_store_id),\nprod_price_type btree (product_price_type_id)\n\nForeign Key constraints: \n-------------------------\nprod_price_prod FOREIGN KEY (product_id) REFERENCES bob.product(product_id) \n ON UPDATE NO ACTION ON DELETE NO ACTION,\nprod_price_type FOREIGN KEY (product_price_type_id) REFERENCES\nbob.product_price_type(product_price_type_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION,\nprod_price_cuom FOREIGN KEY (currency_uom_id) REFERENCES bob.uom(uom_id) \n ON UPDATE NO ACTION ON DELETE NO ACTION,\nprod_price_pst FOREIGN KEY (product_store_id) REFERENCES\nbob.product_store(product_store_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION,\nprod_price_cbul FOREIGN KEY (created_by_user_login) REFERENCES\nbob.user_login(user_login_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION,\nprod_price_lmbul FOREIGN KEY (last_modified_by_user_login) REFERENCES\nbob.user_login(user_login_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION\n", "msg_date": "Wed, 22 Jun 2005 18:12:34 +1200", "msg_from": "Colin Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "slow simple update?" 
}, { "msg_contents": "You should provide a bit more details on what happens if you want people to\nhelp you.\n Tipically you will be asked an explain analyze of your query.\n\nAs a first tip if your table contains much more than 30.000 rows you could\ntry to set up a partial index with \nthru_date is null condition.\n\n\nregards\n--\nPhilippe\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Colin Taylor\nSent: mercredi 22 juin 2005 08:13\nTo: [email protected]\nSubject: [PERFORM] slow simple update?\n\nHi there,\n\nI'm doing an update of ~30,000 rows and she takes about 15mins on pretty\ngood hardware, even just after a vacuum analyze.\nI was hoping some kind soul could offer some performance advice. Do I just\nhave too many indexes? Or am I missing some trick with the nulls?\n\n\nMY QUERY\n========\nupdate bob.product_price set thru_date = '2005-06-22 22:08:49.957'\nwhere thru_date is null;\n\n\nMY TABLE\n=========\n Table \"bob.product_price\"\n Column | Type | Modifiers \n-----------------------------+--------------------------+-----------\n product_id | character varying(20) | not null\n product_price_type_id | character varying(20) | not null\n currency_uom_id | character varying(20) | not null\n product_store_id | character varying(20) | not null\n from_date | timestamp with time zone | not null\n thru_date | timestamp with time zone | \n price | numeric(18,2) | \n created_date | timestamp with time zone | \n created_by_user_login | character varying(255) | \n last_modified_date | timestamp with time zone | \n last_modified_by_user_login | character varying(255) | \n last_updated_stamp | timestamp with time zone | \n last_updated_tx_stamp | timestamp with time zone | \n created_stamp | timestamp with time zone | \n created_tx_stamp | timestamp with time zone | \n\nIndexes:\n---------\npk_product_price primary key btree\n (product_id, product_price_type_id, currency_uom_id, product_store_id,\nfrom_date), prdct_prc_txcrts btree (created_tx_stamp), prdct_prc_txstmp\nbtree (last_updated_tx_stamp), prod_price_cbul btree\n(created_by_user_login), prod_price_cuom btree (currency_uom_id),\nprod_price_lmbul btree (last_modified_by_user_login), prod_price_prod btree\n(product_id), prod_price_pst btree (product_store_id), prod_price_type btree\n(product_price_type_id)\n\nForeign Key constraints: \n-------------------------\nprod_price_prod FOREIGN KEY (product_id) REFERENCES bob.product(product_id)\nON UPDATE NO ACTION ON DELETE NO ACTION, prod_price_type FOREIGN KEY\n(product_price_type_id) REFERENCES\nbob.product_price_type(product_price_type_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION, prod_price_cuom FOREIGN KEY\n(currency_uom_id) REFERENCES bob.uom(uom_id) ON UPDATE NO ACTION ON DELETE\nNO ACTION, prod_price_pst FOREIGN KEY (product_store_id) REFERENCES\nbob.product_store(product_store_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION, prod_price_cbul FOREIGN KEY\n(created_by_user_login) REFERENCES\nbob.user_login(user_login_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION, prod_price_lmbul FOREIGN KEY\n(last_modified_by_user_login) REFERENCES\nbob.user_login(user_login_id)\n ON UPDATE NO ACTION ON DELETE NO ACTION\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 30 Jun 2005 14:38:36 +0200", "msg_from": "\"philippe ventrillon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow simple update?" } ]
[ { "msg_contents": "Hello. I believe in earlier versions, a query of the\nform \nselect attrib from ttt where attrib like 'foo%' would\nbe able to take advantage of an index. I have seen\nthis in the past. Currently I am using v8.0.3. From\nwhat I can see is that the execultion plan seems to\nuse a seq scan and to totally ignore the index. Is\nthis the case?\n\n-Aditya\n\n", "msg_date": "Wed, 22 Jun 2005 02:03:29 -0700 (PDT)", "msg_from": "Aditya Damle <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE search with ending % not optimized in v8" }, { "msg_contents": "On Wed, Jun 22, 2005 at 02:03:29AM -0700, Aditya Damle wrote:\n>\n> Hello. I believe in earlier versions, a query of the\n> form \n> select attrib from ttt where attrib like 'foo%' would\n> be able to take advantage of an index. I have seen\n> this in the past. Currently I am using v8.0.3. From\n> what I can see is that the execultion plan seems to\n> use a seq scan and to totally ignore the index. Is\n> this the case?\n\n8.0.3 can certainly use indexes for LIKE queries, but the planner\nwill choose a sequential scan if it thinks that would be faster.\nHave you vacuumed and analyzed your tables? Could you post the\nEXPLAIN ANALYZE output of a query, once with enable_seqscan turned\non and once with it turned off?\n\nSee also \"Operator Classes\" in the \"Indexes\" chapter of the\ndocumentation:\n\nhttp://www.postgresql.org/docs/8.0/static/indexes-opclass.html\n\nWhat locale are you using?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 27 Jun 2005 23:16:26 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE search with ending % not optimized in v8" } ]
[ { "msg_contents": "Hello!\n \nWe're using PostgreSQL 8.0.1 as general backend for all of our websites,\nincluding our online forums (aka bulletin boards or whatever you wish to\ncall that). As for full text search capabilities, we've chosen to\nimplement this via tsearch2. However, the tables themselves are quite\nlarge, and as there's lots of weird user input in them (just no way of\nlimiting our users to \"proper\" orthography), so are the indices; we have\nalready split up the main posting-table in two, one containing the more\nrecent messages (<6 months) and one for everything else.\n\nSearch capabilities have been limited to accessing only one of those,\neither recent or archive. Still, the tsearch2-GiST-index for a table is\naround 325MB in size; the \"recent messages\" table itself without any\nindices weighs in at about 1.8GB containing over one million rows, the\narchive-table is a little over 3GB and contains about 1.3 million rows.\nA full text search in the table with the recent postings can take up to\nfive minutes.\n\nThis wouldn't be much of a problem, as we're providing other, quicker\nsearch options (like searching for an author or a full text search just\non the topics); the problem with the full text search lies in the\nlocking mechanisms: As long as there's a search going on, all the\nsubsequent INSERTs or UPDATEs on that table fail due to timeout. This\nmeans that currently, whenever we allow full text searching, there may\nbe a timeframe of more than one hour, during which users cannot write\nany new postings in our forum or edit (i.e. update) anything. This is\nhardly acceptable...\n\nThis is what I did to actually diagnose that simple tsearch2-related\nSELECTs where causing the write-locks:\n\nFirst I started a full text search query which I knew would run over\nfour minutes. Then I waited for other users to try and post some\nmessages; soon enough a 'ps ax|grep wait' showed several \"INSERT/UPDATE\nwaiting\"-backends. So I took a look at the locks:\n\nselect s.current_query as statement,\n l.mode as lock_mode,\n l.granted as lock_granted,\n c.relname as locked_relation,\n c.relnamespace as locked_relnamespace,\n c.reltype as locked_reltype\nfrom pg_stat_activity s,\n pg_locks l,\n pg_class c\nwhere\n l.pid = s.procpid\nand\n l.relation = c.oid\norder by age(s.query_start) desc;\n\nI found four locks for the search query at the very beginning of the\nresultset - all of them of the AccessShareLock persuasion and granted\nalright: one on the message-table, one on the thread-table, one on the\ntsearch2-index and another one on the primary key index of the\nthread-table.\n\nThe hanging inserts/updates were waiting for an AccessExclusiveLock on\nthe tsearch2-index - all the other locks of these queries were marked as\ngranted.\n\nAs far as I understand from some of the previous messages on the mailing\nlist regarding concurrency issues with GiST-type indices, any SELECT\nthat's using a tsearch2-index would completely lock write-access to that\nindex for the runtime of the query - is that correct so far?\n\nNow I'd like to find out about possible solutions or workarounds for\nthis issue. Surely some of you must have encountered quite similar\nsituations, so what did you do about it? I already pondered the idea of\na separate insert/update-queue-table which would then be processed by a\ncron-job, thus separating the information-entry from the actual insert\ninto the table that's blocked due to the lock on the index. 
Another\npossibility (which I find a little bit more compelling) would involve\nreplicating the message-table via Slony-I to another database which\ncould then be used as only target for any search-queries which require\nuse of the GiST-index. Would this provide the needed \"asynchronicity\" to\navoid this race condition between the AccessShareLock from the\nsearch-SELECT and the AccessExclusiveLock from the write access queries?\n\nI'd be very glad to know your opinions on this matter.\n\nKind regards\n\n Markus\n", "msg_date": "Wed, 22 Jun 2005 11:14:56 +0200", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Forums & tsearch2 - best practices reg. concurrency" }, { "msg_contents": "Markus,\n\nwait for 8.1 which should solve all of these issues. We're working\non GiST concurrency & recovery right now. See http://www.pgsql.ru/db/mw/msg.html?mid=2073083\nfor details.\n\nOleg\nOn Wed, 22 Jun 2005, Markus Wollny wrote:\n\n> Hello!\n>\n> We're using PostgreSQL 8.0.1 as general backend for all of our websites,\n> including our online forums (aka bulletin boards or whatever you wish to\n> call that). As for full text search capabilities, we've chosen to\n> implement this via tsearch2. However, the tables themselves are quite\n> large, and as there's lots of weird user input in them (just no way of\n> limiting our users to \"proper\" orthography), so are the indices; we have\n> already split up the main posting-table in two, one containing the more\n> recent messages (<6 months) and one for everything else.\n>\n> Search capabilities have been limited to accessing only one of those,\n> either recent or archive. Still, the tsearch2-GiST-index for a table is\n> around 325MB in size; the \"recent messages\" table itself without any\n> indices weighs in at about 1.8GB containing over one million rows, the\n> archive-table is a little over 3GB and contains about 1.3 million rows.\n> A full text search in the table with the recent postings can take up to\n> five minutes.\n>\n> This wouldn't be much of a problem, as we're providing other, quicker\n> search options (like searching for an author or a full text search just\n> on the topics); the problem with the full text search lies in the\n> locking mechanisms: As long as there's a search going on, all the\n> subsequent INSERTs or UPDATEs on that table fail due to timeout. This\n> means that currently, whenever we allow full text searching, there may\n> be a timeframe of more than one hour, during which users cannot write\n> any new postings in our forum or edit (i.e. update) anything. This is\n> hardly acceptable...\n>\n> This is what I did to actually diagnose that simple tsearch2-related\n> SELECTs where causing the write-locks:\n>\n> First I started a full text search query which I knew would run over\n> four minutes. Then I waited for other users to try and post some\n> messages; soon enough a 'ps ax|grep wait' showed several \"INSERT/UPDATE\n> waiting\"-backends. 
So I took a look at the locks:\n>\n> select s.current_query as statement,\n> l.mode as lock_mode,\n> l.granted as lock_granted,\n> c.relname as locked_relation,\n> c.relnamespace as locked_relnamespace,\n> c.reltype as locked_reltype\n> from pg_stat_activity s,\n> pg_locks l,\n> pg_class c\n> where\n> l.pid = s.procpid\n> and\n> l.relation = c.oid\n> order by age(s.query_start) desc;\n>\n> I found four locks for the search query at the very beginning of the\n> resultset - all of them of the AccessShareLock persuasion and granted\n> alright: one on the message-table, one on the thread-table, one on the\n> tsearch2-index and another one on the primary key index of the\n> thread-table.\n>\n> The hanging inserts/updates were waiting for an AccessExclusiveLock on\n> the tsearch2-index - all the other locks of these queries were marked as\n> granted.\n>\n> As far as I understand from some of the previous messages on the mailing\n> list regarding concurrency issues with GiST-type indices, any SELECT\n> that's using a tsearch2-index would completely lock write-access to that\n> index for the runtime of the query - is that correct so far?\n>\n> Now I'd like to find out about possible solutions or workarounds for\n> this issue. Surely some of you must have encountered quite similar\n> situations, so what did you do about it? I already pondered the idea of\n> a separate insert/update-queue-table which would then be processed by a\n> cron-job, thus separating the information-entry from the actual insert\n> into the table that's blocked due to the lock on the index. Another\n> possibility (which I find a little bit more compelling) would involve\n> replicating the message-table via Slony-I to another database which\n> could then be used as only target for any search-queries which require\n> use of the GiST-index. Would this provide the needed \"asynchronicity\" to\n> avoid this race condition between the AccessShareLock from the\n> search-SELECT and the AccessExclusiveLock from the write access queries?\n>\n> I'd be very glad to know your opinions on this matter.\n>\n> Kind regards\n>\n> Markus\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 22 Jun 2005 14:53:20 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forums & tsearch2 - best practices reg. concurrency" } ]
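A rough sketch of the queue-table workaround floated in the first message, with hypothetical names. The LIKE clause copies only the column definitions, so the staging table carries no GiST index and user-facing writes return immediately; the periodic copy into the searched table still has to wait behind any long-running search, but users no longer do:

create table message_queue (like message);

-- cron job, run every few minutes:
begin;
lock table message_queue in exclusive mode;  -- keep new queue rows out during the copy
insert into message select * from message_queue;
delete from message_queue;
commit;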
[ { "msg_contents": "\n Hi Everyone, \n\n I've put together a short article and posted it online regarding\n performance tuning PostgreSQL in general. I believe it helps to bring\n together the info in a easy to digest manner. I would appreciate any\n feedback, comments, and especially any technical corrections. \n\n The article can be found here: \n\n http://www.revsys.com/writings/postgresql-performance.html\n\n Thanks! \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Wed, 22 Jun 2005 09:52:27 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Tuning Article" }, { "msg_contents": "Frank,\n\n> I've put together a short article and posted it online regarding\n> performance tuning PostgreSQL in general. I believe it helps to bring\n> together the info in a easy to digest manner. I would appreciate any\n> feedback, comments, and especially any technical corrections.\n\nLooks nice. You should mark the link to the perf tips at Varlena.com as \n\"PostgreSQL 7.4\" and augment it with the current version here:\nwww.powerpostgresql.com/PerfList\nas well as the Annotated .Conf File:\nwww.powerpostgresql.com/Docs\n\nFor my part, I've generally seen that SATA disks still suck for read-write \napplications. I generally rate 1 UltraSCSI = 2 SATA disks for anything but \na 99% read application.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Jun 2005 10:16:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "On Wed, 22 Jun 2005 10:16:03 -0700\nJosh Berkus <[email protected]> wrote:\n\n> Frank,\n> \n> > I've put together a short article and posted it online regarding\n> > performance tuning PostgreSQL in general. I believe it helps to\n> > bring together the info in a easy to digest manner. I would\n> > appreciate any feedback, comments, and especially any technical\n> > corrections.\n> \n> Looks nice. You should mark the link to the perf tips at Varlena.com\n> as \"PostgreSQL 7.4\" and augment it with the current version here:\n> www.powerpostgresql.com/PerfList\n> as well as the Annotated .Conf File:\n> www.powerpostgresql.com/Docs\n\n Thanks! These changes have been incorporated. \n \n> For my part, I've generally seen that SATA disks still suck for\n> read-write applications. I generally rate 1 UltraSCSI = 2 SATA\n> disks for anything but a 99% read application.\n\n I'll work this bit of wisdom in later tonight. Thanks again for the\n feedback. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Wed, 22 Jun 2005 12:42:32 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "On Wed, 2005-06-22 at 09:52 -0500, Frank Wiles wrote:\n> I've put together a short article and posted it online regarding\n> performance tuning PostgreSQL in general. \n\nNice work! Some minor issues I saw:\n\n* section \"Understanding the process\", para 5:\n\n\"Now that PostgreSQL has a plan of what it believes to be the best way\nto retrieve the hardware it is time to actually get it.\"\n\nDo you mean \"retrieve the data\" instead of \"retrieve the hardware\"?\n\n\n* Perhaps some examples under \"Disk Configuration\"? 
\n\n\n* section \"Database Design and Layout\", after new table layout:\n\n\"Take for example the employee table above. Your probably only display\nactive employees throughout the majority of the application...\"\n\nDo you mean \"You're probably only displaying\"?\n\n\nHTH,\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Wed, 22 Jun 2005 11:05:05 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "[Frank Wiles - Wed at 09:52:27AM -0500]\n> I've put together a short article and posted it online regarding\n> performance tuning PostgreSQL in general. I believe it helps to bring\n> together the info in a easy to digest manner. I would appreciate any\n> feedback, comments, and especially any technical corrections. \n\nI did not read through the whole article, but I already have some comments;\n\nwork_mem was formerly sort_mem. As many of us still use pg7, you should\nprobably have a note about it.\n\nThere are already quite some short articles at the web about this issue, and\nthat was actually my starting point when I was assigned the task of tweaking\nthe database performance. I think diversity is a good thing, some of the\nshort articles was relatively outdated, others were not very well written.\nAnd also - I still never had the chance to do proper benchmarking of the\nimpact of my changes in the configuration file, I just chose to trust some\nof the advices when I saw almost the same advice repeated in several\narticles.\n\nI think we need some comprehensive chapter about this in the manual, with\nplenty of pointers - or eventually some separate well-organized pages\ntelling about all known issues. It seems to me that many of the standard\ntips here are repeating themselves over and over again.\n\n-- \nTobias Brox, +86-13521622905\nNordicbet, IT dept\n", "msg_date": "Thu, 23 Jun 2005 09:06:04 +0800", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "\n\n>>> I've put together a short article and posted it online regarding\n>>> performance tuning PostgreSQL in general. I believe it helps to\n>>> bring together the info in a easy to digest manner. I would\n>>> appreciate any feedback, comments, and especially any technical\n>>> corrections.\n>>\n>>Looks nice. You should mark the link to the perf tips at Varlena.com\n>>as \"PostgreSQL 7.4\" and augment it with the current version here:\n>>www.powerpostgresql.com/PerfList\n>>as well as the Annotated .Conf File:\n>>www.powerpostgresql.com/Docs\n> \n> \n> Thanks! These changes have been incorporated. \n> \n> \n>>For my part, I've generally seen that SATA disks still suck for\n>>read-write applications. I generally rate 1 UltraSCSI = 2 SATA\n>>disks for anything but a 99% read application.\n> \n> \n> I'll work this bit of wisdom in later tonight. Thanks again for the\n> feedback. 
\n> \n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n\nFrank,\n\nA couple of things I wish I had been told when I started asking how to \nconfigure a new machine.\n\nUse RAID 10 (striping across mirrored disks)\n or RAID 0+1 (mirror a striped array) for your data.\nUse RAID 1 (mirror) for your OS\nUse RAID 1 (mirror) for the WAL.\n\nDon't put anything else on the array holding the WAL.\n\nThere have been problems with Xeon processors.\n\n-- \nKind Regards,\nKeith\n", "msg_date": "Wed, 22 Jun 2005 22:31:29 -0400", "msg_from": "Keith Worthington <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "> \n> There have been problems with Xeon processors.\n> \n\nCan you elaborate on that please ?\n\nThanks,\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243", "msg_date": "Thu, 23 Jun 2005 12:22:17 +0300", "msg_from": "Radu-Adrian Popescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "Radu-Adrian Popescu wrote:\n>>\n>> There have been problems with Xeon processors.\n>>\n> \n> Can you elaborate on that please ?\n> \n> Thanks,\n\nNot really as I do not understand the issue.\n\nHere is one post from the archives.\nhttp://archives.postgresql.org/pgsql-performance/2005-05/msg00441.php\n\nIf you search the archives for xeon sooner or later you will bump into \nsomething relevant.\n\n-- \nKind Regards,\nKeith\n", "msg_date": "Thu, 23 Jun 2005 08:16:22 -0400", "msg_from": "Keith Worthington <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "My understanding is that it isn't particularly XEON processors that \nis the problem\n\nAny dual processor will exhibit the problem, XEON's with \nhyperthreading exacerbate the problem though\n\nand the good news is that it has been fixed in 8.1\n\nDave\nOn 23-Jun-05, at 8:16 AM, Keith Worthington wrote:\n\n> Radu-Adrian Popescu wrote:\n>\n>>>\n>>> There have been problems with Xeon processors.\n>>>\n>>>\n>> Can you elaborate on that please ?\n>> Thanks,\n>>\n>\n> Not really as I do not understand the issue.\n>\n> Here is one post from the archives.\n> http://archives.postgresql.org/pgsql-performance/2005-05/msg00441.php\n>\n> If you search the archives for xeon sooner or later you will bump \n> into something relevant.\n>\n> -- \n> Kind Regards,\n> Keith\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n>\n\n\n\nDave Cramer\[email protected]\nwww.postgresintl.com\nICQ #14675561\njabber [email protected]\nph (519 939 0336 )\n\n", "msg_date": "Thu, 23 Jun 2005 09:36:04 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "Dave Cramer wrote:\n> My understanding is that it isn't particularly XEON processors that is \n> the problem\n> \n> Any dual processor will exhibit the problem, XEON's with hyperthreading \n> exacerbate the problem though\n> \n> and the good news is that it has been fixed in 8.1\n> \n\nWhere's that ? The only information I have is a message from Tom Lane saying the \nbuffer manager (or something like that) locking has been redone for 8.0. 
Any \npointers ?\n\n> Dave\n\nThanks,\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243", "msg_date": "Thu, 23 Jun 2005 16:46:31 +0300", "msg_from": "Radu-Adrian Popescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "On Wed, 22 Jun 2005 22:31:29 -0400\nKeith Worthington <[email protected]> wrote:\n\n> Frank,\n> \n> A couple of things I wish I had been told when I started asking how to\n> \n> configure a new machine.\n> \n> Use RAID 10 (striping across mirrored disks)\n> or RAID 0+1 (mirror a striped array) for your data.\n> Use RAID 1 (mirror) for your OS\n> Use RAID 1 (mirror) for the WAL.\n> \n> Don't put anything else on the array holding the WAL.\n> \n> There have been problems with Xeon processors.\n\n I believe all of these issues are covered in the article, but\n obviously not clearly enough. I'll work on rewording that section.\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 23 Jun 2005 08:48:54 -0500", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "On Wed, Jun 22, 2005 at 10:31:29PM -0400, Keith Worthington wrote:\n>Use RAID 10 (striping across mirrored disks)\n> or RAID 0+1 (mirror a striped array) for your data.\n\nyikes! never tell an unsuspecting person to use mirred stripes--that\nconfiguration has lower reliability and performance than striped mirrors\nwith no redeeming qualities.\n\nMike Stone\n", "msg_date": "Thu, 23 Jun 2005 10:44:52 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "AFAIK, the problem was the buffer manager\n\nDave\nOn 23-Jun-05, at 9:46 AM, Radu-Adrian Popescu wrote:\n\n> Dave Cramer wrote:\n>\n>> My understanding is that it isn't particularly XEON processors \n>> that is the problem\n>> Any dual processor will exhibit the problem, XEON's with \n>> hyperthreading exacerbate the problem though\n>> and the good news is that it has been fixed in 8.1\n>>\n>\n> Where's that ? The only information I have is a message from Tom \n> Lane saying the buffer manager (or something like that) locking has \n> been redone for 8.0. Any pointers ?\n>\n>\n>> Dave\n>>\n>\n> Thanks,\n> -- \n> Radu-Adrian Popescu\n> CSA, DBA, Developer\n> Aldrapay MD\n> Aldratech Ltd.\n> +40213212243\n>\n\n", "msg_date": "Thu, 23 Jun 2005 12:16:57 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> AFAIK, the problem was the buffer manager\n\nThe buffer manager was the place that seemed to be hit hardest by Xeon's\nproblems with spinlock contention. I think we've partially fixed that\nissue in 8.1, but as we continue to improve the system's performance,\nit's likely to surface as a bottleneck again in other places.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jun 2005 13:15:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning Article " } ]
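One practical wrinkle behind Tobias's note that work_mem was formerly sort_mem: the parameter name depends on the server release, so a quick check against pg_settings avoids guessing. A small sketch, assuming an interactive psql session:

    -- 7.4 exposes sort_mem, 8.0 and later expose work_mem;
    -- whichever row comes back is the name to set in postgresql.conf.
    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('sort_mem', 'work_mem');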
[ { "msg_contents": "\nHi,\n\nConsider the where-clauses:\n\nWHERE lower(col) LIKE 'abc';\nWHERE lower(col) LIKE 'abc%';\n\nthese will both use a b-tree functional index in lower(col) if one exists. \nThe clause\n\nWHERE lower(col) LIKE '%abc';\n\ncan't use the index as you would expect, because of the wildcard at the\nfront (as mentioned in the manual). Thus, it has to do a seqscan, on what\nin my case is a very large table. But still that's not too bad, because I\nexpect an overwhelming amount of the simple cases, and only very few that\nstart with a percentage sign. Now, what's problematic is if i replace the\nliteral with a parameter, like this:\n\nWHERE lower(col) LIKE ?\n\nIt seems that the parameterized query gets compiled once, and because the\nparameter is not yet known, one cannot be sure it doesn't start with a\npercentage sign. Using the parameterized version causes ALL cases to use\na seqscan.\n\nOf course, I could modify the application and send different SQL depending\non which case we're in or just constructing a query with a literal each\ntime, but is there a way to add a hint to the SQL that would cause the\nquery to be re-planned if it's a case that could use the index? Or can I \nconvince the (Perl) driver to do so?\n\n\n\nkurt.\n", "msg_date": "Wed, 22 Jun 2005 20:50:13 +0200 (CEST)", "msg_from": "Kurt De Grave <[email protected]>", "msg_from_op": true, "msg_subject": "parameterized LIKE does not use index" } ]
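A minimal way to reproduce the behaviour described in the thread above, assuming a C-locale database, a table t with a functional index on lower(col), and a release new enough for PREPARE/EXPLAIN EXECUTE (7.4 or later) -- all names here are hypothetical:

    CREATE INDEX t_lower_col_idx ON t (lower(col));

    -- Literal prefix: the planner can turn this into an index range scan.
    EXPLAIN SELECT * FROM t WHERE lower(col) LIKE 'abc%';

    -- Parameter: the plan is built before the value is known, so the
    -- generic plan must allow for a leading wildcard and falls back
    -- to a sequential scan.
    PREPARE q(text) AS SELECT * FROM t WHERE lower(col) LIKE $1;
    EXPLAIN EXECUTE q('abc%');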
[ { "msg_contents": "\nHi,\n\nConsider the where-clauses:\n\nWHERE lower(col) LIKE 'abc';\nWHERE lower(col) LIKE 'abc%';\n\nthese will both use a b-tree functional index in lower(col) if one exists.\nThe clause\n\nWHERE lower(col) LIKE '%abc';\n\ncan't use the index as you would expect, because of the wildcard at the\nfront. Thus, it has to do a seqscan, on what\nin my case is a very large table. But still that's not too bad, because I\nexpect an overwhelming amount of the simple cases, and only very few that\nstart with a percentage sign. Now, what's problematic is if I replace the\nliteral with a parameter, like this:\n\nWHERE lower(col) LIKE ?\n\nIt seems that the parameterized query gets compiled once, and because the\nparameter is not yet known, one cannot be sure it doesn't start with a\npercentage sign. Using the parameterized version causes ALL cases to use\na seqscan.\n\nOf course, I could modify the application and send different SQL depending\non which case we're in or just constructing a query with a literal each\ntime, but is there a way to add a hint to the SQL that would cause the\nquery to be re-planned if it's a case that could use the index? Or can I\nconvince the (Perl) driver to do so?\n\n\nkurt.\n\n", "msg_date": "Wed, 22 Jun 2005 21:25:20 +0200", "msg_from": "Kurt De Grave <[email protected]>", "msg_from_op": true, "msg_subject": "parameterized LIKE does not use index" }, { "msg_contents": "Kurt,\n\n> Of course, I could modify the application and send different SQL\n> depending on which case we're in or just constructing a query with a\n> literal each time, but is there a way to add a hint to the SQL that\n> would cause the query to be re-planned if it's a case that could use the\n> index?  Or can I convince the (Perl) driver to do so?\n\nThere should be an option to tell DBD::Pg not to cache a query plan. \nLet's see ....\n\nyes. pg_server_prepare=0, passed to the prepare() call.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Jun 2005 14:08:55 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parameterized LIKE does not use index" } ]
[ { "msg_contents": "Hey, all. I've bounced this around in #postgres for an hour or so, and \nit was suggested that I post it here as well. Hopefully someone can \nhelp me out.\n\nI have three machines. All have 512MB of ram.\nMachine A is a 2.0ghz celeron, running debian, pg verison 7.4.6.\nMachine B is a 1.8ghz celeron, running centos 3.4, pg verison 8.0.3. \n(7.3.9 also exhibited the behaviour below, by the way)\nMachine C is a 1.0ghz athlon, running centos 4.0, pg verison 7.4.7.\n\n\nThe SAME data and schema is loaded (from a pg_dump, default parameters) \nonto all three machines. With the same query: \"select distinct model \nfrom exif_common\", machines A and C return results quickly (1/4 \nsecond). Machine B chews on it for 30ish seconds! Note, this column is \na VARCHAR(40).\n\nHere's an explain analyze for it.\n\nMachine A (fast): \nphotos=# explain analyze select distinct model from exif_common;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2629.74..2732.11 rows=5 width=15) (actual time=211.358..265.049 rows=6 loops=1)\n -> Sort (cost=2629.74..2680.93 rows=20473 width=15) (actual time=211.351..242.296 rows=20473 loops=1)\n Sort Key: model\n -> Seq Scan on exif_common (cost=0.00..1163.73 rows=20473 width=15) (actual time=0.022..58.635 rows=20473 loops=1)\n Total runtime: 265.928 ms\n(5 rows)\n \n \n \nMachine B (slow): \nphotos=# explain analyze select distinct model from exif_common;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2640.74..2743.11 rows=6 width=15) (actual time=27939.231..32914.134 rows=6 loops=1)\n -> Sort (cost=2640.74..2691.93 rows=20473 width=15) (actual time=27939.222..27983.784 rows=20473 loops=1)\n Sort Key: model\n -> Seq Scan on exif_common (cost=0.00..1174.73 rows=20473 width=15) (actual time=0.071..97.772 rows=20473 loops=1)\n Total runtime: 32915.031 ms\n(5 rows)\n\n\n( yes, i know, six distinct rows out of 20,000.... But holy moly! 1/4 \nsec vs 32.9 sec?!?! 
)\n\n\nNow, if I do a similar query against an INT column, the speeds are more \nin line with each other:\n\nMachine A:\nphotos=# explain analyze select distinct imagewidth from exif_common;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2629.74..2732.11 rows=36 width=4) (actual time=179.899..225.934 rows=107 loops=1)\n -> Sort (cost=2629.74..2680.93 rows=20473 width=4) (actual time=179.891..207.632 rows=20473 loops=1)\n Sort Key: imagewidth\n -> Seq Scan on exif_common (cost=0.00..1163.73 rows=20473 width=4) (actual time=0.024..62.946 rows=20473 loops=1)\n Total runtime: 226.707 ms\n(5 rows)\n \n \n \nMachine B:\nphotos=# explain analyze select distinct imagewidth from exif_common;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2640.74..2743.11 rows=24 width=4) (actual time=209.394..287.131 rows=107 loops=1)\n -> Sort (cost=2640.74..2691.93 rows=20473 width=4) (actual time=209.384..251.693 rows=20473 loops=1)\n Sort Key: imagewidth\n -> Seq Scan on exif_common (cost=0.00..1174.73 rows=20473 width=4) (actual time=0.074..94.574 rows=20473 loops=1)\n Total runtime: 288.411 ms\n\n(5 rows)\n\n\n\n\nMachine C exhibits the same behaviour as A for all queries.\n\nThis weird slow behaviour on machine B also appeared in 7.3.9. \nUpgrading didn't seem to help.\n\nneilc from irc thought it may be a qsort(2) quirk, but a sample C \nprogram I whipped up testing different sized data sets with a similar \ndistribution gave very similar sort timings between the three \nmachines.. Therefore, I don't think it's qsort(2) to blame...\n\nAnyone have any ideas as to what may be up with machine B? \n\nThanks,\n-Elliott\n", "msg_date": "Wed, 22 Jun 2005 23:19:53 -0400", "msg_from": "Elliott Bennett <[email protected]>", "msg_from_op": true, "msg_subject": "select distinct on varchar -- wild performance differences!" } ]
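One low-level thing worth comparing across the three machines before digging deeper is the database locale, since sorting a varchar column goes through the C library's locale-aware comparison while sorting an int does not -- which would fit the symptom of only the text column being slow. This is only a guess, not a confirmed diagnosis:

    -- Run on machines A, B and C; a non-C locale such as en_US
    -- can make text sorts far slower than the C locale.
    SHOW lc_collate;
    SHOW lc_ctype;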
[ { "msg_contents": "\n> > Of course, I could modify the application and send different SQL\n> > depending on which case we're in or just constructing a query with a\n> > literal each time, but is there a way to add a hint to the SQL that\n> > would cause the query to be re-planned if it's a case that could use the\n> > index? Or can I convince the (Perl) driver to do so?\n\n> There should be an option to tell DBD::Pg not to cache a query plan.\n> Let's see ....\n>\n> yes. pg_server_prepare=0, passed to the prepare() call.\n\nThat does the trick! Now I can have the cake and eat it! (clean code\nand high perf)\n\nNow it's tempting to dream of some mechanism that could make the\ndatabase consider\nreplanning the query automatically once it knows the parameter, or\nchoose from\na set of plans depending on the parameter. In this case the general plan\nwas about three orders\nof magnitude slower than the specialized plan. But I guess this case is\nnot all that common\nand the developer can work around it.\n\nthanks,\nkurt.\n\n-- \nir. Kurt De Grave http://www.PharmaDM.com\nPharmaDM nv. phone: +32-16-298494\nKapeldreef 60, B-3001 Leuven, Belgium fax: +32-16-298490\n\n", "msg_date": "Thu, 23 Jun 2005 10:33:18 +0200", "msg_from": "Kurt De Grave <[email protected]>", "msg_from_op": true, "msg_subject": "Re: parameterized LIKE does not use index" }, { "msg_contents": "On Thu, Jun 23, 2005 at 10:33:18 +0200,\n Kurt De Grave <[email protected]> wrote:\n> \n> Now it's tempting to dream of some mechanism that could make the\n> database consider\n> replanning the query automatically once it knows the parameter, or\n> choose from\n> a set of plans depending on the parameter. In this case the general plan\n> was about three orders\n> of magnitude slower than the specialized plan. But I guess this case is\n> not all that common\n> and the developer can work around it.\n\nI remember some discussion about delaying planning until the first\nactual query so that planning could use actual parameters to do\nthe planning. If you really want to have it check the parameters\nevery time, I think you will need to replan every time. I don't\nknow if there is a way to save some of the prepare working while\ndoing this.\n", "msg_date": "Thu, 23 Jun 2005 09:18:54 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parameterized LIKE does not use index" }, { "msg_contents": "Bruno,\n\n> I remember some discussion about delaying planning until the first\n> actual query so that planning could use actual parameters to do\n> the planning. If you really want to have it check the parameters\n> every time, I think you will need to replan every time. I don't\n> know if there is a way to save some of the prepare working while\n> doing this.\n\nThat wouldn't help much in Kurt's case. Nor in most \"real\" cases, which is \nwhy I think the idea never went anywhere.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 23 Jun 2005 11:55:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parameterized LIKE does not use index" }, { "msg_contents": "On Thu, Jun 23, 2005 at 11:55:35AM -0700, Josh Berkus wrote:\n> Bruno,\n> \n> > I remember some discussion about delaying planning until the first\n> > actual query so that planning could use actual parameters to do\n> > the planning. If you really want to have it check the parameters\n> > every time, I think you will need to replan every time. 
I don't\n> > know if there is a way to save some of the prepare working while\n> > doing this.\n> \n> That wouldn't help much in Kurt's case. Nor in most \"real\" cases, which is \n> why I think the idea never went anywhere.\n\nI suspect the only way to do this and have it work well would be to\ncache plans based on the relevant statistics of the parameters passed\nin. Basically, as part of parsing (which could always be cached, btw, so\nlong as schema changes clear the cache), you store what fields in what\ntables/indexes each parameter corresponds to. When you go to execute you\nlook up the stats relevant to each parameter; you can then cache plans\naccording to the stats each parameter has. Of course caching all that is\na non-trivial amount of work, so you'd only want to do it for pretty\ncomplex queries.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 24 Jun 2005 18:41:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parameterized LIKE does not use index" } ]
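The OP's other option -- constructing the query with a literal each time -- can also be pushed into the database itself: a PL/pgSQL function that builds the statement with EXECUTE gets a fresh plan on every call, with the pattern visible to the planner. A rough sketch with hypothetical table and column names, quoted in the pre-8.0 style (no dollar quoting):

    CREATE OR REPLACE FUNCTION find_by_pattern(text) RETURNS SETOF t AS '
    DECLARE
        rec t%ROWTYPE;
    BEGIN
        -- build the statement as a string so the pattern is planned
        -- as a literal each time the function is called
        FOR rec IN EXECUTE
            ''SELECT * FROM t WHERE lower(col) LIKE '' || quote_literal($1)
        LOOP
            RETURN NEXT rec;
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

The per-call planning is what recovers the index scan for prefix patterns, and it costs little extra for patterns that start with a wildcard anyway.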
[ { "msg_contents": ">hi, need some help with some experts here.\n>currently we have a function that use together with temp table, it calls search result function, everytime\n>this function is calling, it will go through some filter before come out as a result.\n>now we have some major problem , the first time the function execute, it take about 13 second\n>second time the function is execute, it take about 17 second, every time you execute the function\n>the time taken will grow about 4 second, ?\n>may i know what going on here?\n>since we use function with temp table, so every statement that related to temp table will using EXECUTE\n>command.\n>\n>regards\n>ivan\n\n\n\n\n\n\n>hi, need some help with some experts \nhere.\n>currently we have a function that use together \nwith temp table, it calls search result function, everytime\n>this function is calling, it will go through \nsome filter before come out as a result.\n>now we have some major problem , the first time \nthe function execute, it take about 13 second\n>second time the function is execute, it take \nabout 17 second, every time you execute the function\n>the time taken will grow about 4 second, \n?\n>may i know what going on \nhere?\n>since we use function with temp table, so every \nstatement that related to temp table will using EXECUTE\n>command.\n>\n>regards\n>ivan", "msg_date": "Thu, 23 Jun 2005 17:56:52 +0800", "msg_from": "\"Chun Yit(Chronos)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql7.4.5 running slow on plpgsql function " }, { "msg_contents": "On Thu, Jun 23, 2005 at 05:56:52PM +0800, Chun Yit(Chronos) wrote:\n>\n> currently we have a function that use together with temp table, it calls\n> search result function, everytime this function is calling, it will go\n> through some filter before come out as a result. now we have some major\n> problem , the first time the function execute, it take about 13 second\n> second time the function is execute, it take about 17 second, every time\n> you execute the function the time taken will grow about 4 second, ? may\n> i know what going on here? since we use function with temp table, so\n> every statement that related to temp table will using EXECUTE command.\n\nCould you post the function? Without knowing what the code is doing\nit's impossible to say what's happening. Is the temporary table\ngrowing on each function call? Does the function delete records\nfrom the table on each call, leaving a lot of dead tuples?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Tue, 28 Jun 2005 00:26:20 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql7.4.5 running slow on plpgsql function" }, { "msg_contents": "[Please copy the mailing list on replies so others can contribute\nto and learn from the discussion.]\n\nOn Wed, Jun 29, 2005 at 12:29:42PM +0800, Chun Yit(Chronos) wrote:\n>\n> Yes, the function will delete records from the temporary table every time \n> on each call.\n> if leaving a lot of dead tuples, then how can we solve it?\n\nIf the function deletes all records from the temporary table then\nyou could use TRUNCATE instead of DELETE. 
Otherwise you could\nVACUUM the table between calls to the function (you can't run VACUUM\ninside a function).\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Wed, 29 Jun 2005 06:50:22 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql7.4.5 running slow on plpgsql function" } ]
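A short sketch of both suggestions, with a hypothetical table name. Since the temp table is recreated per session, the statement inside the function already has to go through EXECUTE:

    -- Inside the plpgsql function: empty the scratch table without
    -- leaving dead tuples behind (assuming the release allows TRUNCATE
    -- inside a transaction, as the advice above implies for 7.4).
    EXECUTE ''TRUNCATE TABLE my_temp_results'';

    -- From the client, between function calls (VACUUM cannot run
    -- inside a function):
    VACUUM ANALYZE my_temp_results;

The doubled quotes only apply when the EXECUTE line sits inside a single-quoted function body.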
[ { "msg_contents": "Hi all,\n\nI'm running PG 8.0.3 on WinXP and I'm coming across some performance issues \nrelated to text columns. Basically, it appears as though PG is storing the \ntext data inline with the rest of the row data as queries that don't touch \nthe text column are slower when there is data in the text column than when \nthere isn't. According to section 8.3 of the doc:\n\n\"Long values are also stored in background tables so they do not interfere \nwith rapid access to the shorter column values.\"\n\nSo, how long does a value have to be to be considered \"long\"?\n\nIf necessary, here is some more specific information about what I'm doing:\n\n1) I create a new table and use 'COPY FROM' to populate it. When the data in \nthe text column is limited to a max of 60 characters, this part takes 2-3 \nminutes less than when the data is at its full size. The table will be \npopulated with ~750k rows. Here's an example of the table I create (no, I \ndidn't name the fields \"vc_field1\", \"vc_field2\", etc ;) ):\n\ncreate table my_table_import\n(\nvc_field1 varchar(255),\nvc_field2 varchar(255),\nvc_field3 varchar(255),\nf_field1 float8,\ntext_field1 text,\nts_field1 timestamp,\nv_field4 varchar(255),\ni_field1 int8,\ni_field2 int8\n);\n\n2) I populate i_field1 and i_field2 from lookup tables. This step takes \nabout 7 mins longer with the full text data than with the limited data.\n\nupdate my_table_import\nset i_field1 = f.i_field1,\ni_field2 = u.i_field2\nfrom lookup1 as f, lookup2 as u\nwhere vc_field2 = f.vc_field2\nand vc_field1 = u.vc_field1;\n\n3) I then create an index on this table and run a couple of queries on it. \nEach of these queries takes about 10 minutes longer with the full text data \nthen without it. Here's the index that I create and an example of one of the \nqueries that I run:\n\ncreate index idx_my_table_import_i1_i2 on my_table_import (i_field1, \ni_field2);\nanalyze my_table_import;\n\nselect i_field1, i_field2, max(ts_field1) as ts_field1, count(*) as \ndup_count\nfrom my_table_import\nwhere i_field1 between 0 and 9999\ngroup by i_field1, i_field2\n\nThanks for the help,\nMeetesh Karia\n\nHi all,\n\nI'm running PG 8.0.3 on WinXP and I'm coming across some performance\nissues related to text columns.  Basically, it appears as though\nPG is storing the text data inline with the rest of the row data as\nqueries that don't touch the text column are slower when there is data\nin the text column than when there isn't.  According to section\n8.3 of the doc:\n\n\"Long values are also stored in background tables so they do not interfere with \nrapid access to the shorter column values.\"\n\nSo, how long does a value have to be to be considered \"long\"?\n\nIf necessary, here is some more specific information about what I'm doing:\n\n1) I create a new table and use 'COPY FROM' to populate it.  When\nthe data in the text column is limited to a max of 60 characters, this\npart takes 2-3 minutes less than when the data is at its full\nsize.  The table will be populated with ~750k rows.  Here's\nan example of the table I create (no, I didn't name the fields\n\"vc_field1\", \"vc_field2\", etc ;) ):\n\n    create table my_table_import\n    (\n        vc_field1 varchar(255),\n        vc_field2 varchar(255),\n        vc_field3 varchar(255),\n        f_field1 float8,\n        text_field1 text,\n        ts_field1 timestamp,\n        v_field4 varchar(255),\n        i_field1 int8,\n        i_field2 int8\n    );\n\n2) I populate i_field1 and i_field2 from lookup tables.  
This step\ntakes about 7 mins longer with the full text data than with the limited\ndata.\n\n    update my_table_import\n        set i_field1 = f.i_field1,\n            i_field2 = u.i_field2\n        from lookup1 as f, lookup2 as u\n        where vc_field2 = f.vc_field2\n            and vc_field1 = u.vc_field1;\n\n3) I then create an index on this table and run a couple of queries on\nit.  Each of these queries takes about 10 minutes longer with the\nfull text data then without it.  Here's the index that I create\nand an example of one of the queries that I run:\n\n    create index idx_my_table_import_i1_i2 on my_table_import (i_field1, i_field2);\n    analyze my_table_import;\n\n    select i_field1, i_field2, max(ts_field1) as ts_field1, count(*) as dup_count\n        from my_table_import\n        where i_field1 between 0 and 9999\n        group by i_field1, i_field2\n\nThanks for the help,\nMeetesh Karia", "msg_date": "Thu, 23 Jun 2005 12:40:15 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "How are text columns stored?" }, { "msg_contents": "Meetesh Karia <[email protected]> writes:\n> According to section 8.3 of the doc:\n\n> \"Long values are also stored in background tables so they do not interfere\n> with rapid access to the shorter column values.\"\n\n> So, how long does a value have to be to be considered \"long\"?\n\nSeveral kilobytes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jun 2005 02:18:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are text columns stored? " } ]
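For rows where the text values really are long, the per-column storage strategy can be nudged so the wide column is pushed out of line more readily; values below the toast threshold (on the order of a couple of kilobytes per row, per Tom's note) stay inline regardless. A hedged sketch against the table above:

    -- EXTENDED (the default for text) compresses first and only moves the
    -- value out of line if the row is still too big; EXTERNAL skips the
    -- compression and moves long values out of line directly.
    ALTER TABLE my_table_import ALTER COLUMN text_field1 SET STORAGE EXTERNAL;

If most values are short, the simpler win is usually to move the text column into a side table keyed by the row id, so the queries on the narrow columns never touch it.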
[ { "msg_contents": "Hey, all. I've bounced this around in #postgres for an hour or so, and\nit was suggested that I post it here as well. Hopefully someone can\nhelp me out.\n\nI have three machines. All have 512MB of ram.\nMachine A is a 2.0ghz celeron, running debian, pg verison 7.4.6.\nMachine B is a 1.8ghz celeron, running centos 3.4, pg verison 8.0.3.\n(7.3.9 also exhibited the behaviour below, by the way)\nMachine C is a 1.0ghz athlon, running centos 4.0, pg verison 7.4.7.\n\n\nThe SAME data and schema is loaded (from a pg_dump, default parameters)\nonto all three machines. With the same query: \"select distinct model\nfrom exif_common\", machines A and C return results quickly (1/4\nsecond). Machine B chews on it for 30ish seconds! Note, this column is\na VARCHAR(40).\n\nHere's an explain analyze for it.\n\nMachine A (fast):\nphotos=# explain analyze select distinct model from exif_common;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n+--\nUnique (cost=2629.74..2732.11 rows=5 width=15) (actual\ntime=211.358..265.049 rows=6 loops=1)\n -> Sort (cost=2629.74..2680.93 rows=20473 width=15) (actual\n time=211.351..242.296 rows=20473 loops=1)\n Sort Key: model\n -> Seq Scan on exif_common (cost=0.00..1163.73 rows=20473\n width=15) (actual time=0.022..58.635 rows=20473 loops=1)\nTotal runtime: 265.928 ms\n(5 rows)\n\n\n\nMachine B (slow):\nphotos=# explain analyze select distinct model from exif_common;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n+--\nUnique (cost=2640.74..2743.11 rows=6 width=15) (actual\ntime=27939.231..32914.134 rows=6 loops=1)\n -> Sort (cost=2640.74..2691.93 rows=20473 width=15) (actual\n time=27939.222..27983.784 rows=20473 loops=1)\n Sort Key: model\n -> Seq Scan on exif_common (cost=0.00..1174.73 rows=20473\n width=15) (actual time=0.071..97.772 rows=20473 loops=1)\nTotal runtime: 32915.031 ms\n(5 rows)\n\n\n( yes, i know, six distinct rows out of 20,000.... But holy moly! 1/4\nsec vs 32.9 sec?!?! 
)\n\n\nNow, if I do a similar query against an INT column, the speeds are more\nin line with each other:\n\nMachine A:\nphotos=# explain analyze select distinct imagewidth from exif_common;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n+-\nUnique (cost=2629.74..2732.11 rows=36 width=4) (actual\ntime=179.899..225.934 rows=107 loops=1)\n -> Sort (cost=2629.74..2680.93 rows=20473 width=4) (actual\n time=179.891..207.632 rows=20473 loops=1)\n Sort Key: imagewidth\n -> Seq Scan on exif_common (cost=0.00..1163.73 rows=20473\nwidth=4)\n (actual time=0.024..62.946 rows=20473 loops=1)\nTotal runtime: 226.707 ms\n(5 rows)\n\n\n\nMachine B:\nphotos=# explain analyze select distinct imagewidth from exif_common;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n+-\nUnique (cost=2640.74..2743.11 rows=24 width=4) (actual\ntime=209.394..287.131 rows=107 loops=1)\n -> Sort (cost=2640.74..2691.93 rows=20473 width=4) (actual\n time=209.384..251.693 rows=20473 loops=1)\n Sort Key: imagewidth\n -> Seq Scan on exif_common (cost=0.00..1174.73 rows=20473\nwidth=4)\n (actual time=0.074..94.574 rows=20473 loops=1)\nTotal runtime: 288.411 ms\n\n(5 rows)\n\n\n\n\nMachine C exhibits the same behaviour as A for all queries.\n\nThis weird slow behaviour on machine B also appeared in 7.3.9.\nUpgrading didn't seem to help.\n\nneilc from irc thought it may be a qsort(2) quirk, but a sample C\nprogram I whipped up testing different sized data sets with a similar\ndistribution gave very similar sort timings between the three\nmachines.. Therefore, I don't think it's qsort(2) to blame...\n\nAnyone have any ideas as to what may be up with machine B?\n\nThanks,\n-Elliott\n\n\n\n", "msg_date": "Thu, 23 Jun 2005 11:09:45 -0400", "msg_from": "Elliott Bennett <[email protected]>", "msg_from_op": true, "msg_subject": "select distinct on varchar - wild performance differences!" }, { "msg_contents": "Elliott Bennett <[email protected]> writes:\n> Anyone have any ideas as to what may be up with machine B?\n\nDifferent locale setting? strcoll() can be horribly slow in some\nlocales ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jun 2005 11:34:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select distinct on varchar - wild performance differences! " }, { "msg_contents": "hah! That did it. Setting to 'C' makes it just as fast as the other\nmachines. I think it defaulted to en_US...\n\nThanks!\n\n-Elliott\n\nOn Thu, Jun 23, 2005 at 11:34:55AM -0400, Tom Lane wrote:\n> Elliott Bennett <[email protected]> writes:\n> > Anyone have any ideas as to what may be up with machine B?\n> \n> Different locale setting? strcoll() can be horribly slow in some\n> locales ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n", "msg_date": "Thu, 23 Jun 2005 13:09:07 -0400", "msg_from": "Elliott Bennett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select distinct on varchar - wild performance differences!" } ]
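Besides re-initdb'ing with the C locale, a workaround that can help on 7.4/8.0 when a column has only a handful of distinct values: SELECT DISTINCT always sorts in these releases, but an equivalent GROUP BY may be answered with a HashAggregate, which never calls the locale-aware comparison at all:

    EXPLAIN ANALYZE
    SELECT model
      FROM exif_common
     GROUP BY model;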
[ { "msg_contents": "Situation:\nI'm trying to optimize an ETL process with many upserts (~100k aggregated rows)\n(no duplicates allowed). The source (table t2) table holds around 14 million\nrows, and I'm grabbing them 100,000 rows at a time from t2, resulting in about\n100,000 distinct rows in the destination table (t1).\n\n\nWhat I've tried:\n\ni. FOR EXECUTE LOOP over my result set (aggregated results, 100k-ish rows), and\ntry an update first, check the ROW_COUNT, if 0, then do an insert.\n...\nrun time: approx. 25 mins\n\n\nii. in a function (pseudo code), (table name is dynamic):\n...\nup_stm :=\n'UPDATE '||t1||' SET x=t2.x\nFROM\t(select sum(x),a,b,c\n\tfrom t2\n\tgroup by a,b,c) as t2\nWHERE '||t1||'.a=t2.a AND '||t1||'.b=t2.b AND '||t1||'.c=t3.c';\n\nEXECUTE up_stm;\n\nins_stm :=\n'INSERT INTO '||t1||' (x,a,b,c) select x,a,b,c\nFROM (select sum(x) as x,a,b,c from t2 group by a,b,c) as t2\nWHERE NOT EXISTS\n\t(select true from '||t1||'\n\twhere '||t1||'.a=t2.a\n\tand '||t1||'.b=t2.b\n\tand '||t1||'.c=t2.c\n\tlimit 1)';\n\nEXECUTE ins_stm;\n...\n\ntakes about 7 minutes. The performance of this is reasonable, but there is room\nfor improvement.\nI think it's the NOT EXISTS subquery on the insert that makes the first run\nslow. Any revisions that may be faster (for the subquery)?\nNote, this subquery is necessary so that duplicates don't get into the target\ntable (t1).\n\nSubsequent runs will be mostly updates (and still slow), with few inserts. I'm\nnot seeing a way for that update statement to be sped up, but maybe someone else\ndoes?\n\n\niii. UNIQUE constraint on table \"t1\". This didn't seem to perform too badly with\nfewer rows (preliminary tests), but as you'd expect, on error the whole\ntransaction would roll back. Is it possible to skip a row if it causes an error,\nas opposed to aborting the transaction altogether?\n\n\n\nTo summarize, I'm looking for the most efficient and fastest way to perform my\nupserts. Tips and/or references to pertinent docs are also appreciated!\nIf any more information is necessary, please let me know.\n\n\n(postgresql 8.0.3, linux)\n\n\nCheers,\n\nBricklen\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Thu, 23 Jun 2005 12:38:04 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "ETL optimization" }, { "msg_contents": "I don't know what this will change wrt how often you need to run VACUUM (I'm \na SQL Server guy), but instead of an update and insert, try a delete and \ninsert. You'll only have to find the duplicate rows once and your insert \ndoesn't need a where clause.\n\nMeetesh\n\nOn 6/23/05, Bricklen Anderson <[email protected]> wrote:\n> \n> Situation:\n> I'm trying to optimize an ETL process with many upserts (~100k aggregated \n> rows)\n> (no duplicates allowed). The source (table t2) table holds around 14 \n> million\n> rows, and I'm grabbing them 100,000 rows at a time from t2, resulting in \n> about\n> 100,000 distinct rows in the destination table (t1).\n> \n> \n> What I've tried:\n> \n> i. 
FOR EXECUTE LOOP over my result set (aggregated results, 100k-ish \n> rows), and\n> try an update first, check the ROW_COUNT, if 0, then do an insert.\n> ...\n> run time: approx. 25 mins\n> \n> \n> ii. in a function (pseudo code), (table name is dynamic):\n> ...\n> up_stm :=\n> 'UPDATE '||t1||' SET x=t2.x\n> FROM (select sum(x),a,b,c\n> from t2\n> group by a,b,c) as t2\n> WHERE '||t1||'.a=t2.a AND '||t1||'.b=t2.b AND '||t1||'.c=t3.c';\n> \n> EXECUTE up_stm;\n> \n> ins_stm :=\n> 'INSERT INTO '||t1||' (x,a,b,c) select x,a,b,c\n> FROM (select sum(x) as x,a,b,c from t2 group by a,b,c) as t2\n> WHERE NOT EXISTS\n> (select true from '||t1||'\n> where '||t1||'.a=t2.a\n> and '||t1||'.b=t2.b\n> and '||t1||'.c=t2.c\n> limit 1)';\n> \n> EXECUTE ins_stm;\n> ...\n> \n> takes about 7 minutes. The performance of this is reasonable, but there is \n> room\n> for improvement.\n> I think it's the NOT EXISTS subquery on the insert that makes the first \n> run\n> slow. Any revisions that may be faster (for the subquery)?\n> Note, this subquery is necessary so that duplicates don't get into the \n> target\n> table (t1).\n> \n> Subsequent runs will be mostly updates (and still slow), with few inserts. \n> I'm\n> not seeing a way for that update statement to be sped up, but maybe \n> someone else\n> does?\n> \n> \n> iii. UNIQUE constraint on table \"t1\". This didn't seem to perform too \n> badly with\n> fewer rows (preliminary tests), but as you'd expect, on error the whole\n> transaction would roll back. Is it possible to skip a row if it causes an \n> error,\n> as opposed to aborting the transaction altogether?\n> \n> \n> \n> To summarize, I'm looking for the most efficient and fastest way to \n> perform my\n> upserts. Tips and/or references to pertinent docs are also appreciated!\n> If any more information is necessary, please let me know.\n> \n> \n> (postgresql 8.0.3, linux)\n> \n> \n> Cheers,\n> \n> Bricklen\n> --\n> _______________________________\n> \n> This e-mail may be privileged and/or confidential, and the sender does\n> not waive any related rights and obligations. Any distribution, use or\n> copying of this e-mail or the information it contains by other than an\n> intended recipient is unauthorized. If you received this e-mail in\n> error, please advise me (by return e-mail or otherwise) immediately.\n> _______________________________\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\nI don't know what this will change wrt how often you need to run VACUUM\n(I'm a SQL Server guy), but instead of an update and insert, try a\ndelete and insert.  You'll only have to find the duplicate rows\nonce and your insert doesn't need a where clause.\n\nMeeteshOn 6/23/05, Bricklen Anderson <[email protected]> wrote:\nSituation:I'm trying to optimize an ETL process with many upserts (~100k aggregated rows)(no duplicates allowed). The source (table t2) table holds around 14 millionrows, and I'm grabbing them 100,000 rows at a time from t2, resulting in about\n100,000 distinct rows in the destination table (t1).What I've tried:i. FOR EXECUTE LOOP over my result set (aggregated results, 100k-ish rows), andtry an update first, check the ROW_COUNT, if 0, then do an insert.\n...run time: approx. 25 minsii. 
in a function (pseudo code), (table name is dynamic):...up_stm :='UPDATE '||t1||' SET x=t2.xFROM    (select sum(x),a,b,c        from t2        group by a,b,c) as t2\nWHERE '||t1||'.a=t2.a AND '||t1||'.b=t2.b AND '||t1||'.c=t3.c';EXECUTE up_stm;ins_stm :='INSERT INTO '||t1||' (x,a,b,c) select x,a,b,cFROM (select sum(x) as x,a,b,c from t2 group by a,b,c) as t2\nWHERE NOT EXISTS        (select true from '||t1||'        where '||t1||'.a=t2.a        and '||t1||'.b=t2.b        and '||t1||'.c=t2.c        limit 1)';EXECUTE ins_stm;...takes about 7 minutes. The performance of this is reasonable, but there is room\nfor improvement.I think it's the NOT EXISTS subquery on the insert that makes the first runslow. Any revisions that may be faster (for the subquery)?Note, this subquery is necessary so that duplicates don't get into the target\ntable (t1).Subsequent runs will be mostly updates (and still slow), with few inserts. I'mnot seeing a way for that update statement to be sped up, but maybe someone elsedoes?iii. UNIQUE constraint on table \"t1\". This didn't seem to perform too badly with\nfewer rows (preliminary tests), but as you'd expect, on error the wholetransaction would roll back. Is it possible to skip a row if it causes an error,as opposed to aborting the transaction altogether?\nTo summarize, I'm looking for the most efficient and fastest way to perform myupserts. Tips and/or references to pertinent docs are also appreciated!If any more information is necessary, please let me know.\n(postgresql 8.0.3, linux)Cheers,Bricklen--_______________________________This e-mail may be privileged and/or confidential, and the sender doesnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than anintended recipient is unauthorized. If you received this e-mail inerror, please advise me (by return e-mail or otherwise) immediately._______________________________\n---------------------------(end of broadcast)---------------------------TIP 2: you can get off all lists at once with the unregister command    (send \"unregister YourEmailAddressHere\" to \[email protected])", "msg_date": "Thu, 23 Jun 2005 21:54:06 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ETL optimization" }, { "msg_contents": "Meetesh Karia wrote:\n> I don't know what this will change wrt how often you need to run VACUUM\n> (I'm a SQL Server guy), but instead of an update and insert, try a\n> delete and insert. You'll only have to find the duplicate rows once and\n> your insert doesn't need a where clause.\n> \n> Meetesh\n> \nVacuum analyze in generally run about once an hour. You know, I didn't even\nthink to try a delete + insert combo (which will not be visible to the other\nqueries that are occurring). Truncate is out of the question, because of the\naforementioned queries, but I'll give the d+i a shot.\n\nThanks!\n\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. 
If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Thu, 23 Jun 2005 13:16:50 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ETL optimization" }, { "msg_contents": "Hi,\n\nAt 21:38 23/06/2005, Bricklen Anderson wrote:\n>Situation:\n>I'm trying to optimize an ETL process with many upserts (~100k aggregated \n>rows)\n>(no duplicates allowed). The source (table t2) table holds around 14 million\n>rows, and I'm grabbing them 100,000 rows at a time from t2, resulting in about\n>100,000 distinct rows in the destination table (t1).\n>\n>\n>What I've tried:\n>\n>i. FOR EXECUTE LOOP over my result set (aggregated results, 100k-ish \n>rows), and\n>try an update first, check the ROW_COUNT, if 0, then do an insert.\n>...\n>run time: approx. 25 mins\n>\n>\n>ii. in a function (pseudo code), (table name is dynamic):\n>...\n>up_stm :=\n>'UPDATE '||t1||' SET x=t2.x\n>FROM (select sum(x),a,b,c\n> from t2\n> group by a,b,c) as t2\n>WHERE '||t1||'.a=t2.a AND '||t1||'.b=t2.b AND '||t1||'.c=t3.c';\n>\n>EXECUTE up_stm;\n>\n>ins_stm :=\n>'INSERT INTO '||t1||' (x,a,b,c) select x,a,b,c\n>FROM (select sum(x) as x,a,b,c from t2 group by a,b,c) as t2\n>WHERE NOT EXISTS\n> (select true from '||t1||'\n> where '||t1||'.a=t2.a\n> and '||t1||'.b=t2.b\n> and '||t1||'.c=t2.c\n> limit 1)';\n>\n>EXECUTE ins_stm;\n>...\n\nI have a similar situation, and the solution I use (though I haven't really \ntested many different situations):\n- have a trigger ON INSERT which does:\nUPDATE set whatever_value=NEW.whatever_value,... WHERE \nwhatever_key=NEW.whatever.key AND...\nIF FOUND THEN\n RETURN NULL;\nELSE\n RETURN NEW;\nEND IF;\n- use COPY\n\nFor optimal performance, a different trigger function is created for each \ntable, which allows the query plan of the UPDATE to be cached.\n\nLet us know how that works out for you and if you find a better solution!\n\nJacques.\n\n\n", "msg_date": "Thu, 23 Jun 2005 23:56:27 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ETL optimization" }, { "msg_contents": "Jacques Caron wrote:\n> \n> I have a similar situation, and the solution I use (though I haven't\n> really tested many different situations):\n> - have a trigger ON INSERT which does:\n> UPDATE set whatever_value=NEW.whatever_value,... WHERE\n> whatever_key=NEW.whatever.key AND...\n> IF FOUND THEN\n> RETURN NULL;\n> ELSE\n> RETURN NEW;\n> END IF;\n> - use COPY\n> \n> For optimal performance, a different trigger function is created for\n> each table, which allows the query plan of the UPDATE to be cached.\n> \n> Let us know how that works out for you and if you find a better solution!\n> \n> Jacques.\n> \nHi Jacques, thanks for the suggestion. I've previously tested triggers under a\nvariety of situations and there was no way that they would work under the load\nwe currently have, and the much greater load that we will be expecting soon\n(~40x increase in data).\n\nI'm in the process of testing the delete scenario right now, and at first blush\nseems to perform fairly well. 2.5 million rows before aggregation, and 171000\nafter, in a little under 7 minutes.\n\nCurrently testing again with about 18.5 million rows. A drawback by using the\ndelete method is that we cannot do any of the aggregation incrementally, but so\nfar that hasn't been a big help anyways. 
I still need to test the performance of\nconcurrent querying against the destination table whilst the aggregation is\noccurring.\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Thu, 23 Jun 2005 15:04:33 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ETL optimization" }, { "msg_contents": "On Thu, 23 Jun 2005, Bricklen Anderson wrote:\n\n> iii. UNIQUE constraint on table \"t1\". This didn't seem to perform too\n> badly with fewer rows (preliminary tests), but as you'd expect, on error\n> the whole transaction would roll back. Is it possible to skip a row if\n> it causes an error, as opposed to aborting the transaction altogether?\n\nYou don't need to roll back the whole transaction if you use savepoints or \nthe exception features in pl/pgsql\n\nTake a look at this example:\n\nhttp://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Fri, 24 Jun 2005 06:27:21 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ETL optimization" }, { "msg_contents": "Dennis Bjorklund wrote:\n> On Thu, 23 Jun 2005, Bricklen Anderson wrote:\n> \n> \n>>iii. UNIQUE constraint on table \"t1\". This didn't seem to perform too\n>>badly with fewer rows (preliminary tests), but as you'd expect, on error\n>>the whole transaction would roll back. Is it possible to skip a row if\n>>it causes an error, as opposed to aborting the transaction altogether?\n> \n> \n> You don't need to roll back the whole transaction if you use savepoints or \n> the exception features in pl/pgsql\n> \n> Take a look at this example:\n> \n> http://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE\n> \nHmmm... forgot about savepoints. That's an interesting idea that I'll have to\ncheck out. I earlier mentioned that I was going to test the delete + insert\nversion, and it works pretty well. I got it down to about 3 minutes using that\nmethod. I'll test the savepoint and the exception version that you listed as well.\n\nThanks!\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Mon, 27 Jun 2005 11:15:06 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ETL optimization" } ]
[ { "msg_contents": "I'm currently trying to make a decision on whether to use the Cygwin port of Postgres 7.4 or Postgres 8.0 for a windows installation. Can someone provide some comparison info from a performance point of view? I was thinking that the Cygwin port has the overhead of the translation layer, but 8.0 is a newer product and may still have performance issue. Can anyone comment on this? \n \nThanks for the help.\n \nScott\n\nI'm currently trying to make a decision on whether to use the Cygwin port of Postgres 7.4 or Postgres 8.0 for a windows installation.  Can someone provide some comparison info from a performance point of view?  I was thinking that the Cygwin port has the overhead of the translation layer, but 8.0 is a newer product and may still have performance issue.  Can anyone comment on this?  \n \nThanks for the help.\n \nScott", "msg_date": "Thu, 23 Jun 2005 21:32:54 -0700 (PDT)", "msg_from": "Scott Goldstein <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 8 vs Postgres 7.4/cygwin" }, { "msg_contents": "PostgreSQL 8 for windows faster AND more reliable :)\n\nChris\n\nScott Goldstein wrote:\n> I'm currently trying to make a decision on whether to use the Cygwin \n> port of Postgres 7.4 or Postgres 8.0 for a windows installation. Can \n> someone provide some comparison info from a performance point of view? \n> I was thinking that the Cygwin port has the overhead of the translation \n> layer, but 8.0 is a newer product and may still have performance issue. \n> Can anyone comment on this? \n> \n> Thanks for the help.\n> \n> Scott\n\n", "msg_date": "Fri, 24 Jun 2005 13:07:17 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8 vs Postgres 7.4/cygwin" }, { "msg_contents": "Scott Goldstein <[email protected]> writes:\n> I'm currently trying to make a decision on whether to use the Cygwin port of Postgres 7.4 or Postgres 8.0 for a windows installation. Can someone provide some comparison info from a performance point of view? I was thinking that the Cygwin port has the overhead of the translation layer, but 8.0 is a newer product and may still have performance issue. Can anyone comment on this? \n\nWell, the performance issues of the cygwin-based releases are the stuff\nof legend ;-). New product or no, this is really a no-brainer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jun 2005 09:40:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8 vs Postgres 7.4/cygwin " } ]
[ { "msg_contents": "Hi again all,\n\nMy queries are now optimised. They all use the indexes like they \nshould.\nHowever, there's still a slight problem when I issue the \"offset\" \nclause.\n\nWe have a table that contains 600.000 records\nWe display them by 25 in the webpage.\nSo, when I want the last page, which is: 600k / 25 = page 24000 - 1 = \n23999, I issue the offset of 23999 * 25\nThis take a long time to run, about 5-10 seconds whereas offset below \n100 take less than a second.\n\nCan I speed this up ?\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 24 Jun 2005 20:18:48 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Speed with offset clause" }, { "msg_contents": "On 6/24/05, Yves Vindevogel <[email protected]> wrote:\n> So, when I want the last page, which is: 600k / 25 = page 24000 - 1 =\n> 23999, I issue the offset of 23999 * 25\n\nimproving this is hard, but not impossible.\nif you have right index created, try to reverse the order and fetch\nfirst adverts, and then resort it (just the 25 adverts) in correct\norder.\nit will be faster.\n\ndepesz\n", "msg_date": "Fri, 24 Jun 2005 20:54:55 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed with offset clause" }, { "msg_contents": "Yves Vindevogel wrote:\n\n> Hi again all,\n>\n> My queries are now optimised. They all use the indexes like they should.\n> However, there's still a slight problem when I issue the \"offset\" clause.\n>\n> We have a table that contains 600.000 records\n> We display them by 25 in the webpage.\n> So, when I want the last page, which is: 600k / 25 = page 24000 - 1 = \n> 23999, I issue the offset of 23999 * 25\n> This take a long time to run, about 5-10 seconds whereas offset below \n> 100 take less than a second.\n>\n> Can I speed this up ?\n>\n>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> *Yves Vindevogel*\n> *Implements*\n>\nPostgres has the optimization that it will plan a query, and once it \nreaches the limit, it can stop even though there is more data available.\nThe problem you are having is that it has to go through \"offset\" rows \nfirst, before it can apply the limit.\nIf you can, (as mentioned in the other post), try to refine your index \nso that you can reverse it for the second half of the data.\n\nThis is probably tricky, as you may not know how many rows you have (or \nthe amount might be changing).\n\nA potentially better thing, is if you have an index you are using, you \ncould use a subselect so that the only portion that needs to have 60k \nrows is a single column.\n\nMaybe an example:\nInstead of saying:\n\nSELECT * FROM table1, table2 WHERE table1.id = table2.id ORDER BY \ntable1.date OFFSET x LIMIT 25;\n\nYou could do:\n\nSELECT * FROM\n (SELECT id FROM table1 OFFSET x LIMIT 25) as subselect\n JOIN table1 ON subselect.id = table1.id\n , table2\n WHERE table1.id = table2.id;\n\nThat means that the culling process is done on only a few rows of one \ntable, and the rest of the real merging work is done on only a few rows.\n\nIt really depends on you query, though, as what rows you are sorting on \nhas a big influence on how well 
this will work.\n\nJohn\n=:->", "msg_date": "Fri, 24 Jun 2005 14:22:07 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed with offset clause" }, { "msg_contents": "Hi,\n\nIndeed, I would have to do it through a function, where I check the \nnumber of pages, ....\nIt puts my weakest point in the middle then.\n\nI could simply rewrite my query like you state, just to check.\nI think all my queries are on one table only. (I report in a website \non one table, that has been denormalized into other smaller tables for \nspeed)\nBut the problem is on the big table.\n\nI'm currently looking at another possibility, and that is generating \nXML files based upon my database. This would increase disk space \nenormously, but limit my problems with the database.\nSince I am using Cocoon for the website, this is not such a problematic \ndecision, disks are cheap and I need only a few modifications to my \ncode.\n\nOn 24 Jun 2005, at 21:22, John A Meinel wrote:\n\n> Yves Vindevogel wrote:\n>\n>> Hi again all,\n>>\n>> My queries are now optimised. They all use the indexes like they \n>> should.\n>> However, there's still a slight problem when I issue the \"offset\" \n>> clause.\n>>\n>> We have a table that contains 600.000 records\n>> We display them by 25 in the webpage.\n>> So, when I want the last page, which is: 600k / 25 = page 24000 - 1 = \n>> 23999, I issue the offset of 23999 * 25\n>> This take a long time to run, about 5-10 seconds whereas offset below \n>> 100 take less than a second.\n>>\n>> Can I speed this up ?\n>>\n>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> *Yves Vindevogel*\n>> *Implements*\n>>\n> Postgres has the optimization that it will plan a query, and once it \n> reaches the limit, it can stop even though there is more data \n> available.\n> The problem you are having is that it has to go through \"offset\" rows \n> first, before it can apply the limit.\n> If you can, (as mentioned in the other post), try to refine your index \n> so that you can reverse it for the second half of the data.\n>\n> This is probably tricky, as you may not know how many rows you have \n> (or the amount might be changing).\n>\n> A potentially better thing, is if you have an index you are using, you \n> could use a subselect so that the only portion that needs to have 60k \n> rows is a single column.\n>\n> Maybe an example:\n> Instead of saying:\n>\n> SELECT * FROM table1, table2 WHERE table1.id = table2.id ORDER BY \n> table1.date OFFSET x LIMIT 25;\n>\n> You could do:\n>\n> SELECT * FROM\n> (SELECT id FROM table1 OFFSET x LIMIT 25) as subselect\n> JOIN table1 ON subselect.id = table1.id\n> , table2\n> WHERE table1.id = table2.id;\n>\n> That means that the culling process is done on only a few rows of one \n> table, and the rest of the real merging work is done on only a few \n> rows.\n>\n> It really depends on you query, though, as what rows you are sorting \n> on has a big influence on how well this will work.\n>\n> John\n> =:->\n>\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. 
\nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 24 Jun 2005 22:23:43 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed with offset clause" } ]
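A minimal SQL sketch of the reverse-order idea depesz suggests in the thread above, assuming the pages are sorted on an indexed column (oid is used here because the thread's own queries use it) and ignoring the remainder when the row count is not an exact multiple of 25:

SELECT *
FROM (
    SELECT *
    FROM tblPrintjobs
    ORDER BY oid DESC        -- walk the sort order backwards, so the last page
    LIMIT 25 OFFSET 0        -- needs OFFSET 0 instead of OFFSET 599975
) AS lastpage
ORDER BY oid ASC;            -- re-sort just these 25 rows into display order

For a page that is p pages away from the end, the OFFSET becomes p * 25, so for pages near the end the server only has to skip a handful of rows instead of several hundred thousand.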
[ { "msg_contents": "Hello, I'm a Sun Solaris sys admin for a start-up\ncompany. I've got the UNIX background, but now I'm\nhaving to learn PostgreSQL to support it on our\nservers :)\n\nServer Background:\n\nSolaris 10 x86\nPostgreSQL 8.0.3 \nDell PowerEdge 2650 w/4gb ram.\nThis is running JBoss/Apache as well (I KNOW the bad\njuju of running it all on one box, but it's all we\nhave currently for this project). I'm dedicating 1gb\nfor PostgreSQL alone.\n\nSo, far I LOVE it compared to MySQL it's solid.\n\nThe only things I'm kind of confused about (and I've\nbeen searching for answers on lot of good perf docs,\nbut not too clear to me) are the following:\n\n1.) shared_buffers I see lot of reference to making\nthis the size of available ram (for the DB). However,\nI also read to make it the size of pgdata directory. \n\nI notice when I load postgres each daemon is using the\namount of shared memory (shared_buffers). Our current\ndataset (pgdata) is 85mb in size. So, I'm curious\nshould this size reflect the pgdata or the 'actual'\nmemory given?\n\nI currently have this at 128mb \n\n2.) effective_cache_size - from what I read this is\nthe 'total' allowed memory for postgresql to use\ncorrect? So, if I am willing to allow 1GB of memory\nshould I make this 1GB?\n\n3.) max_connections, been trying to figure 'how' to\ndetermine this #. I've read this is buffer_size+500k\nper a connection. \n\nie. 128mb(buffer) + 500kb = 128.5mb per connection?\n\nI was curious about 'sort_mem' I can't find reference\nof it in the 8.0.3 documentation, has it been removed?\n\nwork_mem and max_stack_depth set to 4096\nmaintenance_work_mem set to 64mb\n\nThanks for any help on this. I'm sure bombardment of\nnewbies gets old :)\n\n-William\n\n\n\t\t\n____________________________________________________ \nYahoo! Sports \nRekindle the Rivalries. Sign up for Fantasy Football \nhttp://football.fantasysports.yahoo.com\n", "msg_date": "Fri, 24 Jun 2005 11:54:31 -0700 (PDT)", "msg_from": "Puddle <[email protected]>", "msg_from_op": true, "msg_subject": "max_connections / shared_buffers / effective_cache_size questions" }, { "msg_contents": "Puddle wrote:\n\n>Hello, I'm a Sun Solaris sys admin for a start-up\n>company. I've got the UNIX background, but now I'm\n>having to learn PostgreSQL to support it on our\n>servers :)\n>\n>Server Background:\n>\n>Solaris 10 x86\n>PostgreSQL 8.0.3\n>Dell PowerEdge 2650 w/4gb ram.\n>This is running JBoss/Apache as well (I KNOW the bad\n>juju of running it all on one box, but it's all we\n>have currently for this project). I'm dedicating 1gb\n>for PostgreSQL alone.\n>\n>So, far I LOVE it compared to MySQL it's solid.\n>\n>The only things I'm kind of confused about (and I've\n>been searching for answers on lot of good perf docs,\n>but not too clear to me) are the following:\n>\n>1.) shared_buffers I see lot of reference to making\n>this the size of available ram (for the DB). However,\n>I also read to make it the size of pgdata directory.\n>\n>I notice when I load postgres each daemon is using the\n>amount of shared memory (shared_buffers). Our current\n>dataset (pgdata) is 85mb in size. So, I'm curious\n>should this size reflect the pgdata or the 'actual'\n>memory given?\n>\n>I currently have this at 128mb\n>\n>\nYou generally want shared_buffers to be no more than 10% of available\nram. Postgres expects the OS to do it's own caching. 128M/4G = 3% seems\nreasonable to me. I would certainly never set it to 100% of ram.\n\n>2.) 
effective_cache_size - from what I read this is\n>the 'total' allowed memory for postgresql to use\n>correct? So, if I am willing to allow 1GB of memory\n>should I make this 1GB?\n>\n>\nThis is the effective amount of caching between the actual postgres\nbuffers, and the OS buffers. If you are dedicating this machine to\npostgres, I would set it to something like 3.5G. If it is a mixed\nmachine, then you have to think about it.\n\nThis does not change how postgres uses RAM, it changes how postgres\nestimates whether an Index scan will be cheaper than a Sequential scan,\nbased on the likelihood that the data you want will already be cached in\nRam.\n\nIf you dataset is only 85MB, and you don't think it will grow, you\nreally don't have to worry about this much. You have a very small database.\n\n>3.) max_connections, been trying to figure 'how' to\n>determine this #. I've read this is buffer_size+500k\n>per a connection.\n>\n>ie. 128mb(buffer) + 500kb = 128.5mb per connection?\n>\n>\nMax connections is just how many concurrent connections you want to\nallow. If you can get away with lower, do so. Mostly this is to prevent\nconnections * work_mem to get bigger than your real working memory and\ncausing you to swap.\n\n>I was curious about 'sort_mem' I can't find reference\n>of it in the 8.0.3 documentation, has it been removed?\n>\n>\nsort_mem changed to work_mem in 8.0, same thing with vacuum_mem ->\nmaintenance_work_mem.\n\n>work_mem and max_stack_depth set to 4096\n>maintenance_work_mem set to 64mb\n>\n>\nDepends how much space you want to give per connection. 4M is pretty\nsmall for a machine with 4G of RAM, but if your DB is only 85M it might\nbe plenty.\nwork_mem is how much memory a sort/hash/etc will use before it spills to\ndisk. So look at your queries. If you tend to sort most of your 85M db\nin a single query, you might want to make it a little bit more. But if\nall of your queries are very selective, 4M could be plenty.\n\nI would make maintenance_work_mem more like 512M. It is only used for\nCREATE INDEX, VACUUM, etc. Things that are not generally done by more\nthan one process at a time. And it's nice for them to have plenty of\nroom to run fast.\n\n>Thanks for any help on this. I'm sure bombardment of\n>newbies gets old :)\n>\n>-William\n>\n>\nGood luck,\nJohn\n=:->", "msg_date": "Fri, 24 Jun 2005 14:16:02 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: max_connections / shared_buffers / effective_cache_size" }, { "msg_contents": "\n> 1.) shared_buffers I see lot of reference to making\n> this the size of available ram (for the DB). However,\n> I also read to make it the size of pgdata directory. \n\n> 2.) effective_cache_size - from what I read this is\n> the 'total' allowed memory for postgresql to use\n> correct? So, if I am willing to allow 1GB of memory\n> should I make this 1GB?\n\nshared_buffers in your case should be about 10000. It is not taken on a\nper connection basis, but is global for that cluster. Perhaps your\nmemory analysis tool is fooling with you?\n\neffective_cache_size is what you want to set to the amount of ram that\nyou expect the kernel to use for caching the database information in\nmemory. PostgreSQL will not allocate this memory, but it will make\nadjustments to the query execution methods (plan) chosen.\n\n> 3.) max_connections, been trying to figure 'how' to\n> determine this #. I've read this is buffer_size+500k\n> per a connection. \n\n> ie. 
128mb(buffer) + 500kb = 128.5mb per connection?\n\nMax connections is the number of connections to the database you intend\nto allow.\n\nShared_buffers must be of a certain minimum size to have that number of\nconnections, but the 10k number above should cover any reasonable\nconfigurations.\n\n> work_mem and max_stack_depth set to 4096\n> maintenance_work_mem set to 64mb\n\nSort_mem and vacuum_mem became work_mem and maintenance_work_mem as\nthose terms better indicate what they really do.\n\n> Thanks for any help on this. I'm sure bombardment of\n> newbies gets old :)\n\nThat's alright. We only request that once you have things figured out\nthat you, at your leisure, help out a few others.\n\n\n-- \n\n", "msg_date": "Fri, 24 Jun 2005 15:24:05 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: max_connections / shared_buffers /" }, { "msg_contents": "Thanks for the feedback guys.\n\nThe database will grow in size. This first client\nyears worth of data was 85mb (test to proof of\nconcept). The 05 datasets I expect to be much larger.\n\nI think I may increase the work_mem and\nmaintenance_work_mem a bit more as suggested to.\n\nI'm a bit still confused with max_connections.\n\nI've been keeping the max_connections to the # of\nApache connections. Since, this is all currently one\none box and it's a web-based application. I wanted to\nmake sure it stuck with the same # of connections. \nHowever, is there a formula or way to determine if a\ncurrent setup with memory etc to allow such\nconnections?\n\nExactly how is max_connections determined or is a\nguess?\n\nAgain thanks for your help and Mr. Taylors.\n\nLook forward to providing help when I got more a grasp\non things to !:)\n\n-William\n\n--- John A Meinel <[email protected]> wrote:\n\n> Puddle wrote:\n> \n> >Hello, I'm a Sun Solaris sys admin for a start-up\n> >company. I've got the UNIX background, but now I'm\n> >having to learn PostgreSQL to support it on our\n> >servers :)\n> >\n> >Server Background:\n> >\n> >Solaris 10 x86\n> >PostgreSQL 8.0.3\n> >Dell PowerEdge 2650 w/4gb ram.\n> >This is running JBoss/Apache as well (I KNOW the\n> bad\n> >juju of running it all on one box, but it's all we\n> >have currently for this project). I'm dedicating\n> 1gb\n> >for PostgreSQL alone.\n> >\n> >So, far I LOVE it compared to MySQL it's solid.\n> >\n> >The only things I'm kind of confused about (and\n> I've\n> >been searching for answers on lot of good perf\n> docs,\n> >but not too clear to me) are the following:\n> >\n> >1.) shared_buffers I see lot of reference to making\n> >this the size of available ram (for the DB). \n> However,\n> >I also read to make it the size of pgdata\n> directory.\n> >\n> >I notice when I load postgres each daemon is using\n> the\n> >amount of shared memory (shared_buffers). Our\n> current\n> >dataset (pgdata) is 85mb in size. So, I'm curious\n> >should this size reflect the pgdata or the 'actual'\n> >memory given?\n> >\n> >I currently have this at 128mb\n> >\n> >\n> You generally want shared_buffers to be no more than\n> 10% of available\n> ram. Postgres expects the OS to do it's own caching.\n> 128M/4G = 3% seems\n> reasonable to me. I would certainly never set it to\n> 100% of ram.\n> \n> >2.) effective_cache_size - from what I read this is\n> >the 'total' allowed memory for postgresql to use\n> >correct? 
So, if I am willing to allow 1GB of memory\n> >should I make this 1GB?\n> >\n> >\n> This is the effective amount of caching between the\n> actual postgres\n> buffers, and the OS buffers. If you are dedicating\n> this machine to\n> postgres, I would set it to something like 3.5G. If\n> it is a mixed\n> machine, then you have to think about it.\n> \n> This does not change how postgres uses RAM, it\n> changes how postgres\n> estimates whether an Index scan will be cheaper than\n> a Sequential scan,\n> based on the likelihood that the data you want will\n> already be cached in\n> Ram.\n> \n> If you dataset is only 85MB, and you don't think it\n> will grow, you\n> really don't have to worry about this much. You have\n> a very small database.\n> \n> >3.) max_connections, been trying to figure 'how' to\n> >determine this #. I've read this is\n> buffer_size+500k\n> >per a connection.\n> >\n> >ie. 128mb(buffer) + 500kb = 128.5mb per\n> connection?\n> >\n> >\n> Max connections is just how many concurrent\n> connections you want to\n> allow. If you can get away with lower, do so. \n> Mostly this is to prevent\n> connections * work_mem to get bigger than your real\n> working memory and\n> causing you to swap.\n> \n> >I was curious about 'sort_mem' I can't find\n> reference\n> >of it in the 8.0.3 documentation, has it been\n> removed?\n> >\n> >\n> sort_mem changed to work_mem in 8.0, same thing with\n> vacuum_mem ->\n> maintenance_work_mem.\n> \n> >work_mem and max_stack_depth set to 4096\n> >maintenance_work_mem set to 64mb\n> >\n> >\n> Depends how much space you want to give per\n> connection. 4M is pretty\n> small for a machine with 4G of RAM, but if your DB\n> is only 85M it might\n> be plenty.\n> work_mem is how much memory a sort/hash/etc will use\n> before it spills to\n> disk. So look at your queries. If you tend to sort\n> most of your 85M db\n> in a single query, you might want to make it a\n> little bit more. But if\n> all of your queries are very selective, 4M could be\n> plenty.\n> \n> I would make maintenance_work_mem more like 512M. It\n> is only used for\n> CREATE INDEX, VACUUM, etc. Things that are not\n> generally done by more\n> than one process at a time. And it's nice for them\n> to have plenty of\n> room to run fast.\n> \n> >Thanks for any help on this. I'm sure bombardment\n> of\n> >newbies gets old :)\n> >\n> >-William\n> >\n> >\n> Good luck,\n> John\n> =:->\n> \n> \n\n\n\t\t\n____________________________________________________ \nYahoo! Sports \nRekindle the Rivalries. Sign up for Fantasy Football \nhttp://football.fantasysports.yahoo.com\n", "msg_date": "Fri, 24 Jun 2005 12:56:35 -0700 (PDT)", "msg_from": "Puddle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: max_connections / shared_buffers / effective_cache_size questions" } ]
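Pulling the suggestions in this thread together, a rough postgresql.conf sketch for the 8.0.3 box described above; the numbers are illustrative rather than recommendations, and on 8.0 the buffer and cache settings are counted in 8 kB pages while the work_mem settings are in kilobytes:

shared_buffers = 10000           # about 80 MB, a few percent of the 4 GB of RAM
effective_cache_size = 131072    # about 1 GB of OS cache assumed for PostgreSQL,
                                 # since JBoss/Apache share the machine
work_mem = 4096                  # 4 MB per sort or hash operation
maintenance_work_mem = 524288    # 512 MB for VACUUM and CREATE INDEX
max_connections = 100            # placeholder: size this to the web tier's real
                                 # connection pool, keeping max_connections *
                                 # work_mem well below the memory left for PostgreSQL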
[ { "msg_contents": "\nHi:\n\tI'm beginning the push at our company to look at running \npostgreSQL in production here. We have a dual CPU 2.8 GHZ Xeon \nBox running oracle. Typical CPU load runs between 20% and 90%.\nRaw DB size is about 200GB. We hit the disk at roughly 15MB/s\nread volume and 3MB/s write.\n\tAt any given time we have from 2 to 70 sessions running\non the instance. Sessions often persist for 24 hours or more.\n\n Total \t Free \t Free\n Mb Mb %\n\t\n IDXS_EXT10 \t2000 290 \t 14.5 \t\n DATA_EXT100 \t10000 \t3200 \t 32 \t\n SYSTEM \t 220 \t 95.2 \t 43.3\n IDXS_EXT100 \t20000 \t9600 \t 48\n DATA_EXT10 \t6000 \t 2990 \t 49.8\n UNDOB \t 4000 \t2561.1 64\n TEMP \t8000 \t 5802.9 72.5\t\n DATA_LOB_EXT20 \t2000 \t 1560 \t 78\n IDXS_EXT1 \t500 \t 401 \t 80.2\n DATA_EXT1 \t4000 \t 3758 \t 94\nTotal Instance \t56720 \t30258.2 \t53.3 \t \n\n\nThere are some immediate questions from our engineers about performance\n\n\"- Oracle has one particular performance enhancement that Postgres is\nmissing. If you do a select that returns 100,000 rows in a given order,\nand all you want are rows 99101 to 99200, then Oracle can do that very\nefficiently. With Postgres, it has to read the first 99200 rows and\nthen discard the first 99100. But... If we really want to look at\nperformance, then we ought to put together a set of benchmarks of some\ntypical tasks.\"\n\nIs this accurate:\naccoring to\nhttp://www.postgresql.org/docs/8.0/interactive/queries-limit.html\n -- \" The rows skipped by an OFFSET clause still have to be computed \ninside the server; therefore a large OFFSET can be inefficient.\"\n\n\nWhat are the key performance areas I should be looking at?\nWhere is psql not appropriate to replace Oracle?\n\nThanks in advance, apologies if this occurs as spam, please send\nReplies to me off-list. \n", "msg_date": "Fri, 24 Jun 2005 12:49:51 -0700", "msg_from": "\"Greg Maples\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance - moving from oracle to postgresql" }, { "msg_contents": "\n> There are some immediate questions from our engineers about performance\n> \n> \"- Oracle has one particular performance enhancement that Postgres is\n> missing. If you do a select that returns 100,000 rows in a given order,\n> and all you want are rows 99101 to 99200, then Oracle can do that very\n> efficiently. With Postgres, it has to read the first 99200 rows and\n> then discard the first 99100. But... If we really want to look at\n> performance, then we ought to put together a set of benchmarks of some\n> typical tasks.\"\n> \n> Is this accurate:\n> accoring to\n> http://www.postgresql.org/docs/8.0/interactive/queries-limit.html\n> -- \" The rows skipped by an OFFSET clause still have to be computed \n> inside the server; therefore a large OFFSET can be inefficient.\"\n\nYes. That's accurate. First you need to determine whether PostgreSQLs\nmethod is fast enough for that specific query, and if the performance\ngains for other queries (inserts, updates, delete) from reduced index\nmanagement evens out your concern. All performance gains through design\nchanges either increase complexity dramatically or have a performance\ntrade-off elsewhere.\n\n\nI find it rather odd that anyone would issue a single one-off select for\n0.1% of the data about 99.1% of the way through, without doing anything\nwith the rest. 
Perhaps you want to take a look at using a CURSOR?\n\n> Where is psql not appropriate to replace Oracle?\n\nAnything involving reporting using complex aggregates or very long\nrunning selects which Oracle can divide amongst multiple CPUs.\n\nWell, PostgreSQL can do it if you give it enough time to run the query,\nbut a CUBE in PostgreSQL on a TB sized table would likely take\nsignificantly longer to complete. It's mostly just that the Pg\ndevelopers haven't implemented those features optimally, or at all, yet.\n\n-- \n\n", "msg_date": "Fri, 24 Jun 2005 18:00:52 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance - moving from oracle to postgresql" }, { "msg_contents": "> \"- Oracle has one particular performance enhancement that Postgres is\n> missing. If you do a select that returns 100,000 rows in a given order,\n> and all you want are rows 99101 to 99200, then Oracle can do that very\n> efficiently. With Postgres, it has to read the first 99200 rows and\n> then discard the first 99100.\n\nWhen I was reading up on resultset pagination on AskTom I got a clear\nimpression that the same happens in Oracle as well.\nResultset is like:\n0....START...STOP...END\n0............STOP\n START...END\nYou first select all the rows from 0 to STOP and then from that select the\nrows from START to end (which is now STOP). This is done using ROWNUM\ntwice and subselects.\nIt was discussed over there that this obviously produces higher response\ntimes as you move towards the end of a very large resultset. Tom even\npointed out the same effect when using google search, as you move forward\nthrough a very large (millions) search result.\n\nRegards,\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243\n", "msg_date": "Sat, 25 Jun 2005 13:45:27 +0300 (EEST)", "msg_from": "\"Radu-Adrian Popescu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance - moving from oracle to postgresql" }, { "msg_contents": "Hi all\n\nI'm wondering if and how the size of a table affects speed of inserts\ninto it? What if the table has indexes, does that alter the answer?\n\nThanks\n\n\n", "msg_date": "Mon, 27 Jun 2005 13:24:06 +0200", "msg_from": "\"Praveen Raja\" <[email protected]>", "msg_from_op": false, "msg_subject": "Insert performance vs Table size" }, { "msg_contents": "Hi,\n\nAt 13:24 27/06/2005, Praveen Raja wrote:\n>I'm wondering if and how the size of a table affects speed of inserts\n>into it? What if the table has indexes, does that alter the answer?\n\nMany parameters will affect the result:\n- whether there are any indexes (including the primary key, unique \nconstraints...) to update or not\n- whether there are any foreign keys from or to that table\n- the size of the rows\n- whether the table (or at least the bits being updated) fit in RAM or not\n- whether the table has \"holes\" (due to former updates/deletes and vacuum) \nand how they are placed\n- and probably a bunch of other things...\n\nObviously, if you have an append-only (no updates, no deletes) table with \nno indexes and no foreign keys, the size of the table should not matter \nmuch. 
As soon as one of those conditions is not met table size will have an \nimpact, probably small as long as whatever is needed can be held in RAM, a \nlot bigger once it's not the case.\n\nHope that helps,\n\nJacques.\n\n\n", "msg_date": "Mon, 27 Jun 2005 13:40:13 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size" }, { "msg_contents": "Just to clear things up a bit, the scenario that I'm interested in is a\ntable with a large number of indexes on it (maybe 7-8). In this scenario\nother than the overhead of having to maintain the indexes (which I'm\nguessing is the same regardless of the size of the table), does the size\nof the table play a role in determining insert performance (and I mean\nonly insert performance)?\n\n-----Original Message-----\nFrom: Jacques Caron [mailto:[email protected]] \nSent: 27 June 2005 13:40\nTo: Praveen Raja\nCc: [email protected]\nSubject: Re: [PERFORM] Insert performance vs Table size\n\nHi,\n\nAt 13:24 27/06/2005, Praveen Raja wrote:\n>I'm wondering if and how the size of a table affects speed of inserts\n>into it? What if the table has indexes, does that alter the answer?\n\nMany parameters will affect the result:\n- whether there are any indexes (including the primary key, unique \nconstraints...) to update or not\n- whether there are any foreign keys from or to that table\n- the size of the rows\n- whether the table (or at least the bits being updated) fit in RAM or\nnot\n- whether the table has \"holes\" (due to former updates/deletes and\nvacuum) \nand how they are placed\n- and probably a bunch of other things...\n\nObviously, if you have an append-only (no updates, no deletes) table\nwith \nno indexes and no foreign keys, the size of the table should not matter \nmuch. As soon as one of those conditions is not met table size will have\nan \nimpact, probably small as long as whatever is needed can be held in RAM,\na \nlot bigger once it's not the case.\n\nHope that helps,\n\nJacques.\n\n\n", "msg_date": "Mon, 27 Jun 2005 13:50:13 +0200", "msg_from": "\"Praveen Raja\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size" }, { "msg_contents": "Hi,\n\nAt 13:50 27/06/2005, Praveen Raja wrote:\n>Just to clear things up a bit, the scenario that I'm interested in is a\n>table with a large number of indexes on it (maybe 7-8).\n\nIf you're after performance you'll want to carefully consider which indexes \nare really useful and/or redesign your schema so that you can have less \nindexes on that table. 7 or 8 indexes is quite a lot, and that really has a \ncost.\n\n> In this scenario\n>other than the overhead of having to maintain the indexes (which I'm\n>guessing is the same regardless of the size of the table)\n\nDefinitely not: indexes grow with the size of the table. Depending on what \ncolumns you index (and their types), the indexes may be a fraction of the \nsize of the table, or they may be very close in size (in extreme cases they \nmay even be larger). 
With 7 or 8 indexes, that can be quite a large volume \nof data to manipulate, especially if the values of the columns inserted can \nspan the whole range of the index (rather than being solely id- or \ntime-based, for instance, in which case index updates are concentrated in a \nsmall area of each of the indexes), as this means you'll need to have a \nmajority of the indexes in RAM if you want to maintain decent performance.\n\n>does the size of the table play a role in determining insert performance \n>(and I mean\n>only insert performance)?\n\nIn this case, it's really the indexes that'll cause you trouble, though \nheavily fragmented tables (due to lots of deletes or updates) will also \nincur a penalty just for the data part of the inserts.\n\nAlso, don't forget the usual hints if you are going to do lots of inserts:\n- batch them in large transactions, don't do them one at a time\n- better yet, use COPY rather than INSERT\n- in some situations, you might be better of dropping the indexes, doing \nlarge batch inserts, then re-creating the indexes. YMMV depending on the \nexisting/new ratio, whether you need to maintain indexed access to the \ntables, etc.\n- pay attention to foreign keys\n\nJacques.\n\n\n", "msg_date": "Mon, 27 Jun 2005 14:05:03 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size" }, { "msg_contents": "I assume you took size to mean the row size? What I really meant was\ndoes the number of rows a table has affect the performance of new\ninserts into the table (just INSERTs) all other things remaining\nconstant. Sorry for the confusion.\n\nI know that having indexes on the table adds an overhead but again does\nthis overhead increase (for an INSERT operation) with the number of rows\nthe table contains?\n\nMy instinct says no to both. If I'm wrong can someone explain why the\nnumber of rows in a table affects INSERT performance?\n\nThanks again\n\n-----Original Message-----\nFrom: Jacques Caron [mailto:[email protected]] \nSent: 27 June 2005 14:05\nTo: Praveen Raja\nCc: [email protected]\nSubject: RE: [PERFORM] Insert performance vs Table size\n\nHi,\n\nAt 13:50 27/06/2005, Praveen Raja wrote:\n>Just to clear things up a bit, the scenario that I'm interested in is a\n>table with a large number of indexes on it (maybe 7-8).\n\nIf you're after performance you'll want to carefully consider which\nindexes \nare really useful and/or redesign your schema so that you can have less \nindexes on that table. 7 or 8 indexes is quite a lot, and that really\nhas a \ncost.\n\n> In this scenario\n>other than the overhead of having to maintain the indexes (which I'm\n>guessing is the same regardless of the size of the table)\n\nDefinitely not: indexes grow with the size of the table. Depending on\nwhat \ncolumns you index (and their types), the indexes may be a fraction of\nthe \nsize of the table, or they may be very close in size (in extreme cases\nthey \nmay even be larger). 
With 7 or 8 indexes, that can be quite a large\nvolume \nof data to manipulate, especially if the values of the columns inserted\ncan \nspan the whole range of the index (rather than being solely id- or \ntime-based, for instance, in which case index updates are concentrated\nin a \nsmall area of each of the indexes), as this means you'll need to have a \nmajority of the indexes in RAM if you want to maintain decent\nperformance.\n\n>does the size of the table play a role in determining insert\nperformance \n>(and I mean\n>only insert performance)?\n\nIn this case, it's really the indexes that'll cause you trouble, though \nheavily fragmented tables (due to lots of deletes or updates) will also \nincur a penalty just for the data part of the inserts.\n\nAlso, don't forget the usual hints if you are going to do lots of\ninserts:\n- batch them in large transactions, don't do them one at a time\n- better yet, use COPY rather than INSERT\n- in some situations, you might be better of dropping the indexes, doing\n\nlarge batch inserts, then re-creating the indexes. YMMV depending on the\n\nexisting/new ratio, whether you need to maintain indexed access to the \ntables, etc.\n- pay attention to foreign keys\n\nJacques.\n\n\n", "msg_date": "Tue, 28 Jun 2005 11:50:08 +0200", "msg_from": "\"Praveen Raja\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size" }, { "msg_contents": "Hi,\n\nAt 11:50 28/06/2005, Praveen Raja wrote:\n>I assume you took size to mean the row size?\n\nNope, the size of the table.\n\n> What I really meant was\n>does the number of rows a table has affect the performance of new\n>inserts into the table (just INSERTs) all other things remaining\n>constant. Sorry for the confusion.\n\nAs I said previously, in most cases it does. One of the few cases where it \ndoesn't would be an append-only table, no holes, no indexes, no foreign keys...\n\n>I know that having indexes on the table adds an overhead but again does\n>this overhead increase (for an INSERT operation) with the number of rows\n>the table contains?\n\nIt depends on what you are indexing. If the index key is something that \ngrows monotonically (e.g. a unique ID or a timestamp), then the size of the \ntable (and hence of the indexes) should have a very limited influence on \nthe INSERTs. If the index key is anything else (and that must definitely be \nthe case if you have 7 or 8 indexes!), then that means updates will happen \nall over the indexes, which means a lot of read and write activity, and \nonce the total size of your indexes exceeds what can be cached in RAM, \nperformance will decrease quite a bit. Of course if your keys are \nconcentrated in a few limited areas of the key ranges it might help.\n\n>My instinct says no to both. If I'm wrong can someone explain why the\n>number of rows in a table affects INSERT performance?\n\nAs described above, maintaining indexes when you \"hit\" anywhere in said \nindexes is very costly. The larger the table, the larger the indexes, the \nhigher the number of levels in the trees, etc. As long as it fits in RAM, \nit shouldn't be a problem. 
Once you exceed that threshold, you start \ngetting a lot of random I/O, and that's expensive.\n\nAgain, it depends a lot on your exact schema, the nature of the data, the \nspread of the different values, etc, but I would believe it's more often \nthe case than not.\n\nJacques.\n\n\n", "msg_date": "Tue, 28 Jun 2005 12:43:47 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size" }, { "msg_contents": "\"Praveen Raja\" <[email protected]> writes:\n> I know that having indexes on the table adds an overhead but again does\n> this overhead increase (for an INSERT operation) with the number of rows\n> the table contains?\n\nTypical index implementations (such as b-tree) have roughly O(log N)\ncost to insert or lookup a key in an N-entry index. So yes, it grows,\nthough slowly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jun 2005 10:25:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance vs Table size " } ]
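A short sketch of the bulk-load hints Jacques gives above (one large transaction, COPY instead of row-by-row INSERTs, and dropping/recreating secondary indexes around the load); the table, index and file names are made up for illustration:

BEGIN;
DROP INDEX logdata_ts_idx;                -- hypothetical secondary index
COPY logdata FROM '/tmp/logdata.copy';    -- server-side file, needs superuser;
                                          -- use \copy from psql otherwise
CREATE INDEX logdata_ts_idx ON logdata (ts);
ANALYZE logdata;                          -- refresh planner statistics
COMMIT;

Whether dropping the indexes actually pays off depends on the ratio of new rows to existing rows and on whether indexed access is needed while the load runs, as Jacques notes.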
[ { "msg_contents": ">\n> Hmm, I can't do this, i'm afraid. Or it would be rather difficult\n>\n> My query is executed through a webpage (link to the page in a \n> navigation bar)\n> I do not know how many records there are (data is changing, and \n> currently is 600k records)\n>\n> The only thing I could do, is doing this in a function where I first \n> get the page, and then decide whether to use the normal sort order or \n> the reversed order\n> That would put my weak point right in the middle, which is not that \n> bad, but I would like to find an easier way, if that is possible\n>\n> Huge memory would help ?\n>\n> On 24 Jun 2005, at 20:54, hubert depesz lubaczewski wrote:\n>\n>> On 6/24/05, Yves Vindevogel <[email protected]> wrote:\n>>> So, when I want the last page, which is: 600k / 25 = page 24000 - 1 =\n>>> 23999, I issue the offset of 23999 * 25\n>>\n>> improving this is hard, but not impossible.\n>> if you have right index created, try to reverse the order and fetch\n>> first adverts, and then resort it (just the 25 adverts) in correct\n>> order.\n>> it will be faster.\n>>\n>> depesz\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 24 Jun 2005 22:19:26 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Speed with offset clause" }, { "msg_contents": "I just ran this query\n\nselect p.* from tblPrintjobs p , (select oid from tblPrintjobs limit 25 \noffset 622825) as subset where p.oid = subset.oid\n\nAnd it seems to be a bit faster than without the subselect, probably \nbecause I'm only getting one column.\nThe speed gain is not that high though\n\nOn 24 Jun 2005, at 22:19, Yves Vindevogel wrote:\n\n>>\n>> Hmm, I can't do this, i'm afraid. 
Or it would be rather difficult\n>>\n>> My query is executed through a webpage (link to the page in a \n>> navigation bar)\n>> I do not know how many records there are (data is changing, and \n>> currently is 600k records)\n>>\n>> The only thing I could do, is doing this in a function where I first \n>> get the page, and then decide whether to use the normal sort order or \n>> the reversed order\n>> That would put my weak point right in the middle, which is not that \n>> bad, but I would like to find an easier way, if that is possible\n>>\n>> Huge memory would help ?\n>>\n>> On 24 Jun 2005, at 20:54, hubert depesz lubaczewski wrote:\n>>\n>>> On 6/24/05, Yves Vindevogel <[email protected]> wrote:\n>>>> So, when I want the last page, which is: 600k / 25 = page 24000 - 1 \n>>>> =\n>>>> 23999, I issue the offset of 23999 * 25\n>>>\n>>> improving this is hard, but not impossible.\n>>> if you have right index created, try to reverse the order and fetch\n>>> first adverts, and then resort it (just the 25 adverts) in correct\n>>> order.\n>>> it will be faster.\n>>>\n>>> depesz\n>>>\n>>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> Yves Vindevogel\n>> Implements\n>>\n> <Pasted Graphic 2.tiff>\n>>\n>> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>>\n>> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>>\n>> Web: http://www.implements.be\n>>\n>> First they ignore you. Then they laugh at you. Then they fight you. \n>> Then you win.\n>> Mahatma Ghandi.\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Fri, 24 Jun 2005 22:34:52 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed with offset clause" }, { "msg_contents": "\n> I just ran this query\n>\n> select p.* from tblPrintjobs p , (select oid from tblPrintjobs limit 25\n> offset 622825) as subset where p.oid = subset.oid\n>\n\nI'm just curious here, from a social point of view. How often do you think\nsomeone will paginate over say 300K rows in steps of 25 ?\nThe way I see things, pagination is only meant for humans. If someone\nreally looks at 300K rows then it's really cheaper and makes more sense to\ndownload them/import into spreadsheet program instead of clicking next\n12.000 times.\nIf it's not intended for humans then there's better ways of doing this.\n\nRegards,\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243\n", "msg_date": "Sat, 25 Jun 2005 14:42:44 +0300 (EEST)", "msg_from": "\"Radu-Adrian Popescu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed with offset clause" } ]
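A variant of the subselect Yves runs above, with an ORDER BY added inside the subquery; without one, LIMIT/OFFSET returns an unspecified set of 25 rows, so the pages are not stable between requests. Table and column names are taken from the thread; oid stands in for whatever column the report is really sorted on:

SELECT p.*
FROM tblPrintjobs p
JOIN (
    SELECT oid
    FROM tblPrintjobs
    ORDER BY oid             -- an index on this column keeps the inner scan cheap
    LIMIT 25 OFFSET 622825
) AS subset ON p.oid = subset.oid
ORDER BY p.oid;

Even with the ORDER BY, the skipped rows still have to be walked; the subselect only narrows how much data is touched while skipping them, which is why the gain Yves measures is modest.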
[ { "msg_contents": "Hi,\n\nThe article seems to dismiss RAID5 a little too quickly. For many\napplication types, using fast striped mirrors for the index space and\nRAID5 for the data can offer quite good performance (provided a\nsufficient number of spindles for the RAID5 - 5 or 6 disks or more). In\nfact, random read (ie most webapps) performance of RAID5 isn't\nnecessarily worse than that of RAID10, and can in fact be better in some\ncircumstances. And, using the cheaper RAID5 might allow you to do that\nseparation between index and data in the first place.\n\nJust thought I'd mention it,\nDmitri\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Frank Wiles\nSent: Wednesday, June 22, 2005 10:52 AM\nTo: [email protected]\nSubject: [PERFORM] Performance Tuning Article\n\n\n\n Hi Everyone, \n\n I've put together a short article and posted it online regarding\n performance tuning PostgreSQL in general. I believe it helps to bring\n together the info in a easy to digest manner. I would appreciate any\n feedback, comments, and especially any technical corrections. \n\n The article can be found here: \n\n http://www.revsys.com/writings/postgresql-performance.html\n\n Thanks! \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you\nreceived this in error, please contact the sender and delete the\nmaterial from any computer\n", "msg_date": "Fri, 24 Jun 2005 18:02:36 -0400", "msg_from": "\"Dmitri Bichko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning Article" } ]
[ { "msg_contents": "Alvaro,\n\n> Am I the only one annoyed by the fact that the patch is not very nice to\n> 80-columns-wide terminals? It doesn't need to be a rigid rule but I\n> think the code looks much better if it's not too wide. This code is\n> wide already, but I think we should be making it better, not the other\n> way around.\n\nYup - fixed (as well as I can without mucking readability).\n\n> Also, your text editor seems to be messing the indentation of comments\n> when there are ( or other symbols in the comment text. (Maybe this\n> doesn't matter a lot because pgindent will fix it, but still -- it makes\n> it slightly more difficult to read.)\n\nYah - I think I fixed several mis-indented comments. I'm using vim with\ntabstop=4. I personally don't like tabs in text and would prefer them\nexpanded using spaces, but that's a nice way to make small formatting\nchanges look huge in a cvs diff.\n\nSee attached - only formatting changes included.\n\n- Luke", "msg_date": "Mon, 27 Jun 2005 02:23:05 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\n\nLuke Lonergan wrote:\n\n>Yah - I think I fixed several mis-indented comments. I'm using vim with\n>tabstop=4. I personally don't like tabs in text and would prefer them\n>expanded using spaces, but that's a nice way to make small formatting\n>changes look huge in a cvs diff.\n>\n> \n>\n\nYou might like to look at running pgindent (see src/tools/pgindent) over \nthe file before cutting a patch. Since this is usually run over each \nfile just before a release, the only badness should be things from \nrecent patches.\n\ncheers\n\nandrew\n", "msg_date": "Mon, 27 Jun 2005 07:45:19 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Andrew,\n\n> You might like to look at running pgindent (see src/tools/pgindent) over\n> the file before cutting a patch. Since this is usually run over each\n> file just before a release, the only badness should be things from\n> recent patches.\n\nI've attached two patches, one gained from running pgindent against the\ncurrent CVS tip copy.c (:-D) and one gained by running the COPY FROM perf\nimprovements through the same. Nifty tool!\n\nOnly formatting changes in these.\n\n- Luke", "msg_date": "Mon, 27 Jun 2005 11:52:30 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\n\nLuke Lonergan wrote:\n\n>Andrew,\n>\n> \n>\n>>You might like to look at running pgindent (see src/tools/pgindent) over\n>>the file before cutting a patch. Since this is usually run over each\n>>file just before a release, the only badness should be things from\n>>recent patches.\n>> \n>>\n>\n>I've attached two patches, one gained from running pgindent against the\n>current CVS tip copy.c (:-D) and one gained by running the COPY FROM perf\n>improvements through the same. Nifty tool!\n>\n>Only formatting changes in these.\n>\n> \n>\n>\nLuke,\n\nSomething strange has happened. I suspect that you've inadvertantly used \nGNU indent or an unpatched BSD indent. 
pgindent needs a special patched \nBSD indent to work according to the PG standards - see the README\n\ncheers\n\nandrew\n", "msg_date": "Mon, 27 Jun 2005 16:20:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Andrew,\n\n> Something strange has happened. I suspect that you've inadvertantly used\n> GNU indent or an unpatched BSD indent. pgindent needs a special patched\n> BSD indent to work according to the PG standards - see the README\n\nOK - phew! I generated new symbols for pgindent and fixed a bug in the awk\nscripting within (diff attached) and ran against the CVS tip copy.c and got\nonly minor changes in formatting that appear to be consistent with the rest\nof the code. I pgindent'ed the COPY FROM performance modded code and it\nlooks good and tests good.\n\nOnly formatting changes to the previous patch for copy.c attached.\n\nPatch to update pgindent with new symbols and fix a bug in an awk section\n(extra \\\\ in front of a ')').\n\n- Luke", "msg_date": "Mon, 27 Jun 2005 21:44:59 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Luke, Alon\n\nOK, I'm going to apply the patch to my copy and try to get my head \naround it. meanwhile:\n\n. we should not be describing things as \"old\" or \"new\". The person \nreading the code might have no knowledge of the history, and should not \nneed to.\n. we should not have \"slow\" and \"fast\" either. We should have \"text\", \n\"csv\" and \"binary\".\n\nIOW, the patch comments look slightly like it is intended for after the \nfact application rather than incorporation into the main code.\n\nAre you looking at putting CSV mode into the fast code? Please let me \nknow if you have questions about that. There are only a few days left to \nwhip this into shape.\n\ncheers\n\nandrew\n\nLuke Lonergan wrote:\n\n>Andrew,\n>\n> \n>\n>>Something strange has happened. I suspect that you've inadvertantly used\n>>GNU indent or an unpatched BSD indent. pgindent needs a special patched\n>>BSD indent to work according to the PG standards - see the README\n>> \n>>\n>\n>OK - phew! I generated new symbols for pgindent and fixed a bug in the awk\n>scripting within (diff attached) and ran against the CVS tip copy.c and got\n>only minor changes in formatting that appear to be consistent with the rest\n>of the code. I pgindent'ed the COPY FROM performance modded code and it\n>looks good and tests good.\n>\n>Only formatting changes to the previous patch for copy.c attached.\n>\n>Patch to update pgindent with new symbols and fix a bug in an awk section\n>(extra \\\\ in front of a ')').\n>\n>- Luke \n>\n> \n>\n", "msg_date": "Tue, 28 Jun 2005 08:53:50 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Luke Lonergan wrote:\n> Patch to update pgindent with new symbols and fix a bug in an awk section\n> (extra \\\\ in front of a ')').\n\nYea, that '\\' wasn't needed. I applied the following patch to use //\ninstead of \"\" for patterns, and removed the unneeded backslash.\n\nI will update the typedefs in a separate commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: src/tools/pgindent/pgindent\n===================================================================\nRCS file: /cvsroot/pgsql/src/tools/pgindent/pgindent,v\nretrieving revision 1.73\ndiff -c -c -r1.73 pgindent\n*** src/tools/pgindent/pgindent\t7 Oct 2004 14:15:50 -0000\t1.73\n--- src/tools/pgindent/pgindent\t28 Jun 2005 23:12:07 -0000\n***************\n*** 50,62 ****\n \t \t\tif (NR >= 2)\n \t\t\t\tprint line1;\n \t\t\tif (NR >= 2 &&\n! \t\t\t line2 ~ \"^{[ \t]*$\" &&\n! \t\t\t line1 !~ \"^struct\" &&\n! \t\t\t line1 !~ \"^enum\" &&\n! \t\t\t line1 !~ \"^typedef\" &&\n! \t\t\t line1 !~ \"^extern[ \t][ \t]*\\\"C\\\"\" &&\n! \t\t\t line1 !~ \"=\" &&\n! \t\t\t line1 ~ \"\\)\")\n \t\t\t\tprint \"int\tpgindent_func_no_var_fix;\";\n \t\t\tline1 = line2;\n \t\t}\n--- 50,62 ----\n \t \t\tif (NR >= 2)\n \t\t\t\tprint line1;\n \t\t\tif (NR >= 2 &&\n! \t\t\t line2 ~ /^{[ \t]*$/ &&\n! \t\t\t line1 !~ /^struct/ &&\n! \t\t\t line1 !~ /^enum/ &&\n! \t\t\t line1 !~ /^typedef/ &&\n! \t\t\t line1 !~ /^extern[ \t][ \t]*\"C\"/ &&\n! \t\t\t line1 !~ /=/ &&\n! \t\t\t line1 ~ /)/)\n \t\t\t\tprint \"int\tpgindent_func_no_var_fix;\";\n \t\t\tline1 = line2;\n \t\t}\n***************\n*** 70,77 ****\n \t\t\tline2 = $0;\n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! \t\t\tif (line1 ~ \"^#ifdef[ \t]*__cplusplus\" &&\n! \t\t\t line2 ~ \"^extern[ \t]*\\\"C\\\"[ \t]*$\")\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint line2;\n--- 70,77 ----\n \t\t\tline2 = $0;\n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! \t\t\tif (line1 ~ /^#ifdef[ \t]*__cplusplus/ &&\n! \t\t\t line2 ~ /^extern[ \t]*\"C\"[ \t]*$/)\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint line2;\n***************\n*** 81,88 ****\n \t\t\t\tline2 = \"\";\n \t\t\t\tskips = 2;\n \t\t\t}\n! \t\t\telse if (line1 ~ \"^#ifdef[ \t]*__cplusplus\" &&\n! \t\t\t line2 ~ \"^}[ \t]*$\")\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint \"/* Close extern \\\"C\\\" */\";\n--- 81,88 ----\n \t\t\t\tline2 = \"\";\n \t\t\t\tskips = 2;\n \t\t\t}\n! \t\t\telse if (line1 ~ /^#ifdef[ \t]*__cplusplus/ &&\n! \t\t\t line2 ~ /^}[ \t]*$/)\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint \"/* Close extern \\\"C\\\" */\";\n***************\n*** 1732,1738 ****\n # work around misindenting of function with no variables defined\n \tawk '\n \t{\n! \t\tif ($0 ~ \"^[ \t]*int[ \t]*pgindent_func_no_var_fix;\")\n \t\t{\n \t\t\tif (getline && $0 != \"\")\n \t\t\t\tprint $0;\n--- 1732,1738 ----\n # work around misindenting of function with no variables defined\n \tawk '\n \t{\n! \t\tif ($0 ~ /^[ \t]*int[ \t]*pgindent_func_no_var_fix;/)\n \t\t{\n \t\t\tif (getline && $0 != \"\")\n \t\t\t\tprint $0;\n***************\n*** 1751,1759 ****\n #\t\t\tline3 = $0; \n #\t\t\tif (skips > 0)\n #\t\t\t\tskips--;\n! #\t\t\tif (line1 ~ \"\t\t*{$\" &&\n! #\t\t\t line2 ~ \"\t\t*[^;{}]*;$\" &&\n! #\t\t\t line3 ~ \"\t\t*}$\")\n #\t\t\t{\n #\t\t\t\tprint line2;\n #\t\t\t\tline2 = \"\";\n--- 1751,1759 ----\n #\t\t\tline3 = $0; \n #\t\t\tif (skips > 0)\n #\t\t\t\tskips--;\n! #\t\t\tif (line1 ~ /\t\t*{$/ &&\n! #\t\t\t line2 ~ /\t\t*[^;{}]*;$/ &&\n! #\t\t\t line3 ~ /\t\t*}$/)\n #\t\t\t{\n #\t\t\t\tprint line2;\n #\t\t\t\tline2 = \"\";\n***************\n*** 1778,1786 ****\n \t\t\tline3 = $0; \n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! \t\t\tif (line1 ~ \"\t*{$\" &&\n! \t\t\t line2 ~ \"^$\" &&\n! \t\t\t line3 ~ \"\t\t*/\\\\*$\")\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint line3;\n--- 1778,1786 ----\n \t\t\tline3 = $0; \n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! 
\t\t\tif (line1 ~ /\t*{$/ &&\n! \t\t\t line2 ~ /^$/ &&\n! \t\t\t line3 ~ /\t\t*\\/\\*$/)\n \t\t\t{\n \t\t\t\tprint line1;\n \t\t\t\tprint line3;\n***************\n*** 1819,1826 ****\n \t\t\tline2 = $0;\n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! \t\t\tif (line1 ~ \"^$\" &&\n! \t\t\t line2 ~ \"^#endif\")\n \t\t\t{\n \t\t\t\tprint line2;\n \t\t\t\tline2 = \"\";\n--- 1819,1826 ----\n \t\t\tline2 = $0;\n \t\t\tif (skips > 0)\n \t\t\t\tskips--;\n! \t\t\tif (line1 ~ /^$/ &&\n! \t\t\t line2 ~ /^#endif/)\n \t\t\t{\n \t\t\t\tprint line2;\n \t\t\t\tline2 = \"\";\n***************\n*** 1844,1850 ****\n \t\t\tline1 = line2;\n \t\t}\n \t\tEND {\n! \t\t\tif (NR >= 1 && line2 ~ \"^#endif\")\n \t\t\t\tprintf \"\\n\";\n \t\t\tprint line1;\n \t\t}' |\n--- 1844,1850 ----\n \t\t\tline1 = line2;\n \t\t}\n \t\tEND {\n! \t\t\tif (NR >= 1 && line2 ~ /^#endif/)\n \t\t\t\tprintf \"\\n\";\n \t\t\tprint line1;\n \t\t}' |\n***************\n*** 1853,1868 ****\n # like real functions.\n \tawk '\tBEGIN\t{paren_level = 0} \n \t{\n! \t\tif ($0 ~ /^[a-zA-Z_][a-zA-Z_0-9]*[^\\(]*$/)\n \t\t{\n \t\t\tsaved_len = 0;\n \t\t\tsaved_lines[++saved_len] = $0;\n \t\t\tif ((getline saved_lines[++saved_len]) == 0)\n \t\t\t\tprint saved_lines[1];\n \t\t\telse\n! \t\t\tif (saved_lines[saved_len] !~ /^[a-zA-Z_][a-zA-Z_0-9]*\\(/ ||\n! \t\t\t saved_lines[saved_len] ~ /^[a-zA-Z_][a-zA-Z_0-9]*\\(.*\\)$/ ||\n! \t\t\t saved_lines[saved_len] ~ /^[a-zA-Z_][a-zA-Z_0-9]*\\(.*\\);$/)\n \t\t\t{\n \t\t\t\tprint saved_lines[1];\n \t\t\t\tprint saved_lines[2];\n--- 1853,1868 ----\n # like real functions.\n \tawk '\tBEGIN\t{paren_level = 0} \n \t{\n! \t\tif ($0 ~ /^[a-zA-Z_][a-zA-Z_0-9]*[^(]*$/)\n \t\t{\n \t\t\tsaved_len = 0;\n \t\t\tsaved_lines[++saved_len] = $0;\n \t\t\tif ((getline saved_lines[++saved_len]) == 0)\n \t\t\t\tprint saved_lines[1];\n \t\t\telse\n! \t\t\tif (saved_lines[saved_len] !~ /^[a-zA-Z_][a-zA-Z_0-9]*(/ ||\n! \t\t\t saved_lines[saved_len] ~ /^[a-zA-Z_][a-zA-Z_0-9]*(.*)$/ ||\n! \t\t\t saved_lines[saved_len] ~ /^[a-zA-Z_][a-zA-Z_0-9]*(.*);$/)\n \t\t\t{\n \t\t\t\tprint saved_lines[1];\n \t\t\t\tprint saved_lines[2];\n***************\n*** 1879,1885 ****\n \t\t\t\t}\n \t\t\t\tfor (i=1; i <= saved_len; i++)\n \t\t\t\t{\n! \t\t\t\t\tif (i == 1 && saved_lines[saved_len] ~ /\\);$/)\n \t\t\t\t\t{\n \t\t\t\t\t\tprintf \"%s\", saved_lines[i];\n \t\t\t\t\t\tif (substr(saved_lines[i], length(saved_lines[i]),1) != \"*\")\n--- 1879,1885 ----\n \t\t\t\t}\n \t\t\t\tfor (i=1; i <= saved_len; i++)\n \t\t\t\t{\n! \t\t\t\t\tif (i == 1 && saved_lines[saved_len] ~ /);$/)\n \t\t\t\t\t{\n \t\t\t\t\t\tprintf \"%s\", saved_lines[i];\n \t\t\t\t\t\tif (substr(saved_lines[i], length(saved_lines[i]),1) != \"*\")", "msg_date": "Tue, 28 Jun 2005 19:14:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "I revisited my patch and removed the code duplications that were there, and\nadded support for CSV with buffered input, so CSV now runs faster too\n(although it is not as optimized as the TEXT format parsing). 
So now\nTEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n\nPatch attached.\n\nGreetings,\nAlon.", "msg_date": "Thu, 14 Jul 2005 17:22:18 -0700", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\n\nAlon Goldshuv wrote:\n\n>I revisited my patch and removed the code duplications that were there, and\n>added support for CSV with buffered input, so CSV now runs faster too\n>(although it is not as optimized as the TEXT format parsing). So now\n>TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n>\n>Patch attached.\n>\n> \n>\n\nI do not have time to review this 2900 line patch analytically, nor to \nbenchmark it. I have done some functional testing of it on Windows, and \ntried to break it in text and CSV modes, and with both Unix and Windows \ntype line endings - I have not observed any breakage.\n\nThis does need lots of eyeballs, though.\n\ncheers\n\nandrew\n", "msg_date": "Tue, 19 Jul 2005 11:23:18 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "On Thu, 14 Jul 2005 17:22:18 -0700\n\"Alon Goldshuv\" <[email protected]> wrote:\n\n> I revisited my patch and removed the code duplications that were there, and\n> added support for CSV with buffered input, so CSV now runs faster too\n> (although it is not as optimized as the TEXT format parsing). So now\n> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n\nHi Alon,\n\nI'm curious, what kind of system are you testing this on? I'm trying to\nload 100GB of data in our dbt3 workload on a 4-way itanium2. I'm\ninterested in the results you would expect.\n\nMark\n", "msg_date": "Tue, 19 Jul 2005 12:54:53 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Hi Mark,\n\nI improved the data *parsing* capabilities of COPY, and didn't touch the\ndata conversion or data insertion parts of the code. The parsing improvement\nwill vary largely depending on the ratio of parsing -to- converting and\ninserting. \n\nTherefore, the speed increase really depends on the nature of your data:\n\n100GB file with\nlong data rows (lots of parsing)\nSmall number of columns (small number of attr conversions per row)\nless rows (less tuple insertions)\n\nWill show the best performance improvements.\n\nHowever, same file size 100GB with\nShort data rows (minimal parsing)\nlarge number of columns (large number of attr conversions per row)\nAND/OR\nmore rows (more tuple insertions)\n\nWill show improvements but not as significant.\nIn general I'll estimate 40%-95% improvement in load speed for the 1st case\nand 10%-40% for the 2nd. But that also depends on the hardware, disk speed\netc... This is for TEXT format. As for CSV, it may be faster but not as much\nas I specified here. BINARY will stay the same as before.\n\nHTH\nAlon.\n\n\n\n\n\n\nOn 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> On Thu, 14 Jul 2005 17:22:18 -0700\n> \"Alon Goldshuv\" <[email protected]> wrote:\n> \n>> I revisited my patch and removed the code duplications that were there, and\n>> added support for CSV with buffered input, so CSV now runs faster too\n>> (although it is not as optimized as the TEXT format parsing). 
So now\n>> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n> \n> Hi Alon,\n> \n> I'm curious, what kind of system are you testing this on? I'm trying to\n> load 100GB of data in our dbt3 workload on a 4-way itanium2. I'm\n> interested in the results you would expect.\n> \n> Mark\n> \n\n\n", "msg_date": "Tue, 19 Jul 2005 14:05:56 -0700", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Hi Alon,\n\nYeah, that helps. I just need to break up my scripts a little to just\nload the data and not build indexes.\n\nIs the following information good enough to give a guess about the data\nI'm loading, if you don't mind? ;) Here's a link to my script to create\ntables:\nhttp://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb44f7f23437e432&path=scripts/pgsql/create_tables.sh.in\n\nFile sizes:\n-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl\n-rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl\n-rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl\n-rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl\n-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl\n-rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl\n-rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl\n-rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl\n\nNumber of rows:\n# wc -l *.tbl\n 15000000 customer.tbl\n 600037902 lineitem.tbl\n 25 nation.tbl\n 150000000 orders.tbl\n 20000000 part.tbl\n 80000000 partsupp.tbl\n 5 region.tbl\n 1000000 supplier.tbl\n\nThanks,\nMark\n\nOn Tue, 19 Jul 2005 14:05:56 -0700\n\"Alon Goldshuv\" <[email protected]> wrote:\n\n> Hi Mark,\n> \n> I improved the data *parsing* capabilities of COPY, and didn't touch the\n> data conversion or data insertion parts of the code. The parsing improvement\n> will vary largely depending on the ratio of parsing -to- converting and\n> inserting. \n> \n> Therefore, the speed increase really depends on the nature of your data:\n> \n> 100GB file with\n> long data rows (lots of parsing)\n> Small number of columns (small number of attr conversions per row)\n> less rows (less tuple insertions)\n> \n> Will show the best performance improvements.\n> \n> However, same file size 100GB with\n> Short data rows (minimal parsing)\n> large number of columns (large number of attr conversions per row)\n> AND/OR\n> more rows (more tuple insertions)\n> \n> Will show improvements but not as significant.\n> In general I'll estimate 40%-95% improvement in load speed for the 1st case\n> and 10%-40% for the 2nd. But that also depends on the hardware, disk speed\n> etc... This is for TEXT format. As for CSV, it may be faster but not as much\n> as I specified here. BINARY will stay the same as before.\n> \n> HTH\n> Alon.\n> \n> \n> \n> \n> \n> \n> On 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n> \n> > On Thu, 14 Jul 2005 17:22:18 -0700\n> > \"Alon Goldshuv\" <[email protected]> wrote:\n> > \n> >> I revisited my patch and removed the code duplications that were there, and\n> >> added support for CSV with buffered input, so CSV now runs faster too\n> >> (although it is not as optimized as the TEXT format parsing). So now\n> >> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n> > \n> > Hi Alon,\n> > \n> > I'm curious, what kind of system are you testing this on? I'm trying to\n> > load 100GB of data in our dbt3 workload on a 4-way itanium2. 
I'm\n> > interested in the results you would expect.\n> > \n> > Mark\n> > \n> \n", "msg_date": "Tue, 19 Jul 2005 14:37:58 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Mark,\n\nThanks for the info.\n\nYes, isolating indexes out of the picture is a good idea for this purpose.\n\nI can't really give a guess to how fast the load rate should be. I don't\nknow how your system is configured, and all the hardware characteristics\n(and even if I knew that info I may not be able to guess...). I am pretty\nconfident that the load will be faster than before, I'll risk that ;-)\nLooking into your TPC-H size and metadata I'll estimate that\npartsupp,customer and orders will have the most significant increase in load\nrate. You could start with those.\n\nI guess the only way to really know is to try... Load several times with the\nexisting PG-COPY and then load several times with the patched COPY and\ncompare. I'll be curious to hear your results.\n\nThx,\nAlon.\n\n \n\n\nOn 7/19/05 2:37 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> Hi Alon,\n> \n> Yeah, that helps. I just need to break up my scripts a little to just\n> load the data and not build indexes.\n> \n> Is the following information good enough to give a guess about the data\n> I'm loading, if you don't mind? ;) Here's a link to my script to create\n> tables:\n> http://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb44f\n> 7f23437e432&path=scripts/pgsql/create_tables.sh.in\n> \n> File sizes:\n> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl\n> -rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl\n> -rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl\n> -rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl\n> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl\n> -rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl\n> -rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl\n> -rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl\n> \n> Number of rows:\n> # wc -l *.tbl\n> 15000000 customer.tbl\n> 600037902 lineitem.tbl\n> 25 nation.tbl\n> 150000000 orders.tbl\n> 20000000 part.tbl\n> 80000000 partsupp.tbl\n> 5 region.tbl\n> 1000000 supplier.tbl\n> \n> Thanks,\n> Mark\n> \n> On Tue, 19 Jul 2005 14:05:56 -0700\n> \"Alon Goldshuv\" <[email protected]> wrote:\n> \n>> Hi Mark,\n>> \n>> I improved the data *parsing* capabilities of COPY, and didn't touch the\n>> data conversion or data insertion parts of the code. The parsing improvement\n>> will vary largely depending on the ratio of parsing -to- converting and\n>> inserting. \n>> \n>> Therefore, the speed increase really depends on the nature of your data:\n>> \n>> 100GB file with\n>> long data rows (lots of parsing)\n>> Small number of columns (small number of attr conversions per row)\n>> less rows (less tuple insertions)\n>> \n>> Will show the best performance improvements.\n>> \n>> However, same file size 100GB with\n>> Short data rows (minimal parsing)\n>> large number of columns (large number of attr conversions per row)\n>> AND/OR\n>> more rows (more tuple insertions)\n>> \n>> Will show improvements but not as significant.\n>> In general I'll estimate 40%-95% improvement in load speed for the 1st case\n>> and 10%-40% for the 2nd. But that also depends on the hardware, disk speed\n>> etc... This is for TEXT format. As for CSV, it may be faster but not as much\n>> as I specified here. 
BINARY will stay the same as before.\n>> \n>> HTH\n>> Alon.\n>> \n>> \n>> \n>> \n>> \n>> \n>> On 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n>> \n>>> On Thu, 14 Jul 2005 17:22:18 -0700\n>>> \"Alon Goldshuv\" <[email protected]> wrote:\n>>> \n>>>> I revisited my patch and removed the code duplications that were there, and\n>>>> added support for CSV with buffered input, so CSV now runs faster too\n>>>> (although it is not as optimized as the TEXT format parsing). So now\n>>>> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original\n>>>> file.\n>>> \n>>> Hi Alon,\n>>> \n>>> I'm curious, what kind of system are you testing this on? I'm trying to\n>>> load 100GB of data in our dbt3 workload on a 4-way itanium2. I'm\n>>> interested in the results you would expect.\n>>> \n>>> Mark\n>>> \n>> \n> \n\n\n", "msg_date": "Tue, 19 Jul 2005 15:06:17 -0700", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Mark,\n\nYou should definitely not be doing this sort of thing, I believe:\n\nCREATE TABLE orders (\n\to_orderkey INTEGER,\n\to_custkey INTEGER,\n\to_orderstatus CHAR(1),\n\to_totalprice REAL,\n\to_orderDATE DATE,\n\to_orderpriority CHAR(15),\n\to_clerk CHAR(15),\n\to_shippriority INTEGER,\n\to_comment VARCHAR(79),\n\tPRIMARY KEY (o_orderkey))\n\nCreate the table with no constraints, load the data, then set up primary keys and whatever other constraints you want using ALTER TABLE. Last time I did a load like this (albeit 2 orders of magnitude smaller) I saw a 50% speedup from deferring constarint creation.\n\n\ncheers\n\nandrew\n\n\n\nMark Wong wrote:\n\n>Hi Alon,\n>\n>Yeah, that helps. I just need to break up my scripts a little to just\n>load the data and not build indexes.\n>\n>Is the following information good enough to give a guess about the data\n>I'm loading, if you don't mind? ;) Here's a link to my script to create\n>tables:\n>http://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb44f7f23437e432&path=scripts/pgsql/create_tables.sh.in\n>\n>File sizes:\n>-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl\n>-rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl\n>-rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl\n>-rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl\n>-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl\n>-rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl\n>-rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl\n>-rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl\n>\n>Number of rows:\n># wc -l *.tbl\n> 15000000 customer.tbl\n> 600037902 lineitem.tbl\n> 25 nation.tbl\n> 150000000 orders.tbl\n> 20000000 part.tbl\n> 80000000 partsupp.tbl\n> 5 region.tbl\n> 1000000 supplier.tbl\n>\n>Thanks,\n>Mark\n>\n>On Tue, 19 Jul 2005 14:05:56 -0700\n>\"Alon Goldshuv\" <[email protected]> wrote:\n>\n> \n>\n>>Hi Mark,\n>>\n>>I improved the data *parsing* capabilities of COPY, and didn't touch the\n>>data conversion or data insertion parts of the code. The parsing improvement\n>>will vary largely depending on the ratio of parsing -to- converting and\n>>inserting. 
\n>>\n>>Therefore, the speed increase really depends on the nature of your data:\n>>\n>>100GB file with\n>>long data rows (lots of parsing)\n>>Small number of columns (small number of attr conversions per row)\n>>less rows (less tuple insertions)\n>>\n>>Will show the best performance improvements.\n>>\n>>However, same file size 100GB with\n>>Short data rows (minimal parsing)\n>>large number of columns (large number of attr conversions per row)\n>>AND/OR\n>>more rows (more tuple insertions)\n>>\n>>Will show improvements but not as significant.\n>>In general I'll estimate 40%-95% improvement in load speed for the 1st case\n>>and 10%-40% for the 2nd. But that also depends on the hardware, disk speed\n>>etc... This is for TEXT format. As for CSV, it may be faster but not as much\n>>as I specified here. BINARY will stay the same as before.\n>>\n>>HTH\n>>Alon.\n>>\n>>\n>>\n>>\n>>\n>>\n>>On 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n>>\n>> \n>>\n>>>On Thu, 14 Jul 2005 17:22:18 -0700\n>>>\"Alon Goldshuv\" <[email protected]> wrote:\n>>>\n>>> \n>>>\n>>>>I revisited my patch and removed the code duplications that were there, and\n>>>>added support for CSV with buffered input, so CSV now runs faster too\n>>>>(although it is not as optimized as the TEXT format parsing). So now\n>>>>TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n>>>> \n>>>>\n>>>Hi Alon,\n>>>\n>>>I'm curious, what kind of system are you testing this on? I'm trying to\n>>>load 100GB of data in our dbt3 workload on a 4-way itanium2. I'm\n>>>interested in the results you would expect.\n>>>\n>>>Mark\n>>>\n>>> \n>>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n> \n>\n", "msg_date": "Tue, 19 Jul 2005 18:17:52 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Whoopsies, yeah good point about the PRIMARY KEY. I'll fix that.\n\nMark\n\nOn Tue, 19 Jul 2005 18:17:52 -0400\nAndrew Dunstan <[email protected]> wrote:\n\n> Mark,\n> \n> You should definitely not be doing this sort of thing, I believe:\n> \n> CREATE TABLE orders (\n> \to_orderkey INTEGER,\n> \to_custkey INTEGER,\n> \to_orderstatus CHAR(1),\n> \to_totalprice REAL,\n> \to_orderDATE DATE,\n> \to_orderpriority CHAR(15),\n> \to_clerk CHAR(15),\n> \to_shippriority INTEGER,\n> \to_comment VARCHAR(79),\n> \tPRIMARY KEY (o_orderkey))\n> \n> Create the table with no constraints, load the data, then set up primary keys and whatever other constraints you want using ALTER TABLE. Last time I did a load like this (albeit 2 orders of magnitude smaller) I saw a 50% speedup from deferring constarint creation.\n> \n> \n> cheers\n> \n> andrew\n> \n> \n> \n> Mark Wong wrote:\n> \n> >Hi Alon,\n> >\n> >Yeah, that helps. I just need to break up my scripts a little to just\n> >load the data and not build indexes.\n> >\n> >Is the following information good enough to give a guess about the data\n> >I'm loading, if you don't mind? 
;) Here's a link to my script to create\n> >tables:\n> >http://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb44f7f23437e432&path=scripts/pgsql/create_tables.sh.in\n> >\n> >File sizes:\n> >-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl\n> >-rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl\n> >-rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl\n> >-rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl\n> >-rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl\n> >-rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl\n> >-rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl\n> >-rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl\n> >\n> >Number of rows:\n> ># wc -l *.tbl\n> > 15000000 customer.tbl\n> > 600037902 lineitem.tbl\n> > 25 nation.tbl\n> > 150000000 orders.tbl\n> > 20000000 part.tbl\n> > 80000000 partsupp.tbl\n> > 5 region.tbl\n> > 1000000 supplier.tbl\n> >\n> >Thanks,\n> >Mark\n> >\n> >On Tue, 19 Jul 2005 14:05:56 -0700\n> >\"Alon Goldshuv\" <[email protected]> wrote:\n> >\n> > \n> >\n> >>Hi Mark,\n> >>\n> >>I improved the data *parsing* capabilities of COPY, and didn't touch the\n> >>data conversion or data insertion parts of the code. The parsing improvement\n> >>will vary largely depending on the ratio of parsing -to- converting and\n> >>inserting. \n> >>\n> >>Therefore, the speed increase really depends on the nature of your data:\n> >>\n> >>100GB file with\n> >>long data rows (lots of parsing)\n> >>Small number of columns (small number of attr conversions per row)\n> >>less rows (less tuple insertions)\n> >>\n> >>Will show the best performance improvements.\n> >>\n> >>However, same file size 100GB with\n> >>Short data rows (minimal parsing)\n> >>large number of columns (large number of attr conversions per row)\n> >>AND/OR\n> >>more rows (more tuple insertions)\n> >>\n> >>Will show improvements but not as significant.\n> >>In general I'll estimate 40%-95% improvement in load speed for the 1st case\n> >>and 10%-40% for the 2nd. But that also depends on the hardware, disk speed\n> >>etc... This is for TEXT format. As for CSV, it may be faster but not as much\n> >>as I specified here. BINARY will stay the same as before.\n> >>\n> >>HTH\n> >>Alon.\n> >>\n> >>\n> >>\n> >>\n> >>\n> >>\n> >>On 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n> >>\n> >> \n> >>\n> >>>On Thu, 14 Jul 2005 17:22:18 -0700\n> >>>\"Alon Goldshuv\" <[email protected]> wrote:\n> >>>\n> >>> \n> >>>\n> >>>>I revisited my patch and removed the code duplications that were there, and\n> >>>>added support for CSV with buffered input, so CSV now runs faster too\n> >>>>(although it is not as optimized as the TEXT format parsing). So now\n> >>>>TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original file.\n> >>>> \n> >>>>\n> >>>Hi Alon,\n> >>>\n> >>>I'm curious, what kind of system are you testing this on? I'm trying to\n> >>>load 100GB of data in our dbt3 workload on a 4-way itanium2. 
I'm\n> >>>interested in the results you would expect.\n> >>>\n> >>>Mark\n> >>>\n> >>> \n> >>>\n> >\n", "msg_date": "Tue, 19 Jul 2005 15:51:33 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Good points on all, another element in the performance expectations is the\nratio of CPU speed to I/O subsystem speed, as Alon had hinted earlier.\n\nThis patch substantially (500%) improves the efficiency of parsing in the\nCOPY path, which, on a 3GHz P4 desktop with a commodity disk drive\nrepresents 8 of a total of 30 seconds of processing time. So, by reducing\nthe parsing time from 8 seconds to 1.5 seconds, the overall COPY time is\nreduced from 30 seconds to 23.5 seconds, or a speedup of about 20%.\n\nOn a dual 2.2GHz Opteron machine with a 6-disk SCSI RAID subsystem capable\nof 240MB/s sequential read and writes, the ratios change and we see between\n35% and 95% increase in COPY performance, with the bottleneck being CPU.\nThe disk is only running at about 90MB/s during this period.\n\nI'd expect that as your CPUs slow down relative to your I/O speed, and\nItaniums or IT2s are quite slow, you should see an increased effect of the\nparsing improvements.\n\nOne good way to validate the effect is to watch the I/O bandwidth using\nvmstat 1 (on Linux) while the load is progressing. When you watch that with\nthe unpatched source and with the patched source, if they are the same, you\nshould see no benefit from the patch (you are I/O limited).\n\nIf you check your underlying sequential write speed, you will be\nbottlenecked at roughly half that in performing COPY because of the\nwrite-through the WAL.\n\n- Luke\n\nOn 7/19/05 3:51 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> Whoopsies, yeah good point about the PRIMARY KEY. I'll fix that.\n> \n> Mark\n> \n> On Tue, 19 Jul 2005 18:17:52 -0400\n> Andrew Dunstan <[email protected]> wrote:\n> \n>> Mark,\n>> \n>> You should definitely not be doing this sort of thing, I believe:\n>> \n>> CREATE TABLE orders (\n>> o_orderkey INTEGER,\n>> o_custkey INTEGER,\n>> o_orderstatus CHAR(1),\n>> o_totalprice REAL,\n>> o_orderDATE DATE,\n>> o_orderpriority CHAR(15),\n>> o_clerk CHAR(15),\n>> o_shippriority INTEGER,\n>> o_comment VARCHAR(79),\n>> PRIMARY KEY (o_orderkey))\n>> \n>> Create the table with no constraints, load the data, then set up primary keys\n>> and whatever other constraints you want using ALTER TABLE. Last time I did a\n>> load like this (albeit 2 orders of magnitude smaller) I saw a 50% speedup\n>> from deferring constarint creation.\n>> \n>> \n>> cheers\n>> \n>> andrew\n>> \n>> \n>> \n>> Mark Wong wrote:\n>> \n>>> Hi Alon,\n>>> \n>>> Yeah, that helps. I just need to break up my scripts a little to just\n>>> load the data and not build indexes.\n>>> \n>>> Is the following information good enough to give a guess about the data\n>>> I'm loading, if you don't mind? 
;) Here's a link to my script to create\n>>> tables:\n>>> http://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb4\n>>> 4f7f23437e432&path=scripts/pgsql/create_tables.sh.in\n>>> \n>>> File sizes:\n>>> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl\n>>> -rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl\n>>> -rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl\n>>> -rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl\n>>> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl\n>>> -rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl\n>>> -rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl\n>>> -rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl\n>>> \n>>> Number of rows:\n>>> # wc -l *.tbl\n>>> 15000000 customer.tbl\n>>> 600037902 lineitem.tbl\n>>> 25 nation.tbl\n>>> 150000000 orders.tbl\n>>> 20000000 part.tbl\n>>> 80000000 partsupp.tbl\n>>> 5 region.tbl\n>>> 1000000 supplier.tbl\n>>> \n>>> Thanks,\n>>> Mark\n>>> \n>>> On Tue, 19 Jul 2005 14:05:56 -0700\n>>> \"Alon Goldshuv\" <[email protected]> wrote:\n>>> \n>>> \n>>> \n>>>> Hi Mark,\n>>>> \n>>>> I improved the data *parsing* capabilities of COPY, and didn't touch the\n>>>> data conversion or data insertion parts of the code. The parsing\n>>>> improvement\n>>>> will vary largely depending on the ratio of parsing -to- converting and\n>>>> inserting. \n>>>> \n>>>> Therefore, the speed increase really depends on the nature of your data:\n>>>> \n>>>> 100GB file with\n>>>> long data rows (lots of parsing)\n>>>> Small number of columns (small number of attr conversions per row)\n>>>> less rows (less tuple insertions)\n>>>> \n>>>> Will show the best performance improvements.\n>>>> \n>>>> However, same file size 100GB with\n>>>> Short data rows (minimal parsing)\n>>>> large number of columns (large number of attr conversions per row)\n>>>> AND/OR\n>>>> more rows (more tuple insertions)\n>>>> \n>>>> Will show improvements but not as significant.\n>>>> In general I'll estimate 40%-95% improvement in load speed for the 1st case\n>>>> and 10%-40% for the 2nd. But that also depends on the hardware, disk speed\n>>>> etc... This is for TEXT format. As for CSV, it may be faster but not as\n>>>> much\n>>>> as I specified here. BINARY will stay the same as before.\n>>>> \n>>>> HTH\n>>>> Alon.\n>>>> \n>>>> \n>>>> \n>>>> \n>>>> \n>>>> \n>>>> On 7/19/05 12:54 PM, \"Mark Wong\" <[email protected]> wrote:\n>>>> \n>>>> \n>>>> \n>>>>> On Thu, 14 Jul 2005 17:22:18 -0700\n>>>>> \"Alon Goldshuv\" <[email protected]> wrote:\n>>>>> \n>>>>> \n>>>>> \n>>>>>> I revisited my patch and removed the code duplications that were there,\n>>>>>> and\n>>>>>> added support for CSV with buffered input, so CSV now runs faster too\n>>>>>> (although it is not as optimized as the TEXT format parsing). So now\n>>>>>> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original\n>>>>>> file.\n>>>>>> \n>>>>>> \n>>>>> Hi Alon,\n>>>>> \n>>>>> I'm curious, what kind of system are you testing this on? I'm trying to\n>>>>> load 100GB of data in our dbt3 workload on a 4-way itanium2. 
I'm\n>>>>> interested in the results you would expect.\n>>>>> \n>>>>> Mark\n>>>>> \n>>>>> \n>>>>> \n>>> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n", "msg_date": "Tue, 19 Jul 2005 17:39:33 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "I just ran through a few tests with the v14 patch against 100GB of data\nfrom dbt3 and found a 30% improvement; 3.6 hours vs 5.3 hours. Just to\ngive a few details, I only loaded data and started a COPY in parallel\nfor each the data files:\n\thttp://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/\n\nHere's a visual of my disk layout, for those familiar with the database schema:\n\thttp://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/layout-dev4-010-dbt3.html\n\nI have 6 arrays of fourteen 15k rpm drives in a split-bus configuration\nattached to a 4-way itanium2 via 6 compaq smartarray pci-x controllers.\n\nLet me know if you have any questions.\n\nMark\n", "msg_date": "Thu, 21 Jul 2005 14:55:07 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Cool!\n\nAt what rate does your disk setup write sequential data, e.g.:\n time dd if=/dev/zero of=bigfile bs=8k count=500000\n\n(sized for 2x RAM on a system with 2GB)\n\nBTW - the Compaq smartarray controllers are pretty broken on Linux from a\nperformance standpoint in our experience. We've had disastrously bad\nresults from the SmartArray 5i and 6 controllers on kernels from 2.4 ->\n2.6.10, on the order of 20MB/s.\n\nFor comparison, the results on our dual opteron with a single LSI SCSI\ncontroller with software RAID0 on a 2.6.10 kernel:\n\n[llonergan@stinger4 dbfast]$ time dd if=/dev/zero of=bigfile bs=8k\ncount=500000\n500000+0 records in\n500000+0 records out\n\nreal 0m24.702s\nuser 0m0.077s\nsys 0m8.794s\n\nWhich calculates out to about 161MB/s.\n\n- Luke\n\n\nOn 7/21/05 2:55 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> I just ran through a few tests with the v14 patch against 100GB of data\n> from dbt3 and found a 30% improvement; 3.6 hours vs 5.3 hours. Just to\n> give a few details, I only loaded data and started a COPY in parallel\n> for each the data files:\n> http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/\n> \n> Here's a visual of my disk layout, for those familiar with the database\n> schema:\n> http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/layout-dev4\n> -010-dbt3.html\n> \n> I have 6 arrays of fourteen 15k rpm drives in a split-bus configuration\n> attached to a 4-way itanium2 via 6 compaq smartarray pci-x controllers.\n> \n> Let me know if you have any questions.\n> \n> Mark\n> \n\n\n", "msg_date": "Thu, 21 Jul 2005 16:14:47 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Luke Lonergan wrote:\n> Cool!\n> \n> At what rate does your disk setup write sequential data, e.g.:\n> time dd if=/dev/zero of=bigfile bs=8k count=500000\n> \n> (sized for 2x RAM on a system with 2GB)\n> \n> BTW - the Compaq smartarray controllers are pretty broken on Linux from a\n> performance standpoint in our experience. 
We've had disastrously bad\n> results from the SmartArray 5i and 6 controllers on kernels from 2.4 ->\n> 2.6.10, on the order of 20MB/s.\n\nO.k. this strikes me as interesting, now we know that Compaq and Dell \nare borked for Linux. Is there a name brand server (read Enterprise) \nthat actually does provide reasonable performance?\n\n> \n> For comparison, the results on our dual opteron with a single LSI SCSI\n> controller with software RAID0 on a 2.6.10 kernel:\n> \n> [llonergan@stinger4 dbfast]$ time dd if=/dev/zero of=bigfile bs=8k\n> count=500000\n> 500000+0 records in\n> 500000+0 records out\n> \n> real 0m24.702s\n> user 0m0.077s\n> sys 0m8.794s\n> \n> Which calculates out to about 161MB/s.\n> \n> - Luke\n> \n> \n> On 7/21/05 2:55 PM, \"Mark Wong\" <[email protected]> wrote:\n> \n> \n>>I just ran through a few tests with the v14 patch against 100GB of data\n>>from dbt3 and found a 30% improvement; 3.6 hours vs 5.3 hours. Just to\n>>give a few details, I only loaded data and started a COPY in parallel\n>>for each the data files:\n>>http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/\n>>\n>>Here's a visual of my disk layout, for those familiar with the database\n>>schema:\n>>http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/layout-dev4\n>>-010-dbt3.html\n>>\n>>I have 6 arrays of fourteen 15k rpm drives in a split-bus configuration\n>>attached to a 4-way itanium2 via 6 compaq smartarray pci-x controllers.\n>>\n>>Let me know if you have any questions.\n>>\n>>Mark\n>>\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Thu, 21 Jul 2005 17:08:09 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Joshua,\n\nOn 7/21/05 5:08 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n\n> O.k. this strikes me as interesting, now we know that Compaq and Dell\n> are borked for Linux. Is there a name brand server (read Enterprise)\n> that actually does provide reasonable performance?\n\nI think late model Dell (post the bad chipset problem, circa 2001-2?) and\nIBM and Sun servers are fine because they all use simple SCSI adapters from\nLSI or Adaptec.\n\nThe HP Smartarray is an aberration, they don't have good driver support for\nLinux and as a consequence have some pretty bad problems with both\nperformance and stability. On Windows they perform quite well.\n\nAlso - there are very big issues with some SATA controllers and Linux we've\nseen, particularly the Silicon Image, Highpoint other non-Intel controllers.\nNot sure about Nvidia, but the only ones I trust now are 3Ware and the\nothers mentioned in earlier posts.\n\n- Luke\n\n \n\n\n", "msg_date": "Thu, 21 Jul 2005 19:04:55 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\n> I think late model Dell (post the bad chipset problem, circa 2001-2?) 
and\n> IBM and Sun servers are fine because they all use simple SCSI adapters from\n> LSI or Adaptec.\n\nWell I know that isn't true at least not with ANY of the Dells my \ncustomers have purchased in the last 18 months. They are still really, \nreally slow.\n\n> Also - there are very big issues with some SATA controllers and Linux we've\n> seen, particularly the Silicon Image, Highpoint other non-Intel controllers.\n> Not sure about Nvidia, but the only ones I trust now are 3Ware and the\n> others mentioned in earlier posts.\n\nI have great success with Silicon Image as long as I am running them \nwith Linux software RAID. The LSI controllers are also really nice.\n\nJ\n\n\n\n> \n> - Luke\n> \n> \n> \n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Thu, 21 Jul 2005 19:53:35 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Joshua,\n\nOn 7/21/05 7:53 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n> Well I know that isn't true at least not with ANY of the Dells my\n> customers have purchased in the last 18 months. They are still really,\n> really slow.\n\nThat's too bad, can you cite some model numbers? SCSI?\n\n> I have great success with Silicon Image as long as I am running them\n> with Linux software RAID. The LSI controllers are also really nice.\n\nThat's good to hear, I gave up on Silicon Image controllers on Linux about 1\nyear ago, which kernel are you using with success? Silicon Image\ncontrollers are the most popular, so it's important to see them supported\nwell, though I'd rather see more SATA headers than 2 off of the built-in\nchipsets. \n\n- Luke\n\n\n", "msg_date": "Thu, 21 Jul 2005 21:19:04 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\nthis discussion belongs on -performance\n\ncheers\n\nandrew\n\n\nLuke Lonergan said:\n> Joshua,\n>\n> On 7/21/05 7:53 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n>> Well I know that isn't true at least not with ANY of the Dells my\n>> customers have purchased in the last 18 months. They are still really,\n>> really slow.\n>\n> That's too bad, can you cite some model numbers? SCSI?\n>\n>> I have great success with Silicon Image as long as I am running them\n>> with Linux software RAID. The LSI controllers are also really nice.\n>\n> That's good to hear, I gave up on Silicon Image controllers on Linux\n> about 1 year ago, which kernel are you using with success? Silicon\n> Image\n> controllers are the most popular, so it's important to see them\n> supported well, though I'd rather see more SATA headers than 2 off of\n> the built-in chipsets.\n>\n> - Luke\n\n\n\n", "msg_date": "Fri, 22 Jul 2005 00:27:41 -0500 (CDT)", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Luke Lonergan wrote:\n> Joshua,\n> \n> On 7/21/05 7:53 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n> \n>>Well I know that isn't true at least not with ANY of the Dells my\n>>customers have purchased in the last 18 months. They are still really,\n>>really slow.\n> \n> \n> That's too bad, can you cite some model numbers? 
SCSI?\n\nYeah I will get them and post, but yes they are all SCSI.\n\n> \n> \n>>I have great success with Silicon Image as long as I am running them\n>>with Linux software RAID. The LSI controllers are also really nice.\n> \n> \n> That's good to hear, I gave up on Silicon Image controllers on Linux about 1\n> year ago, which kernel are you using with success?\n\nAny of the 2.6 kernels. ALso the laster 2.4 (+22 I believe) support it \npretty well as well.\n\n Silicon Image\n> controllers are the most popular, so it's important to see them supported\n> well, though I'd rather see more SATA headers than 2 off of the built-in\n> chipsets. \n> \n> - Luke\n> \n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 22 Jul 2005 08:03:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "On Thu, Jul 21, 2005 at 09:19:04PM -0700, Luke Lonergan wrote:\n> Joshua,\n> \n> On 7/21/05 7:53 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n> > Well I know that isn't true at least not with ANY of the Dells my\n> > customers have purchased in the last 18 months. They are still really,\n> > really slow.\n> \n> That's too bad, can you cite some model numbers? SCSI?\n\nI would be interested too, given\n\n http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=30531\n\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 22 Jul 2005 17:49:05 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Here is the SCSI output:\n\nWeb Server\n\nSCSI subsystem driver Revision: 1.00\nmegaraid: v1.18j (Release Date: Mon Jul 7 14:39:55 EDT 2003)\nmegaraid: found 0x1028:0x000f:idx 0:bus 4:slot 3:func 0\nscsi0 : Found a MegaRAID controller at 0xf883f000, IRQ: 18\nscsi0 : Enabling 64 bit support\nmegaraid: [412W:H406] detected 1 logical drives\nmegaraid: supports extended CDBs.\nmegaraid: channel[1] is raid.\nmegaraid: channel[2] is raid.\nscsi0 : LSI Logic MegaRAID 412W 254 commands 15 targs 5 chans 7 luns\n\n\nDatabase Server\n\nSCSI subsystem driver Revision: 1.00\nmegaraid: v1.18j (Release Date: Mon Jul 7 14:39:55 EDT 2003)\nmegaraid: found 0x101e:0x1960:idx 0:bus 5:slot 0:func 0\nscsi0 : Found a MegaRAID controller at 0xf883f000, IRQ: 21\nscsi0 : Enabling 64 bit support\nmegaraid: [196T:3.33] detected 1 logical drives\nmegaraid: supports extended CDBs.\nmegaraid: channel[1] is raid.\nmegaraid: channel[2] is raid.\nscsi0 : LSI Logic MegaRAID 196T 254 commands 15 targs 5 chans 7 luns\nStarting timer : 0 0\nblk: queue c5f2d218, I/O limit 4095Mb (mask 0xffffffff)\nscsi0: scanning virtual channel 0 for logical drives.\n Vendor: MegaRAID Model: LD 0 RAID5 86G Rev: 196T\n Type: Direct-Access ANSI SCSI revision: 02\nStarting timer : 0 0\n\n\nThe webserver is a 1U and it actually performs better on the IO than the \ndatabase server even though the database server is running 6 disks versus 3.\n\nThe database server is a PE (Power Edge) 6600\n\nDatabase Server IO:\n\n[root@master root]# /sbin/hdparm -tT /dev/sda\n\n/dev/sda:\n Timing buffer-cache reads: 1888 MB in 2.00 seconds = 944.00 MB/sec\n Timing buffered disk reads: 32 MB in 3.06 seconds = 10.46 MB/sec\n\nSecond Database Server IO:\n\n[root@pq-slave root]# 
/sbin/hdparm -tT /dev/sda\n\n/dev/sda:\n Timing buffer-cache reads: 1816 MB in 2.00 seconds = 908.00 MB/sec\n Timing buffered disk reads: 26 MB in 3.11 seconds = 8.36 MB/sec\n[root@pq-slave root]#\n\nWhich is just horrible.\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n\nPatrick Welche wrote:\n> On Thu, Jul 21, 2005 at 09:19:04PM -0700, Luke Lonergan wrote:\n> \n>>Joshua,\n>>\n>>On 7/21/05 7:53 PM, \"Joshua D. Drake\" <[email protected]> wrote:\n>>\n>>>Well I know that isn't true at least not with ANY of the Dells my\n>>>customers have purchased in the last 18 months. They are still really,\n>>>really slow.\n>>\n>>That's too bad, can you cite some model numbers? SCSI?\n> \n> \n> I would be interested too, given\n> \n> http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=30531\n> \n> \n> Cheers,\n> \n> Patrick\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Fri, 22 Jul 2005 10:11:25 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "Joshua,\n\nOn 7/22/05 10:11 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n> The database server is a PE (Power Edge) 6600\n> \n> Database Server IO:\n> \n> [root@master root]# /sbin/hdparm -tT /dev/sda\n> \n> /dev/sda:\n> Timing buffer-cache reads: 1888 MB in 2.00 seconds = 944.00 MB/sec\n> Timing buffered disk reads: 32 MB in 3.06 seconds = 10.46 MB/sec\n> \n> Second Database Server IO:\n> \n> [root@pq-slave root]# /sbin/hdparm -tT /dev/sda\n> \n> /dev/sda:\n> Timing buffer-cache reads: 1816 MB in 2.00 seconds = 908.00 MB/sec\n> Timing buffered disk reads: 26 MB in 3.11 seconds = 8.36 MB/sec\n> [root@pq-slave root]#\n\nCan you post the \"time dd if=/dev/zero of=bigfile bs=8k count=500000\"\nresults? Also do the reverse (read the file) with \"time dd if=bigfile\nof=/dev/null bs=8k\".\n\nI think you are observing what we've known for a while, hardware RAID is\nhorribly slow. We've not found a hardware RAID adapter of this class yet\nthat shows reasonable read or write performance. The Adaptec 2400R or the\nLSI or others have terrible internal I/O compared to raw SCSI with software\nRAID, and even the CPU usage is higher on these cards while doing slower I/O\nthan linux SW RAID.\n\nNotably - we've found that the 3Ware RAID controller does a better job than\nthe low end SCSI RAID at HW RAID support, and also exports JBOD at high\nspeeds. If you export JBOD on the low end SCSI RAID adapters, the\nperformance is also very poor, though generally faster than using HW RAID.\n\n- Luke\n\n\n", "msg_date": "Fri, 22 Jul 2005 12:28:43 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "On a single spindle:\n\n$ time dd if=/dev/zero of=bigfile bs=8k count=2000000\n2000000+0 records in\n2000000+0 records out\n\nreal 2m8.569s\nuser 0m0.725s\nsys 0m19.633s\n\nNone of my drives are partitioned big enough for me to create 2x RAM\nsized files on a single disk. I have 16MB RAM and only 36GB drives. 
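\nFor reference, converting that run into a rate is just (count * block size) / wall\ntime; with the bs=8k and count from the command above:\n\n$ echo \"scale=1; 2000000 * 8 / 1024 / 128.569\" | bc\n121.5\n\nso a bit over 120 MB/s of sequential write on that spindle.\n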
\nBut here are some number for my 12-disk lvm2 striped volume.\n\n$ time dd if=/dev/zero of=bigfile3 bs=8k count=4000000\n4000000+0 records in\n4000000+0 records out\n\nreal 1m17.059s\nuser 0m1.479s\nsys 0m41.293s\n\nMark\n\nOn Thu, 21 Jul 2005 16:14:47 -0700\n\"Luke Lonergan\" <[email protected]> wrote:\n\n> Cool!\n> \n> At what rate does your disk setup write sequential data, e.g.:\n> time dd if=/dev/zero of=bigfile bs=8k count=500000\n> \n> (sized for 2x RAM on a system with 2GB)\n> \n> BTW - the Compaq smartarray controllers are pretty broken on Linux from a\n> performance standpoint in our experience. We've had disastrously bad\n> results from the SmartArray 5i and 6 controllers on kernels from 2.4 ->\n> 2.6.10, on the order of 20MB/s.\n> \n> For comparison, the results on our dual opteron with a single LSI SCSI\n> controller with software RAID0 on a 2.6.10 kernel:\n> \n> [llonergan@stinger4 dbfast]$ time dd if=/dev/zero of=bigfile bs=8k\n> count=500000\n> 500000+0 records in\n> 500000+0 records out\n> \n> real 0m24.702s\n> user 0m0.077s\n> sys 0m8.794s\n> \n> Which calculates out to about 161MB/s.\n> \n> - Luke\n> \n> \n> On 7/21/05 2:55 PM, \"Mark Wong\" <[email protected]> wrote:\n> \n> > I just ran through a few tests with the v14 patch against 100GB of data\n> > from dbt3 and found a 30% improvement; 3.6 hours vs 5.3 hours. Just to\n> > give a few details, I only loaded data and started a COPY in parallel\n> > for each the data files:\n> > http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/\n> > \n> > Here's a visual of my disk layout, for those familiar with the database\n> > schema:\n> > http://www.testing.osdl.org/projects/dbt3testing/results/fast_copy/layout-dev4\n> > -010-dbt3.html\n> > \n> > I have 6 arrays of fourteen 15k rpm drives in a split-bus configuration\n> > attached to a 4-way itanium2 via 6 compaq smartarray pci-x controllers.\n> > \n> > Let me know if you have any questions.\n> > \n> > Mark\n> > \n> \n", "msg_date": "Fri, 22 Jul 2005 12:47:18 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "Mark,\n\nOn 7/22/05 12:47 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> On a single spindle:\n> \n> $ time dd if=/dev/zero of=bigfile bs=8k count=2000000\n> 2000000+0 records in\n> 2000000+0 records out\n> \n> real 2m8.569s\n> user 0m0.725s\n> sys 0m19.633s\n\nThis is super fast! 124MB/s seems too fast for true write performance on a\nsingle spindle.\n \n> But here are some number for my 12-disk lvm2 striped volume.\n\nSo, software striping on how many controllers?\n \n> $ time dd if=/dev/zero of=bigfile3 bs=8k count=4000000\n> 4000000+0 records in\n> 4000000+0 records out\n> \n> real 1m17.059s\n> user 0m1.479s\n> sys 0m41.293s\n\nAgain - super fast at 416MB/s. How many controllers?\n\nWhen we had our problems with the cciss driver and the smartarray 5i/6\ncontrollers, we found the only way to get any performance out of them was to\nrun them in JBOD mode and software stripe. However, when we did so the CPU\nusage skyrocketed and the performance of simple SCSI adapters was 50% faster\nwith less CPU consumption. 
These numbers show 100*(42.8/67) = 64% CPU\nconsumption, I'd expect less with 2 simple U320 SCSI controllers.\n\n- Luke\n\n\n", "msg_date": "Fri, 22 Jul 2005 21:09:10 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "On Fri, 22 Jul 2005 12:28:43 -0700\n\"Luke Lonergan\" <[email protected]> wrote:\n\n> Joshua,\n> \n> On 7/22/05 10:11 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n> > The database server is a PE (Power Edge) 6600\n> > \n> > Database Server IO:\n> > \n> > [root@master root]# /sbin/hdparm -tT /dev/sda\n> > \n> > /dev/sda:\n> > Timing buffer-cache reads: 1888 MB in 2.00 seconds = 944.00 MB/sec\n> > Timing buffered disk reads: 32 MB in 3.06 seconds = 10.46 MB/sec\n> > \n> > Second Database Server IO:\n> > \n> > [root@pq-slave root]# /sbin/hdparm -tT /dev/sda\n> > \n> > /dev/sda:\n> > Timing buffer-cache reads: 1816 MB in 2.00 seconds = 908.00 MB/sec\n> > Timing buffered disk reads: 26 MB in 3.11 seconds = 8.36 MB/sec\n> > [root@pq-slave root]#\n> \n> Can you post the \"time dd if=/dev/zero of=bigfile bs=8k count=500000\"\n> results? Also do the reverse (read the file) with \"time dd if=bigfile\n> of=/dev/null bs=8k\".\n> \n> I think you are observing what we've known for a while, hardware RAID is\n> horribly slow. We've not found a hardware RAID adapter of this class yet\n> that shows reasonable read or write performance. The Adaptec 2400R or the\n> LSI or others have terrible internal I/O compared to raw SCSI with software\n> RAID, and even the CPU usage is higher on these cards while doing slower I/O\n> than linux SW RAID.\n> \n> Notably - we've found that the 3Ware RAID controller does a better job than\n> the low end SCSI RAID at HW RAID support, and also exports JBOD at high\n> speeds. If you export JBOD on the low end SCSI RAID adapters, the\n> performance is also very poor, though generally faster than using HW RAID.\n\nAre there any recommendations for Qlogic controllers on Linux, scsi or\nfiber channel? I might be able to my hands on some. I have pci-x slots\nfor AMD, Itanium, or POWER5 if the architecture makes a difference.\n\nMark\n", "msg_date": "Thu, 28 Jul 2005 16:43:52 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": ">>\n>>>[root@pq-slave root]# /sbin/hdparm -tT /dev/sda\n>>>\n>>>/dev/sda:\n>>> Timing buffer-cache reads: 1816 MB in 2.00 seconds = 908.00 MB/sec\n>>> Timing buffered disk reads: 26 MB in 3.11 seconds = 8.36 MB/sec\n>>>[root@pq-slave root]#\n>>\n>>Can you post the \"time dd if=/dev/zero of=bigfile bs=8k count=500000\"\n>>results? Also do the reverse (read the file) with \"time dd if=bigfile\n>>of=/dev/null bs=8k\".\n\nI didn't see this come across before... here ya go:\n\ntime dd if=/dev/zero of=bigfile bs=8k count=500000\n\n500000+0 records in\n500000+0 records out\n\nreal 1m52.738s\nuser 0m0.310s\nsys 0m36.590s\n\ntime dd if=bigfile of=/dev/null bs=8k\n\ntime dd if=bigfile of=/dev/null bs=8k\n500000+0 records in\n500000+0 records out\n\nreal 4m38.742s\nuser 0m0.320s\nsys 0m27.870s\n\n\nFYI on your hardware raid comment... I easily get 50 megs a second on my \n3ware controllers and faster on my LSI SATA controllers.\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n\n\n>\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Thu, 28 Jul 2005 16:57:24 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": ">>> Can you post the \"time dd if=/dev/zero of=bigfile bs=8k count=500000\"\n>>> results? Also do the reverse (read the file) with \"time dd if=bigfile\n>>> of=/dev/null bs=8k\".\n> \n> I didn't see this come across before... here ya go:\n> \n> time dd if=/dev/zero of=bigfile bs=8k count=500000\n> \n> 500000+0 records in\n> 500000+0 records out\n> \n> real 1m52.738s\n> user 0m0.310s\n> sys 0m36.590s\n\nSo, that's 35MB/s, or 1/2 of a single disk drive.\n \n> time dd if=bigfile of=/dev/null bs=8k\n> \n> time dd if=bigfile of=/dev/null bs=8k\n> 500000+0 records in\n> 500000+0 records out\n> \n> real 4m38.742s\n> user 0m0.320s\n> sys 0m27.870s\n\nAnd that's 14MB/s, or < 1/4 of a single disk drive.\n\n> FYI on your hardware raid comment... I easily get 50 megs a second on my\n> 3ware controllers and faster on my LSI SATA controllers.\n\nThen you are almost getting one disk worth of bandwidth.\n\nBy comparison, we get this using Linux software RAID on Xeon or Opteron:\n\n$ time dd if=/dev/zero of=bigfile bs=8k count=500000\n500000+0 records in\n500000+0 records out\n\nreal 0m26.927s\nuser 0m0.074s\nsys 0m8.769s\n\n$ time dd if=bigfile of=/dev/null bs=8k\n500000+0 records in\n500000+0 records out\n\nreal 0m28.190s\nuser 0m0.039s\nsys 0m8.349s\n\nwith less CPU usage than HW SCSI RAID controllers.\n\n- Luke\n\n\n", "msg_date": "Thu, 28 Jul 2005 21:03:20 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "Mark,\n\nOn 7/28/05 4:43 PM, \"Mark Wong\" <[email protected]> wrote:\n\n> Are there any recommendations for Qlogic controllers on Linux, scsi or\n> fiber channel? I might be able to my hands on some. I have pci-x slots\n> for AMD, Itanium, or POWER5 if the architecture makes a difference.\n\nI don't have a recommendation for a particular one, it's been too long\n(1998) since I've used one with Linux. However, I'd like to see a\ncomparison between Emulex and Qlogic and a winner chosen. We've had some\napparent driver issues with a client running Emulex on Linux, even using\nmany different versions of the kernel.\n\n- Luke\n\n\n", "msg_date": "Thu, 28 Jul 2005 21:07:12 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n> On 7/28/05 4:43 PM, \"Mark Wong\" <[email protected]> wrote:\n> \n> > Are there any recommendations for Qlogic controllers on Linux, scsi or\n> > fiber channel? I might be able to my hands on some. I have pci-x slots\n> > for AMD, Itanium, or POWER5 if the architecture makes a difference.\n> \n> I don't have a recommendation for a particular one, it's been too long\n> (1998) since I've used one with Linux. However, I'd like to see a\n> comparison between Emulex and Qlogic and a winner chosen. 
We've had some\n> apparent driver issues with a client running Emulex on Linux, even using\n> many different versions of the kernel.\n\nWhere is the most recent version of the COPY patch?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Jul 2005 08:37:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "Bruce,\n\nOn 7/29/05 5:37 AM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Where is the most recent version of the COPY patch?\n\nMy direct e-mails aren't getting to you, they are trapped in a spam filter\non your end, so you didn't get my e-mail with the patch!\n\nI've attached it here, sorry to the list owner for the patch inclusion /\noff-topic.\n\n- Luke", "msg_date": "Fri, 29 Jul 2005 10:54:08 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> On 7/29/05 5:37 AM, \"Bruce Momjian\" <[email protected]> wrote:\n>> Where is the most recent version of the COPY patch?\n\n> I've attached it here, sorry to the list owner for the patch inclusion /\n> off-topic.\n\nThis patch appears to reverse out the most recent committed changes in\ncopy.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Aug 2005 12:32:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements " }, { "msg_contents": "New patch attached. It includes very minor changes. These are changes that\nwere committed to CVS 3 weeks ago (copy.c 1.247) which I missed in the\nprevious patch.\n\nAlon.", "msg_date": "Tue, 02 Aug 2005 10:59:43 -0400", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\"Alon Goldshuv\" <[email protected]> writes:\n> New patch attached. It includes very minor changes. These are changes that\n> were committed to CVS 3 weeks ago (copy.c 1.247) which I missed in the\n> previous patch.\n\nI've applied this with (rather extensive) revisions. I didn't like what\nyou had done with the control structure --- loading the input buffer\nonly at the outermost loop level was a bad design choice IMHO. You had\nsprinkled the code with an unreasonable number of special cases in order\nto try to cope with the effects of that mistake, but there were lots\nof problems still left. Some of the bugs I noticed:\n\n* Broke old-protocol COPY, since that has no provision for stopping at\nthe EOF marker except by parsing the data carefully to start with. The\nbackend would just hang up unless the total data size chanced to be a\nmultiple of 64K.\n\n* Subtle change in interpretation of \\. 
EOF marker (the existing code\nwill recognize it even when not at start of line).\n\n* Seems to have thrown away detection of newline format discrepancies.\n\n* Fails for zero-column tables.\n\n* Broke display of column values during error context callback (would\nalways show the first column contents no matter which one is being\ncomplained of).\n\n* DetectLineEnd mistakenly assumes CR mode if very last character of first\nbufferload is CR; need to reserve judgment until next char is available.\n\n* DetectLineEnd fails to account for backslashed control characters,\nso it will e.g. accept \\ followed by \\n as determining the newline\nstyle.\n\n* Fails to apply encoding conversion if first line exceeds copy buf\nsize, because when DetectLineEnd fails the quick-exit path doesn't do\nit.\n\n* There seem to be several bugs associated with the fact that input_buf[]\nalways has 64K of data in it even when EOF has been reached on the\ninput. One example:\n\techo -n 123 >zzz1\n\tpsql> create temp table t1(f1 text);\n\tpsql> copy t1 from '/home/tgl/zzz1';\n\tpsql> select * from t1;\nhmm ... where'd that 64K of whitespace come from?\n\nI rewrote the patch in a way that retained most of the speedups without\nchanging the basic control structure (except for replacing multiple\nCopyReadAttribute calls with one CopyReadAttributes call per line).\n\nI had some difficulty in generating test cases that weren't largely\nI/O-bound, but AFAICT the patch as applied is about the same speed\nas what you submitted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Aug 2005 17:04:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements " }, { "msg_contents": "Tom,\n\nThanks for finding the bugs and reworking things.\n\n> I had some difficulty in generating test cases that weren't largely\n> I/O-bound, but AFAICT the patch as applied is about the same speed\n> as what you submitted.\n\nYou achieve the important objective of knocking the parsing stage down a\nlot, but your parsing code is actually about 20% slower than Alon's.\n\nBefore your patch:\n Time: 14205.606 ms\n\nWith your patch:\n Time: 10565.374 ms\n\nWith Alon's patch:\n Time: 10289.845 ms\n\nThe parsing part of the code in your version is slower, but as a percentage\nof the total it's hidden. The loss of 0.3 seconds on 143MB means:\n\n- If parsing takes a total of 0.9 seconds, the parsing rate is 160MB/s\n(143/0.9)\n\n- If we add another 0.3 seconds to parsing to bring it to 1.2, then the\nparsing rate becomes 120MB/s\n\nWhen we improve the next stages of the processing (attribute conversion,\nwrite-to disk), this will stand out a lot more. Our objective is to get the\nCOPY rate *much* faster than the current poky rate of 14MB/s (after this\npatch).\n\n- Luke\n\nOn 8/6/05 2:04 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Alon Goldshuv\" <[email protected]> writes:\n>> New patch attached. It includes very minor changes. These are changes that\n>> were committed to CVS 3 weeks ago (copy.c 1.247) which I missed in the\n>> previous patch.\n> \n> I've applied this with (rather extensive) revisions. I didn't like what\n> you had done with the control structure --- loading the input buffer\n> only at the outermost loop level was a bad design choice IMHO. You had\n> sprinkled the code with an unreasonable number of special cases in order\n> to try to cope with the effects of that mistake, but there were lots\n> of problems still left. 
Some of the bugs I noticed:\n> \n> * Broke old-protocol COPY, since that has no provision for stopping at\n> the EOF marker except by parsing the data carefully to start with. The\n> backend would just hang up unless the total data size chanced to be a\n> multiple of 64K.\n> \n> * Subtle change in interpretation of \\. EOF marker (the existing code\n> will recognize it even when not at start of line).\n> \n> * Seems to have thrown away detection of newline format discrepancies.\n> \n> * Fails for zero-column tables.\n> \n> * Broke display of column values during error context callback (would\n> always show the first column contents no matter which one is being\n> complained of).\n> \n> * DetectLineEnd mistakenly assumes CR mode if very last character of first\n> bufferload is CR; need to reserve judgment until next char is available.\n> \n> * DetectLineEnd fails to account for backslashed control characters,\n> so it will e.g. accept \\ followed by \\n as determining the newline\n> style.\n> \n> * Fails to apply encoding conversion if first line exceeds copy buf\n> size, because when DetectLineEnd fails the quick-exit path doesn't do\n> it.\n> \n> * There seem to be several bugs associated with the fact that input_buf[]\n> always has 64K of data in it even when EOF has been reached on the\n> input. One example:\n> echo -n 123 >zzz1\n> psql> create temp table t1(f1 text);\n> psql> copy t1 from '/home/tgl/zzz1';\n> psql> select * from t1;\n> hmm ... where'd that 64K of whitespace come from?\n> \n> I rewrote the patch in a way that retained most of the speedups without\n> changing the basic control structure (except for replacing multiple\n> CopyReadAttribute calls with one CopyReadAttributes call per line).\n> \n> I had some difficulty in generating test cases that weren't largely\n> I/O-bound, but AFAICT the patch as applied is about the same speed\n> as what you submitted.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Sat, 06 Aug 2005 19:33:07 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Tom,\n\nThe previous timings were for a table with 15 columns of mixed type. We\nalso test with 1 column to make the parsing overhead more apparent. In the\ncase of 1 text column with 145MB of input data:\n\nYour patch:\n Time: 6612.599 ms\n\nAlon's patch:\n Time: 6119.244 ms\n\n\nAlon's patch is 7.5% faster here, where it was only 3% faster on the 15\ncolumn case. This is consistent with a large difference in parsing speed\nbetween your approach and Alon's.\n\nI'm pretty sure that the \"mistake\" you refer to is responsible for the speed\nimprovement, and was deliberately chosen to minimize memory copies, etc.\nGiven that we're looking ahead to getting much higher speeds, approaching\ncurrent high performance disk speeds, we've been looking more closely at the\nparsing speed. It comes down to a tradeoff between elegant code and speed.\n\nWe'll prove it in lab tests soon, where we measure the parsing rate\ndirectly, but these experiments show it clearly, though indirectly.\n\n- Luke\n\n\n\nOn 8/6/05 2:04 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Alon Goldshuv\" <[email protected]> writes:\n>> New patch attached. It includes very minor changes. 
These are changes that\n>> were committed to CVS 3 weeks ago (copy.c 1.247) which I missed in the\n>> previous patch.\n> \n> I've applied this with (rather extensive) revisions. I didn't like what\n> you had done with the control structure --- loading the input buffer\n> only at the outermost loop level was a bad design choice IMHO. You had\n> sprinkled the code with an unreasonable number of special cases in order\n> to try to cope with the effects of that mistake, but there were lots\n> of problems still left. Some of the bugs I noticed:\n> \n> * Broke old-protocol COPY, since that has no provision for stopping at\n> the EOF marker except by parsing the data carefully to start with. The\n> backend would just hang up unless the total data size chanced to be a\n> multiple of 64K.\n> \n> * Subtle change in interpretation of \\. EOF marker (the existing code\n> will recognize it even when not at start of line).\n> \n> * Seems to have thrown away detection of newline format discrepancies.\n> \n> * Fails for zero-column tables.\n> \n> * Broke display of column values during error context callback (would\n> always show the first column contents no matter which one is being\n> complained of).\n> \n> * DetectLineEnd mistakenly assumes CR mode if very last character of first\n> bufferload is CR; need to reserve judgment until next char is available.\n> \n> * DetectLineEnd fails to account for backslashed control characters,\n> so it will e.g. accept \\ followed by \\n as determining the newline\n> style.\n> \n> * Fails to apply encoding conversion if first line exceeds copy buf\n> size, because when DetectLineEnd fails the quick-exit path doesn't do\n> it.\n> \n> * There seem to be several bugs associated with the fact that input_buf[]\n> always has 64K of data in it even when EOF has been reached on the\n> input. One example:\n> echo -n 123 >zzz1\n> psql> create temp table t1(f1 text);\n> psql> copy t1 from '/home/tgl/zzz1';\n> psql> select * from t1;\n> hmm ... where'd that 64K of whitespace come from?\n> \n> I rewrote the patch in a way that retained most of the speedups without\n> changing the basic control structure (except for replacing multiple\n> CopyReadAttribute calls with one CopyReadAttributes call per line).\n> \n> I had some difficulty in generating test cases that weren't largely\n> I/O-bound, but AFAICT the patch as applied is about the same speed\n> as what you submitted.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Sat, 06 Aug 2005 19:54:04 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Tom,\n\nMy direct e-mails to you are apparently blocked, so I'll send this to the\nlist.\n\nI've attached the case we use for load performance testing, with the data\ngenerator modified to produce a single row version of the dataset.\n\nI do believe that you/we will need to invert the processing loop to get the\nmaximum parsing speed. 
We will be implementing much higher loading speeds\nwhich require it to compete with Oracle, Netezza, Teradata, so we'll have to\nwork this out for the best interests of our users.\n\n- Luke", "msg_date": "Sat, 06 Aug 2005 20:25:24 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n>> I had some difficulty in generating test cases that weren't largely\n>> I/O-bound, but AFAICT the patch as applied is about the same speed\n>> as what you submitted.\n\n> You achieve the important objective of knocking the parsing stage down a\n> lot, but your parsing code is actually about 20% slower than Alon's.\n\nI would like to see the exact test case you are using to make this\nclaim; the tests I did suggested my code is the same speed or faster.\nThe particular test case I was using was the \"tenk1\" data from the\nregression database, duplicated out to about 600K rows so as to run\nlong enough to measure with some degree of repeatability.\n\nAs best I can tell, my version of CopyReadAttributes is significantly\nquicker than Alon's, approximately balancing out the fact that my\nversion of CopyReadLine is slower. I did the latter first, and would\nnow be tempted to rewrite it in the same style as CopyReadAttributes,\nie one pass of memory-to-memory copy using pointers rather than buffer\nindexes.\n\nBTW, late today I figured out a way to get fairly reproducible\nnon-I/O-bound numbers about COPY FROM: use a trigger that suppresses\nthe actual inserts, thus:\n\ncreate table foo ...\ncreate function noway() returns trigger as\n'begin return null; end' language plpgsql;\ncreate trigger noway before insert on foo\n for each row execute procedure noway();\nthen repeat:\ncopy foo from '/tmp/foo.data';\n\nIf the source file is not too large to fit in kernel disk cache, then\nafter the first iteration there is no I/O at all. I got numbers\nthat were reproducible within less than 1%, as opposed to 5% or more\nvariation when the thing was partially I/O bound. Pretty useless in the\nreal world, of course, but great for timing COPY's data-pushing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Aug 2005 00:08:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements " }, { "msg_contents": "Tom,\n\nOn 8/6/05 9:08 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n>>> I had some difficulty in generating test cases that weren't largely\n>>> I/O-bound, but AFAICT the patch as applied is about the same speed\n>>> as what you submitted.\n> \n>> You achieve the important objective of knocking the parsing stage down a\n>> lot, but your parsing code is actually about 20% slower than Alon's.\n> \n> I would like to see the exact test case you are using to make this\n> claim; the tests I did suggested my code is the same speed or faster.\n\nI showed mine - you show yours :-) Apparently our e-mail crossed.\n \n> As best I can tell, my version of CopyReadAttributes is significantly\n> quicker than Alon's, approximately balancing out the fact that my\n> version of CopyReadLine is slower. 
I did the latter first, and would\n> now be tempted to rewrite it in the same style as CopyReadAttributes,\n> ie one pass of memory-to-memory copy using pointers rather than buffer\n> indexes.\n\nSee previous timings - looks like Alon's parsing is substantially faster.\nHowever, I'd like him to confirm by running with the \"shunt\" placed at\ndifferent stages, in this case between parse and attribute conversion (not\nattribute parse).\n \n> BTW, late today I figured out a way to get fairly reproducible\n> non-I/O-bound numbers about COPY FROM: use a trigger that suppresses\n> the actual inserts, thus:\n> \n> create table foo ...\n> create function noway() returns trigger as\n> 'begin return null; end' language plpgsql;\n> create trigger noway before insert on foo\n> for each row execute procedure noway();\n> then repeat:\n> copy foo from '/tmp/foo.data';\n\nCool! That's a better way than hacking code and inserting shunts.\n \nAlon will likely hit this tomorrow.\n\n- Luke\n\n\n", "msg_date": "Sat, 06 Aug 2005 22:21:03 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "I did some performance checks after the recent code commit.\n\nThe good news is that the parsing speed of COPY is now MUCH faster, which is\ngreat. It is about 5 times faster - about 100MB/sec on my machine\n(previously 20MB/sec at best, usually less).\n\nThe better news is that my original patch parsing speed reaches 120MB/sec,\nabout 20MB/sec faster than the code that's now in CVS. This can be\nsignificant for the long scheme of things and for large data sets. Maybe we\ncan improve the current code a bit more to reach this number.\n\nI performed those measurement by executing *only the parsing logic* of the\nCOPY pipeline. All data conversion (functioncall3(string...)) and tuple\nhandling (form_heaptuple etc...) and insertion were manually disabled. So\nthe only code measured is reading from disk and parsing to the attribute\nlevel.\n\nCheers,\nAlon.\n\n\n\n\n\nOn 8/7/05 1:21 AM, \"Luke Lonergan\" <[email protected]> wrote:\n\n> Tom,\n> \n> On 8/6/05 9:08 PM, \"Tom Lane\" <[email protected]> wrote:\n> \n>> \"Luke Lonergan\" <[email protected]> writes:\n>>>> I had some difficulty in generating test cases that weren't largely\n>>>> I/O-bound, but AFAICT the patch as applied is about the same speed\n>>>> as what you submitted.\n>> \n>>> You achieve the important objective of knocking the parsing stage down a\n>>> lot, but your parsing code is actually about 20% slower than Alon's.\n>> \n>> I would like to see the exact test case you are using to make this\n>> claim; the tests I did suggested my code is the same speed or faster.\n> \n> I showed mine - you show yours :-) Apparently our e-mail crossed.\n> \n>> As best I can tell, my version of CopyReadAttributes is significantly\n>> quicker than Alon's, approximately balancing out the fact that my\n>> version of CopyReadLine is slower. 
I did the latter first, and would\n>> now be tempted to rewrite it in the same style as CopyReadAttributes,\n>> ie one pass of memory-to-memory copy using pointers rather than buffer\n>> indexes.\n> \n> See previous timings - looks like Alon's parsing is substantially faster.\n> However, I'd like him to confirm by running with the \"shunt\" placed at\n> different stages, in this case between parse and attribute conversion (not\n> attribute parse).\n> \n>> BTW, late today I figured out a way to get fairly reproducible\n>> non-I/O-bound numbers about COPY FROM: use a trigger that suppresses\n>> the actual inserts, thus:\n>> \n>> create table foo ...\n>> create function noway() returns trigger as\n>> 'begin return null; end' language plpgsql;\n>> create trigger noway before insert on foo\n>> for each row execute procedure noway();\n>> then repeat:\n>> copy foo from '/tmp/foo.data';\n> \n> Cool! That's a better way than hacking code and inserting shunts.\n> \n> Alon will likely hit this tomorrow.\n> \n> - Luke\n> \n\n\n", "msg_date": "Tue, 09 Aug 2005 17:41:18 -0700", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "\n\nAlon Goldshuv wrote:\n\n>I performed those measurement by executing *only the parsing logic* of the\n>COPY pipeline. All data conversion (functioncall3(string...)) and tuple\n>handling (form_heaptuple etc...) and insertion were manually disabled. So\n>the only code measured is reading from disk and parsing to the attribute\n>level.\n> \n>\n\nArguably this might exaggerate the effect quite significantly. Users \nwill want to know the real time effect on a complete COPY. Depending on \nhow much the pasing is in the total time your 20% improvement in parsing \nmight only be a small fraction of 20% improvement in COPY.\n\nLike you, I'm happy we have seen a 5 times improvement in parsing. Is it \npossible you can factor out something smallish from your patch that \nmight make up the balance?\n\ncheers\n\nandrew\n", "msg_date": "Tue, 09 Aug 2005 21:01:38 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Andrew,\n\n> Arguably this might exaggerate the effect quite significantly. Users\n> will want to know the real time effect on a complete COPY. Depending on\n> how much the pasing is in the total time your 20% improvement in parsing\n> might only be a small fraction of 20% improvement in COPY.\n\nArguably has already been argued. We knew this because we profiled the\nentire COPY process and came up with this approx. breakdown for a specific\ncase:\nParsing: 25%\nAttribute Conversion: 40%\nData Insertion: 35%\n\nNet copy rate: 8 MB/s on a filesystem that does 240 MB/s\n\nSo - if we speed up parsing by 500% or 450%, the end result is about a\n20-30% speed increase in the overall process.\n\nNote that we're still a *long* way from getting anywhere near the limit of\nthe I/O subsystem at 12 MB/s. 
Oracle can probably get 5-8 times this data\nrate, if not better.\n\nThe attribute conversion logic is also very slow and needs similar\nimprovements.\n\nThe reason we focused first on Parsing is that our MPP version of Bizgres\nwill reach data loading rates approaching the parsing speed, so we needed to\nimprove that part to get it out of the way.\n\nWe will continue to improve COPY speed in Bizgres so that we can provide\ncomparable COPY performance to Oracle and MySQL.\n\n> Like you, I'm happy we have seen a 5 times improvement in parsing. Is it\n> possible you can factor out something smallish from your patch that\n> might make up the balance?\n\nThat parsing was 25% of the workload was traceable to a 3 main things:\n1) Per character acquisition from source instead of buffering\n2) Frequent interruption of the parsing pipeline to handle attribute\nconversion\n3) Lack of micro-parallelism in the character finding logic\n\nTom's patch took our contribution from (1) and (2) and his improvements, and\nhe rejected (3). The net result is that we lost performance from the lack\nof (3) but gained performance from his improvements of (1) and (2).\n\nI believe that re-introducing (3) may bring us from 100 MB/s to 150 MB/s\nparsing speed.\n\n- Luke\n\n\n", "msg_date": "Tue, 09 Aug 2005 21:39:55 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Tom,\n\n> As best I can tell, my version of CopyReadAttributes is significantly\n> quicker than Alon's, approximately balancing out the fact that my\n> version of CopyReadLine is slower. I did the latter first, and would\n> now be tempted to rewrite it in the same style as CopyReadAttributes,\n> ie one pass of memory-to-memory copy using pointers rather than buffer\n> indexes.\n\nI think you are right, with the exception that Alon's results prove out that\nthe net result of your patch is 20% slower than his.\n\nI think with your speedup of CopyReadAttributes and some additional work on\nCopyReadLine the net result could be 50% faster than Alon's patch.\n\nThe key thing that is missing is the lack of micro-parallelism in the\ncharacter processing in this version. By \"inverting the loop\", or putting\nthe characters into a buffer on the outside, then doing fast character\nscanning inside with special \"fix-up\" cases, we exposed long runs of\npipeline-able code to the compiler.\n\nI think there is another way to accomplish the same thing and still preserve\nthe current structure, but it requires \"strip mining\" the character buffer\ninto chunks that can be processed with an explicit loop to check for the\ndifferent characters. While it may seem artificial (it is), it will provide\nthe compiler with the ability to pipeline the character finding logic over\nlong runs. The other necessary element will have to avoid pipeline stalls\nfrom the \"if\" conditions as much as possible.\n\nAnyway, thanks for reviewing this code and improving it - it's important to\nbring speed increases to our collective customer base. 
With Bizgres, we're\nnot satisfied with 12 MB/s, we won't stop until we saturate the I/O bus, so\nwe may get more extreme with the code than seems reasonable for the general\naudience.\n\n- Luke \n\n\n", "msg_date": "Tue, 09 Aug 2005 21:48:02 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "On Tue, 2005-08-09 at 21:48 -0700, Luke Lonergan wrote:\n\n> The key thing that is missing is the lack of micro-parallelism in the\n> character processing in this version. By \"inverting the loop\", or putting\n> the characters into a buffer on the outside, then doing fast character\n> scanning inside with special \"fix-up\" cases, we exposed long runs of\n> pipeline-able code to the compiler.\n> \n> I think there is another way to accomplish the same thing and still preserve\n> the current structure, but it requires \"strip mining\" the character buffer\n> into chunks that can be processed with an explicit loop to check for the\n> different characters. While it may seem artificial (it is), it will provide\n> the compiler with the ability to pipeline the character finding logic over\n> long runs. The other necessary element will have to avoid pipeline stalls\n> from the \"if\" conditions as much as possible.\n\nThis is a key point, IMHO.\n\nThat part of the code was specifically written to take advantage of\nprocessing pipelines in the hardware, not because the actual theoretical\nalgorithm for that approach was itself faster.\n\nNobody's said what compiler/hardware they have been using, so since both\nAlon and Tom say their character finding logic is faster, it is likely\nto be down to that? Name your platforms gentlemen, please.\n\nMy feeling is that we may learn something here that applies more widely\nacross many parts of the code.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Wed, 10 Aug 2005 09:15:00 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Nobody's said what compiler/hardware they have been using, so since both\n> Alon and Tom say their character finding logic is faster, it is likely\n> to be down to that? Name your platforms gentlemen, please.\n\nI tested on HPPA with gcc 2.95.3 and on a Pentium 4 with gcc 3.4.3.\nGot pretty much the same results on both.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Aug 2005 11:29:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY FROM performance improvements " } ]
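To make Tom's no-op-trigger benchmarking trick easy to reproduce, here is a minimal self-contained sketch of it (assuming the plpgsql language is installed in the database). The trigger is exactly the one described in the thread; the table layout and the path /tmp/foo.data are placeholders rather than the actual test data used above.

-- Suppress the actual inserts so repeated COPY runs measure only COPY's
-- parsing and data-conversion work (once the file is in the kernel cache).
CREATE TABLE foo (a integer, b text, c date);

CREATE FUNCTION noway() RETURNS trigger AS
'begin return null; end' LANGUAGE plpgsql;

CREATE TRIGGER noway BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE noway();

-- Run this several times and ignore the first (cold-cache) timing.
\timing
COPY foo FROM '/tmp/foo.data';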
[ { "msg_contents": "Praveen Raja:\r\n\r\n\t I think the size of a table don't affect the speed of inserts into it.Because PostgreSQL just doing something like \"append\" on the data files.\r\n But the index do speed-down the inserts. Because PostgreSQL should maintain the index when doing inserts.\r\n\r\n\t\t hope this is useful for your question.\r\n\r\n\r\n\r\n======= 2005-06-27 19:24:06 you wrote:=======\r\n\r\n>Hi all\r\n>\r\n>I'm wondering if and how the size of a table affects speed of inserts\r\n>into it? What if the table has indexes, does that alter the answer?\r\n>\r\n>Thanks\r\n>\r\n>\r\n>\r\n>---------------------------(end of broadcast)---------------------------\r\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\r\n>\r\n\r\n= = = = = = = = = = = = = = = = = = = =\r\n\t\t\t\r\n\r\n       Best regards!\r\n \r\n        \r\n 李江华\r\n Seamus Dean\r\n Alibaba.com\r\n\t\t\t TEL:0571-85022088-2287\r\n         [email protected]\r\n         2005-06-27\r\n", "msg_date": "Mon, 27 Jun 2005 19:30:50 +0800", "msg_from": "\"=?gb2312?B?wO69rbuq?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance vs Table size" } ]
[ { "msg_contents": "> There are some immediate questions from our engineers about\nperformance\n> \n> \"- Oracle has one particular performance enhancement that Postgres is\n> missing. If you do a select that returns 100,000 rows in a given\norder,\n> and all you want are rows 99101 to 99200, then Oracle can do that very\n> efficiently. With Postgres, it has to read the first 99200 rows and\n> then discard the first 99100. But... If we really want to look at\n> performance, then we ought to put together a set of benchmarks of some\n> typical tasks.\"\n\nI agree with Rod: you are correct but this is a very odd objection. You\nare declaring a set but are only interested in a tiny subset of that\nbased on arbitrary critera. You can do this with cursors or with clever\nquerying (not without materializing the full set however), but why? \n\nMerlin\n\n", "msg_date": "Mon, 27 Jun 2005 08:40:29 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance - moving from oracle to postgresql" } ]
[ { "msg_contents": "Hi !\n\nMy company is evaluating to compatibilizate our system (developed in \nC++) to PostgreSQL.\n\nOur programmer made a lot of tests and he informed me that the \nperformance using ODBC is very similar than using libpq, even with a big \nnumber of simultaneous connections/queries. Of course that for us is \nsimpler use ODBC because will be easier to maintan as we already support \na lot of other databases using ODBC (MySQL, DB2, etc).\n\nSomeone already had this experience? What are the key benefits using \nlibpq insted of ODBC ?\n\nOur application have a heavy load and around 150 concorrent users.\n\nRegards,\n\nRodrigo Carvalhaes\n\n-- \nEsta mensagem foi verificada pelo sistema de antiv�rus e\n acredita-se estar livre de perigo.\n\n", "msg_date": "Mon, 27 Jun 2005 10:32:35 -0300", "msg_from": "grupos <[email protected]>", "msg_from_op": true, "msg_subject": "PERFORMANCE ISSUE ODBC x LIBPQ C++ Application" } ]
[ { "msg_contents": "> Hi !\n> \n> My company is evaluating to compatibilizate our system (developed in\n> C++) to PostgreSQL.\n> \n> Our programmer made a lot of tests and he informed me that the\n> performance using ODBC is very similar than using libpq, even with a\nbig\n> number of simultaneous connections/queries. Of course that for us is\n> simpler use ODBC because will be easier to maintan as we already\nsupport\n> a lot of other databases using ODBC (MySQL, DB2, etc).\n> \n> Someone already had this experience? What are the key benefits using\n> libpq insted of ODBC ?\n> \n> Our application have a heavy load and around 150 concorrent users.\n\nThe ODBC driver for postgresql implements its own protocol stack.\nUnfortunately, it is still on protocol revision 2 (out of 3). Also, IMO\nlibpq is a little better tested and durable than the odbc driver. This\nnaturally follows from the fact that libpq is more widely used and more\nactively developed than odbc.\n\nIf you are heavily C++ invested you can consider wrapping libpq yourself\nif you want absolute maximum performance. If you happen to be\ndeveloping on Borland platform give strong consideration to Zeos\nconnection library which is very well designed (it wraps libpq).\n\nYou might want to consider posting your question to the odbc list.\n\nMerlin \n\n", "msg_date": "Mon, 27 Jun 2005 10:29:05 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE ISSUE ODBC x LIBPQ C++ Application" } ]
[ { "msg_contents": "i would take a peek at psqlodbc-8.0 drivers ..\ni wouldn't battle with other version you might find such as (unixodbc\nones)\n\n\n-elz\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Merlin Moncure\n> Sent: 27 juin 2005 10:29\n> To: grupos\n> Cc: [email protected]\n> Subject: Re: [PERFORM] PERFORMANCE ISSUE ODBC x LIBPQ C++ Application\n> \n> > Hi !\n> > \n> > My company is evaluating to compatibilizate our system (developed in\n> > C++) to PostgreSQL.\n> > \n> > Our programmer made a lot of tests and he informed me that the \n> > performance using ODBC is very similar than using libpq, even with a\n> big\n> > number of simultaneous connections/queries. Of course that \n> for us is \n> > simpler use ODBC because will be easier to maintan as we already\n> support\n> > a lot of other databases using ODBC (MySQL, DB2, etc).\n> > \n> > Someone already had this experience? What are the key \n> benefits using \n> > libpq insted of ODBC ?\n> > \n> > Our application have a heavy load and around 150 concorrent users.\n> \n> The ODBC driver for postgresql implements its own protocol stack.\n> Unfortunately, it is still on protocol revision 2 (out of 3). \n> Also, IMO libpq is a little better tested and durable than \n> the odbc driver. This naturally follows from the fact that \n> libpq is more widely used and more actively developed than odbc.\n> \n> If you are heavily C++ invested you can consider wrapping \n> libpq yourself if you want absolute maximum performance. If \n> you happen to be developing on Borland platform give strong \n> consideration to Zeos connection library which is very well \n> designed (it wraps libpq).\n> \n> You might want to consider posting your question to the odbc list.\n> \n> Merlin \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Mon, 27 Jun 2005 10:46:46 -0400", "msg_from": "\"Eric Lauzon\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE ISSUE ODBC x LIBPQ C++ Application" } ]
[ { "msg_contents": "[moved to pgsql-performance]\n> > \tCurrently I want to take a TPC-H test on postgresql-8.0.2. I\nhave\n> > downloaded the DBGEN and QGEN from the homepage of TPC. But I\n> encountered\n> > many problems which forced me to request some help. 1. How to load\nthe\n> data\n> > from flat file generated by dbgen tool? To the best of my knowledge,\n> there\n> > is a SQL Loader in Oracle 2. How to simulate the currency\nenvironment?\n> > Where can I download a client which connects to DB server through\nODBC?\n> \n> Get DBT3 from Sourceforge (search on \"osdldbt\"). This is OSDL's\nTPCH-like\n> test.\n> \n> However, given your knowledge of PostgreSQL you're unlikely to get any\n> kind of\n> result you can use -- TPCH requires siginficant database tuning\nknowledge.\n\nI don't necessarily agree. In fact, I remember reading the standards\nfor one of the TPC benchmarks and it said you were not supposed to\nspecifically tune for the test. Any submission, including one with\nstock settings, should be given consideration (and the .conf settings\nshould be submitted along with the benchmark results). This can only\nhelp to increase the body of knowledge on configuring the database.\n\nMerlin\n", "msg_date": "Mon, 27 Jun 2005 14:14:10 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] How two perform TPC-H test on postgresql-8.0.2" } ]
[ { "msg_contents": "Hi, \nI have a stored procedure written in perl and I doubt that perl's\ngarbage collector is working :-(\nafter a lot of work, postmaster has a size of 1100 Mb and I think\nthat the keyword \"undef\" has no effects.\nBefore tuning my procedure, does it exist a known issue, a workaround ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Mon, 27 Jun 2005 22:46:48 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "perl garbage collector" }, { "msg_contents": "On Jun 27, 2005, at 4:46 PM, Jean-Max Reymond wrote:\n\n> Hi,\n> I have a stored procedure written in perl and I doubt that perl's\n> garbage collector is working :-(\n> after a lot of work, postmaster has a size of 1100 Mb and I think\n> that the keyword \"undef\" has no effects.\n> Before tuning my procedure, does it exist a known issue, a \n> workaround ?\n\njust because your application frees the memory doesn't mean that the \nOS takes it back. in other words, don't confuse memory usage with \nmemory leakage.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806", "msg_date": "Mon, 27 Jun 2005 17:11:47 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl garbage collector" }, { "msg_contents": "Jean-Max Reymond <[email protected]> writes:\n> I have a stored procedure written in perl and I doubt that perl's\n> garbage collector is working :-(\n> after a lot of work, postmaster has a size of 1100 Mb and I think\n> that the keyword \"undef\" has no effects.\n\nCheck the PG list archives --- there's been previous discussion of\nsimilar issues. I think we concluded that when Perl is built to use\nits own private memory allocator, the results of that competing with\nmalloc are not very pretty :-(. You end up with a fragmented memory\nmap and no chance to give anything back to the OS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jun 2005 01:53:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl garbage collector " }, { "msg_contents": "2005/6/28, Tom Lane <[email protected]>:\n> Jean-Max Reymond <[email protected]> writes:\n> > I have a stored procedure written in perl and I doubt that perl's\n> > garbage collector is working :-(\n> > after a lot of work, postmaster has a size of 1100 Mb and I think\n> > that the keyword \"undef\" has no effects.\n> \n> Check the PG list archives --- there's been previous discussion of\n> similar issues. I think we concluded that when Perl is built to use\n> its own private memory allocator, the results of that competing with\n> malloc are not very pretty :-(. You end up with a fragmented memory\n> map and no chance to give anything back to the OS.\n\nthanks Tom for your advice. 
I have read the discussion but a small\ntest is very confusing for me.\nConsider this function:\n\nCREATE FUNCTION jmax() RETURNS integer\n AS $_$use strict;\n\nmy $i=0;\nfor ($i=0; $i<10000;$i++) {\n my $ch = \"0123456789\"x100000;\n my $res = spi_exec_query(\"select * from xdb_child where\ndoc_id=100 and ele_id=3 \");\n}\nmy $j=1;$_$\n LANGUAGE plperlu SECURITY DEFINER;\n\n\nALTER FUNCTION public.jmax() OWNER TO postgres;\n\nthe line my $ch = \"0123456789\"x100000; is used to allocate 1Mb.\nthe line my $res = spi_exec_query(\"select * from xdb_child where\ndoc_id=100 and ele_id=3 limit 5\"); simulates a query.\n\nwithout spi_exec_quer, the used memory in postmaster is a constant.\nSo, I think that pl/perl manages correctly memory in this case.\nwith spi_exec_query, postmaster grows and grows until the end of the loop. \nSi, it seems that spi_exec_query does not release all the memory after\neach call.\nFor my application (in real life) afer millions of spi_exec_query, it\ngrows up to 1Gb :-(\n\n\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Tue, 28 Jun 2005 20:47:27 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: perl garbage collector" }, { "msg_contents": "2005/6/28, Jean-Max Reymond <[email protected]>:\n> For my application (in real life) afer millions of spi_exec_query, it\n> grows up to 1Gb :-(\n\nOK, now in 2 lines:\n\nCREATE FUNCTION jmax() RETURNS integer\n AS $_$use strict;\n\nfor (my $i=0; $i<10000000;$i++) {\n spi_exec_query(\"select 'foo'\");\n}\nmy $j=1;$_$\n LANGUAGE plperlu SECURITY DEFINER\n\nrunning this test and your postmaster eats a lot of memory.\nit seems that there is a memory leak in spi_exec_query :-( \n\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Wed, 29 Jun 2005 00:10:01 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: perl garbage collector" } ]
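One way to narrow down where the growth comes from is to run an equivalent tight loop in PL/pgSQL on the same backend and watch the process with ps or top while each version runs (pg_backend_pid() identifies the process to watch). This is offered only as a comparison point under that assumption, not as a fix for the PL/Perl behaviour, and the function below is invented for the purpose.

SELECT pg_backend_pid();

CREATE OR REPLACE FUNCTION jmax_plpgsql() RETURNS integer AS $$
DECLARE
    r RECORD;
BEGIN
    FOR i IN 1 .. 1000000 LOOP
        SELECT INTO r 'foo'::text AS f;   -- same trivial query as the PL/Perl test
    END LOOP;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;

SELECT jmax_plpgsql();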
[ { "msg_contents": "Karl,\n\n> Seems to me that when there's a constant value in the query\n> and an = comparision it will always be faster to use the (b-tree)\n> index that's ordered first by the constant value, as then all further\n> blocks are guarenteed to have a higher relevant information\n> density. At least when compared with another index that has the\n> same columns in it.\n\nThat really depends on the stats. Such a choice would *not* be \nappropriate if the < comparison was expected to return 1- rows while the = \ncondition applied to 15% of the table.\n\nWhat is your STATISTICS_TARGET for the relevant columns set to? When's \nthe last time you ran analyze? If this is all updated, you want to post \nthe pg_stats rows for the relevant columns?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 27 Jun 2005 15:37:41 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor index choice -- multiple indexes of the same columns" }, { "msg_contents": "Postgresql 8.0.3\n\nHi,\n\nI have a query\n\nselect 1\n from census\n where date < '1975-9-21' and sname = 'RAD' and status != 'A'\n limit 1;\n\nExplain analyze says it always uses the index made by:\n\n CREATE INDEX census_date_sname ON census (date, sname);\n\nthis is even after I made the index:\n\n CREATE INDEX census_sname_date ON census (sname, date);\n\nI made census_sname_date because it ran too slow. By deleting\ncensus_date_sname (temporarly, because my apps don't like this)\nI can force the use of census_sname_date and the query runs fine.\n\nSeems to me that when there's a constant value in the query\nand an = comparision it will always be faster to use the (b-tree)\nindex that's ordered first by the constant value, as then all further\nblocks are guarenteed to have a higher relevant information\ndensity. At least when compared with another index that has the\nsame columns in it.\n\nAs you might imagine there are relatively few sname values and\nrelatively many date values in my data. I use a query like the\nabove in a trigger to enforce bounds limitations. I don't\nexpect (want) to find any rows returned.\n\nI've figured out how to avoid executing this code very often,\nso this is not presently a serious problem for me.\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Mon, 27 Jun 2005 23:09:26 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Poor index choice -- multiple indexes of the same columns" }, { "msg_contents": "\nOn 06/27/2005 05:37:41 PM, Josh Berkus wrote:\n> Karl,\n> \n> > Seems to me that when there's a constant value in the query\n> > and an = comparision it will always be faster to use the (b-tree)\n> > index that's ordered first by the constant value, as then all\n> further\n> > blocks are guarenteed to have a higher relevant information\n> > density. At least when compared with another index that has the\n> > same columns in it.\n> \n> That really depends on the stats. Such a choice would *not* be\n> appropriate if the < comparison was expected to return 1- rows while\n> the =\n> condition applied to 15% of the table.\n\nWe're talking internals here so I don't know what I'm talking\nabout, but, when the = comparison returns 15% of the table\nyou can find your way straight to the 1- (sic) relevent rows\nbecause that 15% is further sorted by the second column of the\nindex. 
So that's one disk read and after that when you scan\nthe rest of the blocks every datum is relevant/returned.\nSo your scan will pass through fewer disk blocks. The only\ncase that would make sense to consider using the other\nindex is if the planner knew it could\nget the answer in 1 disk read, in which case it should be\nable to get the answer out of either index in one disk read\nas both indexes are on the same columns.\n\n> What is your STATISTICS_TARGET for the relevant columns set to?\n\nSTATISTICS_TARGET is the default, which I read as 10 the docs.\n\n> When's\n> the last time you ran analyze?\n\nI'm doing this in a torture test script, loading data.\nEvery fibnocci number of rows * 100 I VACCUM ANALYZE.\nSo, 100, 200, 300, 500, 800, etc.\n\nJust for grins I've created the index I'd like it to use\nand run VACUUM ANALYZE and shown the EXPLAIN ANALYZE below.\n\n> If this is all updated, you want to\n> post\n> the pg_stats rows for the relevant columns?\n\nPg_stats rows below. (I've tried to wrap the lines short\nso as not to mess up anybody's mailer.)\n\n# create index census_sname_date on census (sname, date);\nCREATE INDEX\n# vacuum analyze census;\nVACUUM\n# explain analyze select 1 from census where date < '1975-9-21'\n and sname = 'RAD' and status != 'A' ;\n QUERY\n PLAN\n---------------------------------------------------------------\n---------------------------------------------------------------\n----\n Index Scan using census_date_sname on census (cost=0.00..2169.51\nrows=1437 width=0) (actual time=40.610..40.610 rows=0 loops=1)\n Index Cond: ((date < '1975-09-21'::date) AND (sname =\n'RAD'::bpchar))\n Filter: (status <> 'A'::bpchar)\n Total runtime: 40.652 ms\n(4 rows)\n\nCompare with:\n\n# drop index census_date_sname;\nDROP INDEX\n# explain analyze select date from census where sname = 'RAD'\n and date < '1975-9-21' and status != 'A' limit 1;\n QUERY\nPLAN\n-------------------------------------------------------------------\n-------------------------------------------------------------------\n Limit (cost=0.00..3.37 rows=1 width=4) (actual time=0.097..0.097\nrows=0 loops=1)\n -> Index Scan using census_sname_date on census \n(cost=0.00..5203.95 rows=1544 width=4) (actual time=0.094..0.094\nrows=0 loops=1)\n Index Cond: ((sname = 'RAD'::bpchar) AND (date <\n'1975-09-21'::date))\n Filter: (status <> 'A'::bpchar)\n Total runtime: 0.133 ms\n(5 rows)\n\n\n\n\n# select * from pg_stats where tablename = 'census' and (attname =\n'sname' or attname = 'date');\n schemaname | tablename | attname | null_frac | avg_width | n_distinct\n| most_common_vals | most_common_freqs | histogram_bounds |\ncorrelation\n------------+-----------+---------+-----------+-----------+-----------\n-+--------------------------------------------------------------------\n---------------------------------------------+------------------------\n----------------------------------------------------------------------\n--------------+-------------------------------------------------------\n---------------------------------------------------------------------+\n-------------\n babase | census | date | 0 | 4 | 4687 |\n{1979-02-01,1976-06-16,1977-03-23,1978-08-25,1979-09-20,1971-06-28\n,1972-04-28,1972-08-27,1974-04-06,1975-03-19}\n|\n{0.002,0.00166667,0.00166667,0.00166667,0.00166667,0.00133333\n,0.00133333,0.00133333,0.00133333,0.00133333}\n|\n{1959-07-15,1966-02-18,1969-02-22,1971-01-10,1972-07-26,1974-02-09\n,1975-05-27,1976-07-28,1977-08-19,1978-08-07,1979-10-02}\n| 1\n babase | census | sname | 0 | 7 | 177 
|\n{MAX,ALT,PRE,COW,EST,JAN,RIN,ZUM,DUT,LUL} |\n{0.0166667,0.015,0.015,0.0146667\n,0.0143333,0.014,0.0136667,0.0136667,0.0133333,0.0133333}\n| {ALI,BUN,FAN,IBI,LER,NDO,PET,RUS,SLM,TOT,XEN} | 0.0446897\n(2 rows)\n\nThanks.\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Tue, 28 Jun 2005 02:36:51 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor index choice -- multiple indexes of the same" }, { "msg_contents": "\nOn 06/27/2005 09:36:51 PM, Karl O. Pinc wrote:\n\n> I'm doing this in a torture test script, loading data.\n> Every fibnocci number of rows * 100 I VACCUM ANALYZE.\n> So, 100, 200, 300, 500, 800, etc.\n\n(And of course disconnect my client and re-connect so\nas to use the new statistics. sure would be nice if\nI didn't have to do this.)\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Tue, 28 Jun 2005 03:11:25 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor index choice -- multiple indexes of the same" }, { "msg_contents": "\"Karl O. Pinc\" <[email protected]> writes:\n> I have a query\n\n> select 1\n> from census\n> where date < '1975-9-21' and sname = 'RAD' and status != 'A'\n> limit 1;\n\n> Explain analyze says it always uses the index made by:\n\n> CREATE INDEX census_date_sname ON census (date, sname);\n\n> this is even after I made the index:\n\n> CREATE INDEX census_sname_date ON census (sname, date);\n\nI don't believe that any existing release can tell the difference\nbetween these two indexes as far as costs go. I just recently\nadded some code to btcostestimate that would cause it to prefer\nthe index on (sname, date) but of course that's not released yet.\n\nHowever: isn't the above query pretty seriously underspecified?\nWith a LIMIT and no ORDER BY, you are asking for a random one\nof the rows matching the condition. I realize that with\n\"select 1\" you may not care much, but adding a suitable ORDER BY\nwould help push the planner towards using the right index. In\nthis case \"ORDER BY sname DESC, date DESC\" would probably do the\ntrick.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jun 2005 02:40:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor index choice -- multiple indexes of the same columns " }, { "msg_contents": "\nOn 06/28/2005 01:40:56 AM, Tom Lane wrote:\n> \"Karl O. Pinc\" <[email protected]> writes:\n> > I have a query\n> \n> > select 1\n> > from census\n> > where date < '1975-9-21' and sname = 'RAD' and status != 'A'\n> > limit 1;\n> \n> > Explain analyze says it always uses the index made by:\n> \n> > CREATE INDEX census_date_sname ON census (date, sname);\n> \n> > this is even after I made the index:\n> \n> > CREATE INDEX census_sname_date ON census (sname, date);\n> \n> I don't believe that any existing release can tell the difference\n> between these two indexes as far as costs go. I just recently\n> added some code to btcostestimate that would cause it to prefer\n> the index on (sname, date) but of course that's not released yet.\n> \n> However: isn't the above query pretty seriously underspecified?\n> With a LIMIT and no ORDER BY, you are asking for a random one\n> of the rows matching the condition. 
I realize that with\n> \"select 1\" you may not care much, but adding a suitable ORDER BY\n> would help push the planner towards using the right index. In\n> this case \"ORDER BY sname DESC, date DESC\" would probably do the\n> trick.\n\nYes, that works. I'd already tried \"ORDER BY date DESC\", before\nI first wrote, and that did not work. (I started with no LIMIT\neither, and tried adding specifications until I gave up. It's\nvery good that the new planner will figure out things by itself.)\n\"ORDER BY sname DESC\" works as well. This is a\nbit odd, as with the constant in the = comparison \"ORDER BY date\nDESC\" is the same as \"ORDER BY sname DESC, date DESC\".\nI guess that's why I gave up on my attempts to get the planner\nto use the (sname, date) index before I got to your solution.\n\nThanks everybody for the help.\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Tue, 28 Jun 2005 17:16:41 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor index choice -- multiple indexes of the same" } ]
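Pulling the thread's conclusion together, the bounds-check query ends up looking like this; it is the same statement discussed above, written out with the ORDER BY Tom suggested so the planner settles on the (sname, date) index.

SELECT 1
  FROM census
 WHERE sname = 'RAD'
   AND date < '1975-9-21'
   AND status != 'A'
 ORDER BY sname DESC, date DESC
 LIMIT 1;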
[ { "msg_contents": "Hi,\n\nAt 01:16 28/06/2005, Karl O. Pinc wrote:\n>http://www.postgresql.org/docs/8.0/static/indexes-examine.html\n>\n>Says:\n>\n>\"If you do not succeed in adjusting the costs to be more\n>appropriate, then you may have to resort to forcing index\n>usage explicitly.\"\n>\n>Is there a way to force a query to use a particular index?\n\nNot that I know of.\n\n>If not, what does this sentence mean?\n\nThat you can force the planner to use an index (any one) over not using an \nindex (and using seq scans instead) by setting enable_seqscan to off.\n\nJacques.\n\n\n", "msg_date": "Tue, 28 Jun 2005 00:40:53 +0200", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing use of a particular index" }, { "msg_contents": "http://www.postgresql.org/docs/8.0/static/indexes-examine.html\n\nSays:\n\n\"If you do not succeed in adjusting the costs to be more\nappropriate, then you may have to resort to forcing index\nusage explicitly.\"\n\nIs there a way to force a query to use a particular index?\nIf not, what does this sentence mean?\n\nThanks.\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Mon, 27 Jun 2005 23:16:01 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Forcing use of a particular index" } ]
[ { "msg_contents": "Hi,\n\nI'm having a hard time finding the poorly performing\nstatements in my plpgsql procedures, many of which\nare triggers. Am I missing something?\n\nI can get the query plans by starting up a new\nconnection and doing:\nSET DEBUG_PRINT_PLAN TO TRUE;\nSET CLIENT_MIN_MESSAGES TO DEBUG1;\nAnd then running code that exercises my functions.\nThen I can find the queries that, in theory,\ncould have problems. But problems remain\nafter this.\n\nWhat I'd really like is a SET variable (or maybe\na clause in CREATE FUNCTION) that causes any\nfunctions compiled to issue EXPLAIN ANALYZE output\nand the query text itself, to be RAISEd.\nThen I could watch the performance as it ran.\n\nShort of that I think I'm going to be reduced to\nwriting a C function that returns the real\nsystem time so I can spatter my code with\nRAISE statements that indicate actual execution\ntime.\n\nIs there a better approach?\nDoes anybody have such a C function handy?\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Mon, 27 Jun 2005 23:30:45 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance analysis of plpgsql code" }, { "msg_contents": "On Mon, Jun 27, 2005 at 11:30:45PM +0000, Karl O. Pinc wrote:\n> \n> Short of that I think I'm going to be reduced to\n> writing a C function that returns the real\n> system time so I can spatter my code with\n> RAISE statements that indicate actual execution\n> time.\n\nSee timeofday().\n\nhttp://www.postgresql.org/docs/8.0/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 27 Jun 2005 17:33:03 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "\nOn Jun 28, 2005, at 10:54 AM, Karl O. Pinc wrote:\n\n>\n> On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n>\n>> On Mon, Jun 27, 2005 at 11:30:45PM +0000, Karl O. Pinc wrote:\n>> >\n>> > Short of that I think I'm going to be reduced to\n>> > writing a C function that returns the real\n>> > system time so I can spatter my code with\n>> > RAISE statements that indicate actual execution\n>> > time.\n>> See timeofday().\n>>\n>\n> That only gives you the time at the start of the transaction,\n> so you get no indication of how long anything in the\n> transaction takes.\n\nI recommend you look again.\n\n<http://www.postgresql.org/docs/8.0/interactive/functions- \ndatetime.html#FUNCTIONS-DATETIME-CURRENT>\n\nMichael Glaesemann\ngrzm myrealbox com\n", "msg_date": "Tue, 28 Jun 2005 10:15:50 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "On Tue, Jun 28, 2005 at 01:54:08AM +0000, Karl O. Pinc wrote:\n> On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n>\n> >See timeofday().\n> \n> That only gives you the time at the start of the transaction,\n> so you get no indication of how long anything in the\n> transaction takes.\n\nDid you read the documentation or try it? 
Perhaps you're thinking\nof now(), current_timestamp, and friends, which don't advance during\na transaction; but as the documentation states, \"timeofday() returns\nthe wall-clock time and does advance during transactions.\"\n\nI just ran tests on versions of PostgreSQL going back to 7.2.8 and\nin all of them timeofday() advanced during a transaction. Does it\nnot work on your system? If not then something's broken -- what\nOS and version of PostgreSQL are you using?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 27 Jun 2005 19:34:19 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "\nOn 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n> On Mon, Jun 27, 2005 at 11:30:45PM +0000, Karl O. Pinc wrote:\n> >\n> > Short of that I think I'm going to be reduced to\n> > writing a C function that returns the real\n> > system time so I can spatter my code with\n> > RAISE statements that indicate actual execution\n> > time.\n> \n> See timeofday().\n\nThat only gives you the time at the start of the transaction,\nso you get no indication of how long anything in the\ntransaction takes.\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Tue, 28 Jun 2005 01:54:08 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "On Tue, Jun 28, 2005 at 03:03:06AM +0000, Karl O. Pinc wrote:\n> \n> For all your work a documentation patch is appended that\n> I think is easier to read and might avoid this problem\n> in the future.\n\nPatches should go to the pgsql-patches list -- the people who review\nand apply patches might not be following this thread.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 27 Jun 2005 20:49:18 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "\nOn 06/27/2005 08:34:19 PM, Michael Fuhr wrote:\n> On Tue, Jun 28, 2005 at 01:54:08AM +0000, Karl O. Pinc wrote:\n> > On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n> >\n> > >See timeofday().\n> >\n> > That only gives you the time at the start of the transaction,\n> > so you get no indication of how long anything in the\n> > transaction takes.\n> \n> Did you read the documentation or try it? Perhaps you're thinking\n> of now(), current_timestamp, and friends, which don't advance during\n> a transaction; but as the documentation states, \"timeofday() returns\n> the wall-clock time and does advance during transactions.\"\n\nVery sorry. I did not read through the complete documentation.\n\n> I just ran tests on versions of PostgreSQL going back to 7.2.8 and\n> in all of them timeofday() advanced during a transaction.\n\nFor all your work a documentation patch is appended that\nI think is easier to read and might avoid this problem\nin the future. If you don't read all the way through the\ncurrent cvs version then you might think, as I did,\nthat timeofday() is a CURRENT_TIMESTAMP related function.\n\nSorry, but 3 lines wrap in the patch\nin my email client. :(\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. 
Heinlein\n\n\n--- func.sgml\t2005-06-26 17:05:35.000000000 -0500\n+++ func.sgml.new\t2005-06-27 21:51:05.301097896 -0500\n@@ -5787,15 +5787,6 @@\n </para>\n\n <para>\n- There is also the function <function>timeofday()</function>, which \nfor historical\n- reasons returns a <type>text</type> string rather than a \n<type>timestamp</type> value:\n-<screen>\n-SELECT timeofday();\n-<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n19:07:32.000126 2001 EST</computeroutput>\n-</screen>\n- </para>\n-\n- <para>\n It is important to know that\n <function>CURRENT_TIMESTAMP</function> and related functions \nreturn\n the start time of the current transaction; their values do not\n@@ -5803,8 +5794,7 @@\n the intent is to allow a single transaction to have a consistent\n notion of the <quote>current</quote> time, so that multiple\n modifications within the same transaction bear the same\n- time stamp. <function>timeofday()</function>\n- returns the wall-clock time and does advance during transactions.\n+ time stamp.\n </para>\n\n <note>\n@@ -5815,6 +5805,18 @@\n </note>\n\n <para>\n+ There is also the function <function>timeofday()</function> which\n+ returns the wall-clock time and advances during transactions. For\n+ historical reasons <function>timeofday()</function> returns a\n+ <type>text</type> string rather than a <type>timestamp</type>\n+ value:\n+<screen>\n+SELECT timeofday();\n+<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n19:07:32.000126 2001 EST</computeroutput>\n+</screen>\n+ </para>\n+\n+ <para>\n All the date/time data types also accept the special literal value\n <literal>now</literal> to specify the current date and time. \nThus,\n the following three all return the same result:\n\n", "msg_date": "Tue, 28 Jun 2005 03:03:06 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance analysis of plpgsql code" }, { "msg_contents": "\nOn 06/27/2005 10:03:06 PM, Karl O. Pinc wrote:\n\nOn 06/27/2005 08:34:19 PM, Michael Fuhr wrote:\n> On Tue, Jun 28, 2005 at 01:54:08AM +0000, Karl O. Pinc wrote:\n> > On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n> >\n> > >See timeofday().\n> >\n> > That only gives you the time at the start of the transaction,\n> > so you get no indication of how long anything in the\n> > transaction takes.\n> \n> Did you read the documentation or try it? Perhaps you're thinking\n> of now(), current_timestamp, and friends, which don't advance during\n> a transaction; but as the documentation states, \"timeofday() returns\n> the wall-clock time and does advance during transactions.\"\n\nVery sorry. I did not read through the complete documentation.\n\n> I just ran tests on versions of PostgreSQL going back to 7.2.8 and\n> in all of them timeofday() advanced during a transaction.\n\nFor all your work a documentation patch is appended that\nI think is easier to read and might avoid this problem\nin the future. If you don't read all the way through the\ncurrent cvs version then you might think, as I did,\nthat timeofday() is a CURRENT_TIMESTAMP related function.\n\nSorry, but 3 lines wrap in the patch\nin my email client. :(\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. 
Heinlein\n\n\n--- func.sgml\t2005-06-26 17:05:35.000000000 -0500\n+++ func.sgml.new\t2005-06-27 21:51:05.301097896 -0500\n@@ -5787,15 +5787,6 @@\n </para>\n\n <para>\n- There is also the function <function>timeofday()</function>, which \nfor historical\n- reasons returns a <type>text</type> string rather than a \n<type>timestamp</type> value:\n-<screen>\n-SELECT timeofday();\n-<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n19:07:32.000126 2001 EST</computeroutput>\n-</screen>\n- </para>\n-\n- <para>\n It is important to know that\n <function>CURRENT_TIMESTAMP</function> and related functions \nreturn\n the start time of the current transaction; their values do not\n@@ -5803,8 +5794,7 @@\n the intent is to allow a single transaction to have a consistent\n notion of the <quote>current</quote> time, so that multiple\n modifications within the same transaction bear the same\n- time stamp. <function>timeofday()</function>\n- returns the wall-clock time and does advance during transactions.\n+ time stamp.\n </para>\n\n <note>\n@@ -5815,6 +5805,18 @@\n </note>\n\n <para>\n+ There is also the function <function>timeofday()</function> which\n+ returns the wall-clock time and advances during transactions. For\n+ historical reasons <function>timeofday()</function> returns a\n+ <type>text</type> string rather than a <type>timestamp</type>\n+ value:\n+<screen>\n+SELECT timeofday();\n+<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n19:07:32.000126 2001 EST</computeroutput>\n+</screen>\n+ </para>\n+\n+ <para>\n All the date/time data types also accept the special literal value\n <literal>now</literal> to specify the current date and time. \nThus,\n the following three all return the same result:\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Tue, 28 Jun 2005 15:42:32 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Re: [PERFORM] Performance analysis of plpgsql code\n [[email protected]]" }, { "msg_contents": "\nPatch applied. Thanks. Your documentation changes can be viewed in\nfive minutes using links on the developer's page,\nhttp://www.postgresql.org/developer/testing.\n\n\n---------------------------------------------------------------------------\n\n\nKarl O. Pinc wrote:\n> \n> On 06/27/2005 10:03:06 PM, Karl O. Pinc wrote:\n> \n> On 06/27/2005 08:34:19 PM, Michael Fuhr wrote:\n> > On Tue, Jun 28, 2005 at 01:54:08AM +0000, Karl O. Pinc wrote:\n> > > On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:\n> > >\n> > > >See timeofday().\n> > >\n> > > That only gives you the time at the start of the transaction,\n> > > so you get no indication of how long anything in the\n> > > transaction takes.\n> > \n> > Did you read the documentation or try it? Perhaps you're thinking\n> > of now(), current_timestamp, and friends, which don't advance during\n> > a transaction; but as the documentation states, \"timeofday() returns\n> > the wall-clock time and does advance during transactions.\"\n> \n> Very sorry. 
I did not read through the complete documentation.\n> \n> > I just ran tests on versions of PostgreSQL going back to 7.2.8 and\n> > in all of them timeofday() advanced during a transaction.\n> \n> For all your work a documentation patch is appended that\n> I think is easier to read and might avoid this problem\n> in the future. If you don't read all the way through the\n> current cvs version then you might think, as I did,\n> that timeofday() is a CURRENT_TIMESTAMP related function.\n> \n> Sorry, but 3 lines wrap in the patch\n> in my email client. :(\n> \n> \n> Karl <[email protected]>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n> \n> \n> --- func.sgml\t2005-06-26 17:05:35.000000000 -0500\n> +++ func.sgml.new\t2005-06-27 21:51:05.301097896 -0500\n> @@ -5787,15 +5787,6 @@\n> </para>\n> \n> <para>\n> - There is also the function <function>timeofday()</function>, which \n> for historical\n> - reasons returns a <type>text</type> string rather than a \n> <type>timestamp</type> value:\n> -<screen>\n> -SELECT timeofday();\n> -<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n> 19:07:32.000126 2001 EST</computeroutput>\n> -</screen>\n> - </para>\n> -\n> - <para>\n> It is important to know that\n> <function>CURRENT_TIMESTAMP</function> and related functions \n> return\n> the start time of the current transaction; their values do not\n> @@ -5803,8 +5794,7 @@\n> the intent is to allow a single transaction to have a consistent\n> notion of the <quote>current</quote> time, so that multiple\n> modifications within the same transaction bear the same\n> - time stamp. <function>timeofday()</function>\n> - returns the wall-clock time and does advance during transactions.\n> + time stamp.\n> </para>\n> \n> <note>\n> @@ -5815,6 +5805,18 @@\n> </note>\n> \n> <para>\n> + There is also the function <function>timeofday()</function> which\n> + returns the wall-clock time and advances during transactions. For\n> + historical reasons <function>timeofday()</function> returns a\n> + <type>text</type> string rather than a <type>timestamp</type>\n> + value:\n> +<screen>\n> +SELECT timeofday();\n> +<lineannotation>Result: </lineannotation><computeroutput>Sat Feb 17 \n> 19:07:32.000126 2001 EST</computeroutput>\n> +</screen>\n> + </para>\n> +\n> + <para>\n> All the date/time data types also accept the special literal value\n> <literal>now</literal> to specify the current date and time. \n> Thus,\n> the following three all return the same result:\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> Karl <[email protected]>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 28 Jun 2005 21:53:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Re: [PERFORM] Performance analysis of plpgsql code" } ]
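Since the thread ends with the documentation fix rather than an example, here is a small sketch of the timing technique it converges on: timeofday() advances during a transaction, so casting its text result to a timestamp lets RAISE NOTICE report elapsed time around individual sections of a PL/pgSQL function. The function name and the generate_series stand-in for "real work" are invented for illustration.

CREATE OR REPLACE FUNCTION timed_example() RETURNS void AS $$
DECLARE
    t0 timestamptz;
    t1 timestamptz;
BEGIN
    t0 := timeofday()::timestamptz;

    PERFORM count(*) FROM generate_series(1, 100000);  -- stand-in for the code being profiled

    t1 := timeofday()::timestamptz;
    RAISE NOTICE 'section took %', t1 - t0;
END;
$$ LANGUAGE plpgsql;

SELECT timed_example();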
[ { "msg_contents": "Hi all,\n\nMy company currently runs a number of both web-based and more \ntransactional projects on a PostgreSQL 7.3 server, and we're looking to \nupgrade to a new machine running 8.0 to boost performance and handle \ndata growth in to the future.\n\nRight now I'm looking at a Sun Fire V40z server in a fairly modest \nconfiguration: 2 Opteron 848 (2.2ghz) CPUs, and 4GB of RAM. The V40z has \n6 drive bays, and from earlier posts and the info at \nhttp://www.powerpostgresql.com/PerfList/ it sounds like the best \nconfiguration would be:\n\n* 2 drives in RAID 1 for OS and WAL\n* 4 drives in RAID 1+0 for data\n\nHowever, using 73gb 15krpm drives, I'll be limiting myself to about \n140GB of data storage, and I'm not sure if this will be enough to cover \nthe life of the server. If I stick with the faster drives for the WAL, \nhow significant a performance impact will there be if I use larger \n10krpm drives for the data?\n\nAlso, if anyone could recommend a SCSI RAID card for this configuration, \nor if anyone has any other suggestions, it'd be greatly appreciated.\n\nThanks\nLeigh\n", "msg_date": "Tue, 28 Jun 2005 11:24:48 +1000", "msg_from": "Leigh Dyer <[email protected]>", "msg_from_op": true, "msg_subject": "Faster drives for WAL than for data?" } ]
[ { "msg_contents": "We have the following function in our home grown mirroring package, but \nit isn't running as fast as we would like. We need to select statements \nfrom the pending_statement table, and we want to select all the \nstatements for a single transaction (pending_trans) in one go (that is, \nwe either select all the statements for a transaction, or none of them). \nWe select as many blocks of statements as it takes to top the 100 \nstatement limit (so if the last transaction we pull has enough \nstatements to put our count at 110, we'll still take it, but then we're \ndone).\n\nHere is our function:\n\nCREATE OR REPLACE FUNCTION dbmirror.get_pending()\n RETURNS SETOF dbmirror.pending_statement AS\n$BODY$\n\nDECLARE\n count INT4;\n transaction RECORD;\n statement dbmirror.pending_statement;\n BEGIN\n count := 0;\n\n FOR transaction IN SELECT t.trans_id as ID\n FROM pending_trans AS t WHERE fetched = false\n ORDER BY trans_id LIMIT 50\n LOOP\n update pending_trans set fetched = true where trans_id = \ntransaction.id;\n\n\t FOR statement IN SELECT s.id, s.transaction_id, s.table_name, s.op, \ns.data\n FROM dbmirror.pending_statement AS s\n WHERE s.transaction_id = transaction.id\n ORDER BY s.id ASC\n LOOP\n count := count + 1;\n\n RETURN NEXT statement;\n END LOOP;\n\n IF count > 100 THEN\n EXIT;\n END IF;\n END LOOP;\n\n RETURN;\n END;$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\nTable Schemas:\n\nCREATE TABLE dbmirror.pending_trans\n(\n trans_id oid NOT NULL,\n fetched bool DEFAULT false,\n CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n)\nWITHOUT OIDS;\n\nCREATE TABLE dbmirror.pending_statement\n(\n id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n transaction_id oid NOT NULL,\n table_name text NOT NULL,\n op char NOT NULL,\n data text NOT NULL,\n CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n)\nWITHOUT OIDS;\n\nCREATE UNIQUE INDEX idx_stmt_tran_id_id\n ON dbmirror.pending_statement\n USING btree\n (transaction_id, id);\n\nPostgres 8.0.1 on Linux.\n\nAny Help would be greatly appreciated.\n\nRegards\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Tue, 28 Jun 2005 14:37:34 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, "msg_subject": "How can I speed up this function?" }, { "msg_contents": "What's wrong with Slony?\n\nDavid Mitchell wrote:\n> We have the following function in our home grown mirroring package, but \n> it isn't running as fast as we would like. We need to select statements \n> from the pending_statement table, and we want to select all the \n> statements for a single transaction (pending_trans) in one go (that is, \n> we either select all the statements for a transaction, or none of them). 
\n> We select as many blocks of statements as it takes to top the 100 \n> statement limit (so if the last transaction we pull has enough \n> statements to put our count at 110, we'll still take it, but then we're \n> done).\n> \n> Here is our function:\n> \n> CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n> RETURNS SETOF dbmirror.pending_statement AS\n> $BODY$\n> \n> DECLARE\n> count INT4;\n> transaction RECORD;\n> statement dbmirror.pending_statement;\n> BEGIN\n> count := 0;\n> \n> FOR transaction IN SELECT t.trans_id as ID\n> FROM pending_trans AS t WHERE fetched = false\n> ORDER BY trans_id LIMIT 50\n> LOOP\n> update pending_trans set fetched = true where trans_id = \n> transaction.id;\n> \n> FOR statement IN SELECT s.id, s.transaction_id, s.table_name, \n> s.op, s.data\n> FROM dbmirror.pending_statement AS s\n> WHERE s.transaction_id = transaction.id\n> ORDER BY s.id ASC\n> LOOP\n> count := count + 1;\n> \n> RETURN NEXT statement;\n> END LOOP;\n> \n> IF count > 100 THEN\n> EXIT;\n> END IF;\n> END LOOP;\n> \n> RETURN;\n> END;$BODY$\n> LANGUAGE 'plpgsql' VOLATILE;\n> \n> Table Schemas:\n> \n> CREATE TABLE dbmirror.pending_trans\n> (\n> trans_id oid NOT NULL,\n> fetched bool DEFAULT false,\n> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n> )\n> WITHOUT OIDS;\n> \n> CREATE TABLE dbmirror.pending_statement\n> (\n> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n> transaction_id oid NOT NULL,\n> table_name text NOT NULL,\n> op char NOT NULL,\n> data text NOT NULL,\n> CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n> )\n> WITHOUT OIDS;\n> \n> CREATE UNIQUE INDEX idx_stmt_tran_id_id\n> ON dbmirror.pending_statement\n> USING btree\n> (transaction_id, id);\n> \n> Postgres 8.0.1 on Linux.\n> \n> Any Help would be greatly appreciated.\n> \n> Regards\n> \n\n", "msg_date": "Tue, 28 Jun 2005 11:04:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "\nChristopher Kings-Lynne wrote:\n> What's wrong with Slony?\n\nBecause it's not multi-master. Our mirroring package is.\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Tue, 28 Jun 2005 15:11:02 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": ">> What's wrong with Slony?\n> \n> Because it's not multi-master. Our mirroring package is.\n\nI'm curious - how did you write a multi-master replication package in \npgsql, when pgsql doesn't have 2 phase commits or any kind of \ndistributed syncing or conflict resolution in a release version?\n\nChris\n\n", "msg_date": "Tue, 28 Jun 2005 11:20:36 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "David Mitchell wrote:\n> We have the following function in our home grown mirroring package, but \n> it isn't running as fast as we would like. We need to select statements \n> from the pending_statement table, and we want to select all the \n> statements for a single transaction (pending_trans) in one go (that is, \n> we either select all the statements for a transaction, or none of them). 
\n> We select as many blocks of statements as it takes to top the 100 \n> statement limit (so if the last transaction we pull has enough \n> statements to put our count at 110, we'll still take it, but then we're \n> done).\n> \n> Here is our function:\n> \n> CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n> RETURNS SETOF dbmirror.pending_statement AS\n> $BODY$\n> \n> DECLARE\n> count INT4;\n> transaction RECORD;\n> statement dbmirror.pending_statement;\n> BEGIN\n> count := 0;\n> \n> FOR transaction IN SELECT t.trans_id as ID\n> FROM pending_trans AS t WHERE fetched = false\n> ORDER BY trans_id LIMIT 50\n> LOOP\n> update pending_trans set fetched = true where trans_id = \n> transaction.id;\n> \n> FOR statement IN SELECT s.id, s.transaction_id, s.table_name, \n> s.op, s.data\n> FROM dbmirror.pending_statement AS s\n> WHERE s.transaction_id = transaction.id\n> ORDER BY s.id ASC\n> LOOP\n> count := count + 1;\n> \n> RETURN NEXT statement;\n> END LOOP;\n> \n> IF count > 100 THEN\n> EXIT;\n> END IF;\n> END LOOP;\n> \n> RETURN;\n> END;$BODY$\n> LANGUAGE 'plpgsql' VOLATILE;\n\nDavid,\n\nI'm still a newbie and it may not affect performance but why are you \naliasing the tables? Can you not simply use\n\nFOR transaction IN SELECT trans_id\n FROM pending_trans\n WHERE fetched = false\n ORDER BY trans_id\n LIMIT 50\n\nand\n\nFOR statement IN SELECT id,\n transaction_id,\n table_name,\n op,\n data\n FROM dbmirror.pending_statement\n WHERE pending_statement.transaction_id =\n transaction.trans_id\n ORDER BY pending_statement.id\n\nI am pretty sure that the ORDER BY is slowing down both of these \nqueries. Since you are going to go through the whole table eventually \ndo you really need to sort the data at this point?\n\n-- \nKind Regards,\nKeith\n", "msg_date": "Mon, 27 Jun 2005 23:21:10 -0400", "msg_from": "Keith Worthington <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> I'm curious - how did you write a multi-master replication package in \n> pgsql, when pgsql doesn't have 2 phase commits or any kind of \n> distributed syncing or conflict resolution in a release version?\n\nWe didn't write it entirely in pgsql, there is a worker process that \ntakes care of actually committing to the database.\n\nCheers\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Tue, 28 Jun 2005 15:31:48 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "Hi Keith,\n\nUnfortunately, we must have those sorts. The statements within a \ntransaction must be executed on the slave in the same order as they were \non the master, and similarly, transactions must also go in the same \norder. As for aliasing the tables, that is just a remnant from previous \nversions of the code.\n\nThanks\n\nDavid\n\nKeith Worthington wrote:\n> I'm still a newbie and it may not affect performance but why are you \n> aliasing the tables? Can you not simply use\n> \n> FOR transaction IN SELECT trans_id\n> FROM pending_trans\n> WHERE fetched = false\n> ORDER BY trans_id\n> LIMIT 50\n> \n> and\n> \n> FOR statement IN SELECT id,\n> transaction_id,\n> table_name,\n> op,\n> data\n> FROM dbmirror.pending_statement\n> WHERE pending_statement.transaction_id =\n> transaction.trans_id\n> ORDER BY pending_statement.id\n> \n> I am pretty sure that the ORDER BY is slowing down both of these \n> queries. 
Since you are going to go through the whole table eventually \n> do you really need to sort the data at this point?\n> \n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n\n", "msg_date": "Tue, 28 Jun 2005 15:33:44 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "Merge the two select statements like this and try,\n\nSELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op, s.data\n FROM pending_trans AS t join dbmirror.pending_statement AS s\n on (s.transaction_id=t.id)\nWHERE t.fetched = false order by t.trans_id,s.id limit 100;\n\n If the above query works in the way you want, then you can also do the\nupdate\nusing the same.\n\nwith regards,\nS.Gnanavel\n\n\n> -----Original Message-----\n> From: [email protected]\n> Sent: Tue, 28 Jun 2005 14:37:34 +1200\n> To: [email protected]\n> Subject: [PERFORM] How can I speed up this function?\n>\n> We have the following function in our home grown mirroring package, but\n> it isn't running as fast as we would like. We need to select statements\n> from the pending_statement table, and we want to select all the\n> statements for a single transaction (pending_trans) in one go (that is,\n> we either select all the statements for a transaction, or none of them).\n> We select as many blocks of statements as it takes to top the 100\n> statement limit (so if the last transaction we pull has enough\n> statements to put our count at 110, we'll still take it, but then we're\n> done).\n>\n> Here is our function:\n>\n> CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n> RETURNS SETOF dbmirror.pending_statement AS\n> $BODY$\n>\n> DECLARE\n> count INT4;\n> transaction RECORD;\n> statement dbmirror.pending_statement;\n> BEGIN\n> count := 0;\n>\n> FOR transaction IN SELECT t.trans_id as ID\n> FROM pending_trans AS t WHERE fetched = false\n> ORDER BY trans_id LIMIT 50\n> LOOP\n> update pending_trans set fetched = true where trans_id =\n> transaction.id;\n>\n> \t FOR statement IN SELECT s.id, s.transaction_id, s.table_name, s.op,\n> s.data\n> FROM dbmirror.pending_statement AS s\n> WHERE s.transaction_id = transaction.id\n> ORDER BY s.id ASC\n> LOOP\n> count := count + 1;\n>\n> RETURN NEXT statement;\n> END LOOP;\n>\n> IF count > 100 THEN\n> EXIT;\n> END IF;\n> END LOOP;\n>\n> RETURN;\n> END;$BODY$\n> LANGUAGE 'plpgsql' VOLATILE;\n>\n> Table Schemas:\n>\n> CREATE TABLE dbmirror.pending_trans\n> (\n> trans_id oid NOT NULL,\n> fetched bool DEFAULT false,\n> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n> )\n> WITHOUT OIDS;\n>\n> CREATE TABLE dbmirror.pending_statement\n> (\n> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n> transaction_id oid NOT NULL,\n> table_name text NOT NULL,\n> op char NOT NULL,\n> data text NOT NULL,\n> CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n> )\n> WITHOUT OIDS;\n>\n> CREATE UNIQUE INDEX idx_stmt_tran_id_id\n> ON dbmirror.pending_statement\n> USING btree\n> (transaction_id, id);\n>\n> Postgres 8.0.1 on Linux.\n>\n> Any Help would be greatly appreciated.\n>\n> Regards\n>\n> --\n> David Mitchell\n> Software Engineer\n> Telogis\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend", "msg_date": "Mon, 27 Jun 2005 20:06:11 -0800", "msg_from": "Gnanavel Shanmugam <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" 
}, { "msg_contents": "Hi Gnanavel,\n\nThanks, but that will only return at most 100 statements. If there is a \ntransaction with 110 statements then this will not return all the \nstatements for that transaction. We need to make sure that the function \nreturns all the statements for a transaction.\n\nCheers\n\nDavid\n\nGnanavel Shanmugam wrote:\n> Merge the two select statements like this and try,\n> \n> SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op, s.data\n> FROM pending_trans AS t join dbmirror.pending_statement AS s\n> on (s.transaction_id=t.id)\n> WHERE t.fetched = false order by t.trans_id,s.id limit 100;\n> \n> If the above query works in the way you want, then you can also do the\n> update\n> using the same.\n> \n> with regards,\n> S.Gnanavel\n> \n> \n> \n>>-----Original Message-----\n>>From: [email protected]\n>>Sent: Tue, 28 Jun 2005 14:37:34 +1200\n>>To: [email protected]\n>>Subject: [PERFORM] How can I speed up this function?\n>>\n>>We have the following function in our home grown mirroring package, but\n>>it isn't running as fast as we would like. We need to select statements\n>>from the pending_statement table, and we want to select all the\n>>statements for a single transaction (pending_trans) in one go (that is,\n>>we either select all the statements for a transaction, or none of them).\n>>We select as many blocks of statements as it takes to top the 100\n>>statement limit (so if the last transaction we pull has enough\n>>statements to put our count at 110, we'll still take it, but then we're\n>>done).\n>>\n>>Here is our function:\n>>\n>>CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n>> RETURNS SETOF dbmirror.pending_statement AS\n>>$BODY$\n>>\n>>DECLARE\n>> count INT4;\n>> transaction RECORD;\n>> statement dbmirror.pending_statement;\n>> BEGIN\n>> count := 0;\n>>\n>> FOR transaction IN SELECT t.trans_id as ID\n>> FROM pending_trans AS t WHERE fetched = false\n>> ORDER BY trans_id LIMIT 50\n>> LOOP\n>> update pending_trans set fetched = true where trans_id =\n>>transaction.id;\n>>\n>>\t FOR statement IN SELECT s.id, s.transaction_id, s.table_name, s.op,\n>>s.data\n>> FROM dbmirror.pending_statement AS s\n>> WHERE s.transaction_id = transaction.id\n>> ORDER BY s.id ASC\n>> LOOP\n>> count := count + 1;\n>>\n>> RETURN NEXT statement;\n>> END LOOP;\n>>\n>> IF count > 100 THEN\n>> EXIT;\n>> END IF;\n>> END LOOP;\n>>\n>> RETURN;\n>> END;$BODY$\n>> LANGUAGE 'plpgsql' VOLATILE;\n>>\n>>Table Schemas:\n>>\n>>CREATE TABLE dbmirror.pending_trans\n>>(\n>> trans_id oid NOT NULL,\n>> fetched bool DEFAULT false,\n>> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n>>)\n>>WITHOUT OIDS;\n>>\n>>CREATE TABLE dbmirror.pending_statement\n>>(\n>> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n>> transaction_id oid NOT NULL,\n>> table_name text NOT NULL,\n>> op char NOT NULL,\n>> data text NOT NULL,\n>> CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n>>)\n>>WITHOUT OIDS;\n>>\n>>CREATE UNIQUE INDEX idx_stmt_tran_id_id\n>> ON dbmirror.pending_statement\n>> USING btree\n>> (transaction_id, id);\n>>\n>>Postgres 8.0.1 on Linux.\n>>\n>>Any Help would be greatly appreciated.\n>>\n>>Regards\n>>\n>>--\n>>David Mitchell\n>>Software Engineer\n>>Telogis\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 8: explain analyze is your friend\n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Tue, 28 Jun 2005 16:29:32 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, 
"msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "But in the function you are exiting the loop when the count hits 100. If you\ndo not want to limit the statements then remove the limit clause from the\nquery I've written.\n\nwith regards,\nS.Gnanavel\n\n\n> -----Original Message-----\n> From: [email protected]\n> Sent: Tue, 28 Jun 2005 16:29:32 +1200\n> To: [email protected]\n> Subject: Re: [PERFORM] How can I speed up this function?\n>\n> Hi Gnanavel,\n>\n> Thanks, but that will only return at most 100 statements. If there is a\n> transaction with 110 statements then this will not return all the\n> statements for that transaction. We need to make sure that the function\n> returns all the statements for a transaction.\n>\n> Cheers\n>\n> David\n>\n> Gnanavel Shanmugam wrote:\n> > Merge the two select statements like this and try,\n> >\n> > SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op,\n> s.data\n> > FROM pending_trans AS t join dbmirror.pending_statement AS s\n> > on (s.transaction_id=t.id)\n> > WHERE t.fetched = false order by t.trans_id,s.id limit 100;\n> >\n> > If the above query works in the way you want, then you can also do the\n> > update\n> > using the same.\n> >\n> > with regards,\n> > S.Gnanavel\n> >\n> >\n> >\n> >>-----Original Message-----\n> >>From: [email protected]\n> >>Sent: Tue, 28 Jun 2005 14:37:34 +1200\n> >>To: [email protected]\n> >>Subject: [PERFORM] How can I speed up this function?\n> >>\n> >>We have the following function in our home grown mirroring package, but\n> >>it isn't running as fast as we would like. We need to select statements\n> >>from the pending_statement table, and we want to select all the\n> >>statements for a single transaction (pending_trans) in one go (that is,\n> >>we either select all the statements for a transaction, or none of\n> them).\n> >>We select as many blocks of statements as it takes to top the 100\n> >>statement limit (so if the last transaction we pull has enough\n> >>statements to put our count at 110, we'll still take it, but then we're\n> >>done).\n> >>\n> >>Here is our function:\n> >>\n> >>CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n> >> RETURNS SETOF dbmirror.pending_statement AS\n> >>$BODY$\n> >>\n> >>DECLARE\n> >> count INT4;\n> >> transaction RECORD;\n> >> statement dbmirror.pending_statement;\n> >> BEGIN\n> >> count := 0;\n> >>\n> >> FOR transaction IN SELECT t.trans_id as ID\n> >> FROM pending_trans AS t WHERE fetched = false\n> >> ORDER BY trans_id LIMIT 50\n> >> LOOP\n> >> update pending_trans set fetched = true where trans_id =\n> >>transaction.id;\n> >>\n> >>\t FOR statement IN SELECT s.id, s.transaction_id, s.table_name,\n> s.op,\n> >>s.data\n> >> FROM dbmirror.pending_statement AS s\n> >> WHERE s.transaction_id = transaction.id\n> >> ORDER BY s.id ASC\n> >> LOOP\n> >> count := count + 1;\n> >>\n> >> RETURN NEXT statement;\n> >> END LOOP;\n> >>\n> >> IF count > 100 THEN\n> >> EXIT;\n> >> END IF;\n> >> END LOOP;\n> >>\n> >> RETURN;\n> >> END;$BODY$\n> >> LANGUAGE 'plpgsql' VOLATILE;\n> >>\n> >>Table Schemas:\n> >>\n> >>CREATE TABLE dbmirror.pending_trans\n> >>(\n> >> trans_id oid NOT NULL,\n> >> fetched bool DEFAULT false,\n> >> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n> >>)\n> >>WITHOUT OIDS;\n> >>\n> >>CREATE TABLE dbmirror.pending_statement\n> >>(\n> >> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n> >> transaction_id oid NOT NULL,\n> >> table_name text NOT NULL,\n> >> op char NOT NULL,\n> >> data text NOT NULL,\n> >> 
CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n> >>)\n> >>WITHOUT OIDS;\n> >>\n> >>CREATE UNIQUE INDEX idx_stmt_tran_id_id\n> >> ON dbmirror.pending_statement\n> >> USING btree\n> >> (transaction_id, id);\n> >>\n> >>Postgres 8.0.1 on Linux.\n> >>\n> >>Any Help would be greatly appreciated.\n> >>\n> >>Regards\n> >>\n> >>--\n> >>David Mitchell\n> >>Software Engineer\n> >>Telogis\n> >>\n> >>---------------------------(end of\n> broadcast)---------------------------\n> >>TIP 8: explain analyze is your friend\n>\n>\n> --\n> David Mitchell\n> Software Engineer\n> Telogis", "msg_date": "Mon, 27 Jun 2005 20:42:28 -0800", "msg_from": "Gnanavel Shanmugam <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "The function I have exits the loop when the count hits 100 yes, but the \ninner loop can push the count up as high as necessary to select all the \nstatements for a transaction, so by the time it exits, the count could \nbe much higher. I do want to limit the statements, but I want to get \nenough for complete transactions.\n\nDavid\n\nGnanavel Shanmugam wrote:\n> But in the function you are exiting the loop when the count hits 100. If you\n> do not want to limit the statements then remove the limit clause from the\n> query I've written.\n> \n> with regards,\n> S.Gnanavel\n> \n> \n> \n>>-----Original Message-----\n>>From: [email protected]\n>>Sent: Tue, 28 Jun 2005 16:29:32 +1200\n>>To: [email protected]\n>>Subject: Re: [PERFORM] How can I speed up this function?\n>>\n>>Hi Gnanavel,\n>>\n>>Thanks, but that will only return at most 100 statements. If there is a\n>>transaction with 110 statements then this will not return all the\n>>statements for that transaction. We need to make sure that the function\n>>returns all the statements for a transaction.\n>>\n>>Cheers\n>>\n>>David\n>>\n>>Gnanavel Shanmugam wrote:\n>>\n>>>Merge the two select statements like this and try,\n>>>\n>>>SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op,\n>>\n>>s.data\n>>\n>>> FROM pending_trans AS t join dbmirror.pending_statement AS s\n>>> on (s.transaction_id=t.id)\n>>>WHERE t.fetched = false order by t.trans_id,s.id limit 100;\n>>>\n>>> If the above query works in the way you want, then you can also do the\n>>>update\n>>>using the same.\n>>>\n>>>with regards,\n>>>S.Gnanavel\n>>>\n>>>\n>>>\n>>>\n>>>>-----Original Message-----\n>>>>From: [email protected]\n>>>>Sent: Tue, 28 Jun 2005 14:37:34 +1200\n>>>>To: [email protected]\n>>>>Subject: [PERFORM] How can I speed up this function?\n>>>>\n>>>>We have the following function in our home grown mirroring package, but\n>>>>it isn't running as fast as we would like. 
We need to select statements\n>>>\n>>>>from the pending_statement table, and we want to select all the\n>>>\n>>>>statements for a single transaction (pending_trans) in one go (that is,\n>>>>we either select all the statements for a transaction, or none of\n>>\n>>them).\n>>\n>>>>We select as many blocks of statements as it takes to top the 100\n>>>>statement limit (so if the last transaction we pull has enough\n>>>>statements to put our count at 110, we'll still take it, but then we're\n>>>>done).\n>>>>\n>>>>Here is our function:\n>>>>\n>>>>CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n>>>> RETURNS SETOF dbmirror.pending_statement AS\n>>>>$BODY$\n>>>>\n>>>>DECLARE\n>>>> count INT4;\n>>>> transaction RECORD;\n>>>> statement dbmirror.pending_statement;\n>>>> BEGIN\n>>>> count := 0;\n>>>>\n>>>> FOR transaction IN SELECT t.trans_id as ID\n>>>> FROM pending_trans AS t WHERE fetched = false\n>>>> ORDER BY trans_id LIMIT 50\n>>>> LOOP\n>>>> update pending_trans set fetched = true where trans_id =\n>>>>transaction.id;\n>>>>\n>>>>\t FOR statement IN SELECT s.id, s.transaction_id, s.table_name,\n>>\n>>s.op,\n>>\n>>>>s.data\n>>>> FROM dbmirror.pending_statement AS s\n>>>> WHERE s.transaction_id = transaction.id\n>>>> ORDER BY s.id ASC\n>>>> LOOP\n>>>> count := count + 1;\n>>>>\n>>>> RETURN NEXT statement;\n>>>> END LOOP;\n>>>>\n>>>> IF count > 100 THEN\n>>>> EXIT;\n>>>> END IF;\n>>>> END LOOP;\n>>>>\n>>>> RETURN;\n>>>> END;$BODY$\n>>>> LANGUAGE 'plpgsql' VOLATILE;\n>>>>\n>>>>Table Schemas:\n>>>>\n>>>>CREATE TABLE dbmirror.pending_trans\n>>>>(\n>>>> trans_id oid NOT NULL,\n>>>> fetched bool DEFAULT false,\n>>>> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n>>>>)\n>>>>WITHOUT OIDS;\n>>>>\n>>>>CREATE TABLE dbmirror.pending_statement\n>>>>(\n>>>> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n>>>> transaction_id oid NOT NULL,\n>>>> table_name text NOT NULL,\n>>>> op char NOT NULL,\n>>>> data text NOT NULL,\n>>>> CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n>>>>)\n>>>>WITHOUT OIDS;\n>>>>\n>>>>CREATE UNIQUE INDEX idx_stmt_tran_id_id\n>>>> ON dbmirror.pending_statement\n>>>> USING btree\n>>>> (transaction_id, id);\n>>>>\n>>>>Postgres 8.0.1 on Linux.\n>>>>\n>>>>Any Help would be greatly appreciated.\n>>>>\n>>>>Regards\n>>>>\n>>>>--\n>>>>David Mitchell\n>>>>Software Engineer\n>>>>Telogis\n>>>>\n>>>>---------------------------(end of\n>>\n>>broadcast)---------------------------\n>>\n>>>>TIP 8: explain analyze is your friend\n>>\n>>\n>>--\n>>David Mitchell\n>>Software Engineer\n>>Telogis\n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Tue, 28 Jun 2005 16:55:00 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I speed up this function?" 
}, { "msg_contents": "I think the following logic will do want you expect\n FOR statement IN <previous_query> LOOP\n -- update statement goes here --\n if count > 100 and temp <> transaction_id then\n // reaches here only if the transaction is complete\n return;\n else\n count:= count+1;\n temp:=transaction_id;\n end if;\n end loop;\n\nwith regards,\nS.Gnanavel\n\n\n> -----Original Message-----\n> From: [email protected]\n> Sent: Tue, 28 Jun 2005 16:55:00 +1200\n> To: [email protected]\n> Subject: Re: [PERFORM] How can I speed up this function?\n>\n> The function I have exits the loop when the count hits 100 yes, but the\n> inner loop can push the count up as high as necessary to select all the\n> statements for a transaction, so by the time it exits, the count could\n> be much higher. I do want to limit the statements, but I want to get\n> enough for complete transactions.\n>\n> David\n>\n> Gnanavel Shanmugam wrote:\n> > But in the function you are exiting the loop when the count hits 100.\n> If you\n> > do not want to limit the statements then remove the limit clause from\n> the\n> > query I've written.\n> >\n> > with regards,\n> > S.Gnanavel\n> >\n> >\n> >\n> >>-----Original Message-----\n> >>From: [email protected]\n> >>Sent: Tue, 28 Jun 2005 16:29:32 +1200\n> >>To: [email protected]\n> >>Subject: Re: [PERFORM] How can I speed up this function?\n> >>\n> >>Hi Gnanavel,\n> >>\n> >>Thanks, but that will only return at most 100 statements. If there is a\n> >>transaction with 110 statements then this will not return all the\n> >>statements for that transaction. We need to make sure that the function\n> >>returns all the statements for a transaction.\n> >>\n> >>Cheers\n> >>\n> >>David\n> >>\n> >>Gnanavel Shanmugam wrote:\n> >>\n> >>>Merge the two select statements like this and try,\n> >>>\n> >>>SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op,\n> >>\n> >>s.data\n> >>\n> >>> FROM pending_trans AS t join dbmirror.pending_statement AS s\n> >>> on (s.transaction_id=t.id)\n> >>>WHERE t.fetched = false order by t.trans_id,s.id limit 100;\n> >>>\n> >>> If the above query works in the way you want, then you can also do\n> the\n> >>>update\n> >>>using the same.\n> >>>\n> >>>with regards,\n> >>>S.Gnanavel\n> >>>\n> >>>\n> >>>\n> >>>\n> >>>>-----Original Message-----\n> >>>>From: [email protected]\n> >>>>Sent: Tue, 28 Jun 2005 14:37:34 +1200\n> >>>>To: [email protected]\n> >>>>Subject: [PERFORM] How can I speed up this function?\n> >>>>\n> >>>>We have the following function in our home grown mirroring package,\n> but\n> >>>>it isn't running as fast as we would like. 
We need to select\n> statements\n> >>>\n> >>>>from the pending_statement table, and we want to select all the\n> >>>\n> >>>>statements for a single transaction (pending_trans) in one go (that\n> is,\n> >>>>we either select all the statements for a transaction, or none of\n> >>\n> >>them).\n> >>\n> >>>>We select as many blocks of statements as it takes to top the 100\n> >>>>statement limit (so if the last transaction we pull has enough\n> >>>>statements to put our count at 110, we'll still take it, but then\n> we're\n> >>>>done).\n> >>>>\n> >>>>Here is our function:\n> >>>>\n> >>>>CREATE OR REPLACE FUNCTION dbmirror.get_pending()\n> >>>> RETURNS SETOF dbmirror.pending_statement AS\n> >>>>$BODY$\n> >>>>\n> >>>>DECLARE\n> >>>> count INT4;\n> >>>> transaction RECORD;\n> >>>> statement dbmirror.pending_statement;\n> >>>> BEGIN\n> >>>> count := 0;\n> >>>>\n> >>>> FOR transaction IN SELECT t.trans_id as ID\n> >>>> FROM pending_trans AS t WHERE fetched = false\n> >>>> ORDER BY trans_id LIMIT 50\n> >>>> LOOP\n> >>>> update pending_trans set fetched = true where trans_id =\n> >>>>transaction.id;\n> >>>>\n> >>>>\t FOR statement IN SELECT s.id, s.transaction_id, s.table_name,\n> >>\n> >>s.op,\n> >>\n> >>>>s.data\n> >>>> FROM dbmirror.pending_statement AS s\n> >>>> WHERE s.transaction_id = transaction.id\n> >>>> ORDER BY s.id ASC\n> >>>> LOOP\n> >>>> count := count + 1;\n> >>>>\n> >>>> RETURN NEXT statement;\n> >>>> END LOOP;\n> >>>>\n> >>>> IF count > 100 THEN\n> >>>> EXIT;\n> >>>> END IF;\n> >>>> END LOOP;\n> >>>>\n> >>>> RETURN;\n> >>>> END;$BODY$\n> >>>> LANGUAGE 'plpgsql' VOLATILE;\n> >>>>\n> >>>>Table Schemas:\n> >>>>\n> >>>>CREATE TABLE dbmirror.pending_trans\n> >>>>(\n> >>>> trans_id oid NOT NULL,\n> >>>> fetched bool DEFAULT false,\n> >>>> CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)\n> >>>>)\n> >>>>WITHOUT OIDS;\n> >>>>\n> >>>>CREATE TABLE dbmirror.pending_statement\n> >>>>(\n> >>>> id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),\n> >>>> transaction_id oid NOT NULL,\n> >>>> table_name text NOT NULL,\n> >>>> op char NOT NULL,\n> >>>> data text NOT NULL,\n> >>>> CONSTRAINT pending_statement_pkey PRIMARY KEY (id)\n> >>>>)\n> >>>>WITHOUT OIDS;\n> >>>>\n> >>>>CREATE UNIQUE INDEX idx_stmt_tran_id_id\n> >>>> ON dbmirror.pending_statement\n> >>>> USING btree\n> >>>> (transaction_id, id);\n> >>>>\n> >>>>Postgres 8.0.1 on Linux.\n> >>>>\n> >>>>Any Help would be greatly appreciated.\n> >>>>\n> >>>>Regards\n> >>>>\n> >>>>--\n> >>>>David Mitchell\n> >>>>Software Engineer\n> >>>>Telogis\n> >>>>\n> >>>>---------------------------(end of\n> >>\n> >>broadcast)---------------------------\n> >>\n> >>>>TIP 8: explain analyze is your friend\n> >>\n> >>\n> >>--\n> >>David Mitchell\n> >>Software Engineer\n> >>Telogis\n>\n>\n> --\n> David Mitchell\n> Software Engineer\n> Telogis", "msg_date": "Mon, 27 Jun 2005 21:27:14 -0800", "msg_from": "Gnanavel Shanmugam <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" }, { "msg_contents": "On Tue, 28 Jun 2005 14:37:34 +1200, David Mitchell <[email protected]> wrote:\n> FOR transaction IN SELECT t.trans_id as ID\n> FROM pending_trans AS t WHERE fetched = false\n> ORDER BY trans_id LIMIT 50\n\nWhat the the average number of statements per transaction? if avg > 2\nthen you could save a small amount of time by lowering the limit.\n\nYou might also save some time by using FOR UPDATE on the select since\nthe next thing you're going to do is update the value. 
\n\n\n> \t FOR statement IN SELECT s.id, s.transaction_id, s.table_name, s.op, \n> s.data\n> FROM dbmirror.pending_statement AS s\n> WHERE s.transaction_id = transaction.id\n> ORDER BY s.id ASC\n\nHave you explained this to make sure it's using the created index? You\nmight need to order by both transaction_id, id.\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Tue, 28 Jun 2005 18:36:35 +1000", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I speed up this function?" } ]
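Pulling the thread's suggestions together (Klint's FOR UPDATE on the rows being claimed, and ordering on both columns so the planner can match the (transaction_id, id) index directly), a sketch of the same function with unchanged semantics: whole transactions only, stop once the count passes 100. This is untested against the real dbmirror setup, and note that RETURN NEXT in plpgsql materializes the entire result set before the caller sees any rows, which may dominate the cost whatever plan is chosen.

CREATE OR REPLACE FUNCTION dbmirror.get_pending_sketch()
  RETURNS SETOF dbmirror.pending_statement AS '
DECLARE
    trans     RECORD;
    stmt      dbmirror.pending_statement;
    stmt_cnt  INT4 := 0;
BEGIN
    FOR trans IN SELECT trans_id
                   FROM dbmirror.pending_trans
                  WHERE fetched = false
                  ORDER BY trans_id
                  LIMIT 50
                  FOR UPDATE            -- lock the transactions being claimed
    LOOP
        UPDATE dbmirror.pending_trans
           SET fetched = true
         WHERE trans_id = trans.trans_id;

        FOR stmt IN SELECT id, transaction_id, table_name, op, data
                      FROM dbmirror.pending_statement
                     WHERE transaction_id = trans.trans_id
                     ORDER BY transaction_id, id   -- matches idx_stmt_tran_id_id
        LOOP
            stmt_cnt := stmt_cnt + 1;
            RETURN NEXT stmt;
        END LOOP;

        EXIT WHEN stmt_cnt > 100;       -- finish the current transaction, then stop
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql' VOLATILE;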
[ { "msg_contents": " \n \n\n\n\t\t\n____________________________________________________ \nYahoo! Sports \nRekindle the Rivalries. Sign up for Fantasy Football \nhttp://football.fantasysports.yahoo.com\n", "msg_date": "Tue, 28 Jun 2005 07:18:05 -0700 (PDT)", "msg_from": "Erik Westland <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "I need a fast way (sql only preferred) to solve the following problem:\n\nI need the smallest integer that is greater than zero that is not in the\ncolumn of a table. In other words, if an 'id' column has values\n1,2,3,4,6 and 7, I need a query that returns the value of 5.\n\nI've already worked out a query using generate_series (not scalable) and\npl/pgsql. An SQL only solution would be preferred, am I missing\nsomething obvious?\n\nMerlin\n", "msg_date": "Tue, 28 Jun 2005 10:21:16 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n\n>I need a fast way (sql only preferred) to solve the following problem:\n>\n>I need the smallest integer that is greater than zero that is not in the\n>column of a table. In other words, if an 'id' column has values\n>1,2,3,4,6 and 7, I need a query that returns the value of 5.\n>\n>I've already worked out a query using generate_series (not scalable) and\n>pl/pgsql. An SQL only solution would be preferred, am I missing\n>something obvious?\n>\n>Merlin\n>\n>\n\nNot so bad. Try something like this:\n\nSELECT min(id+1) as id_new FROM table\n WHERE (id+1) NOT IN (SELECT id FROM table);\n\nNow, this requires probably a sequential scan, but I'm not sure how you\ncan get around that.\nMaybe if you got trickier and did some ordering and limits. The above\nseems to give the right answer, though.\n\nI don't know how big you want to scale to.\n\nYou might try something like:\nSELECT id+1 as id_new FROM t\n WHERE (id+1) NOT IN (SELECT id FROM t)\n ORDER BY id LIMIT 1;\n\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 10:07:50 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "On Tue, Jun 28, 2005 at 10:21:16 -0400,\n Merlin Moncure <[email protected]> wrote:\n> I need a fast way (sql only preferred) to solve the following problem:\n> \n> I need the smallest integer that is greater than zero that is not in the\n> column of a table. In other words, if an 'id' column has values\n> 1,2,3,4,6 and 7, I need a query that returns the value of 5.\n> \n> I've already worked out a query using generate_series (not scalable) and\n> pl/pgsql. An SQL only solution would be preferred, am I missing\n> something obvious?\n\nI would expect that using generate series from the 1 to the max (using\norder by and limit 1 to avoid extra sequential scans) and subtracting\nout the current list using except and then taking the minium value\nwould be the best way to do this if the list is pretty dense and\nyou don't want to change the structure.\n\nIf it is sparse than you can do a special check for 1 and if that\nis present find the first row whose successor is not in the table.\nThat shouldn't be too slow.\n\nIf you are willing to change the structure you might keep one row for\neach number and use a flag to mark which ones are empty. 
If there are\nrelatively few empty rows at any time, then you can create a partial\nindex on the row number for only empty rows.\n", "msg_date": "Tue, 28 Jun 2005 10:12:46 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n\n> I need a fast way (sql only preferred) to solve the following problem:\n> I need the smallest integer that is greater than zero that is not in the\n> column of a table.\n> \n> I've already worked out a query using generate_series (not scalable) and\n> pl/pgsql. An SQL only solution would be preferred, am I missing\n> something obvious?\n\nProbably not, but I thought about this \"brute-force\" approach... :-)\nThis should work well provided that:\n\n- you have a finite number of integers. Your column should have a biggest\n integer value with a reasonable maximum like 100,000 or 1,000,000.\n #define YOUR_MAX 99999\n\n- you can accept that query execution time depends on smallest integer found.\n The bigger the found integer, the slower execution you get.\n\nOk, so:\n\nCreate a relation \"integers\" (or whatever) with every single integer from 1 to \nYOUR_MAX:\n\n CREATE TABLE integers (id integer primary key);\n INSERT INTO integers (id) VALUES (1);\n INSERT INTO integers (id) VALUES (2);\n ...\n INSERT INTO integers (id) VALUES (YOUR_MAX);\n\nCreate your relation:\n\n CREATE TABLE merlin (id integer primary key);\n <and fill it with values>\n\nQuery is simple now:\n\n SELECT a.id FROM integers a\n LEFT JOIN merlin b ON a.id=b.id\n WHERE b.id IS NULL\n ORDER BY a.id LIMIT 1;\n\nExecution times with 100k tuples in \"integers\" and\n99,999 tuples in \"merlin\":\n\n >\\timing\n Timing is on.\n >select i.id from integers i left join merlin s on i.id=s.id where s.id is \nnull order by i.id limit 1;\n 99999\n\n Time: 233.618 ms\n >insert into merlin (id) values (99999);\n INSERT 86266614 1\n Time: 0.579 ms\n >delete from merlin where id=241;\n DELETE 1\n Time: 0.726 ms\n >select i.id from integers i left join merlin s on i.id=s.id where s.id is \nnull order by i.id limit 1;\n 241\n\n Time: 1.336 ms\n >\n\n-- \nCosimo\n\n", "msg_date": "Tue, 28 Jun 2005 17:20:13 +0200", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "John A Meinel wrote:\n\n> Merlin Moncure wrote:\n>\n>> I need a fast way (sql only preferred) to solve the following problem:\n>>\n>> I need the smallest integer that is greater than zero that is not in the\n>> column of a table. In other words, if an 'id' column has values\n>> 1,2,3,4,6 and 7, I need a query that returns the value of 5.\n>>\n>> I've already worked out a query using generate_series (not scalable) and\n>> pl/pgsql. An SQL only solution would be preferred, am I missing\n>> something obvious?\n>>\n>> Merlin\n>>\n>>\n>\n> Not so bad. Try something like this:\n>\n> SELECT min(id+1) as id_new FROM table\n> WHERE (id+1) NOT IN (SELECT id FROM table);\n>\n> Now, this requires probably a sequential scan, but I'm not sure how you\n> can get around that.\n> Maybe if you got trickier and did some ordering and limits. 
The above\n> seems to give the right answer, though.\n>\n> I don't know how big you want to scale to.\n>\n> You might try something like:\n> SELECT id+1 as id_new FROM t\n> WHERE (id+1) NOT IN (SELECT id FROM t)\n> ORDER BY id LIMIT 1;\n>\n> John\n> =:->\n\nWell, I was able to improve it to using appropriate index scans.\nHere is the query:\n\nSELECT t1.id+1 as id_new FROM id_test t1\n WHERE NOT EXISTS\n (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n ORDER BY t1.id LIMIT 1;\n\nI created a test table which has 90k randomly inserted rows. And this is\nwhat EXPLAIN ANALYZE says:\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..12.10 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n -> Index Scan using id_test_pkey on id_test t1 (cost=0.00..544423.27 rows=45000 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using id_test_pkey on id_test t2 (cost=0.00..6.01 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=15)\n Index Cond: (id = ($0 + 1))\n Total runtime: 0.000 ms\n(7 rows)\n\nThe only thing I have is a primary key index on id_test(id);\n\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 10:42:28 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n>I've already worked out a query using generate_series (not scalable) and\n>pl/pgsql. An SQL only solution would be preferred, am I missing\n>something obvious?\n\nI would be tempted to join the table to itself like:\n\n SELECT id+1\n FROM foo\n WHERE id > 0\n AND i NOT IN (SELECT id-1 FROM foo)\n LIMIT 1;\n\nSeems to work for me. Not sure if that's good enough for you, but\nit may help.\n\n Sam\n", "msg_date": "Tue, 28 Jun 2005 16:42:42 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "John A Meinel wrote:\n>SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n\nThis works well on sparse data, as it only requires as many index\naccess as it takes to find the first gap. The simpler \"NOT IN\"\nversion that everybody seems to have posted the first time round\nhas a reasonably constant (based on the number of rows, not gap\nposition) startup time but the actual time spent searching for the\ngap is much lower.\n\nI guess the version you use depends on how sparse you expect the\ndata to be. If you expect your query to have to search through\nmore than half the table before finding the gap then you're better\noff using the \"NOT IN\" version, otherwise the \"NOT EXISTS\" version\nis faster -- on my system anyway.\n\nHope that's interesting!\n\n\n Sam\n", "msg_date": "Tue, 28 Jun 2005 18:42:05 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "John A Meinel wrote:\n> John A Meinel wrote:\n>> Merlin Moncure wrote:\n>>\n>>> I need the smallest integer that is greater than zero that is not in the\n>>> column of a table. 
In other words, if an 'id' column has values\n>>> 1,2,3,4,6 and 7, I need a query that returns the value of 5.\n>>\n >> [...]\n >\n> Well, I was able to improve it to using appropriate index scans.\n> Here is the query:\n> \n> SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n\nI'm very interested in this \"tricky query\".\nSorry John, but if I populate the `id_test' relation\nwith only 4 tuples with id values (10, 11, 12, 13),\nthe result of this query is:\n\n cosimo=> create table id_test (id integer primary key);\n NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'id_test_pkey' \nfor table 'id_test'\n CREATE TABLE\n cosimo=> insert into id_test values (10); -- and 11, 12, 13, 14\n INSERT 7457570 1\n INSERT 7457571 1\n INSERT 7457572 1\n INSERT 7457573 1\n INSERT 7457574 1\n cosimo=> SELECT t1.id+1 as id_new FROM id_test t1 WHERE NOT EXISTS (SELECT \nt2.id FROM id_test t2 WHERE t2.id = t1.id+1) ORDER BY t1.id LIMIT 1;\n id_new\n --------\n 15\n (1 row)\n\nwhich if I understand correctly, is the wrong answer to the problem.\nAt this point, I'm starting to think I need some sleep... :-)\n\n-- \nCosimo\n\n", "msg_date": "Tue, 28 Jun 2005 21:33:03 +0200", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "John A Meinel schrieb:\n\n> John A Meinel wrote:\n>\n>>\n>\n> Well, I was able to improve it to using appropriate index scans.\n> Here is the query:\n>\n> SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n>\n> I created a test table which has 90k randomly inserted rows. And this is\n> what EXPLAIN ANALYZE says:\n>\n> \n\n\nAs Cosimo stated the result can be wrong. The result is always wrong\nwhen the id with value 1 does not exist.\n\n-- \nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB \n\nGet support, education and consulting for these technologies - uncomplicated and cheap.\n\n", "msg_date": "Tue, 28 Jun 2005 22:38:54 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" } ]
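Cosimo's and Sebastian's examples come down to the same blind spot: the NOT EXISTS form can only report a hole sitting just above an existing row, so it never returns 1 when 1 itself is free. A sketch that seeds the candidate list to cover that case, against the same id_test table; it gives the right answer for both test cases above, though it has less of an index-driven shape than the plain NOT EXISTS query:

SELECT c.candidate AS id_new
  FROM (SELECT 1 AS candidate          -- seed, so a missing 1 is reported
        UNION ALL
        SELECT id + 1 FROM id_test) AS c
 WHERE NOT EXISTS (SELECT 1 FROM id_test t WHERE t.id = c.candidate)
 ORDER BY c.candidate
 LIMIT 1;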
[ { "msg_contents": "I would suggest something like this, don't know how fast it is ... :\n\nSELECT (ID +1) as result FROM my_table\nWHERE (ID+1) NOT IN (SELECT ID FROM my_table) as tmp\nORDER BY result asc limit 1;\n\n\n\n\n\n\"Merlin Moncure\" <[email protected]>\nEnvoyé par : [email protected]\n28/06/2005 16:21\n\n \n Pour : <[email protected]>\n cc : \n Objet : [PERFORM] tricky query\n\n\nI need a fast way (sql only preferred) to solve the following problem:\n\nI need the smallest integer that is greater than zero that is not in the\ncolumn of a table. In other words, if an 'id' column has values\n1,2,3,4,6 and 7, I need a query that returns the value of 5.\n\nI've already worked out a query using generate_series (not scalable) and\npl/pgsql. An SQL only solution would be preferred, am I missing\nsomething obvious?\n\nMerlin\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "Tue, 28 Jun 2005 16:50:17 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "=?iso-8859-1?Q?R=E9f=2E_=3A__tricky_query?=" } ]
[ { "msg_contents": "Is it possible to tweak the size of a block that postgres tries to read\nwhen doing a sequential scan? It looks like it reads in fairly small\nblocks, and I'd expect a fairly significant boost in i/o performance\nwhen doing a large (multi-gig) sequential scan if larger blocks were\nused.\n\nMike Stone\n", "msg_date": "Tue, 28 Jun 2005 11:27:50 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": true, "msg_subject": "read block size" }, { "msg_contents": "Michael Stone wrote:\n\n> Is it possible to tweak the size of a block that postgres tries to read\n> when doing a sequential scan? It looks like it reads in fairly small\n> blocks, and I'd expect a fairly significant boost in i/o performance\n> when doing a large (multi-gig) sequential scan if larger blocks were\n> used.\n>\n> Mike Stone\n\n\nI believe postgres reads in one database page at a time, which defaults\nto 8k IIRC. If you want bigger, you could recompile and set the default\npage size to something else.\n\nThere has been discussion about changing the reading/writing code to be\nable to handle multiple pages at once, (using something like vread())\nbut I don't know that it has been implemented.\n\nAlso, this would hurt cases where you can terminate as sequential scan\nearly. And if the OS is doing it's job right, it will already do some\nread-ahead for you.\n\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 12:02:55 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: read block size" }, { "msg_contents": "On Tue, Jun 28, 2005 at 12:02:55PM -0500, John A Meinel wrote:\n>There has been discussion about changing the reading/writing code to be\n>able to handle multiple pages at once, (using something like vread())\n>but I don't know that it has been implemented.\n\nthat sounds promising\n\n>Also, this would hurt cases where you can terminate as sequential scan\n>early. \n\nIf you're doing a sequential scan of a 10G file in, say, 1M blocks I\ndon't think the performance difference of reading a couple of blocks\nunnecessarily is going to matter.\n\n>And if the OS is doing it's job right, it will already do some\n>read-ahead for you.\n\nThe app should have a much better idea of whether it's doing a\nsequential scan and won't be confused by concurrent activity. Even if\nthe OS does readahead perfectly, you'll still get a with with larger\nblocks by cutting down on the syscalls.\n\nMike Stone\n\n", "msg_date": "Tue, 28 Jun 2005 13:27:01 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": true, "msg_subject": "Re: read block size" } ]
[ { "msg_contents": "> Merlin Moncure wrote:\n> \n> > I need a fast way (sql only preferred) to solve the following\nproblem:\n> > I need the smallest integer that is greater than zero that is not in\nthe\n> > column of a table.\n> >\n> > I've already worked out a query using generate_series (not scalable)\nand\n> > pl/pgsql. An SQL only solution would be preferred, am I missing\n> > something obvious?\n \n> Probably not, but I thought about this \"brute-force\" approach... :-)\n> This should work well provided that:\n> \n> - you have a finite number of integers. Your column should have a\nbiggest\n> integer value with a reasonable maximum like 100,000 or 1,000,000.\n> #define YOUR_MAX 99999\n[...]\n:-) generate_series function does the same thing only a little bit\nfaster (although less portable).\n\ngenerate_series(m,n) returns set of integers from m to n with time\ncomplexity n - m. I use it for cases where I need to increment for\nsomething, for example:\n\nselect now()::date + d from generate_series(0,355) as d;\n\nreturns days from today until 355 days from now.\n\nMerlin\n", "msg_date": "Tue, 28 Jun 2005 11:30:34 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tricky query" } ]
[ { "msg_contents": "> Not so bad. Try something like this:\n> \n> SELECT min(id+1) as id_new FROM table\n> WHERE (id+1) NOT IN (SELECT id FROM table);\n> \n> Now, this requires probably a sequential scan, but I'm not sure how\nyou\n> can get around that.\n> Maybe if you got trickier and did some ordering and limits. The above\n> seems to give the right answer, though.\n\nit does, but it is still faster than generate_series(), which requires\nboth a seqscan and a materialization of the function.\n \n> I don't know how big you want to scale to.\n\nbig. :)\n\nmerlin\n", "msg_date": "Tue, 28 Jun 2005 11:39:53 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n\n>>Not so bad. Try something like this:\n>>\n>>SELECT min(id+1) as id_new FROM table\n>> WHERE (id+1) NOT IN (SELECT id FROM table);\n>>\n>>Now, this requires probably a sequential scan, but I'm not sure how\n>>\n>>\n>you\n>\n>\n>>can get around that.\n>>Maybe if you got trickier and did some ordering and limits. The above\n>>seems to give the right answer, though.\n>>\n>>\n>\n>it does, but it is still faster than generate_series(), which requires\n>both a seqscan and a materialization of the function.\n>\n>\n>\n>>I don't know how big you want to scale to.\n>>\n>>\n>\n>big. :)\n>\n>merlin\n>\n>\n\nSee my follow up post, which enables an index scan. On my system with\n90k rows, it takes no apparent time.\n(0.000ms)\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 10:43:25 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" } ]
[ { "msg_contents": "John Meinel wrote:\n> See my follow up post, which enables an index scan. On my system with\n> 90k rows, it takes no apparent time.\n> (0.000ms)\n> John\n> =:->\n\nConfirmed. Hats off to you, the above some really wicked querying.\nIIRC I posted the same question several months ago with no response and\nhad given up on it. I think your solution (smallest X1 not in X) is a\ngood candidate for general bits, so I'm passing this to varlena for\nreview :)\n\nSELECT t1.id+1 as id_new FROM id_test t1\n WHERE NOT EXISTS\n (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n ORDER BY t1.id LIMIT 1;\n\nMerlin\n", "msg_date": "Tue, 28 Jun 2005 12:02:09 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n\n>John Meinel wrote:\n>\n>\n>>See my follow up post, which enables an index scan. On my system with\n>>90k rows, it takes no apparent time.\n>>(0.000ms)\n>>John\n>>=:->\n>>\n>>\n>\n>Confirmed. Hats off to you, the above some really wicked querying.\n>IIRC I posted the same question several months ago with no response and\n>had given up on it. I think your solution (smallest X1 not in X) is a\n>good candidate for general bits, so I'm passing this to varlena for\n>review :)\n>\n>SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n>\n>Merlin\n>\n>\nJust be aware that as your table fills it's holes, this query gets\nslower and slower.\nI've been doing some testing. And it starts at 0.00 when the first entry\nis something like 3, but when you start getting to 16k it starts taking\nmore like 200 ms.\n\nSo it kind of depends how your table fills (and empties I suppose).\n\nThe earlier query was slower overall (since it took 460ms to read in the\nwhole table).\nI filled up the table such that 63713 is the first empty space, and it\ntakes 969ms to run.\nSo actually if your table is mostly full, the first form is better.\n\nBut if you are going to have 100k rows, with basically random\ndistribution of empties, then the NOT EXISTS works quite well.\n\nJust be aware of the tradeoff. I'm pretty sure the WHERE NOT EXISTS will\nalways use a looping structure, and go through the index in order.\n\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 11:25:56 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "On Tue, Jun 28, 2005 at 12:02:09 -0400,\n Merlin Moncure <[email protected]> wrote:\n> \n> Confirmed. Hats off to you, the above some really wicked querying.\n> IIRC I posted the same question several months ago with no response and\n> had given up on it. I think your solution (smallest X1 not in X) is a\n> good candidate for general bits, so I'm passing this to varlena for\n> review :)\n> \n> SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n\nYou need to rework this to check to see if row '1' is missing. The\nabove returns the start of the first gap after the first row that\nisn't missing.\n", "msg_date": "Tue, 28 Jun 2005 14:31:23 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" } ]
[ { "msg_contents": "> On Tue, Jun 28, 2005 at 12:02:09 -0400,\n> Merlin Moncure <[email protected]> wrote:\n> >\n> > Confirmed. Hats off to you, the above some really wicked querying.\n> > IIRC I posted the same question several months ago with no response\nand\n> > had given up on it. I think your solution (smallest X1 not in X) is\na\n> > good candidate for general bits, so I'm passing this to varlena for\n> > review :)\n> >\n> > SELECT t1.id+1 as id_new FROM id_test t1\n> > WHERE NOT EXISTS\n> > (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> > ORDER BY t1.id LIMIT 1;\n> \n> You need to rework this to check to see if row '1' is missing. The\n> above returns the start of the first gap after the first row that\n> isn't missing.\n\nCorrect. \n\nIn fact, I left out a detail in my original request in that I had a\nstarting value (easily supplied with where clause)...so what I was\nreally looking for was a query which started at a supplied value and\nlooped forwards looking for an empty slot. John's supplied query is a\ndrop in replacement for a plpgsql routine which does exactly this.\n\nThe main problem with the generate_series approach is that there is no\nconvenient way to determine a supplied upper bound. Also, in some\ncorner cases of my problem domain the performance was not good.\n\nMerlin\n\n\n", "msg_date": "Tue, 28 Jun 2005 15:36:29 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tricky query" }, { "msg_contents": "Merlin Moncure wrote:\n\n>>On Tue, Jun 28, 2005 at 12:02:09 -0400,\n>> Merlin Moncure <[email protected]> wrote:\n>>\n>>\n>>>Confirmed. Hats off to you, the above some really wicked querying.\n>>>IIRC I posted the same question several months ago with no response\n>>>\n>>>\n>and\n>\n>\n>>>had given up on it. I think your solution (smallest X1 not in X) is\n>>>\n>>>\n>a\n>\n>\n>>>good candidate for general bits, so I'm passing this to varlena for\n>>>review :)\n>>>\n>>>SELECT t1.id+1 as id_new FROM id_test t1\n>>> WHERE NOT EXISTS\n>>> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n>>> ORDER BY t1.id LIMIT 1;\n>>>\n>>>\n>>You need to rework this to check to see if row '1' is missing. The\n>>above returns the start of the first gap after the first row that\n>>isn't missing.\n>>\n>>\n>\n>Correct.\n>\n>In fact, I left out a detail in my original request in that I had a\n>starting value (easily supplied with where clause)...so what I was\n>really looking for was a query which started at a supplied value and\n>looped forwards looking for an empty slot. John's supplied query is a\n>drop in replacement for a plpgsql routine which does exactly this.\n>\n>The main problem with the generate_series approach is that there is no\n>convenient way to determine a supplied upper bound. 
Also, in some\n>corner cases of my problem domain the performance was not good.\n>\n>Merlin\n>\n>\nActually, if you already have a lower bound, then you can change it to:\n\nSELECT t1.id+1 as id_new FROM id_test t1\n WHERE t1.id > id_min\n\tAND NOT EXISTS\n (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n ORDER BY t1.id LIMIT 1;\n\nThis would actually really help performance if you have a large table\nand then empty entries start late.\n\nOn my system, where the first entry is 64k, doing where id > 60000\nspeeds it up back to 80ms instead of 1000ms.\nJohn\n=:->", "msg_date": "Tue, 28 Jun 2005 14:42:21 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" }, { "msg_contents": "On 6/28/05, John A Meinel <[email protected]> wrote:\n\n> Actually, if you already have a lower bound, then you can change it to:\n> \n> SELECT t1.id+1 as id_new FROM id_test t1\n> WHERE t1.id > id_min\n> AND NOT EXISTS\n> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)\n> ORDER BY t1.id LIMIT 1;\n> \n> This would actually really help performance if you have a large table\n> and then empty entries start late.\n\nYou can also boost performance by creating a functional index!\n\nCREATE UNIQUE INDEX id_test_id1_index ON id_test ((id+1));\n\n...and then joining two tables and filtering results. PostgreSQL (8.x)\nwill do Merge Full Join which will use both the indexes:\n\nSELECT t2.id+1 FROM id_test t1 FULL OUTER JOIN id_test t2 ON (t1.id =\nt2.id+1) WHERE t1.id IS NULL LIMIT 1;\n\n Limit (cost=0.00..1.52 rows=1 width=4)\n -> Merge Full Join (cost=0.00..1523122.73 rows=999974 width=4)\n Merge Cond: (\"outer\".id = (\"inner\".id + 1))\n Filter: (\"outer\".id IS NULL)\n -> Index Scan using id_test_pkey on id_test t1 \n(cost=0.00..18455.71 rows=999974 width=4)\n -> Index Scan using id_test_id1_index on id_test t2 \n(cost=0.00..1482167.60 rows=999974 width=4)\n(6 rows)\n\n...the only drawback is having to keep two indexes instead of just one.\nBut for large tables I think it is really worth it\n\nFor my test case, the times are (1-1000000 range with 26 missing\nrows):\nNOT EXISTS -- 670ms\nNOT IN -- 1800ms\nindexed FULL OUTER -- 267ms\n\n Regards,\n Dawid\n\nPS: Does it qualify for General Bits? ;-)))\n", "msg_date": "Wed, 29 Jun 2005 08:36:52 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tricky query" } ]
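Since Merlin mentions the query replaced a plpgsql routine that starts from a supplied value, one possible way to package John's bounded form is a plain SQL function; the function name is illustrative and the body assumes the same id_test table:

CREATE OR REPLACE FUNCTION next_free_id(integer) RETURNS integer AS '
    SELECT t1.id + 1
      FROM id_test t1
     WHERE t1.id >= $1
       AND NOT EXISTS (SELECT 1 FROM id_test t2 WHERE t2.id = t1.id + 1)
     ORDER BY t1.id
     LIMIT 1;
' LANGUAGE sql STABLE;

SELECT next_free_id(60000);  -- e.g. supply the known lower bound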
[ { "msg_contents": "Cosimo wrote:\n> I'm very interested in this \"tricky query\".\n> Sorry John, but if I populate the `id_test' relation\n> with only 4 tuples with id values (10, 11, 12, 13),\n> the result of this query is:\n> \n> cosimo=> create table id_test (id integer primary key);\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> 'id_test_pkey'\n> for table 'id_test'\n> CREATE TABLE\n> cosimo=> insert into id_test values (10); -- and 11, 12, 13, 14\n> INSERT 7457570 1\n> INSERT 7457571 1\n> INSERT 7457572 1\n> INSERT 7457573 1\n> INSERT 7457574 1\n> cosimo=> SELECT t1.id+1 as id_new FROM id_test t1 WHERE NOT EXISTS\n> (SELECT\n> t2.id FROM id_test t2 WHERE t2.id = t1.id+1) ORDER BY t1.id LIMIT 1;\n> id_new\n> --------\n> 15\n> (1 row)\n> \n> which if I understand correctly, is the wrong answer to the problem.\n> At this point, I'm starting to think I need some sleep... :-)\n\nCorrect, in that John's query returns the first empty slot above an\nexisting filled slot (correct behavior in my case). You could flip\nthings around a bit to get around thist tho.\n\nMerlin\n", "msg_date": "Tue, 28 Jun 2005 15:45:53 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tricky query" }, { "msg_contents": "Hola folks,\n\nI have a web statistics Pg database (user agent, urls, referrer, etc)\nthat is part of an online web survey system. All of the data derived\nfrom analyzing web server logs is stored in one large table with each\nrecord representing an analyzed webserver log entry.\n\nCurrently all reports are generated when the logs are being analyzed\nand before the data ever goes into the large table I mention above.\nWell, the time has come to build an interface that will allow a user\nto make ad-hoc queries against the stats and that is why I am emailing\nthe performance list.\n\nI need to allow the user to specify any fields and values in a query. \nFor example,\n\n\"I want to see a report about all users from Germany that have flash\ninstalled\" or\n\"I want to see a report about all users from Africa that are using FireFox 1\"\n\nI do not believe that storing all of the data in one big table is the\ncorrect way to go about this. So, I am asking for suggestions,\npointers and any kind of info that anyone can share on how to store\nthis data set in an optimized manner.\n\nAlso, I have created a prototype and done some testing using the\ncolossal table. Actually finding all of the rows that satisfy the\nquery is pretty fast and is not a problem. The bottleneck in the\nwhole process is actually counting each data point (how many times a\nurl was visited, or how many times a url referred the user to the\nwebsite). So more specifically I am wondering if there is way to store\nand retrieve the data such that it speeds up the counting of the\nstatistics.\n\nLastly, this will become an open source tool that is akin to urchin,\nawstats, etc. The difference is that this software is part of a suite\nof tools for doing online web surveys and it maps web stats to the\nsurvey respondent data. 
This can give web site managers a very clear\nview of what type of people come to the site and how those types use\nthe site.\n\nThanks in advance,\n\nexty\n", "msg_date": "Tue, 28 Jun 2005 13:39:05 -0700", "msg_from": "Billy extyeightysix <[email protected]>", "msg_from_op": false, "msg_subject": "optimized counting of web statistics" }, { "msg_contents": "> The bottleneck in the\n> whole process is actually counting each data point (how many times a\n> url was visited, or how many times a url referred the user to the\n> website). So more specifically I am wondering if there is way to store\n> and retrieve the data such that it speeds up the counting of the\n> statistics.\n\nThis is misleading, the counting is being done by perl. so what is\nhappening is that I am locating all of the rows via a cursor and then\ncalculating the stats using perl hashes. no counting is being in the\nDB. maybe it would be much faster to count in the db somehow?\n\n\nexty\n", "msg_date": "Tue, 28 Jun 2005 13:43:46 -0700", "msg_from": "Billy extyeightysix <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimized counting of web statistics" }, { "msg_contents": "On 6/28/05, Billy extyeightysix <[email protected]> wrote:\n> Hola folks,\n> \n> I have a web statistics Pg database (user agent, urls, referrer, etc)\n> that is part of an online web survey system. All of the data derived\n> from analyzing web server logs is stored in one large table with each\n> record representing an analyzed webserver log entry.\n> \n> Currently all reports are generated when the logs are being analyzed\n> and before the data ever goes into the large table I mention above.\n> Well, the time has come to build an interface that will allow a user\n> to make ad-hoc queries against the stats and that is why I am emailing\n> the performance list.\n\nLoad your data into a big table, then pre-process into additional\ntables that have data better organized for running your reports.\n\nFor example, you may want a table that shows a sum of all hits for\neach site, for each hour of the day. You may want an additional table\nthat shows the sum of all page views, or maybe sessions for each site\nfor each hour of the day.\n\nSo, if you manage a single site, each day you will add 24 new records\nto the sum table.\n\nYou may want the following fields:\nsite (string)\natime (timestamptz)\nhour_of_day (int)\nday_of_week (int)\ntotal_hits (int8)\n\nA record may look like this:\nsite | atime | hour_of_day | day_of_week | total_hits\n'www.yoursite.com' '2005-06-28 16:00:00 -0400' 18 2 350\n\nIndex all of the fields except total_hits (unless you want a report\nthat shows all hours where hits were greater than x or less than x).\n\nDoing:\nselect sum(total_hits) as total_hits from summary_table where atime\nbetween now() and (now() - '7 days'::interval);\nshould be pretty fast.\n\nYou can also normalize your data such as referrers, user agents, etc\nand create similar tables to the above.\n\nIn case you haven't guessed, I've already done this very thing.\n\nI do my batch processing daily using a python script I've written. I\nfound that trying to do it with pl/pgsql took more than 24 hours to\nprocess 24 hours worth of logs. I then used C# and in memory hash\ntables to drop the time to 2 hours, but I couldn't get mono installed\non some of my older servers. Python proved the fastest and I can\nprocess 24 hours worth of logs in about 15 minutes. 
Common reports run\nin < 1 sec and custom reports run in < 15 seconds (usually).\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 28 Jun 2005 16:55:44 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimized counting of web statistics" }, { "msg_contents": "On 6/29/05, Rudi Starcevic <[email protected]> wrote:\n> Hi,\n> \n> >I do my batch processing daily using a python script I've written. I\n> >found that trying to do it with pl/pgsql took more than 24 hours to\n> >process 24 hours worth of logs. I then used C# and in memory hash\n> >tables to drop the time to 2 hours, but I couldn't get mono installed\n> >on some of my older servers. Python proved the fastest and I can\n> >process 24 hours worth of logs in about 15 minutes. Common reports run\n> >in < 1 sec and custom reports run in < 15 seconds (usually).\n> >\n> >\n> \n> When you say you do your batch processing in a Python script do you mean\n> a you are using 'plpython' inside\n> PostgreSQL or using Python to execut select statements and crunch the\n> data 'outside' PostgreSQL?\n> \n> Your reply is very interesting.\n\nSorry for not making that clear... I don't use plpython, I'm using an\nexternal python program that makes database connections, creates\ndictionaries and does the normalization/batch processing in memory. It\nthen saves the changes to a textfile which is copied using psql.\n\nI've tried many things and while this is RAM intensive, it is by far\nthe fastest aproach I've found. I've also modified the python program\nto optionally use disk based dictionaries based on (I think) gdb. This\nsignfincantly increases the time to closer to 25 min. ;-) but drops\nthe memory usage by an order of magnitude.\n\nTo be fair to C# and .Net, I think that python and C# can do it\nequally fast, but between the time of creating the C# version and the\npython version I learned some new optimization techniques. I feel that\nboth are powerful languages. (To be fair to python, I can write the\ndictionary lookup code in 25% (aprox) fewer lines than similar hash\ntable code in C#. I could go on but I think I'm starting to get off\ntopic.)\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 28 Jun 2005 21:54:47 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimized counting of web statistics" }, { "msg_contents": "Hi,\n\n>I do my batch processing daily using a python script I've written. I\n>found that trying to do it with pl/pgsql took more than 24 hours to\n>process 24 hours worth of logs. I then used C# and in memory hash\n>tables to drop the time to 2 hours, but I couldn't get mono installed\n>on some of my older servers. Python proved the fastest and I can\n>process 24 hours worth of logs in about 15 minutes. Common reports run\n>in < 1 sec and custom reports run in < 15 seconds (usually).\n> \n>\n\nWhen you say you do your batch processing in a Python script do you mean\na you are using 'plpython' inside\nPostgreSQL or using Python to execut select statements and crunch the\ndata 'outside' PostgreSQL?\n\nYour reply is very interesting.\n\nThanks.\nRegards,\nRudi.\n\n", "msg_date": "Wed, 29 Jun 2005 10:17:41 -0700", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimized counting of web statistics" } ]
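For reference, the hourly summary table Matthew describes might look roughly like this; the names and the raw_log source table are illustrative, not taken from his actual schema:

CREATE TABLE hits_summary (
    site        varchar(128),
    atime       timestamptz,
    hour_of_day integer,
    day_of_week integer,
    total_hits  int8
);
CREATE INDEX hits_summary_site_idx  ON hits_summary (site);
CREATE INDEX hits_summary_atime_idx ON hits_summary (atime);

-- filled once per batch run from the raw log table:
INSERT INTO hits_summary
SELECT site,
       date_trunc('hour', atime),
       EXTRACT(hour FROM atime)::integer,
       EXTRACT(dow FROM atime)::integer,
       count(*)
  FROM raw_log
 WHERE atime >= now() - interval '1 day'
 GROUP BY 1, 2, 3, 4;

-- reports then aggregate the small summary table instead of the raw logs:
SELECT sum(total_hits) AS total_hits
  FROM hits_summary
 WHERE atime BETWEEN now() - interval '7 days' AND now();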
[ { "msg_contents": "Hi\n\nWhat is your Postgres-Version and with which programming\nlanguage are you connecting to the db?\n\ngreetings,\n\nMartin\n\nAm Mittwoch, den 29.06.2005, 11:49 +0200 schrieb Shay Kachlon:\n> [email protected]\n\n", "msg_date": "Wed, 29 Jun 2005 11:02:49 +0200", "msg_from": "\"Martin Fandel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: could not receive data from client: Connection timed out Error" }, { "msg_contents": "Hi There,\n\n \n\nWe having some problem with the DB which logs sometimes \"could not receive data from client: Connection timed out\" error.\n\n \n\nWe couldn't find when it happens, and why..\n\n \n\nAlso from the client side (we facing some transaction that comes back after something like 15 min) BDW: no exceptions on the client.\n\n \n\nCan any one help me with this ?\n\n \n\nShay Kachlon\n\n\n\n\n\n\n\n\n\n\nHi\nThere,\n \nWe\nhaving some problem with the DB which logs sometimes \"could not receive\ndata from client: Connection timed out\" error.\n \nWe\ncouldn't find when it happens, and why..\n \nAlso\nfrom the client side (we facing some transaction that comes back after something\nlike 15 min) BDW: no exceptions on the client.\n \nCan\nany one help me with this ?\n \nShay\nKachlon", "msg_date": "Wed, 29 Jun 2005 11:49:24 +0200", "msg_from": "\"Shay Kachlon\" <[email protected]>", "msg_from_op": false, "msg_subject": "could not receive data from client: Connection timed out Error" } ]
[ { "msg_contents": "\nI have been trying to diagnose a performance problem we have been seeing with \na postgres application. The performance of the database server is usually \nquite good but every now and then it slows to a crawl. The output of vmstat \ndoes not show excessive CPU usage or disk IO. The output of ps does show that \nthe number of postgres process's that appear to be stuck in some query spikes \nand in some cases restarting the postgres server is the only way to clear \nthem. While trying to diagnose this problem I ran\n\nselect * from pg_locks\n\nI could understand most of the output but I was wondering what a result like \nthe following indicates\n\n relation | database | transaction | pid | mode | granted\n----------+----------+-------------+-------+---------------+---------\n | | 26052434 | 29411 | ExclusiveLock | t\n | | 26051641 | 29345 | ExclusiveLock | t\n | | 26052415 | 29519 | ExclusiveLock | t\n | | 26052407 | 29381 | ExclusiveLock | t\n | | 26052432 | 29658 | ExclusiveLock | t\n\nWhen I see the slowdowns there are hundreds of these with no entry for \nrelation or database. Any ideas what is being locked in this case?\n\nEmil\n", "msg_date": "Wed, 29 Jun 2005 11:08:28 -0400", "msg_from": "Emil Briggs <[email protected]>", "msg_from_op": true, "msg_subject": "Exclusive lock question" }, { "msg_contents": "Emil Briggs <[email protected]> writes:\n> When I see the slowdowns there are hundreds of these with no entry for \n> relation or database. Any ideas what is being locked in this case?\n\nPer the pg_locks documentation:\n\nEvery transaction holds an exclusive lock on its transaction ID for its\nentire duration. If one transaction finds it necessary to wait\nspecifically for another transaction, it does so by attempting to\nacquire share lock on the other transaction ID. That will succeed only\nwhen the other transaction terminates and releases its locks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jun 2005 11:27:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exclusive lock question " } ]
[ { "msg_contents": "Hi,\n\nthe time needed for a daily VACUUM on a table with about 28 mio records\nincreases from day to day. What's the best way to avoid this? A full\nvacuum will probably take too much time, are there other ways to keep\nvacuum performant?\n\nThe database was updated to postgres-8.0 on Jun 04 this year.\n\nBetween Jun 07 and Jun 30 the time vacuum needed increased from 683\nseconds up to 1,663 seconds, the output is posted below. E.g. the time\nfor vacuuming the index of a text-field (i_ids_user) raised from 123 sec\nto 668 secs. The increase happens each day so this is not a problem of\nthe last run. The number of records in the table in the same time only\nincreased from 27.5 mio to 28.9 mio, the number of records updated daily\nis about 700,000 to 1,000,000.\n\nRegards\n\nMartin\n\n================================================================\n| Tue Jun 7 04:07:17 CEST 2005 Starting\n| SET VACUUM_MEM=250000; VACUUM ANALYZE VERBOSE t_ids\n----------------------------------------------------------------\nINFO: vacuuming \"public.t_ids\"\nINFO: index \"i_ids_score\" now contains 4323671 row versions in 12414 pages\nDETAIL: 493855 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.76s/5.44u sec elapsed 33.22 sec.\nINFO: index \"i_ids_id\" now contains 27500002 row versions in 61515 pages\nDETAIL: 960203 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 8.09s/24.93u sec elapsed 108.43 sec.\nINFO: index \"i_ids_user\" now contains 27500002 row versions in 103172 pages\nDETAIL: 960203 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 14.00s/39.65u sec elapsed 123.47 sec.\nINFO: \"t_ids\": removed 960203 row versions in 203369 pages\nDETAIL: CPU 22.88s/21.72u sec elapsed 294.22 sec.\nINFO: \"t_ids\": found 960203 removable, 27500002 nonremovable row versions in 208912 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 214149 unused item pointers.\n0 pages are entirely empty.\nCPU 53.02s/93.76u sec elapsed 643.46 sec.\nINFO: vacuuming \"pg_toast.pg_toast_224670\"\nINFO: index \"pg_toast_224670_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"pg_toast_224670\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: analyzing \"public.t_ids\"\nINFO: \"t_ids\": scanned 90000 of 208912 pages, containing 11846838 live rows and 0 dead rows; 90000 rows in sample, 27499407 estimated total rows\nVACUUM\n----------------------------------------------------------------\n| Tue Jun 7 04:18:40 CEST 2005 Job finished after 683 seconds\n================================================================\n\n================================================================\n| Thu Jun 30 01:23:33 CEST 2005 Starting\n| SET VACUUM_MEM=250000; VACUUM ANALYZE VERBOSE t_ids\n----------------------------------------------------------------\nINFO: vacuuming \"public.t_ids\"\nINFO: index \"i_ids_score\" now contains 4460326 row versions in 29867 pages\nDETAIL: 419232 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 4.58s/7.72u sec elapsed 368.13 sec.\nINFO: index \"i_ids_id\" now contains 28948643 row versions in 68832 
pages\nDETAIL: 795700 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 9.08s/25.29u sec elapsed 151.38 sec.\nINFO: index \"i_ids_user\" now contains 28948938 row versions in 131683 pages\nDETAIL: 795700 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 20.10s/43.27u sec elapsed 668.00 sec.\nINFO: \"t_ids\": removed 795700 row versions in 206828 pages\nDETAIL: CPU 23.35s/23.50u sec elapsed 309.19 sec.\nINFO: \"t_ids\": found 795700 removable, 28948290 nonremovable row versions in 223145 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 906106 unused item pointers.\n0 pages are entirely empty.\nCPU 63.10s/101.96u sec elapsed 1592.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_224670\"\nINFO: index \"pg_toast_224670_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_224670\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: analyzing \"public.t_ids\"\nINFO: \"t_ids\": scanned 90000 of 223146 pages, containing 11675055 live rows and 288 dead rows; 90000 rows in sample, 28947131 estimated total rows\nVACUUM\n----------------------------------------------------------------\n| Thu Jun 30 01:51:16 CEST 2005 Job finished after 1663 seconds\n================================================================\n\n", "msg_date": "Thu, 30 Jun 2005 09:24:06 +0200", "msg_from": "Martin Lesser <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum becomes slow" }, { "msg_contents": "Martin Lesser <[email protected]> writes:\n> the time needed for a daily VACUUM on a table with about 28 mio records\n> increases from day to day.\n\nMy guess is that the original timings were artificially low because the\nindexes were in nearly perfect physical order, and as that condition\ndegrades over time, it takes longer for VACUUM to scan them. If that's\nthe right theory, the runtime should level off soon, and maybe you don't\nneed to do anything. You could REINDEX periodically but I think the\ntime taken to do that would probably be more than you want to spend\n(especially since REINDEX locks out writes where VACUUM does not).\n\nYou should check that your FSM settings are large enough, but given that\nthe table itself doesn't seem to be bloating, that's probably not the\nissue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jun 2005 09:31:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum becomes slow " } ]
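If the index-bloat theory holds and the runtime keeps climbing, the options Tom mentions would look something like this; the index name is taken from the VACUUM output above, and the FSM numbers are only placeholders to size from your own VACUUM VERBOSE totals:

-- rebuilds the index compactly, but locks out writes while it runs:
REINDEX INDEX i_ids_user;

-- postgresql.conf, if VACUUM ever reports needing more page slots than available:
--   max_fsm_pages     = 500000
--   max_fsm_relations = 1000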
[ { "msg_contents": "We are running an application that uses psqlodbc driver on Windows XP to \nconnect to a server and for some reason the download of data from the \nserver is very slow. We have created a very simple test application that \ninserts a larger amount of data into the database and uses a simple \n\"SELECT * from test\" to download it back. The INSERT of 10MB takes about \n4 seconds, while the SELECT takes almost 5 minutes (with basically \nnothing else running on both the network and the two computers). If we \nrun the PostgreSQL server on the local machine so that the network is \nnot used, both actions are very fast.\n\nDo you have any idea what could be the cause of this behavior? Are there \nany driver settings/other drivers we should use? We are currently using \npsqlodbc version 7.03.02.00, but it seems that other versions we tried \nshow the same behavior. We have tweaked the various driver settings but \nthe times remain basically unchanged.\n\nAny ideas or hints are warmly welcome.\n\nregards\nMilan\n", "msg_date": "Thu, 30 Jun 2005 12:28:55 +0200", "msg_from": "Milan Sekanina <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC driver over network very slow" }, { "msg_contents": "Milan Sekanina <[email protected]> writes:\n> We are running an application that uses psqlodbc driver on Windows XP to \n> connect to a server and for some reason the download of data from the \n> server is very slow. We have created a very simple test application that \n> inserts a larger amount of data into the database and uses a simple \n> \"SELECT * from test\" to download it back. The INSERT of 10MB takes about \n> 4 seconds, while the SELECT takes almost 5 minutes (with basically \n> nothing else running on both the network and the two computers). If we \n> run the PostgreSQL server on the local machine so that the network is \n> not used, both actions are very fast.\n\nI seem to recall having seen similar reports not involving ODBC at all.\nTry searching the mailing-list archives, but I think the cases we solved\ninvolved getting rid of third-party add-ons to the Windows TCP stack.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jun 2005 09:48:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC driver over network very slow " } ]
[ { "msg_contents": "> Milan Sekanina <[email protected]> writes:\n> > We are running an application that uses psqlodbc driver on Windows\nXP to\n> > connect to a server and for some reason the download of data from\nthe\n> > server is very slow. We have created a very simple test application\nthat\n> > inserts a larger amount of data into the database and uses a simple\n> > \"SELECT * from test\" to download it back. The INSERT of 10MB takes\nabout\n> > 4 seconds, while the SELECT takes almost 5 minutes (with basically\n> > nothing else running on both the network and the two computers). If\nwe\n> > run the PostgreSQL server on the local machine so that the network\nis\n> > not used, both actions are very fast.\n> \n> I seem to recall having seen similar reports not involving ODBC at\nall.\n> Try searching the mailing-list archives, but I think the cases we\nsolved\n> involved getting rid of third-party add-ons to the Windows TCP stack.\n\nIIRC there was a TCP related fix in the odbc driver related to\nperformance with large buffers. I'd suggest trying a newer odbc driver\nfirst.\n\nMerlin \n\ndave page wrote ([odbc] 500 times slower)\n> \n> My collegue spent some time to dig the following case and it \n> looks like \n> Nagle algorithm and delayed ACKs related problem.\n> In psqlodbc.h\n> #define SOCK_BUFFER_SIZE\t\t\t4096\n> \n> I changed that value to 8192 and driver works fine for me.\n> I am not sure why this change helps.\n\nErr, no neither am I. Why do you think it's got something to do with\nNagle/delayed ACKs?\n\nThe only thing that instantly rings bells for me is that the max size of\na text field is 8190 bytes at present (which really should be increased,\nif not removed altogether), which won't fit in the default buffer. But\nthen, I wouldn't expect to see the performance drop you describe with a\n4096 byte buffer, only one much smaller.\n", "msg_date": "Thu, 30 Jun 2005 10:10:25 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC driver over network very slow " } ]
[ { "msg_contents": "I was hesitant to jump in on this because I am new to PostgreSQL and\nhaven't seen this problem with _it_, but I have seen this with the\nSybase database products. You can configure Sybase to disable the Nagle\nalgorithm. If you don't, any query which returns rows too big to fit in\ntheir network buffer will be painfully slow. Increasing the buffer size\ncan help with an individual query, but it just reduces the scope of the\nproblem. What you really want to do is make sure that TCP_NODELAY is\nset for the connection, to disable the Nagle algorithm; it just doesn't\nseem to be appropriate for returning query results.\n \nHow this issue comes into play in PostgreSQL is beyond my ken, but\nhopefully this observation is helpful to someone.\n \n-Kevin\n \n \n>>> \"Merlin Moncure\" <[email protected]> 06/30/05 9:10 AM >>>\n\n> My collegue spent some time to dig the following case and it \n> looks like \n> Nagle algorithm and delayed ACKs related problem.\n> In psqlodbc.h\n> #define SOCK_BUFFER_SIZE\t\t\t4096\n> \n> I changed that value to 8192 and driver works fine for me.\n> I am not sure why this change helps.\n\nErr, no neither am I. Why do you think it's got something to do with\nNagle/delayed ACKs?\n\n", "msg_date": "Thu, 30 Jun 2005 09:47:19 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC driver over network very slow" } ]
[ { "msg_contents": "hi,\n\nI have two computers, one laptop (1.5 GHz, 512 Mb RAM, 1 disk 4200)\nand one big Sun (8Gb RAM, 2 disks SCSI).\n\nOn my laptop, I have this EXPLAIN ANALYZE\n\nSort (cost=7.56..7.56 rows=1 width=28) (actual time=0.187..0.187 \nrows=0 loops=1)\n Sort Key: evolution, indx\n -> Index Scan using index_xdb_child on xdb_child c1 \n(cost=0.00..7.55 rows=1 width=28) (actual time=0.045..0.045 rows=0 loops=1)\n Index Cond: ((doc_id = 100) AND (ele_id = 1) AND (isremoved = 0))\n Filter: (evolution = (subplan))\n SubPlan\n -> Aggregate (cost=3.78..3.78 rows=1 width=4) (never executed)\n -> Index Scan using index_xdb_child on xdb_child c2 \n(cost=0.00..3.77 rows=1 width=4) (never executed)\n Index Cond: ((doc_id = 100) AND (ele_id = 1))\n Filter: ((evolution <= 0) AND (child_id = $0) AND \n(child_class = $1))\n Total runtime: 0.469 ms\n(11 rows)\n\n\nand on the SUN:\n\n\"Sort (cost=7.56..7.56 rows=1 width=28) (actual time=26.335..26.335\nrows=0 loops=1)\"\n\" Sort Key: evolution, indx\"\n\" -> Index Scan using index_xdb_child on xdb_child c1 \n(cost=0.00..7.55 rows=1 width=28) (actual time=26.121..26.121 rows=0\nloops=1)\"\n\" Index Cond: ((doc_id = 100) AND (ele_id = 1) AND (isremoved = 0))\"\n\" Filter: (evolution = (subplan))\"\n\" SubPlan\"\n\" -> Aggregate (cost=3.78..3.78 rows=1 width=4) (never executed)\"\n\" -> Index Scan using index_xdb_child on xdb_child c2 \n(cost=0.00..3.77 rows=1 width=4) (never executed)\"\n\" Index Cond: ((doc_id = 100) AND (ele_id = 1))\"\n\" Filter: ((evolution <= 0) AND (child_id = $0)\nAND (child_class = $1))\"\n\"Total runtime: 26.646 ms\"\n\n\n\n\nso the request run in 26.646 ms on the Sun and 0.469ms on my laptop :-( \nthe database are the same, vacuumed and I think the Postgres (8.0.3)\nare well configured.\nThe Sun has two disks and use the TABLESPACE to have index on one disk\nand data's on the other disk.\nIt seems that the cost of the first sort is very high on the Sun.\nHow is it possible ?\n\nthe request:\n\nexplain analyze select * from XDB_CHILD c1\n where c1.doc_id = 100\n and c1.ele_id = 1\n and c1.isremoved = 0\n and c1.evolution = (select max(evolution)\n from XDB_CHILD c2\n where c2.doc_id=100\n and c2.ele_id=1\n and c2.evolution<=0\n and\nc2.child_id=c1.child_id\n and\nc2.child_class=c1.child_class) ORDER BY c1.evolution, c1.indx\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 30 Jun 2005 17:42:06 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "start time very high" }, { "msg_contents": "2005/6/30, Jean-Max Reymond <[email protected]>:\n> so the request run in 26.646 ms on the Sun and 0.469ms on my laptop :-(\n> the database are the same, vacuumed and I think the Postgres (8.0.3)\n> are well configured.\n> The Sun has two disks and use the TABLESPACE to have index on one disk\n> and data's on the other disk.\n> It seems that the cost of the first sort is very high on the Sun.\n> How is it possible ?\n\nmay be data's not loaded in memory but on disk ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 30 Jun 2005 18:23:22 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: start time very high" }, { "msg_contents": "Jean-Max,\n\n> I have two computers, one laptop (1.5 GHz, 512 Mb RAM, 1 disk 4200)\n> and one big Sun (8Gb RAM, 2 disks SCSI).\n\nDid you run each query several times? 
It looks like the index is cached \non one server and not on the other.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 30 Jun 2005 12:01:27 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start time very high" }, { "msg_contents": "Jean-Max Reymond <[email protected]> writes:\n> so the request run in 26.646 ms on the Sun and 0.469ms on my laptop :-( \n> the database are the same, vacuumed and I think the Postgres (8.0.3)\n> are well configured.\n\nAre you sure they're both vacuumed? The Sun machine's behavior seems\nconsistent with the idea of a lot of dead rows in its copy of the table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jun 2005 17:01:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start time very high " } ]
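A quick way to test both theories above (dead rows in the Sun copy versus a cold cache), assuming the table and index names from the plans:

-- reports how many removable (dead) row versions the Sun copy carries:
VACUUM VERBOSE xdb_child;

-- compare the physical size of the two copies:
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('xdb_child', 'index_xdb_child');

-- then repeat the EXPLAIN ANALYZE; if only the first run is slow, the
-- difference was simply the index and table not being cached in RAM yet.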
[ { "msg_contents": "pgsql performance gurus,\n\nWe ported an application from oracle to postgresql.\nWe are experiencing an approximately 50% performance\nhit. I am in the process of isolating the problem.\nI have searched the internet (google) and tried various\nthings. Only one thing seems to work. I am trying to\nfind out if our solution is the only option, or if I\nam doing something terribly wrong.\n\nThe original application runs on the following:\n\nhw:\ncpu0: SUNW,UltraSPARC-IIi (upaid 0 impl 0x12 ver 0x12 clock 302 MHz)\nmem = 393216K (0x18000000)\n\nsw:\nSolaris 5.6\nOracle 7.3.2.2.0\nApache 1.3.27\nPerl 5.004_04\nmod_perl 1.27\nDBI 1.20\nDBD::Oracle 1.12\n\nThe ported application runs on the following:\n\nhw:\nunix: [ID 389951 kern.info] mem = 262144K (0x10000000)\nrootnex: [ID 466748 kern.info] root nexus = Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 360MHz)\n\nsw:\nSolaris 5.9\nPostgreSQL 7.4.6\nApache 1.3.33\nPerl 5.8.6\nmod_perl 1.29\nDBI 1.46\nDBD::Pg 1.40.1\n\nBased on assistance from another list, we have\ntried the following:\n\n(1) Upgraded DBD::Pg to current version 1.43\n(2) Ensured all tables are analyzed regularly\n(3) Setting some memory options in postgresql.conf\n(4) Located a handful of slow queries by setting\n log_min_duration_statement to 250.\n\nFuture options we will consider are:\n\n(1) Attempting other option settings, like\n random_page_cost\n(2) Upgrading db server to current version 8.0.3\n\nWith our handful of slow queries, we have done\nseveral iterations of changes to determine what\nwill address the issues.\n\nWe have broken this down to the direction of a join\nand setting the enable_seqscan to off. The table\ndefinitions are at the bottom of this e-mail. There\nis one large table (contacts) and one smaller table\n(lead_requests). 
The final SQL is as follows:\n\nSELECT\n c.id AS contact_id,\n lr.id AS lead_request_id\nFROM\n lead_requests lr\n JOIN contacts c ON (c.id = lr.contact_id)\nWHERE\n c.partner_id IS NULL\nORDER BY\n contact_id\n\nI ran this query against freshly vacuum analyzed tables.\n\nThe first run is as follows:\n\ndb=> explain analyze SELECT\ndb-> c.id AS contact_id,\ndb-> lr.id AS lead_request_id\ndb-> FROM\ndb-> lead_requests lr\ndb-> JOIN contacts c ON (c.id = lr.contact_id)\ndb-> WHERE\ndb-> c.partner_id IS NULL\ndb-> ORDER BY\ndb-> contact_id\ndb-> ;\nLOG: duration: 4618.133 ms statement: explain analyze SELECT\n c.id AS contact_id,\n lr.id AS lead_request_id\n FROM\n lead_requests lr\n JOIN contacts c ON (c.id = lr.contact_id)\n WHERE\n c.partner_id IS NULL\n ORDER BY\n contact_id\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=4272.84..4520.82 rows=1230 width=21) (actual time=3998.771..4603.739 rows=699 loops=1)\n Merge Cond: (\"outer\".contact_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..74.09 rows=1435 width=21) (actual time=0.070..22.431 rows=1430 loops=1)\n -> Sort (cost=4272.84..4352.28 rows=31775 width=11) (actual time=3998.554..4130.067 rows=32640 loops=1)\n Sort Key: c.id\n -> Seq Scan on contacts c (cost=0.00..1896.77 rows=31775 width=11) (actual time=0.040..326.135 rows=32501 loops=1)\n Filter: (partner_id IS NULL)\n Total runtime: 4611.323 ms\n(8 rows)\n\nAs you can see, run time over 4 seconds.\nThen, I set enable_seqscan = off.\n\ndb=> set enable_seqscan=off;\nSET\n\nThen I ran the exact same query:\n\ndb=> explain analyze SELECT\ndb-> c.id AS contact_id,\ndb-> lr.id AS lead_request_id\ndb-> FROM\ndb-> lead_requests lr\ndb-> JOIN contacts c ON (c.id = lr.contact_id)\ndb-> WHERE\ndb-> c.partner_id IS NULL\ndb-> ORDER BY\ndb-> contact_id\ndb-> ;\nLOG: duration: 915.304 ms statement: explain analyze SELECT\n c.id AS contact_id,\n lr.id AS lead_request_id\n FROM\n lead_requests lr\n JOIN contacts c ON (c.id = lr.contact_id)\n WHERE\n c.partner_id IS NULL\n ORDER BY\n contact_id\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..4749.84 rows=1230 width=21) (actual time=0.213..901.315 rows=699 loops=1)\n Merge Cond: (\"outer\".contact_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..74.09 rows=1435 width=21) (actual time=0.073..21.448 rows=1430 loops=1)\n -> Index Scan using contacts_pkey on contacts c (cost=0.00..4581.30 rows=31775 width=11) (actual time=0.038..524.217 rows=32640 loops=1)\n Filter: (partner_id IS NULL)\n Total runtime: 903.638 ms\n(6 rows)\n\nUnder 1 second. Excellent.\n\nThe contacts table has 30000+ records.\nThe lead_requests table has just around 1500 records.\nI want the query to start with the join at the lead_requests\ntable since the number is so much smaller.\n\nSo, bottom line is this:\nIn order to get the performance to an acceptable level\n(I can live with under 1 second, though less time would\nbe better), do I have to set enable_seqscan to off every\ntime I run this query? Is there a better or more acceptable\nway to handle this?\n\nThank you very much in advance for any pointers you can\nprovide. 
And, if this is the wrong forum for this question,\nplease let me know and I'll ask it elsewhere.\n\nJohnM\n\n\n\n\n\n-----\ntable definitions\n-----\n\n-----\ndb=> \\d contacts\n Table \"db.contacts\"\n Column | Type | Modifiers \n------------------------------+-----------------------------+-----------\n id | numeric(38,0) | not null\n db_id | character varying(32) | \n firstname | character varying(64) | \n lastname | character varying(64) | \n company | character varying(128) | \n email | character varying(256) | \n phone | character varying(64) | \n address | character varying(128) | \n city | character varying(128) | \n state | character varying(32) | \n postalcode | character varying(16) | \n country | character varying(2) | not null\n contact_info_modified | character(1) | \n token_id | numeric(38,0) | \n status_id | numeric(38,0) | \n status_last_modified | timestamp without time zone | \n notes | character varying(2000) | \n demo_schedule | timestamp without time zone | \n partner_id | numeric(38,0) | \n prev_partner_id | numeric(38,0) | \n prev_prev_partner_id | numeric(38,0) | \n site_last_visited | timestamp without time zone | \n source_id | numeric(4,0) | \n demo_requested | timestamp without time zone | \n sourcebook_requested | timestamp without time zone | \n zip | numeric(8,0) | \n suffix | numeric(8,0) | \n feedback_request_sent | timestamp without time zone | \n products_sold | character varying(512) | \n other_brand | character varying(512) | \n printsample_requested | timestamp without time zone | \n indoor_media_sample | timestamp without time zone | \n outdoor_media_sample | timestamp without time zone | \n printers_owned | character varying(256) | \n business_type | character varying(256) | \n printers_owned2 | character varying(256) | \n contact_quality_id | numeric(38,0) | \n est_annual_value | numeric(38,2) | \n likelyhood_of_closing | numeric(38,0) | \n priority | numeric(38,0) | \n business_type_id | numeric(38,0) | \n lead_last_modified | timestamp without time zone | \n lead_value | numeric(38,2) | \n channel_contact_flag | character(1) | \n request_status_last_modified | timestamp without time zone | \n master_key_number | numeric(38,0) | \n master_key_token | character varying(32) | \n current_media_cust | character(1) | \n kodak_media_id | numeric(38,0) | \n printer_sample_id | numeric(38,0) | \n quantity_used_id | numeric(38,0) | \n rip_used_id | numeric(38,0) | \n language_code | character varying(3) | \n region_id | numeric(38,0) | not null\n lead_deleted | timestamp without time zone | \n last_request_set_status_id | numeric(38,0) | \n address2 | character varying(128) | \n media_usage_id | numeric(38,0) | \nIndexes:\n \"contacts_pkey\" primary key, btree (id)\n \"contacts_partner_id_idx\" btree (partner_id)\n \"contacts_partner_id_null_idx\" btree (partner_id) WHERE (partner_id IS NULL)\n \"contacts_token_id_idx\" btree (token_id)\nCheck constraints:\n \"sys_c0050644\" CHECK (country IS NOT NULL)\n \"sys_c0050643\" CHECK (id IS NOT NULL)\n \"sys_c0050645\" CHECK (region_id IS NOT NULL)\nTriggers:\n insert_master_key BEFORE INSERT ON contacts FOR EACH ROW EXECUTE PROCEDURE pg_fct_insert_master_key()\n-----\n\n-----\ndb=> \\d lead_requests\n Table \"db.lead_requests\"\n Column | Type | Modifiers \n-----------------------+-----------------------------+-----------\n id | numeric(38,0) | not null\n contact_id | numeric(38,0) | not null\n request_id | numeric(38,0) | not null\n date_requested | timestamp without time zone | not null\n must_update_by | 
timestamp without time zone | \n date_satisfied | timestamp without time zone | \n status_id | numeric(38,0) | \n request_scheduled | timestamp without time zone | \n session_log_id | numeric(38,0) | \n notes | character varying(2000) | \n status_last_modified | timestamp without time zone | \n reminder_last_sent | timestamp without time zone | \n data | character varying(2000) | \n fulfillment_status_id | numeric(38,0) | \nIndexes:\n \"lead_requests_pkey\" primary key, btree (id)\n \"lead_requests_contact_id_idx\" btree (contact_id)\n \"lead_requests_request_id_idx\" btree (request_id)\nCheck constraints:\n \"sys_c0049877\" CHECK (request_id IS NOT NULL)\n \"sys_c0049876\" CHECK (contact_id IS NOT NULL)\n \"sys_c0049878\" CHECK (date_requested IS NOT NULL)\n-----\n\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Thu, 30 Jun 2005 15:24:51 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "ported application having performance issues" }, { "msg_contents": "\n> Thank you very much in advance for any pointers you can\n> provide. And, if this is the wrong forum for this question,\n> please let me know and I'll ask it elsewhere.\n\nI think you may want to increase your statistics_target plus make sure \nyou are running analyze. explain anaylze would do.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> JohnM\n> \n> \n> \n> \n> \n> -----\n> table definitions\n> -----\n> \n> -----\n> db=> \\d contacts\n> Table \"db.contacts\"\n> Column | Type | Modifiers \n> ------------------------------+-----------------------------+-----------\n> id | numeric(38,0) | not null\n> db_id | character varying(32) | \n> firstname | character varying(64) | \n> lastname | character varying(64) | \n> company | character varying(128) | \n> email | character varying(256) | \n> phone | character varying(64) | \n> address | character varying(128) | \n> city | character varying(128) | \n> state | character varying(32) | \n> postalcode | character varying(16) | \n> country | character varying(2) | not null\n> contact_info_modified | character(1) | \n> token_id | numeric(38,0) | \n> status_id | numeric(38,0) | \n> status_last_modified | timestamp without time zone | \n> notes | character varying(2000) | \n> demo_schedule | timestamp without time zone | \n> partner_id | numeric(38,0) | \n> prev_partner_id | numeric(38,0) | \n> prev_prev_partner_id | numeric(38,0) | \n> site_last_visited | timestamp without time zone | \n> source_id | numeric(4,0) | \n> demo_requested | timestamp without time zone | \n> sourcebook_requested | timestamp without time zone | \n> zip | numeric(8,0) | \n> suffix | numeric(8,0) | \n> feedback_request_sent | timestamp without time zone | \n> products_sold | character varying(512) | \n> other_brand | character varying(512) | \n> printsample_requested | timestamp without time zone | \n> indoor_media_sample | timestamp without time zone | \n> outdoor_media_sample | timestamp without time zone | \n> printers_owned | character varying(256) | \n> business_type | character varying(256) | \n> printers_owned2 | character varying(256) | \n> contact_quality_id | numeric(38,0) | \n> est_annual_value | numeric(38,2) | \n> likelyhood_of_closing | numeric(38,0) | \n> priority | numeric(38,0) | \n> business_type_id | numeric(38,0) | \n> lead_last_modified | timestamp without time zone | \n> lead_value | numeric(38,2) | \n> channel_contact_flag | character(1) | \n> request_status_last_modified | timestamp without time zone | \n> 
master_key_number | numeric(38,0) | \n> master_key_token | character varying(32) | \n> current_media_cust | character(1) | \n> kodak_media_id | numeric(38,0) | \n> printer_sample_id | numeric(38,0) | \n> quantity_used_id | numeric(38,0) | \n> rip_used_id | numeric(38,0) | \n> language_code | character varying(3) | \n> region_id | numeric(38,0) | not null\n> lead_deleted | timestamp without time zone | \n> last_request_set_status_id | numeric(38,0) | \n> address2 | character varying(128) | \n> media_usage_id | numeric(38,0) | \n> Indexes:\n> \"contacts_pkey\" primary key, btree (id)\n> \"contacts_partner_id_idx\" btree (partner_id)\n> \"contacts_partner_id_null_idx\" btree (partner_id) WHERE (partner_id IS NULL)\n> \"contacts_token_id_idx\" btree (token_id)\n> Check constraints:\n> \"sys_c0050644\" CHECK (country IS NOT NULL)\n> \"sys_c0050643\" CHECK (id IS NOT NULL)\n> \"sys_c0050645\" CHECK (region_id IS NOT NULL)\n> Triggers:\n> insert_master_key BEFORE INSERT ON contacts FOR EACH ROW EXECUTE PROCEDURE pg_fct_insert_master_key()\n> -----\n> \n> -----\n> db=> \\d lead_requests\n> Table \"db.lead_requests\"\n> Column | Type | Modifiers \n> -----------------------+-----------------------------+-----------\n> id | numeric(38,0) | not null\n> contact_id | numeric(38,0) | not null\n> request_id | numeric(38,0) | not null\n> date_requested | timestamp without time zone | not null\n> must_update_by | timestamp without time zone | \n> date_satisfied | timestamp without time zone | \n> status_id | numeric(38,0) | \n> request_scheduled | timestamp without time zone | \n> session_log_id | numeric(38,0) | \n> notes | character varying(2000) | \n> status_last_modified | timestamp without time zone | \n> reminder_last_sent | timestamp without time zone | \n> data | character varying(2000) | \n> fulfillment_status_id | numeric(38,0) | \n> Indexes:\n> \"lead_requests_pkey\" primary key, btree (id)\n> \"lead_requests_contact_id_idx\" btree (contact_id)\n> \"lead_requests_request_id_idx\" btree (request_id)\n> Check constraints:\n> \"sys_c0049877\" CHECK (request_id IS NOT NULL)\n> \"sys_c0049876\" CHECK (contact_id IS NOT NULL)\n> \"sys_c0049878\" CHECK (date_requested IS NOT NULL)\n> -----\n> \n> \n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Thu, 30 Jun 2005 15:35:38 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ported application having performance issues" }, { "msg_contents": "John Mendenhall <[email protected]> writes:\n> Merge Join (cost=4272.84..4520.82 rows=1230 width=21) (actual time=3998.771..4603.739 rows=699 loops=1)\n> Merge Cond: (\"outer\".contact_id = \"inner\".id)\n> -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..74.09 rows=1435 width=21) (actual time=0.070..22.431 rows=1430 loops=1)\n> -> Sort (cost=4272.84..4352.28 rows=31775 width=11) (actual time=3998.554..4130.067 rows=32640 loops=1)\n> Sort Key: c.id\n> -> Seq Scan on contacts c (cost=0.00..1896.77 rows=31775 width=11) (actual time=0.040..326.135 rows=32501 loops=1)\n> Filter: (partner_id IS NULL)\n> Total runtime: 4611.323 ms\n\nHmm ... even on a SPARC, it doesn't seem like it should take 4 seconds\nto sort 30000 rows. 
You can certainly see that the planner is not\nexpecting that (it's estimating a sort cost comparable to the scan cost,\nwhich if true would put this in the sub-second ballpark).\n\nDoes increasing sort_mem help?\n\nHave you considered using some other datatype than \"numeric\" for your\nkeys? Numeric may be fast on Oracle but it's not amazingly fast on\nPostgres. bigint would be better, if you don't really need 38 digits;\nif you do, I'd be inclined to think about plain char or varchar keys.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jun 2005 18:43:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ported application having performance issues " }, { "msg_contents": "pgsql performance gurus,\n\nI truly appreciate the suggestions provided.\n\nI have tried each one separately to determine the\nbest fit. I have included results for each suggestion.\nI have also included my entire postgresql.conf file so\nyou can see our base configuration.\nEach result is based on an in-session variable setting,\nso it only affected the current session.\n\n(1) Increase the default_statistics_target,\nrun vacuum, analyze on each table for each setting\n\nThe default setting is 10.\n\nI tried the following settings, with the corresponding\nresults:\n\ndefault_statistics_target = 10 time approximately 4500ms\ndefault_statistics_target = 100 time approximately 3900ms\ndefault_statistics_target = 500 time approximately 3900ms\ndefault_statistics_target = 1000 time approximately 3900ms\n\nSo, this option does not quite get us there.\n\n(2) Increase sort_mem value\n\nThe current setting for sort_mem is 2048.\n\nsort_mem = 2048 time approximately 4500ms\nsort_mem = 8192 time approximately 2750ms\nsort_mem = 16384 time approximately 2650ms\nsort_mem = 1024 time approximately 1000ms\n\nInteresting to note...\nWhen I set sort_mem to 1024, the plan started the join\nwith the lead_requests table and used the contacts index.\nNone of the above attempts used this.\n\n(3) Decrease random_page_cost, increase effective_cache_size\n\nThe default setting for random_page_cost is 4.\nOur setting for effective_cache_size is 2048.\n\nrandom_page_cost = 4, effective_cache_size = 2048 time approximately 4500ms\nrandom_page_cost = 3, effective_cache_size = 2048 time approximately 1050ms\nrandom_page_cost = 3, effective_cache_size = 4096 time approximately 1025ms\n\nThe decrease of random_page_cost to 3 caused the plan\nto work properly, using the lead_requests table as a\njoin starting point and using the contacts index.\n\n*****\n\nIt appears we learned the following:\n\n(a) For some reason, setting the sort_mem smaller than\nour current setting caused things to work correctly.\n(b) Lowering random_page_cost causes things to work\ncorrectly.\n\nThis brings up the following questions:\n\n (i) What is the ideal configuration for this query\nto work?\n(ii) Will this ideal configuration work for all our\nother queries, or is this specific to this query only?\n(iii) Should I try additional variable changes, or\nlower/raise the variables I have already changed even\nmore?\n\nThanks again for the suggestions provided. And,\nthanks in advance for any additional thoughts or\nsuggestions.\n\nJohnM\n\n::::::::::::::\npostgresql.conf\n::::::::::::::\n-----\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. 
Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\".\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\ntcpip_socket = false\nmax_connections = 128\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from shared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#port = 5432\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults to any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 4096 # min 16, at least max_connections*2, 8KB each\nsort_mem = 2048 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 2048 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer 
-\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\nsyslog = 1 # range 0-2; 0=stdout; 1=both; 2=syslog\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = log # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = info # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n\nlog_error_verbosity = verbose # terse, default, or verbose messages\n\nlog_min_error_statement = info # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n\nlog_min_duration_statement = 250 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. Zero prints all\n # queries. Minus-one disables.\n\nsilent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\ndebug_pretty_print = true\nlog_connections = true\n#log_duration = true\n#log_pid = false\n#log_statement = true\nlog_timestamp = true\n#log_hostname = false\n#log_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment setting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'C' # locale for system error message strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK 
MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n-----\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Thu, 30 Jun 2005 16:58:21 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ported application having performance issues" }, { "msg_contents": "On Thu, 30 Jun 2005, John Mendenhall wrote:\n\n> Our setting for effective_cache_size is 2048.\n> \n> random_page_cost = 4, effective_cache_size = 2048 time approximately 4500ms\n> random_page_cost = 3, effective_cache_size = 2048 time approximately 1050ms\n> random_page_cost = 3, effective_cache_size = 4096 time approximately 1025ms\n> \n> The decrease of random_page_cost to 3 caused the plan\n> to work properly, using the lead_requests table as a\n> join starting point and using the contacts index.\n\nThe effective_cache_size still looks small. As a rule of tumb you might\nwant effective_cache_size to be something like 1/2 or 2/3 of your total\nmemory. I don't know how much you had, but effective_cache_size = 4096 is\nonly 32M.\n\nshared_buffers and effective_cache_size is normally the two most important \nsettings in my experience.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Fri, 1 Jul 2005 10:57:02 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ported application having performance issues" }, { "msg_contents": "Dennis,\n\nOn Fri, 01 Jul 2005, Dennis Bjorklund wrote:\n\n> On Thu, 30 Jun 2005, John Mendenhall wrote:\n> \n> > Our setting for effective_cache_size is 2048.\n> > \n> > random_page_cost = 4, effective_cache_size = 2048 time approximately 4500ms\n> > random_page_cost = 3, effective_cache_size = 2048 time approximately 1050ms\n> > random_page_cost = 3, effective_cache_size = 4096 time approximately 1025ms\n> \n> The effective_cache_size still looks small. As a rule of tumb you might\n> want effective_cache_size to be something like 1/2 or 2/3 of your total\n> memory. I don't know how much you had, but effective_cache_size = 4096 is\n> only 32M.\n> \n> shared_buffers and effective_cache_size is normally the two most important \n> settings in my experience.\n\nI have increased the effective_cache_size to 16384 (128M). I have kept\nrandom_page_cost at 3 for now. This appears to give me the performance\nI need at this time.\n\nIn the future, we'll look at other methods of increasing the\nperformance.\n\nThank you all for all your suggestions.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Fri, 1 Jul 2005 17:52:37 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ported application having performance issues" }, { "msg_contents": "Hi all,\n\n If you can just help my understanding the choice of the planner. 
\n\nHere is the Query:\n explain analyse SELECT IRNUM FROM IR\n INNER JOIN IT ON IT.ITIRNUM = ANY ('{1000, 2000}') AND \nIT.ITYPNUM = 'M' AND IR.IRYPNUM = IT.ITYPNUM AND IR.IRNUM = IT.ITIRNUM \n WHERE IRNUM = ANY ('{1000, 2000}') and IRYPNUM = 'M'\n\nHere is the Query plan:\n\nQUERY PLAN\nHash Join (cost=1142.47..5581.75 rows=87 width=4) (actual time=125.000..203.000 rows=2 loops=1)\n Hash Cond: (\"outer\".itirnum = \"inner\".irnum)\n -> Seq Scan on it (cost=0.00..3093.45 rows=31646 width=9) (actual time=0.000..78.000 rows=2 loops=1)\n Filter: ((itirnum = ANY ('{1000,2000}'::integer[])) AND ((itypnum)::text = 'M'::text))\n -> Hash (cost=1142.09..1142.09 rows=151 width=37) (actual time=125.000..125.000 rows=0 loops=1)\n -> Index Scan using ir_pk on ir (cost=0.00..1142.09 rows=151 width=37) (actual time=0.000..125.000 rows=2 loops=1)\n Index Cond: ((irypnum)::text = 'M'::text)\n Filter: (irnum = ANY ('{1000,2000}'::integer[]))\nTotal runtime: 203.000 ms\n\n I don't understand why the planner do a Seq Scan (Seq Scan on table \nIT ..) instead of passing by the followin index:\n ALTER TABLE IT ADD CONSTRAINT IT_IR_FK foreign key (ITYPNUM,ITIRNUM) \nreferences IR (IRYPNUM, IRNUM) ON UPDATE CASCADE;\n\nI tried some stuff but I'm not able to change this behavior. The IT and \nIR table may be quite huge (from 20k to 1600k rows) so I think doing a \nSEQ SCAN is not a good idea.. am I wrong? Is this query plan is oki for \nyou ?\n\nThanks for your help.\n\n/David\n P.S.: I'm using postgresql 8.0.3 on windows and I change those setting \nin my postgresql.conf :\nshared_buffers = 12000 # min 16, at least max_connections*2, 8KB each\nwork_mem = 15000 # min 64, size in KB", "msg_date": "Mon, 04 Jul 2005 15:57:49 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Why the planner is not using the INDEX ." }, 
{ "msg_contents": "On Mon, 4 Jul 2005, David Gagnon wrote:\n\n> If you can just help my understanding the choice of the planner.\n>\n> Here is the Query:\n> explain analyse SELECT IRNUM FROM IR\n> INNER JOIN IT ON IT.ITIRNUM = ANY ('{1000, 2000}') AND\n> IT.ITYPNUM = 'M' AND IR.IRYPNUM = IT.ITYPNUM AND IR.IRNUM = IT.ITIRNUM\n> WHERE IRNUM = ANY ('{1000, 2000}') and IRYPNUM = 'M'\n>\n> Here is the Query plan:\n>\n> QUERY PLAN\n>\n> Hash Join (cost=1142.47..5581.75 rows=87 width=4) (actual\n> time=125.000..203.000 rows=2 loops=1)\n> Hash Cond: (\"outer\".itirnum = \"inner\".irnum)\n> -> Seq Scan on it (cost=0.00..3093.45 rows=31646 width=9) (actual\n> time=0.000..78.000 rows=2 loops=1)\n> Filter: ((itirnum = ANY ('{1000,2000}'::integer[])) AND\n> ((itypnum)::text = 'M'::text))\n>\n> -> Hash (cost=1142.09..1142.09 rows=151 width=37) (actual\n> time=125.000..125.000 rows=0 loops=1)\n> -> Index Scan using ir_pk on ir (cost=0.00..1142.09 rows=151\n> width=37) (actual time=0.000..125.000 rows=2 loops=1)\n> Index Cond: ((irypnum)::text = 'M'::text)\n> Filter: (irnum = ANY ('{1000,2000}'::integer[]))\n> Total runtime: 203.000 ms\n\n> I don't understand why the planner do a Seq Scan (Seq Scan on table\n> IT ..) instead of passing by the followin index:\n> ALTER TABLE IT ADD CONSTRAINT IT_IR_FK foreign key (ITYPNUM,ITIRNUM)\n> references IR (IRYPNUM, IRNUM) ON UPDATE CASCADE;\n\nThat doesn't create an index on IT. Primary keys (and unique constraints)\ncreate indexes, but not foreign keys. Did you also create an index on\nthose fields?\n\nAlso it looks like it's way overestimating the number of rows that\ncondition would succeed for. You might consider raising the statistics\ntargets on those columns and reanalyzing.\n", "msg_date": "Mon, 4 Jul 2005 13:27:41 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, 
{ "msg_contents": "Thanks .. I miss that FK don't create indexed ... since Primary key \nimplicitly does ...\n\nI'm a bit surprised of that behavior thought, since it means that if we \ndelete a row from table A all tables (B,C,D) with FK pointing to this \ntable (A) must be scanned. \nIf there is no index on those tables it means we gone do all Sequantial \nscans. Than can cause significant performance problem!!!.\n\nIs there a reason why implicit index aren't created when FK are \ndeclared. I looked into the documentation and I haven't found a way to \ntell postgresql to automatically create an index when creating la FK. \nDoes it means I need to manage it EXPLICITLY with create index statement \n? 
Is there another way ?\n\nThanks for you help that simple answer will solve a lot of performance \nproblem I have ...\n\n/David\n\n\n>On Mon, 4 Jul 2005, David Gagnon wrote:\n>\n> \n>\n>> If you can just help my understanding the choice of the planner.\n>>\n>>Here is the Query:\n>> explain analyse SELECT IRNUM FROM IR\n>> INNER JOIN IT ON IT.ITIRNUM = ANY ('{1000, 2000}') AND\n>>IT.ITYPNUM = 'M' AND IR.IRYPNUM = IT.ITYPNUM AND IR.IRNUM = IT.ITIRNUM\n>> WHERE IRNUM = ANY ('{1000, 2000}') and IRYPNUM = 'M'\n>>\n>>Here is the Query plan:\n>>\n>>QUERY PLAN\n>>\n>>Hash Join (cost=1142.47..5581.75 rows=87 width=4) (actual\n>>time=125.000..203.000 rows=2 loops=1)\n>> Hash Cond: (\"outer\".itirnum = \"inner\".irnum)\n>> -> Seq Scan on it (cost=0.00..3093.45 rows=31646 width=9) (actual\n>>time=0.000..78.000 rows=2 loops=1)\n>> Filter: ((itirnum = ANY ('{1000,2000}'::integer[])) AND\n>>((itypnum)::text = 'M'::text))\n>>\n>> -> Hash (cost=1142.09..1142.09 rows=151 width=37) (actual\n>>time=125.000..125.000 rows=0 loops=1)\n>> -> Index Scan using ir_pk on ir (cost=0.00..1142.09 rows=151\n>>width=37) (actual time=0.000..125.000 rows=2 loops=1)\n>> Index Cond: ((irypnum)::text = 'M'::text)\n>> Filter: (irnum = ANY ('{1000,2000}'::integer[]))\n>>Total runtime: 203.000 ms\n>> \n>>\n>\n> \n>\n>> I don't understand why the planner do a Seq Scan (Seq Scan on table\n>>IT ..) instead of passing by the followin index:\n>> ALTER TABLE IT ADD CONSTRAINT IT_IR_FK foreign key (ITYPNUM,ITIRNUM)\n>>references IR (IRYPNUM, IRNUM) ON UPDATE CASCADE;\n>> \n>>\n>\n>That doesn't create an index on IT. Primary keys (and unique constraints)\n>create indexes, but not foreign keys. Did you also create an index on\n>those fields?\n>\n>Also it looks like it's way overestimating the number of rows that\n>condition would succeed for. You might consider raising the statistics\n>targets on those columns and reanalyzing.\n>\n> \n>\n", "msg_date": "Mon, 04 Jul 2005 20:29:50 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, 
{ "msg_contents": "> I'm a bit surprised of that behavior thought, since it means that if we \n> delete a row from table A all tables (B,C,D) with FK pointing to this \n> table (A) must be scanned. \n> If there is no index on those tables it means we gone do all Sequantial \n> scans. Than can cause significant performance problem!!!.\n\nCorrect.\n\n> Is there a reason why implicit index aren't created when FK are \n> declared.\n\nBecause it's not a requirement...\n\n> I looked into the documentation and I haven't found a way to \n> tell postgresql to automatically create an index when creating la FK. \n> Does it means I need to manage it EXPLICITLY with create index statement \n> ? Is there another way ?\n\nNo other way - you need to explicitly create them. It's not that hard \neither to write a query to search the system catalogs for unindexed FK's.\n\nChris\n\n", "msg_date": "Tue, 05 Jul 2005 09:13:30 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, 
{ "msg_contents": "On Mon, Jul 04, 2005 at 20:29:50 -0400,\n David Gagnon <[email protected]> wrote:\n> Thanks .. I miss that FK don't create indexed ... since Primary key \n> implicitly does ...\n> \n> I'm a bit surprised of that behavior thought, since it means that if we \n> delete a row from table A all tables (B,C,D) with FK pointing to this \n> table (A) must be scanned. 
\n\nBut in some applications you don't ever do that, so you don't save\nanything by having the index for deletes but have to pay the cost to\nupdate it when modifying the referencing table.\n\nIf you think an index will help in your case, just create one.\n", "msg_date": "Tue, 5 Jul 2005 07:32:17 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, { "msg_contents": "On Mon, 4 Jul 2005, David Gagnon wrote:\n\n> Thanks .. I miss that FK don't create indexed ... since Primary key\n> implicitly does ...\n>\n> I'm a bit surprised of that behavior thought, since it means that if we\n> delete a row from table A all tables (B,C,D) with FK pointing to this\n> table (A) must be scanned.\n> If there is no index on those tables it means we gone do all Sequantial\n> scans. Than can cause significant performance problem!!!.\n>\n> Is there a reason why implicit index aren't created when FK are\n> declared. I looked into the documentation and I haven't found a way to\n\nThe reason is that it's not always useful to have an index for that\npurpose. You could either have low selectivity (in which case the index\nwouldn't be used) or low/batch changes to the referenced table (in which\ncase the cost of maintaining the index may be greater than the value of\nhaving the index) or other such cases. In primary key and unique, we\ncurrently have no choice but to make an index because that's how the\nconstraint is currently implemented.\n\n> tell postgresql to automatically create an index when creating la FK.\n> Does it means I need to manage it EXPLICITLY with create index statement\n> ?\n\nYeah.\n\n>Is there another way ?\n\nNot that I can think of without changing the source.\n", "msg_date": "Tue, 5 Jul 2005 07:02:07 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, { "msg_contents": "David Gagnon <[email protected]> writes:\n> explain analyse SELECT IRNUM FROM IR\n> INNER JOIN IT ON IT.ITIRNUM = ANY ('{1000, 2000}') AND \n> IT.ITYPNUM = 'M' AND IR.IRYPNUM = IT.ITYPNUM AND IR.IRNUM = IT.ITIRNUM \n> WHERE IRNUM = ANY ('{1000, 2000}') and IRYPNUM = 'M'\n\nThose =ANY constructs are not currently optimizable at all. You might\nget better results with \"IT.ITIRNUM IN (1000, 2000)\" etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Jul 2005 12:09:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX . " }, { "msg_contents": "Tom Lane wrote:\n\n>David Gagnon <[email protected]> writes:\n> \n>\n>> explain analyse SELECT IRNUM FROM IR\n>> INNER JOIN IT ON IT.ITIRNUM = ANY ('{1000, 2000}') AND \n>>IT.ITYPNUM = 'M' AND IR.IRYPNUM = IT.ITYPNUM AND IR.IRNUM = IT.ITIRNUM \n>> WHERE IRNUM = ANY ('{1000, 2000}') and IRYPNUM = 'M'\n>> \n>>\n>\n>Those =ANY constructs are not currently optimizable at all. You might\n>get better results with \"IT.ITIRNUM IN (1000, 2000)\" etc.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\nI already tried this construct. But the statement comes from a stored \nprocedure where the {1000, 2000} is an array variable (requestIds). I \ntried to use\n\nIT.ITIRNUM IN (requestIds) or several other variant without success.\n\nIs there a way to make it work? Here is the statement the statement from the store procedure. 
Remenber requestIds is an array of int.\n\n\nFOR inventoryTransaction IN\n SELECT DISTINCT IRNUM, IRAENUM, IRSTATUT, IRSENS, IRSOURCE, \nIRDATE, IRQTE\n FROM IR\n WHERE IRNUM = ANY (requestIds) and IRYPNUM = companyId\n LOOP\n\nThank for your help !!!!\n/David\n", "msg_date": "Tue, 05 Jul 2005 13:53:59 -0400", "msg_from": "David Gagnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." }, { "msg_contents": "* David Gagnon <[email protected]> wrote:\n\n> FOR inventoryTransaction IN\n> SELECT DISTINCT IRNUM, IRAENUM, IRSTATUT, IRSENS, IRSOURCE, \n> IRDATE, IRQTE\n> FROM IR\n> WHERE IRNUM = ANY (requestIds) and IRYPNUM = companyId\n> LOOP\n\nhmm. you probably could create the query dynamically and \nthen execute it. \n\n\nBTW: why isn't IN not usable with arrays ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Fri, 8 Jul 2005 17:36:38 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the planner is not using the INDEX ." } ]
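A concrete sketch of the advice in the thread above, written as SQL. The index name is invented and the column list should match the actual foreign key on IT; the loop rewrite combines Tom Lane's IN-list suggestion with Enrico's dynamic-query idea and is only an illustration, not code from the original posts.

-- Foreign keys do not create an index on the referencing table, so add one explicitly:
CREATE INDEX it_ir_fk_idx ON it (itypnum, itirnum);
ANALYZE it;

-- Inside the plpgsql function, the loop header could build the IN list at run time:
FOR inventoryTransaction IN EXECUTE
    'SELECT DISTINCT IRNUM, IRAENUM, IRSTATUT, IRSENS, IRSOURCE, IRDATE, IRQTE
       FROM IR
      WHERE IRYPNUM = ' || quote_literal(companyId) || '
        AND IRNUM IN (' || array_to_string(requestIds, ',') || ')'
LOOP
    -- existing loop body unchanged
END LOOP;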
[ { "msg_contents": "Hi,\n\nI've just been referred here after a conversion on IRC and everybody\nseemed to think I've stumbled upon some strangeness.\n\nThe planner (in PG version 8.0.2) is choosing what it thinks is a more\nexpensive plan. I've got a table of animals (about 3M rows) and their\nmovements (about 16M rows), and I'm trying to execute this query:\n\n SELECT a.birthlocnid, m.locnid\n FROM animals a\n LEFT JOIN movements m ON (a.animalid = m.animalid AND m.mtypeid=0)\n LIMIT 10;\n\nIf I have \"work_mem\" set to something small (1000) it uses this plan:\n\n QUERY PLAN\n\n Limit (cost=0.00..202.52 rows=10 width=8) (actual time=0.221..0.600 rows=10 loops=1)\n -> Merge Left Join (cost=0.00..66888828.30 rows=3302780 width=8) (actual time=0.211..0.576 rows=10 loops=1)\n Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n -> Index Scan using animals_pkey on animals a (cost=0.00..10198983.91 rows=3302780 width=8) (actual time=0.112..0.276 rows=10 loops=1)\n -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.088..0.235 rows=10 loops=1)\n Filter: (mtypeid = 0)\n Total runtime: 0.413 ms\n\nBut if I increase \"work_mem\" to 10000 it uses this plan:\n\n QUERY PLAN\n\n Limit (cost=565969.42..566141.09 rows=10 width=8) (actual time=27769.047..27769.246 rows=10 loops=1)\n -> Merge Right Join (cost=565969.42..57264070.77 rows=3302780 width=8) (actual time=27769.043..27769.228 rows=10 loops=1)\n Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.022..0.154 rows=10 loops=1)\n Filter: (mtypeid = 0)\n -> Sort (cost=565969.42..574226.37 rows=3302780 width=8) (actual time=27768.991..27769.001 rows=10 loops=1)\n Sort Key: a.animalid\n -> Seq Scan on animals a (cost=0.00..77086.80 rows=3302780 width=8) (actual time=0.039..5620.651 rows=3303418 loops=1)\n Total runtime: 27851.097 ms\n\n\nI've tried playing with the statistics as people suggested on IRC but to\nno effect. 
There was some discussion about why it would be doing this,\nbut nothing obvious came out of it.\n\nSHOW ALL output is at the end of this mail but it should be pretty\nstandard apart from:\n\n shared_buffers = 10000\n work_mem = 8192\n max_connections = 100\n effective_cache_size = 10000\n\nHope that's enough information to be useful.\n\nThanks.\n\n Sam\n\n\n name | setting \n--------------------------------+--------------------------------\n add_missing_from | on\n archive_command | /home/postgres/pgarchive \"%p\"\n australian_timezones | off\n authentication_timeout | 60\n bgwriter_delay | 200\n bgwriter_maxpages | 100\n bgwriter_percent | 1\n block_size | 8192\n check_function_bodies | on\n checkpoint_segments | 3\n checkpoint_timeout | 300\n checkpoint_warning | 30\n client_encoding | SQL_ASCII\n client_min_messages | notice\n commit_delay | 0\n commit_siblings | 5\n config_file | /home/pgdata/postgresql.conf\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n custom_variable_classes | unset\n data_directory | /home/pgdata\n DateStyle | ISO, MDY\n db_user_namespace | off\n deadlock_timeout | 1000\n debug_pretty_print | off\n debug_print_parse | off\n debug_print_plan | off\n debug_print_rewritten | off\n debug_shared_buffers | 0\n default_statistics_target | 10\n default_tablespace | unset\n default_transaction_isolation | read committed\n default_transaction_read_only | off\n default_with_oids | on\n dynamic_library_path | $libdir\n effective_cache_size | 10000\n enable_hashagg | on\n enable_hashjoin | on\n enable_indexscan | on\n enable_mergejoin | on\n enable_nestloop | on\n enable_seqscan | off\n enable_sort | on\n enable_tidscan | on\n explain_pretty_print | on\n external_pid_file | unset\n extra_float_digits | 0\n from_collapse_limit | 8\n fsync | on\n geqo | on\n geqo_effort | 5\n geqo_generations | 0\n geqo_pool_size | 0\n geqo_selection_bias | 2\n geqo_threshold | 12\n hba_file | /home/pgdata/pg_hba.conf\n ident_file | /home/pgdata/pg_ident.conf\n integer_datetimes | off\n join_collapse_limit | 8\n krb_server_keyfile | unset\n lc_collate | C\n lc_ctype | C\n lc_messages | C\n lc_monetary | C\n lc_numeric | C\n lc_time | C\n listen_addresses | *\n log_connections | on\n log_destination | stderr\n log_directory | pg_log\n log_disconnections | off\n log_duration | off\n log_error_verbosity | default\n log_executor_stats | off\n log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n log_hostname | off\n log_line_prefix | %t %u \n log_min_duration_statement | -1\n log_min_error_statement | panic\n log_min_messages | notice\n log_parser_stats | off\n log_planner_stats | off\n log_rotation_age | 1440\n log_rotation_size | 10240\n log_statement | all\n log_statement_stats | off\n log_truncate_on_rotation | off\n maintenance_work_mem | 256000\n max_connections | 100\n max_files_per_process | 1000\n max_fsm_pages | 20000\n max_fsm_relations | 1000\n max_function_args | 32\n max_identifier_length | 63\n max_index_keys | 32\n max_locks_per_transaction | 64\n max_stack_depth | 2048\n password_encryption | on\n port | 5432\n pre_auth_delay | 0\n preload_libraries | unset\n random_page_cost | 4\n redirect_stderr | off\n regex_flavor | advanced\n rendezvous_name | unset\n search_path | $user,public\n server_encoding | SQL_ASCII\n server_version | 8.0.2\n shared_buffers | 1000\n silent_mode | off\n sql_inheritance | on\n ssl | off\n statement_timeout | 0\n stats_block_level | off\n stats_command_string | off\n stats_reset_on_server_start | on\n stats_row_level | 
off\n stats_start_collector | on\n superuser_reserved_connections | 2\n syslog_facility | LOCAL0\n syslog_ident | postgres\n TimeZone | GMT\n trace_notify | off\n transaction_isolation | read committed\n transaction_read_only | off\n transform_null_equals | off\n unix_socket_directory | unset\n unix_socket_group | unset\n unix_socket_permissions | 511\n vacuum_cost_delay | 0\n vacuum_cost_limit | 200\n vacuum_cost_page_dirty | 20\n vacuum_cost_page_hit | 1\n vacuum_cost_page_miss | 10\n wal_buffers | 8\n wal_sync_method | fdatasync\n work_mem | 128000\n zero_damaged_pages | off\n\n", "msg_date": "Fri, 1 Jul 2005 14:33:05 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": true, "msg_subject": "planner picking more expensive plan" }, { "msg_contents": "Sam Mason <[email protected]> writes:\n> The planner (in PG version 8.0.2) is choosing what it thinks is a more\n> expensive plan.\n\nI fooled around trying to duplicate this behavior, without success.\nCan you create a self-contained test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Jul 2005 10:22:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner picking more expensive plan " }, { "msg_contents": "Tom Lane wrote:\n>I fooled around trying to duplicate this behavior, without success.\n>Can you create a self-contained test case?\n\nI'll try and see if I can put something together, it's probably\ngoing to be early next week though. I wont be able to give you our\ndata, so I'll be a bit of a headscratching exercise generating\nsomething that'll provoke the same behaviour.\n\nNot sure if it'll help, but here's what the database schema looks\nlike at the moment:\n\n Table \"public.animals\"\n Column | Type | Modifiers \n-------------+-----------------------+-----------\n animalid | integer | not null\n sex | character(1) | not null\n dob | date | not null\n birthlocnid | integer | \n breedid | character varying(8) | \n eartag_1 | character varying(20) | \n eartag_2 | character varying(20) | \n eartag_3 | character varying(20) | \nIndexes:\n \"animals_pkey\" primary key, btree (animalid)\n \"animal_birthlocn\" btree (birthlocnid)\n \"animal_breed\" btree (breedid)\n \"animal_eartag\" btree (eartag_1)\nCheck constraints:\n \"animal_sex\" CHECK (sex = 'M'::bpchar OR sex = 'F'::bpchar)\n\n Table \"public.movements\"\n Column | Type | Modifiers \n----------+---------+-----------\n locnid | integer | not null\n animalid | integer | not null\n movedate | date | not null\n mtypeid | integer | not null\nIndexes:\n \"movement_animal\" btree (animalid)\n \"movement_location\" btree (locnid)\n \"movement_movedate\" btree (movedate)\n \"movement_movetype\" btree (mtypeid)\nForeign-key constraints:\n \"movement_location\" FOREIGN KEY (locnid) REFERENCES locations(locnid)\n \"movement_animal\" FOREIGN KEY (animalid) REFERENCES animals(animalid)\n \"movement_type\" FOREIGN KEY (mtypeid) REFERENCES k_movement_type(mtypeid)\n\n Table \"public.locations\"\n Column | Type | Modifiers \n--------+-----------------------+-----------\n locnid | integer | not null\n ptype | character varying(8) | \n ltype | character varying(8) | not null\n cph | character varying(20) | \n unk | integer | \nIndexes:\n \"locations_pkey\" primary key, btree (locnid)\n \"location_cph\" btree (cph)\n \"location_ltype\" btree (ltype)\n \"location_ptype\" btree (ptype)\nForeign-key constraints:\n \"location_ptype\" FOREIGN KEY (ptype) REFERENCES k_premise_type(ptypeid)\n \"location_ltype\" FOREIGN KEY (ltype) 
REFERENCES k_location_type(ltypeid)\n\nAs I said, animals contains about 3M rows, movements about 16M rows\nand locations about 80K rows. There are about 3 to 8 rows for each\nand every animal in the movements table, with at most one entry of\nmtypeid=0 for each animal (95% of the animals have an entry).\n\nNot sure if that's going to help making some demo data. It's just\nthat it took quite a while loading it all here, so coming up with\nsome code to make demo data may take a while.\n\n\nThanks!\n\n Sam\n", "msg_date": "Fri, 1 Jul 2005 15:58:48 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner picking more expensive plan" }, { "msg_contents": "Sam Mason wrote:\n\n>Hi,\n>\n>I've just been referred here after a conversion on IRC and everybody\n>seemed to think I've stumbled upon some strangeness.\n>\n>The planner (in PG version 8.0.2) is choosing what it thinks is a more\n>expensive plan. I've got a table of animals (about 3M rows) and their\n>movements (about 16M rows), and I'm trying to execute this query:\n>\n> SELECT a.birthlocnid, m.locnid\n> FROM animals a\n> LEFT JOIN movements m ON (a.animalid = m.animalid AND m.mtypeid=0)\n> LIMIT 10;\n>\n>\n>\nWhy are you using LIMIT without having an ORDER BY?\nWhat are actually trying to get out of this query? Is it just trying to\ndetermine where the 'home' locations are?\nIt just seems like this query isn't very useful. As it doesn't restrict\nby animal id, and it just gets 10 randomly selected animals where\nm.mtypeid=0.\nAnd why a LEFT JOIN instead of a normal join?\nAnyway, the general constraints you are applying seem kind of confusing.\nWhat happens if you change the plan to:\n\n SELECT a.birthlocnid, m.locnid\n FROM animals a\n LEFT JOIN movements m ON (a.animalid = m.animalid AND m.mtypeid=0)\n ORDER BY a.animalid LIMIT 10;\n\n\nI would guess that this would help the planner realize it should try to\nuse an index, since it can realize that it wants only a few rows by\na.animalid in order.\nThough I also recognize that you aren't returning a.animalid so you\ndon't really know which animals you are returning.\n\nI get the feeling you are trying to ask something like \"do animals stay\nat their birth location\", or at least \"how are animals moving around\". 
I\ndon't know what m.typeid = 0 means, but I'm guessing it is something\nlike where their home is.\n\nAnyway, I would say you need to put a little bit more restriction in, so\nthe planner can figure out how to get only 10 rows.\n\nJohn\n=:->\n\n>If I have \"work_mem\" set to something small (1000) it uses this plan:\n>\n> QUERY PLAN\n>\n> Limit (cost=0.00..202.52 rows=10 width=8) (actual time=0.221..0.600 rows=10 loops=1)\n> -> Merge Left Join (cost=0.00..66888828.30 rows=3302780 width=8) (actual time=0.211..0.576 rows=10 loops=1)\n> Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n> -> Index Scan using animals_pkey on animals a (cost=0.00..10198983.91 rows=3302780 width=8) (actual time=0.112..0.276 rows=10 loops=1)\n> -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.088..0.235 rows=10 loops=1)\n> Filter: (mtypeid = 0)\n> Total runtime: 0.413 ms\n>\n>But if I increase \"work_mem\" to 10000 it uses this plan:\n>\n> QUERY PLAN\n>\n> Limit (cost=565969.42..566141.09 rows=10 width=8) (actual time=27769.047..27769.246 rows=10 loops=1)\n> -> Merge Right Join (cost=565969.42..57264070.77 rows=3302780 width=8) (actual time=27769.043..27769.228 rows=10 loops=1)\n> Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n> -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.022..0.154 rows=10 loops=1)\n> Filter: (mtypeid = 0)\n> -> Sort (cost=565969.42..574226.37 rows=3302780 width=8) (actual time=27768.991..27769.001 rows=10 loops=1)\n> Sort Key: a.animalid\n> -> Seq Scan on animals a (cost=0.00..77086.80 rows=3302780 width=8) (actual time=0.039..5620.651 rows=3303418 loops=1)\n> Total runtime: 27851.097 ms\n>\n>\n>I've tried playing with the statistics as people suggested on IRC but to\n>no effect. There was some discussion about why it would be doing this,\n>but nothing obvious came out of it.\n>\n>SHOW ALL output is at the end of this mail but it should be pretty\n>standard apart from:\n>\n> shared_buffers = 10000\n> work_mem = 8192\n> max_connections = 100\n> effective_cache_size = 10000\n>\n>Hope that's enough information to be useful.\n>\n>Thanks.\n>\n> Sam\n>", "msg_date": "Fri, 01 Jul 2005 10:17:52 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner picking more expensive plan" }, { "msg_contents": "John A Meinel wrote:\n>Why are you using LIMIT without having an ORDER BY?\n\nI'm just exploring the data, trying to figure out what it's like.\n\n>It just seems like this query isn't very useful. As it doesn't restrict\n>by animal id, and it just gets 10 randomly selected animals where\n>m.mtypeid=0.\n\nYup, that's the point. Check to see if the animals were born where\nthey say they were. The data's come from an external source and\nI'm just trying to figure out how good it is before I do too much\nwith it\n\n>And why a LEFT JOIN instead of a normal join?\n\nI'm not sure if some animals will have missing data!\n\n>Anyway, the general constraints you are applying seem kind of confusing.\n\nThis was a slightly cut down query in an attempt to reduce general\nconfusion -- I guess I failed. Sorry!\n\n>I would guess that this would help the planner realize it should try to\n>use an index, since it can realize that it wants only a few rows by\n>a.animalid in order.\n\nThis seems to work the appropiate magic. 
It always seems to prefer\nindex scans now.\n\nThe real point of asking this question orignally was to find out\nwhy the planner was choosing a more expensive plan over a cheaper\none. When I discovered this orignally I was disabling seqscan and\nthen it picked the correct version. The actual work_mem didn't\nchange when I did this, it just picked the correct plan. I discovered\nthe work_mem parameter fiddle later. I think I forgot to mention\nthat in the original email though!\n\n\n Sam\n", "msg_date": "Fri, 1 Jul 2005 16:58:30 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner picking more expensive plan" }, { "msg_contents": "On Fri, 1 Jul 2005, Sam Mason wrote:\n\nThe key thing with the query that Sam have is that if you turn off seqscan\nyou get the first plan that run in 0.4ms and if seqscan is on the runtime\nis 27851ms.\n\nThere are 100 way to make it select the seq scan, including rewriting the \nquery to something more useful, tweaking different parameters and so on. \n\nThe interesting part is that pg give the fast plan a cost of 202 and the\nslow a cost of 566141, but still it chooses the slow query unless seqscan\nis turned off (or some other tweak with the same effect). It know very\nwell that the plan with the index scan will be much faster, it just don't\nmanage to generate it unless you force it to.\n\nIt makes you wonder if pg throws away some plans too early in the planning\nphase.\n\n> Limit (cost=0.00..202.52 rows=10 width=8) (actual time=0.221..0.600 rows=10 loops=1)\n> -> Merge Left Join (cost=0.00..66888828.30 rows=3302780 width=8) (actual time=0.211..0.576 rows=10 loops=1)\n> Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n> -> Index Scan using animals_pkey on animals a (cost=0.00..10198983.91 rows=3302780 width=8) (actual time=0.112..0.276 rows=10 loops=1)\n> -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.088..0.235 rows=10 loops=1)\n> Filter: (mtypeid = 0)\n> Total runtime: 0.413 ms\n> \n> Limit (cost=565969.42..566141.09 rows=10 width=8) (actual time=27769.047..27769.246 rows=10 loops=1)\n> -> Merge Right Join (cost=565969.42..57264070.77 rows=3302780 width=8) (actual time=27769.043..27769.228 rows=10 loops=1)\n> Merge Cond: (\"outer\".animalid = \"inner\".animalid)\n> -> Index Scan using movement_animal on movements m (cost=0.00..56642740.73 rows=3107737 width=8) (actual time=0.022..0.154 rows=10 loops=1)\n> Filter: (mtypeid = 0)\n> -> Sort (cost=565969.42..574226.37 rows=3302780 width=8) (actual time=27768.991..27769.001 rows=10 loops=1)\n> Sort Key: a.animalid\n> -> Seq Scan on animals a (cost=0.00..77086.80 rows=3302780 width=8) (actual time=0.039..5620.651 rows=3303418 loops=1)\n> Total runtime: 27851.097 ms\n\n\nAnother thing to notice is that if one remove the Limit node then the\nsituation is reversed and the plan that pg choose (with the Limit node) is\nthe one with the lowest cost. The startup cost is however very high so \ncombining that Merge Join with a Limit will of course produce something \nslow compared to the upper plan where the startup cost is 0.0.\n\nA stand alone test case would be nice, but even without the above plans \nare interesting.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Sat, 2 Jul 2005 07:24:26 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner picking more expensive plan" } ]
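For reference, the comparison discussed in this thread can be reproduced in a single session without editing postgresql.conf. This is only a sketch; the ORDER BY is John's suggestion and is what lets the planner pick the cheap index-driven plan deliberately rather than by accident:

BEGIN;
SET LOCAL enable_seqscan = off;  -- for comparison only, not a production setting
EXPLAIN ANALYZE
SELECT a.birthlocnid, m.locnid
  FROM animals a
  LEFT JOIN movements m ON (a.animalid = m.animalid AND m.mtypeid = 0)
 ORDER BY a.animalid
 LIMIT 10;
ROLLBACK;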
[ { "msg_contents": "\nI'm working with an application where the database is entirely resident in RAM \n(the server is a quad opteron with 16GBytes of memory). It's a web \napplication and handles a high volume of queries. The planner seems to be \ngenerating poor plans for some of our queries which I can fix by raising \ncpu_tuple_cost. I have seen some other comments in the archives saying that \nthis is a bad idea but is that necessarily the case when the database is \nentirely resident in RAM?\n\nEmil\n", "msg_date": "Fri, 1 Jul 2005 21:59:38 -0400", "msg_from": "Emil Briggs <[email protected]>", "msg_from_op": true, "msg_subject": "Planner constants for RAM resident databases" }, { "msg_contents": "On Fri, Jul 01, 2005 at 09:59:38PM -0400, Emil Briggs wrote:\n\n> I'm working with an application where the database is entirely resident in RAM \n> (the server is a quad opteron with 16GBytes of memory). It's a web \n> application and handles a high volume of queries. The planner seems to be \n> generating poor plans for some of our queries which I can fix by raising \n> cpu_tuple_cost. I have seen some other comments in the archives saying that \n> this is a bad idea but is that necessarily the case when the database is \n> entirely resident in RAM?\n\nIf I'm understanding correctly that'll mostly increase the estimated\ncost of handling a row relative to a sequential page fetch, which\nsure sounds like it'll push plans in the right direction, but it\ndoesn't sound like the right knob to twiddle.\n\nWhat do you have random_page_cost set to?\n\nCheers,\n Steve\n", "msg_date": "Fri, 1 Jul 2005 19:08:23 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner constants for RAM resident databases" }, { "msg_contents": "Emil Briggs wrote:\n\n>I'm working with an application where the database is entirely resident in RAM\n>(the server is a quad opteron with 16GBytes of memory). It's a web\n>application and handles a high volume of queries. The planner seems to be\n>generating poor plans for some of our queries which I can fix by raising\n>cpu_tuple_cost. I have seen some other comments in the archives saying that\n>this is a bad idea but is that necessarily the case when the database is\n>entirely resident in RAM?\n>\n>Emil\n>\n>\n>\n\nGenerally, the key knob to twiddle when everything fits in RAM is\nrandom_page_cost. If you truly have everything in RAM you could set it\nalmost to 1. 1 means that it costs exactly the same to go randomly\nthrough the data then it does to go sequential. I would guess that even\nin RAM it is faster to go sequential (since you still have to page and\ndeal with L1/L2/L3 cache, etc). But the default random_page_cost of 4 is\nprobably too high for you.\n\nJohn\n=:->", "msg_date": "Fri, 01 Jul 2005 21:40:49 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner constants for RAM resident databases" }, { "msg_contents": "Emil Briggs wrote:\n\n>>I just mentioned random_page_cost, but you should also tune\n>>effective_cache_size, since that is effectively most of your RAM. It\n>>depends what else is going on in the system, but setting it as high as\n>>say 12-14GB is probably reasonable if it is a dedicated machine. With\n>>random_page_cost 1.5-2, and higher effective_cache_size, you should be\n>>doing pretty well.\n>>John\n>>=:->\n>>\n>>\n>\n>I tried playing around with these and they had no effect. 
It seems the only\n>thing that makes a difference is cpu_tuple_cost.\n>\n>\n>\nI'm surprised. I know cpu_tuple_cost can effect it as well, but usually\nthe recommended way to get indexed scans is the above two parameters.\n\nWhen you do \"explain analyze\" of a query that you have difficulties\nwith, how are the planner's estimates. Are the estimated number of rows\nabout equal to the actual number of rows?\nIf the planner is mis-estimating, there is a whole different set of\ntuning to do to help it estimate correctly.\n\nJohn\n=:->\n\nPS> Use reply-all so that your comments go to the list.", "msg_date": "Fri, 01 Jul 2005 22:14:50 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner constants for RAM resident databases" }, { "msg_contents": "> When you do \"explain analyze\" of a query that you have difficulties\n> with, how are the planner's estimates. Are the estimated number of rows\n> about equal to the actual number of rows?\n\nSome of them are pretty far off. For example\n\n -> Merge Left Join (cost=9707.71..13993.52 rows=1276 width=161) (actual \ntime=164.423..361.477 rows=49 loops=1)\n\nI tried setting enable_merge_joins to off and that made the query about three \ntimes faster. It's using a hash join instead.\n", "msg_date": "Sat, 2 Jul 2005 09:44:07 -0400", "msg_from": "Emil Briggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner constants for RAM resident databases" }, { "msg_contents": "Emil,\n\n> -> Merge Left Join (cost=9707.71..13993.52 rows=1276 width=161)\n> (actual time=164.423..361.477 rows=49 loops=1)\n\nThat would indicate that you need to either increase your statistical \nsampling (SET STATISTICS) or your frequency of running ANALYZE, or both.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 5 Jul 2005 14:33:28 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner constants for RAM resident databases" } ]
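The knobs discussed in this thread, written out as a session-level sketch for a machine whose working set fits in RAM. The numbers are illustrative only, the table and column names are placeholders, and in this era of PostgreSQL effective_cache_size is expressed in 8 kB pages:

SET random_page_cost = 1.5;          -- random and sequential fetches cost about the same in RAM
SET effective_cache_size = 1572864;  -- roughly 12 GB expressed as 8 kB pages
-- For the column whose row estimate was far off, raise the statistics target and re-analyze:
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;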
[ { "msg_contents": "Hi all,\n\n I have gone back to my index problem from a while ago where I am \ntrying to do an update with a regex on the WHERE column. If I specifiy a \nconstant the index is used so that much I know is working.\n\n I've been reading the 7.4 docs and I saw that a B-Tree index *should* \nbut used when the regex is anchored to the start. This is from 11.2 of \nthe docs; It says \"The optimizer can also use a B-tree indexfor queries \ninvolving pattern matching operators LIKE, ILIKE, ~, and ~*, if, the \npattern is anchored to the beginning of the string.\" In my case that is \nwhat I will always do.\n\n Specifically, this is a backup program I am using the DB for. The \ntable I am working on stores all the file and directory information for \na given partition. When the user toggles the checkbox for a given \ndirectory (to indicate that they do or do not what that directory backed \nup) I make a call to the DB telling it to change that column to given \nstate.\n\n When the user toggle a directory I want to propgate that change to \nall sub directories and all files within those directories. The way I do \nthis is:\n\nUPDATE file_info_11 SET file_backup='t' WHERE file_parent_dir~'^/foo/bar';\n\n Which basically is just to say \"change every directory and file with \nthis parent directory and all sub directories to the new backup state\". \n From what I gather this query should have used the index. Here is what \nI am actually getting though:\n\ntle-bu=> EXPLAIN ANALYZE UPDATE file_info_11 SET file_backup='t' WHERE \nfile_parent_dir~'^/';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on file_info_11 (cost=0.00..13484.23 rows=1 width=183) \n(actual time=13.560..22040.603 rows=336039 loops=1)\n Filter: (file_parent_dir ~ '^/'::text)\n Total runtime: 514099.565 ms\n(3 rows)\n\n Now if I define a static directory the index IS used:\n\ntle-bu=> EXPLAIN ANALYZE UPDATE file_info_11 SET file_backup='t' WHERE \nfile_parent_dir='/';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using file_info_11_update_idx on file_info_11 \n(cost=0.00..109.69 rows=66 width=183) (actual time=22.828..62.020 rows=3 \nloops=1)\n Index Cond: (file_parent_dir = '/'::text)\n Total runtime: 88.334 ms\n(3 rows)\n\n Here is the table and index schemas:\n\ntle-bu=> \\d file_info_11; \\d file_info_11_update_idx;\n Table \"public.file_info_11\"\n Column | Type | Modifiers\n----------------------+----------------------+-----------------------------------------\n file_group_name | text |\n file_group_uid | bigint | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | text | not null\n file_size | bigint | not null\n file_type | character varying(2) | not null default \n'f'::character varying\n file_user_name | text |\n file_user_uid | bigint | not null\n file_backup | boolean | not null default true\n file_display | boolean | not null default false\n file_restore_display | boolean | not null default false\n file_restore | boolean | not null default false\nIndexes:\n \"file_info_11_display_idx\" btree (file_type, file_parent_dir, \nfile_name)\n \"file_info_11_update_idx\" btree (file_parent_dir)\n\nIndex \"public.file_info_11_update_idx\"\n Column | Type\n-----------------+------\n file_parent_dir | text\nbtree, for table 
\"public.file_info_11\"\n\n\n Can anyone see why the index might not be being used?\n\n I know that 'tsearch2' would probably work but it seems like way more \nthan I need (because I will never be searching the middle of a string).\n\nThanks for any advice/help/pointers!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n", "msg_date": "Sat, 02 Jul 2005 00:16:55 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "B-Tree index not being used" }, { "msg_contents": "Madison Kelly <[email protected]> writes:\n> Can anyone see why the index might not be being used?\n\nYou didn't initdb in 'C' locale. You can either re-initdb,\nor create a specialized index with a non-default operator class\nto support LIKE. See the documentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Jul 2005 09:54:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Tree index not being used " }, { "msg_contents": "Tom Lane wrote:\n> Madison Kelly <[email protected]> writes:\n> \n>> Can anyone see why the index might not be being used?\n> \n> \n> You didn't initdb in 'C' locale. You can either re-initdb,\n> or create a specialized index with a non-default operator class\n> to support LIKE. See the documentation.\n> \n> \t\t\tregards, tom lane\n\nI'll look into the non-default op class. I want to keep anything that \ntweaks the DB in my code so that a user doesn't need to modify anything \non their system.\n\nThanks!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\nMadison Kelly (Digimer)\nTLE-BU, The Linux Experience; Back Up\nhttp://tle-bu.thelinuxexperience.com\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sat, 02 Jul 2005 11:20:23 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: B-Tree index not being used" } ]
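The operator-class route Tom mentions looks roughly like this; the index name is made up, and text_pattern_ops is the non-default opclass that lets a B-tree index serve anchored LIKE/~ patterns when the database was not initdb'd in the C locale:

CREATE INDEX file_info_11_parent_dir_ptn_idx
    ON file_info_11 (file_parent_dir text_pattern_ops);
ANALYZE file_info_11;
-- An anchored pattern such as
--   UPDATE file_info_11 SET file_backup = 't' WHERE file_parent_dir ~ '^/foo/bar';
-- can then use this index.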
[ { "msg_contents": "\nHi folks,\n\n\nmy application reads and writes some table quite often\n(multiple times per second). these tables are quite small\n(not more than 20 tuples), but the operations take quite a \nlong time (>300 ms!). \n\nThe query operations are just include text matching (=) and \ndate comparison (<,>). \n\nI wasn't yet able to track down, if all these queries take \nsucha long time or just sometimes. When running them manually\nor trying explain, evrything's fast. Probably there could be\nsome side effects with other concurrent quries.\n\n\nCould anyone give me advice ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 00:45:37 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "plain inserts and deletes very slow" }, { "msg_contents": "* Enrico Weigelt <[email protected]> wrote:\n\nforgot to mention:\n\n + linux-2.6.9\n + postgres-7.4.6\n + intel celeron 2ghz\n + intel ultra ata controller\n + 768mb ram\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 00:57:07 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Jul 04, 2005 at 12:45:37AM +0200, Enrico Weigelt wrote:\n> > my application reads and writes some table quite often\n> > (multiple times per second). these tables are quite small\n> > (not more than 20 tuples), but the operations take quite a \n> > long time (>300 ms!).\n> \n> Are you VACUUMing often enough?\n\nI've just VACUUM'ed multiple times, so it's perhaps not the problem.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 00:57:59 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "On Mon, Jul 04, 2005 at 12:45:37AM +0200, Enrico Weigelt wrote:\n> my application reads and writes some table quite often\n> (multiple times per second). 
these tables are quite small\n> (not more than 20 tuples), but the operations take quite a \n> long time (>300 ms!).\n\nAre you VACUUMing often enough?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 4 Jul 2005 01:54:24 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* David Mitchell <[email protected]> wrote:\n> Did you vacuum full?\n> \n> When you do lots of inserts and deletes, dead tuples get left behind. \n> When you vacuum, postgres will reuse those dead tuples, but if you don't \n> vacuum for a long time these tuples will build up lots. Even when you \n> vacuum in this case, the dead tuples are still there, although they are \n> marked for reuse. Vacuuming full actually removes the dead tuples.\n\nI'm doing a VACUUM ANALYZE every 6 hours. \n\nvacuum'ing manually doesnt seem to have any effect on that.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 02:01:57 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* David Mitchell <[email protected]> wrote:\n> Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \n> minutes would be closer to your mark. Try vacuuming every 15 minutes for \n> a start and see how that affects things (you will have to do a vacuum \n> full to get the tables back into shape after them slowing down as they \n> have).\n\nhmm. I've just done vacuum full at the moment on these tables, but it \ndoesnt seem to change anything :(\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 02:17:47 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "Did you vacuum full?\n\nWhen you do lots of inserts and deletes, dead tuples get left behind. \nWhen you vacuum, postgres will reuse those dead tuples, but if you don't \nvacuum for a long time these tuples will build up lots. Even when you \nvacuum in this case, the dead tuples are still there, although they are \nmarked for reuse. 
Vacuuming full actually removes the dead tuples.\n\nIf you vacuum (normal) regularly, then the number of dead tuples will \nstay down, as they are regularly marked for reuse.\n\nDavid\n\nEnrico Weigelt wrote:\n> * Enrico Weigelt <[email protected]> wrote:\n> \n> forgot to mention:\n> \n> + linux-2.6.9\n> + postgres-7.4.6\n> + intel celeron 2ghz\n> + intel ultra ata controller\n> + 768mb ram\n> \n> \n> cu\n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n\nNOTICE:\nThis message (including any attachments) contains CONFIDENTIAL\nINFORMATION intended for a specific individual and purpose, and\nis protected by law. If you are not the intended recipient,\nyou should delete this message and are hereby notified that any\ndisclosure, copying, or distribution of this message, or the\ntaking of any action based on it, is strictly prohibited.\n", "msg_date": "Mon, 04 Jul 2005 12:50:36 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \nminutes would be closer to your mark. Try vacuuming every 15 minutes for \na start and see how that affects things (you will have to do a vacuum \nfull to get the tables back into shape after them slowing down as they \nhave).\n\nDavid\n\nEnrico Weigelt wrote:\n> * David Mitchell <[email protected]> wrote:\n> \n>>Did you vacuum full?\n>>\n>>When you do lots of inserts and deletes, dead tuples get left behind. \n>>When you vacuum, postgres will reuse those dead tuples, but if you don't \n>>vacuum for a long time these tuples will build up lots. Even when you \n>>vacuum in this case, the dead tuples are still there, although they are \n>>marked for reuse. Vacuuming full actually removes the dead tuples.\n> \n> \n> I'm doing a VACUUM ANALYZE every 6 hours. \n> \n> vacuum'ing manually doesnt seem to have any effect on that.\n> \n> \n> cu\n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n\nNOTICE:\nThis message (including any attachments) contains CONFIDENTIAL\nINFORMATION intended for a specific individual and purpose, and\nis protected by law. If you are not the intended recipient,\nyou should delete this message and are hereby notified that any\ndisclosure, copying, or distribution of this message, or the\ntaking of any action based on it, is strictly prohibited.\n", "msg_date": "Mon, 04 Jul 2005 13:12:55 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "On Mon, Jul 04, 2005 at 02:17:47AM +0200, Enrico Weigelt wrote:\n> * David Mitchell <[email protected]> wrote:\n> > Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \n> > minutes would be closer to your mark. Try vacuuming every 15 minutes for \n> > a start and see how that affects things (you will have to do a vacuum \n> > full to get the tables back into shape after them slowing down as they \n> > have).\n> \n> hmm. I've just done vacuum full at the moment on these tables, but it \n> doesnt seem to change anything :(\n\nMaybe you need a REINDEX, if you have indexes on that table. 
Try that,\ncoupled with the frequent VACUUM suggestion.\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"World domination is proceeding according to plan\" (Andrew Morton)\n", "msg_date": "Sun, 3 Jul 2005 22:57:09 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "Hmm, you said you don't experience this when executing the query \nmanually. What adapter are you using to access postgres from your \napplication? libpq, npgsql or something else? And what is your method \nfor running the query 'manually'. Are you running it locally or from a \nremote machine or what?\n\nRegards\n\nDavid\n\nEnrico Weigelt wrote:\n> * David Mitchell <[email protected]> wrote:\n> \n>>Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \n>>minutes would be closer to your mark. Try vacuuming every 15 minutes for \n>>a start and see how that affects things (you will have to do a vacuum \n>>full to get the tables back into shape after them slowing down as they \n>>have).\n> \n> \n> hmm. I've just done vacuum full at the moment on these tables, but it \n> doesnt seem to change anything :(\n> \n> \n> cu\n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Mon, 04 Jul 2005 15:51:27 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* Alvaro Herrera <[email protected]> wrote:\n> On Mon, Jul 04, 2005 at 02:17:47AM +0200, Enrico Weigelt wrote:\n> > * David Mitchell <[email protected]> wrote:\n> > > Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \n> > > minutes would be closer to your mark. Try vacuuming every 15 minutes for \n> > > a start and see how that affects things (you will have to do a vacuum \n> > > full to get the tables back into shape after them slowing down as they \n> > > have).\n> > \n> > hmm. I've just done vacuum full at the moment on these tables, but it \n> > doesnt seem to change anything :(\n> \n> Maybe you need a REINDEX, if you have indexes on that table. Try that,\n> coupled with the frequent VACUUM suggestion.\n\nI've tried it, but it doesn't seem to help :(\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 10:57:29 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* David Mitchell <[email protected]> wrote:\n\nHi,\n\n> Hmm, you said you don't experience this when executing the query \n> manually. What adapter are you using to access postgres from your \n> application? libpq, npgsql or something else? \n\nhuh, its a delphi application ... (I didnt code it).\n\n> And what is your method for running the query 'manually'. 
Are you \n> running it locally or from a remote machine or what?\nusing psql remotely - database and client machines are sitting \non the same wire.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Mon, 4 Jul 2005 10:59:03 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "On Mon, Jul 04, 2005 at 10:57:29AM +0200, Enrico Weigelt wrote:\n> * Alvaro Herrera <[email protected]> wrote:\n> > On Mon, Jul 04, 2005 at 02:17:47AM +0200, Enrico Weigelt wrote:\n> > > * David Mitchell <[email protected]> wrote:\n> > > > Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 \n> > > > minutes would be closer to your mark. Try vacuuming every 15 minutes for \n> > > > a start and see how that affects things (you will have to do a vacuum \n> > > > full to get the tables back into shape after them slowing down as they \n> > > > have).\n> > > \n> > > hmm. I've just done vacuum full at the moment on these tables, but it \n> > > doesnt seem to change anything :(\n> > \n> > Maybe you need a REINDEX, if you have indexes on that table. Try that,\n> > coupled with the frequent VACUUM suggestion.\n> \n> I've tried it, but it doesn't seem to help :(\n\nSo, lets back up a little. You have no table nor index bloat, because\nyou reindexed and full-vacuumed. So where does the slowness come from?\nCan you post an example EXPLAIN ANALYZE of the queries in question?\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"El realista sabe lo que quiere; el idealista quiere lo que sabe\" (An�nimo)\n", "msg_date": "Mon, 4 Jul 2005 10:27:55 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "On Mon, 4 Jul 2005 10:59:03 +0200, Enrico Weigelt <[email protected]> wrote:\n> * David Mitchell <[email protected]> wrote:\n> \n> Hi,\n> \n> > Hmm, you said you don't experience this when executing the query \n> > manually. What adapter are you using to access postgres from your \n> > application? libpq, npgsql or something else? \n> \n> huh, its a delphi application ... (I didnt code it).\n\nTurn on statement logging. I've seen delphi interfaces do extra queries\non system tables to find some structure information.\n\nThe available interfaces for delphi that I know of are vitavoom's\ndbexpress (you should be able to find dbexppge.dll), zeos (you'll have\nto grep the executable), ODBC using ADO or bde, Or dot net.\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. 
:\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Tue, 05 Jul 2005 09:36:02 +1000", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plain inserts and deletes very slow" }, { "msg_contents": "* Klint Gore <[email protected]> wrote:\n\n<snip>\n\n> Turn on statement logging. I've seen delphi interfaces do extra queries\n> on system tables to find some structure information.\n\nI'm already using statement logging of all queries taking longer\nthan 200ms. It seems that only the INSERTs take that long. \n\nThe client is in fact written in Delphi, and it sometimes seems \nto do strange things. For example, we noticed that some\nnew fields in one table were regularly NULL'ed. None of the \ntriggers and rules inside the DB could do that (since there's \nno dynamic query stuff) and the Delphi application is the only \none writing directly to this table.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n", "msg_date": "Fri, 8 Jul 2005 15:46:39 +0200", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plain inserts and deletes very slow" } ]
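
A short sketch of the maintenance this thread converges on (vacuuming the hot tables every few minutes instead of every six hours, plus an occasional index rebuild), written for the PostgreSQL 7.4.x setup described above, where autovacuum is not built in and a cron job has to drive it. The table name hot_table, the database name mydb and the schedule are placeholders; the thread never names the real objects.

    -- run frequently, e.g. from cron every few minutes:
    --   */5 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_table;'
    VACUUM ANALYZE hot_table;   -- marks dead tuples for reuse and refreshes planner stats

    -- occasionally rebuild the indexes, as suggested in the thread; on 7.4 a
    -- heavily churned index can stay bloated even after VACUUM FULL
    REINDEX TABLE hot_table;

    -- quick bloat check: for a table of ~20 live tuples, relpages should stay tiny
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'hot_table';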
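
The diagnostics requested later in the thread can be gathered along these lines. log_min_duration_statement exists in 7.4 and matches the 200 ms threshold already in use, and since EXPLAIN ANALYZE really executes the statement, the INSERT is wrapped in a transaction and rolled back. The statement, column names and values are again only placeholders.

    # postgresql.conf (7.4): log any statement slower than 200 ms together with its runtime
    log_min_duration_statement = 200

    -- time one slow statement server-side without leaving a test row behind:
    BEGIN;
    EXPLAIN ANALYZE
        INSERT INTO hot_table (name, valid_from) VALUES ('test', now());
    ROLLBACK;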