[ { "msg_contents": "Heya Guys n Gals,\n\nHaving been following the thread on \"go for a script! / ex: PostgreSQL vs.\nMySQL\". I thought I would throw something together in Perl. My current issue\nis that I only have access to a RH Linux box and so cannot make it\ncross-platform on my own :-(. Anyhow please find it attached. It runs fine\non my box, it doesnt actually write to postgresql.conf because I didnt want\nto mess it up, it does however write to postgresql.conf.new for the moment.\nThe diffs seem to be writing correctly. There are a set of parameters at the\ntop which may need to get tweaked for your platform. I can also carry on\nposting to this list new versions if people want. Clearly this lot is open\nsource, so please feel free to play with it and post patches/new features\nback either to the list or my email directly. In case you cant see my email\naddress, it is nicky at the domain below.\n\n I will also post it on me website and as I develop it further new versions\nwill appear there\n\nhttp://www.chuckie.co.uk/postgresql/pg_autoconfig.pl\n\nIs this a useful start?\n\n\nNick", "msg_date": "Fri, 10 Oct 2003 13:35:53 +0100", "msg_from": "\"Nick Barr\" <[email protected]>", "msg_from_op": true, "msg_subject": "go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Nick,\n\n> Having been following the thread on \"go for a script! / ex: PostgreSQL vs.\n> MySQL\". I thought I would throw something together in Perl. \n\nCool! Would you be willing to work with me so that I can inject some of my \nknowledge of .conf tuning?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 10:34:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Josh Berkus wrote:\n\n>Nick,\n>\n> \n>\n>>Having been following the thread on \"go for a script! / ex: PostgreSQL vs.\n>>MySQL\". I thought I would throw something together in Perl. \n>> \n>>\n>\n>Cool! Would you be willing to work with me so that I can inject some of my \n>knowledge of .conf tuning?\n>\n> \n>\nSounds good to me. I will carry on working on it but I would definitely \nneed some help, or at least a list of parameters to tweak, and some \nrecomended values based on data about the puter in question.\n\nSo far:\n\nshared_buffers = 1/16th of total memory\neffective_cache_size = 80% of the supposed kernel cache.\n\nI guess we also may be able to offer a simple and advanced mode. Simple \nmode would work on these recomended values, but kick it into advanced \nmode and the user can tweak things more finely. This would only be \nrecomended for the Guru's out there of course. This may take a bit more \ntime to do though.\n\nAs I said in the previous email I have only got access to Linux, so \ncross-platform help would be good too. I will try to make it as easy to \ndo cross platform stuff as possible of course.\n\n\nNick\n\n", "msg_date": "Fri, 10 Oct 2003 19:08:59 +0100", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": ">>>>> \"NB\" == Nick Barr <[email protected]> writes:\n\nNB> So far:\n\nNB> shared_buffers = 1/16th of total memory\nNB> effective_cache_size = 80% of the supposed kernel cache.\n\nPlease take into account the blocksize compiled into PG, too...\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. 
Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Fri, 10 Oct 2003 16:26:58 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Nick,\n\n> Sounds good to me. I will carry on working on it but I would definitely\n> need some help, or at least a list of parameters to tweak, and some\n> recomended values based on data about the puter in question.\n\n> shared_buffers = 1/16th of total memory\n> effective_cache_size = 80% of the supposed kernel cache.\n\nBut only if it's a dedicated DB machine. If it's not, all memory values \nshould be cut in half.\n\n> I guess we also may be able to offer a simple and advanced mode. Simple\n> mode would work on these recomended values, but kick it into advanced\n> mode and the user can tweak things more finely. This would only be\n> recomended for the Guru's out there of course. This may take a bit more\n> time to do though.\n\nWhat I would prefer would be an interactive script which would, by asking the \nuser simple questions and system scanning, collect all the information \nnecessary to set:\n\nmax_connections\nshared_buffers\nsort_mem\nvacuum_mem\neffective_cache_size\nrandom_page_cost\nmax_fsm_pages\ncheckpoint_segments & checkpoint_timeout\ntcp_ip\n\nand on the OS, it should set:\nshmmax & shmmall\nand should offer to create a chron job which does appropriate frequency VACUUM \nANALYZE.\n\n> As I said in the previous email I have only got access to Linux, so\n> cross-platform help would be good too. I will try to make it as easy to\n> do cross platform stuff as possible of course.\n\nLet's get it working on Linux; then we can rely on the community to port it to \nother platforms. I myself can work on the ports to Solaris and OS X.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 13:38:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Vivek,\n\n> NB> shared_buffers = 1/16th of total memory\n> NB> effective_cache_size = 80% of the supposed kernel cache.\n>\n> Please take into account the blocksize compiled into PG, too...\n\nWe can;t change the blocksize in a script that only does the .conf file. Or \nare you suggesting something else?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 13:39:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Vivek,\nNB> shared_buffers = 1/16th of total memory\nNB> effective_cache_size = 80% of the supposed kernel cache.\n>> \n>> Please take into account the blocksize compiled into PG, too...\n\nJB> We can;t change the blocksize in a script that only does the .conf\nJB> file. Or are you suggesting something else?\n\n\nwhen you compute optimal shared buffers and effective cache size,\nthese are in terms of blocksize. so if I have 16k block size, you\ncan't compute based on default 8k blocksize. at worst, it would have\nto be a parameter you pass to the tuning script.\n", "msg_date": "Fri, 10 Oct 2003 17:14:25 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! 
/ ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Vivek,\n \n> when you compute optimal shared buffers and effective cache size,\n> these are in terms of blocksize. so if I have 16k block size, you\n> can't compute based on default 8k blocksize. at worst, it would have\n> to be a parameter you pass to the tuning script.\n\nOh, yes! Thank you. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 10 Oct 2003 14:49:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "> NB> So far:\n> \n> NB> shared_buffers = 1/16th of total memory\n> NB> effective_cache_size = 80% of the supposed kernel cache.\n> \n> Please take into account the blocksize compiled into PG, too...\n\nWould anyone object to a patch that exports the blocksize via a\nreadonly GUC? Too many tunables are page dependant, which is\ninfuriating when copying configs from DB to DB. I wish pgsql had some\nnotion of percentages for values that end with a '%'. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 10 Oct 2003 15:59:24 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Sean Chittenden wrote:\n> > NB> So far:\n> > \n> > NB> shared_buffers = 1/16th of total memory\n> > NB> effective_cache_size = 80% of the supposed kernel cache.\n> > \n> > Please take into account the blocksize compiled into PG, too...\n> \n> Would anyone object to a patch that exports the blocksize via a\n> readonly GUC? Too many tunables are page dependant, which is\n> infuriating when copying configs from DB to DB. I wish pgsql had some\n> notion of percentages for values that end with a '%'. -sc\n\nMakes sense to me --- we already have some read-only GUC variables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 10 Oct 2003 21:49:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "On Fri, 2003-10-10 at 18:59, Sean Chittenden wrote:\n> > NB> So far:\n> > \n> > NB> shared_buffers = 1/16th of total memory\n> > NB> effective_cache_size = 80% of the supposed kernel cache.\n> > \n> > Please take into account the blocksize compiled into PG, too...\n> \n> Would anyone object to a patch that exports the blocksize via a\n> readonly GUC? Too many tunables are page dependant, which is\n> infuriating when copying configs from DB to DB. I wish pgsql had some\n> notion of percentages for values that end with a '%'.\n\nRather than showing the block size, how about we change the tunables to\nbe physical sizes rather than block based?\n\n effective_cache_size = 1.5GB\n shared_buffers = 25MB\n\nPercentages would be slick as well, but doing the above should fix most\nof the issue -- and be friendlier to read.", "msg_date": "Fri, 10 Oct 2003 21:55:34 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "> NB> shared_buffers = 1/16th of total memory\n> NB> effective_cache_size = 80% of the supposed kernel cache.\n\nI think Sean(?) 
mentioned this one for FreeBSD (Bash code):\n\necho \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n\nI've used it for my dedicated servers. Is this calculation correct?\n\nChris\n\n", "msg_date": "Sat, 11 Oct 2003 13:20:43 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "> >NB> shared_buffers = 1/16th of total memory\n> >NB> effective_cache_size = 80% of the supposed kernel cache.\n> \n> I think Sean(?) mentioned this one for FreeBSD (Bash code):\n\nsh, not bash. :)\n\n> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> \n> I've used it for my dedicated servers. Is this calculation correct?\n\nYes, or it's real close at least. vfs.hibufspace is the amount of\nkernel space that's used for caching IO operations (minus the\nnecessary space taken for the kernel). If you're real paranoid, you\ncould do some kernel profiling and figure out how much of the cache is\nactually disk IO and multiply the above by some percentage, say 80%?\nI haven't found it necessary to do so yet. Since hibufspace is all IO\nand caching any net activity is kinda pointless and I assume that 100%\nof it is used for a disk cache and don't use a multiplier. The 8192,\nhowever, is the size of a PG page, so, if you tweak PG's page size,\nyou have to change this constant (*grumbles*).\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 11 Oct 2003 02:23:08 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "In article <1065837333.12875.1.camel@jester>,\nRod Taylor <[email protected]> writes:\n\n>> Would anyone object to a patch that exports the blocksize via a\n>> readonly GUC? Too many tunables are page dependant, which is\n>> infuriating when copying configs from DB to DB. I wish pgsql had some\n>> notion of percentages for values that end with a '%'.\n\n> Rather than showing the block size, how about we change the tunables to\n> be physical sizes rather than block based?\n\n> effective_cache_size = 1.5GB\n> shared_buffers = 25MB\n\nAmen! Being forced to set config values in some obscure units rather\nthan bytes is an ugly braindamage which should be easy to fix.\n\n", "msg_date": "11 Oct 2003 12:22:42 +0200", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "On Sat, 2003-10-11 at 05:22, Harald Fuchs wrote:\n> In article <1065837333.12875.1.camel@jester>,\n> Rod Taylor <[email protected]> writes:\n> \n> >> Would anyone object to a patch that exports the blocksize via a\n> >> readonly GUC? Too many tunables are page dependant, which is\n> >> infuriating when copying configs from DB to DB. I wish pgsql had some\n> >> notion of percentages for values that end with a '%'.\n> \n> > Rather than showing the block size, how about we change the tunables to\n> > be physical sizes rather than block based?\n> \n> > effective_cache_size = 1.5GB\n> > shared_buffers = 25MB\n> \n> Amen! Being forced to set config values in some obscure units rather\n> than bytes is an ugly braindamage which should be easy to fix.\n\nBut it's too user-friendly to do it this way!\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\nWhen Swedes start committing terrorism, I'll become suspicious of\nScandanavians.\n\n", "msg_date": "Sat, 11 Oct 2003 05:43:27 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Josh Berkus wrote:\n\n>>shared_buffers = 1/16th of total memory\n>>effective_cache_size = 80% of the supposed kernel cache.\n>> \n>>\n>But only if it's a dedicated DB machine. If it's not, all memory values \n>should be cut in half.\n> \n>\n>\n>What I would prefer would be an interactive script which would, by asking the \n>user simple questions and system scanning, collect all the information \n>necessary to set:\n>\n>max_connections\n>shared_buffers\n>sort_mem\n>vacuum_mem\n>effective_cache_size\n>random_page_cost\n>max_fsm_pages\n>checkpoint_segments & checkpoint_timeout\n>tcp_ip\n>\n>and on the OS, it should set:\n>shmmax & shmmall\n>and should offer to create a chron job which does appropriate frequency VACUUM \n>ANALYZE.\n> \n>\n\nI reckon do a system scan first, and parse the current PostgreSQL conf \nfile to figure out what the settings are. Also back it up with a date \nand time appended to the end to make sure there is a backup before \noverwriting the real conf file. Then a bunch of questions. What sort of \nquestions would need to be asked and which parameters would these \nquestions affect? So far, and from my limited understanding of the .conf \nfile, I reckon there should be the following\n\n\nHere is your config of your hardware as detected. Is this correct ?\n\n This could potentially be several questions, i.e. one for proc, mem, \nos, hdd etc\n Would affect shared_buffers, sort_mem, effective_cache_size, \nrandom_page_cost\n\nHow was PostgreSQL compiled?\n\n This would be parameters such as the block size and a few other \ncompile time parameters. If we can get to some of these read-only \nparameters than that would make this step easier, certainly for the new \nrecruits amongst us.\n\nIs PostgreSQL the only thing being run on this computer?\n\n Then my previous assumptions about shared_buffers and \neffective_cache_size would be true.\n\nIf shmmax and shmmall are too small, then:\n\nPostgreSQL requires some more shared memory to cache some tables, x Mb, \ndo you want to increase your OS kernel parameters?\n\n Tweak shmmax and shmmall\n\nHow are the clients going to connect?\n\n i.e. TCP or Unix sockets\n\nHow many clients can connect to this database at once?\n\n Affects max_connections\n\nHow many databases and how many tables in each database are going to be \npresent?\n\n Affects max_fsm_pages, checkpoint_segments, checkpoint_timeout\n\nDo you want to vacuum you database regularly?\n\n Initial question for cron job\n\nIt is recomended that you vacuum analyze every night, do you want to do \nthis?\nIt is also recomended that you vacuum full every month, do you want to \ndo this?\n\n\n\nThoughts?\n\n\nNick\n\n", "msg_date": "Sat, 11 Oct 2003 15:55:43 +0100", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. 
MySQL" }, { "msg_contents": "\n> If shmmax and shmmall are too small, then:\n> \n> PostgreSQL requires some more shared memory to cache some tables, x Mb, \n> do you want to increase your OS kernel parameters?\n> \n> Tweak shmmax and shmmall\n\nNote that this still requires a kernel recompile on FreeBSD :(\n\nChris\n\n", "msg_date": "Sun, 12 Oct 2003 10:43:35 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Nick,\n\n> I reckon do a system scan first, and parse the current PostgreSQL conf\n> file to figure out what the settings are. Also back it up with a date\n> and time appended to the end to make sure there is a backup before\n> overwriting the real conf file. Then a bunch of questions. What sort of\n> questions would need to be asked and which parameters would these\n> questions affect? So far, and from my limited understanding of the .conf\n> file, I reckon there should be the following\n\nHmmm ... but I do think that there should be a file to store the user's \nprevious answers. That way, the script can easily be re-run to fix config \nissues.\n\n> Here is your config of your hardware as detected. Is this correct ?\n>\n> This could potentially be several questions, i.e. one for proc, mem,\n> os, hdd etc\n> Would affect shared_buffers, sort_mem, effective_cache_size,\n> random_page_cost\n\nActually, I think this would break down into:\n-- Are Proc & Mem correct? If not, type in correct values\n-- Is OS correct? If not, select from list\n-- Your HDD: is it:\n\t1) IDE\n\t2) Fast multi-disk SCSI or low-end RAID\n\t3) Medium-to-high-end RAID\n\nOther things, we don't care about.\n\n> How was PostgreSQL compiled?\n>\n> This would be parameters such as the block size and a few other\n> compile time parameters. If we can get to some of these read-only\n> parameters than that would make this step easier, certainly for the new\n> recruits amongst us.\n\nActually, from my perspective, we shouldn't bother with this; if an admin \nknows enough to set an alternate blaock size for PG, then they know enough to \ntweak the Conf file by hand. I think we should just issue a warning that \nthis script:\n1) does not work for anyone who is using non-default block sizes, \n2) may not work well for anyone using unusual locales, optimization flags, or \nother non-default compile options except for language interfaces.\n3) cannot produce good settings for embedded systems;\n4) will not work well for systems which are extremely low on disk space, \nmemory, or other resouces.\n\tBasically, the script only really needs to work for the people who are \ninstalling PostgreSQL with the default options or from RPM on regular server \nor workstation machines with plenty of disk space for normal database \npurposes. 
People who have more complicated setups can read the darned \ndocumentation and tune the conf file by hand.\n\n> Is PostgreSQL the only thing being run on this computer?\n\nFirst, becuase it affects a couple of other variables:\n\nWhat kind of database server are you expecting to run?\nA) Web Server (many small fast queries from many users, and not much update \nactivity)\nB) Online Transaction Processing (OLTP) database (many small updates \nconstantly from many users; think \"accounting application\").\nC) Online Analytical Reporting (OLAP) database (a few large and complicated \nread-only queries aggregating large quantites of data for display)\nD) Data Transformation tool (loading large amounts of data to process, \ntransform, and output to other software)\nE) Mixed-Use Database Server (a little of all of the above)\nF) Workstation (installing this database on a user machine which also has a \ndesktop, does word processing, etc.)\n\nIf the user answers anything but (F), then we ask:\n\nWill you be running any other signficant software on this server, such as a \nweb server, a Java runtime engine, or a reporting application? (yes|no)\n\nIf yes, then:\n\nHow much memory do you expect this other software, in total, to regularly use \nwhile PostgreSQL is in use? (# in MB; should offer default of 50% of the RAM \nscanned).\n\n> How are the clients going to connect?\n>\n> i.e. TCP or Unix sockets\n\nWe should warn them that they will still need to configure pg_hba.conf.\n\n> How many clients can connect to this database at once?\n>\n> Affects max_connections\n\nShould add a parenthetical comment that for applications which use pooled \nconnections, or intermittent connection, such as Web applications, the number \nof concurrent connections is often much lower than the number of concurrent \nusers.\n\n> How many databases and how many tables in each database are going to be\n> present?\n>\n> Affects max_fsm_pages, checkpoint_segments, checkpoint_timeout\n\nAlso need to ask if they have an idea of the total size of all databases, in \nMB or GB, which has a stronger relationship to those variables.\n\nAlso, this will give us a chance to check the free space on the PGDATA \npartition, and kick the user out with a warning if there is not at least \n2xExpected Size available.\n\n> Do you want to vacuum you database regularly?\n>\n> Initial question for cron job\n>\n> It is recomended that you vacuum analyze every night, do you want to do\n> this?\n> It is also recomended that you vacuum full every month, do you want to\n> do this?\n\nDepends on size/type of database. For large OLTP databases, I recommend \nvacuum as often as every 5 mintues, analyze every hour, and Vacuum Full + \nReindex once a week. For a workstation database, your frequencies are \nprobably OK.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 12 Oct 2003 13:30:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Chris,\n\n> > PostgreSQL requires some more shared memory to cache some tables, x Mb,\n> > do you want to increase your OS kernel parameters?\n> >\n> > Tweak shmmax and shmmall\n>\n> Note that this still requires a kernel recompile on FreeBSD :(\n\nNot our fault, now is it? This would mean that we wouldn't be able to script \nfor FreeBSD. 
Bug the FreeBSD developers.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 12 Oct 2003 13:31:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "On Sun, 2003-10-12 at 15:31, Josh Berkus wrote:\n> Chris,\n> \n> > > PostgreSQL requires some more shared memory to cache some tables, x Mb,\n> > > do you want to increase your OS kernel parameters?\n> > >\n> > > Tweak shmmax and shmmall\n> >\n> > Note that this still requires a kernel recompile on FreeBSD :(\n> \n> Not our fault, now is it? This would mean that we wouldn't be able to script \n> for FreeBSD. Bug the FreeBSD developers.\n\n<TROLL=HAND-GRENADE>\nOr use a good OS, instead.\n</TROLL>\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Oh, great altar of passive entertainment, bestow upon me thy\ndiscordant images at such speed as to render linear thought\nimpossible\"\nCalvin, regarding TV\n\n", "msg_date": "Sun, 12 Oct 2003 17:10:34 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Hi,\n\nJosh Berkus wrote:\n\n> Nick,\n>\n> > I reckon do a system scan first, and parse the current PostgreSQL conf\n> > file to figure out what the settings are. Also back it up with a date\n> > and time appended to the end to make sure there is a backup before\n> > overwriting the real conf file. Then a bunch of questions. What sort of\n> > questions would need to be asked and which parameters would these\n> > questions affect? So far, and from my limited understanding of the .conf\n> > file, I reckon there should be the following\n>\n> Hmmm ... but I do think that there should be a file to store the user's\n> previous answers. That way, the script can easily be re-run to fix config\n> issues.\n>\n> > Here is your config of your hardware as detected. Is this correct ?\n> >\n> > This could potentially be several questions, i.e. one for proc, mem,\n> > os, hdd etc\n> > Would affect shared_buffers, sort_mem, effective_cache_size,\n> > random_page_cost\n>\n> Actually, I think this would break down into:\n> -- Are Proc & Mem correct? If not, type in correct values\n> -- Is OS correct? If not, select from list\n> -- Your HDD: is it:\n> 1) IDE\n> 2) Fast multi-disk SCSI or low-end RAID\n> 3) Medium-to-high-end RAID\n>\n> Other things, we don't care about.\n>\n> > How was PostgreSQL compiled?\n> >\n> > This would be parameters such as the block size and a few other\n> > compile time parameters. If we can get to some of these read-only\n> > parameters than that would make this step easier, certainly for the new\n> > recruits amongst us.\n>\n> Actually, from my perspective, we shouldn't bother with this; if an admin\n> knows enough to set an alternate blaock size for PG, then they know enough to\n> tweak the Conf file by hand. 
I think we should just issue a warning that\n> this script:\n> 1) does not work for anyone who is using non-default block sizes,\n> 2) may not work well for anyone using unusual locales, optimization flags, or\n> other non-default compile options except for language interfaces.\n> 3) cannot produce good settings for embedded systems;\n> 4) will not work well for systems which are extremely low on disk space,\n> memory, or other resouces.\n> Basically, the script only really needs to work for the people who are\n> installing PostgreSQL with the default options or from RPM on regular server\n> or workstation machines with plenty of disk space for normal database\n> purposes. People who have more complicated setups can read the darned\n> documentation and tune the conf file by hand.\n>\n> > Is PostgreSQL the only thing being run on this computer?\n>\n> First, becuase it affects a couple of other variables:\n>\n> What kind of database server are you expecting to run?\n> A) Web Server (many small fast queries from many users, and not much update\n> activity)\n> B) Online Transaction Processing (OLTP) database (many small updates\n> constantly from many users; think \"accounting application\").\n> C) Online Analytical Reporting (OLAP) database (a few large and complicated\n> read-only queries aggregating large quantites of data for display)\n> D) Data Transformation tool (loading large amounts of data to process,\n> transform, and output to other software)\n> E) Mixed-Use Database Server (a little of all of the above)\n> F) Workstation (installing this database on a user machine which also has a\n> desktop, does word processing, etc.)\n>\n> If the user answers anything but (F), then we ask:\n>\n> Will you be running any other signficant software on this server, such as a\n> web server, a Java runtime engine, or a reporting application? (yes|no)\n>\n> If yes, then:\n>\n> How much memory do you expect this other software, in total, to regularly use\n> while PostgreSQL is in use? (# in MB; should offer default of 50% of the RAM\n> scanned).\n>\n> > How are the clients going to connect?\n> >\n> > i.e. TCP or Unix sockets\n>\n> We should warn them that they will still need to configure pg_hba.conf.\n>\n> > How many clients can connect to this database at once?\n> >\n> > Affects max_connections\n>\n> Should add a parenthetical comment that for applications which use pooled\n> connections, or intermittent connection, such as Web applications, the number\n> of concurrent connections is often much lower than the number of concurrent\n> users.\n>\n> > How many databases and how many tables in each database are going to be\n> > present?\n> >\n> > Affects max_fsm_pages, checkpoint_segments, checkpoint_timeout\n>\n> Also need to ask if they have an idea of the total size of all databases, in\n> MB or GB, which has a stronger relationship to those variables.\n>\n\nWhy not to make a cron script that will detect this size fot hil self?In many\ncases we do not have a good idea how many records(size) will be in data base.\n\n> Also, this will give us a chance to check the free space on the PGDATA\n> partition, and kick the user out with a warning if there is not at least\n> 2xExpected Size available.\n>\n> > Do you want to vacuum you database regularly?\n> >\n> > Initial question for cron job\n> >\n> > It is recomended that you vacuum analyze every night, do you want to do\n> > this?\n> > It is also recomended that you vacuum full every month, do you want to\n> > do this?\n>\n> Depends on size/type of database. 
For large OLTP databases, I recommend\n> vacuum as often as every 5 mintues, analyze every hour, and Vacuum Full +\n> Reindex once a week. For a workstation database, your frequencies are\n> probably OK.\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\nregards,ivan.\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n\n", "msg_date": "Mon, 13 Oct 2003 07:01:26 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\n>> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n>> \n>> I've used it for my dedicated servers. Is this calculation correct?\n\nSC> Yes, or it's real close at least. vfs.hibufspace is the amount of\nSC> kernel space that's used for caching IO operations (minus the\n\nI'm just curious if anyone has a tip to increase the amount of memory\nFreeBSD will use for the cache? It appears to me that even on my 2Gb\nbox, lots of memory is 'free' that could be used for the cache\n(bumping up shared buffers is another option...) yet the disk is being\nhighly utilized according to systat.\n", "msg_date": "Mon, 13 Oct 2003 10:04:35 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "On Monday 13 October 2003 19:34, Vivek Khera wrote:\n> >>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n> >>\n> >> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> >>\n> >> I've used it for my dedicated servers. Is this calculation correct?\n>\n> SC> Yes, or it's real close at least. vfs.hibufspace is the amount of\n> SC> kernel space that's used for caching IO operations (minus the\n>\n> I'm just curious if anyone has a tip to increase the amount of memory\n> FreeBSD will use for the cache? It appears to me that even on my 2Gb\n> box, lots of memory is 'free' that could be used for the cache\n> (bumping up shared buffers is another option...) yet the disk is being\n> highly utilized according to systat.\n\nIs this of any help?..reverse video sucks though.. especially spec'ed person \nlike me..\n\nhttp://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2003-07/0073.html\n\n Shridhar\n\n", "msg_date": "Mon, 13 Oct 2003 19:44:55 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Vivek Khera\nSent: Viernes, 10 de Octubre de 2003 03:14 p.m.\nTo: Josh Berkus\nCc: [email protected]\nSubject: Re: [PERFORM] go for a script! / ex: PostgreSQL vs. MySQL\n\n>>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Vivek,\nNB> shared_buffers = 1/16th of total memory\nNB> effective_cache_size = 80% of the supposed kernel cache.\n>> \n>> Please take into account the blocksize compiled into PG, too...\n\nJB> We can;t change the blocksize in a script that only does the .conf\nJB> file. Or are you suggesting something else?\n\n\nwhen you compute optimal shared buffers and effective cache size,\nthese are in terms of blocksize. so if I have 16k block size, you\ncan't compute based on default 8k blocksize. 
at worst, it would have\nto be a parameter you pass to the tuning script.\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n", "msg_date": "Mon, 13 Oct 2003 08:38:25 -0600", "msg_from": "ingrim <[email protected]>", "msg_from_op": false, "msg_subject": "unsuscribe mailing list" }, { "msg_contents": ">>>>> \"CK\" == Christopher Kings-Lynne <[email protected]> writes:\n\n>> If shmmax and shmmall are too small, then:\n>> PostgreSQL requires some more shared memory to cache some tables, x\n>> Mb, do you want to increase your OS kernel parameters?\n>> Tweak shmmax and shmmall\n\nCK> Note that this still requires a kernel recompile on FreeBSD :(\n\nAccording to whom? sysctl is your friend. Some sysctl settings may\nrequire reboot, but I don't think the SHM ones do.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Oct 2003 11:37:53 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Chris,\n>> > PostgreSQL requires some more shared memory to cache some tables, x Mb,\n>> > do you want to increase your OS kernel parameters?\n>> >\n>> > Tweak shmmax and shmmall\n>> \n>> Note that this still requires a kernel recompile on FreeBSD :(\n\nJB> Not our fault, now is it? This would mean that we wouldn't be\nJB> able to script for FreeBSD. Bug the FreeBSD developers.\n\n\"I read it on the net so it must be true\" applies here. You /can/ set\nthese values via sysctl calls.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Oct 2003 11:39:03 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "> >> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> >> \n> >> I've used it for my dedicated servers. Is this calculation correct?\n> \n> SC> Yes, or it's real close at least. vfs.hibufspace is the amount\n> of SC> kernel space that's used for caching IO operations (minus the\n> \n> I'm just curious if anyone has a tip to increase the amount of\n> memory FreeBSD will use for the cache?\n\nRecompile your kernel with BKVASIZE set to 4 times its current value\nand double your nbuf size. According to Bruce Evans:\n\n\"Actually there is a way: the vfs_maxbufspace gives the amount of\nspace reserved for buffer kva (= nbuf * BKVASIZE). nbuf is easy to\nrecover from this, and the buffer kva space may be what is wanted\nanyway.\"\n[snip]\n\"I've never found setting nbuf useful, however. I want most\nparametrized sizes including nbuf to scale with resource sizes, and\nit's only with RAM sizes of similar sizes to the total virtual address\nsize that its hard to get things to fit. I haven't hit this problem\nmyself since my largest machine has only 1GB. I use an nbuf of\nsomething like twice the default one, and a BKVASIZE of 4 times the\ndefault. 
vfs.maxbufspace ends up at 445MB on the machine with 1GB, so\nit is maxed out now.\"\n\nYMMV.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 13 Oct 2003 12:04:46 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "> > > PostgreSQL requires some more shared memory to cache some\n> > > tables, x Mb, do you want to increase your OS kernel parameters?\n> > >\n> > > Tweak shmmax and shmmall\n> >\n> > Note that this still requires a kernel recompile on FreeBSD :(\n> \n> Not our fault, now is it? This would mean that we wouldn't be able\n> to script for FreeBSD. Bug the FreeBSD developers.\n\nAnd if you do so, you're going to hear that shm* is an antiquated\ninterface that's dated, slow, inefficient and shouldn't be used. :)\n\nEvery few months one of the uber core BSD hackers threatens to rewrite\nthat part of PG because high up in the BSD camp, it's common belief\nthat shm* is a source of performance loss for PostgreSQL. One of\nthese days it'll happen, probably with mmap() mmap()'ing MAP_SHARED\nfiles stored in a $PGDATA/data/shared dir as mmap() is by far and away\nthe fastest shared memory mechanism and certainly is very widely\ndeployed (I would be surprised if any of the supported PG platforms\ndidn't have mmap()).\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 13 Oct 2003 12:10:23 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (Vivek Khera) would write:\n>>>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n>\n> JB> Chris,\n>>> > PostgreSQL requires some more shared memory to cache some tables, x Mb,\n>>> > do you want to increase your OS kernel parameters?\n>>> >\n>>> > Tweak shmmax and shmmall\n>>> \n>>> Note that this still requires a kernel recompile on FreeBSD :(\n>\n> JB> Not our fault, now is it? This would mean that we wouldn't be\n> JB> able to script for FreeBSD. Bug the FreeBSD developers.\n>\n> \"I read it on the net so it must be true\" applies here. You /can/ set\n> these values via sysctl calls.\n\nYes, indeed, sysctl can tweak these values fairly adequately.\n\nNow, numbers of semaphors are not as readily tweaked; I wound up\nlimited, the other day, when I tried setting values for...\n\n kern.ipc.semmns\n kern.ipc.semmni\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/x.html\n\"So, when you typed in the date, it exploded into a sheet of blue\nflame and burned the entire admin wing to the ground? Yes, that's a\nknown bug. We'll be fixing it in the next release. Until then, try not\nto use European date format, and keep an extinguisher handy.\"\n-- [email protected] (Tequila Rapide) \n", "msg_date": "Mon, 13 Oct 2003 19:58:36 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> Every few months one of the uber core BSD hackers threatens to rewrite\n> that part of PG because high up in the BSD camp, it's common belief\n> that shm* is a source of performance loss for PostgreSQL. \n\nThey're full of it. RAM is RAM, no? 
Once you've got the memory mapped\ninto your address space, it's hard to believe that it matters how you\ngot hold of it.\n\nIn any case, mmap doesn't have the semantics we need. See past\ndiscussions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Oct 2003 20:11:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL " }, { "msg_contents": ">>>If shmmax and shmmall are too small, then:\n>>>PostgreSQL requires some more shared memory to cache some tables, x\n>>>Mb, do you want to increase your OS kernel parameters?\n>>>Tweak shmmax and shmmall\n> \n> \n> CK> Note that this still requires a kernel recompile on FreeBSD :(\n> \n> According to whom? sysctl is your friend. Some sysctl settings may\n> require reboot, but I don't think the SHM ones do.\n\nHmmm...you may be right - I can't prove it now...\n\nhouston# sysctl -w kern.ipc.shmmax=99999999999\nkern.ipc.shmmax: 33554432 -> 2147483647\n\nHrm. Ok. Maybe they've changed that in some recent version :)\n\nChris\n\n\n", "msg_date": "Tue, 14 Oct 2003 09:29:56 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "\n> \n> Yes, indeed, sysctl can tweak these values fairly adequately.\n> \n> Now, numbers of semaphors are not as readily tweaked; I wound up\n> limited, the other day, when I tried setting values for...\n> \n> kern.ipc.semmns\n> kern.ipc.semmni\n\nSame. Maybe that was the option I was thinking was read-only:\n\nhouston# sysctl kern.ipc.semmns\nkern.ipc.semmns: 60\nhouston# sysctl -w kern.ipc.semmns=70\nsysctl: oid 'kern.ipc.semmns' is read only\nhouston# sysctl kern.ipc.semmni\nkern.ipc.semmni: 10\nhouston# sysctl -w kern.ipc.semmni=30\nsysctl: oid 'kern.ipc.semmni' is read only\n\nI like how they use oids :P\n\nChris\n\n\n", "msg_date": "Tue, 14 Oct 2003 09:43:29 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" } ]
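The thread above settles on two rules of thumb (shared_buffers at roughly 1/16th of total RAM, effective_cache_size at about 80% of the kernel's disk cache) and notes that both must be expressed in PostgreSQL blocks. Below is a minimal Perl sketch of that calculation; the /proc/meminfo probe, the 8192-byte block size, and the guess that half of RAM acts as kernel cache are illustrative assumptions, not part of Nick's actual pg_autoconfig.pl. On FreeBSD the kernel-cache guess could instead come from "sysctl -n vfs.hibufspace", as in Sean's one-liner.

#!/usr/bin/perl
# Sketch only: block-size-aware versions of the rules of thumb above.
# Assumes Linux (/proc/meminfo) and the default 8 kB page size.
use strict;
use warnings;

my $block_size = 8192;                         # must match the compiled-in value

open my $mi, '<', '/proc/meminfo' or die "cannot read /proc/meminfo: $!";
my ($mem_kb) = map { /^MemTotal:\s+(\d+)\s+kB/ ? $1 : () } <$mi>;
close $mi;
die "could not find MemTotal\n" unless defined $mem_kb;
my $mem_bytes = $mem_kb * 1024;

my $kernel_cache = $mem_bytes / 2;             # crude stand-in for the OS disk cache

my $shared_buffers       = int($mem_bytes / 16 / $block_size);
my $effective_cache_size = int($kernel_cache * 0.8 / $block_size);

print "shared_buffers = $shared_buffers\n";
print "effective_cache_size = $effective_cache_size\n";

A real script, as discussed in the thread, would also cut these values back on a non-dedicated machine and would take the block size as a parameter rather than hard-coding it.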
[ { "msg_contents": "----- Original Message -----\nFrom: \"Nick Barr\" <[email protected]>\nTo: <[email protected]>\nSent: Friday, October 10, 2003 1:35 PM\nSubject: go for a script! / ex: PostgreSQL vs. MySQL\n\n\n> I will also post it on me website and as I develop it further new\nversions\n> will appear there\n>\n> http://www.chuckie.co.uk/postgresql/pg_autoconfig.pl\n\nMake that\n\nhttp://www.chuckie.co.uk/postgresql/pg_autoconfig.txt\n\n\nNick\n\n\n\n\n", "msg_date": "Fri, 10 Oct 2003 13:40:10 +0100", "msg_from": "\"Nick Barr\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" } ]
[ { "msg_contents": "Hi,\n\nA simple question about PostgreSQL ... I have a Pentium Xeon Quadri processors \n...\nIf I do a SQL request ... does PostgreSQL use one or more processor ?\n\nAnd if it use only one ... why ?\nCould you explain me this ;o)\n\nThanks per advance.\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n\n", "msg_date": "Fri, 10 Oct 2003 18:30:49 +0200", "msg_from": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "One or more processor ?" }, { "msg_contents": "On Fri, 10 Oct 2003, [iso-8859-15] Herv� Piedvache wrote:\n\n> If I do a SQL request ... does PostgreSQL use one or more processor ?\n>\n\nNope. Just one processor this is because PG is process not thread based.\nHowever, if you opened 4 connections and each issued a sql statement all 4\nprocessors would be used. Check the -HACKERS archives for lengthy\ndiscussions of this.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 10 Oct 2003 12:35:50 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "On Fri, 10 Oct 2003, [iso-8859-15] Herv� Piedvache wrote:\n\n> A simple question about PostgreSQL ... I have a Pentium Xeon Quadri\n> processors ... If I do a SQL request ... does PostgreSQL use one or more\n> processor ?\n\nEach connection becomes a process, and each process runs on one processor. \nSo, with only one connection you use only one processor (and the OS might \nuse an other processor). Most databases has many concurrent users and then \nit will use more processors.\n\n-- \n/Dennis\n\n", "msg_date": "Fri, 10 Oct 2003 18:38:46 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n> Hi,\n> \n> A simple question about PostgreSQL ... I have a Pentium Xeon Quadri processors \n> ...\n> If I do a SQL request ... does PostgreSQL use one or more processor ?\n\nPostgreSQL uses one processor per connection. If you have 4 simultaneous\nconnections, you'll use all four processors (assuming your operating system\nis properly designed/configured).\n\n> And if it use only one ... why ?\n> Could you explain me this ;o)\n\nThe answer to that is beyond my knowledge, but I have a few guesses:\n1) Doing so is more complicated than you think.\n2) The code was originally written for uniprocessor machines, and nobody\n has volunteered to update it yet.\n3) kernel threading isn't as predictable as some people would like to\n think, thus they developers have avoided using it so far.\n4) It simply isn't practical to expect a single query to\n execute on multiple processors simultaneously.\n\nDo you know of any RDBMS that actually will execute a single query on\nmultiple processors?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 10 Oct 2003 12:42:04 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" 
}, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bill Moran\n> Sent: Friday, October 10, 2003 12:42 PM\n> To: Hervᅵ Piedvache\n> Cc: Postgresql Performance\n> Subject: Re: [PERFORM] One or more processor ?\n>\n>\n> Hervᅵ Piedvache wrote:\n> > Hi,\n> >\n> > A simple question about PostgreSQL ... I have a Pentium Xeon\n> Quadri processors\n> > ...\n> > If I do a SQL request ... does PostgreSQL use one or more processor ?\n>\n> PostgreSQL uses one processor per connection. If you have 4 simultaneous\n> connections, you'll use all four processors (assuming your\n> operating system\n> is properly designed/configured).\n>\n> > And if it use only one ... why ?\n> > Could you explain me this ;o)\n>\n> The answer to that is beyond my knowledge, but I have a few guesses:\n> 1) Doing so is more complicated than you think.\n\nYou need to be able to paralellize the algorithm. Even so, a 99%\nparalelizable algorithm over 2 cpus is only 50% faster than 1 cpu. So choose\nyour poison: 2 processes @100% in 1tu or 1 process at 150% at .66tu (tu=time\nunit). This ofcourse is over simplification. I don't think 99% is reasonable\nin query processing (though it can depend on the query) so I expect the 2\nconnection method to be better, unless you only ever have 1 connection.\n\n\n", "msg_date": "Fri, 10 Oct 2003 13:09:34 -0400", "msg_from": "Jason Hihn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:\n> 4) It simply isn't practical to expect a single query to\n> execute on multiple processors simultaneously.\n> \n> Do you know of any RDBMS that actually will execute a single query\n> on multiple processors?\n\nYes, DB2 will do this if configured correctly. It's very useful for\nlarge, complicated queries that have multiple subplans.\n\n-johnnnnnnnn\n", "msg_date": "Fri, 10 Oct 2003 12:15:34 -0500", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "johnnnnnn wrote:\n> On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:\n> \n>>4) It simply isn't practical to expect a single query to\n>> execute on multiple processors simultaneously.\n>>\n>>Do you know of any RDBMS that actually will execute a single query\n>>on multiple processors?\n> \n> Yes, DB2 will do this if configured correctly. It's very useful for\n> large, complicated queries that have multiple subplans.\n\nI expected there would be someone who did (although I didn't know for\nsure).\n\nIs DB2 the only one that can do that?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 10 Oct 2003 13:26:40 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "Herve'\n\n> A simple question about PostgreSQL ... I have a Pentium Xeon Quadri\n> processors ...\n> If I do a SQL request ... does PostgreSQL use one or more processor ?\n\nFor your configuration, yes, you want multiple processors. Postgres (or \nrather, the host OS) will distribute active connections over the multiple \nprocessors.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 10:43:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" 
}, { "msg_contents": "[email protected] (=?iso-8859-15?q?Herv=E9=20Piedvache?=) writes:\n> A simple question about PostgreSQL ... I have a Pentium Xeon Quadri processors \n> ...\n> If I do a SQL request ... does PostgreSQL use one or more processor ?\n\nJust one processor.\n\n> And if it use only one ... why ?\n> Could you explain me this ;o)\n\n... Because partitioning requests across multiple processors is a\nhairy and difficult proposition.\n\nSome musing has been done on how threading might be used to split\nprocessing of queries across multiple CPUs, but it represents a pretty\nimmense task involving substantial effort for design, implementation,\nand testing.\n\nIt's tough to make this portable across all the system PostgreSQL\nsupports, too.\n\nSo while musing has been done, nobody has seriously tried implementing\nit.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Fri, 10 Oct 2003 14:49:30 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "Chris,\n\n> > If I do a SQL request ... does PostgreSQL use one or more processor ?\n>\n> Just one processor.\n\nFor one query, yes. For multiple queries, PostgreSQL will use multiple \nprocessors, and that's what he's concerned about given his earlier posts.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 13:42:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "\n> Do you know of any RDBMS that actually will execute a single query on\n> multiple processors?\n\nSQL Server does in a sense. It can split a query onto multiple threads\n(which could possible use multiple processors) and then brings the results\nfrom the threads into one and then sends the results to the client.\n\n\n\n", "msg_date": "Fri, 10 Oct 2003 14:03:51 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "On Fri, 10 Oct 2003, Bill Moran wrote:\n\n> johnnnnnn wrote:\n> > On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:\n> >\n> >>4) It simply isn't practical to expect a single query to\n> >> execute on multiple processors simultaneously.\n> >>\n> >>Do you know of any RDBMS that actually will execute a single query\n> >>on multiple processors?\n> >\n> > Yes, DB2 will do this if configured correctly. It's very useful for\n> > large, complicated queries that have multiple subplans.\n>\n> I expected there would be someone who did (although I didn't know for\n> sure).\n>\n> Is DB2 the only one that can do that?\n\nOracle, i think, on partitioned tables.\n\nregards, andriy\n\nhttp://www.imt.com.ua\n\n", "msg_date": "Mon, 13 Oct 2003 11:53:12 +0300 (EEST)", "msg_from": "Andriy Tkachuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" }, { "msg_contents": "On Mon, 2003-10-13 at 01:53, Andriy Tkachuk wrote:\n> > >>Do you know of any RDBMS that actually will execute a single query\n> > >>on multiple processors?\n> > >\n> Oracle, i think, on partitioned tables.\n\n\n\nThis makes a certain amount of sense. 
It be much easier to allow this\non partitioned tables than in the general case, since partitioned tables\nlook a lot like multiple relations hiding behind a view and one could\nsimply run one thread per partition in parallel without any real fear of\ncontention on the table. This is doubly useful since partitioned tables\ntend to be huge almost by definition.\n\nOffhand, it would seem that this would be a feature largely restricted\nto threaded database kernels for a couple different pragmatic reasons.\n\nCheers,\n\n-James Rogers\n [email protected]\n\n\n", "msg_date": "13 Oct 2003 11:43:43 -0700", "msg_from": "James Rogers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One or more processor ?" } ]
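As the replies above explain, each connection is served by a single backend process, so one query runs on one processor while several concurrent connections spread across all of them. The following Perl/DBI sketch illustrates that: it forks a few children, each with its own connection, letting the OS schedule the resulting backends on separate CPUs. The DSN, credentials and table name are placeholders, not anything from the thread.

#!/usr/bin/perl
# Sketch: one backend per connection, so concurrency is what exercises a
# multi-processor box. DSN, user and table below are placeholders.
use strict;
use warnings;
use DBI;

my $children = 4;                  # e.g. one per processor
my @pids;

for (1 .. $children) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Each child opens its own connection and therefore its own backend.
        my $dbh = DBI->connect('dbi:Pg:dbname=test', 'postgres', '',
                               { RaiseError => 1 });
        my ($n) = $dbh->selectrow_array('SELECT count(*) FROM some_big_table');
        $dbh->disconnect;
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;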
[ { "msg_contents": "Hi,\n\nOne other small question ... Does PostgreSQL is scalable ?\nI mean ... is it possible to have two servers, one rack of disks connected to \nthe 2 servers to get access in same time to the same database ?\n\nOther point is there any possibilties to make servers clusters with PostgreSQL \n? If no why ? If yes how ? ;o)\n\nTo be clear I would like to make a system with PostgreSQL able to answer about \n70 000 000 requests by day (Internet services) ... I'm not sure about the \nserver configuration I have to make.\n\nThanks per advance for your answers.\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n\n", "msg_date": "Fri, 10 Oct 2003 18:36:11 +0200", "msg_from": "=?iso-8859-15?q?Herv=E9=20Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Scalable ?" }, { "msg_contents": "On Fri, 10 Oct 2003, [iso-8859-15] Herv� Piedvache wrote:\n\n> One other small question ... Does PostgreSQL is scalable ?\n> I mean ... is it possible to have two servers, one rack of disks connected to\n> the 2 servers to get access in same time to the same database ?\n\nNo. You need to replicate the DB to another machine to have this work -\nand even still, all writes need to go to the 'master' db. Reads can go to\neither.\n\n> To be clear I would like to make a system with PostgreSQL able to answer about\n> 70 000 000 requests by day (Internet services) ... I'm not sure about the\n> server configuration I have to make.\n>\n\nWell, 70M requests/day is only about 810 / second - assuming we're talking\nabout simple selects that is very easy to achieve.\n\nConsidering hardware you should look at: multiple cpus, gigs of memory,\nand very fast disks. (Raid5 w/battery backed write caches seem to be\npopular).\n\nYou should also look at how much data this guy will hold, what is the\nread/write ratio and all the \"normal\" things you should do while planning\na db.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 10 Oct 2003 12:55:42 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Scalable ?" }, { "msg_contents": "Herve'\n\n> One other small question ... Does PostgreSQL is scalable ?\n\nGiven that we have several members of our community with 2TB databases, and \none entitiy with a 32TB database, I'd say yes.\n\n> I mean ... is it possible to have two servers, one rack of disks connected\n> to the 2 servers to get access in same time to the same database ?\n\nNot at this time, no.\n\n> Other point is there any possibilties to make servers clusters with\n> PostgreSQL ? If no why ? If yes how ? ;o)\n\nOnly via replication or creative use of DBLink. Nobody has yet offered to \nbuild us database server clustering, which would be very nice to have, but \nwould require a substantial investment from a corporate sponsor.\n\n> To be clear I would like to make a system with PostgreSQL able to answer\n> about 70 000 000 requests by day (Internet services) ... I'm not sure about\n> the server configuration I have to make.\n\nThat sounds doable on proper hardware with good tuning. Might I suggest \nthat you consider hiring a consultant and start from there? 
I believe that \nAfilias Limited (www.afilias.info) has the requisite experience in Europe.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Oct 2003 10:41:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Scalable ?" }, { "msg_contents": "On Fri, 2003-10-10 at 10:41, Josh Berkus wrote:\n> Herve'\n> > One other small question ... Does PostgreSQL is scalable ?\n> \n> Given that we have several members of our community with 2TB databases, and \n> one entitiy with a 32TB database, I'd say yes.\n\n\nIt depends on what is meant by \"scalable\". In terms of physical data\nsize, definitely yes. In terms of concurrency, it is also pretty good\nwith only a few caveats (e.g. large SMP systems aren't really exploited\nto their potential). However, in terms of working set size it is only\n\"fair to middling\", which is why I'm looking into those parts right now.\nSo \"scalable\" really depends on what your load profile looks like. For\nsome load profiles it is extremely scalable and for other load profiles\nless so, though nothing exhibits truly \"poor\" scalability that I've\nfound. A lot of scalability is how you set the parameters and design\nthe system if the underlying engine is reasonably competent. For the\nvast majority of purposes, you'll find that PostgreSQL scales just fine.\n\nCheers,\n\n-James Rogers\n [email protected]\n\n\n", "msg_date": "10 Oct 2003 11:33:05 -0700", "msg_from": "James Rogers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Scalable ?" } ]
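Two small additions to the thread above. First, the arithmetic behind the 810 requests/second figure: 70,000,000 requests per day divided by 86,400 seconds per day is roughly 810 per second averaged over 24 hours, so peaks will be noticeably higher. Second, a rough sketch of the "creative use of DBLink" Josh mentions for pushing read-only work to a second box; the connection string, database name, table and column list are invented for illustration, but the dblink(connstr, sql) call itself is the standard contrib/dblink interface.

-- Read-only query executed on another server via contrib/dblink; writes
-- would still have to go to the single master, as noted in the thread.
SELECT *
  FROM dblink('host=replica1 dbname=webdb user=report',
              'SELECT id, name FROM some_lookup_table')
       AS t(id integer, name text);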
[ { "msg_contents": "List,\n I'm creating this multi company POS database.\nMy inventory table looks like (all items are unique):\n\nid,category_id,invoice_id,x,y,z,gid,uid\n\nI have a primary key on id, and then an foreign keys on category_id and\ninvoice_id.\nGID is the group ID of the company, UID is the companies user, they are also\nconnected via foreign key to the respective tables. My question is this: Do\nI need to create more indexes on this table when inventory selects look like\n\nselect * from inventory where\n category_id = 1 and invoice_id is null and gid = 2\n\nSo where would the indexes need to be placed? Or since I have the FK setup\nare the indexes already in place? I expect to soon have >500K items in the\ninventory table and don't want it to slow down. I'll have the same type of\nissue with clients, invoices, purchase_orders and perhaps more\n\nIdeas?\n\nThanks!\n\nDavid Busby\nSystems Engineer\n\n", "msg_date": "Fri, 10 Oct 2003 14:04:51 -0700", "msg_from": "\"David Busby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index/Foreign Key Question" }, { "msg_contents": "On Fri, 2003-10-10 at 16:04, David Busby wrote:\n> List,\n> I'm creating this multi company POS database.\n> My inventory table looks like (all items are unique):\n> \n> id,category_id,invoice_id,x,y,z,gid,uid\n> \n> I have a primary key on id, and then an foreign keys on category_id and\n> invoice_id.\n> GID is the group ID of the company, UID is the companies user, they are also\n> connected via foreign key to the respective tables. My question is this: Do\n> I need to create more indexes on this table when inventory selects look like\n> \n> select * from inventory where\n> category_id = 1 and invoice_id is null and gid = 2\n> \n> So where would the indexes need to be placed? Or since I have the FK setup\n> are the indexes already in place? I expect to soon have >500K items in the\n> inventory table and don't want it to slow down. I'll have the same type of\n> issue with clients, invoices, purchase_orders and perhaps more\n\nI'd make a multi-segment (non-unique?) index on:\n GID\n CATEGORY_ID\n INVOICE_ID\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\nLUKE: Is Perl better than Python?\nYODA: No... no... no. Quicker, easier, more seductive.\nLUKE: But how will I know why Python is better than Perl?\nYODA: You will know. When your code you try to read six months\nfrom now.\n\n", "msg_date": "Fri, 10 Oct 2003 16:31:55 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Index/Foreign Key Question" }, { "msg_contents": "----- Original Message ----- \nFrom: \"Ron Johnson\"\n> On Fri, 2003-10-10 at 16:04, David Busby wrote:\n> > List,\n> > I'm creating this multi company POS database.\n> > My inventory table looks like (all items are unique):\n> >\n> > id,category_id,invoice_id,x,y,z,gid,uid\n> >\n> > I have a primary key on id, and then an foreign keys on category_id and\n> > invoice_id.\n> > GID is the group ID of the company, UID is the companies user, they are\nalso\n> > connected via foreign key to the respective tables. My question is\nthis: Do\n> > I need to create more indexes on this table when inventory selects look\nlike\n> >\n> > select * from inventory where\n> > category_id = 1 and invoice_id is null and gid = 2\n> >\n> > So where would the indexes need to be placed? Or since I have the FK\nsetup\n> > are the indexes already in place? 
I expect to soon have >500K items in\nthe\n> > inventory table and don't want it to slow down. I'll have the same type\nof\n> > issue with clients, invoices, purchase_orders and perhaps more\n>\n> I'd make a multi-segment (non-unique?) index on:\n> GID\n> CATEGORY_ID\n> INVOICE_ID\n>\n\nSo the multi column index would be better than the three individual indexes?\nDoes PostgreSQL only pick one index per table on the select statements?\nWhat about the option of using schemas to segment the data? That would\neliminate the GID column and help performance correct? It also means I have\nto make company_a.invoice and company_b.invoice tables huh?\n\n/B\n\n", "msg_date": "Fri, 10 Oct 2003 14:32:30 -0700", "msg_from": "\"David Busby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index/Foreign Key Question" }, { "msg_contents": "On Fri, 10 Oct 2003, David Busby wrote:\n\n> ----- Original Message -----\n> From: \"Ron Johnson\"\n> > On Fri, 2003-10-10 at 16:04, David Busby wrote:\n> > > List,\n> > > I'm creating this multi company POS database.\n> > > My inventory table looks like (all items are unique):\n> > >\n> > > id,category_id,invoice_id,x,y,z,gid,uid\n> > >\n> > > I have a primary key on id, and then an foreign keys on category_id and\n> > > invoice_id.\n> > > GID is the group ID of the company, UID is the companies user, they are\n> also\n> > > connected via foreign key to the respective tables. My question is\n> this: Do\n> > > I need to create more indexes on this table when inventory selects look\n> like\n> > >\n> > > select * from inventory where\n> > > category_id = 1 and invoice_id is null and gid = 2\n> > >\n> > > So where would the indexes need to be placed? Or since I have the FK\n> setup\n> > > are the indexes already in place? I expect to soon have >500K items in\n> the\n> > > inventory table and don't want it to slow down. I'll have the same type\n> of\n> > > issue with clients, invoices, purchase_orders and perhaps more\n> >\n> > I'd make a multi-segment (non-unique?) index on:\n> > GID\n> > CATEGORY_ID\n> > INVOICE_ID\n> >\n>\n> So the multi column index would be better than the three individual indexes?\n\nFor the query in question, yes. However, you probably want a category_id\nindex and an invoice_id index if you are going to change the related\ntables because the (gid, category_id, invoice_id) isn't good enough for\nthe foreign key checks (and the fk columns aren't automatically indexed\nbecause it can easily become a pessimization depending on the workload\nthat's being done).\n", "msg_date": "Fri, 10 Oct 2003 17:23:31 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index/Foreign Key Question" }, { "msg_contents": "On Fri, 2003-10-10 at 16:32, David Busby wrote:\n> ----- Original Message ----- \n> From: \"Ron Johnson\"\n> > On Fri, 2003-10-10 at 16:04, David Busby wrote:\n> > > List,\n> > > I'm creating this multi company POS database.\n> > > My inventory table looks like (all items are unique):\n> > >\n> > > id,category_id,invoice_id,x,y,z,gid,uid\n> > >\n> > > I have a primary key on id, and then an foreign keys on category_id and\n> > > invoice_id.\n> > > GID is the group ID of the company, UID is the companies user, they are\n> also\n> > > connected via foreign key to the respective tables. 
My question is\n> this: Do\n> > > I need to create more indexes on this table when inventory selects look\n> like\n> > >\n> > > select * from inventory where\n> > > category_id = 1 and invoice_id is null and gid = 2\n> > >\n> > > So where would the indexes need to be placed? Or since I have the FK\n> setup\n> > > are the indexes already in place? I expect to soon have >500K items in\n> the\n> > > inventory table and don't want it to slow down. I'll have the same type\n> of\n> > > issue with clients, invoices, purchase_orders and perhaps more\n> >\n> > I'd make a multi-segment (non-unique?) index on:\n> > GID\n> > CATEGORY_ID\n> > INVOICE_ID\n> >\n> \n> So the multi column index would be better than the three individual indexes?\n\nYes, because it more closely matches the WHERE clause. Otherwise,\nit would have to look thru all three indexes, comparing OIDs.\n\n> Does PostgreSQL only pick one index per table on the select statements?\n\nThat's it's preference.\n\n> What about the option of using schemas to segment the data? That would\n> eliminate the GID column and help performance correct? It also means I have\n> to make company_a.invoice and company_b.invoice tables huh?\n\nYes, any time you add or alter a table, you'd have to do it on all\nthe schemas. However, multiple schemas would protect each company's\ndata from the other, and if the table structures are stable the\nmaintenance costs are low.\n\nAlso, you could script the mods, to reduce the work even more.\n\nAlso, you could have multiple databases. This isn't Oracle, so\nhave as many as you want. The benefit of this method is scalability.\nI.e., if the load grows too high, just buy another box, and move\n1/2 the databases to it.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Our computers and their computers are the same color. The\nconversion should be no problem!\"\nUnknown\n\n", "msg_date": "Fri, 10 Oct 2003 21:01:12 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index/Foreign Key Question" }, { "msg_contents": "On Fri, Oct 10, 2003 at 09:01:12PM -0500, Ron Johnson wrote:\n> \n> > Does PostgreSQL only pick one index per table on the select statements?\n> \n> That's it's preference.\n\nAs far as I know, that's all it can do. Do you know something\ndifferent?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sun, 12 Oct 2003 15:28:27 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index/Foreign Key Question" }, { "msg_contents": "Andrew Sullivan kirjutas P, 12.10.2003 kell 22:28:\n> On Fri, Oct 10, 2003 at 09:01:12PM -0500, Ron Johnson wrote:\n> > \n> > > Does PostgreSQL only pick one index per table on the select statements?\n> > \n> > That's it's preference.\n> \n> As far as I know, that's all it can do. Do you know something\n> different?\n\nTom has mentioned the possibility of using bitmaps as a an intermadiate\nstep, this would make star joins much faster as we could AND all index\ninfo and actually examine onlu tuples that mach all indexes.\n\nNone of it is done by now, AFAIK.\n\n-------\nHannu\n\n", "msg_date": "Sun, 12 Oct 2003 22:39:25 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index/Foreign Key Question" } ]
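Spelling out the advice above as statements against the inventory table from the start of the thread (index names are made up). The composite index matches the WHERE clause David posted; the two single-column indexes are only there so that updates or deletes on the referenced category and invoice tables can run their foreign-key checks against inventory without a sequential scan, which is the point Stephan raises.

CREATE INDEX inv_gid_cat_invoice_i ON inventory (gid, category_id, invoice_id);

-- Optional, to support the foreign-key checks when the referenced tables change:
CREATE INDEX inv_category_id_i ON inventory (category_id);
CREATE INDEX inv_invoice_id_i  ON inventory (invoice_id);

-- The query from the thread; gid and category_id can be matched directly
-- against the composite index, with invoice_id IS NULL applied on top:
SELECT * FROM inventory
 WHERE category_id = 1 AND invoice_id IS NULL AND gid = 2;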
[ { "msg_contents": "Hello all\n\nI have two very similar queries which I need to execute. They both have\nexactly the same from / where conditions. When I execute the first, it takes\nabout 16 seconds. The second is executed almost immediately after, it takes\n13 seconds. In short, I'd like to know why the query result isn't being\ncached and any ideas on how to improve the execution.\n\nThe first query attempts to find the maximum size of an array in the result\nset- the field is called \"level\". IT contains anything between 1 and 10\nintegers. I just need to know what the largest size is. I do this to find\nout the maximum size of the \"level\" array.\n\n\"max(replace(split_part(array_dims(level),':',2),']','')::int)\"\n\nI know this is big and ugly but is there any better way of doing it ?\n\nThe second query just returns the result set - it has exactly the same\nFROM/Where clause.\n\nOK - so I could execute the query once, and get the maximum size of the\narray and the result set in one. I know what I am doing is less than optimal\nbut I had expected the query results to be cached. So the second execution\nwould be very quick. So why aren't they ? I have increased my cache size -\nshared_buffers is 2000 and I have doubled the default max_fsm... settings\n(although I am not sure what they do). sort_mem is 8192.\n\nThe from / where is\n\nFROM oscar_node N, oscar_point P\nwhere N.\"GEOM_ID_OF_POINT\" = P.\"POINT_ID\"\nand N.\"TILE_REF\" = P.\"TILE_REF\"\nand N.\"TILE_REF\" in ('TQ27NE','TQ28SE','TQ37NW','TQ38SW')\nand P.\"TILE_REF\" in ('TQ27NE','TQ28SE','TQ37NW','TQ38SW')\nand P.\"FEAT_CODE\" = 3500\nand P.wkb_geometry && GeometryFromText('BOX3D(529540.0 179658.88,530540.0\n180307.12)'::box3d,-1)\n\noscar_node and oscar_point both have about 3m rows. PK on oscar_node is\ncomposite of \"TILE_REF\" and \"NODE_ID\". PK on oscar_point is \"TILE_REF\" and\n\"POINT_ID\". The tables are indexed on feat_code and I have an index on\nwkb_geometry. (This is a GIST index). I have increased the statistics size\nand done the analyze command.\n\nHere is my explain plan\n\n Nested Loop (cost=0.00..147.11 rows=1 width=148)\n Join Filter: (\"inner\".\"GEOM_ID_OF_POINT\" = \"outer\".\"POINT_ID\")\n -> Index Scan using gidx_oscar_point on oscar_point p (cost=0.00..61.34\nrows=1 width=57)\n Index Cond: (wkb_geometry && 'SRID=-1;BOX3D(529540 179658.88\n0,530540 180307.12 0)'::geometry)\n Filter: (((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n'TQ28SE'::bpchar) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" =\n'TQ38SW'::bpchar)) AND (\"FEAT_CODE\" = 3500))\n -> Index Scan using idx_on_tile_ref on oscar_node n (cost=0.00..85.74\nrows=2 width=91)\n Index Cond: (n.\"TILE_REF\" = \"outer\".\"TILE_REF\")\n Filter: ((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n'TQ28SE'::bpchar) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" =\n'TQ38SW'::bpchar))\n\n\nI am seeing this message in my logs.\n\n\"bt_fixroot: not valid old root page\"\n\nMaybe this is relevant to my performance problems.\n\nI know this has been a long message but I would really appreciate any\nperformance tips.\n\nThanks\n\n\nChris\n\n\n", "msg_date": "Sat, 11 Oct 2003 10:43:04 +0100", "msg_from": "\"Chris Faulkner\" <[email protected]>", "msg_from_op": true, "msg_subject": "sql performance and cache " }, { "msg_contents": "\n> I have two very similar queries which I need to execute. They both have\n> exactly the same from / where conditions. When I execute the first, it takes\n> about 16 seconds. 
The second is executed almost immediately after, it takes\n> 13 seconds. In short, I'd like to know why the query result isn't being\n> cached and any ideas on how to improve the execution.\n\n<snip>\n\n> OK - so I could execute the query once, and get the maximum size of the\n> array and the result set in one. I know what I am doing is less than optimal\n> but I had expected the query results to be cached. So the second execution\n> would be very quick. So why aren't they ? I have increased my cache size -\n> shared_buffers is 2000 and I have doubled the default max_fsm... settings\n> (although I am not sure what they do). sort_mem is 8192.\n\nPostgreSQL does not have, and has never had a query cache - so nothing \nyou do is going to make that second query faster.\n\nPerhaps you are confusing it with the MySQL query cache?\n\nChris\n\n", "msg_date": "Sat, 11 Oct 2003 18:11:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql performance and cache" }, { "msg_contents": "\n> PostgreSQL does not have, and has never had a query cache - so nothing \n> you do is going to make that second query faster.\n\nLet me clarify that. PostgreSQL will of course cache the disk pages \nused in getting the data for your query, which is why the second time \nyou run it, it is 3 seconds faster.\n\nHowever, it does not cache the _results_ of the query. Each time you \nrun it, it will be fully re-evaluated.\n\nThe btree error you give is bad and I'm sure the more experienced list \nmembers will want you to dig into it for them.\n\nChris\n\n\n", "msg_date": "Sat, 11 Oct 2003 18:16:35 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql performance and cache" }, { "msg_contents": "On Saturday 11 October 2003 10:43, Chris Faulkner wrote:\n> Hello all\n>\n> I have two very similar queries which I need to execute. They both have\n> exactly the same from / where conditions. When I execute the first, it\n> takes about 16 seconds. The second is executed almost immediately after, it\n> takes 13 seconds. In short, I'd like to know why the query result isn't\n> being cached and any ideas on how to improve the execution.\n\nThe short answer is that PG doesn't cache query results. The only way it could \ndo so safely is to lock all tables you access to make sure that no other \nprocess changes them. That would effectively turn PG into a single-user DB in \nshort notice.\n\n> The first query attempts to find the maximum size of an array in the result\n> set- the field is called \"level\". IT contains anything between 1 and 10\n> integers. I just need to know what the largest size is. I do this to find\n> out the maximum size of the \"level\" array.\n>\n> \"max(replace(split_part(array_dims(level),':',2),']','')::int)\"\n>\n> I know this is big and ugly but is there any better way of doing it ?\n>\n> The second query just returns the result set - it has exactly the same\n> FROM/Where clause.\n\nI assume these two queries are linked? If you rely on the max size being \nunchanged and have more than one process using the database, you should make \nsure you lock the rows in question.\n\n> OK - so I could execute the query once, and get the maximum size of the\n> array and the result set in one. I know what I am doing is less than\n> optimal but I had expected the query results to be cached. So the second\n> execution would be very quick. So why aren't they ? 
I have increased my\n> cache size - shared_buffers is 2000 and I have doubled the default\n> max_fsm... settings (although I am not sure what they do). sort_mem is\n> 8192.\n\nPG will cache the underlying data, but not the results. The values you are \nchanging are used to hold table/index rows etc. This means the second query \nshouldn't need to access the disk if the rows it requires are cached.\n\nThere is a discussion of the postgresql.conf file and how to tune it at:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nGiven the explain attached, 16 secs seems slow. Could you post an EXPLAIN \nANALYSE of either/both queries to the performance list. I'd drop the sql list \nwhen we're just talking about performance.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Sat, 11 Oct 2003 11:39:10 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "Hello\n\nThanks for the reply.\n\n> The short answer is that PG doesn't cache query results. The only\n> way it could\n> do so safely is to lock all tables you access to make sure that no other\n> process changes them. That would effectively turn PG into a\n> single-user DB in\n> short notice.\n\nI am not sure I agree with you. I have done similar things with Oracle and\nfound that the second query will execute much more quickly than the first.\nIt could be made to work in at least two scenarios\n\n- as a user/application perspective - you accept that the result might not\nbe up-to-date and take what comes back. This would be acceptable in my case\nbecause I know that the tables will not change.\nOR\n- the database could cache the result set. If some of the data is changed by\nanother query or session, then the database flushes the result set out of\nthe cache.\n\n> I assume these two queries are linked? If you rely on the max size being\n> unchanged and have more than one process using the database, you\n> should make\n> sure you lock the rows in question.\n\nI can rely on the max size remaining the same. As I mentioned above, the\ntables are entirely read only. The data will not be updated or deleted by\nanyone - I don't need to worry about that. The data will be updated en masse\nonce every 3 months.\n\n> There is a discussion of the postgresql.conf file and how to tune it at:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nThanks for that.\n\n> Given the explain attached, 16 secs seems slow. Could you post an EXPLAIN\n> ANALYSE of either/both queries to the performance list. I'd drop\n> the sql list\n> when we're just talking about performance.\n\nTo be honest, my main concern was about the cache. If the second one could\nuse a cache amd execute in 2 seconds, that would be better that reducing the\nexecution of each individual query by 30% or so.\n\nThanks for the offer of help on this one. 
explain analyze gives me the same\nas the last message - did you want verbose ?\n\n Nested Loop (cost=0.00..147.11 rows=1 width=148) (actual\ntime=84.00..12323.00 rows=67 loops=1)\n Join Filter: (\"inner\".\"GEOM_ID_OF_POINT\" = \"outer\".\"POINT_ID\")\n -> Index Scan using gidx_oscar_point on oscar_point p (cost=0.00..61.34\nrows=1 width=57) (actual time=0.00..9.00 rows=67 loops=1)\n Index Cond: (wkb_geometry && 'SRID=-1;BOX3D(529540 179658.88\n0,530540 1\n80307.12 0)'::geometry)\n Filter: (((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n'TQ28SE'::bp\nchar) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" = 'TQ38SW'::bpchar))\nAND\n (\"FEAT_CODE\" = 3500))\n -> Index Scan using idx_on_tile_ref on oscar_node n (cost=0.00..85.74\nrows=2 width=91) (actual time=0.06..150.07 rows=4797 loops=67)\n Index Cond: (n.\"TILE_REF\" = \"outer\".\"TILE_REF\")\n Filter: ((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n'TQ28SE'::bpchar) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" =\n'TQ38SW'::bpchar))\n Total runtime: 12325.00 msec\n(9 rows)\n\nThanks\n\n\nChris\n\n\n", "msg_date": "Sat, 11 Oct 2003 12:12:01 +0100", "msg_from": "\"Chris Faulkner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "\n> Nested Loop (cost=0.00..147.11 rows=1 width=148) (actual\n> time=84.00..12323.00 rows=67 loops=1)\n\nThe planner estimate doesn't seem to match reality in that particular \nstep. Are you sure you've run:\n\nANALYZE oscar_node;\nANALYZE oscar_point;\n\nAnd you could even run VACUUM FULL on them just to make sure.\n\nDoes that make any difference?\n\nChris\n\n\n", "msg_date": "Sat, 11 Oct 2003 19:26:23 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "\n> Nested Loop (cost=0.00..147.11 rows=1 width=148) (actual\n> time=84.00..12323.00 rows=67 loops=1)\n\nThe planner estimate doesn't seem to match reality in that particular \nstep. Are you sure you've run:\n\nANALYZE oscar_node;\nANALYZE oscar_point;\n\nAnd you could even run VACUUM FULL on them just to make sure.\n\nDoes that make any difference?\n\nChris\n\n\n", "msg_date": "Sat, 11 Oct 2003 19:26:29 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "On Saturday 11 October 2003 12:12, Chris Faulkner wrote:\n> Hello\n>\n> Thanks for the reply.\n>\n> > The short answer is that PG doesn't cache query results. The only\n> > way it could\n> > do so safely is to lock all tables you access to make sure that no other\n> > process changes them. That would effectively turn PG into a\n> > single-user DB in\n> > short notice.\n>\n> I am not sure I agree with you. I have done similar things with Oracle and\n> found that the second query will execute much more quickly than the first.\n> It could be made to work in at least two scenarios\n\nI'm guessing because the underlying rows and perhaps the plan are cached, \nrather than the results. If you cached the results of the first query you'd \nonly have the max length, not your other data anyway.\n\n[snip]\n\n> > I assume these two queries are linked? If you rely on the max size being\n> > unchanged and have more than one process using the database, you\n> > should make\n> > sure you lock the rows in question.\n>\n> I can rely on the max size remaining the same. 
As I mentioned above, the\n> tables are entirely read only. The data will not be updated or deleted by\n> anyone - I don't need to worry about that. The data will be updated en\n> masse once every 3 months.\n\nHmm - might be worth adding a column for your array length and pre-calculating \nif your data is basically static.\n\n> > There is a discussion of the postgresql.conf file and how to tune it at:\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n>\n> Thanks for that.\n>\n> > Given the explain attached, 16 secs seems slow. Could you post an EXPLAIN\n> > ANALYSE of either/both queries to the performance list. I'd drop\n> > the sql list\n> > when we're just talking about performance.\n>\n> To be honest, my main concern was about the cache. If the second one could\n> use a cache amd execute in 2 seconds, that would be better that reducing\n> the execution of each individual query by 30% or so.\n\nI'm puzzled as to why they aren't both below 2 seconds to start with - you're \nnot dealing with that many rows.\n\n> Thanks for the offer of help on this one. explain analyze gives me the same\n> as the last message - did you want verbose ?\n\nNope, this is what I need. Verbose prints pages of stuff that only the \ndevelopers would be interested in. This one actually runs the query and gives \nyou a second set of figures showing times.\n\n> Nested Loop (cost=0.00..147.11 rows=1 width=148) (actual\n> time=84.00..12323.00 rows=67 loops=1)\n> Join Filter: (\"inner\".\"GEOM_ID_OF_POINT\" = \"outer\".\"POINT_ID\")\n> -> Index Scan using gidx_oscar_point on oscar_point p \n> (cost=0.00..61.34 rows=1 width=57) (actual time=0.00..9.00 rows=67 loops=1)\n> Index Cond: (wkb_geometry && 'SRID=-1;BOX3D(529540 179658.88\n> 0,530540 1\n> 80307.12 0)'::geometry)\n> Filter: (((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n> 'TQ28SE'::bp\n> char) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" =\n> 'TQ38SW'::bpchar)) AND\n> (\"FEAT_CODE\" = 3500))\n\nThis next bit is the issue. It's joining on TILE_REF and then filtering by \nyour three static values. That's taking 67 * 150ms = 10.05secs\n\n> -> Index Scan using idx_on_tile_ref on oscar_node n (cost=0.00..85.74\n> rows=2 width=91) (actual time=0.06..150.07 rows=4797 loops=67)\n> Index Cond: (n.\"TILE_REF\" = \"outer\".\"TILE_REF\")\n> Filter: ((\"TILE_REF\" = 'TQ27NE'::bpchar) OR (\"TILE_REF\" =\n> 'TQ28SE'::bpchar) OR (\"TILE_REF\" = 'TQ37NW'::bpchar) OR (\"TILE_REF\" =\n> 'TQ38SW'::bpchar))\n\nNow if you look at the first set of figures, it's estimating 2 rows rather \nthan the 4797 you're actually getting. That's probably why it's chosen to \njoin then filter rather than the other way around.\n\nI'd suggest the following:\n1. VACUUM FULL on the table in question if you haven't done so since the last \nupdate/reload. If you aren't doing this after every bulk upload, you probably \nshould be.\n2. VACUUM ANALYSE/ANALYSE the table.\n3. Check the tuning document I mentioned and make sure your settings are at \nleast reasonable. They don't have to be perfect - that last 10% takes \nforever, but if they are badly wrong it can cripple you.\n4. PG should now have up-to-date stats and a reasonable set of config \nsettings. 
If it's still getting its row estimates wrong, we'll have to look \nat the statistics its got.\n\nIf we reach the statistics tinkering stage, it might be better to wait til \nMonday if you can - more people on the list then.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Sat, 11 Oct 2003 12:55:08 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "Chris, People:\n\n(Dropped SQL list because we're cross-posting unnecessarily)\n\n> I am not sure I agree with you. I have done similar things with Oracle and\n> found that the second query will execute much more quickly than the first.\n> It could be made to work in at least two scenarios\n\nActually, PostgreSQL often DOES cache data, it just uses the Kernel cache \nrather than any memory application built into Postgres, and it caches the \nunderlying data, not the final query results. Base data for query sets gets \ncached in RAM after a query, and the second query often *does* run much \nfaster.\n\nFor example, I was running some queries against the TPC-R OSDL database, and \nthe first time I ran the queries they took about 11 seconds each, the second \ntime (for each query) it was about 0.5 seconds because the data hadn't \nchanged and the underlying rowsets were in memory.\n\nI think it's likely that your machine has *already* cached the data in memory, \nwhich is why you don't see improvement on the second run. The slow execution \ntime is the result of bad planner decisions and others are helping you adjust \nthat.\n\nNow, regarding caching final query results in memory: This seems like a lot of \neffort for very little return to me. Doing so would require that all \nunderlying data stay the same, and on a complex query would require an \nimmense infrastructure of data-change tracking to verify.\n\nIf you want a data snapshot, ignoring the possibility of changes, there are \nalready ways to do this:\na) use a temp table;\nb) use your middleware to cache the query results\n\nNow, if someone were to present us with an implementation which effectively \nbuilt and automated form of option (b) above into a optional PG plug-in, I \nwouldn't vote against it. But I couldn't see voting for putting it on the \nTODO list, either.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 12 Oct 2003 14:31:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "On Sat, Oct 11, 2003 at 10:43:04AM +0100, Chris Faulkner wrote:\n> I have two very similar queries which I need to execute. They both\n> have exactly the same from / where conditions. When I execute the\n> first, it takes about 16 seconds. The second is executed almost\n> immediately after, it takes 13 seconds. In short, I'd like to know\n> why the query result isn't being cached and any ideas on how to\n> improve the execution.\n\nThe way to do the type of caching you're talking about, if i\nunderstand you correctly, would be to create a temporary\ntable. Specifically, create a temporary table with the results of the\nsecond query. 
Then run a select * on that table (with no where\nclause), and follow it with a select max(replace(...)) on the same\ntable (no where clause).\n\nThat guarantees two things:\n\n1- The joins/filters are not parsed and evaluated twice, with the\ncorresponding disk reads.\n2- The data is exactly consistent between the two queries.\n\nCorrect me if i misunderstood your problem.\n\n-johnnnnnnnnnn\n\n", "msg_date": "Mon, 13 Oct 2003 11:19:59 -0500", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql performance and cache" }, { "msg_contents": "\"Chris Faulkner\" <[email protected]> writes:\n> I am seeing this message in my logs.\n> \"bt_fixroot: not valid old root page\"\n\nThat's not good. I'd suggest reindexing that index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Oct 2003 12:14:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache " }, { "msg_contents": "On Sat, 11 Oct 2003, Christopher Kings-Lynne wrote:\n\n> \n> > I have two very similar queries which I need to execute. They both have\n> > exactly the same from / where conditions. When I execute the first, it takes\n> > about 16 seconds. The second is executed almost immediately after, it takes\n> > 13 seconds. In short, I'd like to know why the query result isn't being\n> > cached and any ideas on how to improve the execution.\n> \n> <snip>\n> \n> > OK - so I could execute the query once, and get the maximum size of the\n> > array and the result set in one. I know what I am doing is less than optimal\n> > but I had expected the query results to be cached. So the second execution\n> > would be very quick. So why aren't they ? I have increased my cache size -\n> > shared_buffers is 2000 and I have doubled the default max_fsm... settings\n> > (although I am not sure what they do). sort_mem is 8192.\n> \n> PostgreSQL does not have, and has never had a query cache - so nothing \n> you do is going to make that second query faster.\n> \n> Perhaps you are confusing it with the MySQL query cache?\n> \n> Chris\n> \nIs there plan on developing one (query cache)?\n\nThanks\n\nWei\n\n", "msg_date": "Tue, 14 Oct 2003 12:48:06 -0400 (EDT)", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "On Tue, 14 Oct 2003, Wei Weng wrote:\n\n> On Sat, 11 Oct 2003, Christopher Kings-Lynne wrote:\n> \n> > \n> > > I have two very similar queries which I need to execute. They both have\n> > > exactly the same from / where conditions. When I execute the first, it takes\n> > > about 16 seconds. The second is executed almost immediately after, it takes\n> > > 13 seconds. In short, I'd like to know why the query result isn't being\n> > > cached and any ideas on how to improve the execution.\n> > \n> > <snip>\n> > \n> > > OK - so I could execute the query once, and get the maximum size of the\n> > > array and the result set in one. I know what I am doing is less than optimal\n> > > but I had expected the query results to be cached. So the second execution\n> > > would be very quick. So why aren't they ? I have increased my cache size -\n> > > shared_buffers is 2000 and I have doubled the default max_fsm... settings\n> > > (although I am not sure what they do). 
sort_mem is 8192.\n> > \n> > PostgreSQL does not have, and has never had a query cache - so nothing \n> > you do is going to make that second query faster.\n> > \n> > Perhaps you are confusing it with the MySQL query cache?\n> > \n> > Chris\n> > \n> Is there plan on developing one (query cache)?\n\nNot really, Postgresql's design makes it a bit of a non-winner.\n\n", "msg_date": "Tue, 14 Oct 2003 11:26:45 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] sql performance and cache" }, { "msg_contents": "> > Perhaps you are confusing it with the MySQL query cache?\n\n> Is there plan on developing one (query cache)?\n\nFor the most part, prepared queries and cursors give you a greater\nadvantage due to their versatility -- both of which we do have.\n\nIn the cases where an actual cache is useful, the client application\ncould do it just as easily or temp tables can be used.\n\nI suspect it would be implemented more as a caching proxy than as an\nactual part of PostgreSQL, should someone really want this feature.", "msg_date": "Tue, 14 Oct 2003 14:15:39 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql performance and cache" } ]
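A concrete version of the temp-table idea suggested earlier in the thread, reusing the FROM/WHERE from Chris's query. It assumes the level array is a column of oscar_node, abbreviates the select list, and leaves out the PostGIS bounding-box condition for brevity, so treat it as a sketch rather than a drop-in replacement.

-- Run the expensive join once and keep the result for the session:
CREATE TEMP TABLE matched_nodes AS
SELECT N.*
  FROM oscar_node N, oscar_point P
 WHERE N."GEOM_ID_OF_POINT" = P."POINT_ID"
   AND N."TILE_REF" = P."TILE_REF"
   AND N."TILE_REF" IN ('TQ27NE','TQ28SE','TQ37NW','TQ38SW')
   AND P."FEAT_CODE" = 3500;

-- Both statements now read the cached copy instead of re-running the join:
SELECT max(replace(split_part(array_dims(level), ':', 2), ']', '')::int)
  FROM matched_nodes;

SELECT * FROM matched_nodes;

On releases that have it, array_upper(level, 1) returns the same number with less string juggling.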
[ { "msg_contents": "I am running an update-query to benchmark various databases; the\npostgres version is,\n \nUPDATE user_account SET last_name = 'abc'\nWHERE user_account_id IN (SELECT user_account_id FROM commercial_entity,\ncommercial_service WHERE yw_account_id IS NULL\nAND commercial_entity.commercial_entity_id =\ncommercial_service.commercial_entity_id);\n \n \nThe inner query (the select), run by itself, takes about a second. Add\nthe outer query (the update-portion), and the query dies. The machine\nhas been vacuum-analzyed. Here is the explain-analyze:\n \nbenchtest=# EXPLAIN ANALYZE UPDATE user_account SET last_name = 'abc'\nbenchtest-# WHERE user_account_id IN (SELECT user_account_id FROM\ncommercial_entity, commercial_service WHERE yw_account_id IS NULL\nbenchtest(# AND commercial_entity.commercial_entity_id =\ncommercial_service.commercial_entity_id);\n\n Seq Scan on user_account (cost=0.00..813608944.88 rows=36242\nwidth=718) (actual time=15696258.98..16311130.29 rows=3075 loops=1)\nFilter: (subplan)\n SubPlan\n -> Materialize (cost=11224.77..11224.77 rows=86952 width=36)\n(actual time=0.06..106.40 rows=84831 loops=72483)\n -> Merge Join (cost=0.00..11224.77 rows=86952 width=36)\n(actual time=0.21..1845.13 rows=85158 loops=1)\n Merge Cond: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Index Scan using commercial_entity_pkey on\ncommercial_entity (cost=0.00..6787.27 rows=77862 width=24) (actual\ntime=0.06..469.56 rows=78132 loops=1)\n Filter: (yw_account_id IS NULL)\n -> Index Scan using comm_serv_comm_ent_id_i on\ncommercial_service (cost=0.00..2952.42 rows=88038 width=12) (actual\ntime=0.03..444.80 rows=88038 loops=1)\n Total runtime: 16332976.21 msec\n(10 rows)\n\n \nHere are the relevant parts of the schema:\n \n \n \nUSER_ACCOUNT\n \n Column | Type |\nModifiers\n-------------------------------+-----------------------------+----------\n-------------------\n user_account_id | numeric(10,0) | not null\n first_name | character varying(100) |\n last_name | character varying(100) |\nIndexes: user_account_pkey primary key btree (user_account_id),\n usr_acc_last_name_i btree (last_name),\nForeign Key constraints: $1 FOREIGN KEY (lang_id) REFERENCES\nlang(lang_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (user_role_id) REFERENCES\nuser_role(user_role_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_ENTITY\n \n Column | Type |\nModifiers\n---------------------------+-----------------------------+--------------\n-----------------------------------------------\n commercial_entity_id | numeric(10,0) | not null\n yw_account_id | numeric(10,0) |\nIndexes: commercial_entity_pkey primary key btree\n(commercial_entity_id),\n comm_ent_yw_acc_id_i btree (yw_account_id)\nForeign Key constraints: $1 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_SERVICE \n \n Column | Type | Modifiers\n----------------------+---------------+-----------\n commercial_entity_id | numeric(10,0) | not null\n service_type_id | numeric(10,0) | not null\n source_id | numeric(10,0) | not null\nIndexes: commercial_service_pkey primary key btree\n(commercial_entity_id, service_type_id),\n comm_serv_comm_ent_id_i btree (commercial_entity_id),\n comm_serv_serv_type_id_i btree 
(service_type_id),\n comm_serv_source_id_i btree (source_id)\nForeign Key constraints: $1 FOREIGN KEY (commercial_entity_id)\nREFERENCES commercial_entity(commercial_entity_id) ON UPDATE NO ACTION\nON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (service_type_id) REFERENCES\nservice_type(service_type_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n \n \nHere is the postgres.conf (or the variables that are not commented out):\n \ntcpip_socket = true\nmax_connections = 500\nshared_buffers = 32768 # min max_connections*2 or 16, 8KB each\nwal_buffers = 128 # min 4, typically 8KB each\nsort_mem = 4096 # min 64, size in KB\neffective_cache_size = 50000 # typically 8KB each\n \nIs it a problem with \"IN\"?\n \nDavid\n\n\n\n\n\n\n\n\nI am running an update-query to benchmark various databases; the \npostgres version is,\n \nUPDATE user_account SET last_name = 'abc'WHERE user_account_id IN \n(SELECT user_account_id FROM commercial_entity, commercial_service WHERE \nyw_account_id IS NULLAND commercial_entity.commercial_entity_id = \ncommercial_service.commercial_entity_id);\n \n \nThe inner query (the select), run by itself, takes \nabout a second. Add the outer query (the update-portion), and the query dies. \nThe machine has been vacuum-analzyed. Here is the \nexplain-analyze:\n \nbenchtest=# EXPLAIN ANALYZE UPDATE user_account SET \nlast_name = 'abc'benchtest-# WHERE user_account_id IN (SELECT \nuser_account_id FROM commercial_entity, commercial_service WHERE yw_account_id \nIS NULLbenchtest(# AND commercial_entity.commercial_entity_id = \ncommercial_service.commercial_entity_id);\n Seq Scan on user_account  \n(cost=0.00..813608944.88 rows=36242 width=718) (actual \ntime=15696258.98..16311130.29 rows=3075 loops=1)   Filter: \n(subplan)   SubPlan     ->  \nMaterialize  (cost=11224.77..11224.77 rows=86952 width=36) (actual \ntime=0.06..106.40 rows=84831 \nloops=72483)           \n->  Merge Join  (cost=0.00..11224.77 rows=86952 width=36) (actual \ntime=0.21..1845.13 rows=85158 \nloops=1)                 \nMerge Cond: (\"outer\".commercial_entity_id = \n\"inner\".commercial_entity_id)                 \n->  Index Scan using commercial_entity_pkey on commercial_entity  \n(cost=0.00..6787.27 rows=77862 width=24) (actual time=0.06..469.56 rows=78132 \nloops=1)                       \nFilter: (yw_account_id IS \nNULL)                 \n->  Index Scan using comm_serv_comm_ent_id_i on commercial_service  \n(cost=0.00..2952.42 rows=88038 width=12) (actual time=0.03..444.80 \nrows=88038 loops=1) Total runtime: 16332976.21 msec(10 \nrows)\n \nHere are the relevant parts of the \nschema:\n \n \n \nUSER_ACCOUNT\n \n            \nColumn             \n|            \nType             \n|          \nModifiers-------------------------------+-----------------------------+----------------------------- user_account_id               \n| \nnumeric(10,0)               \n| not \nnull first_name                    \n| character varying(100)      \n| last_name                     \n| character varying(100)      |Indexes: \nuser_account_pkey primary key btree \n(user_account_id),         \nusr_acc_last_name_i btree (last_name),Foreign Key constraints: $1 FOREIGN \nKEY (lang_id) REFERENCES lang(lang_id) ON UPDATE NO ACTION ON DELETE NO \nACTION,                         \n$2 FOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \nDELETE NO \nACTION,                         \n$3 FOREIGN KEY (user_role_id) REFERENCES 
user_role(user_role_id) ON UPDATE NO \nACTION ON DELETE NO ACTION \n \n \nCOMMERCIAL_ENTITY\n \n          \nColumn           \n|            \nType             \n|                          \nModifiers---------------------------+-----------------------------+------------------------------------------------------------- commercial_entity_id      \n| \nnumeric(10,0)               \n| not \nnull yw_account_id             \n| \nnumeric(10,0)               \n|Indexes: commercial_entity_pkey primary key btree \n(commercial_entity_id),         \ncomm_ent_yw_acc_id_i btree (yw_account_id)Foreign Key constraints: $1 \nFOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \nDELETE NO \nACTION,                         \n$2 FOREIGN KEY (user_account_id) REFERENCES user_account(user_account_id) ON \nUPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_SERVICE \n \n        \nColumn        |     \nType      | \nModifiers----------------------+---------------+----------- commercial_entity_id \n| numeric(10,0) | not \nnull service_type_id      | numeric(10,0) | \nnot \nnull source_id            \n| numeric(10,0) | not nullIndexes: commercial_service_pkey primary key btree \n(commercial_entity_id, \nservice_type_id),         \ncomm_serv_comm_ent_id_i btree \n(commercial_entity_id),         \ncomm_serv_serv_type_id_i btree \n(service_type_id),         \ncomm_serv_source_id_i btree (source_id)Foreign Key constraints: $1 FOREIGN \nKEY (commercial_entity_id) REFERENCES commercial_entity(commercial_entity_id) ON \nUPDATE NO ACTION ON DELETE NO \nACTION,                         \n$2 FOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \nDELETE NO \nACTION,                         \n$3 FOREIGN KEY (service_type_id) REFERENCES service_type(service_type_id) ON \nUPDATE NO ACTION ON DELETE NO ACTION\n \n \nHere is the postgres.conf (or the variables that \nare not commented out):\n \ntcpip_socket = truemax_connections = \n500\nshared_buffers = \n32768          # min \nmax_connections*2 or 16, 8KB eachwal_buffers = \n128               \n# min 4, typically 8KB each\nsort_mem = \n4096                 \n# min 64, size in KBeffective_cache_size = \n50000    # typically 8KB each\n \nIs it a problem with \"IN\"?\n \nDavid", "msg_date": "Sat, 11 Oct 2003 12:44:36 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Another weird one with an UPDATE" }, { "msg_contents": "Sorry - just found the FAQ (\nhttp://jamesthornton.com/postgres/FAQ/faq-english.html#4.22\n<http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22> ) on how\nIN is very slow.\n \nSo I rewrote the query:\n \n\\o ./data/temp.txt\nSELECT current_timestamp;\nUPDATE user_account SET last_name = 'abc'\nWHERE EXISTS (SELECT ua.user_account_id FROM user_account ua,\ncommercial_entity ce, commercial_service cs\nWHERE ua.user_account_id = ce.user_account_id AND\nce.commercial_entity_id = cs.commercial_entity_id);\nSELECT current_timestamp;\n\\o\n \nEXISTS is kind of a weird statement, and it doesn't appear to be\nidentical (the number of rows updated was 72,000 rather than 3500). It\nalso took 4 minutes to execute.\n \nIs there any way around this other than breaking the query into two? 
As\nin:\n \npstmt1 = conn.preprareStatement(\"SELECT ua.user_account_id FROM\nuser_account ua, commercial_entity ce, commercial_service cs\nWHERE ua.user_account_id = ce.user_account_id AND\nce.commercial_entity_id = cs.commercial_entity_id\");\nrset = pstmt1.executeQuery();\nwhile (rset.next())\n{\n pstmt2 = conn.prepareStatement(\"UPDATE user_account SET last_name =\n'abc' WHERE user_account_id = ?\");\n pstmt2.setLong(1, rset.getLong(1));\n ...\n}\n \nUnfort, that will be alot of data moved from Postgres->middle-tier\n(Weblogic/Resin), which is inefficient.\n \nAnyone see another solution?\n \nDavid.\n\n----- Original Message ----- \nFrom: David <mailto:[email protected]> Griffiths \nTo: [email protected]\n<mailto:[email protected]> \nSent: Saturday, October 11, 2003 12:44 PM\nSubject: [PERFORM] Another weird one with an UPDATE\n\n\nI am running an update-query to benchmark various databases; the\npostgres version is,\n \nUPDATE user_account SET last_name = 'abc'\nWHERE user_account_id IN (SELECT user_account_id FROM commercial_entity,\ncommercial_service WHERE yw_account_id IS NULL\nAND commercial_entity.commercial_entity_id =\ncommercial_service.commercial_entity_id);\n \n \nThe inner query (the select), run by itself, takes about a second. Add\nthe outer query (the update-portion), and the query dies. The machine\nhas been vacuum-analzyed. Here is the explain-analyze:\n \nbenchtest=# EXPLAIN ANALYZE UPDATE user_account SET last_name = 'abc'\nbenchtest-# WHERE user_account_id IN (SELECT user_account_id FROM\ncommercial_entity, commercial_service WHERE yw_account_id IS NULL\nbenchtest(# AND commercial_entity.commercial_entity_id =\ncommercial_service.commercial_entity_id);\n\n Seq Scan on user_account (cost=0.00..813608944.88 rows=36242\nwidth=718) (actual time=15696258.98..16311130.29 rows=3075 loops=1)\nFilter: (subplan)\n SubPlan\n -> Materialize (cost=11224.77..11224.77 rows=86952 width=36)\n(actual time=0.06..106.40 rows=84831 loops=72483)\n -> Merge Join (cost=0.00..11224.77 rows=86952 width=36)\n(actual time=0.21..1845.13 rows=85158 loops=1)\n Merge Cond: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Index Scan using commercial_entity_pkey on\ncommercial_entity (cost=0.00..6787.27 rows=77862 width=24) (actual\ntime=0.06..469.56 rows=78132 loops=1)\n Filter: (yw_account_id IS NULL)\n -> Index Scan using comm_serv_comm_ent_id_i on\ncommercial_service (cost=0.00..2952.42 rows=88038 width=12) (actual\ntime=0.03..444.80 rows=88038 loops=1)\n Total runtime: 16332976.21 msec\n(10 rows)\n\n \nHere are the relevant parts of the schema:\n \n \n \nUSER_ACCOUNT\n \n Column | Type |\nModifiers\n-------------------------------+-----------------------------+----------\n-------------------\n user_account_id | numeric(10,0) | not null\n first_name | character varying(100) |\n last_name | character varying(100) |\nIndexes: user_account_pkey primary key btree (user_account_id),\n usr_acc_last_name_i btree (last_name),\nForeign Key constraints: $1 FOREIGN KEY (lang_id) REFERENCES\nlang(lang_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (user_role_id) REFERENCES\nuser_role(user_role_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_ENTITY\n \n Column | Type |\nModifiers\n---------------------------+-----------------------------+--------------\n-----------------------------------------------\n commercial_entity_id | numeric(10,0) | not null\n 
yw_account_id | numeric(10,0) |\nIndexes: commercial_entity_pkey primary key btree\n(commercial_entity_id),\n comm_ent_yw_acc_id_i btree (yw_account_id)\nForeign Key constraints: $1 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_SERVICE \n \n Column | Type | Modifiers\n----------------------+---------------+-----------\n commercial_entity_id | numeric(10,0) | not null\n service_type_id | numeric(10,0) | not null\n source_id | numeric(10,0) | not null\nIndexes: commercial_service_pkey primary key btree\n(commercial_entity_id, service_type_id),\n comm_serv_comm_ent_id_i btree (commercial_entity_id),\n comm_serv_serv_type_id_i btree (service_type_id),\n comm_serv_source_id_i btree (source_id)\nForeign Key constraints: $1 FOREIGN KEY (commercial_entity_id)\nREFERENCES commercial_entity(commercial_entity_id) ON UPDATE NO ACTION\nON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (service_type_id) REFERENCES\nservice_type(service_type_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n \n \nHere is the postgres.conf (or the variables that are not commented out):\n \ntcpip_socket = true\nmax_connections = 500\nshared_buffers = 32768 # min max_connections*2 or 16, 8KB each\nwal_buffers = 128 # min 4, typically 8KB each\nsort_mem = 4096 # min 64, size in KB\neffective_cache_size = 50000 # typically 8KB each\n \nIs it a problem with \"IN\"?\n \nDavid\n\n\n\n\n\n\n\n\nSorry - just found the FAQ (http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22) \non how IN is very slow.\n \nSo I rewrote the query:\n \n\\o ./data/temp.txt\nSELECT current_timestamp;\nUPDATE user_account SET last_name = 'abc'WHERE \nEXISTS (SELECT ua.user_account_id FROM user_account ua, commercial_entity ce, \ncommercial_service csWHERE ua.user_account_id = ce.user_account_id AND \nce.commercial_entity_id = cs.commercial_entity_id);\nSELECT current_timestamp;\n\\o\n \nEXISTS is kind of a weird statement, and it doesn't \nappear to be identical (the number of rows updated was 72,000 rather than 3500). \nIt also took 4 minutes to execute.\n \nIs there any way around this other than breaking \nthe query into two? 
As in:\n \npstmt1 = conn.preprareStatement(\"SELECT \nua.user_account_id FROM user_account ua, commercial_entity ce, \ncommercial_service csWHERE ua.user_account_id = ce.user_account_id AND \nce.commercial_entity_id = cs.commercial_entity_id\");\nrset = pstmt1.executeQuery();\nwhile (rset.next())\n{\n    pstmt2 = \nconn.prepareStatement(\"UPDATE user_account SET last_name = 'abc' WHERE \nuser_account_id = ?\");\n    pstmt2.setLong(1, \nrset.getLong(1));\n    ...\n}\n \nUnfort, that will be alot of data moved from \nPostgres->middle-tier (Weblogic/Resin), which is inefficient.\n \nAnyone see another solution?\n \nDavid.\n\n----- Original Message ----- \nFrom:\nDavid \n Griffiths \nTo: [email protected]\n\nSent: Saturday, October 11, 2003 12:44 \n PM\nSubject: [PERFORM] Another weird one with \n an UPDATE\n\n\nI am running an update-query to benchmark various databases; the \n postgres version is,\n \nUPDATE user_account SET last_name = 'abc'WHERE \n user_account_id IN (SELECT user_account_id FROM commercial_entity, \n commercial_service WHERE yw_account_id IS NULLAND \n commercial_entity.commercial_entity_id = \n commercial_service.commercial_entity_id);\n \n \nThe inner query (the select), run by itself, \n takes about a second. Add the outer query (the update-portion), and the query \n dies. The machine has been vacuum-analzyed. Here is the \n explain-analyze:\n \nbenchtest=# EXPLAIN ANALYZE UPDATE user_account \n SET last_name = 'abc'benchtest-# WHERE user_account_id IN (SELECT \n user_account_id FROM commercial_entity, commercial_service WHERE yw_account_id \n IS NULLbenchtest(# AND commercial_entity.commercial_entity_id = \n commercial_service.commercial_entity_id);\n Seq Scan on user_account  \n (cost=0.00..813608944.88 rows=36242 width=718) (actual \n time=15696258.98..16311130.29 rows=3075 loops=1)   Filter: \n (subplan)   SubPlan     ->  \n Materialize  (cost=11224.77..11224.77 rows=86952 width=36) (actual \n time=0.06..106.40 rows=84831 \n loops=72483)           \n ->  Merge Join  (cost=0.00..11224.77 rows=86952 width=36) (actual \n time=0.21..1845.13 rows=85158 \n loops=1)                 \n Merge Cond: (\"outer\".commercial_entity_id = \n \"inner\".commercial_entity_id)                 \n ->  Index Scan using commercial_entity_pkey on commercial_entity  \n (cost=0.00..6787.27 rows=77862 width=24) (actual time=0.06..469.56 rows=78132 \n loops=1)                       \n Filter: (yw_account_id IS \n NULL)                 \n ->  Index Scan using comm_serv_comm_ent_id_i on \n commercial_service  (cost=0.00..2952.42 rows=88038 width=12) (actual \n time=0.03..444.80 rows=88038 loops=1) Total runtime: 16332976.21 \n msec(10 rows)\n \nHere are the relevant parts of the \n schema:\n \n \n \nUSER_ACCOUNT\n \n            \n Column             \n |            \n Type             \n |          \n Modifiers-------------------------------+-----------------------------+----------------------------- user_account_id               \n | \n numeric(10,0)               \n | not \n null first_name                    \n | character varying(100)      \n | last_name                     \n | character varying(100)      |Indexes: \n user_account_pkey primary key btree \n (user_account_id),         \n usr_acc_last_name_i btree (last_name),Foreign Key constraints: $1 FOREIGN \n KEY (lang_id) REFERENCES lang(lang_id) ON UPDATE NO ACTION ON DELETE NO \n ACTION,                         \n $2 FOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \n DELETE NO \n ACTION,                       
  \n $3 FOREIGN KEY (user_role_id) REFERENCES user_role(user_role_id) ON UPDATE NO \n ACTION ON DELETE NO ACTION \n \n \nCOMMERCIAL_ENTITY\n \n          \n Column           \n |            \n Type             \n |                          \n Modifiers---------------------------+-----------------------------+------------------------------------------------------------- commercial_entity_id      \n | \n numeric(10,0)               \n | not \n null yw_account_id             \n | \n numeric(10,0)               \n |Indexes: commercial_entity_pkey primary key btree \n (commercial_entity_id),         \n comm_ent_yw_acc_id_i btree (yw_account_id)Foreign Key constraints: $1 \n FOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \n DELETE NO \n ACTION,                         \n $2 FOREIGN KEY (user_account_id) REFERENCES user_account(user_account_id) ON \n UPDATE NO ACTION ON DELETE NO ACTION\n \n \n \nCOMMERCIAL_SERVICE \n \n        \n Column        |     \n Type      | \n Modifiers----------------------+---------------+----------- commercial_entity_id \n | numeric(10,0) | not \n null service_type_id      | numeric(10,0) | \n not \n null source_id            \n | numeric(10,0) | not nullIndexes: commercial_service_pkey primary key \n btree (commercial_entity_id, \n service_type_id),         \n comm_serv_comm_ent_id_i btree \n (commercial_entity_id),         \n comm_serv_serv_type_id_i btree \n (service_type_id),         \n comm_serv_source_id_i btree (source_id)Foreign Key constraints: $1 FOREIGN \n KEY (commercial_entity_id) REFERENCES commercial_entity(commercial_entity_id) \n ON UPDATE NO ACTION ON DELETE NO \n ACTION,                         \n $2 FOREIGN KEY (source_id) REFERENCES source(source_id) ON UPDATE NO ACTION ON \n DELETE NO \n ACTION,                         \n $3 FOREIGN KEY (service_type_id) REFERENCES service_type(service_type_id) ON \n UPDATE NO ACTION ON DELETE NO ACTION\n \n \nHere is the postgres.conf (or the variables that \n are not commented out):\n \ntcpip_socket = truemax_connections = \n 500\nshared_buffers = \n 32768          # min \n max_connections*2 or 16, 8KB eachwal_buffers = \n 128               \n # min 4, typically 8KB each\nsort_mem = \n 4096                 \n # min 64, size in KBeffective_cache_size = \n 50000    # typically 8KB each\n \nIs it a problem with \"IN\"?\n \nDavid", "msg_date": "Sat, 11 Oct 2003 14:35:52 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "On Sat, 11 Oct 2003, David Griffiths wrote:\n\n> Sorry - just found the FAQ (\n> http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22\n> <http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22> ) on how\n> IN is very slow.\n>\n> So I rewrote the query:\n>\n> \\o ./data/temp.txt\n> SELECT current_timestamp;\n> UPDATE user_account SET last_name = 'abc'\n> WHERE EXISTS (SELECT ua.user_account_id FROM user_account ua,\n> commercial_entity ce, commercial_service cs\n> WHERE ua.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id = cs.commercial_entity_id);\n> SELECT current_timestamp;\n\nI don't think that's the query you want. 
You're not binding the subselect\nto the outer values of user_account.\n\nI think you want something like:\nUPDATE user_account SET last_name = 'abc'\n WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n WHERE user_account.user_account_id = ce.user_account_id AND\n ce.commercial_entity_id = cs.commercial_entity_id);\n", "msg_date": "Sat, 11 Oct 2003 15:34:52 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "Thanks - that worked.\n\nDavid\n----- Original Message -----\nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"David Griffiths\" <[email protected]>\nCc: <[email protected]>\nSent: Saturday, October 11, 2003 3:34 PM\nSubject: Re: [PERFORM] Another weird one with an UPDATE\n\n\n> On Sat, 11 Oct 2003, David Griffiths wrote:\n>\n> > Sorry - just found the FAQ (\n> > http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22\n> > <http://jamesthornton.com/postgres/FAQ/faq-english.html#4.22> ) on how\n> > IN is very slow.\n> >\n> > So I rewrote the query:\n> >\n> > \\o ./data/temp.txt\n> > SELECT current_timestamp;\n> > UPDATE user_account SET last_name = 'abc'\n> > WHERE EXISTS (SELECT ua.user_account_id FROM user_account ua,\n> > commercial_entity ce, commercial_service cs\n> > WHERE ua.user_account_id = ce.user_account_id AND\n> > ce.commercial_entity_id = cs.commercial_entity_id);\n> > SELECT current_timestamp;\n>\n> I don't think that's the query you want. You're not binding the subselect\n> to the outer values of user_account.\n>\n> I think you want something like:\n> UPDATE user_account SET last_name = 'abc'\n> WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n> WHERE user_account.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id = cs.commercial_entity_id);\n", "msg_date": "Sat, 11 Oct 2003 22:35:39 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "[snip]\n\n> I think you want something like:\n> UPDATE user_account SET last_name = 'abc'\n> WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n> WHERE user_account.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id = cs.commercial_entity_id);\n\nUnfort, this is still taking a long time.\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Seq Scan on user_account (cost=0.00..748990.51 rows=36242 width=716)\n(actual time=10262.50..26568.03 rows=3771 loops=1)\n Filter: (subplan)\n SubPlan\n -> Nested Loop (cost=0.00..11.47 rows=1 width=24) (actual\ntime=0.24..0.24 rows=0 loops=72483)\n -> Index Scan using comm_ent_usr_acc_id_i on commercial_entity\nce (cost=0.00..4.12 rows=1 width=12) (actual time=0.05..0.05 rows=0\nloops=72483)\n Index Cond: ($0 = user_account_id)\n -> Index Scan using comm_serv_comm_ent_id_i on\ncommercial_service cs (cost=0.00..7.32 rows=3 width=12) (actual\ntime=1.72..1.72 rows=0 loops=7990)\n Index Cond: (\"outer\".commercial_entity_id =\ncs.commercial_entity_id)\n Total runtime: 239585.09 msec\n(9 rows)\n\nAnyone have any thoughts?\n\nDavid\n", "msg_date": "Sun, 12 Oct 2003 18:33:54 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "David Griffiths wrote:\n>>I think you want something 
like:\n>>UPDATE user_account SET last_name = 'abc'\n>> WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n>> WHERE user_account.user_account_id = ce.user_account_id AND\n>> ce.commercial_entity_id = cs.commercial_entity_id);\n> \n> Unfort, this is still taking a long time.\n> -------\n> Seq Scan on user_account (cost=0.00..748990.51 rows=36242 width=716)\n\nDo you have an index on user_account.user_account_id?\n\nJoe\n\n", "msg_date": "Sun, 12 Oct 2003 18:37:18 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "On Sun, 12 Oct 2003, David Griffiths wrote:\n\n> [snip]\n>\n> > I think you want something like:\n> > UPDATE user_account SET last_name = 'abc'\n> > WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n> > WHERE user_account.user_account_id = ce.user_account_id AND\n> > ce.commercial_entity_id = cs.commercial_entity_id);\n>\n> Unfort, this is still taking a long time.\n\nHmm, does\nUPDATE user_account SET last_name='abc'\n FROM commercial_entity ce, commercial_service cs\n WHERE user_account.user_account_id = ce.user_account_id AND\n ce.commercial_entity_id=cs.commercial_entity_id;\ngive the right results... That might end up being faster.\n", "msg_date": "Sun, 12 Oct 2003 18:48:24 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "Yes, the query operates only on indexed columns (all numeric(10)'s).\n\n Column | Type |\nModifiers\n-------------------------------+-----------------------------+--------------\n---------------\n user_account_id | numeric(10,0) | not null\n[snip]\nIndexes: user_account_pkey primary key btree (user_account_id),\nForeign Key constraints: $1 FOREIGN KEY (lang_id) REFERENCES lang(lang_id)\nON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (user_role_id) REFERENCES\nuser_role(user_role_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nDavid\n\n----- Original Message -----\nFrom: \"Joe Conway\" <[email protected]>\nTo: \"David Griffiths\" <[email protected]>\nCc: <[email protected]>\nSent: Sunday, October 12, 2003 6:37 PM\nSubject: Re: [PERFORM] Another weird one with an UPDATE\n\n\n> David Griffiths wrote:\n> >>I think you want something like:\n> >>UPDATE user_account SET last_name = 'abc'\n> >> WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n> >> WHERE user_account.user_account_id = ce.user_account_id AND\n> >> ce.commercial_entity_id = cs.commercial_entity_id);\n> >\n> > Unfort, this is still taking a long time.\n> > -------\n> > Seq Scan on user_account (cost=0.00..748990.51 rows=36242 width=716)\n>\n> Do you have an index on user_account.user_account_id?\n>\n> Joe\n", "msg_date": "Sun, 12 Oct 2003 23:21:24 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "It's a slight improvement, but that could be other things as well.\n\nI'd read that how you tune Postgres will determine how the optimizer works\non a query (sequential scan vs index scan). I am going to post all I've done\nwith tuning tommorow, and see if I've done anything dumb. I've found some\ncontradictory advice, and I'm still a bit hazy on how/why Postgres trusts\nthe OS to do caching. 
I'll post it all tommorow.\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------\n Merge Join (cost=11819.21..15258.55 rows=12007 width=752) (actual\ntime=4107.64..5587.81 rows=20880 loops=1)\n Merge Cond: (\"outer\".commercial_entity_id = \"inner\".commercial_entity_id)\n -> Index Scan using comm_serv_comm_ent_id_i on commercial_service cs\n(cost=0.00..3015.53 rows=88038 width=12) (actual time=0.05..487.23\nrows=88038 loops=1)\n -> Sort (cost=11819.21..11846.08 rows=10752 width=740) (actual\ntime=3509.07..3955.15 rows=25098 loops=1)\n Sort Key: ce.commercial_entity_id\n -> Merge Join (cost=0.00..9065.23 rows=10752 width=740) (actual\ntime=0.18..2762.13 rows=7990 loops=1)\n Merge Cond: (\"outer\".user_account_id =\n\"inner\".user_account_id)\n -> Index Scan using user_account_pkey on user_account\n(cost=0.00..8010.39 rows=72483 width=716) (actual time=0.05..2220.86\nrows=72483 loops=1)\n -> Index Scan using comm_ent_usr_acc_id_i on\ncommercial_entity ce (cost=0.00..4787.69 rows=78834 width=24) (actual\ntime=0.02..55.64 rows=7991 loops=1)\n Total runtime: 226239.77 msec\n(10 rows)\n\nDavid\n\n----- Original Message -----\nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"David Griffiths\" <[email protected]>\nCc: <[email protected]>\nSent: Sunday, October 12, 2003 6:48 PM\nSubject: Re: [PERFORM] Another weird one with an UPDATE\n\n\n> On Sun, 12 Oct 2003, David Griffiths wrote:\n>\n> > [snip]\n> >\n> > > I think you want something like:\n> > > UPDATE user_account SET last_name = 'abc'\n> > > WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service\ncs\n> > > WHERE user_account.user_account_id = ce.user_account_id AND\n> > > ce.commercial_entity_id = cs.commercial_entity_id);\n> >\n> > Unfort, this is still taking a long time.\n>\n> Hmm, does\n> UPDATE user_account SET last_name='abc'\n> FROM commercial_entity ce, commercial_service cs\n> WHERE user_account.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id=cs.commercial_entity_id;\n> give the right results... That might end up being faster.\n", "msg_date": "Sun, 12 Oct 2003 23:23:33 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "David Griffiths wrote:\n\n> It's a slight improvement, but that could be other things as well.\n> \n> I'd read that how you tune Postgres will determine how the optimizer works\n> on a query (sequential scan vs index scan). I am going to post all I've done\n> with tuning tommorow, and see if I've done anything dumb. I've found some\n> contradictory advice, and I'm still a bit hazy on how/why Postgres trusts\n> the OS to do caching. 
I'll post it all tommorow.\n> \n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> ----------------\n> Merge Join (cost=11819.21..15258.55 rows=12007 width=752) (actual\n> time=4107.64..5587.81 rows=20880 loops=1)\n> Merge Cond: (\"outer\".commercial_entity_id = \"inner\".commercial_entity_id)\n> -> Index Scan using comm_serv_comm_ent_id_i on commercial_service cs\n> (cost=0.00..3015.53 rows=88038 width=12) (actual time=0.05..487.23\n> rows=88038 loops=1)\n> -> Sort (cost=11819.21..11846.08 rows=10752 width=740) (actual\n> time=3509.07..3955.15 rows=25098 loops=1)\n> Sort Key: ce.commercial_entity_id\n\nI think this is the problem. Is there an index on ce.commercial_entity_id?\n\n> -> Merge Join (cost=0.00..9065.23 rows=10752 width=740) (actual\n> time=0.18..2762.13 rows=7990 loops=1)\n> Merge Cond: (\"outer\".user_account_id =\n> \"inner\".user_account_id)\n> -> Index Scan using user_account_pkey on user_account\n> (cost=0.00..8010.39 rows=72483 width=716) (actual time=0.05..2220.86\n> rows=72483 loops=1)\n> -> Index Scan using comm_ent_usr_acc_id_i on\n> commercial_entity ce (cost=0.00..4787.69 rows=78834 width=24) (actual\n> time=0.02..55.64 rows=7991 loops=1)\n\nIn this case of comparing account ids, its using two index scans. In the entity \nfield though, it chooses a sort. I think there is an index missing. The costs \nare also shot up as well.\n\n> Total runtime: 226239.77 msec\n\nStandard performance question. What was the last time these tables/database were \nvacuumed. Have you tuned postgresql.conf correctly?\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 13 Oct 2003 15:57:23 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "David Griffiths wrote:\n> Yes, the query operates only on indexed columns (all numeric(10)'s).\n> \n> Column | Type |\n> Modifiers\n> -------------------------------+-----------------------------+--------------\n> ---------------\n> user_account_id | numeric(10,0) | not null\n> [snip]\n> Indexes: user_account_pkey primary key btree (user_account_id),\n> Foreign Key constraints: $1 FOREIGN KEY (lang_id) REFERENCES lang(lang_id)\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> $2 FOREIGN KEY (source_id) REFERENCES\n> source(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> $3 FOREIGN KEY (user_role_id) REFERENCES\n> user_role(user_role_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nAnd what about commercial_entity.user_account_id. Is it indexed and what \nis its data type (i.e. does it match numeric(10,0))?\n\nAlso, have you run VACUUM ANALYZE lately?\n\nJoe\n\n", "msg_date": "Mon, 13 Oct 2003 08:04:27 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another weird one with an UPDATE" }, { "msg_contents": "\n> And what about commercial_entity.user_account_id. Is it indexed and what\n> is its data type (i.e. does it match numeric(10,0))?\n\nYup - all columns in the statement are indexed, and they are all\nnumeric(10,0).\n\n> Also, have you run VACUUM ANALYZE lately?\n\nYup - just before the last run.\n\nWill get together my tuning data now.\n\nDavid\n", "msg_date": "Mon, 13 Oct 2003 12:04:18 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another weird one with an UPDATE" } ]
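Because EXPLAIN ANALYZE actually executes the statement, the two rewrites the thread converges on can be timed against each other without disturbing the benchmark data by wrapping each in a transaction that is rolled back. This is a sketch using the thread's table names; the BEGIN/ROLLBACK wrapper is an assumption here, not something the posters showed:

BEGIN;
EXPLAIN ANALYZE
UPDATE user_account SET last_name = 'abc'
  FROM commercial_entity ce, commercial_service cs
 WHERE user_account.user_account_id = ce.user_account_id
   AND ce.commercial_entity_id = cs.commercial_entity_id;
ROLLBACK;  -- discards the rows the EXPLAIN ANALYZE just updated

BEGIN;
EXPLAIN ANALYZE
UPDATE user_account SET last_name = 'abc'
 WHERE EXISTS (SELECT 1
                 FROM commercial_entity ce, commercial_service cs
                WHERE user_account.user_account_id = ce.user_account_id
                  AND ce.commercial_entity_id = cs.commercial_entity_id);
ROLLBACK;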
[ { "msg_contents": "> Date: Sun, 12 Oct 2003 13:30:45 -0700\n> From: Josh Berkus <[email protected]>\n> To: Nick Barr <[email protected]>\n> Cc: [email protected]\n> Subject: Re: go for a script! / ex: PostgreSQL vs. MySQL\n> Message-ID: <[email protected]>\n>\n\n>> This would be parameters such as the block size and a few other\n>> compile time parameters. If we can get to some of these read-only\n>> parameters than that would make this step easier, certainly for the new\n>> recruits amongst us.\n>\n> Actually, from my perspective, we shouldn't bother with this; if an admin\n> knows enough to set an alternate blaock size for PG, then they know\n> enough to tweak the Conf file by hand. I think we should just issue a\n> warning that this script:\n> 1) does not work for anyone who is using non-default block sizes,\n\nThere was some talk, either on this list or freebsd-performance about\nsetting the default block size for PostgreSQL running on FreeBSD to be 16k\nbecause of performance reasons. That is: *default* for the port, user is not\nasked. So an automagical method to scale non-default block sizes is a very\nneeded thing.\n\n> 2) may not work well for anyone using unusual locales, optimization\n> flags, or other non-default compile options except for language\n> interfaces.\n\nDepends on what you consider 'unusual'? I hope not things like iso8859-x\n(or, to be exact, European languages) :)\n\n\n--\nLogic is a systematic method of coming to the wrong conclusion with\nconfidence.\n\n", "msg_date": "Mon, 13 Oct 2003 00:47:38 +0200", "msg_from": "\"Ivan Voras\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" }, { "msg_contents": "Ivan,\n\n> There was some talk, either on this list or freebsd-performance about\n> setting the default block size for PostgreSQL running on FreeBSD to be 16k\n> because of performance reasons. That is: *default* for the port, user is\n> not asked. So an automagical method to scale non-default block sizes is a\n> very needed thing.\n\nHmmm ... possibly. My concern is that if someone uses a very non-default \nvalue, such as 256K, then they are probably better off doing their own tuning \nbecause they've got an unusual system. However, we could easily limit it to \nthe range of 4K to 32K.\n\nOf course, since there's no GUC var, we'd have to ask the user to confirm \ntheir block size. I'm reluctant to take this approach because if the user \ngets it wrong, then the settings will be *way* off ... and possibly cause \nPostgreSQL to be unrunnable or have \"out of memory\" crashes.\n\nUnless there's a way to find it in the compiled source?\n\n> > 2) may not work well for anyone using unusual locales, optimization\n> > flags, or other non-default compile options except for language\n> > interfaces.\n>\n> Depends on what you consider 'unusual'? I hope not things like iso8859-x\n> (or, to be exact, European languages) :)\n\nOn second thought, I'm not sure what an \"unusual locale\" would be. Scratch \nthat.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 14 Oct 2003 10:09:26 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. 
MySQL" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Unless there's a way to find it in the compiled source?\n\nSee pg_controldata.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Oct 2003 14:15:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL " }, { "msg_contents": "> >> This would be parameters such as the block size and a few\n> >> other compile time parameters. If we can get to some of these\n> >> read-only parameters than that would make this step easier,\n> >> certainly for the new recruits amongst us.\n> >\n> > Actually, from my perspective, we shouldn't bother with this; if an admin\n> > knows enough to set an alternate blaock size for PG, then they know\n> > enough to tweak the Conf file by hand. I think we should just issue a\n> > warning that this script:\n> > 1) does not work for anyone who is using non-default block sizes,\n> \n> There was some talk, either on this list or freebsd-performance\n> about setting the default block size for PostgreSQL running on\n> FreeBSD to be 16k because of performance reasons. That is: *default*\n> for the port, user is not asked.\n\nReal quick, this isn't true, the block size is tunable, but does not\nchange the default. You can set PGBLOCKSIZE to the values \"16K\" or\n\"32K\" to change the block size, but the default remains 8K.\n\nhttp://lists.freebsd.org/pipermail/freebsd-database/2003-October/000111.html\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Tue, 14 Oct 2003 11:42:37 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL" } ]
[ { "msg_contents": "Actually, even Microsoft SQL Server will do this for you (you can even\nchose if it shoudl split it up on all processors or a maximum number).\nWill do it on any types of queries, as long as they're big enough (you\ncan tweak the cost limit, but the general idea is only process\nCPU-expensive queries that way)\n\n//Magnus\n\n\n> -----Original Message-----\n> From: Andriy Tkachuk [mailto:[email protected]] \n> Sent: Monday, October 13, 2003 10:53 AM\n> To: Bill Moran\n> Cc: johnnnnnn; [email protected]\n> Subject: Re: [PERFORM] One or more processor ?\n> \n> \n> On Fri, 10 Oct 2003, Bill Moran wrote:\n> \n> > johnnnnnn wrote:\n> > > On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:\n> > >\n> > >>4) It simply isn't practical to expect a single query to\n> > >> execute on multiple processors simultaneously.\n> > >>\n> > >>Do you know of any RDBMS that actually will execute a \n> single query \n> > >>on multiple processors?\n> > >\n> > > Yes, DB2 will do this if configured correctly. It's very \n> useful for \n> > > large, complicated queries that have multiple subplans.\n> >\n> > I expected there would be someone who did (although I \n> didn't know for \n> > sure).\n> >\n> > Is DB2 the only one that can do that?\n> \n> Oracle, i think, on partitioned tables.\n> \n> regards, andriy\n> \nhttp://www.imt.com.ua\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n", "msg_date": "Mon, 13 Oct 2003 11:04:44 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One or more processor ?" } ]
[ { "msg_contents": "Hi,\n\nI did a search in the discussion lists and found several\npointers about setting the max_fsm_relations and pages.\n\nI have a table that keeps being updated and noticed\nthat after a few days, the disk usage has growned to\nfrom just over 150 MB to like 2 GB !\n\nI followed the recommendations from the various search\nof the archives, changed the max_fsm_relations, pages,\nkeep doing vacuum like every minute while the\ntable of interest in being updated. I kept\nwatching the disk space usage and still noticed that\nit continues to increase.\n\nLooks like vacuum has no effect.\n\nI did vacuum tablename and don't intend to use\nthe full option since it locks the table.\n\nI have 7.3.3 running in Solaris 9.\n\nAny recommendation ?\n\nThanks.\n\nGan\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Mon, 13 Oct 2003 04:12:27 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Performance, vacuum and reclaiming space, fsm" }, { "msg_contents": "On Mon, 13 Oct 2003, Seum-Lim Gan wrote:\n\n> Hi,\n> \n> I did a search in the discussion lists and found several\n> pointers about setting the max_fsm_relations and pages.\n> \n> I have a table that keeps being updated and noticed\n> that after a few days, the disk usage has growned to\n> from just over 150 MB to like 2 GB !\n> \n> I followed the recommendations from the various search\n> of the archives, changed the max_fsm_relations, pages,\n> keep doing vacuum like every minute while the\n> table of interest in being updated. I kept\n> watching the disk space usage and still noticed that\n> it continues to increase.\n> \n> Looks like vacuum has no effect.\n> \n> I did vacuum tablename and don't intend to use\n> the full option since it locks the table.\n> \n> I have 7.3.3 running in Solaris 9.\n> \n> Any recommendation ?\n> \n> Thanks.\n> \n> Gan\n> \n\n\tTry auto_vacuum (its in the 7.4beta4 contrib directory) I find it \nvery useful. Often you find that every minute in fact can be a little too \noften. My table updates every couple of seconds but is vacuumed \n(automatically) every hmm hour. \n\tIf you have lots of overlapping vacumms and or editing connections \nrecords may be held on to by one vacuum so the next can't do its job. \nAlways ensure that there is only one vacuum process. (You can't do this \neasily with cron!) \n\tI'm still using 7.3.2. 7.3.3 is sposed to have some big bugs and \n7.3.4 was produced within 24 hours.(must upgrade at some point)\n\tOh yes Index have problems (I think this is fix in later \nversions...) so you might want to try reindex. \n\n\tThey are all worth a try its a brief summary of what been on \npreform for weeks and weeks now.\n\nPeter Childs\n\n", "msg_date": "Mon, 13 Oct 2003 10:23:07 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance, vacuum and reclaiming space, fsm" }, { "msg_contents": "Seum-Lim Gan wrote:\n> I have a table that keeps being updated and noticed\n> that after a few days, the disk usage has growned to\n> from just over 150 MB to like 2 GB !\n\nHmm... 
You have quite a lot of wasted space there..\n> \n> I followed the recommendations from the various search\n> of the archives, changed the max_fsm_relations, pages,\n> keep doing vacuum like every minute while the\n> table of interest in being updated. I kept\n> watching the disk space usage and still noticed that\n> it continues to increase.\n\nThat will help if your table is in good shape. Otherwise it will have little \neffect particularly after such amount of wasted space.\n\n> Looks like vacuum has no effect.\n\nIts not that.\n\n> I did vacuum tablename and don't intend to use\n> the full option since it locks the table.\n\nYou got to do that. simple vacuum keeps a running instance of server clean. But \nonce dead tuples spill to disk, nothing but vacumm full can reclaim that space.\n\nAnd don't forget, you got to reindex the indexes as well.\n\nOnce your table is in good shape, you can tune max_fsm_* and vacuum once a \nminute. That will keep it good..\n\n\n> I have 7.3.3 running in Solaris 9.\n> \n> Any recommendation ?\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 13 Oct 2003 14:55:22 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance, vacuum and reclaiming space, fsm" }, { "msg_contents": "\nI am not sure I can do the full vacuum.\nIf my system is doing updates in realtime and needs to be\nok 24 hours and 7 days a week non-stop, once I do\nvacuum full, even on that table, that table will\nget locked out and any quiery or updates that come in\nwill timeout.\n\nAny suggestion on what to do besides shutting down to\ndo full vacuum ?\n\nPeter Child also mentions there is indexing bugs.\nIs this fixed in 7.3.4 ? I did notice after the database\ngrew in disk usage, its performance greatly decreases !\n\nGan\n\n\n\n>Seum-Lim Gan wrote:\n>>I have a table that keeps being updated and noticed\n>>that after a few days, the disk usage has growned to\n>>from just over 150 MB to like 2 GB !\n>\n>Hmm... You have quite a lot of wasted space there..\n>>\n>>I followed the recommendations from the various search\n>>of the archives, changed the max_fsm_relations, pages,\n>>keep doing vacuum like every minute while the\n>>table of interest in being updated. I kept\n>>watching the disk space usage and still noticed that\n>>it continues to increase.\n>\n>That will help if your table is in good shape. Otherwise it will \n>have little effect particularly after such amount of wasted space.\n>\n>>Looks like vacuum has no effect.\n>\n>Its not that.\n>\n>>I did vacuum tablename and don't intend to use\n>>the full option since it locks the table.\n>\n>You got to do that. simple vacuum keeps a running instance of server \n>clean. But once dead tuples spill to disk, nothing but vacumm full \n>can reclaim that space.\n>\n>And don't forget, you got to reindex the indexes as well.\n>\n>Once your table is in good shape, you can tune max_fsm_* and vacuum \n>once a minute. That will keep it good..\n>\n>>I have 7.3.3 running in Solaris 9.\n>>\n>>Any recommendation ?\n>\n> HTH\n>\n> Shridhar\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. 
fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Mon, 13 Oct 2003 08:52:05 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance, vacuum and reclaiming space, fsm" }, { "msg_contents": "On Monday 13 October 2003 19:22, Seum-Lim Gan wrote:\n> I am not sure I can do the full vacuum.\n> If my system is doing updates in realtime and needs to be\n> ok 24 hours and 7 days a week non-stop, once I do\n> vacuum full, even on that table, that table will\n> get locked out and any quiery or updates that come in\n> will timeout.\n\nIf you have 150MB type of data as you said last time, you could take a pg_dump \nof database, drop the database and recreate it. By all chances it will take \nless time than compacting a database from 2GB to 150MB.\n\nIt does involve downtime but can't help it. Thats closet you can get.\n\n> Any suggestion on what to do besides shutting down to\n> do full vacuum ?\n\nDrop the indexes and recreate them. While creating the index, all the updates \nwill be blocked anyways.\n\n> Peter Child also mentions there is indexing bugs.\n> Is this fixed in 7.3.4 ? I did notice after the database\n\nNo. It is fixed in 7.4 and 7.4 is in beta still..\n\n> grew in disk usage, its performance greatly decreases !\n\nObviously that is due to unnecessary IO it has to do.\n\nThing is your database has reached a state that is really bad for it's \noperation. I strongly encourage you to recreate the database from backup, \nfrom scratch, tune postgresql properly and run autovacuum daemon from 7.4 \nsource dir. Besides that you would need to reindex nightly or per 5-6 hour \ndepending upon rate of insertion.\n\n Check http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html for \nperformance tuning starter tips..\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 13 Oct 2003 19:37:14 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance, vacuum and reclaiming space, fsm" }, { "msg_contents": ">>>>> \"SD\" == Shridhar Daithankar <[email protected]> writes:\n\nSD> If you have 150MB type of data as you said last time, you could\nSD> take a pg_dump of database, drop the database and recreate it. By\nSD> all chances it will take less time than compacting a database from\nSD> 2GB to 150MB.\n\nThat's it? That's not so big of a disk footprint.\n\nSD> Drop the indexes and recreate them. While creating the index, all\nSD> the updates will be blocked anyways.\n\nBe *very careful* doing this, especially with UNIQUE indexes on a live\nsystem! My recommendation is to get a list of all indexes on your\nsystem with \\di in psql, then running \"reindex index XXXX\" per index.\nBe sure to bump sort_mem beforehand. Here's a script I ran over the\nweekend (during early morning low-usage time) on my system:\n\nSET sort_mem = 131072;\nSELECT NOW(); SELECT relname,relpages FROM pg_class WHERE relname LIKE 'user_list%' ORDER BY relname;\nSELECT NOW(); REINDEX INDEX user_list_pkey ;\nSELECT NOW(); REINDEX INDEX user_list_XXX ;\nSELECT NOW(); REINDEX INDEX user_list_YYY ;\nSELECT NOW(); SELECT relname,relpages FROM pg_class WHERE relname LIKE 'user_list%' ORDER BY relname;\n\nThe relpages used by the latter two indexes shrunk dramatically:\n\nuser_list_XXX | 109655\nuser_list_YYY | 69837\n\nto\n\nuser_list_XXX | 57032\nuser_list_YYY | 30911\n\nand disk usage went down quite a bit as well. 
Unfortunately, the pkey\nreindex failed due to a deadlock being detected, but the XXX index is\nmost popular... This is my \"hottest\" table, so I reindex it about\nonce a month. My other \"hot\" table takes 45 minutes per index to\nredo, so I try to avoid that until I *really* have to do it (about 6\nmonths). I don't think you'll need a nightly reindex.\n\nOf course, regular vacuums throughout the day on the busy talbes help\nkeep it from getting too fragmented.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Oct 2003 11:52:44 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance, vacuum and reclaiming space, fsm" } ]
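Pulling the thread's advice together: frequent plain VACUUM (ANALYZE) keeps the free space map primed, pg_class.relpages shows whether a table or its indexes keep growing anyway, and REINDEX (or VACUUM FULL in a maintenance window) is the recovery step. A sketch of such a pass, reusing Vivek's user_list table as the stand-in; note the LIKE pattern only catches indexes that share the table's name prefix:

-- Routine pass: return dead space to the FSM and refresh planner statistics.
VACUUM ANALYZE user_list;

-- Watch the footprint: relpages is the number of 8 kB pages per relation.
SELECT relname, relkind, relpages
  FROM pg_class
 WHERE relname LIKE 'user_list%'
 ORDER BY relname;

-- If index relpages keep climbing despite regular vacuums, rebuild them.
-- REINDEX locks the table, so schedule it like the weekend script above.
REINDEX TABLE user_list;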
[ { "msg_contents": "I've been having performance issues with Postgres (sequential scans vs\nindex scans in an update statement). I've read that optimizer will\nchange it's plan based on the resources it thinks are available. In\naddition, I've read alot of conflicting info on various parameters, so\nI'd like to sort those out as well.\n \nHere's the query I've been having problems with:\n \nUPDATE user_account SET last_name='abc'\nFROM commercial_entity ce, commercial_service cs\nWHERE user_account.user_account_id = ce.user_account_id AND\nce.commercial_entity_id=cs.commercial_entity_id;\n \nor \n \nUPDATE user_account SET last_name = 'abc'\n WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service cs\n WHERE user_account.user_account_id = ce.user_account_id AND\n ce.commercial_entity_id = cs.commercial_entity_id);\n \nBoth are about the same.\n\nAll columns are indexed; all column-types are the same (numeric(10,0)).\nA vacuum analyze was run just before the last attempt at running the\nabove statement.\n \n \nMACHINE STATS\n---------------------------\nThe machine is a dual-Pentium 3 933mhz, 2 gig of RAM, RAID 5 (3xWestern\nDigital 80 gig drives with 8-meg buffers on 3Ware), Red Hat 9.0\n \n \nPOSTGRES TUNING INFO\n---------------------------------------\n \nHere are part of the contents of my sysctl.conf file (note that I've\nplayed with values as low as 600000 with no difference)\nkernel.shmmax=1400000000\nkernel.shmall=1400000000\n\nHere's the uncommented-lines from the postgresql.conf file (not the\ndefault one in the /usr/local/pgsql directory - I've initialzed the\ndatabase on a different mount point with more space):\n \ntcpip_socket = true\nmax_connections = 500\nshared_buffers = 96000 # min max_connections*2 or 16, 8KB each\nwal_buffers = 64 # min 4, typically 8KB each\nsort_mem = 2048 # min 64, size in KB\neffective_cache_size = 6000 # typically 8KB each\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\nNote that I've played with all these values; shared_buffers has been as\nlow as 5000, and effective_cache_size has been as high as 50000. Sort\nmem has varied between 1024 bytes and 4096 bytes. wal_buffers have been\nbetween 16 and 128.\n \n \nINFO FROM THE MACHINE\n-----------------------------------------\nHere are the vmstat numbers while running the query.\n \n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us\nsy id\n 0 1 2 261940 11624 110072 1334896 12 0 12 748 177 101\n2 4 95\n 0 1 1 261940 11628 110124 1334836 0 0 0 1103 170 59\n2 1 97\n 0 3 1 261928 11616 110180 1334808 3 0 6 1156 169 67\n2 2 96\n 0 2 1 261892 11628 110212 1334636 7 0 7 1035 186 100\n2 2 96\n 0 1 1 261796 11616 110272 1334688 18 0 18 932 169 79\n2 1 97\n 0 1 1 261780 11560 110356 1334964 3 0 3 4155 192 118\n2 7 92\n 0 1 1 261772 11620 110400 1334956 2 0 2 939 162 63\n3 0 97\n 0 1 3 261744 11636 110440 1334872 6 0 9 1871 171 104\n3 2 95\n 0 0 0 261744 13488 110472 1332244 0 0 0 922 195 1271\n3 2 94\n 0 0 0 261744 13436 110492 1332244 0 0 0 24 115 47\n0 1 99\n 0 0 0 261744 13436 110492 1332244 0 0 0 6 109 36\n0 5 95\n 0 0 0 261744 13436 110492 1332244 0 0 0 6 123 63\n0 0 100\n 0 0 0 261744 13436 110492 1332244 0 0 0 6 109 38\n0 0 100\n 0 0 0 261744 13436 110492 1332244 0 0 0 6 112 39\n0 1 99\n\nI'm not overly familiar with Linux, but the swap in-out seems low, as\ndoes the io in-out. Have I allocated too much memory? 
Regardless, it\ndoesn't explain why the optimizer would decide to do a sequential scan. \n \nHere's the explain-analyze:\n \n Merge Join (cost=11819.21..15258.55 rows=12007 width=752) (actual\ntime=4107.64..5587.81 rows=20880 loops=1)\n Merge Cond: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Index Scan using comm_serv_comm_ent_id_i on commercial_service cs\n(cost=0.00..3015.53 rows=88038 width=12) (actual time=0.05..487.23\nrows=88038 loops=1)\n -> Sort (cost=11819.21..11846.08 rows=10752 width=740) (actual\ntime=3509.07..3955.15 rows=25098 loops=1)\n Sort Key: ce.commercial_entity_id\n -> Merge Join (cost=0.00..9065.23 rows=10752 width=740)\n(actual time=0.18..2762.13 rows=7990 loops=1)\n Merge Cond: (\"outer\".user_account_id =\n\"inner\".user_account_id)\n -> Index Scan using user_account_pkey on user_account\n(cost=0.00..8010.39 rows=72483 width=716) (actual time=0.05..2220.86\nrows=72483 loops=1)\n -> Index Scan using comm_ent_usr_acc_id_i on\ncommercial_entity ce (cost=0.00..4787.69 rows=78834 width=24) (actual\ntime=0.02..55.64 rows=7991 loops=1)\n Total runtime: 226239.77 msec\n(10 rows)\n \n------------------------------------------------------------\n
 \nTied up in all this is my inability to grasp what shared_buffers do\n \nFrom \" http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html> \":\n \n\"Shared buffers defines a block of memory that PostgreSQL will use\nto hold requests that are awaiting attention from the kernel buffer and\nCPU.\" and \"The shared buffers parameter assumes that OS is going to\ncache a lot of files and hence it is generally very low compared with\nsystem RAM.\"\n \nFrom \" http://www.lyris.com/lm_help/6.0/tuning_postgresql.html\n<http://www.lyris.com/lm_help/6.0/tuning_postgresql.html> \"\n \n\"Increase the buffer size. Postgres uses a shared memory segment among\nits subthreads to buffer data in memory. The default is 512k, which is\ninadequate. On many of our installs, we've bumped it to ~16M, which is\nstill small. If you can spare enough memory to fit your whole database\nin memory, do so.\"\n \nOur database (in Oracle) is just over 4 gig in size; obviously, this\nwon't comfortably fit in memory (though we do have an Opteron machine\ninbound for next week with 4-gig of RAM and SCSI hard-drives). The more\nof it we can fit in memory the better.\n
 \nWhat about changing these costs - the doc at\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html> doesn't go into a lot of detail. I was thinking that maybe the\noptimizer decided it was faster to do a sequential scan rather than an\nindex scan based on an analysis of the cost using these values.\n \n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n \nDavid", "msg_date": "Mon, 13 Oct 2003 12:43:32 -0700", "msg_from": "David Griffiths <[email protected]>", "msg_from_op": true, "msg_subject": "Any issues with my tuning..." }, { "msg_contents": "On Mon, 2003-10-13 at 14:43, David Griffiths wrote:\n> I've been having performance issues with Postgres (sequential scans vs\n> index scans in an update statement). I've read that optimizer will\n> change it's plan based on the resources it thinks are available. In\n> addition, I've read alot of conflicting info on various parameters, so\n> I'd like to sort those out as well.\n> \n> Here's the query I've been having problems with:\n> \n> UPDATE user_account SET last_name='abc'\n> FROM commercial_entity ce, commercial_service cs\n> WHERE user_account.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id=cs.commercial_entity_id;\n> \n> or \n> \n> UPDATE user_account SET last_name = 'abc'\n> WHERE EXISTS (SELECT 1 FROM commercial_entity ce, commercial_service\n> cs\n> WHERE user_account.user_account_id = ce.user_account_id AND\n> ce.commercial_entity_id = cs.commercial_entity_id);\n> \n> Both are about the same.\n> \n> All columns are indexed; all column-types are the same\n> (numeric(10,0)). A vacuum analyze was run just before the last attempt\n> at running the above statement.\n
\nFirst thing is to change ce.user_account_id, ce.commercial_entity_id,\nand cs.commercial_entity_id from numeric(10,0) to INTEGER. PG uses\nthem much more efficiently than it does NUMERIC, since it's a simple\nscalar type.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\nLUKE: Is Perl better than Python?\nYODA: No... no... no. Quicker, easier, more seductive.\nLUKE: But how will I know why Python is better than Perl?\nYODA: You will know. When your code you try to read six months\nfrom now.\n\n", "msg_date": "Mon, 13 Oct 2003 15:53:04 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any issues with my tuning..." }, { "msg_contents": "David,\n\n> shared_buffers = 96000 # min max_connections*2 or 16, 8KB each\n\nThis seems a little high to me, even for 2gb RAM. What % of your available \nRAM does it work out to?\n\n> effective_cache_size = 6000 # typically 8KB each\n\nThis is very, very low. 
Given your hardware, I'd set it to 1.5GB.\n\n> Note that I've played with all these values; shared_buffers has been as\n> low as 5000, and effective_cache_size has been as high as 50000. Sort\n> mem has varied between 1024 bytes and 4096 bytes. wal_buffers have been\n> between 16 and 128.\n\nIf large updates are slow, increasing checkpoint_segments has the largest \neffect on this.\n\n> Tied up in all this is my inability to grasp what shared_buffers do\n> \n> From \" http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> <http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html> \":\n> \n> \"shbufShared buffers defines a block of memory that PostgreSQL will use\n> to hold requests that are awaiting attention from the kernel buffer and\n> CPU.\" and \"The shared buffers parameter assumes that OS is going to\n> cache a lot of files and hence it is generally very low compared with\n> system RAM.\"\n\nThis is correct. Optimal levels among the people on this list who have \nbothered to do profiling have ranged btw. 6% and 12% of available RAM, but \nnever higher.\n\n> From \" http://www.lyris.com/lm_help/6.0/tuning_postgresql.html\n> <http://www.lyris.com/lm_help/6.0/tuning_postgresql.html> \"\n> \n> \"Increase the buffer size. Postgres uses a shared memory segment among\n> its subthreads to buffer data in memory. The default is 512k, which is\n> inadequate. On many of our installs, we've bumped it to ~16M, which is\n> still small. If you can spare enough memory to fit your whole database\n> in memory, do so.\"\n\nThis is absolutely incorrect. They are confusing shared_buffers with the \nkernel cache, or perhaps confusing PostgreSQL configuration with Oracle \nconfiguration.\n\nI have contacted Lyris and advised them to update the manual.\n\n> Our database (in Oracle) is just over 4 gig in size; obviously, this\n> won't comfortably fit in memory (though we do have an Opteron machine\n> inbound for next week with 4-gig of RAM and SCSI hard-drives). The more\n> of it we can fit in memory the better.\n\nThis is done through increasing the effective_cache_size, which encourages the \nplanner to use data kept in the kernel cache.\n\n> What about changing these costs - the doc at\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> <http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.htm\n> l> doesn't go into a lot of detail. I was thinking that maybe the\n> optimizer decided it was faster to do a sequential scan rather than an\n> index scan based on an analysis of the cost using these values.\n> \n> #random_page_cost = 4 # units are one sequential page fetch\n> cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n\nThat's because nobody to date has done tests on the effect of tinkering with \nthese values on different machines and setups. We would welcome your \nresults.\n\nOn high-end machines, random_page_cost almost inevatibly needs to be lowered \nto 2 or even 1.5 to encourage the use of indexes.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 13 Oct 2003 14:32:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any issues with my tuning..." 
}, { "msg_contents": "On Mon, 2003-10-13 at 15:43, David Griffiths wrote: \n> Here are part of the contents of my sysctl.conf file (note that I've\n> played with values as low as 600000 with no difference)\n> kernel.shmmax=1400000000\n> kernel.shmall=1400000000\n\nThis is only a system-wide limit -- it either allows the shared memory\nallocation to proceed, or it does not. Changing it will have no other\neffect on the performance of PostgreSQL.\n\n> -> Index Scan using comm_ent_usr_acc_id_i on\n> commercial_entity ce (cost=0.00..4787.69 rows=78834 width=24) (actual\n> time=0.02..55.64 rows=7991 loops=1)\n\nInteresting that we get this row count estimate so completely wrong\n(although it may or may not have anything to do with the actual\nperformance problem you're running into). Have you run ANALYZE on this\ntable recently? If so, does increasing this column's statistics target\n(using ALTER TABLE ... ALTER COLUMN ... SET STATISTICS) improve the row\ncount estimate?\n\n-Neil\n\n\n", "msg_date": "Tue, 14 Oct 2003 15:37:04 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any issues with my tuning..." } ]
[ { "msg_contents": "Hello,\n\nThanks to all the previous suggestions for my previous question. I've \ndone a lot more research and playing around since then, and luckily I \nthink I have a better understanding of postgresql.\n\nI still have some queries that don't use an index, and I was wondering \nif there were some other tricks to try out?\n\nMy system: RH9, PG 7.3.4, IDE, 1 gig RAM, celeron 1.7\nMy Table Columns (all bigints): start, stop, step1, step2, step3\nMy Indexes: btree(start), btree(stop), btree(start, stop)\nSize of table: 16212 rows\nParams: shared_buffers = 128, effective_cache_size = 8192\n\nThe Query: explain analyze select * from path where start = 653873 or \nstart = 649967 or stop = 653873 or stop = 649967\n\nThe Result:\nSeq Scan on \"path\"  (cost=0.00..450.22 rows=878 width=48) (actual time=0 \n.08..40.50 rows=1562 loops=1)\n Filter: ((\"start\" = 653873) OR (\"start\" = 649967) OR (stop = 653873) OR \n (stop = 649967))\nTotal runtime: 42.41 msec\n\nDoes anyone have a suggestion on how to get that query to use an index? \n Is it even possible? I did run vacuum analyze right before this test.\n\nI'm only beginning to customize the parameters in postgresql.conf \n(mainly from tips from this list).\n\nThanks very much!\nSeth\n\n", "msg_date": "Mon, 13 Oct 2003 13:48:15 -1000", "msg_from": "Seth Ladd <[email protected]>", "msg_from_op": true, "msg_subject": "ways to force index use?" }, { "msg_contents": "Seth,\n\n> The Query: explain analyze select * from path where start = 653873 or \n> start = 649967 or stop = 653873 or stop = 649967\n\nyou need to cast all those numbers to BIGINT:\n\nselect * from path where start = 653873::BIGINT or \nstart = 649967::BIGINT or stop = 653873::BIGINT or stop = 649967::BIGINT\n\nThen an index on start or stop should be used by the planner.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 13 Oct 2003 16:55:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ways to force index use?" }, { "msg_contents": "Seth Ladd <[email protected]> writes:\n> My Table Columns (all bigints): start, stop, step1, step2, step3\n ^^^^^^^^^^^\n\n> The Query: explain analyze select * from path where start = 653873 or \n> start = 649967 or stop = 653873 or stop = 649967\n\n> Does anyone have a suggestion on how to get that query to use an index? \n\nCoerce the constants to bigint, for starters. However, a query that is\nselecting almost 10% of the table, as your example is, probably\n*shouldn't* be using an index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Oct 2003 20:01:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ways to force index use? " }, { "msg_contents": "Tom Lane wrote:\n> Seth Ladd <[email protected]> writes:\n> \n>>My Table Columns (all bigints): start, stop, step1, step2, step3\n> \n> ^^^^^^^^^^^\n> \n> \n>>The Query: explain analyze select * from path where start = 653873 or \n>>start = 649967 or stop = 653873 or stop = 649967\n> \n> \n>>Does anyone have a suggestion on how to get that query to use an index? \n> \n> \n> Coerce the constants to bigint, for starters. 
However, a query that is\n> selecting almost 10% of the table, as your example is, probably\n> *shouldn't* be using an index.\n\nI think that for 10% an index scan is still better.\nIs there a way to calculate this elbow?\n\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Tue, 14 Oct 2003 02:20:40 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ways to force index use?" } ]
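A sketch of the cast fix for the thread's path table: on 7.3 an untyped integer literal does not match a BIGINT column for index purposes, so each constant needs an explicit cast (or quoting) before the btree indexes on start and stop can even be considered. The enable_seqscan toggle is a diagnostic only, not something to leave set:

EXPLAIN ANALYZE
SELECT * FROM path
 WHERE start = 653873::BIGINT OR start = 649967::BIGINT
    OR stop  = 653873::BIGINT OR stop  = 649967::BIGINT;

-- Diagnostic: confirm the planner *can* use the index when seqscans are
-- discouraged, then put the setting back.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM path WHERE start = 653873::BIGINT;
SET enable_seqscan = on;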
[ { "msg_contents": "\nOn Monday, Oct 13, 2003, at 21:24 Pacific/Honolulu, mila wrote:\n\n> Seth,\n>\n>> My system: RH9, PG 7.3.4, IDE, 1 gig RAM, celeron 1.7\n> ...\n>> Size of table: 16212 rows\n>> Params: shared_buffers = 128, effective_cache_size = 8192\n>\n> Just in case,\n> the \"shared_buffers\" value looks a bit far too small for your system.\n> I think you should raise it to at least 1024, or so.\n>\n> Effective cache size could be (at least) doubled, too ==> this might\n> help forcing the index use.\n\nThanks! I'm just beginning to play with these numbers. I'll \ndefinitely try them out.\n\nI can't wait to try out the script that will help set these parameters! \n:)\n\nSeth\n\n", "msg_date": "Mon, 13 Oct 2003 22:18:06 -1000", "msg_from": "Seth Ladd <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ways to force index use?" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nIs there any way to determine how much of the free space map is currently in \nuse?(ie. where and what it is tracking?) I vacuum on a regular basis but I \nnever hold in terms of disk space usage. I jacked up the free space map \npages but this doesn't appear to be working.\n\nshared_buffers = 29400 # 2*max_connections, min 16\nmax_fsm_relations = 1000 # min 10, fsm is free space map\nmax_fsm_pages = 10000000 # min 1000, fsm is free space map\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/jCo4qtjaBHGZBeURAj9EAKCL+tiioPO5K1YM1sn62yS0L1Ry5QCfVifq\n22s22gFNFHAHquS+iiUZO6s=\n=AQ2Y\n-----END PGP SIGNATURE-----", "msg_date": "Tue, 14 Oct 2003 11:54:16 -0500", "msg_from": "\"Jeremy M. Guthrie\" <[email protected]>", "msg_from_op": true, "msg_subject": "free space map usage" }, { "msg_contents": "\"Jeremy M. Guthrie\" <[email protected]> writes:\n> Is there any way to determine how much of the free space map is currently i=\n> n=20\n> use?(ie. where and what it is tracking?) I vacuum on a regular basis but I=\n> =20\n> never hold in terms of disk space usage.\n\nNot in 7.3 AFAIR. In 7.4 a full-database VACUUM VERBOSE will end with\nthe info you want:\n\nregression=# vacuum verbose;\n... much cruft ...\nINFO: free space map: 11 relations, 144 pages stored; 272 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.\nVACUUM\nregression=#\n\nThis tells me I'm only using about 1% of the FSM space (272 out of 20000\npage slots).\n\n> I jacked up the free space map=20\n> pages but this doesn't appear to be working.\n\nYou know you have to restart the postmaster to make those changes take\neffect, right?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Oct 2003 15:16:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: free space map usage " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 14 October 2003 02:16 pm, Tom Lane wrote:\n> \"Jeremy M. Guthrie\" <[email protected]> writes:\n> > Is there any way to determine how much of the free space map is currently\n> > i= n=20\n> > use?(ie. where and what it is tracking?) I vacuum on a regular basis but\n> > I= =20\n> > never hold in terms of disk space usage.\n>\n> Not in 7.3 AFAIR. In 7.4 a full-database VACUUM VERBOSE will end with\n> the info you want:\n>\n> regression=# vacuum verbose;\n> ... much cruft ...\n> INFO: free space map: 11 relations, 144 pages stored; 272 total pages\n> needed DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n> shared memory. VACUUM\n> regression=#\n>\n> This tells me I'm only using about 1% of the FSM space (272 out of 20000\n> page slots).\n>\n> > I jacked up the free space map=20\n> > pages but this doesn't appear to be working.\n>\n> You know you have to restart the postmaster to make those changes take\n> effect, right?\nYup. I still see no effect after restart.\n\n> \t\t\tregards, tom lane\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 
1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/jFH8qtjaBHGZBeURAkKkAJ0cDa31C4VKxlHoByFaGY3EtQwMdwCgmA5k\n+Z9GUE3l7LIJVl9rII7d3TU=\n=gkIR\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Tue, 14 Oct 2003 14:43:53 -0500", "msg_from": "\"Jeremy M. Guthrie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: free space map usage" }, { "msg_contents": "On Tue, 2003-10-14 at 15:43, Jeremy M. Guthrie wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> On Tuesday 14 October 2003 02:16 pm, Tom Lane wrote:\n> > \"Jeremy M. Guthrie\" <[email protected]> writes:\n> > > Is there any way to determine how much of the free space map is currently\n> > > i= n=20\n> > > use?(ie. where and what it is tracking?) I vacuum on a regular basis but\n> > > I= =20\n> > > never hold in terms of disk space usage.\n> >\n> > Not in 7.3 AFAIR. In 7.4 a full-database VACUUM VERBOSE will end with\n> > the info you want:\n> >\n> > regression=# vacuum verbose;\n> > ... much cruft ...\n> > INFO: free space map: 11 relations, 144 pages stored; 272 total pages\n> > needed DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n> > shared memory. VACUUM\n> > regression=#\n> >\n> > This tells me I'm only using about 1% of the FSM space (272 out of 20000\n> > page slots).\n> >\n> > > I jacked up the free space map=20\n> > > pages but this doesn't appear to be working.\n> >\n> > You know you have to restart the postmaster to make those changes take\n> > effect, right?\n> Yup. I still see no effect after restart.\n> \n\nGiven that you knew of no way to determine how much free space map you\nwere using, what is your criteria for it to \"appear to be working\"? If\nit's that space keeps growing, then your probably not vacuuming\nfrequently enough.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "16 Oct 2003 17:38:31 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: free space map usage" } ]
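Since 7.3 gives no FSM usage report, the settings discussed above are easier to sanity-check against the size of the cluster itself. The query below is a rough sketch of that idea; it counts every page, so it over-states what the free space map actually needs to track:

    SELECT count(*)      AS relations,
           sum(relpages) AS total_pages
    FROM pg_class
    WHERE relkind IN ('r', 't', 'i');
    -- max_fsm_relations should comfortably exceed "relations";
    -- "total_pages" is only a generous ceiling for max_fsm_pages, since the
    -- map tracks pages with reclaimable space, not every page in the cluster.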
[ { "msg_contents": " First - this is a test post to see if I am in fact able to post!\n\n I sent the usual subscribe email - but got no response. Then some\npostings from the list started to arrive.\n\n Glitch? Or something else?\n\n Regards,\n Harry.\n\n", "msg_date": "Wed, 15 Oct 2003 11:09:17 +0100 (BST)", "msg_from": "Harry Broomhall <[email protected]>", "msg_from_op": true, "msg_subject": "Config error on emails?" } ]
[ { "msg_contents": "There's been a lot of discussion on the ADMIN list about postgresql backups\nfrom LVM snapshots:\nhttp://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshot&q=b\n\nNote that the existence of the snapshot slows the original filesystem down,\nso you want to minimize the duration for which the snapshot exists. A two\nphase rsync -- first off the \"live\" filesystem, second off the snapshot --\nto a backup filesystem accomplishes this. If you don't have the capacity to\nduplicate your $PGDATA folder, you'll want to consider doing incremental\nbackups with xfsdump. Since xfs and xfsdump has been around forever, so you\ncan bet your data on them; of course, xfsdump requires that pgdata be hosted\non an xfs volume (XFS is a fast filesystem with metadata journaling, giving\nfast crash recovery, so this is a good idea anyway). For other filesystems,\nincremental backup and restores are being developed in star:\nhttp://freshmeat.net/projects/star/. star also claims to be faster than gnu\ntar, so you might use it even if you aren't doing incremental backups.\n\nThe latest version of my backup script is attached. It demonstrates the\nimplementation of filesystem level backup of the postgresql data cluster\n(the $PGDATA folder) using two-phase rsync. The $PGDATA folder is on an xfs\nformatted LVM volume; various checks are done before and after backup, and\nerrors are e-mailed to a specified account. The script handles situations\nwhere (i) the XFS filesystem containing $PGDATA has an external log and (ii)\nthe postmaster log ($PGDATA/pg_xlog) is written to a filesystem different\nthan the one containing the $PGDATA folder. These configurations enhance\ndatabase performance, though an external XFS log is not a big win for\npostgresql, which creates relatively few new files and deletes relatively\nfew files (relative to an e-mail server). It should be possible, using this\nscript, to keep backup times below 10 minutes even for very high loads -\njust increase the frequency at which you do backups (it has been tested to\nrun hourly, and runs every three hours on the production server).\n\nCheers,\n\tMurthy\n\n(Note: I have experienced filesystem hangs within 2 days to a week, from\nrunning this script frequently with XFS versions including XFS 1.3. The XFS\nCVS kernel from Sep 30th 2003 seems not to have this problem - I have been\nrunning this script hourly for two weeks without problems. So you might\neither use a CVS kernel or wait for an XFS release based on linux 2.4.22 or\nlater; and you will be testing your own setup, right?!)\n\n\n\n\n\n\n>-----Original Message-----\n>From: Jeff [mailto:[email protected]]\n>Sent: Thursday, October 16, 2003 13:37\n>To: Josh Berkus\n>Cc: [email protected]; [email protected];\n>[email protected]; [email protected]\n>Subject: Re: [ADMIN] [PERFORM] backup/restore - another area.\n>\n>\n>On Thu, 16 Oct 2003 10:09:27 -0700\n>Josh Berkus <[email protected]> wrote:\n>\n>> Jeff,\n>> \n>> > I left the DB up while doing this.\n>> >\n>> > Even had a program sitting around committing data to try \n>and corrupt\n>> > things. (Which is how I discovered I was doing the snapshot wrong)\n>> \n>> Really? I'm unclear on the method you're using to take the \n>snapshot,\n>> then; I seem to have missed a couple posts on this thread. 
Want to\n>> refresh me?\n>> \n>\n>I have a 2 disk stripe LVM on /dev/postgres/pgdata/\n>\n>lvcreate -L4000M -s -n pg_backup /dev/postgres/pgdata\n>mount /dev/postgres/pg_backup /pg_backup \n>tar cf - /pg_backup | gzip -1 > /squeegit/mb.backup \n>umount /pg_backup;\n>lvremove -f /dev/postgres/pg_backup;\n>\n>In a nutshell an LVM snapshot is an atomic operation that \n>takes, well, a\n>snapshot of hte FS as it was at that instant. It does not make a 2nd\n>copy of the data. This way you can simply tar up the pgdata directory\n>and be happy as the snapshot will not be changing due to db activity.\n>\n>-- \n>Jeff Trout <[email protected]>\n>http://www.jefftrout.com/\n>http://www.stuarthamm.net/\n>\n>---------------------------(end of \n>broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index \n>scan if your\n> joining column's datatypes do not match\n>", "msg_date": "Fri, 17 Oct 2003 10:45:37 -0400", "msg_from": "Murthy Kambhampaty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] backup/restore - another area." }, { "msg_contents": "Murthy Kambhampaty <[email protected]> writes:\n> ... The script handles situations\n> where (i) the XFS filesystem containing $PGDATA has an external log and (ii)\n> the postmaster log ($PGDATA/pg_xlog) is written to a filesystem different\n> than the one containing the $PGDATA folder.\n\nIt does? How exactly can you ensure snapshot consistency between\ndata files and XLOG if they are on different filesystems?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Oct 2003 12:05:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] backup/restore - another area. " } ]
[ { "msg_contents": "It seems a simple \"vacuum\" (not full or analyze) slows down the\ndatabase dramatically. I am running vacuum every 15 minutes, but it\ntakes about 5 minutes to run even after a fresh import. Even with\nvacuuming every 15 minutes, I'm not sure vacuuming is working\nproperly.\n\nThere are a lot of updates. The slowest relation is the primary key\nindex, which is composed of a sequence. I've appended a csv with the\nparsed output from vacuum. The page counts are growing way too fast\nimo. I believe this is caused by the updates, and index pages not\ngetting re-used. The index values aren't changing, but other values\nin the table are.\n\nAny suggestions how to make vacuuming more effective and reducing the\ntime it takes to vacuum? I'd settle for less frequent vacuuming or\nperhaps index rebuilding. The database can be re-imported in about an\nhour.\n\nRob\n----------------------------------------------------------------\nSpacing every 15 minutes\nPages,Tuples,Deleted\n7974,1029258,1536\n7979,1025951,4336\n7979,1026129,52\n7979,1025618,686\n7979,1025520,152\n7980,1025583,28\n7995,1028008,6\n8004,1030016,14\n8010,1026149,4965\n8012,1026684,6\n8014,1025910,960\n8020,1026812,114\n8027,1027642,50\n8031,1027913,362\n8040,1028368,784\n8046,1028454,1143\n8049,1029155,6\n8053,1029980,10\n8065,1031506,24\n8084,1029134,4804\n8098,1031004,346\n8103,1029412,3044\n8118,1029736,1872\n8141,1031643,1704\n8150,1032597,286\n8152,1033222,6\n8159,1029436,4845\n8165,1029987,712\n8170,1030229,268\n8176,1029568,1632\n8189,1030136,1540\n8218,1030915,3963\n8255,1033049,4598\n8297,1036583,3866\n8308,1031412,8640\n8315,1031987,1058\n8325,1033892,6\n8334,1030589,4625\n8350,1031709,1040\n8400,1033071,5946\n8426,1031555,8368\n8434,1031638,2240\n8436,1031703,872\n8442,1031891,612\n\n\n", "msg_date": "Fri, 17 Oct 2003 08:48:30 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum locking" }, { "msg_contents": "Rob Nagler wrote:\n\n> It seems a simple \"vacuum\" (not full or analyze) slows down the\n> database dramatically. I am running vacuum every 15 minutes, but it\n> takes about 5 minutes to run even after a fresh import. Even with\n> vacuuming every 15 minutes, I'm not sure vacuuming is working\n> properly.\n> \n> There are a lot of updates. The slowest relation is the primary key\n> index, which is composed of a sequence. I've appended a csv with the\n> parsed output from vacuum. The page counts are growing way too fast\n> imo. I believe this is caused by the updates, and index pages not\n> getting re-used. The index values aren't changing, but other values\n> in the table are.\n\nYou should try 7.4 beta and pg_autovacuum which is a contrib module in CVS tip. \nIt works with 7.3 as well.\n\nMajor reason for 7.4 is, it fixes index growth in vacuum. So if your database is \nfit, it will stay that way with proper vacuuming.\n> \n> Any suggestions how to make vacuuming more effective and reducing the\n> time it takes to vacuum? I'd settle for less frequent vacuuming or\n> perhaps index rebuilding. The database can be re-imported in about an\n> hour.\n\nMake sure that you have FSM properly tuned. Bump it from defaults to suit your \nneeds. I hope you have gone thr. 
this page for general purpose setting.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n> \n> Rob\n> ----------------------------------------------------------------\n> Spacing every 15 minutes\n> Pages,Tuples,Deleted\n> 7974,1029258,1536\n> 7979,1025951,4336\n> 7979,1026129,52\n> 7979,1025618,686\n\nAssuming those were incremental figures, largest you have is ~8000 tuples per 15 \nminutes and 26 pages. I think with proper FSM/shared buffers/effective cache and \na pg_autovacuum with 1 min. polling interval, you could end up in lot better shape.\n\nLet us know if it works.\n\n Shridhar\n\n", "msg_date": "Fri, 17 Oct 2003 20:41:59 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "> Any suggestions how to make vacuuming more effective and reducing the\n> time it takes to vacuum? I'd settle for less frequent vacuuming or\n> perhaps index rebuilding. The database can be re-imported in about an\n> hour.\n\nWhich version and what are your FSM settings?", "msg_date": "Fri, 17 Oct 2003 11:14:01 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Shridhar Daithankar writes:\n> You should try 7.4 beta and pg_autovacuum which is a contrib module\n> in CVS tip. \n\nIt's on our todo list. :)\n\nHow does pg_autovacuum differ from vacuumdb? I mean it seems to call\nthe vacuum operation underneath just as vacuumdb does. I obviously\ndidn't follow the logic as to how it gets there. :-)\n\n> Make sure that you have FSM properly tuned. Bump it from defaults to\n> suit your needs. I hope you have gone thr. this page for general\n> purpose setting.\n\nI didn't start vacuuming regularly until recently, so I didn't see\nthis problem.\n\n> Assuming those were incremental figures, largest you have is ~8000\n> tuples per 15 minutes and 26 pages. I think with proper FSM/shared\n> buffers/effective cache and a pg_autovacuum with 1 min. polling\n> interval, you could end up in lot better shape.\n\nHere are the numbers that are different. I'm using 7.3:\n\nshared_buffers = 8000\nsort_mem = 8000\nvacuum_mem = 64000\neffective_cache_size = 40000\n\nfree says:\n total used free shared buffers cached\nMem: 1030676 1005500 25176 0 85020 382280\n-/+ buffers/cache: 538200 492476\nSwap: 2096472 272820 1823652\n\nIt seems effective_cache_size is about right.\n\nvacuum_mem might be slowing down the system? But if I reduce it,\nwon't vacuuming get slower?\n\nmax_fsm_relations is probably too low (the default in my conf file\nsays 100, probably needs to be 1000). Not sure how this affects disk\nusage.\n\nHere's the summary for the two active tables during a vacuum interval\nwith high activity. The other tables don't get much activity, and are\nmuch smaller. As you see the 261 + 65 adds up to the bulk of the 5\nminutes it takes to vacuum.\n\nINFO: Removed 8368 tuples in 427 pages.\n CPU 0.06s/0.04u sec elapsed 1.54 sec.\nINFO: Pages 24675: Changed 195, Empty 0; Tup 1031519: Vac 8368, Keep 254, UnUsed 1739.\n Total CPU 2.92s/2.58u sec elapsed 65.35 sec.\n\nINFO: Removed 232 tuples in 108 pages.\n CPU 0.01s/0.02u sec elapsed 0.27 sec.\nINFO: Pages 74836: Changed 157, Empty 0; Tup 4716475: Vac 232, Keep 11, UnUsed\n641.\n Total CPU 10.19s/6.03u sec elapsed 261.44 sec.\n\nHow would vacuuming every minute finish in time? 
It isn't changing\nmuch in the second table, but it's taking 261 seconds to wade through\n5m rows.\n\nAssuming I vacuum every 15 minutes, it would seem like max_fsm_pages\nshould be 1000, because that's about what was reclaimed. The default\nis 10000. Do I need to change this?\n\nSorry to be so dense, but I just don't know the right values are.\n\nThanks muchly for the advice,\nRob\n\n\n", "msg_date": "Fri, 17 Oct 2003 09:52:26 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob,\n\n> vacuum_mem might be slowing down the system? But if I reduce it,\n> won't vacuuming get slower?\n\nYes, but it will have less of an impact on the system while it's running.\n\n> INFO: Removed 8368 tuples in 427 pages.\n> CPU 0.06s/0.04u sec elapsed 1.54 sec.\n> INFO: Pages 24675: Changed 195, Empty 0; Tup 1031519: Vac 8368, Keep 254,\n> UnUsed 1739. Total CPU 2.92s/2.58u sec elapsed 65.35 sec.\n>\n> INFO: Removed 232 tuples in 108 pages.\n> CPU 0.01s/0.02u sec elapsed 0.27 sec.\n> INFO: Pages 74836: Changed 157, Empty 0; Tup 4716475: Vac 232, Keep 11,\n> UnUsed 641.\n> Total CPU 10.19s/6.03u sec elapsed 261.44 sec.\n\nWhat sort of disk array do you have? That seems like a lot of time \nconsidering how little work VACUUM is doing.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 17 Oct 2003 09:36:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "On Fri, 17 Oct 2003 09:52:26 -0600, Rob Nagler <[email protected]>\nwrote:\n>INFO: Removed 8368 tuples in 427 pages.\n> CPU 0.06s/0.04u sec elapsed 1.54 sec.\n>INFO: Pages 24675: Changed 195, Empty 0; Tup 1031519: Vac 8368, Keep 254, UnUsed 1739.\n> Total CPU 2.92s/2.58u sec elapsed 65.35 sec.\n>\n>INFO: Removed 232 tuples in 108 pages.\n> CPU 0.01s/0.02u sec elapsed 0.27 sec.\n>INFO: Pages 74836: Changed 157, Empty 0; Tup 4716475: Vac 232, Keep 11, UnUsed\n>641.\n> Total CPU 10.19s/6.03u sec elapsed 261.44 sec.\n\nThe low UnUsed numbers indicate that FSM is working fine.\n\n>Assuming I vacuum every 15 minutes, it would seem like max_fsm_pages\n>should be 1000, because that's about what was reclaimed. The default\n>is 10000. Do I need to change this?\n\nISTM you are VACCUMing too aggressively. You are reclaiming less than\n1% and 0.005%, respectively, of tuples. I would increase FSM settings\nto ca. 1000 fsm_relations, 100000 fsm_pages and VACUUM *less* often,\nsay every two hours or so.\n\n... or configure autovacuum to VACUUM a table when it has 10% dead\ntuples.\n\nServus\n Manfred\n", "msg_date": "Fri, 17 Oct 2003 19:41:56 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Manfred Koizar writes:\n> ISTM you are VACCUMing too aggressively. You are reclaiming less than\n> 1% and 0.005%, respectively, of tuples. I would increase FSM settings\n> to ca. 1000 fsm_relations, 100000 fsm_pages and VACUUM *less* often,\n> say every two hours or so.\n\nI did this. We'll see how it goes.\n\n> ... or configure autovacuum to VACUUM a table when it has 10% dead\n> tuples.\n\nThis solution doesn't really fix the fact that VACUUM consumes the\ndisk while it is running. 
I want to avoid the erratic performance on\nmy web server when VACUUM is running.\n\nmfg,\nRob\n\n", "msg_date": "Fri, 17 Oct 2003 17:11:59 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Josh Berkus writes:\n> Yes, but it will have less of an impact on the system while it's running.\n\nWe'll find out. I lowered it to vacuum_mem to 32000.\n\n> What sort of disk array do you have? That seems like a lot of time \n> considering how little work VACUUM is doing.\n\nVendor: DELL Model: PERCRAID Mirror Rev: V1.0\n Type: Direct-Access ANSI SCSI revision: 02\n\nTwo 10K disks attached.\n\nRob\n", "msg_date": "Fri, 17 Oct 2003 17:37:16 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "I ran into the same problem with VACUUM on my Linux box. If you are running\nLinux, take a look at \"elvtune\" or read this post:\n\nhttp://groups.google.com/groups?q=stephen+vacuum+linux&hl=en&lr=&ie=UTF-8&se\nlm=gRdjb.7484%241o2.77%40nntp-post.primus.ca&rnum=3\n\nRegards, Stephen\n\n\n\"Rob Nagler\" <[email protected]> wrote in message\nnews:[email protected]...\n> Manfred Koizar writes:\n> > ISTM you are VACCUMing too aggressively. You are reclaiming less than\n> > 1% and 0.005%, respectively, of tuples. I would increase FSM settings\n> > to ca. 1000 fsm_relations, 100000 fsm_pages and VACUUM *less* often,\n> > say every two hours or so.\n>\n> I did this. We'll see how it goes.\n>\n> > ... or configure autovacuum to VACUUM a table when it has 10% dead\n> > tuples.\n>\n> This solution doesn't really fix the fact that VACUUM consumes the\n> disk while it is running. I want to avoid the erratic performance on\n> my web server when VACUUM is running.\n>\n> mfg,\n> Rob\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Sat, 18 Oct 2003 12:33:41 -0400", "msg_from": "\"Stephen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": ">>>>> \"RN\" == Rob Nagler <[email protected]> writes:\n\n\nRN> Vendor: DELL Model: PERCRAID Mirror Rev: V1.0\nRN> Type: Direct-Access ANSI SCSI revision: 02\n\n\nAMI or Adaptec based?\n\nIf AMI, make sure it has write-back cache enabled (and you have\nbattery backup!), and disable the 'readahead' feature if you can.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 22 Oct 2003 14:44:52 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": ">>>>> \"RN\" == Rob Nagler <[email protected]> writes:\n\nRN> This solution doesn't really fix the fact that VACUUM consumes the\nRN> disk while it is running. I want to avoid the erratic performance on\nRN> my web server when VACUUM is running.\n\nWhat's the disk utilization proir to running vacuum? 
If it is\nhovering around 95% or more of capacity, of course you're gonna\noverwhelm it.\n\nThis ain't Star Trek -- the engines can't run at 110%, Cap'n!\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 22 Oct 2003 14:46:26 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Vivek Khera writes:\n> AMI or Adaptec based?\n\nAdaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and\nRed Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it\nboots. I haven't be able to use the aac utilities with this driver,\nhowever, so it's hard to interrogate the device.\n\n> If AMI, make sure it has write-back cache enabled (and you have\n> battery backup!), and disable the 'readahead' feature if you can.\n\nI can't do this so easily. It's at a colo, and it's production.\nI doubt this has anything to do with this problem, anyway. We're\ntalking about hundreds of megabytes of data.\n\n> What's the disk utilization proir to running vacuum? If it is\n> hovering around 95% or more of capacity, of course you're gonna\n> overwhelm it.\n\nHere's the vmstat 5 at a random time:\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 0 0 0 272372 38416 78220 375048 0 3 2 0 0 0 2 2 0\n 0 0 0 272372 30000 78320 375660 0 0 34 274 382 284 5 1 94\n 0 1 0 272372 23012 78372 375924 0 0 25 558 445 488 8 2 90\n 1 0 0 272368 22744 78472 376192 0 6 125 594 364 664 9 3 88\n\nAnd here's it during vacuum:\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 2 1 277292 9620 72028 409664 46 32 4934 4812 1697 966 8 4 88\n 0 3 0 277272 9588 72096 412964 61 0 7303 2478 1391 976 3 3 94\n 2 2 0 277336 9644 72136 393264 1326 32 2827 2954 1693 1519 8 3 89\nThe pages are growing proportionately with the number of tuples, btw.\nHere's a vacuum snippet from a few days ago after a clean import,\nrunning every 15 minutes:\n\nINFO: Removed 2192 tuples in 275 pages.\n CPU 0.06s/0.01u sec elapsed 0.91 sec.\nINFO: Pages 24458: Changed 260, Empty 0; Tup 1029223: Vac 2192, Keep 3876, UnUsed 26.\n Total CPU 2.91s/2.22u sec elapsed 65.74 sec.\n\nAnd here's the latest today, running every 2 hours:\n\nINFO: Removed 28740 tuples in 1548 pages.\n CPU 0.08s/0.06u sec elapsed 3.73 sec.\nINFO: Pages 27277: Changed 367, Empty 0; Tup 1114178: Vac 28740, Keep 1502, UnUsed 10631.\n Total CPU 4.78s/4.09u sec elapsed 258.10 sec.\n\nThe big tables/indexes are taking longer, but it's a big CPU/elapsed\ntime savings to vacuum every two hours vs every 15 minutes.\n\nThere's still the problem that when vacuum is running interactive\nperformance drops dramatically. A query that takes a couple of\nseconds to run when the db isn't being vacuumed will take minutes when\nvacuum is running. It's tough for me to correlate exactly, but I\nsuspect that while postgres is vacuuming an index or table, nothing else\nruns. In between relations, other stuff gets to run, and then vacuum\nhogs all the resources again. This could be for disk reasons or\nsimply because postgres locks the index or table while it is being\nvacuumed. Either way, the behavior is unacceptable. 
Users shouldn't\nhave to wait minutes while the database picks up after itself.\n\nThe concept of vacuuming seems to be problematic. I'm not sure why\nthe database simply can't garbage collect incrementally. AGC is very\ntricky, especially AGC that involves gigabytes of data on disk.\nIncremental garbage collection seems to be what other databases do,\nand it's been my experience that other databases don't have the type\nof unpredictable behavior I'm seeing with Postgres. I'd rather the\ndatabase be a little bit slower on average than have to figure out the\nbest time to inconvenience my users.\n\nSince my customer already has Oracle, we'll be running tests in the\ncoming month(s :-) with Oracle to see how it performs under the same\nload and hardware. I'll keep this group posted.\n\nRob\n\n\n", "msg_date": "Wed, 22 Oct 2003 17:32:12 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> Here's the vmstat 5 at a random time:\n\n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 0 0 0 272372 38416 78220 375048 0 3 2 0 0 0 2 2 0\n> 0 0 0 272372 30000 78320 375660 0 0 34 274 382 284 5 1 94\n> 0 1 0 272372 23012 78372 375924 0 0 25 558 445 488 8 2 90\n> 1 0 0 272368 22744 78472 376192 0 6 125 594 364 664 9 3 88\n\n> And here's it during vacuum:\n\n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 1 2 1 277292 9620 72028 409664 46 32 4934 4812 1697 966 8 4 88\n> 0 3 0 277272 9588 72096 412964 61 0 7303 2478 1391 976 3 3 94\n> 2 2 0 277336 9644 72136 393264 1326 32 2827 2954 1693 1519 8 3 89\n\nThe increased I/O activity is certainly to be expected, but what I find\nstriking here is that you've got substantial swap activity in the second\ntrace. What is causing that? Not VACUUM I don't think. It doesn't have\nany huge memory demand. But swapping out processes could account for\nthe perceived slowdown in interactive response.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Oct 2003 21:27:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "Am Donnerstag, 23. Oktober 2003 01:32 schrieb Rob Nagler:\n> The concept of vacuuming seems to be problematic. I'm not sure why\n> the database simply can't garbage collect incrementally. AGC is very\n> tricky, especially AGC that involves gigabytes of data on disk.\n> Incremental garbage collection seems to be what other databases do,\n> and it's been my experience that other databases don't have the type\n> of unpredictable behavior I'm seeing with Postgres. I'd rather the\n> database be a little bit slower on average than have to figure out the\n> best time to inconvenience my users.\n\nI think oracle does not do garbage collect, it overwrites the tuples directly \nand stores the old tuples in undo buffers. Since most transactions are \ncommits, this is a big win.\n\n", "msg_date": "Thu, 23 Oct 2003 08:14:56 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "On Wed, Oct 22, 2003 at 09:27:47PM -0400, Tom Lane wrote:\n\n> trace. What is causing that? Not VACUUM I don't think. It doesn't have\n> any huge memory demand. 
But swapping out processes could account for\n\nWhat about if you've set vacuum_mem too high?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 23 Oct 2003 06:14:25 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Wed, Oct 22, 2003 at 09:27:47PM -0400, Tom Lane wrote:\n>> trace. What is causing that? Not VACUUM I don't think. It doesn't have\n>> any huge memory demand. But swapping out processes could account for\n\n> What about if you've set vacuum_mem too high?\n\nMaybe, but only if it actually had reason to use a ton of memory ---\nthat is, it were recycling a very large number of tuples in a single\ntable. IIRC that didn't seem to be the case here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Oct 2003 09:17:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> I think oracle does not do garbage collect, it overwrites the tuples directly\n> and stores the old tuples in undo buffers. Since most transactions are \n> commits, this is a big win.\n\n... if all tuples are the same size, and if you never have any\ntransactions that touch enough tuples to overflow your undo segment\n(or even just sit there for a long time, preventing you from recycling\nundo-log space; this is the dual of the VACUUM-can't-reclaim-dead-tuple\nproblem). And a few other problems that any Oracle DBA can tell you about.\nI prefer our system.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Oct 2003 09:26:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "On Thu, Oct 23, 2003 at 09:17:41AM -0400, Tom Lane wrote:\n> \n> Maybe, but only if it actually had reason to use a ton of memory ---\n> that is, it were recycling a very large number of tuples in a single\n> table. IIRC that didn't seem to be the case here.\n\nAh, that's what I was trying to ask. I didn't know if the memory was\nactually taken by vacuum at the beginning (like shared memory is) or\nwhat-all happened.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 23 Oct 2003 09:54:45 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Tom Lane writes:\n> ... if all tuples are the same size, and if you never have any\n\nIncorrect. If the tuples smaller, Oracle does the right thing. If\nthere's enough space in the page, it shifts the tuples to make room.\nThat's what pctfree, pctused and pctincrease allow you to control.\nIt's all in memory so its fast, and I don't think it has to update any\nindices.\n\n> transactions that touch enough tuples to overflow your undo segment\n\nThat's easily configured, and hasn't been a problem in the databases\nI've managed.\n\n> (or even just sit there for a long time, preventing you from recycling\n\nThat's probably bad software or a batch system--which is tuned\ndifferently. Any OLTP system has to be able to partition its problems\nto keep transactions short and small. 
If it doesn't, it will not be\nusable.\n\n> undo-log space; this is the dual of the VACUUM-can't-reclaim-dead-tuple\n> problem). And a few other problems that any Oracle DBA can tell you\n> about. I prefer our system.\n\nOracle seems to make the assumption that data changes, which is why it\nmanages free space within each page as well as within free lists. The\ndatabase will be bigger but you get much better performance on DML.\nIt is very good at caching so reads are fast.\n\nPostgres seems to make the assumption that updates and deletes are\nrare. A delete/insert policy for updates means that a highly indexed\ntable requires lots of disk I/O when the update happens and the\nconcomitant garbage collection when vacuum runs. But then MVCC makes\nthe assumption that there's lots of DML. I don't understand the\nphilosphical split here.\n\nI guess I don't understand what application profiles/statistics makes\nyou prefer Postgres' approach over Oracle's.\n\n> The increased I/O activity is certainly to be expected, but what I find\n> striking here is that you've got substantial swap activity in the second\n> trace. What is causing that? Not VACUUM I don't think. It doesn't have\n> any huge memory demand. But swapping out processes could account for\n> the perceived slowdown in interactive response.\n\nThe box is a bit memory starved, and we'll be addressing that\nshortly. I don't think it accounts for 3 minute queries, but perhaps\nit might. vacuum_mem is 32mb, btw.\n\nRob\n\n\n", "msg_date": "Thu, 23 Oct 2003 09:15:34 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking " }, { "msg_contents": ">>>>> \"RN\" == Rob Nagler <[email protected]> writes:\n\nRN> Vivek Khera writes:\n>> AMI or Adaptec based?\n\nRN> Adaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and\nRN> Red Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it\n\nCool. No need to diddle with it, then. The Adaptec work quite well,\nespecially if you have battery backup.\n\nAnyhow, it seems that as Tom mentioned, you are going into swap when\nyour vacuum runs, so I'll suspect you're just at the edge of total\nmemory utilization, and then you go over the top.\n\nAnother theory is that the disk capacity is near saturation, the\nvacuum causes it to slow down just a smidge, and then your application\nopens additional connections to handle the incoming requests which\ndon't complete fast enough, causing more memory usage with the\nadditional postmasters created. Again, you suffer the slow spiral of\ndeath due to resource shortage.\n\nI'd start by getting full diagnosis of overall what your system is\ndoing during the vacuum (eg, additional processes created) then see if\nadding RAM will help.\n\nAlso, how close are you to the capacity of your disk bandwidth? I\ndon't see that in your numbers. I know in freebsd I can run \"systat\n-vmstat\" and it gives me a percentage of utilization that lets me know\nwhen I'm near the capacity.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 23 Oct 2003 16:53:05 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Am Donnerstag, 23. Oktober 2003 15:26 schrieb Tom Lane:\n> ... 
if all tuples are the same size, and if you never have any\n> transactions that touch enough tuples to overflow your undo segment\n> (or even just sit there for a long time, preventing you from recycling\n> undo-log space; this is the dual of the VACUUM-can't-reclaim-dead-tuple\n> problem). And a few other problems that any Oracle DBA can tell you about.\n> I prefer our system.\n\nof course both approaches have advantages, it simply depends on the usage \npattern. A case where oracle really rules over postgresql are m<-->n \nconnection tables where each record consist of two foreign keys, the \noverwrite approach is a big win here.\n\n", "msg_date": "Fri, 24 Oct 2003 08:17:22 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n\n> Incorrect. If the tuples smaller, Oracle does the right thing. If\n> there's enough space in the page, it shifts the tuples to make room.\n> That's what pctfree, pctused and pctincrease allow you to control.\n> It's all in memory so its fast, and I don't think it has to update any\n> indices.\n\nNote that pctfree/pctused are a big performance drain on the usual case. Try\nsetting them to 0/100 on a table that doesn't get updates (like a many-many\nrelation table) and see how much faster it is to insert and scan.\n\n> > transactions that touch enough tuples to overflow your undo segment\n> \n> That's easily configured, and hasn't been a problem in the databases\n> I've managed.\n\nJudging by the number of FAQ lists out there that explain various quirks of\nrollback segment configuration I wouldn't say it's so easily configured.\n\n> > (or even just sit there for a long time, preventing you from recycling\n> \n> That's probably bad software or a batch system--which is tuned\n> differently. Any OLTP system has to be able to partition its problems\n> to keep transactions short and small. If it doesn't, it will not be\n> usable.\n\nBoth DSS style and OLTP style databases can be accomodated with rollback\nsegments though it seems to me that DSS style databases lose most of the\nadvantage of rollback segments and optimistic commit.\n\nThe biggest problem is on systems where there's a combination of both users.\nYou need tremendous rollback segments to deal with the huge volume of oltp\ntransactions that can occur during a single DSS query. And the DSS query\nperformance is terrible as it has to check the rollback segments for a large\nportion of the blocks it reads.\n\n> Oracle seems to make the assumption that data changes, \n\nArguably it's the other way around. Postgres's approach wins whenever most of\nthe tuples in a table have been updated, in that case it just has to scan the\nwhole table ignoring old records not visible to the transaction. Oracle has to\nconsult the rollback segment for any recently updated tuple. Oracle's wins in\nthe case where most of the tuples haven't changed so it can just scan the\ntable without consulting lots of rollback segments.\n\n-- \ngreg\n\n", "msg_date": "24 Oct 2003 11:36:12 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Stephen writes:\n> I ran into the same problem with VACUUM on my Linux box. If you are running\n> Linux, take a look at \"elvtune\" or read this post:\n\nThe default values were -r 64 -w 8192. The article said this was\n\"optimal\". 
I just futzed with different values anywere from -w 128 -r\n128 to -r 16 -w 8192. None of these mattered much when vacuum is\nrunning. \n\nThis is a RAID1 box with two disks. Even with vacuum and one other\npostmaster running, it's still got to get a lot of blocks through the\nI/O system.\n\nRob\n\n\n", "msg_date": "Fri, 24 Oct 2003 16:04:39 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Mario Weilguni writes:\n> of course both approaches have advantages, it simply depends on the usage \n> pattern. A case where oracle really rules over postgresql are m<-->n \n> connection tables where each record consist of two foreign keys, the \n> overwrite approach is a big win here.\n\nThat's usually our case. My company almost always has \"groupware\"\nproblems to solve. Every record has a \"realm\" (security) foreign key\nand typically another key. The infrastructure puts the security\nkey on queries to avoid returning the wrong realm's data.\n\nRob\n\n\n", "msg_date": "Fri, 24 Oct 2003 16:07:25 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Vivek Khera writes:\n> Also, how close are you to the capacity of your disk bandwidth? I\n> don't see that in your numbers. I know in freebsd I can run \"systat\n> -vmstat\" and it gives me a percentage of utilization that lets me know\n> when I'm near the capacity.\n\nThe vacuum totally consumes the system. It's in a constant \"D\". As\nnear as I can tell, it's hitting all blocks in the database.\n\nThe problem is interactive performance when vacuum is in a D state.\nEven with just two processes doing \"stuff\" (vacuum and a select, let's\nsay), the select is very slow.\n\nMy understanding of the problem is that if a query hits the disk hard\n(and many of my queries do) and vacuum is hitting the disk hard, they\ncontend for the same resource and nobody wins. The query optimizer\nhas lots of problems with my queries and ends up doing silly sorts.\nAs a simple example, one query goes like this:\n\n \tselect avg(f1) from t1 group by f2;\n\nThis results in a plan like:\n\n Aggregate (cost=171672.95..180304.41 rows=115086 width=32)\n -> Group (cost=171672.95..177427.26 rows=1150862 width=32)\n -> Sort (cost=171672.95..174550.10 rows=1150862 width=32)\n Sort Key: f2\n -> Seq Scan on t1 (cost=0.00..39773.62 rows=1150862 width=32)\n\nThis is of course stupid, because it sorts a 1M rows, which probably\nmeans it has to hit disk (sort_mem can only be so large). Turns out\nthere are only about 20 different values of f2, so it would be much\nbetter to aggregate without sorting. This is the type of query which\nruns while vacuum runs and I'm sure the two are just plain\nincompatible. vacuum is read intensive and this query is write\nintensive.\n\nRob\n\n\n", "msg_date": "Fri, 24 Oct 2003 16:18:48 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Greg Stark writes:\n> Note that pctfree/pctused are a big performance drain on the usual case. Try\n> setting them to 0/100 on a table that doesn't get updates (like a many-many\n> relation table) and see how much faster it is to insert and scan.\n\nRight. You can optimize each table independently. The \"usual\" case\ndoesn't exist in most databases, I've found, which is why Oracle does\nbetter. 
\n\n> Judging by the number of FAQ lists out there that explain various quirks of\n> rollback segment configuration I wouldn't say it's so easily configured.\n\nMaybe we just got lucky. :-)\n\n> The biggest problem is on systems where there's a combination of both users.\n\nAs is ours.\n\n> You need tremendous rollback segments to deal with the huge volume of oltp\n> transactions that can occur during a single DSS query. And the DSS query\n> performance is terrible as it has to check the rollback segments for a large\n> portion of the blocks it reads.\n\nThe DSS issues only come into play I think if the queries are long.\nThis is our problem. Postgres does a bad job with DSS, I believe. I\nmentioned the select avg(f1) from t1 group by f2 in another message.\nIf it were optimized for \"standard\" SQL, such as, avg, sum, etc., I\nthink it would do a lot better with DSS-type problems. Our problem\nseems to be that the DSS queries almost always hit disk to sort.\n\n> Arguably it's the other way around. Postgres's approach wins whenever most of\n> the tuples in a table have been updated, in that case it just has to scan the\n> whole table ignoring old records not visible to the transaction. Oracle has to\n> consult the rollback segment for any recently updated tuple. Oracle's wins in\n> the case where most of the tuples haven't changed so it can just scan the\n> table without consulting lots of rollback segments.\n\nI see what you're saying. I'm not a db expert, just a programmer\ntrying to make his queries go faster, so I'll acknowledge that the\ndesign is theoretically better. \n\nIn practice, I'm still stuck. As a simple example, this query\n \tselect avg(f1) from t1 group by f2\n\nTakes 33 seconds (see explain analyze in another note in this thread)\nto run on idle hardware with about 1GB available in the cache. It's\nclearly hitting disk to do the sort. Being a dumb programmer, I\nchanged the query to:\n\n select f1 from t1;\n\nAnd wrote the rest in Perl. It takes 4 seconds to run. Why? The\nPerl doesn't sort to disk, it aggregates in memory. There are 18 rows\nreturned. What I didn't mention is that I originally had:\n\n select avg(f1), t2.name from t1, t2 where t2.f2 = t1.f2 group by t2.name;\n\nWhich is much worse:\n\n Aggregate (cost=161046.30..162130.42 rows=8673 width=222) (actual time=72069.10..87455.69 rows=18 loops=1)\n -> Group (cost=161046.30..161479.95 rows=86729 width=222) (actual time=71066.38..78108.17 rows=963660 loops=1)\n -> Sort (cost=161046.30..161263.13 rows=86729 width=222) (actual time=71066.36..72445.74 rows=963660 loops=1)\n Sort Key: t2.name\n -> Merge Join (cost=148030.15..153932.66 rows=86729 width=222) (actual time=19850.52..27266.40 rows=963660 loops=1)\n Merge Cond: (\"outer\".f2 = \"inner\".f2)\n -> Sort (cost=148028.59..150437.74 rows=963660 width=58) (actual time=19850.18..21750.12 rows=963660 loops=1)\n Sort Key: t1.f2\n -> Seq Scan on t1 (cost=0.00..32479.60 rows=963660 width=58) (actual time=0.06..3333.39 rows=963660 loops=1)\n -> Sort (cost=1.56..1.60 rows=18 width=164) (actual time=0.30..737.59 rows=931007 loops=1)\n Sort Key: t2.f2\n -> Seq Scan on t2 (cost=0.00..1.18 rows=18 width=164) (actual time=0.05..0.08 rows=18 loops=1)\n Total runtime: 87550.31 msec\n\nAgain, there are about 18 values of f2. The optimizer even knows this\n(it's a foreign key to t2.f2), but instead it does the query plan in\nexactly the wrong order. It hits disk probably 3 times as much as the\nsimpler query judging by the amount of time this query takes (33 vs 88\nsecs). 
BTW, adding an index to t1.f2 has seriously negative effects\non many other DSS queries.\n\nI'm still not sure that the sort problem is our only problem when\nvacuum runs. It's tough to pin down. We'll be adding more memory to\nsee if that helps with the disk contention.\n\nRob\n\n\n", "msg_date": "Fri, 24 Oct 2003 17:09:30 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "\nRob Nagler <[email protected]> writes:\n\n> Mario Weilguni writes:\n> > of course both approaches have advantages, it simply depends on the usage \n> > pattern. A case where oracle really rules over postgresql are m<-->n \n> > connection tables where each record consist of two foreign keys, the \n> > overwrite approach is a big win here.\n\nI don't understand why you would expect overwriting to win here. \nWhat types of updates do you do on these tables? \n\nNormally I found using update on such a table was too awkward to contemplate\nso I just delete all the relation records that I'm replacing for the key I'm\nworking with and insert new ones. This always works out to be cleaner code. In\nfact I usually leave such tables with no UPDATE grants on them.\n\nIn that situation I would have actually expected Postgres to do as well as or\nbetter than Oracle since that makes them both functionally equivalent.\n\n-- \ngreg\n\n", "msg_date": "24 Oct 2003 20:07:57 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "\nRob Nagler <[email protected]> writes:\n\n> Greg Stark writes:\n> > Note that pctfree/pctused are a big performance drain on the usual case. Try\n> > setting them to 0/100 on a table that doesn't get updates (like a many-many\n> > relation table) and see how much faster it is to insert and scan.\n> \n> Right. You can optimize each table independently. The \"usual\" case\n> doesn't exist in most databases, I've found, which is why Oracle does\n> better. \n\nSorry I was unclear. By \"usual case\" I meant reading, as opposed to updates.\nThe size of the on-disk representation turns out to be a major determinant in\na lot of database applications, since the dominant resource is i/o bandwidth.\nTry doing a fresh import of a large table with pctfree 0 pctuse 100 and\ncompare how long a select takes on it compared to the original table.\n\n\n\n> In practice, I'm still stuck. As a simple example, this query\n> \tselect avg(f1) from t1 group by f2\n> \n> Takes 33 seconds (see explain analyze in another note in this thread)\n> to run on idle hardware with about 1GB available in the cache. It's\n> clearly hitting disk to do the sort. Being a dumb programmer, I\n> changed the query to:\n\nI didn't see the rest of the thread so forgive me if you've already seen these\nsuggestions. \n\nFIrstly, that type of query will be faster in 7.4 due to implementing a new\nmethod for doing groups called hash aggregates.\n\nSecondly you could try raising sort_mem. Postgres can't know how much memory\nit really has before it swaps, so there's a parameter to tell it. And swapping\nwould be much worse than doing disk sorts.\n\nYou can raise sort_mem to tell it how much memory it's allowed to use before\nit goes to disk sorts. You can even use ALTER SESSION to raise it in a few DSS\nsessions but leave it low the many OLTP sessions. If it's high in OLTP\nsessions then you could quickly hit swap when they all happen to decide to use\nthe maximum amount at the same time. 
But then you don't want to be doing big\nsorts in OLTP sessions anyways.\n\nUnfortunately there's no way to tell how much memory it thinks it's going to\nuse. I used to use a script to monitor the pgsql_tmp directory in the database\nto watch for usage.\n\n> select f1 from t1;\n> \n> And wrote the rest in Perl. It takes 4 seconds to run. Why? The Perl doesn't\n> sort to disk, it aggregates in memory. There are 18 rows returned. What I\n> didn't mention is that I originally had:\n\nOof. I expect if you convinced 7.3 to do the sort in memory by a suitable\nvalue of sort_mem it would be close, but still slower than perl. 7.4 should be\nvery close since hash aggregates would be more or less equivalent to the perl\nmethod.\n\n\n> select avg(f1), t2.name from t1, t2 where t2.f2 = t1.f2 group by t2.name;\n> \n> Which is much worse:\n> \n> Aggregate (cost=161046.30..162130.42 rows=8673 width=222) (actual time=72069.10..87455.69 rows=18 loops=1)\n> -> Group (cost=161046.30..161479.95 rows=86729 width=222) (actual time=71066.38..78108.17 rows=963660 loops=1)\n> -> Sort (cost=161046.30..161263.13 rows=86729 width=222) (actual time=71066.36..72445.74 rows=963660 loops=1)\n> Sort Key: t2.name\n> -> Merge Join (cost=148030.15..153932.66 rows=86729 width=222) (actual time=19850.52..27266.40 rows=963660 loops=1)\n> Merge Cond: (\"outer\".f2 = \"inner\".f2)\n> -> Sort (cost=148028.59..150437.74 rows=963660 width=58) (actual time=19850.18..21750.12 rows=963660 loops=1)\n> Sort Key: t1.f2\n> -> Seq Scan on t1 (cost=0.00..32479.60 rows=963660 width=58) (actual time=0.06..3333.39 rows=963660 loops=1)\n> -> Sort (cost=1.56..1.60 rows=18 width=164) (actual time=0.30..737.59 rows=931007 loops=1)\n> Sort Key: t2.f2\n> -> Seq Scan on t2 (cost=0.00..1.18 rows=18 width=164) (actual time=0.05..0.08 rows=18 loops=1)\n> Total runtime: 87550.31 msec\n> \n> Again, there are about 18 values of f2. The optimizer even knows this\n> (it's a foreign key to t2.f2), but instead it does the query plan in\n> exactly the wrong order. It hits disk probably 3 times as much as the\n> simpler query judging by the amount of time this query takes (33 vs 88\n> secs). BTW, adding an index to t1.f2 has seriously negative effects\n> on many other DSS queries.\n\nWell, first of all it doesn't really because you said to group by t2.name not\nf1. You might expect it to at least optimize something like this:\n\nselect avg(f1),t2.name from t1 join t2 using (f2) group by f2\n\nbut even then I don't think it actually is capable of using foreign keys as a\nhint like that. I don't think Oracle does either actually, but I'm not sure.\n\nTo convince it to do the right thing you would have to do either:\n\nSELECT a, t2.name \n FROM (SELECT avg(f1),f2 FROM t1 GROUP BY f2) AS t1 \n JOIN t2 USING (f2)\n\nOr use a subquery:\n\nSELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)\n FROM t1\n GROUP BY f2 \n\n\nOh, incidentally, my use of the \"JOIN\" syntax is a personal preference.\nIdeally it would produce identical plans but unfortunately that's not always\ntrue yet, though 7.4 is closer. I think in the suggestion above it actually\nwould.\n\n-- \ngreg\n\n", "msg_date": "24 Oct 2003 20:32:17 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Greg Stark writes:\n> Sorry I was unclear. 
By \"usual case\" I meant reading, as opposed to updates.\n> The size of the on-disk representation turns out to be a major determinant in\n> a lot of database applications, since the dominant resource is i/o bandwidth.\n> Try doing a fresh import of a large table with pctfree 0 pctuse 100 and\n> compare how long a select takes on it compared to the original table.\n\nBTW, I greatly appreciate your support on this stuff. This list is a\nfantastic resource.\n\nI think we agree. The question is what is the workload. On tables\nwithout updates, postgres will be fast enough. However, postgres is\nslow on tables with updates afaict. I think of OLTP as a system with\nupdates. One can do DSS on an OLTP database with Oracle, at least it\nseems to work for one of our projects.\n\n> FIrstly, that type of query will be faster in 7.4 due to implementing a new\n> method for doing groups called hash aggregates.\n\nWe'll be trying it as soon as it is out.\n\n> Secondly you could try raising sort_mem. Postgres can't know how much memory\n> it really has before it swaps, so there's a parameter to tell it. And swapping\n> would be much worse than doing disk sorts.\n\nIt is at 8000. This is probably as high as I can go with multiple\npostmasters. The sort area is shared in Oracle (I think :-) in the\nUGA.\n\n> You can raise sort_mem to tell it how much memory it's allowed to\n> use before it goes to disk sorts. You can even use ALTER SESSION to\n> raise it in a few DSS sessions but leave it low the many OLTP\n> sessions. If it's high in OLTP sessions then you could quickly hit\n> swap when they all happen to decide to use the maximum amount at the\n> same time. But then you don't want to be doing big sorts in OLTP\n> sessions anyways.\n\nThis is a web app. I can't control what the user wants to do.\nSometimes they update data, and other times they simply look at it.\n\nI didn't find ALTER SESSION for postgres (isn't that Oracle?), so I\nset sort_mem in the conf file to 512000, restarted postrgres. Reran\nthe simpler query (no name) 3 times, and it was still 27 secs.\n\n> Unfortunately there's no way to tell how much memory it thinks it's\n> going to use. I used to use a script to monitor the pgsql_tmp\n> directory in the database to watch for usage.\n\nI don't have to. The queries that run slow are hitting disk.\nAnything that takes a minute has to be writing to disk.\n\n> Well, first of all it doesn't really because you said to group by t2.name not\n> f1. You might expect it to at least optimize something like this:\n\nI put f2 in the group by, and it doesn't matter. That's the point.\nIt's the on-disk sort before the aggregate that's killing the query.\n\n> but even then I don't think it actually is capable of using foreign keys as a\n> hint like that. I don't think Oracle does either actually, but I'm not sure.\n\nI'll be finding out this week.\n\n> To convince it to do the right thing you would have to do either:\n> \n> SELECT a, t2.name \n> FROM (SELECT avg(f1),f2 FROM t1 GROUP BY f2) AS t1 \n> JOIN t2 USING (f2)\n> \n> Or use a subquery:\n> \n> SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)\n> FROM t1\n> GROUP BY f2 \n\nThis doesn't solve the problem. It's the GROUP BY that is doing the\nwrong thing. 
It's grouping, then aggregating.\n\nRob\n\n\n", "msg_date": "Mon, 27 Oct 2003 09:19:31 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Greg Stark writes:\n> I don't understand why you would expect overwriting to win here. \n> What types of updates do you do on these tables? \n\nThese are statistics that we're adjusting. I think that's pretty\nnormal stuff. The DSS component is the avg() of these numbers on\nparticular groups. The groups are related to foreign keys to\ncustomers and other things.\n\n> Normally I found using update on such a table was too awkward to\n> contemplate so I just delete all the relation records that I'm\n> replacing for the key I'm working with and insert new ones. This\n> always works out to be cleaner code. In fact I usually leave such\n> tables with no UPDATE grants on them.\n\nIn accounting apps, we do this, too. It's awkward with all the\nrelationships to update all the records in the right order. But\nOracle wins on delete/insert, too, because it reuses the tuples it\nalready has in memory, and it can reuse the same foreign key index\npages, too, since the values are usually the same.\n\nThe difference between Oracle and postgres seems to be optimism.\npostgres assumes the transaction will fail and/or that a transaction\nwill modify lots of data that is used by other queries going on in\nparallel. Oracle assumes that the transaction is going to be\ncommitted, and it might as well make the changes in place.\n\n> In that situation I would have actually expected Postgres to do as well as or\n> better than Oracle since that makes them both functionally\n> equivalent.\n\nI'll find out soon enough. :-)\n\nRob\n\n\n", "msg_date": "Mon, 27 Oct 2003 09:24:47 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n\n> I didn't find ALTER SESSION for postgres (isn't that Oracle?), so I\n> set sort_mem in the conf file to 512000, restarted postrgres. Reran\n> the simpler query (no name) 3 times, and it was still 27 secs.\n\nSorry, I don't know how that bubbled up from the depths of my Oracle memory.\nIn postgres it's just \"SET\"\n\ndb=> set sort_mem = 512000;\nSET\n\n> > To convince it to do the right thing you would have to do either:\n> > \n> > SELECT a, t2.name \n> > FROM (SELECT avg(f1),f2 FROM t1 GROUP BY f2) AS t1 \n> > JOIN t2 USING (f2)\n> > \n> > Or use a subquery:\n> > \n> > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)\n> > FROM t1\n> > GROUP BY f2 \n> \n> This doesn't solve the problem. It's the GROUP BY that is doing the\n> wrong thing. It's grouping, then aggregating.\n\nBut at least in the form above it will consider using an index on f2, and it\nwill consider using indexes on t1 and t2 to do the join.\n\nIt's unlikely to go ahead and use the indexes though because normally sorting\nis faster than using the index when scanning the whole table. You should\ncompare the \"explain analyze\" results for the original query and these two.\nAnd check the results with \"set enable_seqscan = off\" as well. \n\nI suspect you'll find your original query uses sequential scans even when\nthey're disabled because it has no alternative. 
With the two above it can use\nindexes but I suspect you'll find they actually take longer than the\nsequential scan and sort -- especially if you have sort_mem set large enough.\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 12:53:56 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Greg Stark writes:\n> > > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)\n> > > FROM t1\n> > > GROUP BY f2 \n> > \n> > This doesn't solve the problem. It's the GROUP BY that is doing the\n> > wrong thing. It's grouping, then aggregating.\n> \n> But at least in the form above it will consider using an index on f2, and it\n> will consider using indexes on t1 and t2 to do the join.\n\nThere are 20 rows in t2, so an index actually slows down the join.\nI had to drop the index on t1.f2, because it was trying to use it\ninstead of simply sorting 20 rows.\n\nI've got preliminary results for a number of \"hard\" queries between\noracle and postgres (seconds):\n\n PG ORA \n 0 5 q1\n 1 0 q2\n 0 5 q3\n 2 1 q4\n219 7 q5\n217 5 q6\n 79 2 q7\n 31 1 q8\n\nThese are averages of 10 runs of each query. I didn't optimize\npctfree, etc., but I did run analyze after the oracle import.\n\nOne of the reason postgres is faster on the q1-4 is that postgres\nsupports OFFSET/LIMIT, and oracle doesn't. q7 and q8 are the queries\nthat I've referred to recently (avg of group by).\n\nq5 and q6 are too complex to discuss here, but the fundamental issue\nis the order in which postgres decides to do things. The choice for\nme is clear: the developer time trying to figure out how to make the\nplanner do the \"obviously right thing\" has been too high with\npostgres. These tests demonstate to me that for even complex queries,\noracle wins for our problem.\n\nIt looks like we'll be migrating to oracle for this project from these\npreliminary results. It's not just the planner problems. The\ncustomer is more familiar with oracle, and the vacuum performance is\nanother problem.\n\nRob\n", "msg_date": "Wed, 29 Oct 2003 16:32:18 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> q5 and q6 are too complex to discuss here,\n\nHow do you expect us to get better if you don't show us the problems?\n\nBTW, have you tried any of this with a 7.4beta release? Another project\nthat I'm aware of saw several bottlenecks in their Oracle-centric code\ngo away when they tested 7.4 instead of 7.3. For instance, there is\nhash aggregation capability, which would probably solve the aggregate\nquery problem you were complaining about in\nhttp://archives.postgresql.org/pgsql-performance/2003-10/msg00640.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Oct 2003 19:03:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "Rob,\n\n> q5 and q6 are too complex to discuss here, but the fundamental issue\n> is the order in which postgres decides to do things. The choice for\n> me is clear: the developer time trying to figure out how to make the\n> planner do the \"obviously right thing\" has been too high with\n> postgres. These tests demonstate to me that for even complex queries,\n> oracle wins for our problem.\n> \n> It looks like we'll be migrating to oracle for this project from these\n> preliminary results. It's not just the planner problems. 
The\n> customer is more familiar with oracle, and the vacuum performance is\n> another problem.\n\nHey, we can't win 'em all. If we could, Larry would be circulating his \nresume'.\n\nI hope that you'll stay current with PostgreSQL developments so that you can \ndo a similarly thourough evaluation for your next project.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 29 Oct 2003 16:55:07 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n\n> One of the reason postgres is faster on the q1-4 is that postgres\n> supports OFFSET/LIMIT, and oracle doesn't. q7 and q8 are the queries\n> that I've referred to recently (avg of group by).\n\nWell the way to do offset/limit in Oracle is:\n\nSELECT * \n FROM (\n SELECT ... , rownum AS n \n WHERE rownum <= OFFSET+LIMIT\n ) \n WHERE n > OFFSET\n\nThat's basically the same thing Postgres does anyways. It actually has to do\nthe complete query and fetch and discard the records up to the OFFSET and then\nstop when it hits the LIMIT.\n\n> q5 and q6 are too complex to discuss here, but the fundamental issue\n> is the order in which postgres decides to do things. \n\nThat true for pretty 99% of all query optimization whether it's on Postgres or\nOracle. I'm rather curious to see the query and explain analyze output from q5\nand q6.\n\n-- \ngreg\n\n", "msg_date": "29 Oct 2003 23:07:59 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "On Wed, 29 Oct 2003, Rob Nagler wrote:\n\n> Greg Stark writes:\n> > > > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)\n> > > > FROM t1\n> > > > GROUP BY f2 \n> > > \n> > > This doesn't solve the problem. It's the GROUP BY that is doing the\n> > > wrong thing. It's grouping, then aggregating.\n> > \n> > But at least in the form above it will consider using an index on f2, and it\n> > will consider using indexes on t1 and t2 to do the join.\n> \n> There are 20 rows in t2, so an index actually slows down the join.\n> I had to drop the index on t1.f2, because it was trying to use it\n> instead of simply sorting 20 rows.\n\nt2 was 'vacuum full'ed and analyzed, right? Just guessing.\n\n", "msg_date": "Thu, 30 Oct 2003 07:29:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Josh Berkus writes:\n> I hope that you'll stay current with PostgreSQL developments so that you can \n> do a similarly thourough evaluation for your next project.\n\nOh, no worries. This project just happens to be a tough one. We're\nheavily invested in Postgres. Other projects we maintain that use\nPostgres are zoescore.com, colosla.org, and paintedsnapshot.com.\n\nI am currently working on a very large project where the customer is\nvery committed to Postgres/open source. We're in discussions about\nwhat to do about the scalability problems we saw in the other project.\nYou can help by addressing a dilema we (my new customer and I) see.\nI apologize for the length of what follows, but I'm trying to be as\nclear as possible about our situation.\n\nI have had a lot push back from the core Postgres folks on the idea of\nplanner hints, which would go a long way to solve the performance\nproblems we are seeing. 
I presented an alternative approach: have a\n\"style sheet\" (Scribe, LaTex) type of solution in the postmaster,\nwhich can be customized by end users. That got no response so I\nassume it wasn't in line with the \"Postgres way\" (more below).\n\nThe vacuum problem is very serious for the problematic database to the\npoint that one of my customer's customers said:\n\n However, I am having a hard time understanding why the system is so\n slow... from my perspective it seems like you have some fundamental\n database issues that need to be addressed.\n\nThis is simply unacceptable, and that's why we're moving to Oracle.\nIt's very bad for my business reputation.\n\nI don't have a ready solution to vacuuming, and none on the list have\nbeen effective. We'll be adding more memory, but it seems to be disk\nbandwidth problem. I run Oracle on much slower system, and I've never\nnoticed problems of this kind, even when a database-wide validation is\nrunning. When vacuum is running, it's going through the entire\ndatabase, and that pretty much trashes all other queries, especially\nDSS queries. As always it is just software, and there's got to be\n80/20 solution.\n\nOur new project is large, high-profile, but not as data intensive as\nthe problematic one. We are willing to commit significant funding and\neffort to make Postgres faster. We are \"business value\" driven. That\nmeans we solve problems practically instead of theoretically. This\nseems to be in conflict with \"the Postgres way\", which seems to be\nmore theoretical. Our business situation comes ahead of theories.\n\nMy customer (who monitors this list) and I believe that our changes\nwould not be accepted back into the Postgres main branch. That\npresents us with a difficult situation, because we don't want to own a\nseparate branch. (Xemacs helped push emacs, and maybe that's what has\nto happen here, yet it's not a pretty situation.)\n\nWe'll be meeting next week to discuss the situation, and how we'll go\nforward. We have budget in 2003 to spend on this, but only if the\nsituation can be resolved. Otherwise, we'll have to respect the data\nwe are seeing, and think about our choice of technologies.\n\nThanks for the feedback.\n\nRob\n\n\n", "msg_date": "Thu, 30 Oct 2003 09:20:20 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Tom Lane writes:\n> Rob Nagler <[email protected]> writes:\n> > q5 and q6 are too complex to discuss here,\n> \n> How do you expect us to get better if you don't show us the problems?\n\nWith all due respect and thanks for the massive amount of help, I have\npresented the problems. q5 and q6 are a subset of the following\ngeneral problems:\n\n * Multiple ORDER BY results in no index used.\n Solution: drop multiple ORDER BY, only use first\n\n * Vacuum locks out interactive users\n Solution: don't run vacuum full and only run vacuum at night\n\n * Low cardinality index on large table confuses planner\n Solution: Drop (foreign key) index, which hurts other performance\n\n * Grouped aggregates result in disk sort\n Solution: Wait to 7.4 (may work), or write in Perl (works today)\n\n * Extreme non-linear performance (crossing magic number in\n optimizer drops performance three orders of magnitude)\n Solution: Don't cross magic number, or code in Perl\n\nThe general problem is that our system generates 90% of the SQL we\nneed. 
There are many advantages to this, such as being able to add\nOFFSET/LIMIT support with a few lines of code in a matter of hours.\nEvery time we have to custom code a query, or worse, code it in Perl,\nwe lose many benefits. I understand the need to optimize queries, but\nmy general experience with Oracle is that I don't have to do this very\noften. When the 80/20 rule inverts, there's something fundamentally\nwrong with the model. That's where we feel we're at. It's cost us a\ntremendous amount of money to deal with these query optimizations.\n\nThe solution is not to fix the queries, but to address the root\ncauses. That's what my other note in this thread is about. I hope\nyou understand the spirit of my suggestion, and work with us to\nfinding an acceptable approach to the general problems.\n\n> BTW, have you tried any of this with a 7.4beta release?\n\nI will, but for my other projects, not this one. I'll run this data,\nbecause it's a great test case.\n\nWe have a business decision to make: devote more time to Postgres or\ngo with Oracle. I spent less than a day getting the data into Oracle\nand to create the benchmark. The payoff is clear, now. The risk of\n7.4 is still very high, because the vacuum problem still looms and a\nsimple \"past performance is a good indicator of future performance\".\nGoing forward, there's no choice. We've had to limit end-user\nfunctionality to get Postgres working as well as it does, and that's\nway below where Oracle is without those same limits and without any\neffort put into tuning.\n\nThanks again for all your support.\n\nRob\n\n\n", "msg_date": "Thu, 30 Oct 2003 09:59:04 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "scott.marlowe writes:\n> t2 was 'vacuum full'ed and analyzed, right? Just guessing.\n\nFresh import. I've been told this includes a ANALYZE.\n\nRob\n\n\n", "msg_date": "Thu, 30 Oct 2003 10:04:24 -0700", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "On Thu, 30 Oct 2003, Rob Nagler wrote:\n\n> The vacuum problem is very serious for the problematic database to the\n> point that one of my customer's customers said:\n> \n> However, I am having a hard time understanding why the system is so\n> slow... from my perspective it seems like you have some fundamental\n> database issues that need to be addressed.\n> \n> This is simply unacceptable, and that's why we're moving to Oracle.\n> It's very bad for my business reputation.\n> \n> I don't have a ready solution to vacuuming, and none on the list have\n> been effective. We'll be adding more memory, but it seems to be disk\n> bandwidth problem. I run Oracle on much slower system, and I've never\n> noticed problems of this kind, even when a database-wide validation is\n> running. When vacuum is running, it's going through the entire\n> database, and that pretty much trashes all other queries, especially\n> DSS queries. As always it is just software, and there's got to be\n> 80/20 solution.\n\nHave you looked at the autovacuum daemon? Was it found wanting or what? \nI've had good luck with it so far, so I was just wondering if it might \nwork for your needs as well. It's quite intelligent about which tables \net.al. 
it vacuums.\n\n", "msg_date": "Thu, 30 Oct 2003 10:05:15 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> When vacuum is running, it's going through the entire\n> database, and that pretty much trashes all other queries, especially\n> DSS queries. As always it is just software, and there's got to be\n> 80/20 solution.\n\nOne thing that's been discussed but not yet tried is putting a tunable\ndelay into VACUUM's per-page loop (ie, sleep N milliseconds after each\nheap page is processed, and probably each index page too). This might\nbe useless or it might be the 80/20 solution you want. Want to try it\nand report back?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2003 12:28:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking " }, { "msg_contents": "On Thu, 30 Oct 2003, Rob Nagler wrote:\n\n> scott.marlowe writes:\n> > t2 was 'vacuum full'ed and analyzed, right? Just guessing.\n> \n> Fresh import. I've been told this includes a ANALYZE.\n\nYou should probably run analyze by hand just to be sure. If the planner \nis using an index scan on a table with 20 rows, then it's likely it has \nthe default statistics for the table, not real ones.\n\n", "msg_date": "Thu, 30 Oct 2003 10:55:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Rob,\n\n> I have had a lot push back from the core Postgres folks on the idea of\n> planner hints, which would go a long way to solve the performance\n> problems we are seeing. \n\nI can tell you that the general reaction that you'll get is \"let's fix the \nproblems with the planner instead of giving the user a workaround.\" Not that \nthat helps people running on older versions, but it stems from a attitude of \n\"let's heal the illness, not the symptoms\" attitude that is one of our \nproject's strengths.\n\n> I presented an alternative approach: have a\n> \"style sheet\" (Scribe, LaTex) type of solution in the postmaster,\n> which can be customized by end users. That got no response so I\n> assume it wasn't in line with the \"Postgres way\" (more below).\n\nOr you just posted it on a bad week. I don't remember your post; how about \nwe try it out on Hackers again and we'll argue it out?\n\n> running. When vacuum is running, it's going through the entire\n> database, and that pretty much trashes all other queries, especially\n> DSS queries. As always it is just software, and there's got to be\n> 80/20 solution.\n\nSee Tom's post.\n\n> Our new project is large, high-profile, but not as data intensive as\n> the problematic one. We are willing to commit significant funding and\n> effort to make Postgres faster. We are \"business value\" driven. That\n> means we solve problems practically instead of theoretically. This\n> seems to be in conflict with \"the Postgres way\", which seems to be\n> more theoretical. Our business situation comes ahead of theories.\n\nAs always, it's a matter of balance. Our \"theoretical purity\" has given \nPostgreSQL a reliability and recoverability level only otherwise obtainable \nfrom Oracle for six figures. And has allowed us to build an extensability \nsystem that lets users define their own datatypes, operators, aggregates, \netc., in a way that is not possible on *any* other database. 
This is what \nyou're up against when you suggest changes to some of the core components ... \npeople don't want to break what's not broken unless there are substantial, \nproven gains to be made.\n\n> My customer (who monitors this list) and I believe that our changes\n> would not be accepted back into the Postgres main branch. \n\nIf you haven't posted, you don't know. A *lot* of suggestions get rejected \nbecause the suggestor wants Tom, Bruce, Peter, Joe and Jan to do the actual \nwork or aren't willing to follow-through with testing and maintanence. As I \nsaid before, *I* don't remember earlier posts from you offering patches; \nperhaps it's time to try again?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 30 Oct 2003 10:21:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "Josh Berkus wrote:\n> > Our new project is large, high-profile, but not as data intensive as\n> > the problematic one. We are willing to commit significant funding and\n> > effort to make Postgres faster. We are \"business value\" driven. That\n> > means we solve problems practically instead of theoretically. This\n> > seems to be in conflict with \"the Postgres way\", which seems to be\n> > more theoretical. Our business situation comes ahead of theories.\n> \n> As always, it's a matter of balance. Our \"theoretical purity\" has given \n> PostgreSQL a reliability and recoverability level only otherwise obtainable \n> from Oracle for six figures. And has allowed us to build an extensibility \n> system that lets users define their own datatypes, operators, aggregates, \n> etc., in a way that is not possible on *any* other database. This is what \n> you're up against when you suggest changes to some of the core components ... \n> people don't want to break what's not broken unless there are substantial, \n> proven gains to be made.\n\nLet me add a little historical perspective here --- the PostgreSQL code\nbase is almost 20 years old, and code size has doubled in the past 7\nyears. We are into PostgreSQL for the long haul --- that means we want\ncode that will be working and maintainable 7 years from now. If your\nsolution doesn't fit that case, well, you might be right, it might get\nrejected. However, we find that it is worth the time and effort to make\nour code sustainable, and it is possible your solution could be set up\nto do that. However, it requires you to go beyond your \"business\nsituation\" logic and devote time to contribute something that will make\nPostgreSQL better 5 years in the future, as well as the next release.\n\nWe have found very few companies that are not willing to work within\nthat long-term perspective.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 30 Oct 2003 13:34:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" }, { "msg_contents": "\n> Fresh import. I've been told this includes a ANALYZE.\n\nUh - no it doesn't.\n\nChris\n\n\n", "msg_date": "Fri, 31 Oct 2003 09:38:12 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum locking" } ]
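To make the query-rewrite suggestion from the thread above concrete, here is a small, hypothetical psql session sketch. It assumes the thread's example schema t1(f1, f2) and t2(f2, name); the database name "mydb" and the derived-table alias "s" are placeholders that do not appear in the original posts, and the sort_mem value is simply the one Rob reported trying.

psql mydb <<'SQL'
-- per-session setting; in PostgreSQL this is SET, not Oracle's ALTER SESSION
SET sort_mem = 512000;

-- original shape: join first, then group and aggregate
EXPLAIN ANALYZE
SELECT avg(t1.f1), t2.name
FROM t1 JOIN t2 USING (f2)
GROUP BY t2.name;

-- the rewrite discussed above: aggregate in a derived table, then join the small t2
EXPLAIN ANALYZE
SELECT s.a, t2.name
FROM (SELECT avg(f1) AS a, f2 FROM t1 GROUP BY f2) AS s
JOIN t2 USING (f2);

-- check whether any index plan is even considered for the rewritten form
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT s.a, t2.name
FROM (SELECT avg(f1) AS a, f2 FROM t1 GROUP BY f2) AS s
JOIN t2 USING (f2);
SQL

Comparing the three plans and their actual times shows whether the sort feeding the GROUP BY is the real cost center, which is the step the 7.4 hash-aggregate code path is meant to avoid.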
[ { "msg_contents": "Friday, October 17, 2003 12:05, Tom Lane [mailto:[email protected]] wrote:\n\n>Murthy Kambhampaty <[email protected]> writes:\n>> ... The script handles situations\n>> where (i) the XFS filesystem containing $PGDATA has an \n>external log and (ii)\n>> the postmaster log ($PGDATA/pg_xlog) is written to a \n>filesystem different\n>> than the one containing the $PGDATA folder.\n>\n>It does? How exactly can you ensure snapshot consistency between\n>data files and XLOG if they are on different filesystem\n\nSay, you're setup looks something like this:\n\nmount -t xfs /dev/VG1/LV_data /home/pgdata\nmount -t xfs /dev/VG1/LV_xlog /home/pgdata/pg_xlog\n\nWhen you want to take the filesystem backup, you do:\n\nStep 1:\nxfs_freeze -f /dev/VG1/LV_xlog\nxfs_freeze -f /dev/VG1/LV_data\n\tThis should finish any checkpoints that were in progress, and not\nstart any new ones\n\ttill you unfreeze. (writes to an xfs_frozen filesystem wait for the\nxfs_freeze -u, \n\tbut reads proceed; see text from xfs_freeze manpage in postcript\nbelow.)\n\n\nStep2: \ncreate snapshots of /dev/VG1/LV_xlog and /dev/VG1/LV_xlog\n\nStep 3: \nxfs_freeze -u /dev/VG1/LV_data\nxfs_freeze -u /dev/VG1/LV_xlog\n\tUnfreezing in this order should assure that checkpoints resume where\nthey left off, then log writes commence.\n\n\nStep4:\nmount the snapshots taken in Step2 somewhere; e.g. /mnt/snap_data and\n/mnt/snap_xlog. Copy (or rsync or whatever) /mnt/snap_data to /mnt/pgbackup/\nand /mnt/snap_xlog to /mnt/pgbackup/pg_xlog. Upon completion, /mnt/pgbackup/\nlooks to the postmaster like /home/pgdata would if the server had crashed at\nthe moment that Step1 was initiated. As I understand it, during recovery\n(startup) the postmaster will roll the database forward to this point,\n\"checkpoint-ing\" all the transactions that made it into the log before the\ncrash.\n\nStep5:\nremove the snapshots created in Step2.\n\nThe key is \n(i) xfs_freeze allows you to \"quiesce\" any filesystem at any point in time\nand, if I'm not mistaken, the order (LIFO) in which you freeze and unfreeze\nthe two filesystems: freeze $PGDATA/pg_xlog then $PGDATA; unfreeze $PGDATA\nthen $PGDATA/pg_xlog.\n(ii) WAL recovery assures consistency after a (file)sytem crash.\n\nPresently, the test server for my backup scripts is set-up this way, and the\nbackup works flawlessly, AFAICT. (Note that the backup script starts a\npostmaster on the filesystem copy each time, so you get early warning of\nproblems. Moreover the data in the \"production\" and \"backup\" copies are\ntested and found to be identical.\n\nComments? Any suggestions for additional tests?\n\nThanks,\n\tMurthy\n\nPS: From the xfs_freeze manpage:\n\"xfs_freeze suspends and resumes access to an XFS filesystem (see\nxfs(5)). \n\nxfs_freeze halts new access to the filesystem and creates a stable image\non disk. xfs_freeze is intended to be used with volume managers and\nhardware RAID devices that support the creation of snapshots. \n\nThe mount-point argument is the pathname of the directory where the\nfilesystem is mounted. The filesystem must be mounted to be frozen (see\nmount(8)). \n\nThe -f flag requests the specified XFS filesystem to be frozen from new\nmodifications. When this is selected, all ongoing transactions in the\nfilesystem are allowed to complete, new write system calls are halted,\nother calls which modify the filesystem are halted, and all dirty data,\nmetadata, and log information are written to disk. 
Any process\nattempting to write to the frozen filesystem will block waiting for the\nfilesystem to be unfrozen. \n\nNote that even after freezing, the on-disk filesystem can contain\ninformation on files that are still in the process of unlinking. These\nfiles will not be unlinked until the filesystem is unfrozen or a clean\nmount of the snapshot is complete. \n\nThe -u option is used to un-freeze the filesystem and allow operations\nto continue. Any filesystem modifications that were blocked by the\nfreeze are unblocked and allowed to complete.\"\n", "msg_date": "Fri, 17 Oct 2003 13:33:36 -0400", "msg_from": "Murthy Kambhampaty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] backup/restore - another area. " } ]
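For reference, the freeze/snapshot/unfreeze sequence described in the post above can be scripted roughly as follows. This is a minimal sketch that assumes LVM2 logical volumes under XFS as in the post (/dev/VG1/LV_data mounted on /home/pgdata, /dev/VG1/LV_xlog on /home/pgdata/pg_xlog); the lvcreate snapshot sizes and names are placeholders, and the exact snapshot command depends on the volume manager actually in use.

#!/bin/sh
set -e
# Step 1: freeze pg_xlog first, then the data filesystem (the post's ordering);
# xfs_freeze takes the mount point, per the manpage text quoted above
xfs_freeze -f /home/pgdata/pg_xlog
xfs_freeze -f /home/pgdata
# Step 2: snapshot both logical volumes while frozen (LVM2 syntax assumed)
lvcreate --snapshot -L 2G -n snap_data /dev/VG1/LV_data
lvcreate --snapshot -L 2G -n snap_xlog /dev/VG1/LV_xlog
# Step 3: unfreeze in the reverse order: data first, then pg_xlog
xfs_freeze -u /home/pgdata
xfs_freeze -u /home/pgdata/pg_xlog
# Step 4: mount the snapshots and copy them off; ro,nouuid avoids the
# duplicate-UUID complaint when mounting an XFS snapshot alongside its origin
mkdir -p /mnt/snap_data /mnt/snap_xlog
mount -t xfs -o ro,nouuid /dev/VG1/snap_data /mnt/snap_data
mount -t xfs -o ro,nouuid /dev/VG1/snap_xlog /mnt/snap_xlog
rsync -a --delete /mnt/snap_data/ /mnt/pgbackup/
rsync -a --delete /mnt/snap_xlog/ /mnt/pgbackup/pg_xlog/
# Step 5: discard the snapshots
umount /mnt/snap_data /mnt/snap_xlog
lvremove -f /dev/VG1/snap_data /dev/VG1/snap_xlog

As in the original procedure, /mnt/pgbackup then looks to the postmaster like a $PGDATA directory from a server that crashed at the moment of the freeze, so starting a throwaway postmaster on a copy of it (as the post's backup script does) is a cheap way to verify each backup.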
[ { "msg_contents": "Hi,\n\nI downloaded PostgreSQL 7.4beta4 and tried it out.\n\nIt turns out that the index file is still bloating even\nafter running vacuum or vacuum analyze on the table.\nStill, only reindex will claim the space back.\n\nIs the index bloating issue still not resolved in 7.4beta4 ?\n\nThanks.\n\nGan\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Sat, 18 Oct 2003 16:55:14 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "index file bloating still in 7.4 ?" }, { "msg_contents": "Gan,\n\n> Is the index bloating issue still not resolved in 7.4beta4 ?\n\nNo, it should be. Please post your max_fsm_pages setting, and the output of a \nsample VACUUM VERBOSE ANALYZE. You probably don't have your FSM set right.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 18 Oct 2003 14:58:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Hi Josh,\n\nSample verbose analyze:\n\nVACUUM VERBOSE ANALYZE hello_rda_or_key;\nINFO: vacuuming \"craft.hello_rda_or_key\"\nINFO: index \"hello242_1105\" now contains 740813 row versions in 2477 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.42s/0.13u sec elapsed 4.76 sec.\nINFO: \"hello_rda_or_key\": found 0 removable, 740813 nonremovable row \nversions in 12778 pages\nDETAIL: 440813 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.78s/0.66u sec elapsed 6.41 sec.\nINFO: analyzing \"craft.hello_rda_or_key\"\nINFO: \"hello_rda_or_key\": 12778 pages, 3000 rows sampled, 39388 \nestimated total rows\nVACUUM\n\nHere is my postgresql.conf file:\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. 
If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\".\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#tcpip_socket = false\n#max_connections = 100\nmax_connections = 600\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from shared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#port = 5432\nport = 5333\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults to any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\n#shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each\nshared_buffers = 1200 # min 16, at least max_connections*2, 8KB each\n#sort_mem = 1024 # min 64, size in KB\nsort_mem = 40960 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\nvacuum_mem = 81920 # min 1024, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_pages = 50000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\nfsync = false # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 
128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\n#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\n#log_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n \n#log_min_duration_statement = 0 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. Zero disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment setting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'C' # locale for system error message strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes 
each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\nAt 2:58 pm -0700 2003/10/18, Josh Berkus wrote:\n>Gan,\n>\n>> Is the index bloating issue still not resolved in 7.4beta4 ?\n>\n>No, it should be. Please post your max_fsm_pages setting, and the output of a\n>sample VACUUM VERBOSE ANALYZE. You probably don't have your FSM set right.\n>\n>\n>--\n>-Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Sat, 18 Oct 2003 20:52:32 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> Sample verbose analyze:\n\n> VACUUM VERBOSE ANALYZE hello_rda_or_key;\n> INFO: vacuuming \"craft.hello_rda_or_key\"\n> INFO: index \"hello242_1105\" now contains 740813 row versions in 2477 pages\n\nSo what's the problem? That doesn't seem like a particularly bloated\nindex. You didn't say what datatype the index is on, but making the\nmost optimistic assumptions, index entries must use at least 16 bytes\neach. You're getting about 300 entries per page, compared to the\ntheoretical limit of 512 ... actually more, since I'm not allowing for\nupper btree levels in this calculation ... which says to me that the\npage loading is right around the expected btree loading of 2/3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Oct 2003 22:21:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? 
" }, { "msg_contents": "Hi Tom,\n\nI did that when I have stopped my updates.\n\nNow, I am doing updates below is the output of vacuum.\nAfter doing the vacuum verbose analyze, it reported the following :\n\nINFO: vacuuming \"craft.dsperf_rda_or_key\"\nINFO: index \"hello242_1105\" now contains 1792276 row versions in 6237 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.61s/0.36u sec elapsed 17.92 sec.\nINFO: \"hello_rda_or_key\": found 0 removable, 1791736 nonremovable \nrow versions in 30892 pages\nDETAIL: 1492218 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 1.95s/1.99u sec elapsed 26.95 sec.\nINFO: analyzing \"craft.dsperf_rda_or_key\"\nINFO: \"hello_rda_or_key\": 30909 pages, 3000 rows sampled, 93292 \nestimated total rows\nVACUUM\n\nGan\n\nAt 10:21 pm -0400 2003/10/18, Tom Lane wrote:\n>Seum-Lim Gan <[email protected]> writes:\n>> Sample verbose analyze:\n>\n>> VACUUM VERBOSE ANALYZE hello_rda_or_key;\n>> INFO: vacuuming \"craft.hello_rda_or_key\"\n>> INFO: index \"hello242_1105\" now contains 740813 row versions in 2477 pages\n>\n>So what's the problem? That doesn't seem like a particularly bloated\n>index. You didn't say what datatype the index is on, but making the\n>most optimistic assumptions, index entries must use at least 16 bytes\n>each. You're getting about 300 entries per page, compared to the\n>theoretical limit of 512 ... actually more, since I'm not allowing for\n>upper btree levels in this calculation ... which says to me that the\n>page loading is right around the expected btree loading of 2/3.\n>\n>\t\t\tregards, tom lane\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Sun, 19 Oct 2003 00:11:13 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> INFO: vacuuming \"craft.dsperf_rda_or_key\"\n> INFO: index \"hello242_1105\" now contains 1792276 row versions in 6237 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.61s/0.36u sec elapsed 17.92 sec.\n> INFO: \"hello_rda_or_key\": found 0 removable, 1791736 nonremovable \n> row versions in 30892 pages\n> DETAIL: 1492218 dead row versions cannot be removed yet.\n\nYou still haven't got an index-bloat problem. I am, however, starting\nto wonder why you have so many dead-but-unremovable rows. I think you\nmust have some client process that's been holding an open transaction\nfor a long time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Oct 2003 01:48:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? " }, { "msg_contents": "Hi Tom,\n\nThanks for info. I stoped the update and removed the process that's doing\nthe update and did vacuum analyze. 
This time the result says\nthe index row has been removed :\n\nvacuum verbose analyze dsperf_rda_or_key;\nINFO: vacuuming \"scncraft.dsperf_rda_or_key\"\nINFO: index \"dsperf242_1105\" now contains 300000 row versions in 12387 pages\nDETAIL: 3097702 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.86s/25.49u sec elapsed 54.16 sec.\nINFO: \"dsperf_rda_or_key\": removed 3097702 row versions in 53726 pages\nDETAIL: CPU 6.29s/26.05u sec elapsed 78.23 sec.\nINFO: \"dsperf_rda_or_key\": found 3097702 removable, 300000 \nnonremovable row versions in 58586 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 5 unused item pointers.\n0 pages are entirely empty.\nCPU 10.23s/53.79u sec elapsed 135.78 sec.\nINFO: analyzing \"scncraft.dsperf_rda_or_key\"\nINFO: \"dsperf_rda_or_key\": 58586 pages, 3000 rows sampled, 176830 \nestimated total rows\nVACUUM\n\nHowever, when I check the disk space usage, it has not changed.\nBefore and after the vacuum, it stayed the same :\n\n/pg 822192 21% Sun Oct 19 09:34:25 CDT 2003\ntable /pg/data/base/17139/34048 Size=479936512 (relfilenode for table)\nindex /pg/data/base/17139/336727 Size=101474304 (relfilenode for index)\n\nAny idea here ?\n\nAnother question, if we have a process that has different threads trying\nto update PostgreSQL, is this going to post a problem if we do not have\nthe thread-safety option during configure ?\n\nThanks.\n\nGan\n\nAt 1:48 am -0400 2003/10/19, Tom Lane wrote:\n>Seum-Lim Gan <[email protected]> writes:\n>> INFO: vacuuming \"craft.dsperf_rda_or_key\"\n>> INFO: index \"hello242_1105\" now contains 1792276 row versions in 6237 pages\n>> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.61s/0.36u sec elapsed 17.92 sec.\n>> INFO: \"hello_rda_or_key\": found 0 removable, 1791736 nonremovable\n>> row versions in 30892 pages\n>> DETAIL: 1492218 dead row versions cannot be removed yet.\n>\n>You still haven't got an index-bloat problem. I am, however, starting\n>to wonder why you have so many dead-but-unremovable rows. I think you\n>must have some client process that's been holding an open transaction\n>for a long time.\n>\n>\t\t\tregards, tom lane\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Sun, 19 Oct 2003 09:46:08 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> vacuum verbose analyze dsperf_rda_or_key;\n> INFO: vacuuming \"scncraft.dsperf_rda_or_key\"\n> INFO: index \"dsperf242_1105\" now contains 300000 row versions in 12387 pages\n> DETAIL: 3097702 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n\nHm, interesting that you deleted 90% of the entries and still had no\nempty index pages at all. What was the pattern of your deletes and/or\nupdates with respect to this index's key?\n\n> However, when I check the disk space usage, it has not changed.\n\nIt won't in any case. Plain VACUUM is designed for maintaining a\nsteady-state level of free space in tables and indexes, not for\nreturning major amounts of space to the OS. 
For that you need\nmore-invasive operations like VACUUM FULL or REINDEX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Oct 2003 11:47:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? " }, { "msg_contents": "Hi Tom,\n\nThe key is a range from 1 to 30000 and picked randomly.\n\nOh, so in order to reclaim the disk space, we must run\nreindex or vacuum full ?\nThis will lock out the table and we won't be able to do anything.\nLooks like this is a problem. It means we cannot use it for\n24x7 operations without having to stop the process and do the vacuum full\nand reindex. Is there anything down the road that these operations\nwill not lock out the table ?\n\nI let the process ran overnight. The last email I sent you with\nthe vacuum analyze output just about an hour ago, that was after\nI removed the process that does the updates.\n\nHowever, I search through all the vacuum I did just before I\nwent to bed and found that earlier vacuum did say 5 indexes deleted and\n5 reusable. It has been pretty constant for about 1 to 2 hours and\nthen down to zero and has been like this since.\n\nSun Oct 19 00:50:07 CDT 2003\nINFO: vacuuming \"scncraft.dsperf_rda_or_key\"\nINFO: index \"dsperf242_1105\" now contains 402335 row versions in 7111 pages\nDETAIL: 5 index pages have been deleted, 5 are currently reusable.\nCPU 1.32s/0.17u sec elapsed 22.44 sec.\nINFO: \"dsperf_rda_or_key\": found 0 removable, 401804 nonremovable \nrow versions in 35315 pages\nDETAIL: 101802 dead row versions cannot be removed yet.\nThere were 1646275 unused item pointers.\n0 pages are entirely empty.\nCPU 2.38s/0.71u sec elapsed 27.09 sec.\nINFO: analyzing \"scncraft.dsperf_rda_or_key\"\nINFO: \"dsperf_rda_or_key\": 35315 pages, 3000 rows sampled, 156124 \nestimated total rows\nVACUUM\nSleep 60 seconds\n\nSun Oct 19 00:51:40 CDT 2003\nINFO: vacuuming \"scncraft.dsperf_rda_or_key\"\nINFO: index \"dsperf242_1105\" now contains 411612 row versions in 7111 pages\nDETAIL: 5 index pages have been deleted, 5 are currently reusable.\nCPU 1.28s/0.22u sec elapsed 23.38 sec.\nINFO: \"dsperf_rda_or_key\": found 0 removable, 410889 nonremovable \nrow versions in 35315 pages\nDETAIL: 110900 dead row versions cannot be removed yet.\nThere were 1637190 unused item pointers.\n0 pages are entirely empty.\nCPU 2.13s/0.92u sec elapsed 27.13 sec.\nINFO: analyzing \"scncraft.dsperf_rda_or_key\"\nINFO: \"dsperf_rda_or_key\": 35315 pages, 3000 rows sampled, 123164 \nestimated total rows\nVACUUM\nSleep 60 seconds\n.\n.\n.\nSun Oct 19 02:14:41 CDT 2003\nINFO: vacuuming \"scncraft.dsperf_rda_or_key\"\nINFO: index \"dsperf242_1105\" now contains 1053582 row versions in 7112 pages\nDETAIL: 5 index pages have been deleted, 5 are currently reusable.\nCPU 0.58s/0.29u sec elapsed 21.63 sec.\nINFO: \"dsperf_rda_or_key\": found 0 removable, 1053103 nonremovable \nrow versions in 35315 pages\nDETAIL: 753064 dead row versions cannot be removed yet.\nThere were 995103 unused item pointers.\n0 pages are entirely empty.\nCPU 1.54s/1.35u sec elapsed 26.17 sec.\nINFO: analyzing \"scncraft.dsperf_rda_or_key\"\nINFO: \"dsperf_rda_or_key\": 35315 pages, 3000 rows sampled, 106627 \nestimated total rows\nVACUUM\nSleep 60 seconds\n\nSun Oct 19 02:16:16 CDT 2003\nINFO: vacuuming \"scncraft.dsperf_rda_or_key\"\nINFO: index \"dsperf242_1105\" now contains 1065887 row versions in 7119 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.71s/0.36u sec elapsed 21.12 
sec.\nINFO: \"dsperf_rda_or_key\": found 0 removable, 1065357 nonremovable \nrow versions in 35315 pages\nDETAIL: 765328 dead row versions cannot be removed yet.\nThere were 982849 unused item pointers.\n0 pages are entirely empty.\nCPU 1.70s/1.42u sec elapsed 26.65 sec.\nINFO: analyzing \"scncraft.dsperf_rda_or_key\"\nINFO: \"dsperf_rda_or_key\": 35315 pages, 3000 rows sampled, 106627 \nestimated total rows\nVACUUM\nSleep 60 seconds\n.\n.\n.\n\nThanks.\nGan\n\n\nAt 11:47 am -0400 2003/10/19, Tom Lane wrote:\n>Seum-Lim Gan <[email protected]> writes:\n>> vacuum verbose analyze dsperf_rda_or_key;\n>> INFO: vacuuming \"scncraft.dsperf_rda_or_key\"\n>> INFO: index \"dsperf242_1105\" now contains 300000 row versions in \n>>12387 pages\n>> DETAIL: 3097702 index row versions were removed.\n> > 0 index pages have been deleted, 0 are currently reusable.\n>\n>Hm, interesting that you deleted 90% of the entries and still had no\n>empty index pages at all. What was the pattern of your deletes and/or\n>updates with respect to this index's key?\n>\n>> However, when I check the disk space usage, it has not changed.\n>\n>It won't in any case. Plain VACUUM is designed for maintaining a\n>steady-state level of free space in tables and indexes, not for\n>returning major amounts of space to the OS. For that you need\n>more-invasive operations like VACUUM FULL or REINDEX.\n>\n>\t\t\tregards, tom lane\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Sun, 19 Oct 2003 11:55:57 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Gan,\n\n> Oh, so in order to reclaim the disk space, we must run\n> reindex or vacuum full ?\n> This will lock out the table and we won't be able to do anything.\n> Looks like this is a problem. It means we cannot use it for\n> 24x7 operations without having to stop the process and do the vacuum full\n> and reindex. Is there anything down the road that these operations\n> will not lock out the table ?\n\nI doubt it; the amount of page-shuffling required to reclaim 90% of the space \nin an index for a table that has been mostly cleared is substantial, and \nwould prevent concurrent access.\n\nAlso, you seem to have set up an impossible situation for VACUUM. If I'm \nreading your statistics right, you have a large number of threads accessing \nmost of the data 100% of the time, preventing VACUUM from cleaning up the \npages. This is not, in my experience, a realistic test case ... there are \npeak and idle periods for all databases, even webservers that have been \nslashdotted. \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 19 Oct 2003 12:04:23 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Hi all,\n\nDoes anyone have any experience with putting PostgreSQL data on a NAS\ndevice?\n\nI am asking this because a NAS device is much cheaper to set up than a\ncouple of SCSI disks. I would like to use a relatively cheap NAS device\nwhich uses four IDE drives (7.200 rpm), like the Dell PowerVault 725N. 
The\ndisks themselves would be much slower than SCSI disks, I know, but a NAS\ndevice can be equipped with 3 Gb of memory, so this would make a very large\ndisk cache, right? If this NAS would be dedicated only to PostgreSQL, would\nthis be slower/faster than a SCSI RAID-10 setup of 6 disks? It would be much\ncheaper...\n\nAny advice on this would be appreciated :)\n\nKind regards,\nAlexander Priem.\n\n", "msg_date": "Mon, 20 Oct 2003 09:12:35 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL data on a NAS device ?" }, { "msg_contents": "On Mon, 20 Oct 2003 09:12:35 +0200\n\"Alexander Priem\" <[email protected]> wrote:\n\n> I am asking this because a NAS device is much cheaper to set up than a\n> couple of SCSI disks. I would like to use a relatively cheap NAS\n> device which uses four IDE drives (7.200 rpm), like the Dell\n> PowerVault 725N. The disks themselves would be much slower than SCSI\n> disks, I know, but a NAS device can be equipped with 3 Gb of memory,\n> so this would make a very large disk cache, right? If this NAS would\n> be dedicated only to PostgreSQL, would this be slower/faster than a\n> SCSI RAID-10 setup of 6 disks? It would be much cheaper...\n> \n\nThe big concern would be the network connection, unless you are going\nfiber. You need to use _AT LEAST_ gigabit. _at least_. If you do\ngo that route it'd be interesting to see bonnie results. And the\nother thing - remember that just because you are running NAS doesn't\nmean you can attach another machine running postgres and have a\ncluster. (See archives for more info about this). \n\nI suppose it all boils down to your budget (I usually get to work with\na budget of $0). And I mentioned this in another post- If you don't mind\nrefurb disks(or slightly used) check out ebay - you can get scsi disks\nby the truckload for cheap. \n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Mon, 20 Oct 2003 08:20:30 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Thanks for your reply, Jeff.\n\nIf we are going to use a NAS device for storage, then it will be attached\nthrough a gigabit ethernet connection. Fiber will not be an option, since\nthat would negate the savings we can make by using an IDE NAS device instead\nof SCSI-RAID, fiber's pretty expensive, right?\n\nUsing a NAS device (that is used only by PostgreSQL, so it's dedicated) with\n3Gb of RAM and four 7200 rpm IDE harddisks, connected using a gigabit\nethernet connection to the PostgreSQL server, do you think it will be a\nmatch for a SCSI-RAID config using 4 or 6 15000rpm disks (RAID-10) through a\nSCSI-RAID controller having 128mb of writeback cache (battery-backed)?\n\nThe SCSI-RAID config would be a lot more expensive. I can't purchase both\nconfigs and test which one wil be faster, but if the NAS solution would be\n(almost) as fast as the SCSI-RAID solution, it would be cheaper and easier\nto maintain...\n\nAbout clustering: I know this can't be done by hooking multiple postmasters\nto one and the same NAS. 
This would result in data corruption, i've read...\n\nKind regards,\nAlexander.\n\n\n----- Original Message -----\nFrom: \"Jeff\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, October 20, 2003 2:20 PM\nSubject: Re: [PERFORM] PostgreSQL data on a NAS device ?\n\n\n> On Mon, 20 Oct 2003 09:12:35 +0200\n> \"Alexander Priem\" <[email protected]> wrote:\n>\n> > I am asking this because a NAS device is much cheaper to set up than a\n> > couple of SCSI disks. I would like to use a relatively cheap NAS\n> > device which uses four IDE drives (7.200 rpm), like the Dell\n> > PowerVault 725N. The disks themselves would be much slower than SCSI\n> > disks, I know, but a NAS device can be equipped with 3 Gb of memory,\n> > so this would make a very large disk cache, right? If this NAS would\n> > be dedicated only to PostgreSQL, would this be slower/faster than a\n> > SCSI RAID-10 setup of 6 disks? It would be much cheaper...\n> >\n>\n> The big concern would be the network connection, unless you are going\n> fiber. You need to use _AT LEAST_ gigabit. _at least_. If you do\n> go that route it'd be interesting to see bonnie results. And the\n> other thing - remember that just because you are running NAS doesn't\n> mean you can attach another machine running postgres and have a\n> cluster. (See archives for more info about this).\n>\n> I suppose it all boils down to your budget (I usually get to work with\n> a budget of $0). And I mentioned this in another post- If you don't mind\n> refurb disks(or slightly used) check out ebay - you can get scsi disks\n> by the truckload for cheap.\n>\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n\n", "msg_date": "Mon, 20 Oct 2003 14:29:32 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Alexander Priem wrote:\n> About clustering: I know this can't be done by hooking multiple postmasters\n> to one and the same NAS. This would result in data corruption, i've read...\n\nOnly if they are reading same data directory. You can run 4 different data \ninstallations of postgresql, each one in its own directory and still put them on \nsame device.\n\n Shridhar\n\n", "msg_date": "Mon, 20 Oct 2003 18:00:41 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Even better than the four-disk NAS I mentioned earlier is the following:\n\nPromise UltraTrak RM8000. This is a so-called SCSI-to-IDE RAID system.\nBasically it's a RAID setup of eight IDE disks, using a hardware RAID\nengine, that's connected to (in this case) the PostgreSQL server via a SCSI\nUltra160 interface (!). So the server won't know any better than that\nthere's a SCSI disk attached, but in reality it's a IDE RAID setup. It\nsupports RAID levels 0, 1, 0+1, 5, 50 and JBOD and supports hot-swapping.\n\nSuch a NAS config would cost around EUR 3700 (ex. VAT), using 8x40 Gb IDE\ndisks (7200rpm).\n\nA SCSI RAID-10 setup using 6x18Gb (15000rpm) disks would cost around EUR\n6000 (ex. VAT) so it's a big difference...\n\nDoes anyone have experience with this NAS device or other \"SCSI-to-IDE\" RAID\nsystems? 
Are they OK in terms of performance and reliability?\n\nKind regards,\nAlexander.\n\n", "msg_date": "Mon, 20 Oct 2003 15:04:32 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Alexander Priem kirjutas E, 20.10.2003 kell 15:29:\n> Thanks for your reply, Jeff.\n> \n> If we are going to use a NAS device for storage, then it will be attached\n> through a gigabit ethernet connection. Fiber will not be an option, since\n> that would negate the savings we can make by using an IDE NAS device instead\n> of SCSI-RAID, fiber's pretty expensive, right?\n> \n> Using a NAS device (that is used only by PostgreSQL, so it's dedicated) with\n> 3Gb of RAM and four 7200 rpm IDE harddisks, connected using a gigabit\n> ethernet connection to the PostgreSQL server, do you think it will be a\n> match for a SCSI-RAID config using 4 or 6 15000rpm disks (RAID-10) through a\n> SCSI-RAID controller having 128mb of writeback cache (battery-backed)?\n\nI sincerely don't know.\n\nBut if NAS is something that involves TCP (like iSCSI) then you should\ntake a look at some network card and TCP/IP stack that offloads the\nprotocol processing to the coprocessor on network card. (or just have\nsome extra processors free to do the protocol processing )\n\n---------------\nHannu\n\n", "msg_date": "Mon, 20 Oct 2003 16:08:23 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Alexander Priem kirjutas E, 20.10.2003 kell 16:04:\n> Even better than the four-disk NAS I mentioned earlier is the following:\n> \n> Promise UltraTrak RM8000. This is a so-called SCSI-to-IDE RAID system.\n\nWhile you are at it, you could also check out http://www.3ware.com/\n\nI guess one of these with 10000 rpm 36GB SATA drivest would be pretty\nfast and possibly cheaper than SCSI raid.\n\n--------------\nHannu\n", "msg_date": "Mon, 20 Oct 2003 16:19:07 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Hello Alexander,\n\nOn Mon, 2003-10-20 at 06:04, Alexander Priem wrote:\n> Even better than the four-disk NAS I mentioned earlier is the following:\n> \n> Promise UltraTrak RM8000. This is a so-called SCSI-to-IDE RAID system.\n> Basically it's a RAID setup of eight IDE disks, using a hardware RAID\n> engine, that's connected to (in this case) the PostgreSQL server via a SCSI\n> Ultra160 interface (!). So the server won't know any better than that\n> there's a SCSI disk attached, but in reality it's a IDE RAID setup. It\n> supports RAID levels 0, 1, 0+1, 5, 50 and JBOD and supports hot-swapping.\n\nWe have a Promise FasTrak 4000 in our development server connected to\n120 Gig western digital 8mb cache drives. Basically the fastest drives\nwe could get for an ide configuration. This system works well, however\nthere are a few things you need to consider. The biggest is that you\nhave very limited control over your devices with the Promise\ncontrollers. The bios of the raid controller doesn't have many options\non it. You basically plug everything together, and just hope it works.\n\nIt usually does, but there have been times in the past that really gave\nus a scare. And we had a situation that in a hard poweroff ( UPS died )\nwe suffered complete corruptions of 2 of our 4 drives. 
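One thing worth adding on the PostgreSQL side: whatever the disk hardware,
surviving a pulled power cord also depends on fsync being left on -- and even
then an IDE write cache that lies about flushes can defeat it. Just as a
reminder (these are the stock settings, nothing exotic), the relevant
postgresql.conf lines look like:

    fsync = true              # flush WAL to disk at commit; needed for crash safety
    #wal_sync_method = fsync  # the platform default is normally fine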
\n\nPerformance wise it is =okay= but definitely not on par with either our\nMegaraid elite 1650 controller or a solution I'm going to suggest to you\nlater in this mail. Your biggest hit is going to be multiple\nsimultaneous accesses. The controller and drives just can't keep up to\nit.\n\nRealistically with my experiences I cannot recommend this solution for a\nproduction machine, even with the budget constraints you have put forth.\n\n> \n> Such a NAS config would cost around EUR 3700 (ex. VAT), using 8x40 Gb IDE\n> disks (7200rpm).\n> \n> A SCSI RAID-10 setup using 6x18Gb (15000rpm) disks would cost around EUR\n> 6000 (ex. VAT) so it's a big difference...\n\nI'm not sure where you have your figures, but I would like to propose\nthe following solution for you.\n\nfor your boot device use either a single ide drive and keep an exact\nduplicate of the drive in the event of a drive failure, or use 2 drives\nand use software raid to mirror the two. In this manner you can spend\napprox $100 USD for each drive and no additional cost for your\ncontroller as you will use the motherboards IDE controller.\n\nFor your postgresql partition or even /var use software raid on an\nadaptec 29320-R SCSI controller. (\nhttp://www.adaptec.com/worldwide/product/proddetail.html?sess=no&language=English+US&prodkey=ASC-39320-R&cat=%2fTechnology%2fSCSI%2fUltra320+SCSI ) cost: $399 USD IF you bought it from adaptec\n\nMatch this with 6 Seagate 10k 36G Cheetah U320 scsi drives: \n( http://www.c-source.com/csource/newsite/ttechnote.asp?part_no=207024 )\nfor a cost of $189 USD per drive. If you have 6 of them it brings the\ntotal price for your drives to $1134 USD.\n\nTotal cost for this would be approx $1633 before shipping costs. We use\nthis configuration in our two file servers and have nothing but positive\nresults. If you are totally unable to use software raid you could still\nbuy 6 of those drives, and spend approx $900 USD on an LSI Megaraid 1650\ncontroller.\n\nI really believe you'll find either of those options to be superior in\nterms of price for you.\n\nSincerely,\n\nWill LaShell\n\n\n \n> Does anyone have experience with this NAS device or other \"SCSI-to-IDE\" RAID\n> systems? Are they OK in terms of performance and reliability?\n\n> Kind regards,\n> Alexander.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings", "msg_date": "20 Oct 2003 08:29:32 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Hi Josh, Tom,\n\nOK. As I understand it, vacuum does not release the space\nused by the index file.\nHowever, it should be able to reuse the space for indexing.\n\nI have observed that during initial updates of the table,\nthe index file did not grow and was steady but it did not last long\nand keeps growing afterwards. Vacuum/vacuum analyze did not help.\n\nIn all the update testing, vacuum analyze was done every 1 minute.\n\nTom, something caught your attention the last time.\n\nAny insight so far ? 
Is it a bug ?\n\nThanks.\n\nGan\n\nTom Lane wrote:\n\nSeum-Lim Gan <[email protected]> writes:\n> vacuum verbose analyze dsperf_rda_or_key;\n> INFO: vacuuming \"scncraft.dsperf_rda_or_key\"\n> INFO: index \"dsperf242_1105\" now contains 300000 row versions in 12387 pages\n> DETAIL: 3097702 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n\nHm, interesting that you deleted 90% of the entries and still had no\nempty index pages at all. What was the pattern of your deletes and/or\nupdates with respect to this index's key?\n\n> However, when I check the disk space usage, it has not changed.\n\nIt won't in any case. Plain VACUUM is designed for maintaining a\nsteady-state level of free space in tables and indexes, not for\nreturning major amounts of space to the OS. For that you need\nmore-invasive operations like VACUUM FULL or REINDEX.\n\n\t\t\tregards, tom lane\n\nAt 12:04 pm -0700 2003/10/19, Josh Berkus wrote:\n>Gan,\n>\n>> Oh, so in order to reclaim the disk space, we must run\n>> reindex or vacuum full ?\n>> This will lock out the table and we won't be able to do anything.\n>> Looks like this is a problem. It means we cannot use it for\n>> 24x7 operations without having to stop the process and do the vacuum full\n>> and reindex. Is there anything down the road that these operations\n>> will not lock out the table ?\n>\n>I doubt it; the amount of page-shuffling required to reclaim 90% of the space\n>in an index for a table that has been mostly cleared is substantial, and\n>would prevent concurrent access.\n>\n>Also, you seem to have set up an impossible situation for VACUUM. If I'm\n>reading your statistics right, you have a large number of threads accessing\n>most of the data 100% of the time, preventing VACUUM from cleaning up the\n>pages. This is not, in my experience, a realistic test case ... there are\n>peak and idle periods for all databases, even webservers that have been\n>slashdotted.\n>\n>--\n>Josh Berkus\n>Aglio Database Solutions\n>San Francisco\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Mon, 20 Oct 2003 11:04:43 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "On Mon, 20 Oct 2003, Alexander Priem wrote:\n\n> Hi all,\n> \n> Does anyone have any experience with putting PostgreSQL data on a NAS\n> device?\n> \n> I am asking this because a NAS device is much cheaper to set up than a\n> couple of SCSI disks. I would like to use a relatively cheap NAS device\n> which uses four IDE drives (7.200 rpm), like the Dell PowerVault 725N. The\n> disks themselves would be much slower than SCSI disks, I know, but a NAS\n> device can be equipped with 3 Gb of memory, so this would make a very large\n> disk cache, right? If this NAS would be dedicated only to PostgreSQL, would\n> this be slower/faster than a SCSI RAID-10 setup of 6 disks? 
It would be much\n> cheaper...\n> \n> Any advice on this would be appreciated :)\n\nHow important is this data?\n\nWith a local SCSI RAID controller and SCSI drives, you can pull the power \ncord out the back of the machine during 1000 transactions, and your \ndatabase will come back up in a coherent state.\n\nIf you need that kind of reliability, then you'll likely want to use \nlocal SCSI drives.\n\nNote that you should test your setup to be sure, i.e. pull the network \ncord and see how the machine recovers (if the machine recovers).\n\nRunning storage on a NAS is a bit of a tightrope act with your data, as is \nusing IDE drives with write cache enabled. But depending on your \napplication, using NAS may be a good solution. So, what's this database \ngonna be used for?\n\n", "msg_date": "Mon, 20 Oct 2003 11:20:46 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Hi Tom, Josh,\n\nWe tried one more thing: with the table not being updated\nat all and we did vacuum. Each time a vacuum is done,\nthe index file becomes bigger.\n\nThis is probably what is contributing to the index file\ngrowing as well.\n\nThanks.\n\nGan\n\nAt 11:04 am -0500 2003/10/20, Seum-Lim Gan wrote:\n>Hi Josh, Tom,\n>\n>OK. As I understand it, vacuum does not release the space\n>used by the index file.\n>However, it should be able to reuse the space for indexing.\n>\n>I have observed that during initial updates of the table,\n>the index file did not grow and was steady but it did not last long\n>and keeps growing afterwards. Vacuum/vacuum analyze did not help.\n>\n>In all the update testing, vacuum analyze was done every 1 minute.\n>\n>Tom, something caught your attention the last time.\n>\n>Any insight so far ? Is it a bug ?\n>\n>Thanks.\n>\n>Gan\n>\n>Tom Lane wrote:\n>\n>Seum-Lim Gan <[email protected]> writes:\n>> vacuum verbose analyze dsperf_rda_or_key;\n>> INFO: vacuuming \"scncraft.dsperf_rda_or_key\"\n>> INFO: index \"dsperf242_1105\" now contains 300000 row versions in \n>>12387 pages\n>> DETAIL: 3097702 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>\n>Hm, interesting that you deleted 90% of the entries and still had no\n>empty index pages at all. What was the pattern of your deletes and/or\n>updates with respect to this index's key?\n>\n>> However, when I check the disk space usage, it has not changed.\n>\n>It won't in any case. Plain VACUUM is designed for maintaining a\n>steady-state level of free space in tables and indexes, not for\n>returning major amounts of space to the OS. For that you need\n>more-invasive operations like VACUUM FULL or REINDEX.\n>\n>\t\t\tregards, tom lane\n>\n>At 12:04 pm -0700 2003/10/19, Josh Berkus wrote:\n>>Gan,\n>>\n>>> Oh, so in order to reclaim the disk space, we must run\n>>> reindex or vacuum full ?\n>>> This will lock out the table and we won't be able to do anything.\n>>> Looks like this is a problem. It means we cannot use it for\n>>> 24x7 operations without having to stop the process and do the vacuum full\n>>> and reindex. Is there anything down the road that these operations\n>>> will not lock out the table ?\n>>\n>>I doubt it; the amount of page-shuffling required to reclaim 90% of the space\n>>in an index for a table that has been mostly cleared is substantial, and\n>>would prevent concurrent access.\n>>\n>>Also, you seem to have set up an impossible situation for VACUUM. 
If I'm\n>>reading your statistics right, you have a large number of threads accessing\n>>most of the data 100% of the time, preventing VACUUM from cleaning up the\n>>pages. This is not, in my experience, a realistic test case ... there are\n>>peak and idle periods for all databases, even webservers that have been\n>>slashdotted.\n>>\n>>--\n>>Josh Berkus\n>>Aglio Database Solutions\n>>San Francisco\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n>--\n>+--------------------------------------------------------+\n>| Seum-Lim GAN email : [email protected] |\n>| Lucent Technologies |\n>| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n>| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n>| web : http://inuweb.ih.lucent.com/~slgan |\n>+--------------------------------------------------------+\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Mon, 20 Oct 2003 16:14:09 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> We tried one more thing: with the table not being updated\n> at all and we did vacuum. Each time a vacuum is done,\n> the index file becomes bigger.\n\nIt is not possible for plain vacuum to make the index bigger.\n\nVACUUM FULL possibly could make the index bigger, since it has to\ntransiently create duplicate index entries for every row it moves.\n\nIf you want any really useful comments on your situation, you're going\nto have to offer considerably more detail than you have done so far ---\npreferably, a test case that lets someone else reproduce your results.\nSo far, all we can do is guess on the basis of very incomplete\ninformation. When you aren't even bothering to mention whether a vacuum\nis FULL or not, I have to wonder whether I have any realistic picture of\nwhat's going on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Oct 2003 17:25:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? " }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> [ successive outputs from VACUUM ANALYZE ]\n\nFWIW, I don't think your problem is really index bloat at all, it's\nmore like too-many-dead-rows bloat. Note that the number of \"dead row\nversions\" is climbing steadily from run to run:\n\n> DETAIL: 101802 dead row versions cannot be removed yet.\n\n> DETAIL: 110900 dead row versions cannot be removed yet.\n\n> DETAIL: 753064 dead row versions cannot be removed yet.\n\n> DETAIL: 765328 dead row versions cannot be removed yet.\n\nIt's hardly the index's fault that it's growing, when it has to keep\ntrack of an ever-increasing number of rows.\n\nThe real question is what you're doing that requires the system to keep\nhold of these dead rows instead of recycling them. 
I suspect you have\na client process somewhere that is holding an open transaction for a\nlong time ... probably not doing anything, just sitting there with an\nunclosed BEGIN ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Oct 2003 17:42:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? " }, { "msg_contents": "On Mon, Oct 20, 2003 at 05:42:52PM -0400, Tom Lane wrote:\n\n> hold of these dead rows instead of recycling them. I suspect you have\n> a client process somewhere that is holding an open transaction for a\n> long time ... probably not doing anything, just sitting there with an\n> unclosed BEGIN ...\n\nWhich could be because you're doing something nasty with one of the\n\"autocommit=off\" clients. Most of the client libraries implement\nthis by doing \"commit;begin;\" at every commit. This means you have\nway more idle in transaction connections than you think. Look in\npg_stat_activity, assuming you've turned on query echoing. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 21 Oct 2003 06:56:56 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Mon, Oct 20, 2003 at 05:42:52PM -0400, Tom Lane wrote:\n> \n> \n>>hold of these dead rows instead of recycling them. I suspect you have\n>>a client process somewhere that is holding an open transaction for a\n>>long time ... probably not doing anything, just sitting there with an\n>>unclosed BEGIN ...\n> \n> \n> Which could be because you're doing something nasty with one of the\n> \"autocommit=off\" clients. Most of the client libraries implement\n> this by doing \"commit;begin;\" at every commit. This means you have\n> way more idle in transaction connections than you think. Look in\n> pg_stat_activity, assuming you've turned on query echoing. \n\nOr is enough do a ps -eafwww | grep post\nto see the state of the connections\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Tue, 21 Oct 2003 14:12:34 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "The machine is going to be used for a pretty large database (well over 100\ntables with some of them containing over a million records from the start,\nnumber of tables and records will grow (much?) larger in the future). This\ndatabase is going to be used by a pretty large number of employees. The\nnumber of concurrent users will vary between 1 - 100 or so, depending on the\ntime of day etc. This will be a database containing client and supplier data\nas well as product descriptions and prices/ingredients/labels/brands etc.\nDatabase use will include lots of SELECTS but also lots of INSERTS/UPDATES,\ni.e. the database will be pretty active during bussiness hours...\n\nI think you (Scott and Will) are right when you say that NAS devices are not\nideal for this kind of thing. I have been thinking about the hardware\nconfiguration for this machine for some time now (and had a lot of hints\nthrough this list already) and decided to go for a SCSI RAID config after\nall. 
The extra costs will be worth it :)\n\nThe machine I have in mind now is like this :\n\nDell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.\nThis machine will contain a PERC4/Di RAID controller with 128MB of battery\nbacked cache memory. The O/S and logfiles will be placed on a RAID-1 setup\nof two 36Gb SCSI-U320 drives (15.000rpm). Database data will be placed on a\nDell PowerVault 220S rack-module containing six 36Gb SCSI-U320 drives\n(15.000rpm) in a RAID-10 setup. This PowerVault will be connected to the DB\nserver via a SCSI cable...\n\nThis machine will be a bit more expensive than I thought at first (it's\ngoing to be about EUR 14.000, but that's including 3 years of on-site\nsupport from Dell (24x7, 4-hour response) and peripherals like UPS etc...\n\nDo you think this machine wil be OK for this task?\n\nThanks for your help so far :)\n\nKind regards,\nAlexander Priem.\n\n", "msg_date": "Tue, 21 Oct 2003 14:48:06 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Alexander Priem wrote:\n> Dell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.\n> This machine will contain a PERC4/Di RAID controller with 128MB of battery\n> backed cache memory. The O/S and logfiles will be placed on a RAID-1 setup\n> of two 36Gb SCSI-U320 drives (15.000rpm). Database data will be placed on a\n> Dell PowerVault 220S rack-module containing six 36Gb SCSI-U320 drives\n> (15.000rpm) in a RAID-10 setup. This PowerVault will be connected to the DB\n> server via a SCSI cable...\n> This machine will be a bit more expensive than I thought at first (it's\n> going to be about EUR 14.000, but that's including 3 years of on-site\n> support from Dell (24x7, 4-hour response) and peripherals like UPS etc...\n\nCheck opteron as well.. I don't know much about european resellers. IBM sells \neserver 325 which has opterons. Apparently they scale much better at higher \nload. Of course pricing,availability and support are most important.\n\nhttp://theregister.co.uk/content/61/33378.html\nhttp://www.pc.ibm.com/us/eserver/opteron/325/\n\nAny concrete benchmarks for postgresql w.r.t xeons and opterons? A collection \nwould be nice to have..:-)\n\n Shridhar\n\n", "msg_date": "Tue, 21 Oct 2003 18:27:37 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "I have considered Opteron, yes. But I think there are too many\nuncertainties, like :\n\n* It's a new CPU that has not proven itself yet.\n* I don't think I can buy directly from IBM (according to their site), so\nhow about support (24x7) ? This will be very important to our client.\n* I need to install and configure a 64bit Linux flavour which I don't know\n(yet)\n\nAny suggestions about the usability of the system I described before?\n\nHere is the description again:\n\nDell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.\nThis machine will contain a PERC4/Di RAID controller with 128MB of battery\nbacked cache memory. The O/S and logfiles will be placed on a RAID-1 setup\nof two 36Gb SCSI-U320 drives (15.000rpm). Database data will be placed on a\nDell PowerVault 220S rack-module containing six 36Gb SCSI-U320 drives\n(15.000rpm) in a RAID-10 setup. This PowerVault will be connected to the DB\nserver via a SCSI cable...\n\nI have never worked with a XEON CPU before. 
Does anyone know how it performs\nrunning PostgreSQL 7.3.4 / 7.4 on RedHat 9 ? Is it faster than a Pentium 4?\nI believe the main difference is cache memory, right? Aside from cache mem,\nit's basically a Pentium 4, or am I wrong?\n\nKind regards,\nAlexander.\n\n", "msg_date": "Tue, 21 Oct 2003 15:33:47 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Alexander Priem wrote:\n> I have considered Opteron, yes. But I think there are too many\n> uncertainties, like :\n> \n> * It's a new CPU that has not proven itself yet.\n> * I don't think I can buy directly from IBM (according to their site), so\n> how about support (24x7) ? This will be very important to our client.\n> * I need to install and configure a 64bit Linux flavour which I don't know\n> (yet)\n\nSee http://www.monarchcomputer.com/ they custom build operton systems \nand preload them with Linux. You don't pay the Microsoft tax.\n\n-- \nUntil later, Geoffrey\[email protected]\n\n", "msg_date": "Tue, 21 Oct 2003 09:39:52 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "On Tue, 21 Oct 2003, Alexander Priem wrote:\n\n> The machine is going to be used for a pretty large database (well over 100\n> tables with some of them containing over a million records from the start,\n> number of tables and records will grow (much?) larger in the future). This\n> database is going to be used by a pretty large number of employees. The\n> number of concurrent users will vary between 1 - 100 or so, depending on the\n> time of day etc. This will be a database containing client and supplier data\n> as well as product descriptions and prices/ingredients/labels/brands etc.\n> Database use will include lots of SELECTS but also lots of INSERTS/UPDATES,\n> i.e. the database will be pretty active during bussiness hours...\n> \n> I think you (Scott and Will) are right when you say that NAS devices are not\n> ideal for this kind of thing. I have been thinking about the hardware\n> configuration for this machine for some time now (and had a lot of hints\n> through this list already) and decided to go for a SCSI RAID config after\n> all. The extra costs will be worth it :)\n> \n> The machine I have in mind now is like this :\n> \n> Dell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.\n> This machine will contain a PERC4/Di RAID controller with 128MB of battery\n> backed cache memory. The O/S and logfiles will be placed on a RAID-1 setup\n> of two 36Gb SCSI-U320 drives (15.000rpm). Database data will be placed on a\n> Dell PowerVault 220S rack-module containing six 36Gb SCSI-U320 drives\n> (15.000rpm) in a RAID-10 setup. This PowerVault will be connected to the DB\n> server via a SCSI cable...\n\nFunny, we're looking at the same basic type of system here, but with a \nPerc3/CI controller. We have a local supplier who gives us machines with \na 3 year warranty and looks to be $1,000 to $2,000 lower than the Dell.\n\nWe're just going to run two 73 Gig drives in a RAID1 to start with, with \nbattery backed RAM.\n\nSo that brings up my question, which is better, the Perc4 or Perc3 \ncontrollers, and what's the difference between them? 
I find Dell's \ntendency to hide other people's hardware behind their own model numbers \nmildly bothersome, as it makes it hard to comparison shop.\n\n", "msg_date": "Tue, 21 Oct 2003 09:40:49 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "> I have never worked with a XEON CPU before. Does anyone know how it performs\n> running PostgreSQL 7.3.4 / 7.4 on RedHat 9 ? Is it faster than a Pentium 4?\n> I believe the main difference is cache memory, right? Aside from cache mem,\n> it's basically a Pentium 4, or am I wrong?\n\nWell, see the problem is of course, there's so many flavors of P4s and \nXeons that it's hard to tell which is faster unless you specify the \nexact model. And even then, it would depend on the workload. Would a \nXeon/3GHz/2MB L3/400FSB be faster than a P4C/3GHz/800FSB? No idea as no \none has complete number breakdowns on these comparisons. Oh yeah, you \ncould get a big round number that says on SPEC or something one CPU is \nfaster than the other but whether that's faster for Postgres and your PG \napp is a totally different story.\n\nThat in mind, I wouldn't worry about it. The CPU is probably plenty fast \nfor what you need to do. I'd look into two things in the server: memory \nand CPU expandability. I know you already plan on 4GB but you may need \neven more in the future. Few things can dramatically improve performance \nmore than moving disk access to disk cache. And if there's a 2nd socket \nwhere you can pop another CPU in, that would leave you extra room if \nyour server becomes CPU limited.\n\n", "msg_date": "Tue, 21 Oct 2003 09:12:14 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Hi Tom,\n\n1.)\nOK. We have narrowed it down.\n\nWe did a few (like 5 to 8 times) vacuum analyze <tablename> (no full), the\npg_statistics relfilenode grew. There was no database operation when\nwe did this, no other client connections except the one that does\nthe vacuum.\n\nIf we do plain simple \"vacuum <tablename>\" (again no full), we see\npg_statistics_relid_att_index relfilenode grew instead of\npg_statistics.\n\nSo, overtime, these files will grow if we do vacuum.\n\nAre these expected ?\n\nThe question now is, if we are not doing anything\nto the database, why would they grow after a few vacuums ?\n\n2.)\nThe other problem we have with\n> DETAIL: 101802 dead row versions cannot be removed yet.\n\n> DETAIL: 110900 dead row versions cannot be removed yet.\n\n> DETAIL: 753064 dead row versions cannot be removed yet.\n\n> DETAIL: 765328 dead row versions cannot be removed yet.\n\nWe will collect more data and see what we can get from the\nthe process. Offhand, the process is connecting to\nthe database through ODBC and we don't use any BEGIN in\nour updates, just doing plain UPDATE repeatedly\nwith different keys randomly.\nThe database is defaulted to autocommit=true in postgresql.conf.\n\nThanks.\n\nGan\n\nAt 5:25 pm -0400 2003/10/20, Tom Lane wrote:\n>Seum-Lim Gan <[email protected]> writes:\n>> We tried one more thing: with the table not being updated\n>> at all and we did vacuum. 
Each time a vacuum is done,\n>> the index file becomes bigger.\n>\n>It is not possible for plain vacuum to make the index bigger.\n>\n>VACUUM FULL possibly could make the index bigger, since it has to\n>transiently create duplicate index entries for every row it moves.\n>\n>If you want any really useful comments on your situation, you're going\n>to have to offer considerably more detail than you have done so far ---\n>preferably, a test case that lets someone else reproduce your results.\n>So far, all we can do is guess on the basis of very incomplete\n>information. When you aren't even bothering to mention whether a vacuum\n>is FULL or not, I have to wonder whether I have any realistic picture of\n>what's going on.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Tue, 21 Oct 2003 11:25:33 -0500", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index file bloating still in 7.4 ?" }, { "msg_contents": "Seum-Lim Gan <[email protected]> writes:\n> We did a few (like 5 to 8 times) vacuum analyze <tablename> (no full), the\n> pg_statistics relfilenode grew.\n\nWell, sure. ANALYZE puts new rows into pg_statistic, and obsoletes old\nones. You need to vacuum pg_statistic every so often (not to mention\nthe other system catalogs).\n\n> If we do plain simple \"vacuum <tablename>\" (again no full), we see\n> pg_statistics_relid_att_index relfilenode grew instead of\n> pg_statistics.\n\nDon't think I believe that. Plain vacuum won't touch pg_statistic\nat all (unless it's the target table of course). I'd expect ANALYZE\nto make both the stats table and its index grow, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Oct 2003 12:42:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index file bloating still in 7.4 ? " }, { "msg_contents": "On Tue, 2003-10-21 at 08:40, scott.marlowe wrote:\n<SNIP>\n> So that brings up my question, which is better, the Perc4 or Perc3 \n> controllers, and what's the difference between them? I find Dell's \n> tendency to hide other people's hardware behind their own model numbers \n> mildly bothersome, as it makes it hard to comparison shop.\n\nPerc4 has n LSI 1030 chip\nhttp://docs.us.dell.com/docs/storage/perc4di/en/ug/features.htm\n\n\nPerc3\ndepending on the model can be a couple of things but I think they are\nall U160 controllers and not U320\n\n<SNIP>\n\n\nWill", "msg_date": "21 Oct 2003 12:00:05 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL data on a NAS device ?" }, { "msg_contents": "On 21 Oct 2003, Will LaShell wrote:\n\n> On Tue, 2003-10-21 at 08:40, scott.marlowe wrote:\n> <SNIP>\n> > So that brings up my question, which is better, the Perc4 or Perc3 \n> > controllers, and what's the difference between them? 
I find Dell's \n> > tendency to hide other people's hardware behind their own model numbers \n> > mildly bothersome, as it makes it hard to comparison shop.\n> \n> Perc4 has n LSI 1030 chip\n> http://docs.us.dell.com/docs/storage/perc4di/en/ug/features.htm\n> \n> \n> Perc3\n> depending on the model can be a couple of things but I think they are\n> all U160 controllers and not U320\n\nThanks. I googled around and found this page:\n\nhttp://www.domsch.com/linux/\n\nWhich says what each model is. It looks like the \"RAID\" controller they \nwanna charge me for is about $500 or so, so I'm guessing it's the medium \nrange Elite 1600 type controller, i.e. U160, which is plenty for the \nmachine / drive number we'll be using. \n\nHas anyone played around with the latest ones to get a feel for them? I \nwant a battery backed controller that runs well under linux and also BSD \nthat isn't gonna break the bank. I'd heard bad stories about the \nperformance of the Adaptec RAID controllers, but it seems the newer ones \naren't bad from what I've found googling.\n\n", "msg_date": "Tue, 21 Oct 2003 14:36:01 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "RAID controllers etc... was: PostgreSQL data on a NAS device ?" }, { "msg_contents": "On Tue, 2003-10-21 at 13:36, scott.marlowe wrote:\n> On 21 Oct 2003, Will LaShell wrote:\n> \n> > On Tue, 2003-10-21 at 08:40, scott.marlowe wrote:\n> > <SNIP>\n> > > So that brings up my question, which is better, the Perc4 or Perc3 \n> > > controllers, and what's the difference between them? I find Dell's \n> > > tendency to hide other people's hardware behind their own model numbers \n> > > mildly bothersome, as it makes it hard to comparison shop.\n> > \n> > Perc4 has n LSI 1030 chip\n> > http://docs.us.dell.com/docs/storage/perc4di/en/ug/features.htm\n> > \n> > \n> > Perc3\n> > depending on the model can be a couple of things but I think they are\n> > all U160 controllers and not U320\n> \n> Thanks. I googled around and found this page:\n> \n> http://www.domsch.com/linux/\n> \n> Which says what each model is. It looks like the \"RAID\" controller they \n> wanna charge me for is about $500 or so, so I'm guessing it's the medium \n> range Elite 1600 type controller, i.e. U160, which is plenty for the \n> machine / drive number we'll be using. \n> \n> Has anyone played around with the latest ones to get a feel for them? I \n> want a battery backed controller that runs well under linux and also BSD \n> that isn't gonna break the bank. I'd heard bad stories about the \n> performance of the Adaptec RAID controllers, but it seems the newer ones \n> aren't bad from what I've found googling.\n\nWe own 2 Elite 1650 and we love them. It would be nice to have had\nU320 capable controllers but the cards are completely reliable. I\nrecommend the LSI controllers to everyone because I've never had a\nproblem with them.", "msg_date": "21 Oct 2003 14:27:56 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers etc... was: PostgreSQL data on a NAS device ?" }, { "msg_contents": "So I guess the PERC4/Di RAID controller is pretty good. 
It seems that\nRedHat9 supports it out-of-the-box (driver 1.18f), but I gather from the\nsites mentioned before that upgrading this driver to 1.18i would be\nbetter...\n\n", "msg_date": "Wed, 22 Oct 2003 10:13:35 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers etc... was: PostgreSQL data on a NAS device ?" }, { "msg_contents": "Heya\n\nOn Wed, 2003-10-22 at 01:13, Alexander Priem wrote:\n> So I guess the PERC4/Di RAID controller is pretty good. It seems that\n> RedHat9 supports it out-of-the-box (driver 1.18f), but I gather from the\n> sites mentioned before that upgrading this driver to 1.18i would be\n> better...\n\nActually upgrading to the Megaraid_2 driver would be even better. There\nare a -ton- of performance enhancements with it. Depending on your\nperformance needs and testing capabilities, I would highly recommend\ntrying it out.\n\nWill", "msg_date": "22 Oct 2003 08:28:48 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers etc... was: PostgreSQL data on a" }, { "msg_contents": "I have been searching (www.lsil.com) for this megaraid_2 driver you\nmentioned.\n\nWhat kind of MegaRaid card does the Perc4/Di match? Elite1600? Elite1650?\n\nI picked Elite1600 and the latest driver I found was version 2.05.00. Is\nthis one OK for RedHat 9? The README file present only mentions RedHat8...\n\nKind regards,\nAlexander.\n\n", "msg_date": "Thu, 23 Oct 2003 09:03:46 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers etc... was: PostgreSQL data on aNAS device ?" }, { "msg_contents": "Hi guys,\n\nThis basically continues the other thread about the PERC4 RAID controller,\nbut since it is a bit off-topic I thought to start another thread. Thanks\nfor all your help so far :)\n\nEarlier today I read about the newly released RedHat Enterprise Linux ES\nversion 3. This version should include out-of-the-box megaraid_2 drivers, so\nit would support the Dell PERC4/Di RAID controller.\n\nHowever, it is very much more expensive than RedHat Linux 9. RH Linux 9 is\nfree and the Enterpise ES edition will cost between 400 and several 1.000's\nof dollars, depending on the support you want to go with it.\n\nDo any of you guys have experience with the previous version of Enterprise\nLinux (that would be version 2.1) or even better, are any of you already\nusing version 3?\n\nWould you recommend this over RedHat Linux 9? I think that with RH Linux 9\nit would be easier to get all the latest versions of components I need (RPMs\nfor PostgreSQL, Apache, Samba etc.), while my guess would be that Enterprise\nLinux would be more difficult to upgrade...\n\nAlso, I cannot find any list of packages included in Enterprise Linux 2.1 /\n3. Does anyone know if PostgreSQL is included and if so, what version?\n\nKind regards,\nAlexander Priem.\n\n", "msg_date": "Thu, 23 Oct 2003 10:40:58 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": false, "msg_subject": "RedHat Enterprise Linux ES 3 ?!?!" }, { "msg_contents": "On Thu, 2003-10-23 at 01:40, Alexander Priem wrote:\n> Hi guys,\n> \n> This basically continues the other thread about the PERC4 RAID controller,\n> but since it is a bit off-topic I thought to start another thread. Thanks\n> for all your help so far :)\n> \n> Earlier today I read about the newly released RedHat Enterprise Linux ES\n> version 3. 
This version should include out-of-the-box megaraid_2 drivers, so\n> it would support the Dell PERC4/Di RAID controller.\n> \n> However, it is very much more expensive than RedHat Linux 9. RH Linux 9 is\n> free and the Enterpise ES edition will cost between 400 and several 1.000's\n> of dollars, depending on the support you want to go with it.\n> \n> Do any of you guys have experience with the previous version of Enterprise\n> Linux (that would be version 2.1) or even better, are any of you already\n> using version 3?\n> \n> Would you recommend this over RedHat Linux 9? I think that with RH Linux 9\n> it would be easier to get all the latest versions of components I need (RPMs\n> for PostgreSQL, Apache, Samba etc.), while my guess would be that Enterprise\n> Linux would be more difficult to upgrade...\n\nThe reason to get RHEL over RH9 or the upcoming Fedora releases is for\nstability. They have a -much- longer stability period, release cycle,\nand support lifetime. You get RHEL if you want a distribution that you\ncan get commercial support for, install the server and then not touch\nit. For production machines of this nature you'll pretty much never have\nthe latest and greatest packages. Instead you'll have the most\ncompletely stable packages. The two distribution types are really apples\nand oranges. They are both fruit ( they are both linux distros ) but\nthey sure taste different.\n\n> Also, I cannot find any list of packages included in Enterprise Linux 2.1 /\n> 3. Does anyone know if PostgreSQL is included and if so, what version?\n\nYou have two options as I understand it for PG under RHEL. You can\ninstall the PG source from Postgres themselves, or you can use the\nPostgresql Red Hat Edition. Bruce I think can give you more information\non this product. http://sources.redhat.com/rhdb/index.html This is the\nlink to it.\n\n> \n> Kind regards,\n> Alexander Priem.\n\nHope this helps,\n\nWill", "msg_date": "23 Oct 2003 08:27:43 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RedHat Enterprise Linux ES 3 ?!?!" }, { "msg_contents": "On Thu, 23 Oct 2003, Alexander Priem wrote:\n\n> I have been searching (www.lsil.com) for this megaraid_2 driver you\n> mentioned.\n> \n> What kind of MegaRaid card does the Perc4/Di match? Elite1600? Elite1650?\n> \n> I picked Elite1600 and the latest driver I found was version 2.05.00. Is\n> this one OK for RedHat 9? The README file present only mentions RedHat8...\n\nI would guess it's a MegaRaid320-2 card, listed here:\n\nhttp://www.lsilogic.com/products/stor_prod/raid/3202.html\n\nSince the Elite1600/1650 seem to be U160 cards and the Perc/4Di would seem \nto be listed as a U320 card at this page:\n\nhttp://www.domsch.com/linux/\n\n", "msg_date": "Thu, 23 Oct 2003 09:29:28 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers etc... was: PostgreSQL data on aNAS" }, { "msg_contents": "On Thu, 2003-10-23 at 11:27, Will LaShell wrote:\n> > Also, I cannot find any list of packages included in Enterprise Linux\n> 2.1 /\n> > 3. Does anyone know if PostgreSQL is included and if so, what version?\n> \n> You have two options as I understand it for PG under RHEL. You can\n> install the PG source from Postgres themselves, or you can use the\n> Postgresql Red Hat Edition. Bruce I think can give you more information\n> on this product. 
http://sources.redhat.com/rhdb/index.html This is the\n> link to it.\n> \n\nBruce works for SRA, not Red Hat, so he's probably not your best option\nto talk to on PRHE... While there are Red Hat employees floating around\nthese lists, I'd first suggest reading over the website and then either\nemailing the PRHE lists or one of it's team members depending on the\nspecifics of any questions.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "23 Oct 2003 11:44:10 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RedHat Enterprise Linux ES 3 ?!?!" }, { "msg_contents": "On Thu, 2003-10-23 at 08:44, Robert Treat wrote:\n> On Thu, 2003-10-23 at 11:27, Will LaShell wrote:\n> > > Also, I cannot find any list of packages included in Enterprise Linux\n> > 2.1 /\n> > > 3. Does anyone know if PostgreSQL is included and if so, what version?\n> > \n> > You have two options as I understand it for PG under RHEL. You can\n> > install the PG source from Postgres themselves, or you can use the\n> > Postgresql Red Hat Edition. Bruce I think can give you more information\n> > on this product. http://sources.redhat.com/rhdb/index.html This is the\n> > link to it.\n> > \n> \n> Bruce works for SRA, not Red Hat, so he's probably not your best option\n> to talk to on PRHE... While there are Red Hat employees floating around\n\nGah that's right. *beats self*\n\n> these lists, I'd first suggest reading over the website and then either\n> emailing the PRHE lists or one of it's team members depending on the\n> specifics of any questions.\n\nDon't forget you can always call the RedHat sales people as well. They\nusually have good product knowledge especially since you are talking\nabout the Advanced Server lines.\n\n> Robert Treat\n> -- \n> Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\nWill", "msg_date": "23 Oct 2003 09:24:50 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RedHat Enterprise Linux ES 3 ?!?!" }, { "msg_contents": "Robert Treat wrote:\n> On Thu, 2003-10-23 at 11:27, Will LaShell wrote:\n> > > Also, I cannot find any list of packages included in Enterprise Linux\n> > 2.1 /\n> > > 3. Does anyone know if PostgreSQL is included and if so, what version?\n> > \n> > You have two options as I understand it for PG under RHEL. You can\n> > install the PG source from Postgres themselves, or you can use the\n> > Postgresql Red Hat Edition. Bruce I think can give you more information\n> > on this product. http://sources.redhat.com/rhdb/index.html This is the\n> > link to it.\n> > \n> \n> Bruce works for SRA, not Red Hat, so he's probably not your best option\n> to talk to on PRHE... While there are Red Hat employees floating around\n> these lists, I'd first suggest reading over the website and then either\n> emailing the PRHE lists or one of it's team members depending on the\n> specifics of any questions.\n\nWay off topic, but let's do Red Hat a favor for employing PostgreSQL\nfolks --- here is a nice URL I read yesterday on the topic:\n\n\thttp://news.com.com/2100-7344-5094774.html?tag=nl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 23 Oct 2003 12:56:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RedHat Enterprise Linux ES 3 ?!?!" } ]
[ { "msg_contents": "Hi List,\n\n I got a P4 1.7Ghz , 512MB RAM , HD 7200 RPM, on RED HAT 9 running PostgreSQL \n7.3.2-3 Database.\n I have a Delphi aplication that updates the Oracle database using .dbf \nfile's information ( converting the data from the old clipper aplication ) and \nit takes about 3min and 45 seconds to update Jan/2003 .\n My problem is that I must substitute this Oracle for a PostgreSQL database \nand this same Delphi aplication takes 45 min to update Jan/2003.\n All delphi routines are converted and optmized to work with PgSQL.\n\nHere follows my postgresql.conf:\n \n#\n#\tConnection Parameters\n#\ntcpip_socket = true\n#ssl = false\n\nmax_connections = 10\n#superuser_reserved_connections = 2\n\nport = 5432 \n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t# octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n#\tShared Memory Size\n#\nshared_buffers = 10000\t\t# min max_connections*2 or 16, 8KB each\nmax_fsm_relations = 2000\t# min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 20000\t\t# min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64\t# min 10\n#wal_buffers = \t\t# min 4, typically 8KB each\n\n#\n#\tNon-shared Memory Sizes\n#\nsort_mem = 8000 \t\t# min 64, size in KB\nvacuum_mem = 16192\t\t# min 1024, size in KB\n\n\n#\n#\tWrite-ahead log (WAL)\n#\ncheckpoint_segments = 9 \t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t# range 30-3600, in seconds\n#\n#commit_delay = 0\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t# range 1-1000\n#\nfsync = false\n#wal_sync_method = fsync\t# the default varies across platforms:\n#\t\t\t\t# fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0\t\t\t# range 0-16\n\n\n#\n#\tOptimizer Parameters\n#\nenable_seqscan = false\nenable_indexscan = true\nenable_tidscan = true\nenable_sort = true\nenable_nestloop = true\nenable_mergejoin = true\nenable_hashjoin = true\n\neffective_cache_size = 16000\t# typically 8KB each\n#random_page_cost = 4\t\t# units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t# (same)\n#cpu_operator_cost = 0.0025\t# (same)\n\ndefault_statistics_target = 1000\t# range 1-1000\n\n#\n#\tGEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0\t# range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0\t\t# default based on tables in statement, \n\t\t\t\t# range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1\t\t# auto-compute seed\n\n\n#\n#\tMessage display\n#\n#server_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# info, notice, warning, error, log, fatal,\n\t\t\t\t# panic\n#client_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# log, info, notice, warning, error\n#silent_mode = false\n\n#log_connections = false\n#log_pid = false\n#log_statement = false\n#log_duration = false\nlog_timestamp = true\n\n#log_min_error_statement = error # Values in order of increasing severity:\n\t\t\t\t # debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t # info, notice, warning, error, panic(off)\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#explain_pretty_print = true\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = 
true\n\n\n#\n#\tSyslog\n#\n#syslog = 0\t\t\t# range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n#\n#\tStatistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_statement_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n#\tAccess statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n\n\n#\n#\tLock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n#\tMisc\n#\n#autocommit = true\n#dynamic_library_path = '$libdir'\nsearch_path = 'vendas'\n#datestyle = 'iso, us'\n#timezone = unknown\t\t# actually, defaults to TZ environment setting\n#australian_timezones = false\n#client_encoding = sql_ascii\t# actually, defaults to database encoding\n#authentication_timeout = 60\t# 1-600, in seconds\n#deadlock_timeout = 1000\t# in milliseconds\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000\t\t# min 10\n#max_files_per_process = 1000\t# min 25\n#password_encryption = true\n#sql_inheritance = true\n#transform_null_equals = false\n#statement_timeout = 0\t\t# 0 is disabled, in milliseconds\n#db_user_namespace = false\n \n\n\n#\n#\tLocale settings\n#\n# (initialized by initdb -- may be changed)\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n", "msg_date": "Mon, 20 Oct 2003 12:13:26 -0200", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Low Insert/Update Performance" }, { "msg_contents": "On Mon, 20 Oct 2003 12:13:26 -0200\nRhaoni Chiu Pereira <[email protected]> wrote:\n\n> Hi List,\n> \n> I got a P4 1.7Ghz , 512MB RAM , HD 7200 RPM, on RED HAT 9 running\n> PostgreSQL \n> 7.3.2-3 Database.\n\n[clip]\n\nPlease send schema & queries or we will not be able to help you. Also,\nif you could provide explain analyze of each query it would be even more\nhelpful!\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Mon, 20 Oct 2003 10:49:46 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Low Insert/Update Performance" }, { "msg_contents": "Rhaoni,\n\n> My problem is that I must substitute this Oracle for a PostgreSQL\n> database and this same Delphi aplication takes 45 min to update Jan/2003.\n> All delphi routines are converted and optmized to work with PgSQL.\n\nObviously not. 
\n\nHow about posting the update queries?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 20 Oct 2003 10:07:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Low Insert/Update Performance" }, { "msg_contents": "Rhaoni Chiu Pereira kirjutas E, 20.10.2003 kell 17:13:\n> Hi List,\n> \n> I got a P4 1.7Ghz , 512MB RAM , HD 7200 RPM, on RED HAT 9 running PostgreSQL \n> 7.3.2-3 Database.\n> I have a Delphi aplication that updates the Oracle database using .dbf \n> file's information ( converting the data from the old clipper aplication ) and \n> it takes about 3min and 45 seconds to update Jan/2003 .\n\nHave you tried contrib/dbase to do the same ?\n\nHow fast does this run\n\n> My problem is that I must substitute this Oracle for a PostgreSQL database \n> and this same Delphi aplication takes 45 min to update Jan/2003.\n> All delphi routines are converted and optmized to work with PgSQL.\n\nCould it be that you try to run each insert in a separate transaction in\nPgSQL version ?\n\nAnother possibility is that there is a primary key index created on\nempty tables which is not used in subsequent UNIQUE tests when tables\nstart to fill and using index would be useful. An ANALYZE in a parallel\nbackend could help here. Same can be true for foreign keys and unique\nconstraints.\n\n---------------\nHannu\n\n", "msg_date": "Mon, 20 Oct 2003 20:50:22 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Low Insert/Update Performance" }, { "msg_contents": "Rhaoni,\n\n> The delphi program does just one commit for all queries .\n> I was wandering if ther is some configuration parameters to be changed to\n> improve the performance ?\n\nTo help you, we'll need to to trap a query and run an EXPLAIN ANALYZE on it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 10:13:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Insert/Update Performance" }, { "msg_contents": "Hi List;\n\n Here follow the update query, explain analyze of it , my postgresql.conf and \nmy db configuration. 
This is my first PostgreSQL DB so I would like to know if \nits performance is normal !\n If there is some postgresql.conf's parameter that you think will optmize the \ndatabase just tell me !!!\n\nQUERY:\n\nupdate ftnfco00 set \nempfil = 0, \ndata_entrega = NULL,\nsituacao_nf = 'N',\ncod_fiscal = 61010000,\nbase_calc_icm_trib = '264.1'::float8,\nnf_emitida = 'S',\ntipo_cad_clicre ='C',\ncod_cliente = '55380'::float8,\ncod_repres = 8,\ncod_tipo_cliente = 1,\nestado_cliente = 'PR',\npais_cliente = 978,\nclassif_cliente = '',\ncod_suframa = ' ',\nordem_compra = ' ',\nbanco_cobranca = 0,\nsituacao_comissao = '0',\nperc_comissao = '6'::float8,\nemitir_bloqueto = 'N',\ncod_tipo_venda = 0,\nprazo_pgto_01 = 68,\nprazo_pgto_02 = 0,\nprazo_pgto_03 = 0,\nprazo_pgto_04 = 0,\nprazo_pgto_05 = 0,\nprazo_pgto_06 = 0,\nprazo_pgto_07 = 0,\nprazo_pgto_08 = 0,\nprazo_pgto_09 = 0,\nprazo_pgto_desc_duplic = 0,\nperc_desc_duplic = '0'::float8,\nqtde_fisica = '5'::float8,\nvlr_liquido = '264.1'::float8,\nvlr_ipi = '0'::float8,\nvlr_compl_nf = 0,\nvlr_frete = '26.4'::float8,\nvlr_acresc_fin_emp = 0,\nvlr_acresc_fin_tab = 0,\nvlr_dolar_vcto_dupl = 1,\nvlr_dolar_dia_fatur = 1,\nvlr_icm = '31.69'::float8,\nvlr_ipi_consignacao = 0,\nperc_juro_dia = '0.15'::float8,\ncod_texto_padrao = 19,\ncod_transp = 571,\ncod_transp_redesp = 0,\nplaca_transp = '',\npeso_liquido = '5.832'::float8,\npeso_bruto = '6.522'::float8,\nqtde_volumes = 5,\nproxima_nf = '0'::float8,\nlista_preco = '03RS',\nlista_preco_basico = ' ',\natu_guia_embarque = 'N',\nvlr_pis_cofins = 0,\nqtde_duzias = 5,\nobs_nf = 'ORDEM DE COMPRA 40851583',\nmargem_comercial = 0,\nmargem_operac = 0\nwhere\nemp = 909 and \nfil = 101 and \nnota_fiscal = '57798'::float8 and\nserie = 'UNICA' and\ndata_emissao = cast('2003-01-03 00:00:00'::timestamp as timestamp)\n\n\nEXPLAIN ANALYZE:\n QUERY \nPLAN $\n--------------------------------------------------------------------------------\n---------------------------------------------$\n Index Scan using ftnfco06 on ftnfco00 (cost=0.00..20.20 rows=1 width=535) \n(actual time=1.14..1.27 rows=1 loops=1)\n Index Cond: ((emp = 909::numeric) AND (fil = 101::numeric) AND (data_emissao \n= '2003-01-03 00:00:00'::timestamp without ti$\n Filter: (((nota_fiscal)::double precision = 57798::double precision) AND \n(serie = 'UNICA'::character varying))\n Total runtime: 3.56 msec\n(4 rows)\n\npostgresql.conf:\n\n#\tConnection Parameters\n#\ntcpip_socket = true\n#ssl = false\n\nmax_connections = 10\n#superuser_reserved_connections = 2\n\nport = 5432 \n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t# octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n#\tShared Memory Size\n#\nshared_buffers = 10000\t\t# min max_connections*2 or 16, 8KB each\nmax_fsm_relations = 2000\t# min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 20000\t\t# min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64\t# min 10\n#wal_buffers = \t\t# min 4, typically 8KB each\n\n#\n#\tNon-shared Memory Sizes\n#\nsort_mem = 8000 \t\t# min 64, size in KB\nvacuum_mem = 16192\t\t# min 1024, size in KB\n\n\n#\n#\tWrite-ahead log (WAL)\n#\ncheckpoint_segments = 9 \t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t# range 30-3600, in seconds\n#\n#commit_delay = 0\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t# range 1-1000\n#\nfsync = false\n#wal_sync_method = fsync\t# the default varies across platforms:\n#\t\t\t\t# 
fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0\t\t\t# range 0-16\n\n\n#\n#\tOptimizer Parameters\n#\nenable_seqscan = false\nenable_indexscan = true\nenable_tidscan = true\nenable_sort = true\nenable_nestloop = true\nenable_mergejoin = true\nenable_hashjoin = true\n\neffective_cache_size = 16000\t# typically 8KB each\n#random_page_cost = 4\t\t# units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t# (same)\n#cpu_operator_cost = 0.0025\t# (same)\n\ndefault_statistics_target = 1000\t# range 1-1000\n\n#\n#\tGEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0\t# range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0\t\t# default based on tables in statement, \n\t\t\t\t# range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1\t\t# auto-compute seed\n\n\n#\n#\tMessage display\n#\n#server_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# info, notice, warning, error, log, fatal,\n\t\t\t\t# panic\n#client_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# log, info, notice, warning, error\n#silent_mode = false\n\n#log_connections = false\n#log_pid = false\n#log_statement = false\n#log_duration = false\nlog_timestamp = true\n\n#log_min_error_statement = error # Values in order of increasing severity:\n\t\t\t\t # debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t # info, notice, warning, error, panic(off)\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#explain_pretty_print = true\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n#\tSyslog\n#\n#syslog = 0\t\t\t# range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n#\n#\tStatistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_statement_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n#\tAccess statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n\n\n#\n#\tLock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n#\tMisc\n#\n#autocommit = true\n#dynamic_library_path = '$libdir'\nsearch_path = 'vendas'\n#datestyle = 'iso, us'\n#timezone = unknown\t\t# actually, defaults to TZ environment setting\n#australian_timezones = false\n#client_encoding = sql_ascii\t# actually, defaults to database encoding\n#authentication_timeout = 60\t# 1-600, in seconds\n#deadlock_timeout = 1000\t# in milliseconds\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000\t\t# min 10\n#max_files_per_process = 1000\t# min 25\n#password_encryption = true\n#sql_inheritance = true\n#transform_null_equals = false\n#statement_timeout = 0\t\t# 0 is disabled, in milliseconds\n#db_user_namespace = false\n \n\n\n#\n#\tLocale settings\n#\n# (initialized by initdb -- may be changed)\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\n\ndb configuration:\n\n Pentium 4 1.7 GHz , 512 MB RAM DDR , HD 7200 RPM , RH 9 , PostgreSQL 7.3.2-3\n\n\n\nAtenciosamente,\n\nRhaoni Chiu 
Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\nCitando Josh Berkus <[email protected]>:\n\n<> Rhaoni,\n<> \n<> > The delphi program does just one commit for all queries .\n<> > I was wandering if ther is some configuration parameters to be changed to\n<> > improve the performance ?\n<> \n<> To help you, we'll need to to trap a query and run an EXPLAIN ANALYZE on\n<> it.\n<> \n<> -- \n<> Josh Berkus\n<> Aglio Database Solutions\n<> San Francisco\n<> \n\n", "msg_date": "Wed, 22 Oct 2003 14:21:37 -0200", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Low Insert/Update Performance" }, { "msg_contents": "Rhaoni,\n\n> Total runtime: 3.56 msec\n> (4 rows)\n\nWell, from that figure it's not the query that's holding you up. \n\nYou said that the system bogs down when you're doing a whole series of these \nupdates, or just one? If the former, then I'm afraid that it's your disk \nthat's to blame ... large numbers of rapid-fire updates simply won't be fast \non a single IDE disk. Try getting a second disk and moving the transaction \nlog to it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Oct 2003 09:27:49 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Insert/Update Performance" }, { "msg_contents": "Rhaoni,\n\n> First of all , thank's for your atention and fast answer. The system\n> really bogs down when I'm doing a whole series of these updates.\n\nThat would be consistent with a single-disk problem.\n\n> Take a\n> look at my postgresql.conf I'm afraid of putting some parameters wrong (\n> too high or too low ). And sorry if it sounds stupid but how can I move the\n> transaction log to this second disk ?\n\n1) Install the 2nd disk.\n2) With PostgreSQL shut down, copy the PGDATA/pg_xlog directory to the 2nd \ndisk.\n3) delete the old pg_xlog directory\n4) Symlink or Mount the new pg_xlog directory under PGDATA as PGDATA/pg_xlog.\n5) Restart Postgres.\n\nWhat I am interested in is your original assertion that this ran faster on \nOracle. Was Oracle installed on this particular machine, or a different \none?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Oct 2003 09:58:28 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Insert/Update Performance" } ]
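A minimal sketch of the point Hannu raises in the thread above: making sure the whole batch runs inside one explicit transaction rather than one implicit transaction per statement. The table, columns, and key values are taken from the posted UPDATE; the abbreviated SET list and the repetition are illustrative only, and this assumes the batch can be driven from SQL rather than from the Delphi layer.

BEGIN;
UPDATE ftnfco00
   SET situacao_nf = 'N',
       nf_emitida  = 'S'   -- ...plus the remaining columns from the posted statement
 WHERE emp = 909
   AND fil = 101
   AND nota_fiscal = '57798'::float8
   AND serie = 'UNICA'
   AND data_emissao = '2003-01-03 00:00:00'::timestamp;
-- ...the rest of the month's statements, one per row...
COMMIT;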
[ { "msg_contents": "It has been suggested to me that I resubmit this question to this list,\nrather than the GENERAL list it was originaly sent to.\n\n I asked earlier about ways of doing an UPDATE involving a left outer\njoin and got some very useful feedback.\n\n This has thrown up a (to me) strange anomaly about the speed of such\nan update.\n\n The input to this query is a fairly large (the example I'm working\nwith has 335,000 rows) set of records containing numbers to be looked\nup in the lookup table. This lookup table has 239 rows.\n\n I'm always reading the suggestion that doing a 'VACUUM ANALYZE' on a\ndatabase is 'A Good Thing' as it helps the planner to do the best thing, so\nI arranged a vacuum analyze on the input records.\n\n Running the query takes about 13 mins or so.\n\n If, however I *don't* do an analyze, but leave the input table as\nit was when imported the run takes about 2.5 mins!\n\n Looking at the output from 'explain' I can see that the main difference\nin the way the planner does it is that it does a merge join in the non-analyze\ncase, and a hash join in the analyze case.\n\n Unfortunately I don't really know what this is implying, hence the call\nfor assistance.\n\n I have a file with all sorts of info about the problem (details of tables,\noutput of 'explain' etc) but as it is about 5K in size, and wide as well, I\ndidn't want to dump it in the list without any warning!\n\n However - it has been suggested that it should be OK to include this I have\nnow done so - hopefully with this message.\n\n Regards,\n Harry.", "msg_date": "Mon, 20 Oct 2003 17:50:16 +0100 (BST)", "msg_from": "Harry Broomhall <[email protected]>", "msg_from_op": true, "msg_subject": "Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Harry,\n\n> It has been suggested to me that I resubmit this question to this list,\n> rather than the GENERAL list it was originaly sent to.\n>\n> I asked earlier about ways of doing an UPDATE involving a left outer\n> join and got some very useful feedback.\n\nThe query you posted will always be somewhat slow due to the forced join \norder, which is unavodable with a left outer join. \n\nHowever, regarding your peculiar behaviour, please post:\n\n1) Your random_page_cost and effective_cache_size settings\n2) The EXPLAIN ANALYZE of each query instead of just the EXPLAIN\n\nThanks!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 20 Oct 2003 10:28:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Josh Berkus writes:\n> Harry,\n\n\n Many thanks for your response,\n\n> \n> > It has been suggested to me that I resubmit this question to this list,\n> > rather than the GENERAL list it was originaly sent to.\n> >\n> > I asked earlier about ways of doing an UPDATE involving a left outer\n> > join and got some very useful feedback.\n> \n> The query you posted will always be somewhat slow due to the forced join \n> order, which is unavodable with a left outer join. \n\n Yes - I rather suspected that! It is a shame it takes two joins to do\nthe work.\n\n> \n> However, regarding your peculiar behaviour, please post:\n> \n> 1) Your random_page_cost and effective_cache_size settings\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n\n i.e. 
- still set to their defaults.\n\n> 2) The EXPLAIN ANALYZE of each query instead of just the EXPLAIN\n\n First the case with no vacuum analyze:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=99.32..171.32 rows=1000 width=259) (actual time=18579.92..48277.69 rows=335671 loops=1)\n Merge Cond: (\"outer\".cdr_id = \"inner\".cdr_id)\n -> Index Scan using import_cdrs_cdr_id_key on import_cdrs (cost=0.00..52.00 rows=1000 width=164) (actual time=0.42..11479.51 rows=335671 loops=1)\n -> Sort (cost=99.32..101.82 rows=1000 width=95) (actual time=18578.71..21155.65 rows=335671 loops=1)\n Sort Key: un.cdr_id\n -> Hash Join (cost=6.99..49.49 rows=1000 width=95) (actual time=4.70..10011.35 rows=335671 loops=1)\n Hash Cond: (\"outer\".interim_cli = \"inner\".interim_num)\n Join Filter: ((\"outer\".starttime >= \"inner\".starttime) AND (\"outer\".starttime <= \"inner\".endtime))\n -> Seq Scan on import_cdrs un (cost=0.00..20.00 rows=1000 width=49) (actual time=0.02..4265.63 rows=335671 loops=1)\n -> Hash (cost=6.39..6.39 rows=239 width=46) (actual time=4.57..4.57 rows=0 loops=1)\n -> Seq Scan on num_xlate (cost=0.00..6.39 rows=239 width=46) (actual time=0.12..2.77 rows=239 loops=1)\n Total runtime: 80408.42 msec\n(12 rows)\n\n And now the case *with* the vacuum analyze:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=15335.91..49619.57 rows=335671 width=202) (actual time=12383.44..49297.58 rows=335671 loops=1)\n Hash Cond: (\"outer\".cdr_id = \"inner\".cdr_id)\n -> Seq Scan on import_cdrs (cost=0.00..8496.71 rows=335671 width=126) (actual time=0.15..9504.24 rows=335671 loops=1)\n -> Hash (cost=10398.73..10398.73 rows=335671 width=76) (actual time=12371.13..12371.13 rows=0 loops=1)\n -> Hash Join (cost=6.99..10398.73 rows=335671 width=76) (actual time=4.91..9412.55 rows=335671 loops=1)\n Hash Cond: (\"outer\".interim_cli = \"inner\".interim_num)\n Join Filter: ((\"outer\".starttime >= \"inner\".starttime) AND (\"outer\".starttime <= \"inner\".endtime))\n -> Seq Scan on import_cdrs un (cost=0.00..8496.71 rows=335671 width=30) (actual time=0.09..3813.54 rows=335671 loops=1)\n -> Hash (cost=6.39..6.39 rows=239 width=46) (actual time=4.71..4.71 rows=0 loops=1)\n -> Seq Scan on num_xlate (cost=0.00..6.39 rows=239 width=46) (actual time=0.22..2.90 rows=239 loops=1)\n Total runtime: 432543.73 msec\n(11 rows)\n\n Please note that since I first posted I have been slightly adjusting the\nschema of the tables, but the disparity remains.\n\n Many thanks for your assistance.\n\n Regards,\n Harry.\n\n", "msg_date": "Tue, 21 Oct 2003 12:40:26 +0100 (BST)", "msg_from": "Harry Broomhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Harry Broomhall wrote:\n > #effective_cache_size = 1000 # typically 8KB each\n > #random_page_cost = 4 # units are one sequential page fetch cost\n\nYou must tune the first one at least. 
Try \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these \nparameters.\n\n >>2) The EXPLAIN ANALYZE of each query instead of just the EXPLAIN\n >\n >\n > First the case with no vacuum analyze:\n >\n > QUERY PLAN\n > \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n > Merge Join (cost=99.32..171.32 rows=1000 width=259) (actual \ntime=18579.92..48277.69 rows=335671 loops=1)\n > Merge Cond: (\"outer\".cdr_id = \"inner\".cdr_id)\n > -> Index Scan using import_cdrs_cdr_id_key on import_cdrs \n(cost=0.00..52.00 rows=1000 width=164) (actual time=0.42..11479.51 rows=335671 \nloops=1)\n > -> Sort (cost=99.32..101.82 rows=1000 width=95) (actual \ntime=18578.71..21155.65 rows=335671 loops=1)\n > Sort Key: un.cdr_id\n > -> Hash Join (cost=6.99..49.49 rows=1000 width=95) (actual \ntime=4.70..10011.35 rows=335671 loops=1)\n > Hash Cond: (\"outer\".interim_cli = \"inner\".interim_num)\n > Join Filter: ((\"outer\".starttime >= \"inner\".starttime) AND \n(\"outer\".starttime <= \"inner\".endtime))\n > -> Seq Scan on import_cdrs un (cost=0.00..20.00 rows=1000 \nwidth=49) (actual time=0.02..4265.63 rows=335671 loops=1)\n > -> Hash (cost=6.39..6.39 rows=239 width=46) (actual \ntime=4.57..4.57 rows=0 loops=1)\n > -> Seq Scan on num_xlate (cost=0.00..6.39 rows=239 \nwidth=46) (actual time=0.12..2.77 rows=239 loops=1)\n > Total runtime: 80408.42 msec\n > (12 rows)\n\nYou are lucky to get a better plan here because planner is way off w.r.t \nestimated number of rows.\n >\n > And now the case *with* the vacuum analyze:\n >\n > QUERY PLAN\n > \n-----------------------------------------------------------------------------------------------------------------------------------------\n > Hash Join (cost=15335.91..49619.57 rows=335671 width=202) (actual \ntime=12383.44..49297.58 rows=335671 loops=1)\n > Hash Cond: (\"outer\".cdr_id = \"inner\".cdr_id)\n > -> Seq Scan on import_cdrs (cost=0.00..8496.71 rows=335671 width=126) \n(actual time=0.15..9504.24 rows=335671 loops=1)\n > -> Hash (cost=10398.73..10398.73 rows=335671 width=76) (actual \ntime=12371.13..12371.13 rows=0 loops=1)\n > -> Hash Join (cost=6.99..10398.73 rows=335671 width=76) (actual \ntime=4.91..9412.55 rows=335671 loops=1)\n > Hash Cond: (\"outer\".interim_cli = \"inner\".interim_num)\n > Join Filter: ((\"outer\".starttime >= \"inner\".starttime) AND \n(\"outer\".starttime <= \"inner\".endtime))\n > -> Seq Scan on import_cdrs un (cost=0.00..8496.71 \nrows=335671 width=30) (actual time=0.09..3813.54 rows=335671 loops=1)\n > -> Hash (cost=6.39..6.39 rows=239 width=46) (actual \ntime=4.71..4.71 rows=0 loops=1)\n > -> Seq Scan on num_xlate (cost=0.00..6.39 rows=239 \nwidth=46) (actual time=0.22..2.90 rows=239 loops=1)\n > Total runtime: 432543.73 msec\n > (11 rows)\n >\n\nWhat happens if you turn off hash joins? Also bump sort memory to something \ngood.. around 16MB and see what difference does it make to performance..\n\n Shridhar\n\n\n", "msg_date": "Tue, 21 Oct 2003 17:30:08 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Shridhar Daithankar writes:\n> Harry Broomhall wrote:\n> > #effective_cache_size = 1000 # typically 8KB each\n> > #random_page_cost = 4 # units are one sequential page fetch cost\n> \n> You must tune the first one at least. 
Try \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these \n> parameters.\n\n Wow. Many thanks for the pointer. I'm going to be spending some time\ntrying to get my head around all of that!\n\n[SNIP]\n\n> > Total runtime: 80408.42 msec\n> > (12 rows)\n> \n> You are lucky to get a better plan here because planner is way off w.r.t \n> estimated number of rows.\n\n Yes! I thought that. Which was why I was so surprised at the difference.\n\n> >\n> > And now the case *with* the vacuum analyze:\n> >\n[SNIP]\n> \n> What happens if you turn off hash joins? Also bump sort memory to something \n> good.. around 16MB and see what difference does it make to performance..\n\n\n\n Lots of things to try there.....\n\n\n It will probably take me some time <grin>.\n\n Regards,\n Harry.\n\n", "msg_date": "Tue, 21 Oct 2003 13:35:50 +0100 (BST)", "msg_from": "Harry Broomhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Shridhar Daithankar writes:\n\n First - many thanks for your suggestions and pointers to further info.\n\n I have been trying some of them with some interesting results!\n\n> Harry Broomhall wrote:\n> > #effective_cache_size = 1000 # typically 8KB each\n> > #random_page_cost = 4 # units are one sequential page fetch cost\n> \n> You must tune the first one at least. Try \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these \n> parameters.\n\n Changing effective_cache_size seemed to have very little effect. I took it\nin steps up to 300MB (the machine has 640MB memory), and the differences\nin speed were less than 10%.\n\n[SNIP]\n> \n> What happens if you turn off hash joins?\n\n This makes the non vacuum version about 40% slower, and the vacuum version\nto the same speed (i.e. about 4X faster than it had been!).\n\n> Also bump sort memory to something \n> good.. around 16MB and see what difference does it make to performance..\n\n\n This was interesting. Taking it to 10MB made a slight improvement. Up to\n20MB and the vacuum case improved by 5X speed, but the non-vacuum version\nslowed down. Putting it up to 40MB slowed both down again.\n\n I will need to test with some of the other scripts and functions I have\nwritten, but it looks as if selective use of more sort memory will be\nuseful.\n\n Regards,\n Harry.\n\n", "msg_date": "Tue, 21 Oct 2003 15:50:48 +0100 (BST)", "msg_from": "Harry Broomhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance weirdness with/without vacuum analyze" }, { "msg_contents": "Harry Broomhall <[email protected]> writes:\n\n> -> Index Scan using import_cdrs_cdr_id_key on import_cdrs (cost=0.00..52.00 rows=1000 width=164) (actual time=0.42..11479.51 rows=335671 loops=1)\n\n> -> Seq Scan on import_cdrs (cost=0.00..8496.71 rows=335671 width=126) (actual time=0.15..9504.24 rows=335671 loops=1)\n\nHm. The planner's default cost parameters assume that a full-table\nindex scan will be much slower than a full-table seq scan. That's\nevidently not the case in your test situation. You could probably\nbring the estimates more in line with reality (and thereby improve the\nchoice of plan) by reducing random_page_cost towards 1 and increasing\neffective_cache_size to represent some realistic fraction of your\navailable RAM (though I concur with your observation that the\nlatter doesn't change the estimates all that much).\n\nBeware however that test-case reality and production reality are not the\nsame thing. 
You are evidently testing with tables that fit in RAM.\nIf your production tables will not, you'd better be wary of being overly\naggressive about reducing random_page_cost. I believe the default value\n(4.0) is fairly representative for situations where many actual disk\nfetches are needed, ie, the tables are much larger than RAM. 1.0 would\nbe appropriate if all your tables are always fully cached in RAM (since\nRAM has by definition no random-access penalty). In intermediate cases\nyou need to select intermediate values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Oct 2003 13:00:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance weirdness with/without vacuum analyze " } ]
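For reference, the experiments discussed in this thread can be tried per session without editing postgresql.conf. The values below are only illustrative, pulled from the messages above (20MB sort_mem was Harry's best case after ANALYZE; the random_page_cost and effective_cache_size figures follow Tom's advice of moving towards 1 and towards the real kernel-cache size), not general recommendations:

SET enable_hashjoin = off;         -- Shridhar's suggestion: see what the non-hash plan does
SET sort_mem = 20480;              -- 20MB per sort, in KB; Harry's sweet spot for the analyzed case
SET random_page_cost = 2;          -- between 1 (fully cached) and the default of 4
SET effective_cache_size = 38400;  -- ~300MB of kernel cache, counted in 8KB pages
-- then re-run the problem query under EXPLAIN ANALYZE and compare the plans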
[ { "msg_contents": "Folks,\n\nI'm working on the demo session for our upcoming presentation at PHPCon. \n\nAs a side issue, we ended up comparing 3 versions of the same search screen:\n\n1) All in PHP with views;\n2) Using a function to build a query and count results but executing that \nquery directly and sorting, paging in PHP;\n3) Using a Set Returning function to handle row-returning, sorting, and \npaging.\n\nAll three methods were executing a series moderately complex query against a \nmedium-sized data set (only about 20,000 rows but it's on a laptop). The \npostgresql.conf was tuned like a webserver; e.g. low sort_mem, high \nmax_connections.\n\nSo far, on the average of several searches, we have:\n\n1) 0.19687 seconds\n2) 0.20667 seconds\n3) 0.20594 seconds\n\nIn our tests, using any kind of PL/pgSQL function seems to carry a 0.01 second \npenalty over using PHP to build the search query. I'm not sure if this is \ncomparitive time for string-parsing or something else; the 0.01 seems to be \nconsistent regardless of scale.\n\nThe difference between using a PL/pgSQL function as a query-builder only (the \n7.2.x method) and using SRFs was small enough not to be significant.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 20 Oct 2003 17:55:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "SRFs ... no performance penalty?" }, { "msg_contents": "On Mon, 2003-10-20 at 20:55, Josh Berkus wrote:\n> Folks,\n> \n> I'm working on the demo session for our upcoming presentation at PHPCon. \n> \n> As a side issue, we ended up comparing 3 versions of the same search screen:\n> \n> 1) All in PHP with views;\n> 2) Using a function to build a query and count results but executing that \n> query directly and sorting, paging in PHP;\n> 3) Using a Set Returning function to handle row-returning, sorting, and \n> paging.\n> \n> All three methods were executing a series moderately complex query against a \n> medium-sized data set (only about 20,000 rows but it's on a laptop). The \n> postgresql.conf was tuned like a webserver; e.g. low sort_mem, high \n> max_connections.\n> \n> So far, on the average of several searches, we have:\n> \n> 1) 0.19687 seconds\n> 2) 0.20667 seconds\n> 3) 0.20594 seconds\n> \n\nIs this measuring time in the back-end or total time of script\nexecution? \n\n\n> In our tests, using any kind of PL/pgSQL function seems to carry a 0.01 second \n> penalty over using PHP to build the search query. I'm not sure if this is \n> comparitive time for string-parsing or something else; the 0.01 seems to be \n> consistent regardless of scale.\n> \n> The difference between using a PL/pgSQL function as a query-builder only (the \n> 7.2.x method) and using SRFs was small enough not to be significant.\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "21 Oct 2003 11:02:25 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SRFs ... no performance penalty?" }, { "msg_contents": "Robert,\n\n> > 1) 0.19687 seconds\n> > 2) 0.20667 seconds\n> > 3) 0.20594 seconds\n>\n> Is this measuring time in the back-end or total time of script\n> execution?\n\nTotal time of execution, e.g. 
from clicking the \"enter\" button to displaying \nthe list of matches. Any other comparison would be misleading.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 09:22:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SRFs ... no performance penalty?" } ]
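For readers who want to see what "method 3" looks like in outline, here is a minimal, hypothetical set-returning PL/pgSQL function that does the search and paging server-side. The items table, its name column, and the paging-by-loop approach are invented for illustration; this is not the code from the PHPCon demo.

-- stand-in table so the example is self-contained
CREATE TABLE items (id serial, name text);

CREATE OR REPLACE FUNCTION search_items(text, integer, integer)
RETURNS SETOF items AS '
DECLARE
    rec  items%ROWTYPE;
    hits integer := 0;
BEGIN
    -- $1 = search term, $2 = page size, $3 = row offset
    FOR rec IN SELECT * FROM items
                WHERE name ILIKE ''%'' || $1 || ''%''
                ORDER BY name
    LOOP
        hits := hits + 1;
        EXIT WHEN hits > $3 + $2;   -- past the requested page
        IF hits > $3 THEN
            RETURN NEXT rec;
        END IF;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';

-- e.g.: SELECT * FROM search_items('widget', 20, 0);

On the 7.3-era servers discussed here the function body has to be a single-quoted string, hence the doubled quotes; newer releases allow dollar-quoting instead.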
[ { "msg_contents": "Hi,\n\nPretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with\ninternal drives on RAID5 will be delivered. Postgres will be from RH8.0.\n\nI am planning for these values for the postgres configuration - to begin\nwith:\n\nShared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n\nSort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that -\n167772\n\nEffective_cache_size = 262144 (same as shared_buffers - 25%)\n\n\nIn the /etc/sysctl file:\n=================\nkernel.shmall = 536870912 (512MB) SHMALL Total amount of shared memory\navailable (bytes or pages)\nkernel.shmmax = 536870912 (512MB) SHMMAX Maximum size of shared memory\nsegment (bytes)\n\nIn a generic sense, these are recommended values I found in some\ndocuments. The database will be small in size and will gradually grow\nover time from few thousands to a few million records, or more. The\nactivity will be mostly of select statements from a few tables with\njoins, orderby, groupby clauses. The web application is based on\nApache/Resin and hotspot JVM 1.4.0.\n\nAre the above settings ok to begin with? Are there any other parameters\nthat I should configure now, or monitor lateron?\n\nIn other words, am I missing anything here to take full advantage of 4\nCPUs and 8Gigs of RAM?\n\nAppreciate any help.\n\n\nThanks,\nAnjan\n\n************************************************************************\n** \nThis e-mail and any files transmitted with it are intended for the use\nof the addressee(s) only and may be confidential and covered by the\nattorney/client and other privileges. If you received this e-mail in\nerror, please notify the sender; do not disclose, copy, distribute, or\ntake any action in reliance on the contents of this information; and\ndelete it from your system. Any other use of this e-mail is prohibited.\n\n\n\n\n\n\n\nTuning for mid-size server\n\n\n\nHi,\n\nPretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with internal drives on RAID5 will be delivered. Postgres will be from RH8.0.\nI am planning for these values for the postgres configuration - to begin with:\n\nShared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n\nSort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that - 167772\n\nEffective_cache_size = 262144 (same as shared_buffers - 25%)\n\n\nIn the /etc/sysctl file:\n=================\nkernel.shmall = 536870912 (512MB) SHMALL Total amount of shared memory available (bytes or pages)\nkernel.shmmax = 536870912 (512MB) SHMMAX Maximum size of shared memory segment (bytes)\n\nIn a generic sense, these are recommended values I found in some documents. The database will be small in size and will gradually grow over time from few thousands to a few million records, or more. The activity will be mostly of select statements from a few tables with joins, orderby, groupby clauses. The web application is based on Apache/Resin and hotspot JVM 1.4.0.\nAre the above settings ok to begin with? Are there any other parameters that I should configure now, or monitor lateron?\nIn other words, am I missing anything here to take full advantage of 4 CPUs and 8Gigs of RAM?\n\nAppreciate any help.\n\n\nThanks,\nAnjan\n\n************************************************************************** \nThis e-mail and any files transmitted with it are intended for the use of the addressee(s) only and may be confidential and covered by the attorney/client and other privileges.  
If you received this e-mail in error, please notify the sender; do not disclose, copy, distribute, or take any action in reliance on the contents of this information; and delete it from your system. Any other use of this e-mail is prohibited.", "msg_date": "Tue, 21 Oct 2003 10:28:13 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning for mid-size server" }, { "msg_contents": "On Tuesday 21 October 2003 15:28, Anjan Dave wrote:\n> Hi,\n>\n> Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with\n> internal drives on RAID5 will be delivered. Postgres will be from RH8.0.\n\nYou'll want to upgrade PG to v7.3.4\n\n> I am planning for these values for the postgres configuration - to begin\n> with:\n>\n> Shared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n>\n> Sort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that -\n> 167772\n>\n> Effective_cache_size = 262144 (same as shared_buffers - 25%)\n\nMy instincts would be to lower the first two substantially, and increase the \neffective cache once you know load levels. I'd probably start with something \nlike the values below and work up:\nshared_buffers = 8,000 - 10,000 (PG is happier letting the OS do the cacheing)\nsort_mem = 4,000 - 8,000 (don't forget this is for each sort)\n\nYou'll find the annotated postgresql.conf and performance tuning articles \nuseful:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n> In a generic sense, these are recommended values I found in some\n> documents. The database will be small in size and will gradually grow\n> over time from few thousands to a few million records, or more. The\n> activity will be mostly of select statements from a few tables with\n> joins, orderby, groupby clauses. The web application is based on\n> Apache/Resin and hotspot JVM 1.4.0.\n\nYou'll need to figure out how many concurrent users you'll have and how much \nmemory will be required by apache/java. If your database grows radically, \nyou'll probably want to re-tune as it grows.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 21 Oct 2003 16:56:50 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Anjan,\n\n> Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with\n> internal drives on RAID5 will be delivered. Postgres will be from RH8.0.\n\nHow many drives? RAID5 sucks for heavy read-write databases, unless you have \n5+ drives. Or a large battery-backed cache.\n\nAlso, last I checked, you can't address 8GB of RAM without a 64-bit processor. \nSince when are the Xeons 64-bit?\n\n> Shared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n\nThat's too high. Cut it in half at least. Probably down to 5% of available \nRAM.\n\n> Sort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that -\n> 167772\n\nFine if you're running a few-user-large-operation database. If this is a \nwebserver, you want a much, much lower value.\n\n> Effective_cache_size = 262144 (same as shared_buffers - 25%)\n\nMuch too low. Where did you get these calculations, anyway?\n\n> In a generic sense, these are recommended values I found in some\n> documents.\n\nWhere? We need to contact the author of the \"documents\" and tell them to \ncorrect things.\n\n> joins, orderby, groupby clauses. 
The web application is based on\n> Apache/Resin and hotspot JVM 1.4.0.\n\nYou'll need to estimate the memory consumed by Java & Apache to have realistic \nfigures to work with.\n\n> Are the above settings ok to begin with? Are there any other parameters\n> that I should configure now, or monitor lateron?\n\nNo, they're not. See:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these \nparameters.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 09:20:44 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Anjan Dave wrote:\n\n> Shared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n\n250,000 is probably the max you can use due to the 2GB process limit \nunless you recompile the Linux Kernel to use 3GB process/1GB kernel. \nYes, I've got 8GB also and I started at 262144 and kept working my way \ndown until Linux would allocate the memory.\n\n> \n> Sort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that - 167772\n> \n> Effective_cache_size = 262144 (same as shared_buffers - 25%)\n\nThis should reflect the amount of memory available for caching. And \nunless you plan on running a ton of memory hogging software on the same \nmachine, you probably will have 6GB available as cache. Top on my system \nconfirms the 6GB number so I've got my setting at 750,000. (Left a \nlittle space for OS/programs/etc.)\n\n> In the /etc/sysctl file:\n> =================\n> kernel.shmall = 536870912 (512MB) SHMALL Total amount of shared memory \n> available (bytes or pages)\n> kernel.shmmax = 536870912 (512MB) SHMMAX Maximum size of shared memory \n> segment (bytes)\n\nAin't gonna happen unless you recompile the linux kernel to do 3/1. \nThrough trial-and-error, I've found the largest number is:\n\n2,147,483,648\n\n> Are the above settings ok to begin with? Are there any other parameters \n> that I should configure now, or monitor lateron?\n\nAbove is pretty good. I'd also bump up the free space map settings and \nmaybe try to symlink the pg_xlog directory (log files) to a seperate drive.\n\n", "msg_date": "Tue, 21 Oct 2003 09:25:56 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 21 Oct 2003, Josh Berkus wrote:\n\n> Anjan,\n> \n> > Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with\n> > internal drives on RAID5 will be delivered. Postgres will be from RH8.0.\n> \n> How many drives? RAID5 sucks for heavy read-write databases, unless you have \n> 5+ drives. Or a large battery-backed cache.\n\nYou don't need a large cache, so much as a cache. The size isn't usually \nan issue now that 64 to 256 megs caches are the nominal cache sizes. Back \nwhen it was a choice of 4 or 8 megs it made a much bigger difference than \n64 versus 256 meg make today.\n\nAlso, if it's a read only environment, RAID5 with n drives equals the \nperformance of RAID0 with n-1 drives.\n\n> Also, last I checked, you can't address 8GB of RAM without a 64-bit processor. \n> Since when are the Xeons 64-bit?\n\nJosh, you gotta get out more. IA32 has supported >4 gig ram for a long \ntime now, and so has the linux kernel. It uses a paging method to do it. 
\nIndividual processes are still limited to ~3 gig on Linux on 32 bit \nhardware though, so the extra mem will almost certainly spend it's time as \nkernel cache.\n\n\n", "msg_date": "Tue, 21 Oct 2003 10:48:33 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Scott,\n\n> Also, if it's a read only environment, RAID5 with n drives equals the\n> performance of RAID0 with n-1 drives.\n\nTrue.\n\n> Josh, you gotta get out more. IA32 has supported >4 gig ram for a long\n> time now, and so has the linux kernel. It uses a paging method to do it.\n> Individual processes are still limited to ~3 gig on Linux on 32 bit\n> hardware though, so the extra mem will almost certainly spend it's time as\n> kernel cache.\n\nNot that you'd want a sigle process to grow that large anyway. \n\nSo what is the ceiling on 32-bit processors for RAM? Most of the 64-bit \nvendors are pushing Athalon64 and G5 as \"breaking the 4GB barrier\", and even \nI can do the math on 2^32. All these 64-bit vendors, then, are talking \nabout the limit on ram *per application* and not per machine?\n\nThis has all been academic to me to date, as the only very-high-ram systems \nI've worked with were Sparc or micros.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 10:12:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 21 Oct 2003, Josh Berkus wrote:\n\n> Scott,\n> \n> > Also, if it's a read only environment, RAID5 with n drives equals the\n> > performance of RAID0 with n-1 drives.\n> \n> True.\n> \n> > Josh, you gotta get out more. IA32 has supported >4 gig ram for a long\n> > time now, and so has the linux kernel. It uses a paging method to do it.\n> > Individual processes are still limited to ~3 gig on Linux on 32 bit\n> > hardware though, so the extra mem will almost certainly spend it's time as\n> > kernel cache.\n> \n> Not that you'd want a sigle process to grow that large anyway.\n\nTrue :-) Especially a pgsql backend.\n\n> So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit \n> vendors are pushing Athalon64 and G5 as \"breaking the 4GB barrier\", and even \n> I can do the math on 2^32. All these 64-bit vendors, then, are talking \n> about the limit on ram *per application* and not per machine?\n\nI think it's 64 gigs in the current implementation, but that could just be \na chip set thing, i.e. the theoretical limit is probably 2^63 or 2^64, but \nthe realistic limitation is that the current mobo chipsets are gonna have \na much lower limit, and I seem to recall that being 64 gig last I looked.\n\n\n", "msg_date": "Tue, 21 Oct 2003 11:30:36 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, Oct 21, 2003 at 10:12:15AM -0700, Josh Berkus wrote:\n> \n> So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit \n> vendors are pushing Athalon64 and G5 as \"breaking the 4GB barrier\", and even \n> I can do the math on 2^32. All these 64-bit vendors, then, are talking \n> about the limit on ram *per application* and not per machine?\n\nOr per same-time access. Remember that, back in the old days on the\npre-386s, accessing the extended or expanded memory (anyone remember\nwhich was which?) 
involved some fairly serious work, and not\neverything was seamless. I expect something similar is at work here. \nNot that I've had a reason to play with 4G ix86 machines, anyway.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 21 Oct 2003 13:48:52 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 21 Oct 2003 10:12:15 -0700\nJosh Berkus <[email protected]> wrote:\n\n> So what is the ceiling on 32-bit processors for RAM? Most of the\n> 64-bit vendors are pushing Athalon64 and G5 as \"breaking the 4GB\n> barrier\", and even I can do the math on 2^32. All these 64-bit\n> vendors, then, are talking about the limit on ram *per application*\n> and not per machine?\n\nYou can have > 4GB per app, but also you get a big performance boost as\nyou don't have to deal with all the silly paging - think of it from when\nwe switched from real mode to protected mode. \n\nIf you check out hte linux-kernel archives you'll see one of the things\noften recommended when things go odd is to turn off HIMEM support. \n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 21 Oct 2003 13:50:15 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "> So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit \n> vendors are pushing Athalon64 and G5 as \"breaking the 4GB barrier\", and even \n> I can do the math on 2^32. All these 64-bit vendors, then, are talking \n> about the limit on ram *per application* and not per machine?\n\n64-bit CPU on 64-bit OS. Up to physical address limit for anything and \neverything.\n\n64-bit CPU on 32-bit OS. Up to 4GB minus the kernel allocation -- which \nis usually 2GB on Windows and Linux. On Windows, you can up this to 3GB \nby using the /3GB switch. Linux requires a kernel recompile. PAE is then \nused to \"move\" the memory window to point to different areas of the \nphysical memory.\n\n", "msg_date": "Tue, 21 Oct 2003 11:27:08 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Tue, Oct 21, 2003 at 10:12:15AM -0700, Josh Berkus wrote:\n>> So what is the ceiling on 32-bit processors for RAM?\n\n> ... Remember that, back in the old days on the\n> pre-386s, accessing the extended or expanded memory (anyone remember\n> which was which?) involved some fairly serious work, and not\n> everything was seamless. I expect something similar is at work here. \n\nRight. A 32-bit processor can only (conveniently) allow any individual\nprocess to access 4G worth of address space. However the total RAM in\nthe system can be more --- the kernel can set up the hardware address\nmappings to let different user processes use different up-to-4G segments\nof that RAM. And the kernel can also use excess RAM for disk buffer\ncache. 
So there's plenty of value in more-than-4G RAM, as long as\nyou're not expecting any single user process to need more than 4G.\nThis is no problem at all for Postgres, in which individual backend\nprocesses don't usually get very large, and we'd just as soon let most\nof the RAM go to kernel disk buffers anyway.\n\nI think that some hardware configurations have problems with using RAM\nabove the first 4G for disk buffers, because of disk controller hardware\nthat can't cope with physical DMA addresses wider than 32 bits. The\nsolution here is to buy a better disk controller. If you google for\n\"bounce buffers\" you can learn more about this.\n\nWhat goes around comes around I guess --- I remember playing these same\nkinds of games to use more than 64K RAM in 16-bit machines, 25-odd years\nago...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Oct 2003 15:00:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server " }, { "msg_contents": "In the last exciting episode, [email protected] (Josh Berkus) wrote:\n> So what is the ceiling on 32-bit processors for RAM? Most of the\n> 64-bit vendors are pushing Athalon64 and G5 as \"breaking the 4GB\n> barrier\", and even I can do the math on 2^32. All these 64-bit\n> vendors, then, are talking about the limit on ram *per application*\n> and not per machine?\n\nI have been seeing ia-32 servers with 8GB of RAM; it looks as though\nthere are ways of having them support (\"physically, in theory, if you\ncould get a suitable motherboard\") as much as 64GB.\n\nBut that certainly doesn't get you past 2^32 bytes per process, and\npossibly not past 2^31 bytes/process.\n\n From Linux kernel help:\n\n CONFIG_NOHIGHMEM:\n \n Linux can use up to 64 Gigabytes of physical memory on x86\n systems. However, the address space of 32-bit x86 processors is\n only 4 Gigabytes large. That means that, if you have a large\n amount of physical memory, not all of it can be \"permanently\n mapped\" by the kernel. The physical memory that's not permanently\n mapped is called \"high memory\".\n\nAnd that leaves open the question of how much shared memory you can\naddress. That presumably has to fit into the 4GB, and if your\nPostgreSQL processes had (by some fluke) 4GB of shared memory, there\nwouldn't be any \"local\" memory for sort memory and the likes.\n\nAdd to that the consideration that there are reports of Linux \"falling\nover\" when you get to right around 2GB/4GB. I ran a torture test a\nwhile back that _looked_ like it was running into that; I can't verify\nthat, unfortunately.\n\nI don't see there being a whole lot of use of having more than about\n8GB on an ia-32 system; what with shared memory maxing out at\nsomewhere between 1 and 2GB, that suggests having ~8GB in total.\n\nI'd add another PG cluster if I had 16GB...\n-- \nlet name=\"aa454\" and tld=\"freenet.carleton.ca\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/postgresql.html\n\"A statement is either correct or incorrect. To be *very* incorrect is\n like being *very* dead ... \"\n-- Herbert F. Spirer\n Professor of Information Management\n University of Conn.\n (DATAMATION Letters, Sept. 
1, 1984)\n", "msg_date": "Tue, 21 Oct 2003 15:27:02 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 2003-10-21 at 14:27, Christopher Browne wrote:\n> In the last exciting episode, [email protected] (Josh Berkus) wrote:\n> > So what is the ceiling on 32-bit processors for RAM? Most of the\n> > 64-bit vendors are pushing Athalon64 and G5 as \"breaking the 4GB\n> > barrier\", and even I can do the math on 2^32. All these 64-bit\n> > vendors, then, are talking about the limit on ram *per application*\n> > and not per machine?\n> \n> I have been seeing ia-32 servers with 8GB of RAM; it looks as though\n> there are ways of having them support (\"physically, in theory, if you\n> could get a suitable motherboard\") as much as 64GB.\n> \n> But that certainly doesn't get you past 2^32 bytes per process, and\n> possibly not past 2^31 bytes/process.\n> \n> >From Linux kernel help:\n> \n> CONFIG_NOHIGHMEM:\n> \n> Linux can use up to 64 Gigabytes of physical memory on x86\n> systems. However, the address space of 32-bit x86 processors is\n> only 4 Gigabytes large. That means that, if you have a large\n> amount of physical memory, not all of it can be \"permanently\n> mapped\" by the kernel. The physical memory that's not permanently\n> mapped is called \"high memory\".\n> \n> And that leaves open the question of how much shared memory you can\n> address. That presumably has to fit into the 4GB, and if your\n> PostgreSQL processes had (by some fluke) 4GB of shared memory, there\n> wouldn't be any \"local\" memory for sort memory and the likes.\n> \n> Add to that the consideration that there are reports of Linux \"falling\n> over\" when you get to right around 2GB/4GB. I ran a torture test a\n> while back that _looked_ like it was running into that; I can't verify\n> that, unfortunately.\n\nWell thank goodness that Linux & Postgres work so well on Alpha\nand long-mode AMD64.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Fear the Penguin!!\" \n\n", "msg_date": "Wed, 22 Oct 2003 16:36:51 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" } ]
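As a practical follow-up to the figures in this thread: after editing postgresql.conf and restarting, it is worth confirming what the server actually picked up. The values in the comments are the conservative starting points mentioned above (Richard's 8,000-10,000 shared_buffers and 4,000-8,000 sort_mem, William's roughly 6GB of effective cache on the 8GB box); they are examples, not a one-size-fits-all answer.

-- postgresql.conf starting points discussed above (restart needed for shared_buffers;
-- kernel.shmmax must be large enough to allow the segment):
--   shared_buffers       = 10000    -- ~80MB
--   sort_mem             = 4096     -- per sort, per backend, so keep it modest
--   effective_cache_size = 750000   -- ~6GB of kernel cache, counted in 8KB pages
SHOW shared_buffers;
SHOW sort_mem;
SHOW effective_cache_size;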
[ { "msg_contents": "From what I know, there is a cache-row-set functionality that doesn't\nexist with the newer postgres...\n\nConcurrent users will start from 1 to a high of 5000 or more, and could\nramp up rapidly. So far, with increased users, we have gone up to\nstarting the JVM (resin startup) with 1024megs min and max (recommended\nby Sun) - on the app side.\n\nThanks,\nAnjan \n\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Tuesday, October 21, 2003 11:57 AM\nTo: Anjan Dave; [email protected]\nSubject: Re: [PERFORM] Tuning for mid-size server\n\n\nOn Tuesday 21 October 2003 15:28, Anjan Dave wrote:\n> Hi,\n>\n> Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, \n> with internal drives on RAID5 will be delivered. Postgres will be from\n\n> RH8.0.\n\nYou'll want to upgrade PG to v7.3.4\n\n> I am planning for these values for the postgres configuration - to \n> begin\n> with:\n>\n> Shared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n>\n> Sort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that - \n> 167772\n>\n> Effective_cache_size = 262144 (same as shared_buffers - 25%)\n\nMy instincts would be to lower the first two substantially, and increase\nthe \neffective cache once you know load levels. I'd probably start with\nsomething \nlike the values below and work up:\nshared_buffers = 8,000 - 10,000 (PG is happier letting the OS do the\ncacheing) sort_mem = 4,000 - 8,000 (don't forget this is for each sort)\n\nYou'll find the annotated postgresql.conf and performance tuning\narticles \nuseful: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n> In a generic sense, these are recommended values I found in some \n> documents. The database will be small in size and will gradually grow \n> over time from few thousands to a few million records, or more. The \n> activity will be mostly of select statements from a few tables with \n> joins, orderby, groupby clauses. The web application is based on \n> Apache/Resin and hotspot JVM 1.4.0.\n\nYou'll need to figure out how many concurrent users you'll have and how\nmuch \nmemory will be required by apache/java. If your database grows\nradically, \nyou'll probably want to re-tune as it grows.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 21 Oct 2003 12:26:09 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Anjan,\n\n> From what I know, there is a cache-row-set functionality that doesn't\n> exist with the newer postgres...\n\nWhat? PostgreSQL has always used the kernel cache for queries.\n\n> Concurrent users will start from 1 to a high of 5000 or more, and could\n> ramp up rapidly. 
So far, with increased users, we have gone up to\n> starting the JVM (resin startup) with 1024megs min and max (recommended\n> by Sun) - on the app side.\n\nWell, just keep in mind when tuning that your calculations should be based on \n*available* RAM, meaning RAM not used by Apache or the JVM.\n\nWith that many concurrent requests, you'll want to be *very* conservative with \nsort_mem; I might stick to the default of 1024 if I were you, or even lower \nit to 512k.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 10:22:49 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 21 Oct 2003, Josh Berkus wrote:\n\n> Anjan,\n> \n> > From what I know, there is a cache-row-set functionality that doesn't\n> > exist with the newer postgres...\n> \n> What? PostgreSQL has always used the kernel cache for queries.\n> \n> > Concurrent users will start from 1 to a high of 5000 or more, and could\n> > ramp up rapidly. So far, with increased users, we have gone up to\n> > starting the JVM (resin startup) with 1024megs min and max (recommended\n> > by Sun) - on the app side.\n> \n> Well, just keep in mind when tuning that your calculations should be based on \n> *available* RAM, meaning RAM not used by Apache or the JVM.\n> \n> With that many concurrent requests, you'll want to be *very* conservative with \n> sort_mem; I might stick to the default of 1024 if I were you, or even lower \n> it to 512k.\n\nExactly. Remember, Anjan, that that if you have a single sort that can't \nfit in RAM, it will use the hard drive for temp space, effectively \n\"swapping\" on its own. If the concurrent sorts run the server out of \nmemory, the server will start swapping process, quite possibly the sorts, \nin a sort of hideous round robin death spiral that will bring your machine \nto its knees as the worst possible time, midday under load. sort_mem is \none of the small \"foot guns\" in the postgresql.conf file that people tend \nto pick up and go \"huh, what's this do?\" right before cranking it up.\n\n", "msg_date": "Tue, 21 Oct 2003 11:33:43 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" } ]
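A one-line illustration of the point Josh and Scott make above: sort_mem is allocated per sort in every backend, so for a many-connection web workload the safe direction is down. Shown here as a session-level SET; the same value would normally go in postgresql.conf so it applies to every connection.

SET sort_mem = 512;   -- the 512KB floor Josh mentions; with thousands of backends even this adds up
SHOW sort_mem;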
[ { "msg_contents": "Josh,\n\nThe 6650 can have upto 32GB of RAM.\n\nThere are 5 drives. In future, they will be replaced by a fiber array -\nhopefully.\n\nI read an article that suggests you 'start' with 25% of memory for\nshared_buffers. Sort memory was suggested to be at 2-4%. Here's the\nlink:\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node8.html\nMaybe, I misinterpreted it.\n\nI read the document on\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html and the\nsuggested values are much lower than what I have mentioned here. It\nwon't hurt to start with lower numbers and increase lateron if needed.\n\nThanks,\nAnjan \n\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Tuesday, October 21, 2003 12:21 PM\nTo: Anjan Dave; [email protected]\nSubject: Re: [PERFORM] Tuning for mid-size server\n\n\nAnjan,\n\n> Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, \n> with internal drives on RAID5 will be delivered. Postgres will be from\n\n> RH8.0.\n\nHow many drives? RAID5 sucks for heavy read-write databases, unless\nyou have \n5+ drives. Or a large battery-backed cache.\n\nAlso, last I checked, you can't address 8GB of RAM without a 64-bit\nprocessor. \nSince when are the Xeons 64-bit?\n\n> Shared_buffers (25% of RAM / 8KB)) = 8589934592 * .25 / 8192 = 262144\n\nThat's too high. Cut it in half at least. Probably down to 5% of\navailable \nRAM.\n\n> Sort_mem (4% of RAM / 1KB) = 335544. We'll take about half of that - \n> 167772\n\nFine if you're running a few-user-large-operation database. If this is\na \nwebserver, you want a much, much lower value.\n\n> Effective_cache_size = 262144 (same as shared_buffers - 25%)\n\nMuch too low. Where did you get these calculations, anyway?\n\n> In a generic sense, these are recommended values I found in some \n> documents.\n\nWhere? We need to contact the author of the \"documents\" and tell them\nto \ncorrect things.\n\n> joins, orderby, groupby clauses. The web application is based on \n> Apache/Resin and hotspot JVM 1.4.0.\n\nYou'll need to estimate the memory consumed by Java & Apache to have\nrealistic \nfigures to work with.\n\n> Are the above settings ok to begin with? Are there any other \n> parameters that I should configure now, or monitor lateron?\n\nNo, they're not. See:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune\nthese \nparameters.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 13:02:08 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Anjan,\n\n> I read an article that suggests you 'start' with 25% of memory for\n> shared_buffers. Sort memory was suggested to be at 2-4%. Here's the\n> link:\n> http://www.ca.postgresql.org/docs/momjian/hw_performance/node8.html\n> Maybe, I misinterpreted it.\n\nNo, I can see how you arrived at that conclusion, and Bruce is an authority. \nI'll contact him.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 21 Oct 2003 10:15:57 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, Oct 21, 2003 at 10:15:57AM -0700, Josh Berkus wrote:\n> Anjan,\n> \n> > I read an article that suggests you 'start' with 25% of memory for\n> > shared_buffers. Sort memory was suggested to be at 2-4%. 
Here's the\n> > link:\n> > http://www.ca.postgresql.org/docs/momjian/hw_performance/node8.html\n> > Maybe, I misinterpreted it.\n> \n> No, I can see how you arrived at that conclusion, and Bruce is an authority. \n> I'll contact him.\n\nI think the \"25%\" rule of thumb is slightly stale: above some\nthreshold, it just falls apart, and lots of people now have machines\nwell within that threshold. Heck, I'll bet Bruce's 2-way machine is\nwithin that threshold.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 21 Oct 2003 13:50:17 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> I think the \"25%\" rule of thumb is slightly stale: above some\n> threshold, it just falls apart, and lots of people now have machines\n> well within that threshold. Heck, I'll bet Bruce's 2-way machine is\n> within that threshold.\n\nIIRC, we've not seen much evidence that increasing shared_buffers above\nabout 10000 delivers any performance boost. That's 80Mb, so the \"25%\"\nrule doesn't get seriously out of whack until you get to a gig or so of\nRAM. Which was definitely not common at the time the rule was put\nforward, but is now. Probably we should modify the rule-of-thumb to\nsomething like \"25%, but not more than 10000 buffers\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Oct 2003 14:43:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server " }, { "msg_contents": "Andrew,\n\n> I think the \"25%\" rule of thumb is slightly stale: above some\n> threshold, it just falls apart, and lots of people now have machines\n> well within that threshold. Heck, I'll bet Bruce's 2-way machine is\n> within that threshold.\n\nSure. But we had a few people on this list do tests (including me) and the \nanecdotal evidence was lower than 25%, substantially. The falloff is subtle \nuntil you hit 50% of RAM, like:\n\n% query throughput\n1\t----\n5\t---------\n10 -----------\n15 ----------\n20\t----------\n25\t---------\n30\t--------\n35\t--------\n40\t-------\n\n... so it's often not immediately apparent when you've set stuff a little too \nhigh. However, in the folks that tested, the ideal was never anywhere near \n25%, usually more in the realm of 5-10%. I've been using 6% as my starting \nfigure for the last year for a variety of servers with good results.\n\nOf course, if you have anecdotal evidence to the contrary, then the only way \nto work this would be to have OSDL help us sort it out.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 21 Oct 2003 11:51:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, Oct 21, 2003 at 11:51:02AM -0700, Josh Berkus wrote:\n\n> Of course, if you have anecdotal evidence to the contrary, then the\n> only way to work this would be to have OSDL help us sort it out.\n\nNope. I too have such anecdotal evidence that 25% is way too high. \nIt also seems to depend pretty heavily on what you're trying to\noptimise for and what platform you have. But I'm glad to hear\n(again) that people seem to think the 25% too high for most cases. 
I\ndon't feel so much like I'm tilting against windmills.\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 21 Oct 2003 16:55:04 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, 21 Oct 2003, Andrew Sullivan wrote:\n\n> On Tue, Oct 21, 2003 at 11:51:02AM -0700, Josh Berkus wrote:\n> \n> > Of course, if you have anecdotal evidence to the contrary, then the\n> > only way to work this would be to have OSDL help us sort it out.\n> \n> Nope. I too have such anecdotal evidence that 25% is way too high. \n> It also seems to depend pretty heavily on what you're trying to\n> optimise for and what platform you have. But I'm glad to hear\n> (again) that people seem to think the 25% too high for most cases. I\n> don't feel so much like I'm tilting against windmills.\n\nI think where it makes sense is when you have something like a report \nserver where the result sets may be huge, but the parellel load is load, \ni.e. 5 or 10 users tossing around 100 Meg or more at time.\n\nIf you've got 5,000 users running queries that are indexed and won't be \nusing that much memory each, then there's usually no advantage to going \nover a certain number of buffers, and that certain number may be as low \nas 1000 for some applications.\n\n", "msg_date": "Tue, 21 Oct 2003 15:11:17 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Scott,\n\n> I think where it makes sense is when you have something like a report \n> server where the result sets may be huge, but the parellel load is load, \n> i.e. 5 or 10 users tossing around 100 Meg or more at time.\n\nI've found that that question makes the difference between using 6% & 12% ... \nparticularly large data transformations ... but not higher than that. And \nI've had ample opportunity to test on 2 reporting servers. For one thing, \nwith very large reports one tends to have a lot of I/O binding, which is \nhandled by the kernel.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 21 Oct 2003 14:32:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Tue, Oct 21, 2003 at 03:11:17PM -0600, scott.marlowe wrote:\n> I think where it makes sense is when you have something like a report \n> server where the result sets may be huge, but the parellel load is load, \n> i.e. 5 or 10 users tossing around 100 Meg or more at time.\n\nIn our case, we were noticing that truss showed an unbelievable\namount of time spent by the postmaster doing open() calls to the OS\n(this was on Solaris 7). So we thought, \"Let's try a 2G buffer\nsize.\" 2G was more than enough to hold the entire data set under\nquestion. Once the buffer started to fill, even plain SELECTs\nstarted taking a long time. The buffer algorithm is just not that\nclever, was my conclusion.\n\n(Standard disclaimer: not a long, controlled test. 
It's just a bit\nof gossip.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 21 Oct 2003 17:34:08 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Tue, Oct 21, 2003 at 03:11:17PM -0600, scott.marlowe wrote:\n> > I think where it makes sense is when you have something like a report \n> > server where the result sets may be huge, but the parellel load is load, \n> > i.e. 5 or 10 users tossing around 100 Meg or more at time.\n> \n> In our case, we were noticing that truss showed an unbelievable\n> amount of time spent by the postmaster doing open() calls to the OS\n> (this was on Solaris 7). So we thought, \"Let's try a 2G buffer\n> size.\" 2G was more than enough to hold the entire data set under\n> question. Once the buffer started to fill, even plain SELECTs\n> started taking a long time. The buffer algorithm is just not that\n> clever, was my conclusion.\n> \n> (Standard disclaimer: not a long, controlled test. It's just a bit\n> of gossip.)\n\nI know this is an old email, but have you tested larger shared buffers\nin CVS HEAD with Jan's new cache replacement policy?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 14 Dec 2003 00:42:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" }, { "msg_contents": "On Sun, Dec 14, 2003 at 12:42:21AM -0500, Bruce Momjian wrote:\n> \n> I know this is an old email, but have you tested larger shared buffers\n> in CVS HEAD with Jan's new cache replacement policy?\n\nNot yet. It's on our TODO list, for sure, because the consequences\nof relying too much on the filesystem buffers under certain perverse\nloads is lousy database performance _precisely_ when we need it. I\nexpect some testing of this type some time in January.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sun, 14 Dec 2003 12:55:48 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for mid-size server" } ]
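To make the rule of thumb this thread converges on easier to apply, here is a sketch of what it might look like in postgresql.conf. The machine size and workload mix are assumptions for illustration only, and the figures assume the default 8 kB block size:

    # Roughly 25% of RAM, but capped at about 10000 pages (~80 MB),
    # past which the thread reports little further benefit.
    shared_buffers = 10000

    # Allocated per sort, per backend; keep it small for many-connection
    # web workloads, larger only for a few-user reporting server.
    sort_mem = 1024

    # A hint to the planner about the OS cache, not an allocation,
    # so it can safely describe most of the otherwise-idle RAM.
    effective_cache_size = 65536    # ~512 MB of assumed kernel cache

These are starting points in the spirit of the discussion above, not measured recommendations; the varlena.com tuning notes referenced elsewhere in these threads go into more detail.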
[ { "msg_contents": "Hopefully, i am not steering this into a different direction, but is there a way to find out how much sort memory each query is taking up, so that we can scale that up with increasing users?\r\n\r\nTHanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: scott.marlowe [mailto:[email protected]] \r\n\tSent: Tue 10/21/2003 1:33 PM \r\n\tTo: Josh Berkus \r\n\tCc: Anjan Dave; Richard Huxton; [email protected] \r\n\tSubject: Re: [PERFORM] Tuning for mid-size server\r\n\t\r\n\t\r\n\r\n\tOn Tue, 21 Oct 2003, Josh Berkus wrote:\r\n\t\r\n\t> Anjan,\r\n\t>\r\n\t> > From what I know, there is a cache-row-set functionality that doesn't\r\n\t> > exist with the newer postgres...\r\n\t>\r\n\t> What? PostgreSQL has always used the kernel cache for queries.\r\n\t>\r\n\t> > Concurrent users will start from 1 to a high of 5000 or more, and could\r\n\t> > ramp up rapidly. So far, with increased users, we have gone up to\r\n\t> > starting the JVM (resin startup) with 1024megs min and max (recommended\r\n\t> > by Sun) - on the app side.\r\n\t>\r\n\t> Well, just keep in mind when tuning that your calculations should be based on\r\n\t> *available* RAM, meaning RAM not used by Apache or the JVM.\r\n\t>\r\n\t> With that many concurrent requests, you'll want to be *very* conservative with\r\n\t> sort_mem; I might stick to the default of 1024 if I were you, or even lower\r\n\t> it to 512k.\r\n\t\r\n\tExactly. Remember, Anjan, that that if you have a single sort that can't\r\n\tfit in RAM, it will use the hard drive for temp space, effectively\r\n\t\"swapping\" on its own. If the concurrent sorts run the server out of\r\n\tmemory, the server will start swapping process, quite possibly the sorts,\r\n\tin a sort of hideous round robin death spiral that will bring your machine\r\n\tto its knees as the worst possible time, midday under load. sort_mem is\r\n\tone of the small \"foot guns\" in the postgresql.conf file that people tend\r\n\tto pick up and go \"huh, what's this do?\" right before cranking it up.\r\n\t\r\n\t\r\n\r\n", "msg_date": "Tue, 21 Oct 2003 14:53:03 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning for mid-size server" } ]
[ { "msg_contents": "Josh,\r\n \r\nThe app servers are seperate dual-cpu boxes with 2GB RAM on each.\r\n \r\nYes, from all the responses i have seen, i will be reducing the numbers to what has been suggested.\r\n \r\nThanks to all,\r\nanjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Josh Berkus [mailto:[email protected]] \r\n\tSent: Tue 10/21/2003 1:22 PM \r\n\tTo: Anjan Dave; Richard Huxton; [email protected] \r\n\tCc: \r\n\tSubject: Re: [PERFORM] Tuning for mid-size server\r\n\t\r\n\t\r\n\r\n\tAnjan,\r\n\t\r\n\t> From what I know, there is a cache-row-set functionality that doesn't\r\n\t> exist with the newer postgres...\r\n\t\r\n\tWhat? PostgreSQL has always used the kernel cache for queries.\r\n\t\r\n\t> Concurrent users will start from 1 to a high of 5000 or more, and could\r\n\t> ramp up rapidly. So far, with increased users, we have gone up to\r\n\t> starting the JVM (resin startup) with 1024megs min and max (recommended\r\n\t> by Sun) - on the app side.\r\n\t\r\n\tWell, just keep in mind when tuning that your calculations should be based on\r\n\t*available* RAM, meaning RAM not used by Apache or the JVM.\r\n\t\r\n\tWith that many concurrent requests, you'll want to be *very* conservative with\r\n\tsort_mem; I might stick to the default of 1024 if I were you, or even lower\r\n\tit to 512k.\r\n\t\r\n\t--\r\n\tJosh Berkus\r\n\tAglio Database Solutions\r\n\tSan Francisco\r\n\t\r\n\r\n", "msg_date": "Tue, 21 Oct 2003 14:59:01 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning for mid-size server" } ]
[ { "msg_contents": "I'm running our DBT-2 workload against PostgreSQL 7.3.4 and I'm having\nsome trouble figuring out what I should be looking for when I'm trying\nto tune the database. I have results for a decent baseline, but when I\ntry to increase the load on the database, the performance drops.\nNothing in the graphs (in the links listed later) sticks out to me so\nI'm wondering if there are other database statitics I should try to\ncollect. Any suggestions would be great and let me know if I can answer\nany other questions.\n\nHere are a pair of results where I just raise the load on the\ndatabase, where increasing the load increases the area of the database\ntouched in addition to increasing the transaction rate. The overall\nmetric increases somewhat, but the response time for most of the\ninteractions also increases significantly:\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/158/ [baseline]\n\t- load of 100 warehouses\n\t- metric 1249.65\n\t\nhttp://developer.osdl.org/markw/dbt2-pgsql/149/\n\t- load of 140 warehouses\n\t- metric 1323.90\n\nBoth of these runs had wal_buffers set to 8, checkpoint_segments 200,\nand checkpoint_timeout 1800.\n\nSo far I've only tried various wal_buffers and checkpoint_segments\nsettings in the next set of results for a load of 140 warehouses.\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/148/\n- metric 1279.26\n- wal_buffers 8\n- checkpoint_segments 100\n- checkpoint_timeout 300\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/149/\n- metric 1323.90\n- wal_buffers 8\n- checkpoint_segments 200\n- checkpoint_timeout 1800\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/150/\n- metric 1281.13\n- wal_buffers 8\n- checkpoint_segments 300\n- checkpoint_timeout 1800\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/151/\n- metric 1311.99\n- wal_buffers 32\n- checkpoint_segments 200\n- checkpoint_timeout 1800\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/152/\n- metric 1268.37\n- wal_buffers 64\n- checkpoint_segments 200\n- checkpoint_timeout 1800\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/154/\n- metric 1314.62\n- wal_buffers 16\n- checkpoint_segments 200\n- checkpoint_timeout 1800\n\n\nThanks!\n\n-- \nMark Wong - - [email protected]\nOpen Source Development Lab Inc - A non-profit corporation\n12725 SW Millikan Way - Suite 400 - Beaverton, OR 97005\n(503) 626-2455 x 32 (office)\n(503) 626-2436 (fax)\n", "msg_date": "Tue, 21 Oct 2003 17:24:02 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "analyzing postgresql performance for dbt-2" }, { "msg_contents": "[email protected] wrote:\n> I'm running our DBT-2 workload against PostgreSQL 7.3.4 and I'm having\n> some trouble figuring out what I should be looking for when I'm trying\n> to tune the database. I have results for a decent baseline, but when I\n> try to increase the load on the database, the performance drops.\n> Nothing in the graphs (in the links listed later) sticks out to me so\n> I'm wondering if there are other database statitics I should try to\n> collect. Any suggestions would be great and let me know if I can answer\n> any other questions.\n> \n> Here are a pair of results where I just raise the load on the\n> database, where increasing the load increases the area of the database\n> touched in addition to increasing the transaction rate. 
The overall\n> metric increases somewhat, but the response time for most of the\n> interactions also increases significantly:\n> \n> http://developer.osdl.org/markw/dbt2-pgsql/158/ [baseline]\n> \t- load of 100 warehouses\n> \t- metric 1249.65\n> \t\n> http://developer.osdl.org/markw/dbt2-pgsql/149/\n> \t- load of 140 warehouses\n> \t- metric 1323.90\n\nI looked at these charts and they looked normal to me. It looked like\nyour the load increased until your computer was saturated. Is there\nsomething I am missing?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 21 Oct 2003 20:35:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: analyzing postgresql performance for dbt-2" }, { "msg_contents": "On Tue, Oct 21, 2003 at 08:35:56PM -0400, Bruce Momjian wrote:\n> [email protected] wrote:\n> > I'm running our DBT-2 workload against PostgreSQL 7.3.4 and I'm having\n> > some trouble figuring out what I should be looking for when I'm trying\n> > to tune the database. I have results for a decent baseline, but when I\n> > try to increase the load on the database, the performance drops.\n> > Nothing in the graphs (in the links listed later) sticks out to me so\n> > I'm wondering if there are other database statitics I should try to\n> > collect. Any suggestions would be great and let me know if I can answer\n> > any other questions.\n> > \n> > Here are a pair of results where I just raise the load on the\n> > database, where increasing the load increases the area of the database\n> > touched in addition to increasing the transaction rate. The overall\n> > metric increases somewhat, but the response time for most of the\n> > interactions also increases significantly:\n> > \n> > http://developer.osdl.org/markw/dbt2-pgsql/158/ [baseline]\n> > \t- load of 100 warehouses\n> > \t- metric 1249.65\n> > \t\n> > http://developer.osdl.org/markw/dbt2-pgsql/149/\n> > \t- load of 140 warehouses\n> > \t- metric 1323.90\n> \n> I looked at these charts and they looked normal to me. It looked like\n> your the load increased until your computer was saturated. Is there\n> something I am missing?\n\nI've run some i/o tests so I'm pretty sure I haven't saturated that. And it\nlooks like I have almost 10% more processor time left. I do agree that it\nappears something might be saturated, I just don't know where to look...\n\nThanks,\nMark\n", "msg_date": "Tue, 21 Oct 2003 19:10:33 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: analyzing postgresql performance for dbt-2" }, { "msg_contents": "Mark Wong wrote:\n> > > Here are a pair of results where I just raise the load on the\n> > > database, where increasing the load increases the area of the database\n> > > touched in addition to increasing the transaction rate. The overall\n> > > metric increases somewhat, but the response time for most of the\n> > > interactions also increases significantly:\n> > > \n> > > http://developer.osdl.org/markw/dbt2-pgsql/158/ [baseline]\n> > > \t- load of 100 warehouses\n> > > \t- metric 1249.65\n> > > \t\n> > > http://developer.osdl.org/markw/dbt2-pgsql/149/\n> > > \t- load of 140 warehouses\n> > > \t- metric 1323.90\n> > \n> > I looked at these charts and they looked normal to me. It looked like\n> > your the load increased until your computer was saturated. 
Is there\n> > something I am missing?\n> \n> I've run some i/o tests so I'm pretty sure I haven't saturated that. And it\n> looks like I have almost 10% more processor time left. I do agree that it\n> appears something might be saturated, I just don't know where to look...\n\nCould the 10% be context switching time, or is the I/O saturated?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 26 Oct 2003 00:36:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: analyzing postgresql performance for dbt-2" }, { "msg_contents": "On 26 Oct, Bruce Momjian wrote:\n> Mark Wong wrote:\n>> > > Here are a pair of results where I just raise the load on the\n>> > > database, where increasing the load increases the area of the database\n>> > > touched in addition to increasing the transaction rate. The overall\n>> > > metric increases somewhat, but the response time for most of the\n>> > > interactions also increases significantly:\n>> > > \n>> > > http://developer.osdl.org/markw/dbt2-pgsql/158/ [baseline]\n>> > > \t- load of 100 warehouses\n>> > > \t- metric 1249.65\n>> > > \t\n>> > > http://developer.osdl.org/markw/dbt2-pgsql/149/\n>> > > \t- load of 140 warehouses\n>> > > \t- metric 1323.90\n>> > \n>> > I looked at these charts and they looked normal to me. It looked like\n>> > your the load increased until your computer was saturated. Is there\n>> > something I am missing?\n>> \n>> I've run some i/o tests so I'm pretty sure I haven't saturated that. And it\n>> looks like I have almost 10% more processor time left. I do agree that it\n>> appears something might be saturated, I just don't know where to look...\n> \n> Could the 10% be context switching time, or is the I/O saturated?\n\nThere are about 14,000 to 17,000 context switches/s according to the\nvmstat output. This is on a 1.5Ghz hyperthreaded Xeon processor. I\ndon't know what I'm supposed to be able to expect in terms of context\nswitching. I really doubt the i/o is saturated because I've run\ndisktest (part of the Linux Test Project suite) and saw much higher\nthroughput for various sequential/random read/write tests.\n\nI'm starting to collect oprofile data (and will hopefully have some\nresults soon) to get an idea where the database is spending its time,\njust in case that may have something to do with it.\n\nMark\n", "msg_date": "Tue, 28 Oct 2003 10:08:33 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: analyzing postgresql performance for dbt-2" }, { "msg_contents": "I've done a better controlled series of tests where I restore the\ndatabase before each test and have grabbed sar and oprofile data:\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/176/\n\t- load of 100 warehouses\n\t- metric 1234.52\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/177/\n\t- load of 120 warehouses\n\t- metric 1259.43\n\nhttp://developer.osdl.org/markw/dbt2-pgsql/178/\n\t- load of 140 warehouses\n\t- metric 1244.33\n\nFor the most part our primary metric, and the vmstat and sar output look\nfairly close for each run. 
Here are a couple of things that I've found\nto be considerably different from run 176 to 178:\n\n- oprofile says postgresql calls to SearchCatCache increased ~ 20%\n\n- readprofile says there are 50% more calls in the linux kernel to\n do_signaction (in kernel/signal.c)\n\nWould these two things offer any insight to what might be throttling the\nthroughput?\n\nThanks,\nMark\n\n\n", "msg_date": "Wed, 29 Oct 2003 14:26:11 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: analyzing postgresql performance for dbt-2" } ]
[ { "msg_contents": "If I have a fixed amount of money to spend as a general rule is it better to buy one processor and lots of memory or two processors and less memory for a system which is transactional based (in this case it's handling reservations). I realise the answer will be a generalised one but all the performance bits I've read seem to talk about adjusting memory allocation. The client has received the general advice from their hardware supplier that 2 Xeon processors and less memory is better but for postgresql I'm thinking they might be better off with a single processor and loads of memory. The OS is Red Hat Linux.\n\nHow long is a piece of string I guess but all comments welcome!\n\nTAI\nHilary\n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n", "msg_date": "Wed, 22 Oct 2003 11:09:55 +0100", "msg_from": "Hilary Forbes <[email protected]>", "msg_from_op": true, "msg_subject": "Processors vs Memory" }, { "msg_contents": "Hilary Forbes wrote:\n\n> If I have a fixed amount of money to spend as a general rule \n >is it better to buy one processor and lots of memory or two\n >processors and less memory for a system which is transactional\n >based (in this case it's handling reservations). I realise the\n >answer will be a generalised one but all the performance bits\n >I've read seem to talk about adjusting memory allocation.\n >The client has received the general advice from their hardware\n >supplier that 2 Xeon processors and less memory is better but\n >for postgresql I'm thinking they might be better off with a single\n >processor and loads of memory. The OS is Red Hat Linux.\n\nWell it depends. If your projected database size is say 2 gigs, then you should \nbuy 2Gigsof RAM and spend rest of the money on processor.\n\nBut if your database size(max of currrent and projected) is 100GB, obviously you \ncan not buy 100GB of memory that cheaply. So you should look for fast storage.\n\nThe order of priority is IO, memory and CPU. If database is just big enough to \nfit in a gig or two, you should get RAM first.\n\nProcessor is hardly ever a concern w.r.t database unless you are doing a lot in \ndatabase business logic.\n\nHTH\n\n Shridhar\n\n", "msg_date": "Wed, 22 Oct 2003 15:55:22 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Processors vs Memory" }, { "msg_contents": "On Wed, 22 Oct 2003, Hilary Forbes wrote:\n\n> If I have a fixed amount of money to spend as a general rule is it \n> better to buy one processor and lots of memory or two processors and \n> less memory for a system which is transactional based (in this case \n> it's handling reservations). I realise the answer will be a generalised \n> one but all the performance bits I've read seem to talk about adjusting \n> memory allocation. The client has received the general advice from \n> their hardware supplier that 2 Xeon processors and less memory is better \n> but for postgresql I'm thinking they might be better off with a single \n> processor and loads of memory. The OS is Red Hat Linux.\n\nMy opinion is that two CPUs is optimal because it allows the OS to operate \nin parallel to the database. 
After the second CPU, the only advantage is \nif you are doing a lot of parallel access.\n\nGo for fast I/O first, a RAID1+0 setup is optimal for smaller numbers of \ndrives (works on 4 or 6 drives nicely) and RAID5 is optimal for a larger \nnumber of drives (works well on 10 or more drives). Always use hardware \nRAID with battery backed cache for a heavily updated database. For a \nreports database software RAID is quite acceptable.\n\nThere's a limit to how much memory you can throw at the problem if you're \non 32 bit hardware, and that limit is about 2 to 4 gig. While you can \ninstall more, it usually makes little or no difference.\n\nLastly, don't forget to tune your database and server once you have it up \nand running:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n", "msg_date": "Wed, 22 Oct 2003 10:36:10 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Processors vs Memory" } ]
[ { "msg_contents": "Hi\n\nCurrently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We have\nWindows2000 client machines inserting records into the Postgresql tables\nvia ODBC.\n\nAfter a few weeks of usage, when we do a \\d at the sql prompt, there was a\nduplicate object name, ie it can be a duplicate row of index or table.\nWhen we do a \\d table_name, it will show a duplication of column names\ninside the table.\n\nIt doesnt affect the insertion/updating of the tables, but when we do a\npg_dump -Da -t <table_name> <db_name> > /exp/<table_name>.sql, it will not\ndo a proper backup/dump.\n\nDo we need to apply any patches or maintenace?\n\n\n\nPlease be informed that NEC Singapore Pte Ltd is now known as NEC Solutions\nAsia Pacific Pte Ltd.\nOur address and contact numbers remain.\nEmail: [email protected]\nhttp://www.nec.com.sg/ap\n\nThank you,\nREgards.\n\n\n\n\n", "msg_date": "Wed, 22 Oct 2003 18:25:51 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Postgresql performance" }, { "msg_contents": "[email protected] writes:\n> Currently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We have\n> Windows2000 client machines inserting records into the Postgresql tables\n> via ODBC.\n\n> After a few weeks of usage, when we do a \\d at the sql prompt, there was a\n> duplicate object name, ie it can be a duplicate row of index or table.\n> When we do a \\d table_name, it will show a duplication of column names\n> inside the table.\n\nAre you sure you are using 7.3 psql? This sounds like something that\ncould happen with a pre-7.3 (not schema aware) psql, if there are\nmultiple occurrences of the same table name in different schemas.\n\n> It doesnt affect the insertion/updating of the tables, but when we do a\n> pg_dump -Da -t <table_name> <db_name> > /exp/<table_name>.sql, it will not\n> do a proper backup/dump.\n\nI'd wonder about whether you have the right pg_dump, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Oct 2003 10:03:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance " }, { "msg_contents": "NEC,\n\n> After a few weeks of usage, when we do a \\d at the sql prompt, there was a\n> duplicate object name, ie it can be a duplicate row of index or table.\n> When we do a \\d table_name, it will show a duplication of column names\n> inside the table.\n\nI think the version of PSQL and pg_dump which you are using do not match the \nback-end database version. Correct this.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Oct 2003 09:06:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance" } ]
[ { "msg_contents": "\nI'm using pg 7.3.4 to do a select involving a join on 2 tables. \nThe query is taking 15 secs which seems extreme to me considering \nthe indices that exist on the two tables. EXPLAIN ANALYZE shows \nthat the indices aren't being used. I've done VACUUM ANALYZE on the \ndb with no change in results. Shouldn't the indices be used?\n\nBelow is what I believe to be the relevant information. I haven't\nincluded the definitions of the tables involved in the foreign\nkey definititions because I don't think they matter. \n\nAny help will be greatly appreciated.\n\n CREATE TABLE shotpoint ( \n shot_line_num FLOAT4, \\\n shotpoint FLOAT4, \n x FLOAT4, \n y FLOAT4, \n template_id INT4, \n num_chans INT4)\n\n CREATE TABLE shot_record ( \n shot_line_num FLOAT4, \n shotpoint FLOAT4, \n index INT2, \n dev INT4, \n dev_offset INT8, \n bin INT4, \n shot_time INT8, \n record_length INT4,\n nav_x FLOAT4,\n nav_y FLOAT4,\n num_rus INT4,\n status INT4 DEFAULT 0, \n reel_num INT4,\n file_num INT4,\n nav_status INT2,\n nav_shot_line FLOAT4,\n nav_shotpoint FLOAT4,\n nav_depth FLOAT4,\n sample_skew INT4, \n trace_count INT4, \n PRIMARY KEY (shot_line_num, shotpoint, index)) \n\n ALTER TABLE shotpoint ADD CONSTRAINT shot_line_fk \n FOREIGN KEY (shot_line_num) \n REFERENCES shot_line(shot_line_num)\n\n CREATE UNIQUE INDEX shotpoint_idx \n ON shotpoint(shot_line_num, shotpoint)\n\n ALTER TABLE shot_record ADD CONSTRAINT shot_record_shotpoint_index_fk \n FOREIGN KEY (shot_line_num, shotpoint) \n REFERENCES shotpoint(shot_line_num, shotpoint)\n \n\n EXPLAIN ANALYZE SELECT r.shot_line_num, r.shotpoint, index, \n shot_time, \n record_length, dev, \n dev_offset, num_rus, bin, template_id, trace_count\n FROM shot_record r, shotpoint p \n WHERE p.shot_line_num = r.shot_line_num \n AND p.shotpoint = r.shotpoint; \n\n \n\nMerge Join (cost=49902.60..52412.21 rows=100221 width=58) (actual time=12814.28..15000.65 rows=100425 loops=1)\n Merge Cond: ((\"outer\".shot_line_num = \"inner\".shot_line_num) AND (\"outer\".shotpoint = \"inner\".shotpoint))\n -> Sort (cost=13460.90..13711.97 rows=100425 width=46) (actual time=3856.94..4157.01 rows=100425 loops=1)\n Sort Key: r.shot_line_num, r.shotpoint\n -> Seq Scan on shot_record r (cost=0.00..2663.25 rows=100425 width=46) (actual time=18.00..1089.00 rows=100425 loops=1)\n -> Sort (cost=36441.70..37166.96 rows=290106 width=12) (actual time=8957.19..9224.09 rows=100749 loops=1)\n Sort Key: p.shot_line_num, p.shotpoint\n -> Seq Scan on shotpoint p (cost=0.00..5035.06 rows=290106 width=12) (actual time=7.55..2440.06 rows=290106 loops=1)\n Total runtime: 15212.05 msec\n\n\n***********************************************************************\nMedora Schauer\nSr. Software Engineer\n\nFairfield Industries\n14100 Southwest Freeway\nSuite 600\nSugar Land, Tx 77478-3469\nUSA\n\[email protected]\nphone: 281-275-7664\nfax : 281-275-7551\n***********************************************************************\n\n", "msg_date": "Wed, 22 Oct 2003 09:48:19 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow select" }, { "msg_contents": "Medora,\n\n> I'm using pg 7.3.4 to do a select involving a join on 2 tables.\n> The query is taking 15 secs which seems extreme to me considering\n> the indices that exist on the two tables. EXPLAIN ANALYZE shows\n> that the indices aren't being used. I've done VACUUM ANALYZE on the\n> db with no change in results. Shouldn't the indices be used?\n\nNo. You're selecting 100,000 records. 
For such a large record dump, a seq \nscan is usually faster.\n\nIf you don't believe me, try setting enable_seqscan=false and see how long the \nquery takes.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Oct 2003 09:23:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" } ]
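As a sketch of how to run the comparison suggested above from a psql session (the query and identifiers are the ones from the original post; the setting change lasts only for the session):

    SET enable_seqscan = false;
    EXPLAIN ANALYZE
      SELECT r.shot_line_num, r.shotpoint, index, shot_time, record_length,
             dev, dev_offset, num_rus, bin, template_id, trace_count
        FROM shot_record r, shotpoint p
       WHERE p.shot_line_num = r.shot_line_num
         AND p.shotpoint = r.shotpoint;
    RESET enable_seqscan;

Forcing off sequential scans this way is a diagnostic trick rather than a production setting: it simply lets the two EXPLAIN ANALYZE timings, with and without the index plan, be compared directly.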
[ { "msg_contents": "Folks,\n\n \n\nI���m hoping someone can give me some pointers to resolving an issue with postgres and it���s ability to utilize multiple CPUs effectively.\n\n \n\nThe issue is that no matter how much query load we throw at our server it seems almost impossible to get it to utilize more than 50% cpu on a dual-cpu box. For a single connection we can use all of one CPU, but multiple connections fail to increase the overall utilization (although they do cause it to spread across CPUs).\n\n \n\nThe platform is a dual CPU 2.8Ghz P4 Xeon Intel box (hyperthreading disabled) running a fairly standard Redhat 9 distribution. We are using postgres on this platform with a moderate sized data set (some hundreds of megs of data). The tests perform no updates and simply hit the server with a single large complex query via a multithreaded java/jdbc client. To avoid network distortion we run the client on the localhost (its cpu load is minimal). We are running with shared buffers large enough to hold the entire database and sort memory of 64m, should easily be enough to prevent sorting to disk. \n\n \n\nAt this point I���ve tried everything I can think of to diagnose this - checking the pg_locks table indicates that even under heavy load there are no ungranted locks, so it would appear not to be a locking issue. Vmstat/iostat show no excessive figures for network or io waits. The only outlandish figure is that context switches which spike up to 250,000/sec (seems large). By all indications, postgres is waiting internally as if it is somehow singlethreaded. However the documentation clearly indicates this should not be so.\n\n \n\nCan anyone give me some pointers as to why postgres would be doing this? Is postgres really multi-process capable or are the processes ultimately waiting on each other to run queries or access shared memory?\n\n \n\nOn a second note, has anyone got some tips on how to profile postgres in this kind of situation? I have tried using gprof, but because postgres spawns its processes dynamically I always end up profiling the postmaster (not very useful).\n\n \n\nThanking in advance for any help!\n\n \n\nCheers,\n\n \n\nSimon.\n\n \n\nPs. posted this to general, but then realised this is a better forum - sorry for the cross.\n\n \n\n\n\n---------------------------------\nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search\n\nFolks,\n \nI���m hoping someone can give me some pointers to resolving an issue with postgres and it���s ability to utilize multiple CPUs effectively.\n \nThe issue is that no matter how much query load we throw at our server it seems almost impossible to get it to utilize more than 50% cpu on a dual-cpu box.  For a single connection we can use all of one CPU, but multiple connections fail to increase the overall utilization (although they do cause it to spread across CPUs).\n \nThe platform is a dual CPU 2.8Ghz P4 Xeon Intel box (hyperthreading disabled)  running a fairly standard Redhat 9 distribution.  We are using postgres on this platform with a moderate sized data set (some hundreds of megs of data).  The tests perform no updates and simply hit the server with a single large complex query via a multithreaded java/jdbc client.  To avoid network distortion we run the client on the localhost (its cpu load is minimal).   We are running with shared buffers large enough to hold the entire database and sort memory of 64m, should easily be enough to prevent sorting to disk.  
\n \nAt this point I���ve tried everything I can think of to diagnose this - checking the pg_locks table indicates that even under heavy load there are no ungranted locks, so it would appear not to be a locking issue.  Vmstat/iostat show no excessive figures for network or io waits.  The only outlandish figure is that context switches which spike up to 250,000/sec (seems large).  By all indications, postgres is waiting internally as if it is somehow singlethreaded.  However the documentation clearly indicates this should not be so.\n \nCan anyone give me some pointers as to why postgres would be doing this?   Is postgres really multi-process capable or are the processes ultimately waiting on each other to run queries or access shared memory?\n \nOn a second note, has anyone got some tips on how to profile postgres in this kind of situation?  I have tried using gprof, but because postgres spawns its processes dynamically I always end up profiling the postmaster (not very useful).\n \nThanking in advance for any help!\n \nCheers,\n \nSimon.\n \nPs. posted this to general, but then realised this is a better forum - sorry for the cross.\n \nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search", "msg_date": "Wed, 22 Oct 2003 07:57:57 -0700 (PDT)", "msg_from": "Simon Sadedin <[email protected]>", "msg_from_op": true, "msg_subject": "poor cpu utilization on dual cpu box" }, { "msg_contents": "Simon,\n\n> The issue is that no matter how much query load we throw at our server it\n> seems almost impossible to get it to utilize more than 50% cpu on a\n> dual-cpu box. For a single connection we can use all of one CPU, but\n> multiple connections fail to increase the overall utilization (although\n> they do cause it to spread across CPUs).\n\nThis is perfectly normal. It's a rare x86 machine (read fiber channel) where \nyou don't saturate the I/O or the RAM *long* before you saturate the CPU. \nTransactional databases are an I/O intensive operation, not a CPU-intensive \none.\n\n> We are running with shared buffers large enough to hold the\n> entire database\n\nWhich is bad. This is not what shared buffers are for. See:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 22 Oct 2003 09:11:14 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor cpu utilization on dual cpu box" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> We are running with shared buffers large enough to hold the\n>> entire database\n\n> Which is bad. This is not what shared buffers are for. See:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nIn fact, that may be the cause of the performance issue. The high\ncontext-swap rate suggests heavy contention for shared-memory data\nstructures. The first explanation that occurs to me is that too much\ntime is being spent managing the buffer hashtable, causing that to\nbecome a serialization bottleneck. Try setting shared_buffers to 10000\nor so and see if it gets better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Oct 2003 13:02:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor cpu utilization on dual cpu box " }, { "msg_contents": "\nThe suggestion that we are saturating the memory bus\nmakes a lot of sense. We originally started with a\nlow setting for shared buffers and resized it to fit\nall our tables (since we have memory to burn). 
That\nimproved stand alone performance but not concurrent\nperformance - this would explain that phenomenon\nsomewhat.\n\nWill investigate further down this track.\n\nThanks to everyone who responded!\n\nCheers,\n\nSimon.\n\nJosh Berkus <[email protected]> wrote:Simon,\n\n> The issue is that no matter how much query load we\nthrow at our server it\n> seems almost impossible to get it to utilize more\nthan 50% cpu on a\n> dual-cpu box. For a single connection we can use all\nof one CPU, but\n> multiple connections fail to increase the overall\nutilization (although\n> they do cause it to spread across CPUs).\n\nThis is perfectly normal. It's a rare x86 machine\n(read fiber channel) where \nyou don't saturate the I/O or the RAM *long* before\nyou saturate the CPU. \nTransactional databases are an I/O intensive\noperation, not a CPU-intensive \none.\n\n> We are running with shared buffers large enough to\nhold the\n> entire database\n\nWhich is bad. This is not what shared buffers are for.\nSee:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to\[email protected]\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search\nhttp://shopping.yahoo.com\n", "msg_date": "Wed, 22 Oct 2003 10:58:08 -0700 (PDT)", "msg_from": "Simon Sadedin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor cpu utilization on dual cpu box" } ]
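For anyone wanting to watch for the same symptom, the context-switch figure quoted in this thread comes straight from vmstat; a rough way to observe it alongside CPU usage is:

    $ vmstat 1
    # watch the "cs" column (context switches per second) under "system",
    # and the us/sy/id columns under "cpu"

A sustained six-figure cs rate combined with idle CPU headroom, as reported above, is the pattern that pointed toward shared-memory contention rather than an I/O or locking bottleneck.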
[ { "msg_contents": "\n\n \n> \n> Medora,\n> \n> > I'm using pg 7.3.4 to do a select involving a join on 2 tables.\n> > The query is taking 15 secs which seems extreme to me considering\n> > the indices that exist on the two tables. EXPLAIN ANALYZE shows\n> > that the indices aren't being used. I've done VACUUM ANALYZE on the\n> > db with no change in results. Shouldn't the indices be used?\n> \n> No. You're selecting 100,000 records. For such a large \n> record dump, a seq \n> scan is usually faster.\n> \n> If you don't believe me, try setting enable_seqscan=false and \n> see how long the \n> query takes.\n\nI did as you suggested (set enable_seqscan = false) and the query now takes 6 sec (vs\n15 secs before) :\n\nMerge Join (cost=0.00..287726.10 rows=100221 width=58) (actual time=61.60..5975.63 rows=100425 loops=1)\n Merge Cond: ((\"outer\".shot_line_num = \"inner\".shot_line_num) AND (\"outer\".shotpoint = \"inner\".shotpoint))\n -> Index Scan using hsot_record_idx on shot_record r (cost=0.00..123080.11 rows=100425 width=46) (actual time=24.15..2710.31 rows=100425 loops=1)\n -> Index Scan using shotpoint_idx on shotpoint p (cost=0.00..467924.54 rows=290106 width=12) (actual time=37.38..1379.64 rows=100749 loops=1)\n Total runtime: 6086.32 msec\n\nSo why did were the indices not used before when they yield a better plan?\n\n\n", "msg_date": "Wed, 22 Oct 2003 15:56:33 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow select" }, { "msg_contents": "Medora,\n\n> So why did were the indices not used before when they yield a better plan?\n\nYour .conf settings, most likely. I'd lower your random_page_cost and raise \nyour effective_cache_size.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 22 Oct 2003 14:03:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" }, { "msg_contents": "\n\"Medora Schauer\" <[email protected]> writes:\n\n> Merge Join (cost=0.00..287726.10 rows=100221 width=58) (actual time=61.60..5975.63 rows=100425 loops=1)\n> Merge Cond: ((\"outer\".shot_line_num = \"inner\".shot_line_num) AND (\"outer\".shotpoint = \"inner\".shotpoint))\n> -> Index Scan using hsot_record_idx on shot_record r (cost=0.00..123080.11 rows=100425 width=46) (actual time=24.15..2710.31 rows=100425 loops=1)\n> -> Index Scan using shotpoint_idx on shotpoint p (cost=0.00..467924.54 rows=290106 width=12) (actual time=37.38..1379.64 rows=100749 loops=1)\n> Total runtime: 6086.32 msec\n> \n> So why did were the indices not used before when they yield a better plan?\n\nThere's another reason. Notice it thinks the second table will return 290k\nrecords. In fact it only returns 100k records. So it's optimizing on the\nassumption that it will have to read 3x as many records as it actually will...\n\nI'm not clear if there's anything you can do to improve this estimate though.\n\n-- \ngreg\n\n", "msg_date": "24 Oct 2003 11:19:05 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" } ]
[ { "msg_contents": "\nJosh,\n\n> > So why did were the indices not used before when they yield \n> a better plan?\n> \n> Your .conf settings, most likely. I'd lower your \n> random_page_cost and raise \n> your effective_cache_size.\n\nIncreasing effective_cache_size to 10000 did it. The query now\ntakes 4 secs. I left random_page_cost at the default value of 4. \nI thought, mistakenly apparently, that our database was relatively \nitty bitty and so haven't messed with the .conf file. Guess I \nbetter take a look at all the settings (I know where the docs are).\n\nThanks for your help,\n\nMedora\n\n***********************************************************************\nMedora Schauer\nSr. Software Engineer\n\nFairfield Industries\n***********************************************************************\n", "msg_date": "Wed, 22 Oct 2003 16:16:08 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow select" }, { "msg_contents": "Medora,\n\n> Increasing effective_cache_size to 10000 did it. \n\nThat would be 78MB RAM. If you have more than that available, you can \nincrease it further. Ideally, it should be about 2/3 to 3/4 of available \nRAM.\n\n>The query now\n> takes 4 secs. I left random_page_cost at the default value of 4. \n> I thought, mistakenly apparently, that our database was relatively \n> itty bitty and so haven't messed with the .conf file. \n\nActually, for a itty bitty database on a fast machine, you definitely want to \nlower random_page_cost. It's a large database that would make you cautious \nabout this.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 22 Oct 2003 14:42:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Medora,\n>> Increasing effective_cache_size to 10000 did it. \n\nJB> That would be 78MB RAM. If you have more than that available, you can \nJB> increase it further. Ideally, it should be about 2/3 to 3/4 of available \nJB> RAM.\n\nAssuming your OS will use that much RAM for the cache... the whole\nworld's not Linux :-)\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 23 Oct 2003 16:54:26 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" }, { "msg_contents": "Vivek,\n\n> Assuming your OS will use that much RAM for the cache... the whole\n> world's not Linux :-)\n\nIt's not? Darn!\n\nActually, what OS's can't use all idle ram for kernel cache? I should note \nthat in my performance docs ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 24 Oct 2003 08:22:57 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Vivek,\n>> Assuming your OS will use that much RAM for the cache... the whole\n>> world's not Linux :-)\n\nJB> It's not? Darn!\n\n:-)\n\nJB> Actually, what OS's can't use all idle ram for kernel cache? I\nJB> should note that in my performance docs ....\n\nFreeBSD. Limited by the value of \"sysctl vfs.hibufspace\" from what I\nunderstand. 
This value is set at boot based on available RAM and some\nother tuning parameters.\n", "msg_date": "Fri, 24 Oct 2003 11:42:44 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" }, { "msg_contents": "Vivek Khera wrote:\n>>>>>>\"JB\" == Josh Berkus <[email protected]> writes:\n> JB> Actually, what OS's can't use all idle ram for kernel cache? I\n> JB> should note that in my performance docs ....\n> \n> FreeBSD. Limited by the value of \"sysctl vfs.hibufspace\" from what I\n> understand. This value is set at boot based on available RAM and some\n> other tuning parameters.\n\nActually I wanted to ask this question for long time. Can we have guidelines \nabout how to set effective cache size for various OSs?\n\nLinux is pretty simple. Everything free is buffer cache. FreeBSD, not so \nstraightforward but there is a sysctl..\n\nHow about HP-UX, Solaris and AIX? Other BSDs? and most importantly windows?\n\nThat could add much value to the tuning guide. Isn't it?\n\n Shridhar\n\n", "msg_date": "Mon, 27 Oct 2003 12:02:42 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow select" } ]
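To turn the sizing talk above into numbers: effective_cache_size is expressed in database pages (8 kB with the default block size), so on Linux, where essentially all otherwise-idle RAM becomes buffer cache, a rough recipe might be (the memory figure is an assumed example, not a measurement from this thread):

    free + cached from `free -k`   ~= 1,200,000 kB likely available as cache
    1,200,000 kB / 8 kB per page   ~= 150000
    effective_cache_size = 150000      # postgresql.conf

On FreeBSD the analogous starting point would be the vfs.hibufspace value mentioned above divided by the block size, rather than free memory; for the other platforms asked about here the thread leaves the question open.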
[ { "msg_contents": "\n\nHi\n\nThe Postgresql package came from the Redhat v9.0 CDROM.\nI have checked the version using psql --version and it showed v7.3.2\n\nHow to check the pg_dump version?\n\nThank you,\nREgards.\n\n\n\n\n \n Josh Berkus \n <[email protected] To: [email protected], [email protected] \n m> cc: \n Subject: Re: [PERFORM] Postgresql performance \n 23/10/2003 12:06 \n AM \n \n \n\n\n\n\nNEC,\n\n> After a few weeks of usage, when we do a \\d at the sql prompt, there was\na\n> duplicate object name, ie it can be a duplicate row of index or table.\n> When we do a \\d table_name, it will show a duplication of column names\n> inside the table.\n\nI think the version of PSQL and pg_dump which you are using do not match\nthe\nback-end database version. Correct this.\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n\n\n\n\n", "msg_date": "Thu, 23 Oct 2003 09:46:32 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Postgresql performance" } ]
[ { "msg_contents": "\n\nHi\n\nThe Postgresql package came from the Redhat v9.0 CDROM.\nI have checked the version using psql --version and it showed v7.3.2\n\nThe duplication of table names is in the same schema.\n\nHow to check the pg_dump version?\n\n\nThank you,\nREgards.\n\n\n\n\n \n Tom Lane \n <[email protected] To: [email protected] \n s> cc: [email protected] \n Subject: Re: [PERFORM] Postgresql performance \n 22/10/2003 10:03 \n PM \n \n \n\n\n\n\[email protected] writes:\n> Currently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We\nhave\n> Windows2000 client machines inserting records into the Postgresql tables\n> via ODBC.\n\n> After a few weeks of usage, when we do a \\d at the sql prompt, there was\na\n> duplicate object name, ie it can be a duplicate row of index or table.\n> When we do a \\d table_name, it will show a duplication of column names\n> inside the table.\n\nAre you sure you are using 7.3 psql? This sounds like something that\ncould happen with a pre-7.3 (not schema aware) psql, if there are\nmultiple occurrences of the same table name in different schemas.\n\n> It doesnt affect the insertion/updating of the tables, but when we do a\n> pg_dump -Da -t <table_name> <db_name> > /exp/<table_name>.sql, it will\nnot\n> do a proper backup/dump.\n\nI'd wonder about whether you have the right pg_dump, too.\n\n regards, tom lane\n\n\n\n\n\n", "msg_date": "Thu, 23 Oct 2003 09:47:56 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Postgresql performance" } ]
[ { "msg_contents": "Greetings.\n\nI have a table that will require 100,000 rows initially.\n\nAssume the following (some of the field names have been changed for\nconfidentiality reasons):\n\nCREATE TABLE baz (\n baz_number CHAR(15) NOT NULL,\n customer_id CHAR(39),\n foobar_id INTEGER,\n is_cancelled BOOL DEFAULT false NOT NULL,\n create_user VARCHAR(60) NOT NULL,\n create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n last_update_user VARCHAR(60) NOT NULL,\n last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n);\n\nALTER TABLE baz\n ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n\nALTER TABLE baz\n ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n\n\nUsing JDBC, it took approximately one hour to insert 100,000 records. I\nhave an algorithm to generate a unique baz_number - it is a mixture of alpha\nand numerics.\n\nThere is a purchase table; one purchase can have many associated baz\nrecords, but the baz records will be pre-allocated - baz.customer_id allows\nnull. The act of purchasing a baz will cause baz.customer_id to be\npopulated from the customer_id (key) field in the purchase table.\n\nIf it took an hour to insert 100,000 records, I can only imagine how much\ntime it will take if one customer were to attempt to purchase all 100,000\nbaz. Certainly too long for a web page.\n\nI've not had to deal with this kind of volume in Postgres before; I have my\nsuspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\nbut I'd *LOVE* any thoughts.\n\nWould I be better off making the key an identity field and not indexing on\nbaz_number?\n\nThanks in advance for any help.\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"The best way to make your dreams come true is to wake up.\"\n -- Paul Valery\n\nThis signature generated by\n ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n", "msg_date": "Thu, 23 Oct 2003 05:21:03 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Concern" }, { "msg_contents": "On Thu, 2003-10-23 at 08:21, John Pagakis wrote:\n> Greetings.\n> \n> I have a table that will require 100,000 rows initially.\n> \n> Assume the following (some of the field names have been changed for\n> confidentiality reasons):\n> \n> CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n> );\n> \n> ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n> \n> ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n> \n> \n> Using JDBC, it took approximately one hour to insert 100,000 records. 
I\n> have an algorithm to generate a unique baz_number - it is a mixture of alpha\n> and numerics.\n\nUsing an int for identification is certainly suggested, however it\nsounds like you may be short a few indexes on the foreign key'd fields.\n\nEXPLAIN ANALYZE output is always nice..", "msg_date": "Fri, 24 Oct 2003 14:22:42 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "John,\n\nAre you treating each insertion as a separate transaction? If so the \nperformance will suffer. I am doing the same thing in building a data \nwarehouse using PG. I have to load millions of records each night. I \ndo two different things:\n\n1) If I need to keep the insertions inside the java process I turn off \nauto-commit and every n insertions (5000 seems to give me the best \nperformance for my setup) issue a commit. Make sure you do a final \ncommit in a finally block so you don't miss anything.\n\n2) Dump all the data to a file and then use a psql COPY <table> \n(columns) FROM 'file path' call to load it. Very fast.\n\n--sean\n\nJohn Pagakis wrote:\n\n>Greetings.\n>\n>I have a table that will require 100,000 rows initially.\n>\n>Assume the following (some of the field names have been changed for\n>confidentiality reasons):\n>\n>CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n>);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n>\n>Using JDBC, it took approximately one hour to insert 100,000 records. I\n>have an algorithm to generate a unique baz_number - it is a mixture of alpha\n>and numerics.\n>\n>There is a purchase table; one purchase can have many associated baz\n>records, but the baz records will be pre-allocated - baz.customer_id allows\n>null. The act of purchasing a baz will cause baz.customer_id to be\n>populated from the customer_id (key) field in the purchase table.\n>\n>If it took an hour to insert 100,000 records, I can only imagine how much\n>time it will take if one customer were to attempt to purchase all 100,000\n>baz. Certainly too long for a web page.\n>\n>I've not had to deal with this kind of volume in Postgres before; I have my\n>suspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\n>but I'd *LOVE* any thoughts.\n>\n>Would I be better off making the key an identity field and not indexing on\n>baz_number?\n>\n>Thanks in advance for any help.\n>\n>__________________________________________________________________\n>John Pagakis\n>Email: [email protected]\n>\n>\n>\"The best way to make your dreams come true is to wake up.\"\n> -- Paul Valery\n>\n>This signature generated by\n> ... 
and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n> www.spazmodicfrog.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n> \n>\n\n", "msg_date": "Fri, 24 Oct 2003 14:30:55 -0400", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "[email protected] (\"John Pagakis\") writes:\n> Greetings.\n>\n> I have a table that will require 100,000 rows initially.\n>\n> Assume the following (some of the field names have been changed for\n> confidentiality reasons):\n>\n> CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n> );\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n> Using JDBC, it took approximately one hour to insert 100,000 records. I\n> have an algorithm to generate a unique baz_number - it is a mixture of alpha\n> and numerics.\n\nQuestion #1: How did you do the inserts?\n\nIf AUTO-COMMIT was turned on, then that would indicate that you\ninvoked 100,000 transactions, and that would contribute considerably\nto the process being slow. Put them all in as one transaction and\nyou'd probably see it run in a fraction of the time.\n\nQuestion #2. Do you have indices on purchase(customer_id) and on\nfoobar(foobar_id)?\n\nIf not, then the foreign key check would be rather inefficient.\n\n> There is a purchase table; one purchase can have many associated baz\n> records, but the baz records will be pre-allocated - baz.customer_id\n> allows null. The act of purchasing a baz will cause baz.customer_id\n> to be populated from the customer_id (key) field in the purchase\n> table.\n>\n> If it took an hour to insert 100,000 records, I can only imagine how\n> much time it will take if one customer were to attempt to purchase\n> all 100,000 baz. Certainly too long for a web page.\n\nI take it that each \"baz\" is a uniquely identifiable product, akin to\n(say) an RSA certificate or the like?\n\nBy the way, if you set up a stored procedure in PostgreSQL that can\ngenerate the \"baz_number\" identifiers, you could probably do the\ninserts Right Well Fast...\n\nConsider the following. I have a stored procedure, genauth(), which\ngenerates quasi-random values. (They're passwords, sort of...)\n\ncctld=# explain analyze insert into baz (baz_number, create_user, last_update_user)\ncctld-# select substr(genauth(), 1, 15), 'cbbrowne', 'cbbrowne' from big_table;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Seq Scan on big_table (cost=0.00..789.88 rows=28988 width=0) (actual time=0.20..1713.60 rows=28988 loops=1)\n Total runtime: 3197.40 msec\n(2 rows)\n\nIt took about 3 seconds to insert 28988 rows into baz. (big_table,\nalso renamed, to protect the innocent, has 28988 rows. 
I didn't care\nabout its contents, just that it had a bunch of rows.)\n\nAnd the above is on a cheap desktop PC with IDE disk.\n\n> I've not had to deal with this kind of volume in Postgres before; I\n> have my suspicions on what is wrong here (could it be using a CHAR(\n> 15 ) as a key?) but I'd *LOVE* any thoughts.\n\n> Would I be better off making the key an identity field and not\n> indexing on baz_number?\n\nThat might be something of an improvement, but it oughtn't be\ncripplingly different to use a text field rather than an integer.\n\nWhat's crippling is submitting 100,000 queries in 100,000\ntransactions. Cut THAT down to size and you'll see performance return\nto being reasonable.\n-- \n\"cbbrowne\",\"@\",\"libertyrms.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Fri, 24 Oct 2003 15:10:47 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Christopher Browne kirjutas R, 24.10.2003 kell 22:10:\n\n> That might be something of an improvement, but it oughtn't be\n> cripplingly different to use a text field rather than an integer.\n\nI suspect his slowness comes from not running analyze when it would be\ntime to start using indexes for fk checks - if you run analyze on an\nempty table and then do 10000 inserts, then all these will run their\nchecks using seqscan, as this is the fastest way to do it on an empty\ntable ;)\n\n> What's crippling is submitting 100,000 queries in 100,000\n> transactions. Cut THAT down to size and you'll see performance return\n> to being reasonable.\n\neven this should not be too crippling.\n\nI 0nce did some testing for insert performance and got about 9000\ninserts/sec on 4 CPU Xeon with 2GB ram and RAID-5 (likely with battery\nbacked cache).\n\nThis 9000 dropped to ~250 when I added a primary key index (to a\n60.000.000 record table, so that the pk index fit only partly in\nmemory), all this with separate transactions, but with many clients\nrunning concurrently. (btw., the clients were not java/JDBC but\nPython/psycopg)\n\n\nWith just one client you are usually stuck to 1 trx/disk revolution, at\nleast with no battery-backed write cache.\n\neven 250/sec should insert 10000 in 40 sec.\n\n--------------\nHannu\n\n", "msg_date": "Fri, 24 Oct 2003 23:58:11 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Thanks Rod.\n\nWhile I was waiting for my post to make it I went ahead and made the key an\nint. It improved it a lot, but was still pretty slow.\n\nThis is weird:\n\nI was testing in a query window thus:\n\nUPDATE baz SET customer_id = '1234' WHERE ( SELECT baz_number FROM baz WHERE\ncustomer_id IS NULL LIMIT 1000 );\n\nIn the version of the table I posted this took 3 1/2 minutes. By making\nbaz_number not part of the key, adding a baz_key of int4 and adjusting the\nabove query for that it dropped to 1 1/2 minutes.\n\nBut, I realized that was not how my app was going to be updating, so I wrote\na little simulation in JAVA that gets a list of baz_keys where the customer_\nis null and then iterates through the list one at a time attempting to\nUPDATE baz SET customer_id = '1234' WHERE baz_key = <bazKeyFromList> AND\ncustomer_id IS NULL. One thousand iterations took only 37 seconds.\n\nIt would appear PostgreSQL is tuned towards single updates as opposed to\nhanding a big bunch off to the query engine. 
Does that seem right? Seems\nodd to me.\n\nAnyway thanks for your response. I'll add some indexes and see if I can't\nshave that time down even further.\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"If you can't beat them, arrange\n to have them beaten.\"\n -- George Carlin\n\nThis signature generated by\n ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Rod Taylor\nSent: Friday, October 24, 2003 11:23 AM\nTo: [email protected]\nCc: Postgresql Performance\nSubject: Re: [PERFORM] Performance Concern\n\n\nOn Thu, 2003-10-23 at 08:21, John Pagakis wrote:\n> Greetings.\n>\n> I have a table that will require 100,000 rows initially.\n>\n> Assume the following (some of the field names have been changed for\n> confidentiality reasons):\n>\n> CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n> );\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n>\n> Using JDBC, it took approximately one hour to insert 100,000 records. I\n> have an algorithm to generate a unique baz_number - it is a mixture of\nalpha\n> and numerics.\n\nUsing an int for identification is certainly suggested, however it\nsounds like you may be short a few indexes on the foreign key'd fields.\n\nEXPLAIN ANALYZE output is always nice..\n\n", "msg_date": "Fri, 24 Oct 2003 17:17:44 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Sean -\nI believe auto-commit was off (not at the box right now). I'll play with\nthe commit interval; I know commits are expensive operations.\n\nThanks for item 2. I was toying with the notion of pre-creating 100000\nbazes off-loading them and then seeing if the COPY would be any faster; you\nsaved me the effort of experimenting. Thanks for the benefit of your\nexperience.\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"Oh, you hate your job? Why didn't you say so?\n There's a support group for that. It's called\n EVERYBODY, and they meet at the bar.\"\n -- Drew Carey\n\nThis signature generated by\n ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n-----Original Message-----\nFrom: Sean Shanny [mailto:[email protected]]\nSent: Friday, October 24, 2003 11:31 AM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Performance Concern\n\n\nJohn,\n\nAre you treating each insertion as a separate transaction? If so the\nperformance will suffer. I am doing the same thing in building a data\nwarehouse using PG. I have to load millions of records each night. I\ndo two different things:\n\n1) If I need to keep the insertions inside the java process I turn off\nauto-commit and every n insertions (5000 seems to give me the best\nperformance for my setup) issue a commit. 
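\n\nThe same pattern, sketched in plain SQL purely for illustration (the values\nbelow are made up; at the driver level it is just autocommit off plus a\ncommit every n rows):\n\nBEGIN;\nINSERT INTO baz (baz_number, create_user, last_update_user)\n  VALUES ('ABC123DEF456GHI', 'loader', 'loader');\nINSERT INTO baz (baz_number, create_user, last_update_user)\n  VALUES ('JKL789MNO012PQR', 'loader', 'loader');\n-- ... the rest of the batch ...\nCOMMIT;\n\n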
Make sure you do a final\ncommit in a finally block so you don't miss anything.\n\n2) Dump all the data to a file and then use a psql COPY <table>\n(columns) FROM 'file path' call to load it. Very fast.\n\n--sean\n\nJohn Pagakis wrote:\n\n>Greetings.\n>\n>I have a table that will require 100,000 rows initially.\n>\n>Assume the following (some of the field names have been changed for\n>confidentiality reasons):\n>\n>CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n>);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n>\n>Using JDBC, it took approximately one hour to insert 100,000 records. I\n>have an algorithm to generate a unique baz_number - it is a mixture of\nalpha\n>and numerics.\n>\n>There is a purchase table; one purchase can have many associated baz\n>records, but the baz records will be pre-allocated - baz.customer_id allows\n>null. The act of purchasing a baz will cause baz.customer_id to be\n>populated from the customer_id (key) field in the purchase table.\n>\n>If it took an hour to insert 100,000 records, I can only imagine how much\n>time it will take if one customer were to attempt to purchase all 100,000\n>baz. Certainly too long for a web page.\n>\n>I've not had to deal with this kind of volume in Postgres before; I have my\n>suspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\n>but I'd *LOVE* any thoughts.\n>\n>Would I be better off making the key an identity field and not indexing on\n>baz_number?\n>\n>Thanks in advance for any help.\n>\n>__________________________________________________________________\n>John Pagakis\n>Email: [email protected]\n>\n>\n>\"The best way to make your dreams come true is to wake up.\"\n> -- Paul Valery\n>\n>This signature generated by\n> ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n> www.spazmodicfrog.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n>\n\n", "msg_date": "Fri, 24 Oct 2003 17:28:06 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Sean -\nI believe auto-commit was off (not at the box right now). I'll play with\nthe commit interval; I know commits are expensive operations.\n\nThanks for item 2. I was toying with the notion of pre-creating 100000\nbazes off-loading them and then seeing if the COPY would be any faster; you\nsaved me the effort of experimenting. Thanks for the benefit of your\nexperience.\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"Oh, you hate your job? Why didn't you say so?\n There's a support group for that. It's called\n EVERYBODY, and they meet at the bar.\"\n -- Drew Carey\n\nThis signature generated by\n ... 
and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Sean Shanny\nSent: Friday, October 24, 2003 11:31 AM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Performance Concern\n\n\nJohn,\n\nAre you treating each insertion as a separate transaction? If so the\nperformance will suffer. I am doing the same thing in building a data\nwarehouse using PG. I have to load millions of records each night. I\ndo two different things:\n\n1) If I need to keep the insertions inside the java process I turn off\nauto-commit and every n insertions (5000 seems to give me the best\nperformance for my setup) issue a commit. Make sure you do a final\ncommit in a finally block so you don't miss anything.\n\n2) Dump all the data to a file and then use a psql COPY <table>\n(columns) FROM 'file path' call to load it. Very fast.\n\n--sean\n\nJohn Pagakis wrote:\n\n>Greetings.\n>\n>I have a table that will require 100,000 rows initially.\n>\n>Assume the following (some of the field names have been changed for\n>confidentiality reasons):\n>\n>CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n>);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n>\n>Using JDBC, it took approximately one hour to insert 100,000 records. I\n>have an algorithm to generate a unique baz_number - it is a mixture of\nalpha\n>and numerics.\n>\n>There is a purchase table; one purchase can have many associated baz\n>records, but the baz records will be pre-allocated - baz.customer_id allows\n>null. The act of purchasing a baz will cause baz.customer_id to be\n>populated from the customer_id (key) field in the purchase table.\n>\n>If it took an hour to insert 100,000 records, I can only imagine how much\n>time it will take if one customer were to attempt to purchase all 100,000\n>baz. Certainly too long for a web page.\n>\n>I've not had to deal with this kind of volume in Postgres before; I have my\n>suspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\n>but I'd *LOVE* any thoughts.\n>\n>Would I be better off making the key an identity field and not indexing on\n>baz_number?\n>\n>Thanks in advance for any help.\n>\n>__________________________________________________________________\n>John Pagakis\n>Email: [email protected]\n>\n>\n>\"The best way to make your dreams come true is to wake up.\"\n> -- Paul Valery\n>\n>This signature generated by\n> ... 
and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n> www.spazmodicfrog.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n", "msg_date": "Fri, 24 Oct 2003 17:38:05 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "John,\n\nOne other thing I forgot to mention with solution 2. If you are going \nto be adding a fair number of records to the table on an ongoing basis \nyou will want to drop indexes first and re-create them after the load is \ncomplete. I have tried it both ways and dropping is faster overall. \n\n--sean\n\nJohn Pagakis wrote:\n\n>Sean -\n>I believe auto-commit was off (not at the box right now). I'll play with\n>the commit interval; I know commits are expensive operations.\n>\n>Thanks for item 2. I was toying with the notion of pre-creating 100000\n>bazes off-loading them and then seeing if the COPY would be any faster; you\n>saved me the effort of experimenting. Thanks for the benefit of your\n>experience.\n>\n>__________________________________________________________________\n>John Pagakis\n>Email: [email protected]\n>\n>\n>\"Oh, you hate your job? Why didn't you say so?\n> There's a support group for that. It's called\n> EVERYBODY, and they meet at the bar.\"\n> -- Drew Carey\n>\n>This signature generated by\n> ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n> www.spazmodicfrog.com\n>\n>\n>-----Original Message-----\n>From: Sean Shanny [mailto:[email protected]]\n>Sent: Friday, October 24, 2003 11:31 AM\n>To: [email protected]\n>Cc: [email protected]\n>Subject: Re: [PERFORM] Performance Concern\n>\n>\n>John,\n>\n>Are you treating each insertion as a separate transaction? If so the\n>performance will suffer. I am doing the same thing in building a data\n>warehouse using PG. I have to load millions of records each night. I\n>do two different things:\n>\n>1) If I need to keep the insertions inside the java process I turn off\n>auto-commit and every n insertions (5000 seems to give me the best\n>performance for my setup) issue a commit. Make sure you do a final\n>commit in a finally block so you don't miss anything.\n>\n>2) Dump all the data to a file and then use a psql COPY <table>\n>(columns) FROM 'file path' call to load it. Very fast.\n>\n>--sean\n>\n>John Pagakis wrote:\n>\n> \n>\n>>Greetings.\n>>\n>>I have a table that will require 100,000 rows initially.\n>>\n>>Assume the following (some of the field names have been changed for\n>>confidentiality reasons):\n>>\n>>CREATE TABLE baz (\n>> baz_number CHAR(15) NOT NULL,\n>> customer_id CHAR(39),\n>> foobar_id INTEGER,\n>> is_cancelled BOOL DEFAULT false NOT NULL,\n>> create_user VARCHAR(60) NOT NULL,\n>> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n>> last_update_user VARCHAR(60) NOT NULL,\n>> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n>> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n>>);\n>>\n>>ALTER TABLE baz\n>> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>>\n>>ALTER TABLE baz\n>> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>>\n>>\n>>Using JDBC, it took approximately one hour to insert 100,000 records. 
I\n>>have an algorithm to generate a unique baz_number - it is a mixture of\n>> \n>>\n>alpha\n> \n>\n>>and numerics.\n>>\n>>There is a purchase table; one purchase can have many associated baz\n>>records, but the baz records will be pre-allocated - baz.customer_id allows\n>>null. The act of purchasing a baz will cause baz.customer_id to be\n>>populated from the customer_id (key) field in the purchase table.\n>>\n>>If it took an hour to insert 100,000 records, I can only imagine how much\n>>time it will take if one customer were to attempt to purchase all 100,000\n>>baz. Certainly too long for a web page.\n>>\n>>I've not had to deal with this kind of volume in Postgres before; I have my\n>>suspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\n>>but I'd *LOVE* any thoughts.\n>>\n>>Would I be better off making the key an identity field and not indexing on\n>>baz_number?\n>>\n>>Thanks in advance for any help.\n>>\n>>__________________________________________________________________\n>>John Pagakis\n>>Email: [email protected]\n>>\n>>\n>>\"The best way to make your dreams come true is to wake up.\"\n>> -- Paul Valery\n>>\n>>This signature generated by\n>> ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n>> www.spazmodicfrog.com\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>>\n>>\n>>\n>> \n>>\n>\n>\n> \n>\n\n", "msg_date": "Fri, 24 Oct 2003 21:15:27 -0400", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Christopher -\nThanks.\n\nAnswer 1:\nI believe auto commit was off (but I'm not at my dev box right now). I'll\ndouble-check that and the commit interval.\n\nAnswer 2:\nAh ha!! No indexes on FKs. I'll try that.\n\nYes, each baz is a uniquely identifiable. I had started a SP to create gen\nthe key but scrapped it when I saw no rand() function in pgpsql. Did I miss\nsomething?\n\nTurns out switching to ints no improvement on the inserts but a rather large\none on the updates. Also, I saw evidence in my testing that Postgres seemed\nto like doing single updates as opposed to being handed a group of updates;\nsee my response to Rod Taylor's post here (and Rod, if you're reading this:\nyou were *GREAT* in \"The Time Machine\" <g>!!\n\nAnswer 3:\nOh, there was no question three .... <g>!!\n\n\nThanks again Christopher!!\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"I am *SINCERE* about life, but I'm not *SERIOUS* about it.\"\n -- Alan Watts\n\nThis signature generated by\n ... 
and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Christopher\nBrowne\nSent: Friday, October 24, 2003 12:11 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Performance Concern\n\n\[email protected] (\"John Pagakis\") writes:\n> Greetings.\n>\n> I have a table that will require 100,000 rows initially.\n>\n> Assume the following (some of the field names have been changed for\n> confidentiality reasons):\n>\n> CREATE TABLE baz (\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n> );\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n> ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n>\n> Using JDBC, it took approximately one hour to insert 100,000 records. I\n> have an algorithm to generate a unique baz_number - it is a mixture of\nalpha\n> and numerics.\n\nQuestion #1: How did you do the inserts?\n\nIf AUTO-COMMIT was turned on, then that would indicate that you\ninvoked 100,000 transactions, and that would contribute considerably\nto the process being slow. Put them all in as one transaction and\nyou'd probably see it run in a fraction of the time.\n\nQuestion #2. Do you have indices on purchase(customer_id) and on\nfoobar(foobar_id)?\n\nIf not, then the foreign key check would be rather inefficient.\n\n> There is a purchase table; one purchase can have many associated baz\n> records, but the baz records will be pre-allocated - baz.customer_id\n> allows null. The act of purchasing a baz will cause baz.customer_id\n> to be populated from the customer_id (key) field in the purchase\n> table.\n>\n> If it took an hour to insert 100,000 records, I can only imagine how\n> much time it will take if one customer were to attempt to purchase\n> all 100,000 baz. Certainly too long for a web page.\n\nI take it that each \"baz\" is a uniquely identifiable product, akin to\n(say) an RSA certificate or the like?\n\nBy the way, if you set up a stored procedure in PostgreSQL that can\ngenerate the \"baz_number\" identifiers, you could probably do the\ninserts Right Well Fast...\n\nConsider the following. I have a stored procedure, genauth(), which\ngenerates quasi-random values. (They're passwords, sort of...)\n\ncctld=# explain analyze insert into baz (baz_number, create_user,\nlast_update_user)\ncctld-# select substr(genauth(), 1, 15), 'cbbrowne', 'cbbrowne' from\nbig_table;\n QUERY PLAN\n----------------------------------------------------------------------------\n-----------------------------------\n Seq Scan on big_table (cost=0.00..789.88 rows=28988 width=0) (actual\ntime=0.20..1713.60 rows=28988 loops=1)\n Total runtime: 3197.40 msec\n(2 rows)\n\nIt took about 3 seconds to insert 28988 rows into baz. (big_table,\nalso renamed, to protect the innocent, has 28988 rows. I didn't care\nabout its contents, just that it had a bunch of rows.)\n\nAnd the above is on a cheap desktop PC with IDE disk.\n\n> I've not had to deal with this kind of volume in Postgres before; I\n> have my suspicions on what is wrong here (could it be using a CHAR(\n> 15 ) as a key?) 
but I'd *LOVE* any thoughts.\n\n> Would I be better off making the key an identity field and not\n> indexing on baz_number?\n\nThat might be something of an improvement, but it oughtn't be\ncripplingly different to use a text field rather than an integer.\n\nWhat's crippling is submitting 100,000 queries in 100,000\ntransactions. Cut THAT down to size and you'll see performance return\nto being reasonable.\n--\n\"cbbrowne\",\"@\",\"libertyrms.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Sat, 25 Oct 2003 00:16:45 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "John Pagakis kirjutas L, 25.10.2003 kell 10:16:\n> Christopher -\n> Thanks.\n> \n> Answer 1:\n> I believe auto commit was off (but I'm not at my dev box right now). I'll\n> double-check that and the commit interval.\n> \n> Answer 2:\n> Ah ha!! No indexes on FKs. I'll try that.\n> \n> Yes, each baz is a uniquely identifiable. I had started a SP to create gen\n> the key but scrapped it when I saw no rand() function in pgpsql. Did I miss\n> something?\n\nhannu=# select random();\n random\n------------------\n 0.59924242859671\n(1 row)\n\n\n\\df lists all available functions in psql\n\nto generate string keys you could use something like:\n\nhannu=# select 'key' || to_hex(cast(random()*1000000000 as int));\n ?column?\n-------------\n key1e22d8ea\n(1 row)\n\n-----------------\nHannu\n\n", "msg_date": "Sat, 25 Oct 2003 11:19:31 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "Bear with me all - working my way through this.\n\nFirst of all, thanks for all the terrific advice. I think I focused you on\nthe inserts, when my *REAL* concern is the updates. Initially, I was\nsurmising that if the insert of 100,000 baz took an hour, an update to\ncustomer_id of, say 1000 baz, would simply be outrageous. I now have a\nbetter feel for how bad it is.\n\nI have already learned that making an integer the key of baz as opposed to\nbaz_number - a CHAR( 15 ) - cuts my update cost almost in half, so my\nreiteration of the example uses this schema change.\n\nPlease let me start again and perhaps do a little better job of explaining:\n\nAssume the following (some of the field names have been changed for\nconfidentiality reasons):\n\nCREATE TABLE baz (\n baz_key int4 NOT NULL,\n baz_number CHAR(15) NOT NULL,\n customer_id CHAR(39),\n foobar_id INTEGER,\n is_cancelled BOOL DEFAULT false NOT NULL,\n create_user VARCHAR(60) NOT NULL,\n create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n last_update_user VARCHAR(60) NOT NULL,\n last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n CONSTRAINT PK_baz PRIMARY KEY (baz_key)\n);\n\nALTER TABLE baz\n ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n\nALTER TABLE baz\n ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n\n\nThere is a purchase table; one purchase can have many associated baz\nrecords, but the 100,00 baz records will be pre-allocated - baz.customer_id\nallows\nnull. The act of purchasing a baz will cause baz.customer_id to be\npopulated from the customer_id (key) field in the purchase table. 
The\ncolumn customer_id is actually the key to the purchase table despite the\nname.\n\nThe foobar table is inconsequential as it will not be populated until the\nbaz table is sold out. So for the inserts and updates, foobar will be\nempty. I could probably not even gen it until I needed it.\n\n\nAs I said earlier I'm less concerned about the inserts than I am about the\nupdates. The reason is the 100,000 inserts will happen before the site is\nlive. The updates will happen as part of the purchase process, so updates\nneed to be as fast as possible.\n\nI needed to do this because I absolutely positively cannot over-allocate\nbaz. I cannot allocate more than 100,000 period, and any number of users\ncan attempt to purchase one or more baz simultaneously. I am attempting to\navoid a race condition and avoid using database locks as I feared this table\nwould turn into a bottleneck.\n\nNote, as this question came up more than once from my previous post: Auto\nCommit was off for the inserts.\n\n\nThis will be for a public website and multiple users will be \"competing\" for\nbaz resources. My thought was for each user wishing to purchase one or more\nbazes:\n\n- Create a list of potentially available baz: SELECT baz_key WHERE\ncustomer_id IS NULL LIMIT 100;\n - If there are no more records in baz with customer_id of NULL, it's a\nsell-out.\n- Iterate through the list attempting to reserve a BAZ. Iterate until you\nhave reserved the number of baz requested or until the list is exhausted:\nUPDATE baz SET customer_id = <someCustId> WHERE baz_key = <currentKeyInList>\nAND customer_id IS NULL;\n - For a given update, if no record was updated, someone else set the\ncustomer_id before you could - go to the next baz_key in the list and try\nagain.\n - If the list is exhausted go get the next block of 100 potential\navailable baz keys and go again.\n\n\nAnyway, given this scenario, I *HAVE* to have auto commit on for updates so\nthat everyone is aware of everyone else immediately.\n\n\nI wrote a JAVA simulation of the above that did 1000 updates in 37 seconds.\nThat left me scratching my head because in psql when I did the\nsemi-equivalent:\n\nUPDATE baz SET customer_id = '1234' WHERE baz_key IN( SELECT baz_key FROM\nbaz WHERE customer_id IS NULL LIMIT 1000 );\n\nit took 1:27 (one minute 27 seconds) to execute. This led me (erroneously)\nto the conclusion that Postgres was somehow happier doing single updates\nthan \"grouping\" them. I realized today that I missed something in my\nsimulation (pulling an all-nighter will do that to you): my JAVA simulation\nhad Auto Commit off and I was doing a commit at the end. Obviously that\nwon't work given what I'm trying to do. Any updates must *IMMEDIATLY* be\nvisible to all other processes, or I could get hit with a race condition. I\nre-ran with Auto Commit on and the timing fell more in line with what I saw\nin psql - 1:13.\n\nThis seems a slow to me.\n\nIs there any way to optimize the update? Or, perhaps my design is the issue\nand I just need to do something else. Perhaps a lock on the table and an\ninsert would be quicker. I'm just worried about locking in a multi-user\nenvironment. On the other hand, it looks to me like this table will be a\nbottleneck no matter what I do.\n\nYour thoughts, as always, are much appreciated.\n\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"If I had a formula for bypassing trouble, I would not pass it round.\n Trouble creates a capacity to handle it. 
I don't embrace trouble; that's\n as bad as treating it as an enemy. But I do say meet it as a friend, for\n you'll see a lot of it and had better be on speaking terms with it.\"\n -- Oliver Wendell Holmes\n\nThis signature generated by\n ... and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of John Pagakis\nSent: Thursday, October 23, 2003 5:21 AM\nTo: [email protected]\nSubject: [PERFORM] Performance Concern\n\n\nGreetings.\n\nI have a table that will require 100,000 rows initially.\n\nAssume the following (some of the field names have been changed for\nconfidentiality reasons):\n\nCREATE TABLE baz (\n baz_number CHAR(15) NOT NULL,\n customer_id CHAR(39),\n foobar_id INTEGER,\n is_cancelled BOOL DEFAULT false NOT NULL,\n create_user VARCHAR(60) NOT NULL,\n create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n last_update_user VARCHAR(60) NOT NULL,\n last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n CONSTRAINT PK_baz PRIMARY KEY (baz_number)\n);\n\nALTER TABLE baz\n ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n\nALTER TABLE baz\n ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n\n\nUsing JDBC, it took approximately one hour to insert 100,000 records. I\nhave an algorithm to generate a unique baz_number - it is a mixture of alpha\nand numerics.\n\nThere is a purchase table; one purchase can have many associated baz\nrecords, but the baz records will be pre-allocated - baz.customer_id allows\nnull. The act of purchasing a baz will cause baz.customer_id to be\npopulated from the customer_id (key) field in the purchase table.\n\nIf it took an hour to insert 100,000 records, I can only imagine how much\ntime it will take if one customer were to attempt to purchase all 100,000\nbaz. Certainly too long for a web page.\n\nI've not had to deal with this kind of volume in Postgres before; I have my\nsuspicions on what is wrong here (could it be using a CHAR( 15 ) as a key?)\nbut I'd *LOVE* any thoughts.\n\nWould I be better off making the key an identity field and not indexing on\nbaz_number?\n\nThanks in advance for any help.\n\n__________________________________________________________________\nJohn Pagakis\nEmail: [email protected]\n\n\n\"The best way to make your dreams come true is to wake up.\"\n -- Paul Valery\n\nThis signature generated by\n ... 
and I Quote!!(tm) Copyright (c) 1999 SpaZmodic Frog Software, Inc.\n www.spazmodicfrog.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Sat, 25 Oct 2003 02:56:10 -0700", "msg_from": "\"John Pagakis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "At 05:56 10/25/2003, John Pagakis wrote:\n\nSnipping most of this, I only have one suggestion/comment to make.\n\n[snip]\n\n>CREATE TABLE baz (\n> baz_key int4 NOT NULL,\n> baz_number CHAR(15) NOT NULL,\n> customer_id CHAR(39),\n> foobar_id INTEGER,\n> is_cancelled BOOL DEFAULT false NOT NULL,\n> create_user VARCHAR(60) NOT NULL,\n> create_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> last_update_user VARCHAR(60) NOT NULL,\n> last_update_datetime TIMESTAMP DEFAULT 'now()' NOT NULL,\n> CONSTRAINT PK_baz PRIMARY KEY (baz_key)\n>);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (customer_id) REFERENCES purchase (customer_id);\n>\n>ALTER TABLE baz\n> ADD FOREIGN KEY (foobar_id) REFERENCES foobar (foobar_id);\n\n[snip]\n\n>I needed to do this because I absolutely positively cannot over-allocate\n>baz. I cannot allocate more than 100,000 period, and any number of users\n>can attempt to purchase one or more baz simultaneously. I am attempting to\n>avoid a race condition and avoid using database locks as I feared this table\n>would turn into a bottleneck.\n\n[snip]\n\nI have a similar situation in the database here, using the following \nexample schema:\n\nCREATE TABLE foo\n(\n nID serial UNIQUE NOT NULL,\n bAvailable boolean NOT NULL DEFAULT true,\n nSomeField int4 NOT NULL,\n sSomeField text NOT NULL\n);\n\nCREATE TABLE bar\n(\n nfoo_id int4 UNIQUE NOT NULL\n);\n\nAssume foo is the table with the 100k pre-populated records that you want \nto assign to visitors on your site. bar is a table whos only purpose is to \neliminate race conditions, working off the following business rules:\n\n1. -- someone attempts to get a 'foo'\n SELECT nID from foo WHERE bAvailable;\n\n2. -- we first try to assign this 'foo' to ourselves\n -- the ? is bound to the foo.nID we selected in step 1.\n INSERT INTO bar (nfoo_ID) VALUES (?)\n\n3. -- Only if step 2 is successful, do we continue, otherwise someone beat \nus to it.\n UPDATE foo SET ... WHERE nID=?\n\nThe key here is step 2.\n\nSince there is a UNIQUE constraint defined on the bar.nfoo_id (could even \nbe an FK), only one INSERT will ever succeed. All others will fail. In \nstep 3, you can set the bAvailable flag to false, along with whatever other \nvalues you need to set for your 'baz'.\n\nThis will get much easier once 7.4 is production-ready, as the WHERE IN .. \nor WHERE NOT IN.. subselects are (according to the HISTORY file) going to \nbe as efficient as joins, instead of the O(n) operation they apparently are \nright now.\n\nUntil then however, I've found this simple trick works remarkably well.\n\n-Allen \n\n", "msg_date": "Sat, 25 Oct 2003 07:20:03 -0400", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "\n\"John Pagakis\" <[email protected]> writes:\n\n> UPDATE baz SET customer_id = '1234' WHERE baz_key IN( SELECT baz_key FROM\n> baz WHERE customer_id IS NULL LIMIT 1000 );\n\nDo an \"explain analyze\" on this query. I bet it's doing two sequential scans.\nUnfortunately in 7.3 the WHERE IN type of clause is poorly handled. 
If you're\nstill in development perhaps you should move to the 7.4 beta as it should\nhandle this much better:\n\ntest74=> explain UPDATE test SET customer_id = 1 WHERE a IN (SELECT a FROM test WHERE customer_id IS NULL LIMIT 1000 );\n QUERY PLAN \n---------------------------------------------------------------------------------\n Nested Loop (cost=1447.26..2069.43 rows=201 width=10)\n -> HashAggregate (cost=1447.26..1447.26 rows=200 width=4)\n -> Subquery Scan \"IN_subquery\" (cost=0.00..1446.01 rows=501 width=4)\n -> Limit (cost=0.00..1441.00 rows=501 width=4)\n -> Seq Scan on test (cost=0.00..1441.00 rows=501 width=4)\n Filter: (customer_id IS NULL)\n -> Index Scan using ii on test (cost=0.00..3.10 rows=1 width=10)\n Index Cond: (test.a = \"outer\".a)\n(8 rows)\n\n\nHowever notice you still get at the one sequential scan. One way to help the\nsituation would be to create a partial index WHERE customer_id IS NULL. This\nwould especially help when things are almost completely sold out and available\nslots are sparse.\n\nslo=> explain UPDATE test SET customer_id = 1 WHERE a IN (SELECT a FROM test WHERE customer_id IS NULL LIMIT 1000 );\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Nested Loop (cost=181.01..803.18 rows=201 width=10)\n -> HashAggregate (cost=181.01..181.01 rows=200 width=4)\n -> Subquery Scan \"IN_subquery\" (cost=0.00..179.76 rows=501 width=4)\n -> Limit (cost=0.00..174.75 rows=501 width=4)\n -> Index Scan using i on test (cost=0.00..174.75 rows=501 width=4)\n Filter: (customer_id IS NULL)\n -> Index Scan using ii on test (cost=0.00..3.10 rows=1 width=10)\n Index Cond: (test.a = \"outer\".a)\n(8 rows)\n\nNotice the both sequential scans are gone and replaced by index scans.\n\n\nI kind of worry you might still have a race condition with the above query.\nTwo clients could do the subquery and pick up the same records, then both run\nand update them. 
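\n\nAs an illustrative aside (a sketch only; the index name and the literal key\nvalue are made up), the per-key variant John described earlier avoids that\noverwrite, because the customer_id IS NULL test is re-evaluated once the row\nlock is obtained, so a buyer who lost the race simply matches zero rows:\n\nCREATE INDEX baz_avail_idx ON baz (baz_key) WHERE customer_id IS NULL;\n\nUPDATE baz SET customer_id = '1234'\n WHERE baz_key = 42 AND customer_id IS NULL;\n-- zero rows updated means someone else reserved that baz first\n\n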
The database would lock the records until the first one\ncommits but I don't think that would stop the second one from updating them a\nsecond time.\n\nPerhaps moving to serializable transactions would help this, I'm not sure.\n\nIt's too bad the LIMIT clause doesn't work on UPDATEs.\nThen you could simply do:\n\nUPDATE baz SET customer_id = '1234' where customer_id IS NULL LIMIT 1000\n\nWhich shouldn't have to scan the table twice at all and I don't think suffer\nfrom any race conditions.\n\n-- \ngreg\n\n", "msg_date": "25 Oct 2003 11:08:03 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "John Pagakis kirjutas L, 25.10.2003 kell 12:56:\n\n> I wrote a JAVA simulation of the above that did 1000 updates in 37 seconds.\n> That left me scratching my head because in psql when I did the\n> semi-equivalent:\n> \n> UPDATE baz SET customer_id = '1234' WHERE baz_key IN( SELECT baz_key FROM\n> baz WHERE customer_id IS NULL LIMIT 1000 );\n\ntry it this way, maybe it will start using an index :\n\nUPDATE baz\n SET customer_id = '1234'\n WHERE baz_key IN (\n SELECT baz_key\n FROM baz innerbaz\n WHERE customer_id IS NULL\n and innerbaz.baz_key = baz.baz_key\n LIMIT 1000 );\n\nyou may also try to add a conditional index to baz:\n\nCREATE INDEX baz_key_with_null_custid_nxd\n ON baz\n WHERE customer_id IS NULL;\n\nto make the index access more efficient.\n\n----------------\nHannu\n\n", "msg_date": "Sun, 26 Oct 2003 00:13:36 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" }, { "msg_contents": "On Sun, 26 Oct 2003 00:13:36 +0300, Hannu Krosing <[email protected]> wrote:\n>UPDATE baz\n> SET customer_id = '1234'\n> WHERE baz_key IN (\n> SELECT baz_key\n> FROM baz innerbaz\n> WHERE customer_id IS NULL\n> and innerbaz.baz_key = baz.baz_key\n> LIMIT 1000 );\n\nAFAICS this is not what the OP intended. It is equivalent to \n\n\tUPDATE baz\n\t SET customer_id = '1234'\n\t WHERE customer_id IS NULL;\n\nbecause the subselect is now correlated to the outer query and is\nevaluated for each row of the outer query which makes the LIMIT clause\nineffective.\n\nServus\n Manfred\n", "msg_date": "Mon, 27 Oct 2003 11:08:49 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Concern" } ]
[ { "msg_contents": "Asked and answered on the list probably a thousand times, but what else is \nthere to discuss on the performance list? :)\n\nI recently built a rather powerful machine to be used in a heavily accessed \ndatabase.. the machine is a dual AthlonMP 2800+, 2GB of PC2100 ECC, and a \n4x18GB RAID-0 using 15k rpm fujitsu MAS drives on a 4ch u160 ICP-Vortex \ncard with 256MB of cache.\n\nThe box runs FreeBSD, tracking RELENG_4 (-STABLE) and PostGreSQL 7.3.4 from \nports (7.3.4_1)\n\nThere are a few databases running on the machine, but for now, the one that \nis the most performance sensitive is also arguably the worst designed. The \naccess pattern on a day to day basis looks basically like this:\n\n1. ~75k rows aggregate are inserted into two different tables, 70/30 split \nbetween two tables. The 70% going to the smaller table (containing just \ntwo integers) and the 30% going into a larger table containing a rather \nlargish (~4KB) text field and more integer types; no searching of any kind \nis done on this text field, it appears in no where clauses, and is not indexed.\n\n2. As these rows are inserted, other processes see them and for each row:\n a. A new row containing just one field is inserted, that row being an FK \ninto the 30% table mentioned above.\n b. A row in a 3rd table is updated; this table never gets deleted from, \nand rarely sees inserts, it's just a status table, but it has nearly a \nmillion rows. The updated row is an integer.\n c. The 30% table itself is updated.\n\n3. When these processes finish their processing, the rows in both the 70/30 \ntables and the table from 2a are deleted; The 2b table has a row again updated.\n\nThere is only one process that does all the inserting, from a web \nbackend. Steps 2 and 3 are done by several other backend processes on \ndifferent machines, \"fighting\" to pick up the newly inserted rows and \nprocess them. Not the most efficient design, but modifying the current \ncode isn't an option; rest assured that this is being redesigned and new \ncode is being written, but the developer who wrote the original left us \nwith his spaghetti-python mess and no longer works for us.\n\nI run a 'vacuum analyze verbose' on the database in question every hour, \nand a reindex on every table in the database every six hours, 'vacuum full' \nis run manually as required perhaps anywhere from once a week to once a \nmonth. I realize the analyze may not be running often enough and the \nreindex more often than need be, but I don't think these are adversely \naffecting performance very much; degredation over time does not appear to \nbe an issue.\n\nSo on with the question. Given the above machine with the above database \nand access pattern, I've configured the system with the following \noptions. I'm just wondering what some of you more experierenced pg tuners \nhave to say. I can provide more information such as ipcs, vmstat, iostat, \netc output on request but I figure this message is getting long enough \nalready..\n\nThanks for any input. 
Kernel and postgres information follows.\n\nRelated kernel configuration options:\n\n...\ncpu I686_CPU\nmaxusers 256\n...\noptions MAXDSIZ=\"(1024UL*1024*1024)\"\noptions MAXSSIZ=\"(512UL*1024*1024)\"\noptions DFLDSIZ=\"(512UL*1024*1024)\"\n...\noptions SYSVSHM #SYSV-style shared memory\noptions SYSVMSG #SYSV-style message queues\noptions SYSVSEM #SYSV-style semaphores\noptions SHMMAXPGS=65536\noptions SHMMAX=\"(SHMMAXPGS*PAGE_SIZE+1)\"\noptions SHMSEG=256\noptions SEMMNI=384\noptions SEMMNS=768\noptions SEMMNU=384\noptions SEMMAP=384\n...\n\nrelevant postgresql.conf options:\n\nmax_connections = 128\nshared_buffers = 20000\nmax_fsm_relations = 10000\nmax_fsm_pages = 2000000\nmax_locks_per_transaction = 64\nwal_buffers = 128\nsort_mem = 262144 # we have some large queries running at times\nvacuum_mem = 131072\ncheckpoint_segments = 16\ncheckpoint_timeout = 300\ncommit_delay = 1000\ncommit_siblings = 32\nfsync = true\nwal_fsync_method = fsync\neffective_cache_size = 49152 # 384MB, this could probably be higher\nrandom_page_cost = 1.7\ncpu_tuble_cost = 0.005\ncpu_index_tuple_cost = 0.0005\ncpu_operator_cost = 0.0012\ngeqo_threshold = 20\nstats_start_collector = true\nstats_reset_on_server_start = off\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\n\n", "msg_date": "Thu, 23 Oct 2003 09:26:49 -0400", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n\nAL> I recently built a rather powerful machine to be used in a heavily\nAL> accessed database.. the machine is a dual AthlonMP 2800+, 2GB of\nAL> PC2100 ECC, and a 4x18GB RAID-0 using 15k rpm fujitsu MAS drives on a\nAL> 4ch u160 ICP-Vortex card with 256MB of cache.\n\nThe only recommendation I'd make is to switch from RAID0 to RAID10,\nunless you can afford the downtime (and loss of data) when one of your\ndrives takes a vacation.\n\nAlso, is your RAID card cache battery backed up? If no, then you lose\nthe ability to use write-back and this costs *dearly* in performance.\n\n\nAL> The box runs FreeBSD, tracking RELENG_4 (-STABLE) and PostGreSQL 7.3.4\nAL> from ports (7.3.4_1)\n\nAn excellent choice. :-)\n\n[[ ... ]]\n\nAL> I run a 'vacuum analyze verbose' on the database in question every\nAL> hour, and a reindex on every table in the database every six hours,\nAL> 'vacuum full' is run manually as required perhaps anywhere from once a\nAL> week to once a month. I realize the analyze may not be running often\nAL> enough and the reindex more often than need be, but I don't think\nAL> these are adversely affecting performance very much; degredation over\nAL> time does not appear to be an issue.\n\nPersonally, I don't think you need to reindex that much. And I don't\nthink you need to vacuum full *ever* if you vacuum often like you do.\nPerhaps reducing the vacuum frequency may let you reach a steady state\nof disk usage?\n\nDepending on how many concurrent actions you process, perhaps you can\nuse a temporary table for each, so you don't have to delete many rows\nwhen you're done.\n\n\nOn my busy tables, I vacuum every 6 hours. The vacuum analyze is run\non the entire DB nightly. I reindex every month or so my most often\nupdated tables that show index bloat. 
Watch for bloat by monitoring\nthe size of your indexes:\n\nSELECT relname,relpages FROM pg_class WHERE relname LIKE 'some_table%' ORDER BY relname;\n\nAL> Related kernel configuration options:\n\nAL> ...\nAL> cpu I686_CPU\nAL> maxusers 256\n\nlet the system autoconfigure maxusers...\n\nAL> ...\nAL> options MAXDSIZ=\"(1024UL*1024*1024)\"\nAL> options MAXSSIZ=\"(512UL*1024*1024)\"\nAL> options DFLDSIZ=\"(512UL*1024*1024)\"\n\nabove are ok at defaults.\n\nAL> options SHMMAXPGS=65536\n\nperhaps bump this and increase your shared buffers. I find that if\nyou do lots of writes, having a few more shared buffers helps.\n\nAL> options SHMMAX=\"(SHMMAXPGS*PAGE_SIZE+1)\"\n\nyou don't need to explicitly set this... it is automatically set based\non the above setting.\n\n\nAL> relevant postgresql.conf options:\n\nAL> max_fsm_pages = 2000000\n\nthis may be overkill. I currently run with 1000000\n\nAL> effective_cache_size = 49152 # 384MB, this could probably be higher\n\nthe current recommendation for freebsd is to set this to:\n\n`sysctl -n vfs.hibufspace` / 8192\n\nwhere 8192 is the blocksize used by postgres.\n\nYou may also want to increase the max buffer space used by FreeBSD,\nwhich apparently is capped at 200M (I think) by dafault. I'll have\nto look up how to bump that, as most likely you have plenty of RAM\nsitting around unused. What does \"top\" say about that when you're\nbusy?\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 23 Oct 2003 17:14:11 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": "At 17:14 10/23/2003, Vivek Khera wrote:\n> >>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n>\n>AL> I recently built a rather powerful machine to be used in a heavily\n>AL> accessed database.. the machine is a dual AthlonMP 2800+, 2GB of\n>AL> PC2100 ECC, and a 4x18GB RAID-0 using 15k rpm fujitsu MAS drives on a\n>AL> 4ch u160 ICP-Vortex card with 256MB of cache.\n>\n>The only recommendation I'd make is to switch from RAID0 to RAID10,\n>unless you can afford the downtime (and loss of data) when one of your\n>drives takes a vacation.\n>\n>Also, is your RAID card cache battery backed up? If no, then you lose\n>the ability to use write-back and this costs *dearly* in performance.\n\nI'm planning to move it to -10 or -5 (or even -50) once we have more money \nto spend on drives. As it is right now though, I couldn't spare the \nspace.. The box this was moved from was a 2x1000 P3 with a single u160 \ndrive.. Battery backup is something I really should have gotten on the \nmemory but I spaced out when placing the order, it'll come in the future.\n\nI'm kind of \"living on the edge\" here with regard to no bbu on the raid and \nusing raid-0 I know.. but it's for a short time, and I don't think in the \nscheme of things this is any more failure-prone than the crummy setup it \nwas on before. Backup and backup often, I know that mantra very well and \nlive by it. :)\n\n\n>AL> The box runs FreeBSD, tracking RELENG_4 (-STABLE) and PostGreSQL 7.3.4\n>AL> from ports (7.3.4_1)\n>\n>An excellent choice. :-)\n\nI recognize you from those lists.. didn't notice the Ph.D. before though.. \nbut yes, I'm a huge FreeBSD fan.. I didn't need anyone to talk me into that \nparticular choice. 
;)\n\n>AL> I run a 'vacuum analyze verbose' on the database in question every\n>AL> hour, and a reindex on every table in the database every six hours,\n>AL> 'vacuum full' is run manually as required perhaps anywhere from once a\n>AL> week to once a month. I realize the analyze may not be running often\n>AL> enough and the reindex more often than need be, but I don't think\n>AL> these are adversely affecting performance very much; degredation over\n>AL> time does not appear to be an issue.\n>\n>Personally, I don't think you need to reindex that much. And I don't\n>think you need to vacuum full *ever* if you vacuum often like you do.\n>Perhaps reducing the vacuum frequency may let you reach a steady state\n>of disk usage?\n\nWell I had the vacuums running every 15 minutes for a while.. via a simple \ncron script I wrote just to make sure no more than one vacuum ran at once, \nand to 'nice' the job.. but performance on the db does suffer a bit during \nvacuums or so it seems. The performance doesn't degrade noticably after \nonly an hour without a vacuum though, so I'd like to make the state of \ndegraded performance more periodic -- not the general rule during 24/7 \noperation.\n\nI'll monkey around more with running the vacuum more often and see if the \nperformance hit was more imagined than real.\n\n\n>Depending on how many concurrent actions you process, perhaps you can\n>use a temporary table for each, so you don't have to delete many rows\n>when you're done.\n\nI'd love to but unfortunately the daemons that use the database are a mess, \nmore or less 'unsupported' at this point.. thankfully they're being \nreplaced along with a lot of performance-hurting SQL.\n\n\n>On my busy tables, I vacuum every 6 hours. The vacuum analyze is run\n>on the entire DB nightly. I reindex every month or so my most often\n>updated tables that show index bloat. Watch for bloat by monitoring\n>the size of your indexes:\n>\n>SELECT relname,relpages FROM pg_class WHERE relname LIKE 'some_table%' \n>ORDER BY relname;\n\nThanks for that tidbit.. maybe I'll cron something else to grab the values \nonce a day or so and archive them in another table for history.. make my \nlife easier. ;)\n\n\n>AL> Related kernel configuration options:\n>\n>AL> ...\n>AL> cpu I686_CPU\n>AL> maxusers 256\n>\n>let the system autoconfigure maxusers...\n\nAre you sure about this? I have always understood that explicitly setting \nthis value was the best thing to do if you knew the maximum number of users \nyou would encounter, as the kernel doesn't have to 'guess' at structure \nsizes and the like, or grow them later..\n\n\n>AL> ...\n>AL> options MAXDSIZ=\"(1024UL*1024*1024)\"\n>AL> options MAXSSIZ=\"(512UL*1024*1024)\"\n>AL> options DFLDSIZ=\"(512UL*1024*1024)\"\n>\n>above are ok at defaults.\n\nThese are related to something else.. a linux developer on the system used \nto the way it'll always allow you access to all the memory on a machine and \njust kill a random process to give you memory if you allocated more than \nwas free.. ;)\n\nHe didn't know processes were getting killed, but the defaults turned out \nto be not high enough. This will get turned back down to default once he's \ndone migrating everything into the new database and his app no longer needs \nto run there. I just mentioned them in case they could adversely affect \nperformance as-is.\n\n\n>AL> options SHMMAXPGS=65536\n>\n>perhaps bump this and increase your shared buffers. 
I find that if\n>you do lots of writes, having a few more shared buffers helps.\n\nAny ideas how much of a bump, or does that depend entirely on me and I \nshould just play with it? Would doubling it be too much of a bump?\n\n\n>AL> options SHMMAX=\"(SHMMAXPGS*PAGE_SIZE+1)\"\n>\n>you don't need to explicitly set this... it is automatically set based\n>on the above setting.\n\nI'm an explicit kind of guy. ;)\n\n\n>AL> relevant postgresql.conf options:\n>\n>AL> max_fsm_pages = 2000000\n>\n>this may be overkill. I currently run with 1000000\n\nAt only 6 bytes each I thought 12M wasn't too much to spare for the sake of \nmaking sure there is enough room there for everything.. I am watching my \nfile sizes and vacuum numbers to try and tune this value but it's an \narduous process.\n\n\n>AL> effective_cache_size = 49152 # 384MB, this could probably be higher\n>\n>the current recommendation for freebsd is to set this to:\n>\n>`sysctl -n vfs.hibufspace` / 8192\n>\n>where 8192 is the blocksize used by postgres.\n\nThat comes out as 25520.. I have it at 384MB because I wanted to take the \n256MB on the RAID controller into account as well.\n\nI'm not entirely certain how much of that 256MB is available, and for what \nkind of cache.. I know the i960 based controllers all need to set aside at \nleast 16MB for their \"OS\" and it isn't used for cache, not sure about ARM \nbased cards like the ICP.. but I don't think assuming 128MB is too much of \na stretch, or even 192MB.\n\n\n\n>You may also want to increase the max buffer space used by FreeBSD,\n>which apparently is capped at 200M (I think) by dafault. I'll have\n>to look up how to bump that, as most likely you have plenty of RAM\n>sitting around unused. What does \"top\" say about that when you're\n>busy?\n\nYes that hibufspace value comes out to 200MB.. (199.375 really, odd)\n\ntop usually shows me running with that same value.. 199MB.. and most of the \ntime, with maybe 1.2GB free in the Inact area..\n\nI'll see if sysctl lets me write this value, or if it's a kernel config \noption I missed, unless you have remembered between then and now. I'd \nreally like to have this higher, say around 512MB.. more if I can spare it \nafter watching for a bit.\n\nGiven this and the above about the controllers onboard cache (not to \nmention the per-drive cache) do you think I'll still need to lower \neffective_cache_size?\n\nThanks..\n\n-Allen\n\n", "msg_date": "Fri, 24 Oct 2003 04:32:12 -0400", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n\nAL> maxusers 256\n>> \n>> let the system autoconfigure maxusers...\n\nAL> Are you sure about this? I have always understood that explicitly\nAL> setting this value was the best thing to do if you knew the maximum\n\nYes, recent freebsd kernels autosize various tables and limits based\non existing RAM. It does pretty well.\n\n\nAL> These are related to something else.. a linux developer on the system\nAL> used to the way it'll always allow you access to all the memory on a\n\nAhhh... I guess we don't believe in multi-user systems ;-)\n\nAL> options SHMMAXPGS=65536\n>> \n>> perhaps bump this and increase your shared buffers. I find that if\n>> you do lots of writes, having a few more shared buffers helps.\n\nAL> Any ideas how much of a bump, or does that depend entirely on me and I\nAL> should just play with it? 
Would doubling it be too much of a bump?\n\nI use 262144 for SHMMAXPGS and SHMALL. I also use about 30000 shared\nbuffers.\n\nAL> I'll see if sysctl lets me write this value, or if it's a kernel\nAL> config option I missed, unless you have remembered between then and\n\nyou need to bump some header file constant and rebuild the kernel. it\nalso increases the granularity of how the buffer cache is used, so I'm\nnot sure how it affects overall system. nothing like an experiment...\n\nAL> Given this and the above about the controllers onboard cache (not to\nAL> mention the per-drive cache) do you think I'll still need to lower\nAL> effective_cache_size?\n\nIt is hard to say. If you tell PG you have more than you do, I don't\nknow what kind of decisions it will make incorrectly. I'd rather be\nconservative and limit it to the RAM that the system says it will\nuse. The RAM in the controller is not additive to this -- it is\nredundant to it, since all data goes thru that cache into the main\nmemory.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Fri, 24 Oct 2003 11:49:34 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n\nAL> Well I had the vacuums running every 15 minutes for a while.. via a\nAL> simple cron script I wrote just to make sure no more than one vacuum\nAL> ran at once, and to 'nice' the job.. but performance on the db does\n\n\"nice\"-ing the client does nothing for the backend server that does\nthe actual work. You need to track down the PID of the backend server\nrunning the vacuum and renice *it* to get any effect.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Fri, 24 Oct 2003 11:50:47 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": "On Fri, 24 Oct 2003, Vivek Khera wrote:\n\n> >>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n> \n> AL> Well I had the vacuums running every 15 minutes for a while.. via a\n> AL> simple cron script I wrote just to make sure no more than one vacuum\n> AL> ran at once, and to 'nice' the job.. but performance on the db does\n> \n> \"nice\"-ing the client does nothing for the backend server that does\n> the actual work. 
You need to track down the PID of the backend server\n> running the vacuum and renice *it* to get any effect.\n\nNote that Tom has mentioned problems with possible deadlocks when nicing \nindividual backends before, so proceed with caution here.\n\n", "msg_date": "Fri, 24 Oct 2003 10:32:52 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"sm\" == scott marlowe <scott.marlowe> writes:\n\n\nsm> Note that Tom has mentioned problems with possible deadlocks when nicing \nsm> individual backends before, so proceed with caution here.\n\nI can see possible starvation, but if scheduling changes cause\ndeadlocks, then there's something wrong with the design.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Fri, 24 Oct 2003 12:47:19 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": "Vivek Khera wrote:\n> >>>>> \"sm\" == scott marlowe <scott.marlowe> writes:\n> \n> \n> sm> Note that Tom has mentioned problems with possible deadlocks when nicing \n> sm> individual backends before, so proceed with caution here.\n> \n> I can see possible starvation, but if scheduling changes cause\n> deadlocks, then there's something wrong with the design.\n\nYes, I think Tom's concern was priority inversion, where a low priority\nprocess holds a lock while a higher one waits for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 24 Oct 2003 16:50:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": "Pardon this for looking somewhat \"weird\" but it seems I'm not getting all \nthe messages to the list.. I've noticed the past few days a lot of them are \ncoming out of order as well..\n\nSo, this was copy/pasted from the web archive of the list..\n\nVivek Khera wrote:\n> >>>>> \"AL\" == Allen Landsidel <all ( at ) biosys ( dot ) net> writes:\n>\n>AL> maxusers 256\n> >> let the system autoconfigure maxusers...\n>\n>AL> Are you sure about this? I have always understood that explicitly\n>\n>Yes, recent freebsd kernels autosize various tables and limits based\n>on existing RAM. It does pretty well.\n\nI'll disable it then and see how it goes.\n\n>AL> These are related to something else.. a linux developer on the system\n>AL> used to the way it'll always allow you access to all the memory on a\n>\n>Ahhh... I guess we don't believe in multi-user systems ;-)\n\nNo, that's a foreign concept to a lot of people it seems. As a matter of \ntrivia, I first suggested we run this on another server instead and hit the \ndb remotely, as it's only going to be a \"run once\" type of thing that \nconverts the old system to the new one but was rebuffed. Yesterday during \na test run the thing ran over the 1GB limit, failed on some new() or other \nand dumped core. I couldn't bring the db down at that time to update the \nkernel, so we ran it on another box that has MAXDSIZ set to 1.5GB and it \nran ok, but took about six hours.. 
so I'll be upping the that value yet \nagain for this one special run this weekend when we do the *real* switch \nover, then putting it back down once we're all done.\n\nI can deal with it since it's not going to be \"normal\" but simply a one-off \ntype thing.\n\nFWIW the same kind of thing has happened to me with this postgres install; \nOccasionally large atomic queries like DELETE will fail for the same reason \n(out of memory) if there are a lot of rows to get removed, and TRUNCATE \nisn't an option since there are FKs on the table in question. This is an \nannoyance I'd be interested to hear how other people work around, but only \na minor one.\n\n>I use 262144 for SHMMAXPGS and SHMALL. I also use about 30000 shared\n>buffers.\n\nI believe I had it fairly high once before and didn't notice much of an \nimprovement but I'll fool with numbers around where you suggest.\n\n>AL> I'll see if sysctl lets me write this value, or if it's a kernel\n>AL> config option I missed, unless you have remembered between then and\n>\n>you need to bump some header file constant and rebuild the kernel. it\n>also increases the granularity of how the buffer cache is used, so I'm\n>not sure how it affects overall system. nothing like an experiment...\n\nSo far I've found a whole lot of questions about this, but nothing about \nthe constant. The sysctl (vfs.hibufspace I believe is the one) is read \nonly, although I should be able to work around that via /boot/loader.conf \nif I can't find the kernel option.\n\n>AL> Given this and the above about the controllers onboard cache (not to\n>AL> mention the per-drive cache) do you think I'll still need to lower\n>AL> effective_cache_size?\n>\n>It is hard to say. If you tell PG you have more than you do, I don't\n>know what kind of decisions it will make incorrectly. I'd rather be\n>conservative and limit it to the RAM that the system says it will\n>use. The RAM in the controller is not additive to this -- it is\n>redundant to it, since all data goes thru that cache into the main\n>memory.\n\nA very good point, I don't know why I thought they may hold different \ndata. I think it could differ somewhat but probably most everything in the \ncontroller cache will be duplicated in the OS cache, provided the OS cache \nis at least as large.\n\nA separate reply concatenated here to a message I actually did get \ndelivered via email:\n\nAt 16:50 10/24/2003, Bruce Momjian wrote:\n>Vivek Khera wrote:\n> > >>>>> \"sm\" == scott marlowe <scott.marlowe> writes:\n> >\n> >\n> > sm> Note that Tom has mentioned problems with possible deadlocks when \n> nicing\n> > sm> individual backends before, so proceed with caution here.\n> >\n> > I can see possible starvation, but if scheduling changes cause\n> > deadlocks, then there's something wrong with the design.\n>\n>Yes, I think Tom's concern was priority inversion, where a low priority\n>process holds a lock while a higher one waits for it.\n\n1. Vivek, you were absolutely right about the backend process not being \nlowered in priority by nice'ing the psql. Yet another thing that \"just \ndidn't occur\" when I wrote the script.\n\n2. Vivek and Bruce (and even Tom), \"VACUUM ANALYZE (VERBOSE)\" isn't \nsupposed to lock anything though, right? 
I can see this being a possible \nproblem for other queries that do lock things, but between Vivek pointing \nout that the nice isn't *really* affecting the vacuum (as I just run one \nquery db-wide) and the fact that the vacuum doesn't lock, I don't think \nit's hurting (or helping) in this case.\n\nHowever, I do the same thing with the reindex, so I'll definitely be taking \nit out there, as that one does lock.. although I would think the worst this \nwould do would be a making the index unavailable and forcing a seq scan.. \nis that not the case?\n\n", "msg_date": "Fri, 24 Oct 2003 20:11:52 -0400", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n\nAL> However, I do the same thing with the reindex, so I'll definitely be\nAL> taking it out there, as that one does lock.. although I would think\nAL> the worst this would do would be a making the index unavailable and\nAL> forcing a seq scan.. is that not the case?\n\nNope. *All* access to the table is locked out.\n\n\n\n\nAL> ---------------------------(end of broadcast)---------------------------\nAL> TIP 5: Have you checked our extensive FAQ?\n\nAL> http://www.postgresql.org/docs/faqs/FAQ.html\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 27 Oct 2003 11:03:35 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": ">>>>> \"AL\" == Allen Landsidel <[email protected]> writes:\n\n>> you need to bump some header file constant and rebuild the kernel. it\n>> also increases the granularity of how the buffer cache is used, so I'm\n>> not sure how it affects overall system. nothing like an experiment...\n\nAL> So far I've found a whole lot of questions about this, but nothing\nAL> about the constant. The sysctl (vfs.hibufspace I believe is the one)\nAL> is read only, although I should be able to work around that via\nAL> /boot/loader.conf if I can't find the kernel option.\n\nHere's what I have in my personal archive. I have not tried it yet.\nBKVASIZE is in a system header file, so is not a regular \"tunable\" for\na kernel. That is, you must muck with the source files to change it,\nwhich make for maintenance headaches.\n\n\n\nFrom: Sean Chittenden <[email protected]>\nSubject: Re: go for a script! / ex: PostgreSQL vs. MySQL\nNewsgroups: ml.postgres.performance\nTo: Vivek Khera <[email protected]>\nCc: [email protected]\nDate: Mon, 13 Oct 2003 12:04:46 -0700\nOrganization: none\n\n> >> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> >> \n> >> I've used it for my dedicated servers. Is this calculation correct?\n> \n> SC> Yes, or it's real close at least. vfs.hibufspace is the amount\n> of SC> kernel space that's used for caching IO operations (minus the\n> \n> I'm just curious if anyone has a tip to increase the amount of\n> memory FreeBSD will use for the cache?\n\nRecompile your kernel with BKVASIZE set to 4 times its current value\nand double your nbuf size. According to Bruce Evans:\n\n\"Actually there is a way: the vfs_maxbufspace gives the amount of\nspace reserved for buffer kva (= nbuf * BKVASIZE). 
nbuf is easy to\nrecover from this, and the buffer kva space may be what is wanted\nanyway.\"\n[snip]\n\"I've never found setting nbuf useful, however. I want most\nparametrized sizes including nbuf to scale with resource sizes, and\nit's only with RAM sizes of similar sizes to the total virtual address\nsize that its hard to get things to fit. I haven't hit this problem\nmyself since my largest machine has only 1GB. I use an nbuf of\nsomething like twice the default one, and a BKVASIZE of 4 times the\ndefault. vfs.maxbufspace ends up at 445MB on the machine with 1GB, so\nit is maxed out now.\"\n\nYMMV.\n\n-sc\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 27 Oct 2003 11:06:11 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" }, { "msg_contents": "On Fri, 2003-10-24 at 20:11, Allen Landsidel wrote:\n> However, I do the same thing with the reindex, so I'll definitely be taking \n> it out there, as that one does lock.. although I would think the worst this \n> would do would be a making the index unavailable and forcing a seq scan.. \n> is that not the case?\n\nNo, it exclusively locks the table. It has been mentioned before that we\nshould probably be able to fall back to a seqscan while the REINDEX is\ngoing on, but that's not currently done.\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 11:07:39 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: My own performance/tuning q&a" } ]
[ { "msg_contents": "On Friday 24 October 2003 02:21, you wrote:\n> Thank You Sir,....\n> I'm very Greatfull with your answer ,\n> 1. I don't use Cywig, my postgres is 7.4 , and I only\n> runs it from command prompt on windows(postmaster);\n\nDo you mean the native port from here:\nhttp://momjian.postgresql.org/main/writings/pgsql/win32.html\nPlease be aware that this is experimental, and you might expect problems.\n\nIf not, can you say where you got it from?\n\n> 2.I don't know what i must tuning on (is \"PGHBACONF\"\n> isn't it ?),and I use it in Delphi ,when I run the\n> program ,It runs very slow not like i run it with\n> mySQL ,What is The Problem , Please Help Me....\n\nThere are two important files:\npg_hba.conf - controls access to the database\npostgresql.conf - config/tuning\n\nSee the \"performance\" section at:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nThere are two articles there which cover tuning and adds information about the \nvarious other settings.\n\n> My computer is :\n> PIII 600B, 256 MB, Postgres 7.4 Beta (without cywig),\n> delphi version 7, and my connection to DB with\n> dbexpress,\n\nOK - does dbexpress use ODBC, or does it connect directly?\n\n> Dedy Styawan\n> sorry my english is not so good...\n\nYour English is fine sir. If you can subscribe to the performance list, \ndetails at:\nhttp://www.postgresql.org/lists.html\n\nThat way, others will be able to help too. I've CC'd this to the performance \nlist ready for you.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 24 Oct 2003 10:07:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.4 beta for windows" } ]
[ { "msg_contents": "Just an interesting comparison:\r\n \r\nI don't have the specifics, but a Dell 2 x 2.4GHZ/512KB L3 / 2GB RAM machine timed a query much faster than an older Sun E4000 with 6 x ~300MHZ CPUs / 2GB RAM. One on RH(8 or 9, don't remember) and one on Solaris 9.\r\n \r\n-anjan\r\n \r\n-----Original Message----- \r\nFrom: William Yu [mailto:[email protected]] \r\nSent: Tue 10/21/2003 12:12 PM \r\nTo: [email protected] \r\nCc: \r\nSubject: Re: [PERFORM] PostgreSQL data on a NAS device ?\r\n\r\n\r\n\r\n\t> I have never worked with a XEON CPU before. Does anyone know how it performs\r\n\t> running PostgreSQL 7.3.4 / 7.4 on RedHat 9 ? Is it faster than a Pentium 4?\r\n\t> I believe the main difference is cache memory, right? Aside from cache mem,\r\n\t> it's basically a Pentium 4, or am I wrong?\r\n\t\r\n\tWell, see the problem is of course, there's so many flavors of P4s and\r\n\tXeons that it's hard to tell which is faster unless you specify the\r\n\texact model. And even then, it would depend on the workload. Would a\r\n\tXeon/3GHz/2MB L3/400FSB be faster than a P4C/3GHz/800FSB? No idea as no\r\n\tone has complete number breakdowns on these comparisons. Oh yeah, you\r\n\tcould get a big round number that says on SPEC or something one CPU is\r\n\tfaster than the other but whether that's faster for Postgres and your PG\r\n\tapp is a totally different story.\r\n\t\r\n\tThat in mind, I wouldn't worry about it. The CPU is probably plenty fast\r\n\tfor what you need to do. I'd look into two things in the server: memory\r\n\tand CPU expandability. I know you already plan on 4GB but you may need\r\n\teven more in the future. Few things can dramatically improve performance\r\n\tmore than moving disk access to disk cache. And if there's a 2nd socket\r\n\twhere you can pop another CPU in, that would leave you extra room if\r\n\tyour server becomes CPU limited.\r\n\t\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 6: Have you searched our list archives?\r\n\t\r\n\t http://archives.postgresql.org\r\n\t\r\n\r\n", "msg_date": "Fri, 24 Oct 2003 15:22:45 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL data on a NAS device ?" } ]
[ { "msg_contents": "Just thought i'd mention that on top of optimising postgres as much as \npossible, don't forget how much something like memcached can do for you\n\nhttp://www.danga.com/memcached/\n\nwe use it on www.last.fm - most pages only take one or two database hits, \ncompared with 30 to 40 when memcache is turned off. \n\nRich.\n\n", "msg_date": "Fri, 24 Oct 2003 22:06:10 +0100", "msg_from": "Richard Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Memcache" } ]
[ { "msg_contents": "Here's the basic issue: PostgreSQL doesn't use indexes unless a query\ncriterion is of exactly the same type as the index type. This occurs\neven when a cast would enable the use of an index and greatly improve\nperformance. I understand that casting is needed to use an index and\nwill therefore affect performance -- the part I don't understand is why\npostgresql doesn't automatically cast query arguments to the column\ntype, thereby enabling indexes on that column.\n\n\nI have a table that looks like this (extra cols, indexes, and fk\nconstraints removed):\n\n unison@csb=# \\d paprospect2\n Table \"unison.paprospect2\"\n Column | Type | Modifiers\n -------------+---------+-------------------------------------------------------------------\n pseq_id | integer | not null\n run_id | integer | not null\n pmodel_id | integer | not null\n svm | real |\n Indexes: paprospect2_search1 btree (pmodel_id, run_id, svm),\n\n\n\nI often search for pseq_ids based on all of pmodel_id, run_id, and svm\nthreshold as below, hence the multi-column index.\n\nWithout an explicit cast of the svm criterion:\n\n unison@csb=# explain select pseq_id from paprospect2 where pmodel_id=8210 and run_id=1 and svm>=11;\n Index Scan using paprospect2_search2 on paprospect2 (cost=0.00..43268.93 rows=2 width=4)\n Index Cond: ((pmodel_id = 8210) AND (run_id = 1))\n Filter: (svm >= 11::double precision)\n\nAnd with an explicit cast to real (the same as the column type and\nindexed type):\n\n unison@csb=# explain select pseq_id from paprospect2 where pmodel_id=8210 and run_id=1 and svm>=11::real;\n Index Scan using paprospect2_search1 on paprospect2 (cost=0.00..6.34 rows=2 width=4)\n Index Cond: ((pmodel_id = 8210) AND (run_id = 1) AND (svm >= 11::real))\n\n\nNote two things above: 1) The explicit cast greatly reduces the\npredicted (and actual) cost. 2) The uncasted query eventually casts svm\nto double precision, which seems odd since the column itself is real\n(that is, it eventually does cast, but to the \"wrong\" type).\n\nFor small queries (returning ~10 rows), this is worth 100x in speed (9ms\nv. 990ms... in absolute terms, no big deal). For larger result sets\n(~200 rows), I've seen more like 1000x speed increases by using an\nexplicit cast. For the larger queries, this can mean seconds versus many\nminutes.\n\nHaving to explicitly cast criterion is very non-intuitive. Moreover, it\nseems quite straightforward that PostgreSQL might incorporate casts (and\nperhaps even function calls like upper() for functional indexes) into\nits query strategy optimization. (I suppose functional indexes would\napply only to immutable fx only, but that's fine.)\n\nThanks,\nReece\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n\n\n\n\n\n\nHere's the basic issue: PostgreSQL doesn't use indexes unless a query criterion is of exactly the same type as the index type. This occurs even when a cast would enable the use of an index and greatly improve performance. 
I understand that casting is needed to use an index and will therefore affect performance -- the part I don't understand is why postgresql doesn't automatically cast query arguments to the column type, thereby enabling indexes on that column.\n\n\nI have a table that looks like this (extra cols, indexes, and fk constraints removed):\n\nunison@csb=# \\d paprospect2\n                                Table \"unison.paprospect2\"\n   Column    |  Type   |                             Modifiers\n-------------+---------+-------------------------------------------------------------------\n pseq_id     | integer | not null\n run_id      | integer | not null\n pmodel_id   | integer | not null\n svm         | real    |\nIndexes: paprospect2_search1 btree (pmodel_id, run_id, svm),\n\n\nI often search for pseq_ids based on all of pmodel_id, run_id, and svm threshold as below, hence the multi-column index.\n\nWithout an explicit cast of the svm criterion:\n\nunison@csb=# explain select pseq_id from paprospect2 where pmodel_id=8210 and run_id=1 and svm>=11;\n Index Scan using paprospect2_search2 on paprospect2  (cost=0.00..43268.93 rows=2 width=4)\n   Index Cond: ((pmodel_id = 8210) AND (run_id = 1))\n   Filter: (svm >= 11::double precision)\n\nAnd with an explicit cast to real (the same as the column type and indexed type):\n\nunison@csb=# explain select pseq_id from paprospect2 where pmodel_id=8210 and run_id=1 and svm>=11::real;\n Index Scan using paprospect2_search1 on paprospect2  (cost=0.00..6.34 rows=2 width=4)\n   Index Cond: ((pmodel_id = 8210) AND (run_id = 1) AND (svm >= 11::real))\n\n\nNote two things above: 1) The explicit cast greatly reduces the predicted (and actual) cost. 2) The uncasted query eventually casts svm to double precision, which seems odd since the column itself is real (that is, it eventually does cast, but to the \"wrong\" type).\n\nFor small queries (returning ~10 rows), this is worth 100x in speed (9ms v. 990ms... in absolute terms, no big deal). For larger result sets (~200 rows), I've seen more like 1000x speed increases by using an explicit cast. For the larger queries, this can mean seconds versus many minutes.\n\nHaving to explicitly cast criterion is very non-intuitive. Moreover, it seems quite straightforward that PostgreSQL might incorporate casts (and perhaps even function calls like upper() for functional indexes) into its query strategy optimization. (I suppose functional indexes would apply only to immutable fx only, but that's fine.)\n\nThanks,\nReece\n\n\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0", "msg_date": "Sat, 25 Oct 2003 10:49:00 -0700", "msg_from": "Reece Hart <[email protected]>", "msg_from_op": true, "msg_subject": "explicit casting required for index use" }, { "msg_contents": "On Sat, 2003-10-25 at 13:49, Reece Hart wrote:\n> Having to explicitly cast criterion is very non-intuitive. Moreover,\n> it seems quite straightforward that PostgreSQL might incorporate casts\n\nThis is a well-known issue with the query optimizer -- search the\nmailing list archives for lots more information. 
The executive summary\nis that this is NOT a trivial issue to fix, and it hasn't been fixed in\n7.4, but there is some speculation on how to fix it at some point in the\nfuture.\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 04:18:53 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explicit casting required for index use" } ]
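The workaround from the thread above, spelled out as a short sketch (table and column names are the ones from the thread): cast the literal to the indexed column's exact type so the multi-column index can be matched. Without the cast the integer literal is compared as double precision against the real column, and the condition ends up as a filter instead of an index condition.

EXPLAIN
SELECT pseq_id
FROM paprospect2
WHERE pmodel_id = 8210
  AND run_id = 1
  AND svm >= 11::real;   -- without ::real the comparison becomes float8

-- the same pattern applies to other narrow types, e.g.
--   WHERE int2_col = 42::int2
--   WHERE int8_col = 42::int8

As the reply notes, this is a known planner limitation through 7.4, so the explicit cast is the usual workaround rather than a schema problem.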
[ { "msg_contents": "I am in the process of adding PostgreSQL support for an application, in \naddition to Oracle and MS SQL.\nI am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III board.\n\nI have a query that generally looks like this:\n\nSELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string' \nAND t2.q=1\n\nThis query is strikingly slow (about 100 sec when both t1 and t2 has \nabout 1,200 records, compare with less than 4 sec with MS SQL and Oracle)\n\nThe strange thing is that if I remove one of the last 2 conditions \n(doesn't matter which one), I get the same performance like with the \nother databases.\nSince in this particular case both conditions ( t2.p='string', t2.q=1) \nare not required, I can't understand why having both turns the query so \nslow.\nA query on table t2 alone is fast with or without the 2 conditions.\n\nI tired several alternatives, this one works pretty well:\n\nSELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND\n EXISTS (\n SELECT * FROM t2 t2a WHERE t2a.p='string' AND t2a.q=1 AND \nt2a.y=t2.y )\n\nSince the first query is simpler than the second, it seems to me like a bug.\n\nPlease advise\n\nYonatan\n\n\n\n\n\n\n\nI am in the process of  adding PostgreSQL support for an application,\nin addition to Oracle and MS SQL.\nI am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III\nboard.\n\nI have a query that generally looks like this:\n\nSELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string' AND t2.q=1\n\nThis query is strikingly slow (about 100 sec when both t1 and t2 has\nabout 1,200 records, compare with less than 4 sec with MS SQL and\nOracle)\n\nThe strange thing is that if I remove one of the last 2 conditions\n(doesn't matter which one), I get the same performance like with the\nother databases.\nSince in this particular case both conditions ( t2.p='string', t2.q=1)\nare not required, I can't understand why having both turns the query so\nslow.\nA query on table t2 alone is fast with or without the 2 conditions.\n\nI tired several alternatives, this one works pretty well:\n\nSELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND\n     EXISTS ( \n             SELECT * FROM t2 t2a WHERE t2a.p='string' AND t2a.q=1 AND t2a.y=t2.y )\n\nSince the first query is simpler than the second, it seems to me like a\nbug.\n\nPlease advise\n\nYonatan", "msg_date": "Sun, 26 Oct 2003 00:25:37 +0300", "msg_from": "Yonatan Goraly <[email protected]>", "msg_from_op": true, "msg_subject": "Slow performance with no apparent reason" }, { "msg_contents": "I guess my first message was not accurate, since t1 is a view, that \nincludes t2.\n\nAttached are the real queries with their corresponding plans, the first \none takes 10.8 sec to execute, the second one takes 0.6 sec.\n\nTo simplify, I expanded the view, so the attached query refers to tables \nonly.\n\nMartijn van Oosterhout wrote:\n\n>Please supply EXPLAIN ANALYZE output.\n>\n>On Sun, Oct 26, 2003 at 12:25:37AM +0300, Yonatan Goraly wrote:\n> \n>\n>>I am in the process of adding PostgreSQL support for an application, in \n>>addition to Oracle and MS SQL.\n>>I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III board.\n>>\n>>I have a query that generally looks like this:\n>>\n>>SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string' \n>>AND t2.q=1\n>>\n>>This query is strikingly slow (about 100 sec when both t1 and t2 has \n>>about 1,200 records, compare with less than 4 sec with MS SQL and Oracle)\n>>\n>>The strange thing is that 
if I remove one of the last 2 conditions \n>>(doesn't matter which one), I get the same performance like with the \n>>other databases.\n>>Since in this particular case both conditions ( t2.p='string', t2.q=1) \n>>are not required, I can't understand why having both turns the query so \n>>slow.\n>>A query on table t2 alone is fast with or without the 2 conditions.\n>>\n>>I tired several alternatives, this one works pretty well:\n>>\n>>SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND\n>> EXISTS (\n>> SELECT * FROM t2 t2a WHERE t2a.p='string' AND t2a.q=1 AND \n>>t2a.y=t2.y )\n>>\n>>Since the first query is simpler than the second, it seems to me like a bug.\n>>\n>>Please advise\n>>\n>>Yonatan\n>> \n>>\n>\n> \n>", "msg_date": "Sun, 26 Oct 2003 01:26:22 +0300", "msg_from": "Yonatan Goraly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow performance with no apparent reason" }, { "msg_contents": "Please supply EXPLAIN ANALYZE output.\n\nOn Sun, Oct 26, 2003 at 12:25:37AM +0300, Yonatan Goraly wrote:\n> I am in the process of adding PostgreSQL support for an application, in \n> addition to Oracle and MS SQL.\n> I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III board.\n> \n> I have a query that generally looks like this:\n> \n> SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string' \n> AND t2.q=1\n> \n> This query is strikingly slow (about 100 sec when both t1 and t2 has \n> about 1,200 records, compare with less than 4 sec with MS SQL and Oracle)\n> \n> The strange thing is that if I remove one of the last 2 conditions \n> (doesn't matter which one), I get the same performance like with the \n> other databases.\n> Since in this particular case both conditions ( t2.p='string', t2.q=1) \n> are not required, I can't understand why having both turns the query so \n> slow.\n> A query on table t2 alone is fast with or without the 2 conditions.\n> \n> I tired several alternatives, this one works pretty well:\n> \n> SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND\n> EXISTS (\n> SELECT * FROM t2 t2a WHERE t2a.p='string' AND t2a.q=1 AND \n> t2a.y=t2.y )\n> \n> Since the first query is simpler than the second, it seems to me like a bug.\n> \n> Please advise\n> \n> Yonatan\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> \"All that is needed for the forces of evil to triumph is for enough good\n> men to do nothing.\" - Edmond Burke\n> \"The penalty good people pay for not being interested in politics is to be\n> governed by people worse than themselves.\" - Plato", "msg_date": "Sun, 26 Oct 2003 18:43:54 +1100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance with no apparent reason" }, { "msg_contents": "Ok, those figures look like you've mever run ANALYZE on that database at\nall, given you keep getting the default values. EXPLAIN ANALYZE would have\ngiven the actual number of matching rows.\n\nGiven that, the plans are probably extremely suboptimal. 
Also, do you have\n(unique) indexes on the columns that need it.\n\nSo the EXPLAIN ANALYZE output after running ANALYZE over your database would\nbe the next step.\n\nHope this helps,\n\nOn Sun, Oct 26, 2003 at 01:26:22AM +0300, Yonatan Goraly wrote:\n> I guess my first message was not accurate, since t1 is a view, that \n> includes t2.\n> \n> Attached are the real queries with their corresponding plans, the first \n> one takes 10.8 sec to execute, the second one takes 0.6 sec.\n> \n> To simplify, I expanded the view, so the attached query refers to tables \n> only.\n> \n> Martijn van Oosterhout wrote:\n> \n> >Please supply EXPLAIN ANALYZE output.\n> >\n> >On Sun, Oct 26, 2003 at 12:25:37AM +0300, Yonatan Goraly wrote:\n> > \n> >\n> >>I am in the process of adding PostgreSQL support for an application, in \n> >>addition to Oracle and MS SQL.\n> >>I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III \n> >>board.\n> >>\n> >>I have a query that generally looks like this:\n> >>\n> >>SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string' \n> >>AND t2.q=1\n> >>\n> >>This query is strikingly slow (about 100 sec when both t1 and t2 has \n> >>about 1,200 records, compare with less than 4 sec with MS SQL and Oracle)\n> >>\n> >>The strange thing is that if I remove one of the last 2 conditions \n> >>(doesn't matter which one), I get the same performance like with the \n> >>other databases.\n> >>Since in this particular case both conditions ( t2.p='string', t2.q=1) \n> >>are not required, I can't understand why having both turns the query so \n> >>slow.\n> >>A query on table t2 alone is fast with or without the 2 conditions.\n> >>\n> >>I tired several alternatives, this one works pretty well:\n> >>\n> >>SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND\n> >> EXISTS (\n> >> SELECT * FROM t2 t2a WHERE t2a.p='string' AND t2a.q=1 AND \n> >>t2a.y=t2.y )\n> >>\n> >>Since the first query is simpler than the second, it seems to me like a \n> >>bug.\n> >>\n> >>Please advise\n> >>\n> >>Yonatan\n> >> \n> >>\n> >\n> > \n> >\n\n> ------------------------------------------------------------------------------------------------------------\n> slow query(10 sec):\n> \n> select ent.ID,ent.TYPE,ent.STATUS,ent.NAME\n> \tfrom (select\n> e.ID, e.TYPE, e.STATUS, e.NAME\n> from\n> ENT_PROJECT e,\n> (select h.*,\n> CASE WHEN f1.ID=-1 THEN '' ELSE f1.NAME ||\n> CASE WHEN f2.ID=-1 THEN '' ELSE ' > ' || f2.NAME ||\n> CASE WHEN f3.ID=-1 THEN '' ELSE ' > ' || f3.NAME ||\n> CASE WHEN f4.ID=-1 THEN '' ELSE ' > ' || f4.NAME ||\n> CASE WHEN f5.ID=-1 THEN '' ELSE ' > ' || f5.NAME ||\n> CASE WHEN f6.ID=-1 THEN '' ELSE ' > ' || f6.NAME END END END END END END as PATH\n> \t\t from COMN_ATTR_HIERARCH h\n> \t\t \t join ENT_FOLDER f1 on h.FOLDER_ID_1=f1.ID\n> \t\t \t join ENT_FOLDER f2 on h.FOLDER_ID_2=f2.ID\n> \t\t \t join ENT_FOLDER f3 on h.FOLDER_ID_3=f3.ID\n> \t\t \t join ENT_FOLDER f4 on h.FOLDER_ID_4=f4.ID\n> \t\t \t join ENT_FOLDER f5 on h.FOLDER_ID_5=f5.ID\n> \t\t \t join ENT_FOLDER f6 on h.FOLDER_ID_6=f6.ID\n> \t) path\n> \twhere e.STATUS!=cast(-1 as numeric)\n> \t and e.ID = path.NODE_ID) ent , COMN_ATTR_HIERARCH hier\n> \t\t\t\twhere hier.NODE_ID=ent.ID and hier.HIERARCHY_ID='IMPLEMENTATION' and hier.DOMAIN=1\n> \n> \n> ------------------------------------------------------------------------------------------------------------\n> QUERY PLAN\n> Nested Loop (cost=1808.05..1955.27 rows=14 width=660)\n> Join Filter: (\"outer\".id = \"inner\".node_id)\n> -> Nested Loop (cost=0.00..10.82 rows=1 
width=244)\n> -> Index Scan using idx_hierarch_hierarch_id on comn_attr_hierarch hier (cost=0.00..5.98 rows=1 width=32)\n> Index Cond: ((hierarchy_id = 'IMPLEMENTATION'::bpchar) AND (\"domain\" = 1::numeric))\n> -> Index Scan using pk_ent_project on ent_project e (cost=0.00..4.83 rows=1 width=212)\n> Index Cond: (\"outer\".node_id = e.id)\n> Filter: (status <> -1::numeric)\n> -> Materialize (cost=1910.33..1910.33 rows=2730 width=416)\n> -> Merge Join (cost=1808.05..1910.33 rows=2730 width=416)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_6)\n> -> Index Scan using pk_ent_folder on ent_folder f6 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1808.05..1814.88 rows=2730 width=384)\n> Sort Key: h.folder_id_6\n> -> Merge Join (cost=1275.45..1377.73 rows=2730 width=384)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_5)\n> -> Index Scan using pk_ent_folder on ent_folder f5 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1275.45..1282.28 rows=2730 width=352)\n> Sort Key: h.folder_id_5\n> -> Merge Join (cost=1017.37..1119.64 rows=2730 width=352)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_4)\n> -> Index Scan using pk_ent_folder on ent_folder f4 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1017.37..1024.19 rows=2730 width=320)\n> Sort Key: h.folder_id_4\n> -> Merge Join (cost=759.28..861.56 rows=2730 width=320)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_3)\n> -> Index Scan using pk_ent_folder on ent_folder f3 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=759.28..766.11 rows=2730 width=288)\n> Sort Key: h.folder_id_3\n> -> Merge Join (cost=501.20..603.47 rows=2730 width=288)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_2)\n> -> Index Scan using pk_ent_folder on ent_folder f2 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=501.20..508.02 rows=2730 width=256)\n> Sort Key: h.folder_id_2\n> -> Merge Join (cost=243.11..345.39 rows=2730 width=256)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_1)\n> -> Index Scan using pk_ent_folder on ent_folder f1 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=243.11..249.94 rows=2730 width=224)\n> Sort Key: h.folder_id_1\n> -> Seq Scan on comn_attr_hierarch h (cost=0.00..87.30 rows=2730 width=224)\n> \n> \n> ------------------------------------------------------------------------------------------------------------\n> Fast query (.6 sec):\n> \n> select ent.ID,ent.TYPE,ent.STATUS,ent.NAME\n> \tfrom (select\n> e.ID, e.TYPE, e.STATUS, e.NAME\n> from\n> ENT_PROJECT e,\n> (select h.*,\n> CASE WHEN f1.ID=-1 THEN '' ELSE f1.NAME ||\n> CASE WHEN f2.ID=-1 THEN '' ELSE ' > ' || f2.NAME ||\n> CASE WHEN f3.ID=-1 THEN '' ELSE ' > ' || f3.NAME ||\n> CASE WHEN f4.ID=-1 THEN '' ELSE ' > ' || f4.NAME ||\n> CASE WHEN f5.ID=-1 THEN '' ELSE ' > ' || f5.NAME ||\n> CASE WHEN f6.ID=-1 THEN '' ELSE ' > ' || f6.NAME END END END END END END as PATH\n> \t\t from COMN_ATTR_HIERARCH h\n> \t\t \t join ENT_FOLDER f1 on h.FOLDER_ID_1=f1.ID\n> \t\t \t join ENT_FOLDER f2 on h.FOLDER_ID_2=f2.ID\n> \t\t \t join ENT_FOLDER f3 on h.FOLDER_ID_3=f3.ID\n> \t\t \t join ENT_FOLDER f4 on h.FOLDER_ID_4=f4.ID\n> \t\t \t join ENT_FOLDER f5 on h.FOLDER_ID_5=f5.ID\n> \t\t \t join ENT_FOLDER f6 on h.FOLDER_ID_6=f6.ID\n> \t) path\n> \twhere e.STATUS!=cast(-1 as numeric)\n> \t and e.ID = path.NODE_ID) ent , COMN_ATTR_HIERARCH hier\n> \t\t\t\twhere hier.NODE_ID=ent.ID and exists(\n> \t\t\tselect * from COMN_ATTR_HIERARCH h2 where h2.HIERARCHY_ID='IMPLEMENTATION' and h2.DOMAIN=1 and h2.NODE_ID=hier.NODE_ID\n> \t\t\t\tand 
h2.HIERARCHY_ID=hier.HIERARCHY_ID and h2.DOMAIN=hier.DOMAIN)\n> \n> \n> ------------------------------------------------------------------------------------------------------------\n> QUERY PLAN\n> Merge Join (cost=16145.60..16289.84 rows=18539 width=660)\n> Merge Cond: (\"outer\".id = \"inner\".node_id)\n> -> Merge Join (cost=13782.29..13863.08 rows=1358 width=244)\n> Merge Cond: (\"outer\".id = \"inner\".node_id)\n> -> Index Scan using pk_ent_project on ent_project e (cost=0.00..54.50 rows=995 width=212)\n> Filter: (status <> -1::numeric)\n> -> Sort (cost=13782.29..13785.70 rows=1365 width=32)\n> Sort Key: hier.node_id\n> -> Seq Scan on comn_attr_hierarch hier (cost=0.00..13711.21 rows=1365 width=32)\n> Filter: (subplan)\n> SubPlan\n> -> Index Scan using pk_comn_attr_hierarch on comn_attr_hierarch h2 (cost=0.00..4.99 rows=1 width=316)\n> Index Cond: ((\"domain\" = 1::numeric) AND (\"domain\" = $2) AND (node_id = $0))\n> Filter: ((hierarchy_id = 'IMPLEMENTATION'::bpchar) AND (hierarchy_id = $1))\n> -> Sort (cost=2363.32..2370.14 rows=2730 width=416)\n> Sort Key: h.node_id\n> -> Merge Join (cost=1808.05..1910.33 rows=2730 width=416)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_6)\n> -> Index Scan using pk_ent_folder on ent_folder f6 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1808.05..1814.88 rows=2730 width=384)\n> Sort Key: h.folder_id_6\n> -> Merge Join (cost=1275.45..1377.73 rows=2730 width=384)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_5)\n> -> Index Scan using pk_ent_folder on ent_folder f5 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1275.45..1282.28 rows=2730 width=352)\n> Sort Key: h.folder_id_5\n> -> Merge Join (cost=1017.37..1119.64 rows=2730 width=352)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_4)\n> -> Index Scan using pk_ent_folder on ent_folder f4 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=1017.37..1024.19 rows=2730 width=320)\n> Sort Key: h.folder_id_4\n> -> Merge Join (cost=759.28..861.56 rows=2730 width=320)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_3)\n> -> Index Scan using pk_ent_folder on ent_folder f3 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=759.28..766.11 rows=2730 width=288)\n> Sort Key: h.folder_id_3\n> -> Merge Join (cost=501.20..603.47 rows=2730 width=288)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_2)\n> -> Index Scan using pk_ent_folder on ent_folder f2 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=501.20..508.02 rows=2730 width=256)\n> Sort Key: h.folder_id_2\n> -> Merge Join (cost=243.11..345.39 rows=2730 width=256)\n> Merge Cond: (\"outer\".id = \"inner\".folder_id_1)\n> -> Index Scan using pk_ent_folder on ent_folder f1 (cost=0.00..52.00 rows=1000 width=32)\n> -> Sort (cost=243.11..249.94 rows=2730 width=224)\n> Sort Key: h.folder_id_1\n> -> Seq Scan on comn_attr_hierarch h (cost=0.00..87.30 rows=2730 width=224)\n> \n> ------------------------------------------------------------------------------------------------------------\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> \"All that is needed for the forces of evil to triumph is for enough good\n> men to do nothing.\" - Edmond Burke\n> \"The penalty good people pay for not being interested in politics is to be\n> governed by people worse than themselves.\" - Plato", "msg_date": 
"Sun, 26 Oct 2003 20:29:48 +1100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance with no apparent reason" }, { "msg_contents": "I run ANALYZE and the problem resolved\n\nThanks\n\n>Yonatan Goraly kirjutas P, 26.10.2003 kell 00:25:\n> \n>\n>>I am in the process of adding PostgreSQL support for an application,\n>>in addition to Oracle and MS SQL.\n>>I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III\n>>board.\n>>\n>>I have a query that generally looks like this:\n>>\n>>SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string'\n>>AND t2.q=1\n>>\n>>This query is strikingly slow (about 100 sec when both t1 and t2 has\n>>about 1,200 records, compare with less than 4 sec with MS SQL and\n>>Oracle)\n>> \n>>\n>\n>always send results of EXPLAIN ANALYZE if you ask for help on [PERFORM] \n>\n>knowing which indexes you have would also help.\n>\n>and you should have run ANALYZE too.\n>\n>-----------------\n>Hannu\n>\n>\n> \n>\n\n", "msg_date": "Sun, 26 Oct 2003 15:28:21 +0200", "msg_from": "Yonatan Goraly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow performance with no apparent reason" }, { "msg_contents": "Yonatan Goraly kirjutas P, 26.10.2003 kell 00:25:\n> I am in the process of adding PostgreSQL support for an application,\n> in addition to Oracle and MS SQL.\n> I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III\n> board.\n> \n> I have a query that generally looks like this:\n> \n> SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='string'\n> AND t2.q=1\n> \n> This query is strikingly slow (about 100 sec when both t1 and t2 has\n> about 1,200 records, compare with less than 4 sec with MS SQL and\n> Oracle)\n\nalways send results of EXPLAIN ANALYZE if you ask for help on [PERFORM] \n\nknowing which indexes you have would also help.\n\nand you should have run ANALYZE too.\n\n-----------------\nHannu\n\n", "msg_date": "Sun, 26 Oct 2003 16:38:16 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow performance with no apparent reason" } ]
[ { "msg_contents": "Hi,\n\nWe're in the process of setting up a new database server. The\napplication is an online rss aggregator which you can see at\nwww.fastbuzz.com (still running with the old hardware).\n\nThe new machine is a dual Xeon with 2 Gigs of ram \n\nThe OS is freebsd 4.9. \n\nshared_buffers = 10000\nsort_mem = 32768\neffective_cache_size = 25520 -- freebsd forumla: vfs.hibufspace / 8192\n\n1. While it seems to work correctly, I'm unclear on why this number is\ncorrect. 25520*8 = 204160 or 200 Megs. On a machine with 2 Gigs it\nseems like the number should be more like 1 - 1.5 Gigs.\n\n2. The main performance challenges are with the items table which has around\nfive million rows and grows at the rate of more than 100,000 rows a day.\n\nIf I do a select count(*) from the items table it take 55 - 60 seconds\nto execute. I find it interesting that it takes that long whether it's\ndoing it the first time and fetching the pages from disk or on\nsubsequent request where it fetches the pages from memory.\nI know that it's not touching the disks because I'm running an iostat in\na different window. Here's the explain analyze:\n\nexplain analyze select count(*) from items;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=245377.53..245377.53 rows=1 width=0) (actual time=55246.035..55246.040 rows=1 loops=1)\n -> Seq Scan on items (cost=0.00..233100.62 rows=4910762 width=0)\n(actual time=0.054..30220.641 rows=4910762 loops=1)\n Total runtime: 55246.129 ms\n(3 rows)\n\nand the number of pages:\n\nselect relpages from pg_class where relname = 'items';\n relpages\n----------\n 183993\n\n\nSo does it make sense that it would take close to a minute to count the 5 million rows\neven if all pages are in memory? \n\n3. Relpages is 183993 so file size should be 183993*8192 = 1507270656,\nroughly 1.5 gig. The actual disk size is 1073741824 or roughly 1 gig.\nWhy the difference?\n\n\n\n4. If I put a certain filter/condition on the query it tells me that it's doing\na sequential scan, and yet it takes less time than a full sequential\nscan:\n\nexplain analyze select count(*) from items where channel < 5000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=249141.54..249141.54 rows=1 width=0) (actual time=26224.603..26224.608 rows=1 loops=1)\n -> Seq Scan on items (cost=0.00..245377.52 rows=1505605 width=0) (actual time=7.599..17686.869 rows=1632057 loops=1)\n Filter: (channel < 5000)\n Total runtime: 26224.703 ms\n\n\nHow can it do a sequential scan and apply a filter to it in less time\nthan the full sequential scan? Is it actually using an index without\nreally telling me? \n\n\nHere's the structure of the items table\n\n Column | Type | Modifiers\n---------------+--------------------------+-----------\n articlenumber | integer | not null\n channel | integer | not null\n title | character varying |\n link | character varying |\n description | character varying |\n comments | character varying(500) |\n dtstamp | timestamp with time zone |\n signature | character varying(32) |\n pubdate | timestamp with time zone |\nIndexes:\n \"item_channel_link\" btree (channel, link)\n \"item_created\" btree (dtstamp)\n \"item_signature\" btree (signature)\n \"items_channel_article\" btree (channel, articlenumber)\n \"items_channel_tstamp\" btree (channel, dtstamp)\n\n\n5. 
Any other comments/suggestions on the above setup.\n\nThanks,\n\nDror\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Sun, 26 Oct 2003 11:44:50 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Various performance questions" }, { "msg_contents": "Dror Matalon <[email protected]> writes:\n\n> explain analyze select count(*) from items where channel < 5000;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=249141.54..249141.54 rows=1 width=0) (actual time=26224.603..26224.608 rows=1 loops=1)\n> -> Seq Scan on items (cost=0.00..245377.52 rows=1505605 width=0) (actual time=7.599..17686.869 rows=1632057 loops=1)\n> Filter: (channel < 5000)\n> Total runtime: 26224.703 ms\n> \n> \n> How can it do a sequential scan and apply a filter to it in less time\n> than the full sequential scan? Is it actually using an index without\n> really telling me? \n\nIt's not using the index and not telling you. \n\nIt's possible the count(*) operator itself is taking some time. Postgres\ndoesn't have to call it on the rows that don't match the where clause. How\nlong does \"explain analyze select 1 from items\" with and without the where\nclause take?\n\nWhat version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an\nint8 to store its count so it's not limited to 4 billion records.\nUnfortunately int8 is somewhat inefficient as it has to be dynamically\nallocated repeatedly. It's possible it's making a noticeable difference,\nespecially with all the pages in cache, though I'm a bit surprised. There's\nsome thought about optimizing this in 7.5.\n\n-- \ngreg\n\n", "msg_date": "26 Oct 2003 22:49:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:\n> Dror Matalon <[email protected]> writes:\n> \n> > explain analyze select count(*) from items where channel < 5000;\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------------------------------------------------\n> > Aggregate (cost=249141.54..249141.54 rows=1 width=0) (actual time=26224.603..26224.608 rows=1 loops=1)\n> > -> Seq Scan on items (cost=0.00..245377.52 rows=1505605 width=0) (actual time=7.599..17686.869 rows=1632057 loops=1)\n> > Filter: (channel < 5000)\n> > Total runtime: 26224.703 ms\n> > \n> > \n> > How can it do a sequential scan and apply a filter to it in less time\n> > than the full sequential scan? Is it actually using an index without\n> > really telling me? \n> \n> It's not using the index and not telling you. \n> \n> It's possible the count(*) operator itself is taking some time. Postgres\n\nI find it hard to believe that the actual counting would take a\nsignificant amount of time.\n\n> doesn't have to call it on the rows that don't match the where clause. How\n> long does \"explain analyze select 1 from items\" with and without the where\n> clause take?\n\nSame as count(*). Around 55 secs with no where clause, around 25 secs\nwith.\n\n> \n> What version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an\n\nThis is 7.4.\n\n> int8 to store its count so it's not limited to 4 billion records.\n> Unfortunately int8 is somewhat inefficient as it has to be dynamically\n> allocated repeatedly. 
It's possible it's making a noticeable difference,\n> especially with all the pages in cache, though I'm a bit surprised. There's\n> some thought about optimizing this in 7.5.\n> \n> -- \n> greg\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Sun, 26 Oct 2003 20:54:31 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "[email protected] (Dror Matalon) wrote:\n> On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:\n>> Dror Matalon <[email protected]> writes:\n>> \n>> > explain analyze select count(*) from items where channel < 5000;\n>> > QUERY PLAN\n>> > --------------------------------------------------------------------------------------------------------------------------\n>> > Aggregate (cost=249141.54..249141.54 rows=1 width=0) (actual time=26224.603..26224.608 rows=1 loops=1)\n>> > -> Seq Scan on items (cost=0.00..245377.52 rows=1505605 width=0) (actual time=7.599..17686.869 rows=1632057 loops=1)\n>> > Filter: (channel < 5000)\n>> > Total runtime: 26224.703 ms\n>> > \n>> > \n>> > How can it do a sequential scan and apply a filter to it in less time\n>> > than the full sequential scan? Is it actually using an index without\n>> > really telling me? \n>> \n>> It's not using the index and not telling you. \n>> \n>> It's possible the count(*) operator itself is taking some time. Postgres\n>\n> I find it hard to believe that the actual counting would take a\n> significant amount of time.\n\nMost of the time involves:\n\n a) Reading each page of the table, and\n b) Figuring out which records on those pages are still \"live.\"\n\nWhat work were you thinking was involved in doing the counting?\n\n>> doesn't have to call it on the rows that don't match the where clause. How\n>> long does \"explain analyze select 1 from items\" with and without the where\n>> clause take?\n>\n> Same as count(*). Around 55 secs with no where clause, around 25 secs\n> with.\n\nGood; at least that's consistent...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nSigns of a Klingon Programmer #2: \"You question the worthiness of my\ncode? I should kill you where you stand!\"\n", "msg_date": "Mon, 27 Oct 2003 01:04:49 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:\n> [email protected] (Dror Matalon) wrote:\n> > On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:\n> >> Dror Matalon <[email protected]> writes:\n> >> \n> >> > explain analyze select count(*) from items where channel < 5000;\n> >> > QUERY PLAN\n> >> > --------------------------------------------------------------------------------------------------------------------------\n> >> > Aggregate (cost=249141.54..249141.54 rows=1 width=0) (actual time=26224.603..26224.608 rows=1 loops=1)\n> >> > -> Seq Scan on items (cost=0.00..245377.52 rows=1505605 width=0) (actual time=7.599..17686.869 rows=1632057 loops=1)\n> >> > Filter: (channel < 5000)\n> >> > Total runtime: 26224.703 ms\n> >> > \n> >> > \n> >> > How can it do a sequential scan and apply a filter to it in less time\n> >> > than the full sequential scan? Is it actually using an index without\n> >> > really telling me? \n> >> \n> >> It's not using the index and not telling you. 
\n> >> \n> >> It's possible the count(*) operator itself is taking some time. Postgres\n> >\n> > I find it hard to believe that the actual counting would take a\n> > significant amount of time.\n> \n> Most of the time involves:\n> \n> a) Reading each page of the table, and\n> b) Figuring out which records on those pages are still \"live.\"\n\nThe table has been VACUUM ANALYZED so that there are no \"dead\" records.\nIt's still not clear why select count() would be slower than select with\na \"where\" clause.\n\n> \n> What work were you thinking was involved in doing the counting?\n\nI was answering an earlier response that suggested that maybe the actual\ncounting took time so it would take quite a bit longer when there are\nmore rows to count.\n\n> \n> >> doesn't have to call it on the rows that don't match the where clause. How\n> >> long does \"explain analyze select 1 from items\" with and without the where\n> >> clause take?\n> >\n> > Same as count(*). Around 55 secs with no where clause, around 25 secs\n> > with.\n> \n> Good; at least that's consistent...\n> -- \n> (format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\n> http://www3.sympatico.ca/cbbrowne/postgresql.html\n> Signs of a Klingon Programmer #2: \"You question the worthiness of my\n> code? I should kill you where you stand!\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Sun, 26 Oct 2003 23:17:03 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Dror Matalon wrote:\n\n> On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:\n>>Most of the time involves:\n>>\n>> a) Reading each page of the table, and\n>> b) Figuring out which records on those pages are still \"live.\"\n> \n> \n> The table has been VACUUM ANALYZED so that there are no \"dead\" records.\n> It's still not clear why select count() would be slower than select with\n> a \"where\" clause.\n\nDo a vacuum verbose full and then everything should be within small range of \neach other.\n\nAlso in the where clause, does explicitly typecasting helps?\n\nLike 'where channel<5000::int2;'\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 27 Oct 2003 12:52:27 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, Oct 27, 2003 at 12:52:27PM +0530, Shridhar Daithankar wrote:\n> Dror Matalon wrote:\n> \n> >On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:\n> >>Most of the time involves:\n> >>\n> >>a) Reading each page of the table, and\n> >>b) Figuring out which records on those pages are still \"live.\"\n> >\n> >\n> >The table has been VACUUM ANALYZED so that there are no \"dead\" records.\n> >It's still not clear why select count() would be slower than select with\n> >a \"where\" clause.\n> \n> Do a vacuum verbose full and then everything should be within small range \n> of each other.\n> \n\nI did vaccum full verbose and the results are the same as before, 55\nseconds for count(*) and 26 seconds for count(*) where channel < 5000.\n\n> Also in the where clause, does explicitly typecasting helps?\n> \n> Like 'where channel<5000::int2;'\n\nIt makes no difference.\n\n> \n> HTH\n> \n> Shridhar\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 
94709\nhttp://www.zapatec.com\n", "msg_date": "Sun, 26 Oct 2003 23:43:57 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "In the last exciting episode, [email protected] (Dror Matalon) wrote:\n> I was answering an earlier response that suggested that maybe the actual\n> counting took time so it would take quite a bit longer when there are\n> more rows to count.\n\nWell, if a \"where clause\" allows the system to use an index to search\nfor the subset of elements, that would reduce the number of pages that\nhave to be examined, thereby diminishing the amount of work.\n\nWhy don't you report what EXPLAIN ANALYZE returns as output for the\nquery with WHERE clause? That would allow us to get more of an idea\nof what is going on...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/spiritual.html\nWhen replying, it is often possible to cleverly edit the original\nmessage in such a way as to subtly alter its meaning or tone to your\nadvantage while appearing that you are taking pains to preserve the\nauthor's intent. As a bonus, it will seem that your superior\nintellect is cutting through all the excess verbiage to the very heart\nof the matter. -- from the Symbolics Guidelines for Sending Mail\n", "msg_date": "Mon, 27 Oct 2003 07:52:06 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Christopher Browne <[email protected]> writes:\n\n> In the last exciting episode, [email protected] (Dror Matalon) wrote:\n> > I was answering an earlier response that suggested that maybe the actual\n> > counting took time so it would take quite a bit longer when there are\n> > more rows to count.\n\nThat was my theory. I guess it's wrong. There is other work involved in\nprocessing a record, but i'm surprised it's as long as the work to actually\npull the record from kernel and check if it's visible.\n\n> Well, if a \"where clause\" allows the system to use an index to search\n> for the subset of elements, that would reduce the number of pages that\n> have to be examined, thereby diminishing the amount of work.\n\nit's not. therein lies the mystery.\n\n> Why don't you report what EXPLAIN ANALYZE returns as output for the\n> query with WHERE clause? That would allow us to get more of an idea\n> of what is going on...\n\nHe did, right at the start of the thread.\n\nFor a 1 million record table without he's seeing\n\n select 1 from tab\n select count(*) from tab\n\nbeing comparable with only a slight delay for the count(*) whereas\n\n select 1 from tab where c < 1000\n select count(*) from tab where c < 1000\n\nare much faster even though they still use a sequential scan.\n\nI'm puzzled why the where clause speeds things up as much as it does.\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 10:09:09 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Sun, 26 Oct 2003, Dror Matalon wrote:\n\n> Here's the structure of the items table\n[snip]\n> pubdate | timestamp with time zone |\n> Indexes:\n> \"item_channel_link\" btree (channel, link)\n> \"item_created\" btree (dtstamp)\n> \"item_signature\" btree (signature)\n> \"items_channel_article\" btree (channel, articlenumber)\n> \"items_channel_tstamp\" btree (channel, dtstamp)\n> \n> \n> 5. 
Any other comments/suggestions on the above setup.\n\n\tTry set enable_seqscan = off; set enable_indexscan = on; to \nforce the planner to use one of the indexes. Analyze the queries from \nyour application and see what are the most used columns in WHERE clauses \nand recreate the indexes. select count(*) from items where channel < \n5000; will never use any of the current indexes because none matches \nyour WHERE clause (channel appears now only in multicolumn indexes).\n\n-- \nAny views or opinions presented within this e-mail are solely those of\nthe author and do not necessarily represent those of any company, unless\notherwise expressly stated.\n", "msg_date": "Mon, 27 Oct 2003 17:15:05 +0200 (EET)", "msg_from": "Tarhon-Onu Victor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, 2003-10-27 at 10:15, Tarhon-Onu Victor wrote:\n> select count(*) from items where channel < \n> 5000; will never use any of the current indexes because none matches \n> your WHERE clause (channel appears now only in multicolumn indexes).\n\nNo -- a multi-column index can be used to answer queries on a prefix of\nthe index's column list. So an index on (channel, xyz) can be used to\nanswer queries on (just) \"channel\".\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 10:34:53 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Sun, 2003-10-26 at 22:49, Greg Stark wrote:\n> What version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an\n> int8 to store its count so it's not limited to 4 billion records.\n> Unfortunately int8 is somewhat inefficient as it has to be dynamically\n> allocated repeatedly.\n\nUh, what? Why would an int8 need to be \"dynamically allocated\nrepeatedly\"?\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 10:41:49 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": ">>>>> \"DM\" == Dror Matalon <[email protected]> writes:\n\nDM> effective_cache_size = 25520 -- freebsd forumla: vfs.hibufspace / 8192\n\nDM> 1. While it seems to work correctly, I'm unclear on why this number is\nDM> correct. 25520*8 = 204160 or 200 Megs. On a machine with 2 Gigs it\nDM> seems like the number should be more like 1 - 1.5 Gigs.\n\nNope, that's correct...\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 27 Oct 2003 11:12:37 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, Oct 27, 2003 at 11:12:37AM -0500, Vivek Khera wrote:\n> >>>>> \"DM\" == Dror Matalon <[email protected]> writes:\n> \n> DM> effective_cache_size = 25520 -- freebsd forumla: vfs.hibufspace / 8192\n> \n> DM> 1. While it seems to work correctly, I'm unclear on why this number is\n> DM> correct. 25520*8 = 204160 or 200 Megs. On a machine with 2 Gigs it\n> DM> seems like the number should be more like 1 - 1.5 Gigs.\n> \n> Nope, that's correct...\n\nI know it's correct. I was asking why it's correct.\n\n> \n> \n> -- \n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. 
Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-240-453-8497\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Mon, 27 Oct 2003 09:23:10 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, Oct 27, 2003 at 07:52:06AM -0500, Christopher Browne wrote:\n> In the last exciting episode, [email protected] (Dror Matalon) wrote:\n> > I was answering an earlier response that suggested that maybe the actual\n> > counting took time so it would take quite a bit longer when there are\n> > more rows to count.\n> \n> Well, if a \"where clause\" allows the system to use an index to search\n> for the subset of elements, that would reduce the number of pages that\n> have to be examined, thereby diminishing the amount of work.\n> \n> Why don't you report what EXPLAIN ANALYZE returns as output for the\n> query with WHERE clause? That would allow us to get more of an idea\n> of what is going on...\n\n\nHere it is once again, and I've added another data poing \"channel <\n1000\" which takes even less time than channel < 5000. It almost seems\nlike the optimizer knows that it can skip certain rows \"rows=4910762\" vs\n\"rows=1505605\" . But how can it do that without using an index or\nactually looking at each row?\n\nzp1936=> EXPLAIN ANALYZE select count(*) from items;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=245044.53..245044.53 rows=1 width=0) (actual time=55806.893..55806.897 rows=1 loops=1)\n -> Seq Scan on items (cost=0.00..232767.62 rows=4910762 width=0)\n(actual time=0.058..30481.482 rows=4910762 loops=1)\n Total runtime: 55806.992 ms\n(3 rows)\n\nzp1936=> EXPLAIN ANALYZE select count(*) from items where channel < 5000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=248808.54..248808.54 rows=1 width=0) (actual time=26071.264..26071.269 rows=1 loops=1)\n -> Seq Scan on items (cost=0.00..245044.52 rows=1505605 width=0)\n(actual time=0.161..17623.033 rows=1632057 loops=1)\n Filter: (channel < 5000)\n Total runtime: 26071.361 ms\n(4 rows)\n\nzp1936=> EXPLAIN ANALYZE select count(*) from items where channel < 1000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=245429.74..245429.74 rows=1 width=0) (actual time=10225.272..10225.276 rows=1 loops=1)\n -> Seq Scan on items (cost=0.00..245044.52 rows=154085 width=0) (actual time=7.633..10083.246 rows=25687 loops=1)\n Filter: (channel < 1000)\n Total runtime: 10225.373 ms\n(4 rows)\n\n\n> -- \n> (format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\n> http://www3.sympatico.ca/cbbrowne/spiritual.html\n> When replying, it is often possible to cleverly edit the original\n> message in such a way as to subtly alter its meaning or tone to your\n> advantage while appearing that you are taking pains to preserve the\n> author's intent. 
As a bonus, it will seem that your superior\n> intellect is cutting through all the excess verbiage to the very heart\n> of the matter. -- from the Symbolics Guidelines for Sending Mail\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Mon, 27 Oct 2003 09:40:19 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n\n> On Sun, 2003-10-26 at 22:49, Greg Stark wrote:\n> > What version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an\n> > int8 to store its count so it's not limited to 4 billion records.\n> > Unfortunately int8 is somewhat inefficient as it has to be dynamically\n> > allocated repeatedly.\n> \n> Uh, what? Why would an int8 need to be \"dynamically allocated\n> repeatedly\"?\n\nPerhaps I'm wrong, I'm extrapolating from a comment Tom Lane made that\nprofiling showed that the bulk of the cost in count() went to allocating\nint8s. He commented that this could be optimized by having count() and sum()\nbypass the regular api. I don't have the original message handy.\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 12:56:44 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "On Mon, 2003-10-27 at 12:56, Greg Stark wrote:\n> Neil Conway <[email protected]> writes:\n> > Uh, what? Why would an int8 need to be \"dynamically allocated\n> > repeatedly\"?\n> \n> Perhaps I'm wrong, I'm extrapolating from a comment Tom Lane made that\n> profiling showed that the bulk of the cost in count() went to allocating\n> int8s. He commented that this could be optimized by having count() and sum()\n> bypass the regular api. I don't have the original message handy.\n\nI'm still confused: int64 should be stack-allocated, AFAICS. Tom, do you\nrecall what the issue here is?\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 13:40:06 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> On Mon, 2003-10-27 at 12:56, Greg Stark wrote:\n>> Neil Conway <[email protected]> writes:\n>>> Uh, what? Why would an int8 need to be \"dynamically allocated\n>>> repeatedly\"?\n>> \n>> Perhaps I'm wrong, I'm extrapolating from a comment Tom Lane made that\n>> profiling showed that the bulk of the cost in count() went to allocating\n>> int8s. He commented that this could be optimized by having count() and sum()\n>> bypass the regular api. I don't have the original message handy.\n\n> I'm still confused: int64 should be stack-allocated, AFAICS. Tom, do you\n> recall what the issue here is?\n\nGreg is correct. int8 is a pass-by-reference datatype and so every\naggregate state-transition function cycle requires at least one palloc\n(to return the function result). 
I think in the current state of the\ncode it requires two pallocs :-(, because we can't trust the transition\nfunction to palloc its result in the right context without palloc'ing\nleaked junk in that context, so an extra palloc is needed to copy the\nresult Datum into a longer-lived context than we call the function in.\n\nThere was some speculation a few weeks ago about devising a way to let\nperformance-critical transition functions avoid the excess palloc's by\nworking with a specialized API instead of the standard function call\nAPI, but I think it'll have to wait for 7.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Oct 2003 13:52:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions " }, { "msg_contents": "On Mon, 2003-10-27 at 13:52, Tom Lane wrote:\n> Greg is correct. int8 is a pass-by-reference datatype and so every\n> aggregate state-transition function cycle requires at least one palloc\n> (to return the function result).\n\nInteresting. Is there a reason why int8 is pass-by-reference? (ISTM that\npass-by-value would be sufficient...)\n\nThanks for the information, Tom & Greg.\n\n-Neil\n\n\n", "msg_date": "Mon, 27 Oct 2003 13:54:50 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Interesting. Is there a reason why int8 is pass-by-reference?\n\nPass-by-value types have to fit into Datum.\n\nOn a 64-bit machine (ie, one where pointers are 64-bits anyway) it would\nmake sense to convert int8 (and float8 too) into pass-by-value types.\nIf the machine does not already need Datum to be 8 bytes, though, I\nthink that widening Datum to 8 bytes just for the benefit of these two\ndatatypes would be a serious net loss. Not to mention that it would\njust plain break everything on machines with no native 8-byte-int\ndatatype.\n\nOne of the motivations for the version-1 function call protocol was to\nallow the pass-by-value-or-by-ref nature of these datatypes to be hidden\nfrom most of the code, with an eye to someday making this a\nplatform-specific choice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Oct 2003 14:09:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg is correct. int8 is a pass-by-reference datatype \n\nJust to keep the conversation on track. the evidence from this particular post\nseems to indicate that my theory was wrong and the overhead for count(*) is\n_not_ a big time sink. It seems to be at most 10% and usually less. A simple\n\"select 1 from tab\" takes nearly as long.\n\nI'm still puzzled why the times on these are so different when the latter\nreturns fewer records and both are doing sequential scans:\n\n select 1 from tab\n\n select 1 from tab where a < 1000\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 14:10:11 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "\nIn fact the number of records seems to be almost irrelevant. A sequential scan\ntakes almost exactly the same amount of time up until a critical region (for\nme around 100000 records) at which point it starts going up very quickly.\n\nIt's almost as if it's doing some disk i/o, but I'm watching vmstat and don't\nsee anything. 
And in any case it would have to read all the same blocks to do\nthe sequential scan regardless of how many records match, no?\n\nI don't hear the disk seeking either -- though oddly there is some sound\ncoming from the computer when this computer running. It sounds like a high\npitched sound, almost like a floppy drive reading without seeking. Perhaps\nthere is some i/o happening and linux is lying about it? Perhaps I'm not\nhearing seeking because it's reading everything from one track and not\nseeking? Very strange.\n\n\nslo=> explain analyze select 1::int4 from test where a < 1 ;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=11 width=0) (actual time=417.468..417.468 rows=0 loops=1)\n Filter: (a < 1)\n Total runtime: 417.503 ms\n(3 rows)\n\nTime: 418.181 ms\n\n\nslo=> explain analyze select 1::int4 from test where a < 100 ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=53 width=0) (actual time=0.987..416.224 rows=50 loops=1)\n Filter: (a < 100)\n Total runtime: 416.301 ms\n(3 rows)\n\nTime: 417.008 ms\n\n\nslo=> explain analyze select 1::int4 from test where a < 10000 ;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=5283 width=0) (actual time=0.812..434.967 rows=5000 loops=1)\n Filter: (a < 10000)\n Total runtime: 439.620 ms\n(3 rows)\n\nTime: 440.665 ms\n\n\nslo=> explain analyze select 1::int4 from test where a < 100000 ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=50076 width=0) (actual time=0.889..458.623 rows=50000 loops=1)\n Filter: (a < 100000)\n Total runtime: 491.281 ms\n(3 rows)\n\nTime: 491.998 ms\n\n\nslo=> explain analyze select 1::int4 from test where a < 1000000 ;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=99991 width=0) (actual time=0.018..997.421 rows=715071 loops=1)\n Filter: (a < 1000000)\n Total runtime: 1461.851 ms\n(3 rows)\n\nTime: 1462.898 ms\n\n\nslo=> explain analyze select 1::int4 from test where a < 10000000 ;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..1693.00 rows=99991 width=0) (actual time=0.015..1065.456 rows=800000 loops=1)\n Filter: (a < 10000000)\n Total runtime: 1587.481 ms\n(3 rows)\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 14:23:01 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> I'm still puzzled why the times on these are so different when the latter\n> returns fewer records and both are doing sequential scans:\n\nMy best guess is that it's simply the per-tuple overhead of cycling\ntuples through the two plan nodes. 
When you have no actual I/O happening,\nthe seqscan runtime is going to be all CPU time, something of the form\n\tcost_per_page * number_of_pages_processed +\n\tcost_per_tuple_scanned * number_of_tuples_scanned +\n\tcost_per_tuple_returned * number_of_tuples_returned\nI don't have numbers for the relative sizes of those three costs, but\nI doubt that any of them are negligible compared to the other two.\n\nAdding a WHERE clause increases cost_per_tuple_scanned but reduces the\nnumber_of_tuples_returned, and so it cuts the contribution from the\nthird term, evidently by more than the WHERE clause adds to the second\nterm.\n\nNy own profiling had suggested that the cost-per-tuple-scanned in the\naggregate node dominated the seqscan CPU costs, but that might be\nplatform-specific, or possibly have something to do with the fact that\nI was profiling an assert-enabled build.\n\nIt might be worth pointing out that EXPLAIN ANALYZE adds two kernel\ncalls (gettimeofday or some such) into each cycle of the plan nodes;\nthat's probably inflating the cost_per_tuple_returned by a noticeable\namount.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Oct 2003 14:26:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various performance questions " } ]
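The thread above makes two practical points worth trying directly: a btree index on (channel, dtstamp) or (channel, articlenumber) can be used for a predicate on just the leading column, and enable_seqscan can be toggled to see what the alternative plan would cost. A minimal sketch, assuming the items table and indexes shown earlier in the thread (timings will of course differ):

-- The multicolumn indexes on items lead with "channel", so the planner may
-- pick one of them for this predicate once the sequential scan is disabled.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT count(*) FROM items WHERE channel < 5000;
SET enable_seqscan = on;   -- restore the default afterwards

Whether the index plan actually wins depends on how many rows match; for the roughly 1.6 million matching rows reported in the thread, the sequential scan may well remain the cheaper choice.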
[ { "msg_contents": "Hi\n\nCurrently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We have\nWindows2000 client machines inserting records into the Postgresql tables\nvia the Postgres ODBC v7.3.0100.\n\nAfter a few weeks of usage, when we do a \\d at the sql prompt, there was a\nduplicate object name in the same schema, ie it can be a duplicate row of\nindex or table.\n\nWhen we do a \\d table_name, it will show a duplication of column names\ninside the table.\n\nWe discovered that the schema in the pg_user table was duplicated also;\nthus causing the pg_dump to fail.\n\nThank you,\nREgards.\n\n\n\n\n", "msg_date": "Mon, 27 Oct 2003 12:27:29 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Duplicate in pg_user table" } ]
[ { "msg_contents": "http://fsbench.netnation.com/\n\nSeems to answer a few of the questions about which might be the best \nfilesystem...\n\nChris\n\n\n", "msg_date": "Mon, 27 Oct 2003 23:16:05 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": true, "msg_subject": "Linux Filesystem Shootout" } ]
[ { "msg_contents": "Hi All,\n\nWe've been experiencing extremely poor batch upload performance on our \nPostgres 7.3 (and 7.3.4) database, and I've not been able to improve matters \nsignificantly using any suggestions I've gleamed off the mailing list \narchives ... so I was wondering if anyone with a bigger brain in this area \ncould help :)\n\nOur batch upload is performing a number of stored procedures to insert data on \nthe database. Initially, this results in quite good performance, but rapidly \nspirals down to approximately 1 per second after some minutes.\n\nI've got a script that generates stored procedure calls to upload test input \ndata, and the script is capable of inserting BEGIN and END at different \nintervals, together with VACUUM ANALYZE commands as well.\n\nI've tried varying the commit level from every operation, every 5, every 10, \nevery 25, every 100 operations (again, each operation is 5 SP calls) without \nany noticeable improvement. Likewise, I've varied the VACUUM ANALYZE from \nevery 50 to every 100 operations - again without any measurable improvement.\n\ntop reports that CPU usage is pretty constant at 99%, and there is \napproximately 1GB of free physical memory available to the OS (with \napproximately 1GB of physical memory in use).\n\nI've have been running postmaster with switched fsync off.\n\nI also tried running with backbuffers of default (64), 128, 256, 512 and even \n1024. Again, with no measurable change.\n\nThe typical metrics are (completed operations - each of these require 5 SP \ncalls):\n\n1 min: 1036 (1036 operations)\n2 min: 1426 (390 operations)\n3 min: 1756 (330 operations)\n4 min: 2026 (270 operations)\n5 min: 2266 (240 operations)\n\nWhen left running, its not too long before the code snails to 1 operation per \nsecond.\n\n\nHas anyone any ideas as to what could be causing the spiraling performance?\n\n\nWith approximately 20,000 operations commited in the database, it takes about \n1 minute to upload a dump of the database - unfortunately we cannot use the \nCOPY command to upload brand new data - it really has to go through the \nStored Procedures to ensure relationships and data integrity across the \nschema (it would be very difficult to develop and maintain code to generate \nCOPY commands for inserting new data). And whilst I appreciate INSERTs are \ninherently slower than COPY, I was hoping for something significantly faster \nthan the 1 operation/second that things fairly quickly descend to...\n\n\nThanks for any advice!\n\nDamien\n\n", "msg_date": "Mon, 27 Oct 2003 16:26:52 +0000", "msg_from": "Damien Dougan <[email protected]>", "msg_from_op": true, "msg_subject": "Very Poor Insert Performance" }, { "msg_contents": "\nDamien Dougan <[email protected]> writes:\n\n> Our batch upload is performing a number of stored procedures to insert data on \n> the database. Initially, this results in quite good performance, but rapidly \n> spirals down to approximately 1 per second after some minutes.\n\nIt's fairly unlikely anyone will be able to help without you saying what\nyou're doing. What are these procedures doing? What do the tables look like?\nWhat indexes exist?\n\nAt a guess the foreign key relationships you're enforcing don't have indexes\nto help them. 
If they do perhaps postgres isn't using them.\n\n-- \ngreg\n\n", "msg_date": "27 Oct 2003 13:01:12 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Poor Insert Performance" }, { "msg_contents": "Damien Dougan <[email protected]> writes:\n> Has anyone any ideas as to what could be causing the spiraling performance?\n\nYou really haven't provided any information that would allow anything\nbut guesses, but I'll guess anyway: poor plans for foreign key checks?\nSee nearby threads.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Oct 2003 15:12:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Poor Insert Performance " }, { "msg_contents": ">>>>> \"GS\" == Greg Stark <[email protected]> writes:\n\nGS> At a guess the foreign key relationships you're enforcing don't\nGS> have indexes to help them. If they do perhaps postgres isn't using\nGS> them.\n\n\nOr, if you do have indexes, they've bloated to be way too big and are\noverwhelming your shared buffers. Reindex them and see it it helps.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 28 Oct 2003 14:18:02 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Poor Insert Performance" }, { "msg_contents": "On Monday 27 October 2003 8:12 pm, Tom Lane wrote:\n> Damien Dougan <[email protected]> writes:\n> > Has anyone any ideas as to what could be causing the spiraling\n> > performance?\n>\n> You really haven't provided any information that would allow anything\n> but guesses, but I'll guess anyway: poor plans for foreign key checks?\n> See nearby threads.\n>\n> \t\t\tregards, tom lane\n\n\nApologies for not including more info - I had been hoping that spiralling performance was a known tell-tale sign of something :)\n\n\nHere is some additional information - sorry if its overload, but I figured I should give an intro to the schema before showing the EXPLAIN results!\n\n\n\nFirstly, a quick schema overview for the relevant tables:\n\ncontact has many contactparts\naddress has many addressparts\ncontact has many address\n\nNow, the above table relationships are connected via relationship tables (rather than foreign indexes directly to each other), so we have:\n\ncontact\nrel_contact_has_contactpart\naddress\nrel_address_has_addresspart\n\n(The reasons behind this are for meta-data purposes - our database is intended to be very abstract from the code...)\n\n\n\n Table \"public.contact\"\n Column | Type | Modifiers\n------------------+-----------------------------+---------------------------------------------------\n id | integer | default nextval('contact_id_seq'::text)\n version | integer | default 1\n contactid | character varying |\n enddate | timestamp without time zone |\n preferredaddress | character varying |\n startdate | timestamp without time zone |\nIndexes:\n \"contact_id_idx\" unique, btree (id)\n \"contact_key\" unique, btree (contactid)\n\n\nSo we have an index on the meta-data related \"id\" and the externally visible \"contactid\" values. 
The \"id\" is used with the rel_contact_has_XXX tables (see below).\n\n\n\n Table \"public.contactpart\"\n Column | Type | Modifiers\n-------------+-------------------+------------------------------------------------\n id | integer | default nextval('contactpart_id_seq'::text)\n version | integer | default 1\n detailname | character varying | not null\n detailvalue | character varying |\nIndexes:\n \"contactpart_id_idx\" unique, btree (id)\n\n\nSo we have an index on the meta-data related \"id\".\n\n\n \n Table \"public.address\"\n Column | Type | Modifiers\n---------+-----------------------------+--------------------------------------------\n id | integer | default nextval('mc_address_id_seq'::text)\n version | integer | default 1\n enddate | timestamp without time zone |\n format | character varying |\n type | character varying | not null\n value | character varying |\nIndexes:\n \"address_id_idx\" unique, btree (id)\n \"address_value_key\" btree (value)\n\n\nSo we have an index on the meta-data related \"id\".\n\n\n Table \"public.addresspart\"\n Column | Type | Modifiers\n-------------+-------------------+------------------------------------------------\n id | integer | default nextval('addresspart_id_seq'::text)\n version | integer | default 1\n detailname | character varying | not null\n detailvalue | character varying |\nIndexes:\n \"addresspart_id_idx\" unique, btree (id)\n\nSo we have an index on the meta-data related \"id\". This is used with the rel_address_has_addresspart table (see below).\n\n\n\n \n Table \"public.rel_contact_has_contactpart\"\n Column | Type | Modifiers\n----------------------+---------+-----------\n contact_id | integer |\n contactpart_id | integer |\nIndexes:\n \"rel_contact_has_contactpart_idx2\" unique, btree (contactpart_id)\n \"rel_contact_has_contactpart_idx1\" btree (contact_id)\n\nSo we have a unique index on the contactpart and a non-unique index on the contact (to reflect the 1:M relationship contact has contactparts)\n\n\n\nTable \"public.rel_address_has_addresspart\"\n Column | Type | Modifiers\n-------------------+---------+-----------\n address_id | integer |\n addresspart_id | integer |\nIndexes:\n \"rel_address_has_addresspart_idx2\" unique, btree (addresspart_id)\n \"rel_address_has_addresspart_idx1\" btree (address_id)\n\nSo we have a unique index on the addresspart and a non-unique index on the address (to reflect the 1:M relationship address has addressparts)\n\n\n \n Table \"public.rel_contact_has_address\"\n Column | Type | Modifiers\n----------------------+---------+-----------\n contact_id | integer |\n address_id | integer |\nIndexes:\n \"rel_contact_has_address_idx2\" unique, btree (address_id)\n \"rel_contact_has_address_idx1\" btree (contactdetails_id)\n\nSo we have a unique index on the address and a non-unique index on the contact (to reflect the 1:M relationship contact has addresses)\n\n\nHowever, to add a layer of abstraction to the business logic, the underlying tables are never directly exposed through anything other than public views. 
The public views combine the <table> and <tablepart> into a single table.\n\nSo we have 2 public views: PvContact which ties together the contact and contactparts, and PvAddress which ties together the address and addresspart.\n\n View \"public.pvcontact\"\n Column | Type | Modifiers\n------------------+-----------------------------+-----------\n version | integer |\n contactid | character varying |\n startdate | timestamp without time zone |\n enddate | timestamp without time zone |\n preferredaddress | character varying |\n firstname | character varying |\n lastname | character varying |\n\n\n(Note - firstname and lastname are dervied from the detailnames of contactpart table)\n\n\n View \"public.pvaddress\"\n Column | Type | Modifiers\n-----------+-----------------------------+-----------\n version | integer |\n contactid | character varying |\n type | character varying |\n format | character varying |\n enddate | timestamp without time zone |\n npi | character varying |\n ton | character varying |\n number | character varying |\n prefix | character varying |\n addrvalue | character varying |\n link | character varying |\n house | character varying |\n street | character varying |\n town | character varying |\n city | character varying |\n county | character varying |\n postcode | character varying |\n state | character varying |\n zipcode | character varying |\n extension | character varying |\n\n (Note - number, prefix, link, house, street, town, city, postcode etc are derived from the detailnames of addresspart table)\n\n\n\nFor example, suppose we have 2 contactparts for a particular contact (with unique id = y): FirstName and LastName, then the contactpart table would have 2 rows like:\n\nid = x\nversion = 1\npartname = 'FirstName'\npartvalue = 'John'\n\nid = x+1\nversion = 1\npartname = 'LastName'\npartvalue = 'Doe'\n\n\nThen the public view, PvContact, would look like:\n\nVersion = 1\nContactId = y\nStartDate = ...\nEndDate = ...\nPreferredAddress = ...\nFirstName = John\nLastName = Doe\n\n\n\nAll Create, Read, Update, Delete operations on the DB are performed by StoredProcedures (again, to help abstract the code from the DB). The SPs are capable of dealing with externally (public view) advertised schemas, and ensuring data integrity across the underlying tables.\n\nNow, our problem seems to be the delays introduced by reading from the public views. I've taken some measurements of raw INSERTS that mirror what the SPs are doing (but the data is invalid and its not correctly linked across tables (which is what the StoredProcs are responsible for) - but the INSERTS are happening in the same order and frequency for a valid data upload). We can upload 2000 sets of users+contacts+addresses in about 17 seconds. But when it is done via the Stored Procedures (which do some inserts, some reads, some more inserts etc to ensure tables are properly linked via the relationships), this drops to 2 minutes for 2000. 
And the performance spirals down to less than 1 user+contact+address per second after a short while.\n\n\nFirst of all then, the definition of the PublicView PvAddress\n\nView definition:\n SELECT address.id AS addressid, address.\"version\", contact.id AS contactid, contact.contactid AS contactuuid, address.\"type\", address.format, address.enddate, address.value, rel_npi.npiid, rel_npi.npi, rel_ton.tonid, rel_ton.ton, rel_number.numberid, rel_number.number, rel_prefix.prefixid, rel_prefix.prefix, rel_addrvalue.addrvalueid, rel_addrvalue.addrvalue, rel_link.linkid, rel_link.link, rel_house.houseid, rel_house.house, rel_street.streetid, rel_street.street, rel_town.townid, rel_town.town, rel_city.cityid, rel_city.city, rel_county.countyid, rel_county.county, rel_postcode.postcodeid, rel_postcode.postcode, rel_state.stateid, rel_state.state, rel_zipcode.zipcodeid, rel_zipcode.zipcode, rel_extension.extensionid, rel_extension.extension\n FROM svcurrentcontactdetails contact, rel_contact_has_address rel_contact, svcurrentaddress address\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS npiid, det.detailvalue AS npi\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Npi'::text) rel_npi ON address.id = rel_npi.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS tonid, det.detailvalue AS ton\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Ton'::text) rel_ton ON address.id = rel_ton.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS numberid, det.detailvalue AS number\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Number'::text) rel_number ON address.id = rel_number.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS prefixid, det.detailvalue AS prefix\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Prefix'::text) rel_prefix ON address.id = rel_prefix.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS addrvalueid, det.detailvalue AS addrvalue\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'AddrValue'::text) rel_addrvalue ON address.id = rel_addrvalue.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS linkid, det.detailvalue AS link\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Link'::text) rel_link ON address.id = rel_link.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS houseid, det.detailvalue AS house\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'House'::text) rel_house ON address.id = rel_house.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS streetid, det.detailvalue AS street\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Street'::text) rel_street ON address.id = rel_street.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS townid, det.detailvalue AS town\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND 
det.detailname::text = 'Town'::text) rel_town ON address.id = rel_town.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS cityid, det.detailvalue AS city\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'City'::text) rel_city ON address.id = rel_city.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS countyid, det.detailvalue AS county\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'County'::text) rel_county ON address.id = rel_county.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS postcodeid, det.detailvalue AS postcode\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Postcode'::text) rel_postcode ON address.id = rel_postcode.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS stateid, det.detailvalue AS state\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'State'::text) rel_state ON address.id = rel_state.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS zipcodeid, det.detailvalue AS zipcode\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Zipcode'::text) rel_zipcode ON address.id = rel_zipcode.addressid\n LEFT JOIN ( SELECT rel.address_id AS addressid, rel.addresspart_id AS extensionid, det.detailvalue AS extension\n FROM rel_address_has_addresspart rel, addresspart det\n WHERE rel.addresspart_id = det.id AND det.detailname::text = 'Extension'::text) rel_extension ON address.id = rel_extension.addressid\n WHERE contact.id = rel_contact.contact_id AND address.id = rel_contact.address_id;\n \n(The JOINs are where our problems are below ...)\n\n\n\n\nhydradb=# explain select * from pvaddress where contactuuid = 'test' and type = 'sms' and format is null ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------\n Merge Join (cost=42499.93..44975.38 rows=1 width=358)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=42491.11..44957.05 rows=3795 width=323)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=41822.20..44278.07 rows=3795 width=305)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=41153.29..43599.08 rows=3795 width=287)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=40484.39..42920.10 rows=3795 width=269)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=39815.48..42241.12 rows=3795 width=251)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=39146.58..41562.13 rows=3795 width=233)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=38477.67..40883.15 rows=3795 width=215)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=37808.76..40204.16 rows=3795 width=197)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=37139.86..39525.18 rows=3795 width=179)\n Merge Cond: (\"outer\".id = 
\"inner\".address_id)\n -> Merge Left Join (cost=36470.95..38846.20 rows=3795 width=161)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=35802.04..38167.21 rows=3795 width=143)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=28634.40..30852.70 rows=3795 width=125)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=21495.85..23569.10 rows=3795 width=107)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=14328.21..16254.59 rows=3795 width=89)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Merge Left Join (cost=7102.23..8878.06 rows=3795 width=71)\n Merge Cond: (\"outer\".id = \"inner\".address_id)\n -> Index Scan using address_id_idx on address (cost=0.00..1633.07 rows=3795 width=53)\n Filter: (((enddate IS NULL) OR (('now'::text)::timestamp(6) with time zone < (enddate)::timestamp with time zone)) AND ((\"typ\ne\")::text = 'sms'::text) AND (format IS NULL))\n -> Sort (cost=7102.23..7159.65 rows=22970 width=22)\n Sort Key: rel.address_id\n -> Merge Join (cost=0.00..5438.34 rows=22970 width=22)\n Merge Cond: (\"outer\".id = \"inner\".addressline_id)\n -> Index Scan using addressline_id_idx on addressline det (cost=0.00..2773.61 rows=22969 width=18)\n Filter: ((detailname)::text = 'Npi'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..2082.13 rows=1181\n93 width=8)\n -> Sort (cost=7225.98..7286.76 rows=24310 width=22)\n Sort Key: rel.address_id\n -> Merge Join (cost=0.00..5455.09 rows=24310 width=22)\n Merge Cond: (\"outer\".id = \"inner\".addressline_id)\n -> Index Scan using addressline_id_idx on addressline det (cost=0.00..2773.61 rows=24309 width=18)\n Filter: ((detailname)::text = 'Ton'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..2082.13 rows=118193 wid\nth=8)\n -> Sort (cost=7167.64..7226.84 rows=23679 width=22)\n Sort Key: rel.address_id\n -> Merge Join (cost=0.00..5447.20 rows=23679 width=22)\n Merge Cond: (\"outer\".id = \"inner\".addressline_id)\n -> Index Scan using addressline_id_idx on addressline det (cost=0.00..2773.61 rows=23678 width=18)\n Filter: ((detailname)::text = 'Number'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..2082.13 rows=118193 width=8)\n -> Sort (cost=7138.56..7196.97 rows=23364 width=22)\n Sort Key: rel.address_id\n -> Merge Join (cost=0.00..5443.27 rows=23364 width=22)\n Merge Cond: (\"outer\".id = \"inner\".addressline_id)\n -> Index Scan using addressline_id_idx on addressline det (cost=0.00..2773.61 rows=23363 width=18)\n Filter: ((detailname)::text = 'Prefix'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..2082.13 rows=118193 width=8)\n -> Sort (cost=7167.64..7226.84 rows=23679 width=22)\n Sort Key: rel.address_id\n -> Merge Join (cost=0.00..5447.20 rows=23679 width=22)\n Merge Cond: (\"outer\".id = \"inner\".addressline_id)\n -> Index Scan using addressline_id_idx on addressline det (cost=0.00..2773.61 rows=23678 width=18)\n Filter: ((detailname)::text = 'AddrValue'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..2082.13 rows=118193 width=8)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on 
addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Link'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'House'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Street'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Town'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'City'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'County'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Postcode'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'State'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: 
rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Zipcode'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=668.91..669.16 rows=100 width=22)\n Sort Key: rel.address_id\n -> Nested Loop (cost=0.00..665.58 rows=100 width=22)\n -> Index Scan using addressline_detail_idx on addressline det (cost=0.00..366.01 rows=99 width=18)\n Index Cond: ((detailname)::text = 'Extension'::text)\n -> Index Scan using rel_address_has_addressline_idx2 on rel_address_has_addressline rel (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (rel.addressline_id = \"outer\".id)\n -> Sort (cost=8.83..8.83 rows=2 width=39)\n Sort Key: rel_contact.address_id\n -> Nested Loop (cost=0.00..8.82 rows=2 width=39)\n -> Index Scan using contact_key on contact (cost=0.00..5.77 rows=1 width=35)\n Index Cond: ((contactid)::text = 'test'::text)\n Filter: (((startdate IS NULL) OR (('now'::text)::timestamp(6) with time zone >= (startdate)::timestamp with time zone)) AND ((enddate IS NULL) OR (('now'::text)::timestamp(6) with time zone < (enddate)::\ntimestamp with time zone)))\n -> Index Scan using rel_contact_has_address_idx1 on rel_contact_has_address rel_contact (cost=0.00..3.02 rows=2 width=8)\n Index Cond: (\"outer\".id = rel_contact.contact_id)\n(147 rows)\n\n\n\n\n\nAs you can see, the PublicView is resulting in a huge nested loop, with an index scan of the contact only occurring at the end. I would have expected something more like:\n\n(1) An index scan of the contact table to determine the correct contact\n(2) An index scan of the address table using the rel_contact_has_address.address_id to obtain the (relatively small - max 16, and typically 2) addresses\n(3) A number of joins - at the same level rather than looping - to obtain the detailnames for the new column names of the public view\n\n\n\nAs I said in my original email, these delays are after applying all the performance related enhancements (fsync off, increased backbuffers, sort memory etc) I have picked up from the archives and FAQ. The upload script was also modified to commit and vacuum analyze at different intervals without providing any significant improvement. top reports the CPU usage at 99% - so I believe its all looping of the above intermediate SELECTs that is causing the spiralling delays as the number of rows increases.\n\n\nAgain, any help would be very much appreciated!\n\nDamien\n\n", "msg_date": "Wed, 29 Oct 2003 10:22:24 +0000", "msg_from": "Damien Dougan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very Poor Insert Performance" }, { "msg_contents": "Damien Dougan <[email protected]> writes:\n> Now, our problem seems to be the delays introduced by reading from the\n> public views.\n\nYour initial message stated plainly that the problem was in INSERTs;\nit's not surprising that you got unhelpful advice.\n\n> View definition:\n> [ huge view full of LEFT JOINs ]\n\n> As you can see, the PublicView is resulting in a huge nested loop,\n> with an index scan of the contact only occurring at the end. 
I would\n> have expected something more like:\n\n> (1) An index scan of the contact table to determine the correct contact\n> (2) An index scan of the address table using the rel_contact_has_address.address_id to obtain the (relatively small - max 16, and typically 2) addresses\n> (3) A number of joins - at the same level rather than looping - to obtain the detailnames for the new column names of the public view\n\nYour LEFT JOINs are constraining the join order --- see \nhttp://www.postgresql.org/docs/7.3/static/explicit-joins.html\nYou'll need to reorder the joins into something that does what you want.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Oct 2003 09:23:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Poor Insert Performance " }, { "msg_contents": "On Wednesday 29 October 2003 2:23 pm, Tom Lane wrote:\n\n> Your initial message stated plainly that the problem was in INSERTs;\n> it's not surprising that you got unhelpful advice.\n\nBut perhaps my use of the term \"insert\" to describe upload was a very bad call \ngiven the domain of the list...\n\nI assure you I wasn't setting out to deceive anyone! The only location i used \nINSERT (ie as a Postgres keyword) was towards the end of my mail when I tried \nto highlight the fact we couldn't use COPY to upload our data because of the \ndifficulty in maintaining the code to generate inter-table relations ahead of \ntime.\n\nThe problem was showing itself during database upload - so I assumed (ASS out \nof U and ME and all that!) that the write delay was very large (hence the \ndisappointing improvements by switching off fsync etc). It was only after \nfurther investigation that we discovered that simulated INSERTs were going \nfine, but the Read delays between INSERTs where holding us up.\n\n\n> Your LEFT JOINs are constraining the join order --- see\n> http://www.postgresql.org/docs/7.3/static/explicit-joins.html\n> You'll need to reorder the joins into something that does what you want.\n\nThanks very much for the heads-up, we'll reorder the joins into something more \neffecient!\n\nDamien\n\n", "msg_date": "Wed, 29 Oct 2003 14:40:06 +0000", "msg_from": "Damien Dougan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very Poor Insert Performance" } ]
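A minimal sketch of the join reordering Tom Lane points to, using the table and column names shown earlier in the thread. The idea is to let the restrictive contact lookup happen before any of the optional addresspart joins, rather than left-joining every detail name onto address first as the pvaddress view does. The sketch returns one row per detail rather than the view's one-column-per-detail layout, so it illustrates the ordering only and is not a drop-in replacement for the view:

SELECT a.id AS address_id, det.detailname, det.detailvalue
FROM contact c
JOIN rel_contact_has_address rca ON rca.contact_id = c.id
JOIN address a ON a.id = rca.address_id
LEFT JOIN rel_address_has_addresspart rap ON rap.address_id = a.id
LEFT JOIN addresspart det ON det.id = rap.addresspart_id
WHERE c.contactid = 'test';   -- the contact filter now constrains the whole join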
[ { "msg_contents": "Folks,\n\nI'm getting this plan on 7.2.4:\n\n----------------------------------------------------------\nexplain\nselect events.event_id, events.event_name, type_name,\n\tCOALESCE(cases.case_name || '(' || cases.docket || ')', \ntrial_groups.tgroup_name) as event_case,\n\tjw_date_format(events.event_date, events.event_tz, events.duration) as \nshow_date\nFROM event_types, events\n\tLEFT OUTER JOIN cases ON (events.link_type = 'case' AND events.case_id = \ncases.case_id)\n\tLEFT OUTER JOIN trial_groups ON ( events.link_type = 'tg' AND\n\t\tevents.case_id = trial_groups.tgroup_id )\n\tLEFT OUTER JOIN event_history eh ON events.event_id = eh.event_id\nWHERE events.status = 1 or events.status = 11\n\tand events.event_date > '2003-10-27'\n\tand events.etype_id = event_types.etype_id\n\tand (\n\t\t( events.mod_user = 562 AND eh.event_id IS NULL )\n\t\tOR\n\t\t( eh.mod_user = 562\n\t\t and not exists (select 1 from event_history eh2\n\t\t \twhere eh2.event_id = eh.event_id\n\t\t\tand eh2.mod_date < eh.mod_date) )\n\t );\n\nNested Loop (cost=100004949.08..2676373923.96 rows=3666858 width=197)\n -> Hash Join (cost=4949.08..8519.60 rows=43568 width=165)\n -> Hash Join (cost=4407.81..6615.02 rows=43568 width=149)\n -> Hash Join (cost=4403.21..6485.29 rows=43568 width=125)\n -> Seq Scan on events (cost=0.00..1515.70 rows=43568 \nwidth=79)\n -> Hash (cost=3108.07..3108.07 rows=115355 width=46)\n -> Seq Scan on cases (cost=0.00..3108.07 \nrows=115355 width=46)\n -> Hash (cost=4.43..4.43 rows=143 width=24)\n -> Seq Scan on trial_groups (cost=0.00..4.43 rows=143 \nwidth=24)\n -> Hash (cost=524.72..524.72 rows=13240 width=16)\n -> Seq Scan on event_history eh (cost=0.00..524.72 rows=13240 \nwidth=16)\n -> Seq Scan on event_types (cost=0.00..4.32 rows=106 width=32)\n SubPlan\n -> Seq Scan on event_history eh2 (cost=0.00..557.82 rows=1 width=0)\n-----------------------------------------------------------------\n\nWhat I can't figure out is what is that inredibly expensive nested loop for? \nIf I could figure that out, maybe I could query around it.\n\nUnfortunately, I can't EXPLAIN ANALYZE because the present query swamps the \nmachine, and it's a production server. Also it never completes.\n\nAnd yes, the system is vacuum full analyzed. Event_history is under-indexed, \nbut the other tables are heavily indexed.\n\nIdeas?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 27 Oct 2003 15:32:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Guesses on what this NestLoop is for?" }, { "msg_contents": "On Mon, 27 Oct 2003 15:32:41 -0800, Josh Berkus <[email protected]>\nwrote:\n>FROM event_types, events\n>\tLEFT OUTER JOIN ...\n>WHERE events.status = 1 or events.status = 11\n>\tand events.event_date > '2003-10-27'\n>\tand events.etype_id = event_types.etype_id\n>\tand ( ...\n>\t );\n>\n>\n>What I can't figure out is what is that inredibly expensive nested loop for? 
\n\nSorry, I have no answer to your question, but may I ask whether you\nreally want to get presumably 106 output rows for each event with\nstatus 1?\n\nOr did you mean\n\t WHERE (events.status = 1 OR events.status = 11) AND ...\n\n>Ideas?\n\nI'd also try to push that NOT EXISTS condition into the FROM clause:\n\n...LEFT JOIN (SELECT DISTINCT ON (event_id)\n event_id, mod_date, mod_user\n FROM event_history\n ORDER BY event_id, mod_date\n ) AS eh ON (events.event_id = eh.event_id) ...\nWHERE ...\n AND CASE WHEN eh.event_id IS NULL\n THEN events.mod_user\n ELSE eh.mod_user END = 562\n\nIf mod_user is NOT NULL in event_history, then CASE ... END can be\nsimplified to COALESCE(eh.mod_user, events.mod_user).\n\nServus\n Manfred\n", "msg_date": "Tue, 28 Oct 2003 11:59:36 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Guesses on what this NestLoop is for?" }, { "msg_contents": "Manfred,\n\n> Sorry, I have no answer to your question, but may I ask whether you\n> really want to get presumably 106 output rows for each event with\n> status 1?\n>\n> Or did you mean\n> \t WHERE (events.status = 1 OR events.status = 11) AND ...\n\nThanks! I spent so much time tinkering around with the exists clauses, I \ncompletely missed that. Hopefully I'll get this client to upgrade to 7.4 so \nthat the explains will be more readable ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 28 Oct 2003 09:36:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Guesses on what this NestLoop is for?" } ]
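Putting Manfred's two suggestions together, the rewritten query might look roughly like the sketch below (the select list is abbreviated, only columns already mentioned in the thread are used, and it assumes event_history.mod_user is NOT NULL):

    SELECT events.event_id, events.event_name
    FROM events
    LEFT JOIN (SELECT DISTINCT ON (event_id)
                      event_id, mod_date, mod_user
               FROM event_history
               ORDER BY event_id, mod_date) AS eh
           ON events.event_id = eh.event_id
    WHERE (events.status = 1 OR events.status = 11)         -- parenthesized OR
      AND events.event_date > '2003-10-27'
      AND COALESCE(eh.mod_user, events.mod_user) = 562;     -- earliest editor, else creator

The DISTINCT ON subquery keeps only the oldest event_history row per event, which replaces the correlated NOT EXISTS subplan.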
[ { "msg_contents": "Hi,\n I'am having major performance issues with post gre 7.3.1 db. Kindly suggest all the possible means by which i can optimize the performance of this database. If not all, some ideas (even if they are common) are also welcome. There is no optimisation done to the default configuration of the installed database. Kindly suggest. \n\nregards\nKamalraj\n", "msg_date": "Tue, 28 Oct 2003 11:45:28 +0530", "msg_from": "\"Kamalraj Singh Madhan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing Performance" } ]
[ { "msg_contents": "Kamalraj Singh Madhan wrote:\n\n> Hi,\n> I'am having major performance issues with post gre 7.3.1 db. Kindly suggest all the possible means by which i can optimize the performance of this database. If not all, some ideas (even if they are common) are also welcome. There is no optimisation done to the default configuration of the installed database. Kindly suggest. \n\nCheck\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n Shridhar\n\n", "msg_date": "Tue, 28 Oct 2003 11:51:29 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing Performance" }, { "msg_contents": "Hi,\n I'am having major performance issues with post gre 7.3.1 db. Kindly suggest all the possible means by which i can optimize the performance of this database. If not all, some ideas (even if they are common) are also welcome. There is no optimisation done to the default configuration of the installed database. Kindly suggest. \n\nregards\nKamalraj\n", "msg_date": "Tue, 28 Oct 2003 11:51:57 +0530", "msg_from": "\"Kamalraj Singh Madhan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Optimizing Performance" }, { "msg_contents": "[email protected] (\"Kamalraj Singh Madhan\") writes:\n> Hi, I'am having major performance issues with post gre 7.3.1\n> db. Kindly suggest all the possible means by which i can optimize\n> the performance of this database. If not all, some ideas (even if\n> they are common) are also welcome. There is no optimisation done to\n> the default configuration of the installed database. Kindly suggest.\n\nThe best single document I am aware of on tuning the database may be\nfound here:\n\n <http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html>\n\nThat may help with some of your problems, but there can be no\nguarantees. What ultimately must happen is for you to discover what\nare the bottlenecks on your system and addressing them. That involves\na process of exploration, as opposed to one of doing some unambiguous\n\"best practices.\"\n-- \nselect 'cbbrowne' || '@' || 'libertyrms.info';\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Tue, 28 Oct 2003 10:59:37 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Performance" } ]
[ { "msg_contents": "This has probably been asked before, but I'll re-ask to spark debate on it again.\n\nIs there any good reason to not have explain analyze also include information if temporary files will be required on sorts, hashes, etc. during the processing of a query. [Idea being setting your sort_mem won't be purely anecdotal]... maybe include how much space it needed in temp files? \nsomething along the lines of: \n\nSort (Cost=1..10) (Actual=1..1000) (Temp Files=5MB)\n\nSeeing that and looking at your current sort_mem and seeing it is 4MB you'll have the info you need to get a nice boost by avoiding that spill at a low cost. \n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 28 Oct 2003 09:05:15 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "More info in explain analyze" } ]
[ { "msg_contents": "I recalled seeing a thread on -HACKERS about some major improvements to the speed of adding an FK to an existing table in 7.4. Naturally I was curious and decided to give it a whirl. My findings are not too good. In fact, they are bad. \n\nCould it be this patch never made it in?\n\nAnyway, here's the info.\nMachine: Linux 2.4.18 [stock rh8], p3 500, 512mb, 4x18GB scsi raid 0 \n\nTwo tables: members and watchedmembers with 1045720 and 829994 rows respectivly.\n\nfreshly vacuum analyze'd for each PG:\n\n7.4b4, 10k shared buff, 256mb effective cache: 485706ms\n7.3.4 [same settings]: 412304.76 ms\n\nNow the odd thing during that operation was that the machine was about oh, 50-70% _idle_ during the whole time. \n\nThen I started thinking more about it and realized hearing if you bump sort_mem up ridiculously high during a foreign key add it helps. So I did. Bumped it up\nto 256MB.\n\n[again, vacuum analyze'd each beforehand]\n\n7.3.4: 328912ms [cpu pegged]\n7.4b4: 298383ms [cpu pegged]\n\nQuite an improvement I'd say.\n\nPerhaps we should make note of this somewhere? Performance guide? Docs?\n\nAnd this leads to the place we'd get a huge benefit: Restoring backups.. If there were some way to bump up sort_mem while doing the restore.. things would be much more pleasant. [Although, even better would be to disable FK stuff while restoring a backup and assume the backup is \"sane\"] How we'd go about doing that is the subject of much debate. \n\nPerhaps add the functionality to pg_restore? ie, pg_restore -s 256MB mybackup.db?\nIt would just end up issuing a set sort_mem=256000..\n\nWhat do you guys think?\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 28 Oct 2003 09:16:45 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Adding foreign key performance" }, { "msg_contents": "\nOn Tue, 28 Oct 2003, Jeff wrote:\n\n> I recalled seeing a thread on -HACKERS about some major improvements to\n> the speed of adding an FK to an existing table in 7.4. Naturally I was\n> curious and decided to give it a whirl. My findings are not too good. In\n> fact, they are bad.\n>\n> Could it be this patch never made it in?\n\nI think it went in between b4 and b5, can you try with b5?\n\n", "msg_date": "Tue, 28 Oct 2003 07:21:51 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "Jeff <[email protected]> writes:\n> I recalled seeing a thread on -HACKERS about some major improvements to the speed of adding an FK to an existing table in 7.4. Naturally I was curious and decided to give it a whirl. My findings are not too good. In fact, they are bad. \n> 7.4b4, 10k shared buff, 256mb effective cache: 485706ms\n\nYou are testing the wrong version. beta5 has the ADD FOREIGN KEY improvement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Oct 2003 10:51:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance " }, { "msg_contents": "On Tue, 28 Oct 2003 09:16:45 -0500\nJeff <[email protected]> wrote:\n\n\n> 7.3.4: 328912ms [cpu pegged]\n> 7.4b4: 298383ms [cpu pegged]\n> \n\nJust loaded up delicious 7.4b5 and wow... 
\n\nsort_mem 8192: 137038ms [lots of tmp file activity]\nsort_mem 256000: 83109ms \n\nThat's some good work there Lou, You'll make sargent for that someday.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 28 Oct 2003 11:33:59 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "FWIW: I'm fiddling with that right now, and the FK think was quick... a few \nseconds... the tables in question have 1400 records, 343000 records and 7200 \nrecords... I'm running Beta5...\n\nJohn.\n\nOn Tuesday 28 October 2003 10:21, Stephan Szabo wrote:\n> On Tue, 28 Oct 2003, Jeff wrote:\n> > I recalled seeing a thread on -HACKERS about some major improvements to\n> > the speed of adding an FK to an existing table in 7.4. Naturally I was\n> > curious and decided to give it a whirl. My findings are not too good. In\n> > fact, they are bad.\n> >\n> > Could it be this patch never made it in?\n>\n> I think it went in between b4 and b5, can you try with b5?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Tue, 28 Oct 2003 13:06:10 -0500", "msg_from": "\"John K. Herreshoff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "On Tue, 28 Oct 2003, Jeff wrote:\n\n> On Tue, 28 Oct 2003 09:16:45 -0500\n> Jeff <[email protected]> wrote:\n>\n>\n> > 7.3.4: 328912ms [cpu pegged]\n> > 7.4b4: 298383ms [cpu pegged]\n> >\n>\n> Just loaded up delicious 7.4b5 and wow...\n>\n> sort_mem 8192: 137038ms [lots of tmp file activity]\n> sort_mem 256000: 83109ms\n\nHmm, 298383 -> 83109 (since those are the 256k numbers). Not as\nmuch as I'd have hoped, but I'll take a factor of 3.\n", "msg_date": "Tue, 28 Oct 2003 10:32:36 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "John K. Herreshoff wrote:\n> FWIW: I'm fiddling with that right now, and the FK think was quick... a few \n> seconds... the tables in question have 1400 records, 343000 records and 7200 \n> records... I'm running Beta5...\n\nDid those tables have analyze statistics? Can you try it without\nstatistics (I think you have to drop the tables to erase the\nstatistics).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 28 Oct 2003 13:34:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "On Tue, 28 Oct 2003 10:32:36 -0800 (PST)\nStephan Szabo <[email protected]> wrote:\n\n> Hmm, 298383 -> 83109 (since those are the 256k numbers). Not as\n> much as I'd have hoped, but I'll take a factor of 3.\n\nYes. 
those are the numbers for 256MB of sort_mem.\n\nIt seemed to saturate the IO so once I get more disks in here it should\nhopefully speed up.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 28 Oct 2003 13:34:31 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "I'm not sure about the analyze stats... Where would I find that (in \npostgresql.conf I suppose) I'll go see what I have set up, and get back to \nyou in 30 minutes or less...\n\nJohn.\n\nOn Tuesday 28 October 2003 13:34, Bruce Momjian wrote:\n> John K. Herreshoff wrote:\n> > FWIW: I'm fiddling with that right now, and the FK think was quick... a\n> > few seconds... the tables in question have 1400 records, 343000 records\n> > and 7200 records... I'm running Beta5...\n>\n> Did those tables have analyze statistics? Can you try it without\n> statistics (I think you have to drop the tables to erase the\n> statistics).\n\n", "msg_date": "Tue, 28 Oct 2003 13:38:43 -0500", "msg_from": "\"John K. Herreshoff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "John K. Herreshoff wrote:\n> I'm not sure about the analyze stats... Where would I find that (in \n> postgresql.conf I suppose) I'll go see what I have set up, and get back to \n> you in 30 minutes or less...\n\nThey are in pg_statistic. If you have ever anaylzed the table, there\nare stats. I am interested in the non-analyze case because that's how\nthe data will load into a fresh db via pg_dump.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 28 Oct 2003 13:54:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "That did not take long... about 13 minutes to reload the tables from an *.mdb \nfile, and a second or two for each of the 'alter table foo add foreign \nkey...' lines. I tried to drop a 'referencing' table, and the database would \nnot let me, said that something depended on it ;o)\n\nIs there some way to name the foreign key so that it can be dropped later, or \nis there a way to drop the foreign key using information already in the \ndatabase?\n\nJohn.\n\nOn Tuesday 28 October 2003 13:34, Bruce Momjian wrote:\n> John K. Herreshoff wrote:\n> > FWIW: I'm fiddling with that right now, and the FK think was quick... a\n> > few seconds... the tables in question have 1400 records, 343000 records\n> > and 7200 records... I'm running Beta5...\n>\n> Did those tables have analyze statistics? Can you try it without\n> statistics (I think you have to drop the tables to erase the\n> statistics).\n\n", "msg_date": "Tue, 28 Oct 2003 14:04:57 -0500", "msg_from": "\"John K. Herreshoff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "John K. Herreshoff wrote:\n> That did not take long... about 13 minutes to reload the tables from an *.mdb \n> file, and a second or two for each of the 'alter table foo add foreign \n> key...' lines. 
I tried to drop a 'referencing' table, and the database would \n> not let me, said that something depended on it ;o)\n> \n> Is there some way to name the foreign key so that it can be dropped later, or \n> is there a way to drop the foreign key using information already in the \n> database?\n\nYou have to use ALTER TABLE DROP CONSTRAINT perhaps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 28 Oct 2003 14:13:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": ">>>>> \"J\" == Jeff <[email protected]> writes:\n\nJ> And this leads to the place we'd get a huge benefit: Restoring\nJ> backups.. If there were some way to bump up sort_mem while doing\nJ> the restore.. things would be much more pleasant. [Although, even\n\nThere was a rather substantial thread on this about the time when\n7.4b1 was released.\n\nJ> better would be to disable FK stuff while restoring a backup and\nJ> assume the backup is \"sane\"] How we'd go about doing that is the\nJ> subject of much debate.\n\nIf you're restoring from a pg_dump -Fc (compressed dump) it already\nhappens for you. The indexes and foreign keys are not added until the\nvery end, from what I recall.\n\nJ> Perhaps add the functionality to pg_restore? ie, pg_restore -s\nJ> 256MB mybackup.db? It would just end up issuing a set\nJ> sort_mem=256000..\n\nThis was essentially my proposal, though I had better speed\nenhancement by increasing the number of checkpoint buffers.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 28 Oct 2003 14:22:04 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "On Tue, 28 Oct 2003 14:22:04 -0500\nVivek Khera <[email protected]> wrote:\n\n> If you're restoring from a pg_dump -Fc (compressed dump) it already\n> happens for you. The indexes and foreign keys are not added until the\n> very end, from what I recall.\n> \n\nThis happens with regular dumps - at the end is a pile of alter table's\nthat create the indices, FK's and triggers.\n\nIs the -Fc method different?\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Tue, 28 Oct 2003 14:25:24 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": ">>Just loaded up delicious 7.4b5 and wow...\n>>\n>>sort_mem 8192: 137038ms [lots of tmp file activity]\n>>sort_mem 256000: 83109ms\n> \n> \n> Hmm, 298383 -> 83109 (since those are the 256k numbers). Not as\n> much as I'd have hoped, but I'll take a factor of 3.\n\nHi Jeff,\n\nCould you let us know the load times when you have done:\n\n1. A full ANALYZE\n2. 
A delete all from pg_statistic\n\nSo we can see if ANALYZE stats make much difference?\n\nChris\n\n\n", "msg_date": "Wed, 29 Oct 2003 09:47:28 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key performance" }, { "msg_contents": "On Wed, 29 Oct 2003 09:47:28 +0800\nChristopher Kings-Lynne <[email protected]> wrote:\n\n> >>Just loaded up delicious 7.4b5 and wow...\n> >>\n> >>sort_mem 8192: 137038ms [lots of tmp file activity]\n> >>sort_mem 256000: 83109ms\n> > \n\n> 1. A full ANALYZE\n> 2. A delete all from pg_statistic\n> \nI had previously analyze'd before I ran those numbers.\nBut I did it again with and without stats. \n\nWith:\nRun 1 Time: 80157.21 ms\nRun 2 Time: 80763.59 ms\n\nKilled statistics:\n\nTime: 80571.71 ms\nTime: 80759.18 ms\n\nChances are it is going to seq scan regardless so the stats are rather\nuseless. Perhaps in other scenarios it would help.\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Wed, 29 Oct 2003 09:01:54 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding foreign key performance" } ]
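A minimal sketch of the workaround discussed in this thread: raise sort_mem just for the session that adds the constraint. The constraint and column names are assumed for illustration; only the table names come from Jeff's test:

    SET sort_mem = 256000;                        -- ~256 MB, session-local
    ALTER TABLE watchedmembers
      ADD CONSTRAINT watchedmembers_member_fk     -- naming it also makes a later DROP CONSTRAINT easy
      FOREIGN KEY (member_id) REFERENCES members (id);
    RESET sort_mem;

The same SET can be slipped into a restore script ahead of the ALTER TABLE ... ADD FOREIGN KEY statements that pg_dump emits at the end of a dump.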
[ { "msg_contents": "Hi,\n\nsuppose, for simplicity, there is a table with index like this:\n\ncreate table TABLE1 (\n A integer\n);\ncreate index TABLE1_A on TABLE1 (A);\n\nMy question is: why psql (7.3.3) does not use index when filtering by A IS\nNULL, A IS NOT\nNULL expressions?\n\nIn fact, I need to filter by expression ((A is null) or (A > const)).\n\nIs there a way to filter by this expression using index?\n\nFunctional index cannot be used (except strange solution with CASE-ing and\nconverting NULL values into some integer constant)\n\n\n\n----------------------------------------------------------------------------\n--\n Index Scan using table1_a on table1 (cost=0.00..437.14 rows=29164 width=4)\n Index Cond: (a > 1000)\n----------------------------------------------------------------------------\n--\n Seq Scan on table1 (cost=0.00..448.22 rows=1 width=4)\n Filter: (a IS NULL)\n--------------------------------------------------------\n Seq Scan on table1 (cost=0.00..448.22 rows=30222 width=4)\n Filter: (a IS NOT NULL)\n------------------------------------------------------------\n Seq Scan on table1 (cost=0.00..523.77 rows=29164 width=4)\n Filter: ((a IS NULL) OR (a > 1000))\n------------------------------------------------------------\n\n\nCH\n\n", "msg_date": "Tue, 28 Oct 2003 19:57:24 +0100", "msg_from": "\"Cestmir Hybl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Ignoring index on (A is null), (A is not null) conditions" }, { "msg_contents": "Are you seeing this question as totally off-topic in this list, or there is\nreally no one who knows something about indexing \"is null\" bits in postgres?\n\nRegards\nCH\n\n\n> Hi,\n>\n> suppose, for simplicity, there is a table with index like this:\n>\n> create table TABLE1 (\n> A integer\n> );\n> create index TABLE1_A on TABLE1 (A);\n>\n> My question is: why psql (7.3.3) does not use index when filtering by A IS\n> NULL, A IS NOT\n> NULL expressions?\n>\n> In fact, I need to filter by expression ((A is null) or (A > const)).\n>\n> Is there a way to filter by this expression using index?\n>\n> Functional index cannot be used (except strange solution with CASE-ing and\n> converting NULL values into some integer constant)\n>\n>\n>\n> --------------------------------------------------------------------------\n--\n> --\n> Index Scan using table1_a on table1 (cost=0.00..437.14 rows=29164\nwidth=4)\n> Index Cond: (a > 1000)\n> --------------------------------------------------------------------------\n--\n> --\n> Seq Scan on table1 (cost=0.00..448.22 rows=1 width=4)\n> Filter: (a IS NULL)\n> --------------------------------------------------------\n> Seq Scan on table1 (cost=0.00..448.22 rows=30222 width=4)\n> Filter: (a IS NOT NULL)\n> ------------------------------------------------------------\n> Seq Scan on table1 (cost=0.00..523.77 rows=29164 width=4)\n> Filter: ((a IS NULL) OR (a > 1000))\n> ------------------------------------------------------------\n>\n>\n> CH\n\n", "msg_date": "Thu, 30 Oct 2003 12:34:15 +0100", "msg_from": "\"Cestmir Hybl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignoring index on (A is null), (A is not null) conditions" }, { "msg_contents": "try this:\nEXPLAIN [ANALYZE] SELECT a FROM table1 WHERE a IS NULL OR a>2;\nSET enable_seqscan TO off;\nEXPLAIN [ANALYZE] SELECT a FROM table1 WHERE a IS NULL OR a>2;\n \nand compare the costs and times of both executions. 
This will tell you\nwhy postgresql is not using an index.\n\nFor example, if you have 1000 rows in your table, they will fit in only\none page of the table, so postgresql will (correctly) think that\nfetching and procesing this only page will be faster than fetching the\nindex page, procesing it, and fetching and procesing the table page.\nOr perhaps there are so many rows that match your condition, that\npostgresql realizes that using and index or not it will still have to\nvisit almost every page in the table.\n\nMany things can cause postgresql to think that a seqscan is better than\nan indexscan, If after comparing the EXPLAINs you see that postgresql is\nwrong, you should tweak your postgresql.conf (for example the\ncpu_index_tuple_cost value).\n\nhope it helps.\n\nOn Thu, 2003-10-30 at 08:34, Cestmir Hybl wrote:\n\n> Are you seeing this question as totally off-topic in this list, or there is\n> really no one who knows something about indexing \"is null\" bits in postgres?\n> \n> Regards\n> CH\n> \n> \n> > Hi,\n> >\n> > suppose, for simplicity, there is a table with index like this:\n> >\n> > create table TABLE1 (\n> > A integer\n> > );\n> > create index TABLE1_A on TABLE1 (A);\n> >\n> > My question is: why psql (7.3.3) does not use index when filtering by A IS\n> > NULL, A IS NOT\n> > NULL expressions?\n> >\n> > In fact, I need to filter by expression ((A is null) or (A > const)).\n> >\n> > Is there a way to filter by this expression using index?\n> >\n> > Functional index cannot be used (except strange solution with CASE-ing and\n> > converting NULL values into some integer constant)\n> >\n> >\n> >\n> > --------------------------------------------------------------------------\n> --\n> > --\n> > Index Scan using table1_a on table1 (cost=0.00..437.14 rows=29164\n> width=4)\n> > Index Cond: (a > 1000)\n> > --------------------------------------------------------------------------\n> --\n> > --\n> > Seq Scan on table1 (cost=0.00..448.22 rows=1 width=4)\n> > Filter: (a IS NULL)\n> > --------------------------------------------------------\n> > Seq Scan on table1 (cost=0.00..448.22 rows=30222 width=4)\n> > Filter: (a IS NOT NULL)\n> > ------------------------------------------------------------\n> > Seq Scan on table1 (cost=0.00..523.77 rows=29164 width=4)\n> > Filter: ((a IS NULL) OR (a > 1000))\n> > ------------------------------------------------------------\n> >\n> >\n> > CH\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n>", "msg_date": "Thu, 30 Oct 2003 10:56:29 -0300", "msg_from": "Franco Bruno Borghesi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignoring index on (A is null), (A is not null)" }, { "msg_contents": "On Thu, Oct 30, 2003 at 12:34:15 +0100,\n Cestmir Hybl <[email protected]> wrote:\n> Are you seeing this question as totally off-topic in this list, or there is\n> really no one who knows something about indexing \"is null\" bits in postgres?\n\nThere was some talk about IS NULL not being able to use indexes (unless\nyou specifically created a partial index using that condition) a number\nof months ago. You could search through the archives if you are interested\nin what was said. 
My memory is that people thought it would be a good idea\nbut that it wasn't that important to get done.\n\n> \n> > Hi,\n> >\n> > suppose, for simplicity, there is a table with index like this:\n> >\n> > create table TABLE1 (\n> > A integer\n> > );\n> > create index TABLE1_A on TABLE1 (A);\n> >\n> > My question is: why psql (7.3.3) does not use index when filtering by A IS\n> > NULL, A IS NOT\n> > NULL expressions?\n\nThat is a Postgres limitation. If there are only a few null values, but you\nquery for them a lot it may be worth creating a partial index.\n\n> >\n> > In fact, I need to filter by expression ((A is null) or (A > const)).\n\nThis is a whole different matter. Using an indexed search on > is not\nnecessarily a good idea. Unless you know only a small fraction of the\ntable (often 10% is quoted) is greater than the constant, a sequential\nscan is probably a better plan than an index scan. If you know that\nthere is only a small fraction of the values above the constant and you\nknow some large value greater than all values, you can try using a between\ncomparison to coax the planner into doing an index scan.\n", "msg_date": "Thu, 30 Oct 2003 08:04:06 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignoring index on (A is null), (A is not null) conditions" }, { "msg_contents": "\"Cestmir Hybl\" <[email protected]> writes:\n>> In fact, I need to filter by expression ((A is null) or (A > const)).\n\nI wonder whether you shouldn't reconsider your data representation.\nPerhaps the condition you are using \"null\" for would be better\nrepresented by setting A to infinity. (The float and timestamp\ndatatypes actually have a concept of infinity; for other types you\ncan fake it with a large positive value.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2003 10:31:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignoring index on (A is null), (A is not null) conditions " }, { "msg_contents": "First of all, thanks for all your suggestions.\n\nThey were of two classes:\n\n1. use a different data representation (a special constant from the column domain\ninstead of NULL)\n\nThis is possible, of course, but it makes the data model less portable and\nrequires changes in the database abstraction layer of the application.\n\n2. use partial indexes\n\nThis is suitable for a single null-allowed column index. With an increasing\nnumber of null-allowed columns inside the index, the number of partial indexes\nrequired grows exponentially.\n\nAll RDBMSs I have ever used (Sybase, MSSQL, or even MySQL) were able to use an index to\nfilter by expressions containing IS NULL conditions /(A is NULL), (A is not\nNULL), (A is NULL or A = const), (A is NULL or A > const)/ so it seems\npretty strange to me that PostgreSQL does not.\n\nIs this at least a scheduled feature?\n\nCH\n\n", "msg_date": "Tue, 4 Nov 2003 11:43:51 +0100", "msg_from": "\"Cestmir Hybl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignoring index on (A is null), (A is not null) conditions" } ]
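For the single-column case discussed here, the partial-index workaround Bruno mentions looks roughly like this, using the TABLE1 example from the original post (whether the planner actually picks the indexes still depends on its row estimates):

    CREATE INDEX table1_a_null ON table1 (a) WHERE a IS NULL;
    -- the two arms are disjoint, so UNION ALL returns the same rows as the original OR:
    SELECT * FROM table1 WHERE a IS NULL
    UNION ALL
    SELECT * FROM table1 WHERE a > 1000;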
[ { "msg_contents": "I just noticed on one of my tables I have the following two indexes:\n\nIndexes: entity_watch_map_pkey primary key btree (entity_id, watch_id),\n ewm_entity_id btree (entity_id),\n\n\nI can't think of why the second index is there, as ISTM there is no\ninstance where the first index wouldn't be used in place of the second\none if i were to delete the second one. its a heavily updated table, so\naxing the second one would be a bonus for performance, am i missing\nsomething? Thanks in advance, \n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "29 Oct 2003 09:03:56 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "redundent index?" }, { "msg_contents": "On Wed, 2003-10-29 at 09:03, Robert Treat wrote:\n> I just noticed on one of my tables I have the following two indexes:\n> \n> Indexes: entity_watch_map_pkey primary key btree (entity_id, watch_id),\n> ewm_entity_id btree (entity_id),\n> \n> \n> I can't think of why the second index is there, as ISTM there is no\n> instance where the first index wouldn't be used in place of the second\n\nThe cost in evaluating the first index will be a little higher (more\ndata to pull off disk due to second item), so there may be a few\nborderline cases that could switch to a sequential scan rather than an\nindex scan.", "msg_date": "Wed, 29 Oct 2003 10:17:24 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: redundent index?" }, { "msg_contents": "On Wed, 29 Oct 2003 10:17:24 -0500, Rod Taylor <[email protected]> wrote:\n>On Wed, 2003-10-29 at 09:03, Robert Treat wrote:\n>> Indexes: entity_watch_map_pkey primary key btree (entity_id, watch_id),\n>> ewm_entity_id btree (entity_id),\n>> \n>> I can't think of why the second index is there, as ISTM there is no\n>> instance where the first index wouldn't be used in place of the second\n>\n>The cost in evaluating the first index will be a little higher\n\nYes, the actual cost may be a little higher. But the cost estimation\nmight be significantly higher, so there can be border cases where the\nplanner chooses a sequential scan over a multi-column index scan while\na single-column index would correctly be recognized as being faster\n...\n\nServus\n Manfred\n", "msg_date": "Fri, 31 Oct 2003 14:47:14 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: redundent index?" } ]
[ { "msg_contents": "\nDear PostgreSQL gurus,\n\nI really not intend to start a flame war here but i am genuinely\nseeking help to retain PostgreSQL as my database for my RT\nsystem.\n\nFew months back i had posted regarding lowering of column names in SQL\nbeing passed to RDBMS by DBIx::SearchBuilder , looks like it was controlled\nby a parameter \"CASESENSITIVE\" changing it to 1 from 0 did help for postgresql\nto MySQL it probably does not matter.\n\n\nBut This time its a different situation\nThe query in Postgresql is taking 6 times more than MySQL\n\nThe Query being given gets generated by DBIx::SearchBuilder.\nAlthough i am not sure but i feel modules like DBIx::SearchBuilder which are\nsupposed to provide RDBMS independent abstraction are unfortunately\ngetting test only with MySQL or Oracle otherwise such huge difference in timing\nwere not possible.\n\n\n\nIN MYSQL:\n========\nmysql> SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 WHERE\n((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( ACL_2.PrincipalId =\nPrincipals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain = 'SystemInternal' OR\nmain.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id = Principals_1.id) OR (\n( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain = 'RT::Ticket-Role'\nAND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType AND main.id = Principals_1.id) )\nAND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25)\n) ORDER BY main.Name ASC ;+-------+------------+---------------------------+----------------+-----------+----------+\n| id | Name | Description | Domain | Type | Instance |\n+-------+------------+---------------------------+----------------+-----------+----------+\n| 40208 | sales | Sales team in Delhi | UserDefined | | |\n| 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1 |\n| 11 | User 10 | ACL equiv. for user 10 | ACLEquivalence | UserEquiv | 10 |\n| 13 | User 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12 |\n| 31067 | User 31066 | ACL equiv. for user 31066 | ACLEquivalence | UserEquiv | 31066 |\n+-------+------------+---------------------------+----------------+-----------+----------+\n5 rows in set (0.94 sec)\n\nmysql>\n\nWHEREAS for PostgreSQL:\nrt3=# SELECT version();\n PostgreSQL 7.4beta5 on i686-pc-linux-gnu, compiled by GCC 2.96\n\n\nrt3=# SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 WHERE\n((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( ACL_2.PrincipalId =\nPrincipals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain = 'SystemInternal' OR\nmain.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id = Principals_1.id) OR (\n( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain = 'RT::Ticket-Role'\nAND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType AND main.id = Principals_1.id) )\nAND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25)\n) ORDER BY main.Name ASC ;+-------+------------+---------------------------+----------------+-----------+----------+\n| id | name | description | domain | type | instance |\n+-------+------------+---------------------------+----------------+-----------+----------+\n| 40264 | sales | Sales team in Delhi | UserDefined | | |\n| 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1 |\n| 11 | User 10 | ACL equiv. 
for user 10 | ACLEquivalence | UserEquiv | 10 |\n| 13 | User 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12 |\n| 31123 | User 31122 | ACL equiv. for user 31122 | ACLEquivalence | UserEquiv | 31122 |\n+-------+------------+---------------------------+----------------+-----------+----------+\n(5 rows)\nTime: 7281.574 ms\nrt3=#\n\nExplain Analyze of Above Query is being given below:\n\nUnique (cost=4744.06..4744.08 rows=1 width=81) (actual time=6179.789..6179.828 rows=5 loops=1)\n -> Sort (cost=4744.06..4744.07 rows=1 width=81) (actual time=6179.785..6179.792 rows=6 loops=1)\n Sort Key: main.name, main.id, main.description, main.\"domain\", main.\"type\", main.instance\n -> Nested Loop (cost=1788.68..4744.05 rows=1 width=81) (actual time=584.004..6179.712\n rows=6 loops=1) Join Filter: ((((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND\n (((\"outer\".\"domain\")::text = 'SystemInternal'::text) OR ((\"outer\".\"domain\")::text =\n 'UserDefined'::text) OR ((\"outer\".\"domain\")::text = 'ACLEquivalence'::text) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND ((\"inner\".principalid\n = \"outer\".id) OR ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text)) AND ((\"inner\".principalid =\n \"outer\".id) OR ((\"outer\".instance)::text = '6973'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text)) AND ((\"inner\".principalid =\n \"outer\".id) OR ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"outer\".instance)::text = '25'::text)) AND ((\"inner\".principalid = \"outer\".id) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND ((\"inner\".principalid = \"outer\".id) OR ((\"outer\".\"type\")::text =\n (\"inner\".principaltype)::text)) AND ((\"outer\".id = \"outer\".id) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND ((\"inner\".principalid\n = \"outer\".id) OR (\"outer\".id = \"outer\".id)) AND (((\"inner\".principaltype)::text =\n 'Group'::text) OR (\"outer\".id = \"outer\".id))) -> Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual\n time=583.804..1187.448 rows=20153 loops=1) Merge Cond: (\"outer\".id = \"inner\".id)\n Join Filter: (((\"inner\".id = \"outer\".id) OR ((\"inner\".\"domain\")::text =\n 'RT::Ticket-Role'::text) OR ((\"inner\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".instance)::text = '6973'::text) OR ((\"inner\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"inner\".instance)::text = '25'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".instance)::text = '6973'::text) OR 
((\"inner\".instance)::text =\n '25'::text)) AND (((\"inner\".\"domain\")::text = 'SystemInternal'::text) OR\n ((\"inner\".\"domain\")::text = 'UserDefined'::text) OR ((\"inner\".\"domain\")::text\n = 'ACLEquivalence'::text) OR (\"inner\".id = \"outer\".id))) -> Index Scan using principals_pkey on principals principals_1 \n (cost=0.00..2536.49 rows=82221 width=4) (actual time=0.087..169.725\n rows=64626 loops=1) -> Sort (cost=1788.68..1797.99 rows=3726 width=81) (actual\n time=583.624..625.604 rows=20153 loops=1) Sort Key: main.id\n -> Index Scan using groups_domain, groups_domain, groups_domain,\n groups_lower_instance, groups_domain on groups main \n (cost=0.00..1567.66 rows=3726 width=81) (actual time=0.132..449.240\n rows=20153 loops=1) Index Cond: (((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n ((\"domain\")::text = 'RT::Queue-Role'::text)) Filter: ((((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((\"domain\")::text =\n 'RT::Ticket-Role'::text) OR ((\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"domain\")::text =\n 'SystemInternal'::text) OR ((\"domain\")::text =\n 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((\"domain\")::text =\n 'RT::Ticket-Role'::text) OR ((instance)::text = '25'::text)) AND\n (((\"domain\")::text = 'SystemInternal'::text) OR ((\"domain\")::text\n = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n ((instance)::text = '25'::text))) -> Index Scan using acl_objectid, acl_objecttype on acl acl_2 (cost=0.00..8.03\n rows=3 width=13) (actual time=0.032..0.138 rows=6 loops=20153) Index Cond: ((objectid = 25) OR ((objecttype)::text = 'RT::System'::text))\n Filter: ((((rightname)::text = 'OwnTicket'::text) OR ((rightname)::text =\n 'SuperUser'::text)) AND (((objecttype)::text = 'RT::Queue'::text) OR\n ((objecttype)::text = 'RT::System'::text))) Total runtime: 6183.155 ms [ 6 secs approx ]\n(18 rows)\n\nSincerely Looking Forward to a Help\nRegds\nMallah\n\n\n\n\n\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! 
Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 00:52:43 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "\n\nActually PostgreSQL is at par with MySQL when the query is being Properly Written(simplified)\n like below\n\nrt3=# SELECT DISTINCT main.* FROM Groups main join Principals Principals_1 using(id) join ACL\nACL_2 on (ACL_2.PrincipalId = Principals_1.id) WHERE ((ACL_2.RightName =\n'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( ACL_2.PrincipalType = 'Group' AND ( \nmain.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') )\nOR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain =\n'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType ) ) AND\n(ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) \nORDER BY main.Name ASC ;\n id | name | description | domain | type | instance\n-------+------------+---------------------------+----------------+-----------+----------\n 40264 | sales | Sales team in Delhi | UserDefined | |\n 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1\n 11 | User 10 | ACL equiv. for user 10 | ACLEquivalence | UserEquiv | 10\n 13 | User 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12\n 31123 | User 31122 | ACL equiv. for user 31122 | ACLEquivalence | UserEquiv | 31122\n(5 rows)\n\n( Total runtime: 1.699 ms )\nTime: 6.455 ms which is 0.00 6455 Secs\n\n\nIn mysql:\nmysql> SELECT DISTINCT main.* FROM Groups main join Principals Principals_1 using(id) join ACL\nACL_2 on (ACL_2.PrincipalId = Principals_1.id) WHERE ((ACL_2.RightName =\n'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( ACL_2.PrincipalType = 'Group' AND ( \nmain.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') )\nOR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain =\n'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType ) ) AND\n(ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) \nORDER BY main.Name ASC ;+-------+------------+---------------------------+----------------+-----------+----------+\n| id | Name | Description | Domain | Type | Instance |\n+-------+------------+---------------------------+----------------+-----------+----------+\n| 40208 | sales | Sales team in Delhi | UserDefined | | |\n| 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1 |\n| 11 | User 10 | ACL equiv. for user 10 | ACLEquivalence | UserEquiv | 10 |\n| 13 | User 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12 |\n| 31067 | User 31066 | ACL equiv. for user 31066 | ACLEquivalence | UserEquiv | 31066 |\n+-------+------------+---------------------------+----------------+-----------+----------+\n5 rows in set (0.00 sec)\n\nmysql>\n\nSo its not just PostgreSQL that is suffering from the bad SQL but MySQL also.\nBut the question is my does PostgreSQL suffer so badly ??\nI think not all developers write very nice SQLs.\n\nIts really sad to see that a fine peice of work (RT) is performing sub-optimal\nbecoz of malformed SQLs. 
[ specially on database of my choice ;-) ]\n\n\n\nRegds\nMallah.\n\n>\n> Dear PostgreSQL gurus,\n>\n> I really not intend to start a flame war here but i am genuinely\n> seeking help to retain PostgreSQL as my database for my RT\n> system.\n>\n> Few months back i had posted regarding lowering of column names in SQL being passed to RDBMS by\n> DBIx::SearchBuilder , looks like it was controlled by a parameter \"CASESENSITIVE\" changing it\n> to 1 from 0 did help for postgresql to MySQL it probably does not matter.\n>\n>\n> But This time its a different situation\n> The query in Postgresql is taking 6 times more than MySQL\n>\n> The Query being given gets generated by DBIx::SearchBuilder.\n> Although i am not sure but i feel modules like DBIx::SearchBuilder which are supposed to\n> provide RDBMS independent abstraction are unfortunately getting test only with MySQL or Oracle\n> otherwise such huge difference in timing were not possible.\n>\n>\n>\n> IN MYSQL:\n> ========\n> mysql> SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 WHERE\n> ((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( (\n> ACL_2.PrincipalId = Principals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain =\n> 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id\n> = Principals_1.id) OR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR (\n> main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type =\n> ACL_2.PrincipalType AND main.id = Principals_1.id) ) AND (ACL_2.ObjectType = 'RT::System' OR\n> (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) ORDER BY main.Name ASC\n> ;+-------+------------+---------------------------+----------------+-----------+----------+ |\n> id | Name | Description | Domain | Type | Instance |\n> +-------+------------+---------------------------+----------------+-----------+----------+ |\n> 40208 | sales | Sales team in Delhi | UserDefined | | | |\n> 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1 | | 11 |\n> User 10 | ACL equiv. for user 10 | ACLEquivalence | UserEquiv | 10 | | 13 | User\n> 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12 | | 31067 | User\n> 31066 | ACL equiv. 
for user 31066 | ACLEquivalence | UserEquiv | 31066 |\n> +-------+------------+---------------------------+----------------+-----------+----------+ 5\n> rows in set (0.94 sec)\n>\n> mysql>\n>\n> WHEREAS for PostgreSQL:\n> rt3=# SELECT version();\n> PostgreSQL 7.4beta5 on i686-pc-linux-gnu, compiled by GCC 2.96\n>\n>\n> rt3=# SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 WHERE\n> ((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( (\n> ACL_2.PrincipalId = Principals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain =\n> 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id\n> = Principals_1.id) OR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR (\n> main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type =\n> ACL_2.PrincipalType AND main.id = Principals_1.id) ) AND (ACL_2.ObjectType = 'RT::System' OR\n> (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) ORDER BY main.Name ASC\n> ;+-------+------------+---------------------------+----------------+-----------+----------+ |\n> id | name | description | domain | type | instance |\n> +-------+------------+---------------------------+----------------+-----------+----------+ |\n> 40264 | sales | Sales team in Delhi | UserDefined | | | |\n> 2 | User 1 | ACL equiv. for user 1 | ACLEquivalence | UserEquiv | 1 | | 11 |\n> User 10 | ACL equiv. for user 10 | ACLEquivalence | UserEquiv | 10 | | 13 | User\n> 12 | ACL equiv. for user 12 | ACLEquivalence | UserEquiv | 12 | | 31123 | User\n> 31122 | ACL equiv. for user 31122 | ACLEquivalence | UserEquiv | 31122 |\n> +-------+------------+---------------------------+----------------+-----------+----------+ (5\n> rows)\n> Time: 7281.574 ms\n> rt3=#\n>\n> Explain Analyze of Above Query is being given below:\n>\n> Unique (cost=4744.06..4744.08 rows=1 width=81) (actual time=6179.789..6179.828 rows=5 loops=1)\n> -> Sort (cost=4744.06..4744.07 rows=1 width=81) (actual time=6179.785..6179.792 rows=6\n> loops=1)\n> Sort Key: main.name, main.id, main.description, main.\"domain\", main.\"type\",\n> main.instance -> Nested Loop (cost=1788.68..4744.05 rows=1 width=81) (actual\n> time=584.004..6179.712 rows=6 loops=1) Join Filter:\n> ((((\"inner\".principaltype)::text = 'Group'::text) OR\n> ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".\"domain\")::text\n> = 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text)\n> OR ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".\"domain\")::text =\n> 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n> ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".instance)::text\n> = '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n> ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".instance)::text =\n> '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n> ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND\n> (((\"outer\".\"domain\")::text = 'SystemInternal'::text) OR ((\"outer\".\"domain\")::text\n> = 'UserDefined'::text) OR ((\"outer\".\"domain\")::text = 'ACLEquivalence'::text) OR\n> ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND\n> ((\"inner\".principalid = \"outer\".id) OR ((\"outer\".\"domain\")::text =\n> 'RT::Ticket-Role'::text) OR ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text))\n> AND ((\"inner\".principalid = \"outer\".id) OR 
((\"outer\".instance)::text =\n> '6973'::text) OR ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text)) AND\n> ((\"inner\".principalid = \"outer\".id) OR ((\"outer\".\"domain\")::text =\n> 'RT::Ticket-Role'::text) OR ((\"outer\".instance)::text = '25'::text)) AND\n> ((\"inner\".principalid = \"outer\".id) OR ((\"outer\".instance)::text = '6973'::text)\n> OR ((\"outer\".instance)::text = '25'::text)) AND ((\"inner\".principalid =\n> \"outer\".id) OR ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND\n> ((\"outer\".id = \"outer\".id) OR ((\"outer\".\"type\")::text =\n> (\"inner\".principaltype)::text)) AND ((\"inner\".principalid = \"outer\".id) OR\n> (\"outer\".id = \"outer\".id)) AND (((\"inner\".principaltype)::text = 'Group'::text)\n> OR (\"outer\".id = \"outer\".id))) -> Merge Join\n> (cost=1788.68..4735.71 rows=1 width=85) (actual time=583.804..1187.448 rows=20153\n> loops=1) Merge Cond: (\"outer\".id = \"inner\".id)\n> Join Filter: (((\"inner\".id = \"outer\".id) OR ((\"inner\".\"domain\")::text =\n> 'RT::Ticket-Role'::text) OR ((\"inner\".\"domain\")::text =\n> 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n> ((\"inner\".instance)::text = '6973'::text) OR ((\"inner\".\"domain\")::text =\n> 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n> ((\"inner\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n> ((\"inner\".instance)::text = '25'::text)) AND ((\"inner\".id = \"outer\".id) OR\n> ((\"inner\".instance)::text = '6973'::text) OR ((\"inner\".instance)::text =\n> '25'::text)) AND (((\"inner\".\"domain\")::text = 'SystemInternal'::text) OR\n> ((\"inner\".\"domain\")::text = 'UserDefined'::text) OR\n> ((\"inner\".\"domain\")::text = 'ACLEquivalence'::text) OR (\"inner\".id =\n> \"outer\".id))) -> Index Scan using principals_pkey on\n> principals principals_1 (cost=0.00..2536.49 rows=82221 width=4) (actual\n> time=0.087..169.725 rows=64626 loops=1) -> Sort\n> (cost=1788.68..1797.99 rows=3726 width=81) (actual time=583.624..625.604\n> rows=20153 loops=1) Sort Key: main.id\n> -> Index Scan using groups_domain, groups_domain, groups_domain,\n> groups_lower_instance, groups_domain on groups main\n> (cost=0.00..1567.66 rows=3726 width=81) (actual time=0.132..449.240\n> rows=20153 loops=1) Index Cond:\n> (((\"domain\")::text = 'SystemInternal'::text) OR\n> ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n> 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n> ((\"domain\")::text = 'RT::Queue-Role'::text))\n> Filter: ((((\"domain\")::text =\n> 'SystemInternal'::text) OR ((\"domain\")::text =\n> 'UserDefined'::text) OR ((\"domain\")::text =\n> 'ACLEquivalence'::text) OR ((\"domain\")::text =\n> 'RT::Ticket-Role'::text) OR ((\"domain\")::text =\n> 'RT::Queue-Role'::text)) AND (((\"domain\")::text =\n> 'SystemInternal'::text) OR ((\"domain\")::text =\n> 'UserDefined'::text) OR ((\"domain\")::text =\n> 'ACLEquivalence'::text) OR ((\"domain\")::text =\n> 'RT::Ticket-Role'::text) OR ((instance)::text = '25'::text))\n> AND (((\"domain\")::text = 'SystemInternal'::text) OR\n> ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n> 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n> ((instance)::text = '25'::text))) -> Index Scan\n> using acl_objectid, acl_objecttype on acl acl_2\n> (cost=0.00..8.03\n> rows=3 width=13) (actual time=0.032..0.138 rows=6 loops=20153)\n> Index Cond: ((objectid = 25) OR ((objecttype)::text = 'RT::System'::text))\n> Filter: ((((rightname)::text = 
'OwnTicket'::text) OR ((rightname)::text =\n> 'SuperUser'::text)) AND (((objecttype)::text = 'RT::Queue'::text) OR\n> ((objecttype)::text = 'RT::System'::text))) Total runtime: 6183.155 ms [ 6\n> secs approx ]\n> (18 rows)\n>\n> Sincerely Looking Forward to a Help\n> Regds\n> Mallah\n>\n>\n>\n>\n>\n>\n>\n>\n> -----------------------------------------\n> Over 1,00,000 exporters are waiting for your order! Click below to get in touch with leading\n> Indian exporters listed in the premier\n> trade directory Exporters Yellow Pages.\n> http://www.trade-india.com/dyn/gdh/eyp/\n>\n>\n>\n> ---------------------------(end of broadcast)--------------------------- TIP 2: you can get off\n> all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 01:15:44 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "\n\n\nOn Thu, Oct 30, 2003 at 01:15:44AM +0530, [email protected] wrote:\n> Actually PostgreSQL is at par with MySQL when the query is being Properly Written(simplified)\n> \n> In mysql:\n> mysql> SELECT DISTINCT main.* FROM Groups main join Principals Principals_1 using(id) join ACL\n> ACL_2 on (ACL_2.PrincipalId = Principals_1.id) \n\nInteresting, last time I looked, this syntax wasn't valid on mysql.\nAnd I'm not familiar with the \"using(id)\" notation. Can you point me at\nproper docs on it?\n\n\n> \n> So its not just PostgreSQL that is suffering from the bad SQL but MySQL also.\n> But the question is my does PostgreSQL suffer so badly ??\n> I think not all developers write very nice SQLs.\n> \n> Its really sad to see that a fine peice of work (RT) is performing sub-optimal\n> becoz of malformed SQLs. [ specially on database of my choice ;-) ]\n\nCan you try using SearchBuilder 0.90? That made certain optimizations to\nthe postgres query builder that got backed out in 0.92, due to a\npossible really bad failure mode. Thankfully, because all of this is\nmachine generated SQL we can just improve the generator, rather than\nhaving to retool the entire application.\n\n\n-- \njesse reed vincent -- [email protected] -- [email protected] \n70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n\n\"If IBM _wanted_ to make clones, we could make them cheaper and faster than\nanyone else!\" - An IBM Rep. visiting Vassar College's Comp Sci Department.\n", "msg_date": "Wed, 29 Oct 2003 14:51:15 -0500", "msg_from": "Jesse <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "> So its not just PostgreSQL that is suffering from the bad SQL but\n> MySQL also. But the question is my does PostgreSQL suffer so badly\n> ?? I think not all developers write very nice SQLs.\n> \n> Its really sad to see that a fine peice of work (RT) is performing\n> sub-optimal becoz of malformed SQLs. [ specially on database of my\n> choice ;-) ]\n\nPost EXPLAIN ANALYZES of the queries you're running, then maybe you'll\nbe able to get some useful help from this list. Until then, it's very\nhard to speculate as to why PostgreSQL is slower. 
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 29 Oct 2003 12:03:44 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": ">\n>\n>\n> On Thu, Oct 30, 2003 at 01:15:44AM +0530, [email protected] wrote:\n>> Actually PostgreSQL is at par with MySQL when the query is being Properly Written(simplified)\n>>\n>> In mysql:\n>> mysql> SELECT DISTINCT main.* FROM Groups main join Principals Principals_1 using(id) join\n>> ACL ACL_2 on (ACL_2.PrincipalId = Principals_1.id)\n>\n> Interesting, last time I looked, this syntax wasn't valid on mysql. And I'm not familiar with\n> the \"using(id)\" notation. Can you point me at proper docs on it?\n\nI am using MySQL 4.0.16 the latest stable one.\nDocs\n\nMySQL: http://www.mysql.com/doc/en/JOIN.html\nPostgresql:\nwell i am not able to point out a dedicated page for this topic\nin pgsql document but below covers it a bit.\nhttp://www.postgresql.org/docs/7.3/static/sql-select.html\nJoin i beleive are SQL standard feature and better docs shud exist.\n\n\n\n>\n>\n>>\n>> So its not just PostgreSQL that is suffering from the bad SQL but MySQL also. But the question\n>> is my does PostgreSQL suffer so badly ??\n>> I think not all developers write very nice SQLs.\n>>\n>> Its really sad to see that a fine peice of work (RT) is performing sub-optimal becoz of\n>> malformed SQLs. [ specially on database of my choice ;-) ]\n>\n> Can you try using SearchBuilder 0.90? That made certain optimizations to the postgres query\n> builder that got backed out in 0.92, due to a\n> possible really bad failure mode. Thankfully, because all of this is machine generated SQL we\n> can just improve the generator, rather than having to retool the entire application.\n\nTrue, Its really a pleasure to see that in DBIx/SearchBuilder/Handle/Pg.pm\nDatabase Specific optimisations can be done easily Congratulations on writing\nSearchBuilder in such an well structured manner. mine is .92 just going to try .90 as u are \nsuggesting and will post back the result.\n\n>\n>\n> --\n> jesse reed vincent -- [email protected] -- [email protected]\n> 70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n>\n> \"If IBM _wanted_ to make clones, we could make them cheaper and faster than anyone else!\" - An\n> IBM Rep. visiting Vassar College's Comp Sci Department.\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 01:48:43 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": ">> So its not just PostgreSQL that is suffering from the bad SQL but MySQL also. But the\n>> question is my does PostgreSQL suffer so badly ?? I think not all developers write very nice\n>> SQLs.\n>>\n>> Its really sad to see that a fine peice of work (RT) is performing sub-optimal becoz of\n>> malformed SQLs. [ specially on database of my choice ;-) ]\n>\n> Post EXPLAIN ANALYZES of the queries you're running, then maybe you'll be able to get some\n> useful help from this list. Until then, it's very hard to speculate as to why PostgreSQL is\n> slower. 
-sc\n\nHere It is:\n\nin case they are illegeble please lemme know i will attach it as .txt\nfiles.\n\nSlower One:\n\nexplain analyze SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 \nWHERE ((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( \nACL_2.PrincipalId = Principals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain =\n'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id =\nPrincipals_1.id) OR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain\n= 'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType AND main.id\n= Principals_1.id) ) AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND\nACL_2.ObjectId = 25) ) ORDER BY main.Name ASC ;\nUnique (cost=4744.06..4744.08 rows=1 width=81) (actual time=6774.140..6774.204 rows=5 loops=1)\n -> Sort (cost=4744.06..4744.07 rows=1 width=81) (actual time=6774.136..6774.145 rows=6 loops=1)\n Sort Key: main.name, main.id, main.description, main.\"domain\", main.\"type\", main.instance\n -> Nested Loop (cost=1788.68..4744.05 rows=1 width=81) (actual time=597.744..6774.042\n rows=6 loops=1) Join Filter: ((((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND (((\"inner\".principaltype)::text = 'Group'::text) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND\n (((\"outer\".\"domain\")::text = 'SystemInternal'::text) OR ((\"outer\".\"domain\")::text =\n 'UserDefined'::text) OR ((\"outer\".\"domain\")::text = 'ACLEquivalence'::text) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND ((\"inner\".principalid\n = \"outer\".id) OR ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text)) AND ((\"inner\".principalid =\n \"outer\".id) OR ((\"outer\".instance)::text = '6973'::text) OR\n ((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text)) AND ((\"inner\".principalid =\n \"outer\".id) OR ((\"outer\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"outer\".instance)::text = '25'::text)) AND ((\"inner\".principalid = \"outer\".id) OR\n ((\"outer\".instance)::text = '6973'::text) OR ((\"outer\".instance)::text =\n '25'::text)) AND ((\"inner\".principalid = \"outer\".id) OR ((\"outer\".\"type\")::text =\n (\"inner\".principaltype)::text)) AND ((\"outer\".id = \"outer\".id) OR\n ((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)) AND ((\"inner\".principalid\n = \"outer\".id) OR (\"outer\".id = \"outer\".id)) AND (((\"inner\".principaltype)::text =\n 'Group'::text) OR (\"outer\".id = \"outer\".id))) -> Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual\n time=597.540..1340.526 rows=20153 loops=1) Merge Cond: (\"outer\".id = \"inner\".id)\n Join Filter: (((\"inner\".id = \"outer\".id) OR ((\"inner\".\"domain\")::text =\n 'RT::Ticket-Role'::text) OR 
((\"inner\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".instance)::text = '6973'::text) OR ((\"inner\".\"domain\")::text =\n 'RT::Queue-Role'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"inner\".instance)::text = '25'::text)) AND ((\"inner\".id = \"outer\".id) OR\n ((\"inner\".instance)::text = '6973'::text) OR ((\"inner\".instance)::text =\n '25'::text)) AND (((\"inner\".\"domain\")::text = 'SystemInternal'::text) OR\n ((\"inner\".\"domain\")::text = 'UserDefined'::text) OR ((\"inner\".\"domain\")::text\n = 'ACLEquivalence'::text) OR (\"inner\".id = \"outer\".id))) -> Index Scan using principals_pkey on principals principals_1 \n (cost=0.00..2536.49 rows=82221 width=4) (actual time=0.073..248.849\n rows=64626 loops=1) -> Sort (cost=1788.68..1797.99 rows=3726 width=81) (actual\n time=597.360..645.859 rows=20153 loops=1) Sort Key: main.id\n -> Index Scan using groups_domain, groups_domain, groups_domain,\n groups_lower_instance, groups_domain on groups main \n (cost=0.00..1567.66 rows=3726 width=81) (actual time=0.105..456.682\n rows=20153 loops=1) Index Cond: (((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n ((\"domain\")::text = 'RT::Queue-Role'::text)) Filter: ((((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((\"domain\")::text =\n 'RT::Ticket-Role'::text) OR ((\"domain\")::text =\n 'RT::Queue-Role'::text)) AND (((\"domain\")::text =\n 'SystemInternal'::text) OR ((\"domain\")::text =\n 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((\"domain\")::text =\n 'RT::Ticket-Role'::text) OR ((instance)::text = '25'::text)) AND\n (((\"domain\")::text = 'SystemInternal'::text) OR ((\"domain\")::text\n = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n ((instance)::text = '25'::text))) -> Index Scan using acl_objectid, acl_objecttype on acl acl_2 (cost=0.00..8.03\n rows=3 width=13) (actual time=0.034..0.150 rows=6 loops=20153) Index Cond: ((objectid = 25) OR ((objecttype)::text = 'RT::System'::text))\n Filter: ((((rightname)::text = 'OwnTicket'::text) OR ((rightname)::text =\n 'SuperUser'::text)) AND (((objecttype)::text = 'RT::Queue'::text) OR\n ((objecttype)::text = 'RT::System'::text))) Total runtime: 6778.888 ms\n\n\nBETTER ONE:\nexplain analyze SELECT DISTINCT main.* FROM Groups main join Principals Principals_1 using(id)\njoin ACL ACL_2 on (ACL_2.PrincipalId = Principals_1.id) WHERE ((ACL_2.RightName =\n'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( ACL_2.PrincipalType = 'Group' AND ( \nmain.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') )\nOR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain =\n'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType ) ) AND\n(ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) \nORDER BY main.Name ASC ;\n\nUnique (cost=22.18..22.20 rows=1 width=81) (actual time=0.878..0.910 rows=5 loops=1)\n -> Sort (cost=22.18..22.19 rows=1 width=81) (actual time=0.875..0.881 rows=6 loops=1)\n Sort Key: main.name, main.id, main.description, main.\"domain\", main.\"type\", 
main.instance\n -> Nested Loop (cost=0.00..22.17 rows=1 width=81) (actual time=0.255..0.814 rows=6\n loops=1) -> Nested Loop (cost=0.00..17.54 rows=1 width=85) (actual time=0.194..0.647\n rows=6 loops=1) Join Filter: ((((\"outer\".principaltype)::text = 'Group'::text) OR\n ((\"inner\".\"domain\")::text = 'RT::Ticket-Role'::text) OR\n ((\"inner\".\"domain\")::text = 'RT::Queue-Role'::text)) AND\n (((\"outer\".principaltype)::text = 'Group'::text) OR ((\"inner\".instance)::text\n = '6973'::text) OR ((\"inner\".\"domain\")::text = 'RT::Queue-Role'::text)) AND\n (((\"outer\".principaltype)::text = 'Group'::text) OR ((\"inner\".\"domain\")::text\n = 'RT::Ticket-Role'::text) OR ((\"inner\".instance)::text = '25'::text)) AND\n (((\"outer\".principaltype)::text = 'Group'::text) OR ((\"inner\".instance)::text\n = '6973'::text) OR ((\"inner\".instance)::text = '25'::text)) AND\n (((\"outer\".principaltype)::text = 'Group'::text) OR ((\"inner\".\"type\")::text =\n (\"outer\".principaltype)::text)) AND (((\"inner\".\"domain\")::text =\n 'SystemInternal'::text) OR ((\"inner\".\"domain\")::text = 'UserDefined'::text)\n OR ((\"inner\".\"domain\")::text = 'ACLEquivalence'::text) OR\n ((\"inner\".\"type\")::text = (\"outer\".principaltype)::text))) -> Index Scan using acl_objectid, acl_objecttype on acl acl_2 \n (cost=0.00..8.03 rows=3 width=13) (actual time=0.064..0.190 rows=6 loops=1) Index Cond: ((objectid = 25) OR ((objecttype)::text = 'RT::System'::text))\n Filter: ((((rightname)::text = 'OwnTicket'::text) OR ((rightname)::text\n = 'SuperUser'::text)) AND (((objecttype)::text = 'RT::Queue'::text) OR\n ((objecttype)::text = 'RT::System'::text))) -> Index Scan using groups_pkey on groups main (cost=0.00..3.11 rows=1\n width=81) (actual time=0.050..0.051 rows=1 loops=6) Index Cond: (\"outer\".principalid = main.id)\n Filter: ((((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((\"domain\")::text = 'RT::Ticket-Role'::text)\n OR ((\"domain\")::text = 'RT::Queue-Role'::text)) AND (((\"domain\")::text\n = 'SystemInternal'::text) OR ((\"domain\")::text = 'UserDefined'::text)\n OR ((\"domain\")::text = 'ACLEquivalence'::text) OR ((instance)::text =\n '6973'::text) OR ((\"domain\")::text = 'RT::Queue-Role'::text)) AND\n (((\"domain\")::text = 'SystemInternal'::text) OR ((\"domain\")::text =\n 'UserDefined'::text) OR ((\"domain\")::text = 'ACLEquivalence'::text) OR\n ((\"domain\")::text = 'RT::Ticket-Role'::text) OR ((instance)::text =\n '25'::text)) AND (((\"domain\")::text = 'SystemInternal'::text) OR\n ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n 'ACLEquivalence'::text) OR ((instance)::text = '6973'::text) OR\n ((instance)::text = '25'::text))) -> Index Scan using principals_pkey on principals principals_1 (cost=0.00..4.62\n rows=1 width=4) (actual time=0.017..0.019 rows=1 loops=6) Index Cond: (\"outer\".principalid = principals_1.id)\n Total runtime: 1.151 ms\n(15 rows)\n\n\n\n\n\n\n\n\n\n\n>\n> --\n> Sean Chittenden\n>\n> ---------------------------(end of broadcast)--------------------------- TIP 1: subscribe and\n> unsubscribe commands go to [email protected]\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! 
Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 01:52:56 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "On Thu, 30 Oct 2003 [email protected] wrote:\n\n> >> So its not just PostgreSQL that is suffering from the bad SQL but MySQL also. But the\n> >> question is my does PostgreSQL suffer so badly ?? I think not all developers write very nice\n> >> SQLs.\n> >>\n> >> Its really sad to see that a fine peice of work (RT) is performing sub-optimal becoz of\n> >> malformed SQLs. [ specially on database of my choice ;-) ]\n> >\n> > Post EXPLAIN ANALYZES of the queries you're running, then maybe you'll be able to get some\n> > useful help from this list. Until then, it's very hard to speculate as to why PostgreSQL is\n> > slower. -sc\n> \n> Here It is:\n> \n> in case they are illegeble please lemme know i will attach it as .txt\n> files.\n> \n> Slower One:\n> \n> explain analyze SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2 \n> WHERE ((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( ( \n> ACL_2.PrincipalId = Principals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain =\n> 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id =\n> Principals_1.id) OR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR ( main.Domain\n> = 'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type = ACL_2.PrincipalType AND main.id\n> = Principals_1.id) ) AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND\n> ACL_2.ObjectId = 25) ) ORDER BY main.Name ASC ;\n\nNote here:\n\nMerge Join \n\t(cost=1788.68..4735.71 rows=1 width=85) \n\t(actual time=597.540..1340.526 rows=20153 loops=1)\n\tMerge Cond: (\"outer\".id = \"inner\".id)\n\nThis estimate is WAY off. Are both of those fields indexed and analyzed? \nHave you tried upping the statistics target on those two fields?\nI assume they are compatible types.\n\nYou might try 'set enable_mergejoin = false' and see if it does something \nfaster here. Just a guess.\n\n", "msg_date": "Wed, 29 Oct 2003 14:51:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with" }, { "msg_contents": "> On Thu, 30 Oct 2003 [email protected] wrote:\n>\n>> >> So its not just PostgreSQL that is suffering from the bad SQL but MySQL also. But the\n>> >> question is my does PostgreSQL suffer so badly ?? I think not all developers write very\n>> >> nice SQLs.\n>> >>\n>> >> Its really sad to see that a fine peice of work (RT) is performing sub-optimal becoz of\n>> >> malformed SQLs. [ specially on database of my choice ;-) ]\n>> >\n>> > Post EXPLAIN ANALYZES of the queries you're running, then maybe you'll be able to get some\n>> > useful help from this list. Until then, it's very hard to speculate as to why PostgreSQL is\n>> > slower. 
-sc\n>>\n>> Here It is:\n>>\n>> in case they are illegeble please lemme know i will attach it as .txt files.\n>>\n>> Slower One:\n>>\n>> explain analyze SELECT DISTINCT main.* FROM Groups main , Principals Principals_1, ACL ACL_2\n>> WHERE ((ACL_2.RightName = 'OwnTicket')OR(ACL_2.RightName = 'SuperUser')) AND ( (\n>> ACL_2.PrincipalId = Principals_1.id AND ACL_2.PrincipalType = 'Group' AND ( main.Domain =\n>> 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND main.id\n>> = Principals_1.id) OR ( ( (main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR (\n>> main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973) ) AND main.Type =\n>> ACL_2.PrincipalType AND main.id = Principals_1.id) ) AND (ACL_2.ObjectType = 'RT::System' OR\n>> (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25) ) ORDER BY main.Name ASC ;\n>\n> Note here:\n>\n> Merge Join\n> \t(cost=1788.68..4735.71 rows=1 width=85)\n> \t(actual time=597.540..1340.526 rows=20153 loops=1)\n> \tMerge Cond: (\"outer\".id = \"inner\".id)\n>\n> This estimate is WAY off. Are both of those fields indexed and analyzed?\n\nYes both are primary keys. and i did vacuum full verbose analyze;\n\n Have you tried\n> upping the statistics target on those two fields?\n> I assume they are compatible types.\n\nYes they are\n\n>\n> You might try 'set enable_mergejoin = false' and see if it does something faster here. Just a\n> guess.\n\n\nDid not help\n\nregds\nmallah.\n\n\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 03:47:09 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "[email protected] writes:\n> I really not intend to start a flame war here but i am genuinely\n> seeking help to retain PostgreSQL as my database for my RT system.\n\nIf there are things that can be discovered to feed back to the RT\ndevelopers to improve PostgreSQL's usefulness as a data store for RT,\nthat would be a Good Thing for anyone that would be interested in\nusing PG+RT.\n-- \noutput = reverse(\"ofni.smrytrebil\" \"@\" \"enworbbc\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Wed, 29 Oct 2003 18:17:26 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "<[email protected]> writes:\n> Actually PostgreSQL is at par with MySQL when the query is being\n> Properly Written(simplified)\n\nThese are not the same query, though. 
Your original looks like\n\nSELECT DISTINCT main.*\nFROM Groups main , Principals Principals_1, ACL ACL_2\nWHERE\n ((ACL_2.RightName = 'OwnTicket') OR (ACL_2.RightName = 'SuperUser'))\nAND ((ACL_2.PrincipalId = Principals_1.id AND\n ACL_2.PrincipalType = 'Group' AND\n (main.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence') AND\n main.id = Principals_1.id)\n OR\n (((main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR\n (main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973)) AND\n main.Type = ACL_2.PrincipalType AND\n main.id = Principals_1.id))\nAND (ACL_2.ObjectType = 'RT::System' OR\n (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25))\nORDER BY main.Name ASC\n\nwhere the replacement is\n\nSELECT DISTINCT main.*\nFROM Groups main join Principals Principals_1 using(id)\n join ACL ACL_2 on (ACL_2.PrincipalId = Principals_1.id)\nWHERE\n ((ACL_2.RightName = 'OwnTicket') OR (ACL_2.RightName = 'SuperUser'))\nAND ((ACL_2.PrincipalType = 'Group' AND\n (main.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR main.Domain = 'ACLEquivalence'))\n OR\n (((main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR\n (main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973)) AND\n main.Type = ACL_2.PrincipalType))\nAND (ACL_2.ObjectType = 'RT::System' OR\n (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25)) \nORDER BY main.Name ASC ;\n\nYou have made the condition \"ACL_2.PrincipalId = Principals_1.id\"\nrequired for all cases, where before it appeared in only one arm of an\nOR condition. If the second query is correct, then the first one is\nwrong, and your real problem is that your SQL generator is broken.\n\n(I'd argue that the SQL generator is broken anyway ;-) if it generates\nsuch horrible conditions as that. Or maybe the real problem is that\nthe database schema is a mess and needs rethinking.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Oct 2003 18:23:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder) " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> (I'd argue that the SQL generator is broken anyway ;-) if it generates\n> such horrible conditions as that. Or maybe the real problem is that\n> the database schema is a mess and needs rethinking.)\n\nI had the same reaction when I first saw those queries. But I think the\nproblem with the RT schema is that it needs to implement an ACL system that\nsatisfies lots of different usage models.\n\nSome people that use it want tickets to be accessible implicitly by the opener\nlike a bug tracking system, others want the tickets to be internal only like a\nnetwork trouble ticketing system. Some people want to restrict specific\noperations at a fine-grain, others want to be have more sweeping acls.\n\nI've tried doing ACL systems before and they always turned into messes long\nbefore that point. I always end up pushing back and trying to force the client\nto make up his or her mind of exactly what he or she needs before my head\nexplodes . 
If there's a nice general model for ACLs that can include\ncompletely different usage models I've never found it.\n\n-- \ngreg\n\n", "msg_date": "29 Oct 2003 23:19:58 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": "On Thursday 30 Oct 2003 4:53 am, you wrote:\n> <[email protected]> writes:\n> > Actually PostgreSQL is at par with MySQL when the query is being\n> > Properly Written(simplified)\n>\n> These are not the same query, though. Your original looks like\n\n\nYes that was an optimisation on haste the simplification was not \naccurate. I will work on it again. But incidently both the SQLs\nproduced the same results which *may* mean that the query could\nhave been done in a simpler manner.\n\n\n>\n> SELECT DISTINCT main.*\n> FROM Groups main , Principals Principals_1, ACL ACL_2\n> WHERE\n> ((ACL_2.RightName = 'OwnTicket') OR (ACL_2.RightName = 'SuperUser'))\n> AND ((ACL_2.PrincipalId = Principals_1.id AND\n> ACL_2.PrincipalType = 'Group' AND\n> (main.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR\n> main.Domain = 'ACLEquivalence') AND main.id = Principals_1.id)\n> OR\n> (((main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR\n> (main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973)) AND\n> main.Type = ACL_2.PrincipalType AND\n> main.id = Principals_1.id))\n> AND (ACL_2.ObjectType = 'RT::System' OR\n> (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25))\n> ORDER BY main.Name ASC\n>\n> where the replacement is\n>\n> SELECT DISTINCT main.*\n> FROM Groups main join Principals Principals_1 using(id)\n> join ACL ACL_2 on (ACL_2.PrincipalId = Principals_1.id)\n> WHERE\n> ((ACL_2.RightName = 'OwnTicket') OR (ACL_2.RightName = 'SuperUser'))\n> AND ((ACL_2.PrincipalType = 'Group' AND\n> (main.Domain = 'SystemInternal' OR main.Domain = 'UserDefined' OR\n> main.Domain = 'ACLEquivalence')) OR\n> (((main.Domain = 'RT::Queue-Role' AND main.Instance = 25) OR\n> (main.Domain = 'RT::Ticket-Role' AND main.Instance = 6973)) AND\n> main.Type = ACL_2.PrincipalType))\n> AND (ACL_2.ObjectType = 'RT::System' OR\n> (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 25))\n> ORDER BY main.Name ASC ;\n>\n> You have made the condition \"ACL_2.PrincipalId = Principals_1.id\"\n> required for all cases, where before it appeared in only one arm of an\n> OR condition. If the second query is correct, then the first one is\n> wrong, and your real problem is that your SQL generator is broken.\n\n\nYes the SQL generator is not doing the best things at the moment\nand the author(Jesse) is aware of it and looking forward to our\nhelp in optimising it.\n\n\n>\n> (I'd argue that the SQL generator is broken anyway ;-) if it generates\n> such horrible conditions as that. 
Or maybe the real problem is that\n> the database schema is a mess and needs rethinking.)\n\nI do not think the database schema is a mess.\nThe ACL system in RT and RT itself is quite comprehensive.\nThe problem is with the Query Generator.\n\nApologies for delayed response to your email.\n\nRegards\nMallah.\n\n\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Thu, 30 Oct 2003 16:34:38 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.4beta5 vs MySQL 4.0.16 with RT(DBIx::SearchBuilder)" }, { "msg_contents": ">>>>> \"scott\" == scott marlowe <[email protected]> writes:\n\n[...]\n\n scott> Note here:\n\n scott> Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual\n scott> time=597.540..1340.526 rows=20153 loops=1) Merge Cond:\n scott> (\"outer\".id = \"inner\".id)\n\n scott> This estimate is WAY off. Are both of those fields indexed\n scott> and analyzed? Have you tried upping the statistics target on\n scott> those two fields? I assume they are compatible types.\n\nShould I understand that a join on incompatible types (such as integer\nand varchar) may lead to bad performances ?\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.aopsys.com\n\n", "msg_date": "Tue, 18 Nov 2003 11:01:48 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Join on incompatible types" }, { "msg_contents": "Laurent Martelli wrote:\n\n>>>>>>\"scott\" == scott marlowe <[email protected]> writes:\n> \n> \n> [...]\n> \n> scott> Note here:\n> \n> scott> Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual\n> scott> time=597.540..1340.526 rows=20153 loops=1) Merge Cond:\n> scott> (\"outer\".id = \"inner\".id)\n> \n> scott> This estimate is WAY off. Are both of those fields indexed\n> scott> and analyzed? Have you tried upping the statistics target on\n> scott> those two fields? I assume they are compatible types.\n> \n> Should I understand that a join on incompatible types (such as integer\n> and varchar) may lead to bad performances ?\n\nConversely, you should enforce strict type compatibility in comparisons for \ngetting any good plans..:-)\n\n Shridhar\n\n", "msg_date": "Tue, 18 Nov 2003 16:15:41 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": ">>>>> \"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n\n Shridhar> Laurent Martelli wrote:\n\n[...]\n\n >> Should I understand that a join on incompatible types (such as\n >> integer and varchar) may lead to bad performances ?\n\n Shridhar> Conversely, you should enforce strict type compatibility\n Shridhar> in comparisons for getting any good plans..:-)\n\nHa ha, now I understand why a query of mine was so sluggish.\n\nIs there a chance I could achieve the good perfs without having he\nsame types ? I've tried a CAST in the query, but it's even a little\nworse than without it. 
However, using a view to cast integers into\nvarchar gives acceptable results (see at the end).\n\nI'm using Postgresql 7.3.4.\n\niprofil-jac=# EXPLAIN ANALYZE SELECT * from classes where exists (select value from lists where lists.id='16' and lists.value=classes.id);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on classes (cost=0.00..5480289.75 rows=9610 width=25) (actual time=31.68..7321.56 rows=146 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using lists_id on lists (cost=0.00..285.12 rows=1 width=8) (actual time=0.38..0.38 rows=0 loops=19220)\n Index Cond: (id = 16)\n Filter: ((value)::text = ($0)::text)\n Total runtime: 7321.72 msec\n\niprofil-jac=# EXPLAIN ANALYZE SELECT * from classes2 where exists (select value from lists where lists.id='16' and lists.value=classes2.id);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on classes2 (cost=0.00..5923.87 rows=500 width=64) (actual time=0.76..148.20 rows=146 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using lists_value on lists (cost=0.00..5.90 rows=1 width=8) (actual time=0.01..0.01 rows=0 loops=19220)\n Index Cond: ((id = 16) AND (value = $0))\n Total runtime: 148.34 msec\n\n\n--\n-- Tables classes and classes2 are populated with the same data, they\n-- only differ on the type of the \"id\" column.\n--\n\n\niprofil-jac=# \\d classes\n Table \"public.classes\"\n Colonne | Type | Modifications \n---------+-------------------+---------------\n id | integer | not null\n classid | character varying | \nIndex: classes_pkey primary key btree (id)\n\niprofil-jac=# \\d classes2\n Table \"public.classes2\"\n Colonne | Type | Modifications \n---------+-------------------+---------------\n id | character varying | not null\n classid | character varying | \nIndex: classes2_pkey primary key btree (id)\n\niprofil-jac=# \\d lists \n Table \"public.lists\"\n Colonne | Type | Modifications \n---------+-------------------+---------------\n id | integer | not null\n index | integer | not null\n value | character varying | \nIndex: lists_index unique btree (id, \"index\"),\n lists_id btree (id),\n lists_value btree (id, value)\n\n--\n-- IT'S EVEN BETTER WITH A JOIN\n--\n\niprofil-jac=# EXPLAIN ANALYZE SELECT * from lists join classes on classes.id=lists.value where lists.id='16';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..90905.88 rows=298 width=41) (actual time=53.93..9327.87 rows=146 loops=1)\n Join Filter: ((\"inner\".id)::text = (\"outer\".value)::text)\n -> Seq Scan on lists (cost=0.00..263.43 rows=146 width=16) (actual time=8.38..9.70 rows=146 loops=1)\n Filter: (id = 16)\n -> Seq Scan on classes (cost=0.00..333.20 rows=19220 width=25) (actual time=0.00..28.45 rows=19220 loops=146)\n Total runtime: 9328.35 msec\n\n\niprofil-jac=# EXPLAIN ANALYZE SELECT * from lists join classes2 on classes2.id=lists.value where lists.id='16';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=268.67..324.09 rows=16 width=80) (actual time=9.59..65.55 rows=146 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".value)\n -> Index Scan using classes2_pkey on classes2 (cost=0.00..52.00 rows=1000 width=64) (actual 
time=0.03..40.83 rows=18778 loops=1)\n -> Sort (cost=268.67..269.03 rows=146 width=16) (actual time=9.50..9.56 rows=146 loops=1)\n Sort Key: lists.value\n -> Seq Scan on lists (cost=0.00..263.43 rows=146 width=16) (actual time=8.83..9.17 rows=146 loops=1)\n Filter: (id = 16)\n Total runtime: 65.73 msec\n\n\n--\n-- CASTING IN THE QUERY IS NO GOOD\n--\n\niprofil-jac=# EXPLAIN ANALYZE SELECT * from lists join classes on CAST(classes.id AS character varying)=lists.value where lists.id='16';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..90905.88 rows=298 width=41) (actual time=69.03..10017.26 rows=146 loops=1)\n Join Filter: (((\"inner\".id)::text)::character varying = \"outer\".value)\n -> Seq Scan on lists (cost=0.00..263.43 rows=146 width=16) (actual time=20.64..22.03 rows=146 loops=1)\n Filter: (id = 16)\n -> Seq Scan on classes (cost=0.00..333.20 rows=19220 width=25) (actual time=0.00..30.45 rows=19220 loops=146)\n Total runtime: 10017.72 msec\n\n\n--\n-- CREATING A VIEW IS BETTER\n--\n\niprofil-jac=# CREATE VIEW classes3 as SELECT CAST(id AS varchar), classid from classes;\niprofil-jac=# EXPLAIN ANALYZE SELECT * from classes3 where exists (select value from lists where lists.id='16' and lists.value=classes3.id);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on classes (cost=0.00..113853.60 rows=9610 width=25) (actual time=0.91..192.31 rows=146 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using lists_value on lists (cost=0.00..5.91 rows=1 width=8) (actual time=0.01..0.01 rows=0 loops=19220)\n Index Cond: ((id = 16) AND (value = (($0)::text)::character varying))\n Total runtime: 192.47 msec\n\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.aopsys.com\n\n", "msg_date": "Tue, 18 Nov 2003 14:24:51 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": "Laurent Martelli wrote:\n\n>>>>>>\"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n> \n> \n> Shridhar> Laurent Martelli wrote:\n> \n> [...]\n> \n> >> Should I understand that a join on incompatible types (such as\n> >> integer and varchar) may lead to bad performances ?\n> \n> Shridhar> Conversely, you should enforce strict type compatibility\n> Shridhar> in comparisons for getting any good plans..:-)\n> \n> Ha ha, now I understand why a query of mine was so sluggish.\n> \n> Is there a chance I could achieve the good perfs without having he\n> same types ? I've tried a CAST in the query, but it's even a little\n> worse than without it. However, using a view to cast integers into\n> varchar gives acceptable results (see at the end).\n> \n> I'm using Postgresql 7.3.4.\n\nI am stripping the analyze outputs and directly jumping to the end.\n\nCan you try following?\n\n1. Make all fields integer in all the table.\n2. 
Try following query\nEXPLAIN ANALYZE SELECT * from lists join classes on classes.id=lists.value where \nlists.id='16'::integer;\n\nHow does it affect the runtime?\n\n Shridhar\n\n", "msg_date": "Wed, 19 Nov 2003 12:24:04 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": ">>>>> \"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n\n Shridhar> Laurent Martelli wrote:\n >>>>>>> \"Shridhar\" == Shridhar Daithankar\n >>>>>>> <[email protected]> writes:\n Shridhar> Laurent Martelli wrote:\n >> [...] >> Should I understand that a join on incompatible types\n >> (such as >> integer and varchar) may lead to bad performances ?\n Shridhar> Conversely, you should enforce strict type compatibility\n Shridhar> in comparisons for getting any good plans..:-)\n >> Ha ha, now I understand why a query of mine was so sluggish. Is\n >> there a chance I could achieve the good perfs without having he\n >> same types ? I've tried a CAST in the query, but it's even a\n >> little worse than without it. However, using a view to cast\n >> integers into varchar gives acceptable results (see at the end).\n >> I'm using Postgresql 7.3.4.\n\n Shridhar> I am stripping the analyze outputs and directly jumping to\n Shridhar> the end.\n\n Shridhar> Can you try following?\n\n Shridhar> 1. Make all fields integer in all the table. \n\nI can't do this because lists.values contains non integer data which\ndo not refer to a classes.id value. It may sound weird. This is\nbecause it's a generic schema for a transparent persistence framework.\n\nThe solution for me would rather be to have varchar everywhere.\n\n Shridhar> 2. Try following query EXPLAIN ANALYZE SELECT * from lists\n Shridhar> join classes on classes.id=lists.value where\n Shridhar> lists.id='16'::integer;\n\n\n\n Shridhar> How does it affect the runtime?\n\n Shridhar> Shridhar\n\n\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.aopsys.com\n\n", "msg_date": "Wed, 19 Nov 2003 10:52:43 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": "Laurent Martelli wrote:\n\n>>>>>>\"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n> Shridhar> I am stripping the analyze outputs and directly jumping to\n> Shridhar> the end.\n> \n> Shridhar> Can you try following?\n> \n> Shridhar> 1. Make all fields integer in all the table. \n> \n> I can't do this because lists.values contains non integer data which\n> do not refer to a classes.id value. It may sound weird. This is\n> because it's a generic schema for a transparent persistence framework.\n\nFine .I understand. So instead of using a field value, can you use integer \nversion of that field? (Was that one of your queries used that? I deleted the OP)\n\n\n> The solution for me would rather be to have varchar everywhere.\n\nYou need to cast every occurance of that varchar field appropriately, to start \nwith. The performance might suffer as well for numbers.\n\n> Shridhar> 2. Try following query EXPLAIN ANALYZE SELECT * from lists\n> Shridhar> join classes on classes.id=lists.value where\n> Shridhar> lists.id='16'::integer;\n\nclasses.id=lists.value::integer.\n\nTry that.\n\nThe aim is absolute type compatibility. 
If types aren't exactly same, the plan \nis effectively dead.\n\n<OT>\nI would say postgresql enforces good habits in it's application developers, from \na cultural POV.\n\nHad C refused to compile without such strict type compatibility, we wouldn't \nhave to worry about 16bit/32bit and 64 bit software. Just upgrade the compiler \nand everything is damn good..:-)\n\nI doubt if C would have so popular with such strict type checking but that is \nanother issue. I think pascal enforces such strict syntax.. Not sure though..\n</OT>\n\n Shridhar\n\n\n", "msg_date": "Wed, 19 Nov 2003 15:49:23 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": ">>>>> \"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n\n[...]\n\n Shridhar> 2. Try following query EXPLAIN ANALYZE SELECT * from lists\n Shridhar> join classes on classes.id=lists.value where\n Shridhar> lists.id='16'::integer;\n\n Shridhar> classes.id=lists.value::integer.\n\nWith classes.id of type integer and lists.value of type varchar, I get\n\"ERROR: Cannot cast type character varying to integer\", which is not\nsuch a surprise. \n\nThanks for your help anyway.\n\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.aopsys.com\n\n", "msg_date": "Wed, 19 Nov 2003 13:16:03 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" }, { "msg_contents": "Laurent Martelli wrote:\n\n>>>>>>\"Shridhar\" == Shridhar Daithankar <[email protected]> writes:\n> \n> \n> [...]\n> \n> Shridhar> 2. Try following query EXPLAIN ANALYZE SELECT * from lists\n> Shridhar> join classes on classes.id=lists.value where\n> Shridhar> lists.id='16'::integer;\n> \n> Shridhar> classes.id=lists.value::integer.\n> \n> With classes.id of type integer and lists.value of type varchar, I get\n> \"ERROR: Cannot cast type character varying to integer\", which is not\n> such a surprise. \n\nTry to_numbr function to get a number out of string. Then cast it to integer.\n\nhttp://developer.postgresql.org/docs/postgres/functions-formatting.html\n\nI hope that works. Don't have postgresql installation handy here..\n\n Shridhar\n\n", "msg_date": "Wed, 19 Nov 2003 18:01:49 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join on incompatible types" } ]
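[Illustrative sketch, not part of the archived thread above: it pulls together the two remedies discussed there — giving the join columns identical declared types (Laurent's classes3 view) and raising the statistics target on join keys whose row estimates were far off (scott.marlowe's suggestion for the RT groups/principals merge join). Table and column names are taken from the messages above; the statistics target of 100 is only an assumed example value, not something tested against these schemas.]

    -- Keep application queries unchanged but join on matching types: the view
    -- casts the integer key once, so comparing it against lists.value (varchar)
    -- no longer defeats the planner's use of the index on lists(id, value).
    CREATE VIEW classes3 AS
        SELECT CAST(id AS varchar) AS id, classid
        FROM classes;

    SELECT *
    FROM lists JOIN classes3 ON classes3.id = lists.value
    WHERE lists.id = 16;

    -- When a merge join estimate is badly wrong (rows=1 estimated vs
    -- rows=20153 actual), collect finer statistics on the join keys and
    -- re-analyze before retrying the query.
    ALTER TABLE groups     ALTER COLUMN id SET STATISTICS 100;
    ALTER TABLE principals ALTER COLUMN id SET STATISTICS 100;
    ANALYZE groups;
    ANALYZE principals;
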
[ { "msg_contents": "\nok this time it constructs a query which puts 7.3.4 on a infinite loop\nbut 7.4b5 is able to come out of it.\n\nsince it may be of interest to the pgsql people i am Ccing it to the\npgsql-performance list i hope its ok.\n\n\n\nPgsql 7.3.4 on an endless loop:\n\nSELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id =\nGroups_1.Instance)) JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) \nJOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id =\nCachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId =\nUsers_4.id)) WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( (\n(Users_4.EmailAddress = '[email protected]')AND(Groups_1.Domain =\n'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) )\nAND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10\n\n\nBut 7.4 beta5 seems to be able to handle it:\n\nSELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id =\nGroups_1.Instance)) JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) \nJOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id =\nCachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId =\nUsers_4.id)) WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( (\n(Users_4.EmailAddress = '[email protected]')AND(Groups_1.Domain =\n'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) )\nAND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10; id | effectiveid | queue | type | issuestatement | resolution | owner | subject \n | initialpriority | finalpriority | priority | timeestimated | timeworked | status | timeleft\n | told | starts | started | due | resolved |\n lastupdatedby | lastupdated | creator | created | disabled------+-------------+-------+--------+----------------+------------+-------+-------------------------+-----------------+---------------+----------+---------------+------------+--------+----------+------+---------------------+---------------------+---------------------+---------------------+---------------+---------------------+---------+---------------------+---------- 13 | 13 | 23 | ticket | 0 | 0 | 31122 | General Discussion \n | 0 | 0 | 0 | 0 | 0 | new | 0\n | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 1970-01-01 00:00:00\n | 31122 | 2001-11-22 04:19:10 | 31122 | 2001-11-22 04:19:07 | 0 6018 | 6018 | 19 | ticket | 0 | 0 | 10 | EYP Prospective\n Clients | 0 | 0 | 0 | 0 | 0 | new | \n 0 | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 2002-09-11 18:29:37 | 1970-01-01\n 00:00:00 | 31122 | 2002-09-11 18:29:39 | 31122 | 2002-09-11 18:29:37 | 0 6336 | 6336 | 19 | ticket | 0 | 0 | 10 | EYP Prospective\n Clients | 0 | 0 | 0 | 0 | 0 | new | \n 0 | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 2002-09-20 12:31:02 | 1970-01-01\n 00:00:00 | 31122 | 2002-09-20 12:31:09 | 31122 | 2002-09-20 12:31:02 | 0 6341 | 6341 | 19 | ticket | 0 | 0 | 10 | IP Prospective\n Clients | 0 | 0 | 0 | 0 | 0 | new | \n 0 | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 2002-09-20 14:34:25 | 1970-01-01\n 00:00:00 | 31122 | 2002-09-20 14:34:26 | 31122 | 2002-09-20 14:34:25 | 0(4 rows)\n\nTime: 900.930 ms\n\n\n\nWith The explain analyze below:\n\nrt3=# explain analyze SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as 
Groups_1 ON (\nmain.id = Groups_1.Instance)) JOIN Principals as Principals_2 ON ( Groups_1.id =\nPrincipals_2.ObjectId)) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id =\nCachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId =\nUsers_4.id)) WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( (\n(Users_4.EmailAddress = '[email protected]')AND(Groups_1.Domain =\n'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) )\nAND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10; QUERY\n PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=582.27..582.34 rows=1 width=164) (actual time=854.302..854.433 rows=4 loops=1)\n -> Unique (cost=582.27..582.34 rows=1 width=164) (actual time=854.297..854.418 rows=4 loops=1)\n -> Sort (cost=582.27..582.28 rows=1 width=164) (actual time=854.294..854.303 rows=8\n loops=1) Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\",\n main.issuestatement, main.resolution, main.\"owner\", main.subject,\n main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked,\n main.status, main.timeleft, main.told, main.starts, main.started, main.due,\n main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created,\n main.disabled -> Hash Join (cost=476.18..582.26 rows=1 width=164) (actual time=853.025..854.056\n rows=8 loops=1) Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Nested Loop (cost=0.00..105.97 rows=21 width=4) (actual\n time=0.372..1.073 rows=37 loops=1) -> Index Scan using users4 on users users_4 (cost=0.00..3.99 rows=2\n width=4) (actual time=0.182..0.188 rows=1 loops=1) Index Cond: ((emailaddress)::text = '[email protected]'::text)\n -> Index Scan using cachedgroupmembers2 on cachedgroupmembers\n cachedgroupmembers_3 (cost=0.00..50.81 rows=14 width=8) (actual\n time=0.165..0.703 rows=37 loops=1) Index Cond: (cachedgroupmembers_3.memberid = \"outer\".id)\n -> Hash (cost=476.17..476.17 rows=1 width=168) (actual\n time=852.267..852.267 rows=0 loops=1) -> Nested Loop (cost=0.00..476.17 rows=1 width=168) (actual\n time=0.684..842.401 rows=3209 loops=1) -> Nested Loop (cost=0.00..471.54 rows=1 width=168) (actual\n time=0.571..704.492 rows=3209 loops=1) -> Seq Scan on tickets main (cost=0.00..465.62 rows=1\n width=164) (actual time=0.212..87.100 rows=3209 loops=1) Filter: ((effectiveid = id) AND ((\"type\")::text =\n 'ticket'::text) AND (((status)::text = 'new'::text)\n OR ((status)::text = 'open'::text))) -> Index Scan using groups1 on groups groups_1 \n (cost=0.00..5.90 rows=1 width=12) (actual time=0.158..0.168\n rows=1 loops=3209) Index Cond: (((groups_1.\"domain\")::text =\n 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text =\n (groups_1.instance)::text) AND\n ((groups_1.\"type\")::text = 'Requestor'::text)) -> Index Scan using principals2 on principals principals_2 \n (cost=0.00..4.62 rows=1 width=8) (actual time=0.019..0.022 rows=1\n loops=3209) Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n Total runtime: 855.472 ms\n(22 
rows)\n\nTime: 895.739 ms\nrt3=#\n\n\n\n\n\n\n\n\n\n> http://backpan.cpan.org/authors/id/J/JE/JESSE/DBIx-SearchBuilder-0.90.tar.gz\n>\n>\n> On Thu, Oct 30, 2003 at 01:57:31AM +0530, [email protected] wrote:\n>> >\n>> >\n>> >\n>> > On Thu, Oct 30, 2003 at 01:30:26AM +0530, [email protected] wrote:\n>> >>\n>> >> Dear Jesse,\n>> >>\n>> >> I really want to add a Pg specific better query builder\n>> >> the generic one is messing with postgresql.\n>> >\n>> > I've removed the CC to ivan, to my knowledge, he has nothing to do with SB these days\n>> > anymore.\n>> >\n>> >\n>> >> i think i have to work in :\n>> >> DBIx/SearchBuilder/Handle/Pg.pm\n>> >>\n>> >> i hope u read my recent emails to pgsql-performance list.\n>>\n>> Yes.\n>>\n>>\n>>\n>> >>\n>> > And I hope you read my reply. There _is_ a postgres specific query builder. And there was a\n>> > bug in 0.90 that caused a possible endless loop. The bugfix disabled some of the\n>> > optimization. If you can tell me that 0.90 improves your performance (as it generated more\n>> > correct queries for pg) then we can work on just fixing the little bug.\n>>\n>> where is .90 ? i dont see it in\n>>\n>> http://www.fsck.com/pub/rt/devel/\n>>\n>>\n>> regds\n>> mallah.\n>>\n>>\n>> >\n>> >\n>> >\n>> >> =================================================================================== # this\n>> >> code is all hacky and evil. but people desperately want _something_ and I'm # super tired.\n>> >> refactoring gratefully appreciated.\n>> >> ===================================================================================\n>> >>\n>> >> sub _BuildJoins {\n>> >> my $self = shift;\n>> >> my $sb = shift;\n>> >> my %seen_aliases;\n>> >>\n>> >> $seen_aliases{'main'} = 1;\n>> >>\n>> >> my $join_clause =$sb->{'table'} . \" main \" ;\n>> >>\n>> >> my @keys = ( keys %{ $sb->{'left_joins'} } );\n>> >>\n>> >> while ( my $join = shift @keys ) {\n>> >> if ( $seen_aliases{ $sb->{'left_joins'}{$join}{'depends_on'} } ) {\n>> >> $join_clause = \"(\" . $join_clause;\n>> >> $join_clause .= $sb->{'left_joins'}{$join}{'alias_string'} . \" ON (\";\n>> >> $join_clause .=\n>> >> join ( ') AND( ', values %{ $sb->{'left_joins'}{$join}{'criteria'} } );\n>> >> $join_clause .= \")) \";\n>> >>\n>> >> $seen_aliases{$join} = 1;\n>> >> }\n>> >> else {\n>> >> push ( @keys, $join );\n>> >> }\n>> >>\n>> >> }\n>> >> return (\n>> >> join ( \", \", ($join_clause, @{ $sb->{'aliases'} }))) ;\n>> >>\n>> >> }\n>> >>\n>> >>\n>> >>\n>> >>\n>> >>\n>> >> -----------------------------------------\n>> >> Over 1,00,000 exporters are waiting for your order! Click below to get in touch with\n>> >> leading Indian exporters listed in the premier\n>> >> trade directory Exporters Yellow Pages.\n>> >> http://www.trade-india.com/dyn/gdh/eyp/\n>> >>\n>> >>\n>> >\n>> > --\n>> > jesse reed vincent -- [email protected] -- [email protected]\n>> > 70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n>> >\n>> > . . . when not in doubt, get in doubt. -- Old Discordian Proveb\n>>\n>>\n>> -----------------------------------------\n>> Over 1,00,000 exporters are waiting for your order! Click below to get in touch with leading\n>> Indian exporters listed in the premier\n>> trade directory Exporters Yellow Pages.\n>> http://www.trade-india.com/dyn/gdh/eyp/\n>>\n>>\n>\n> --\n> jesse reed vincent -- [email protected] -- [email protected]\n> 70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n>\n> . . . when not in doubt, get in doubt. 
-- Old Discordian Proveb\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 02:17:09 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "\nDear Tom,\n\nCan you please have a Look at the below and suggest why it apparently puts\n7.3.4 on an infinite loop . the CPU utilisation of the backend running it \napproches 99%.\n\n\nQuery:\n\n I have tried my best to indent it :)\n\nSELECT DISTINCT main.* FROM\n(\n (\n (\n (\n Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance)\n ) JOIN\n Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n ) JOIN\n CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n ) JOIN\n Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n) WHERE\n\n( (main.EffectiveId = main.id) ) AND\n( (main.Type = 'ticket') ) AND\n\n(\n (\n (\n (Users_4.EmailAddress = '[email protected]')AND\n (Groups_1.Domain = 'RT::Ticket-Role')AND\n (Groups_1.Type = 'Requestor')AND\n (Principals_2.PrincipalType = 'Group')\n )\n )\n AND\n (\n (main.Status = 'new')OR(main.Status = 'open')\n )\n) ORDER BY main.Priority DESC LIMIT 10\n\n\nOn Thursday 30 Oct 2003 2:17 am, [email protected] wrote:\n> ok this time it constructs a query which puts 7.3.4 on a infinite loop\n> but 7.4b5 is able to come out of it.\n>\n> since it may be of interest to the pgsql people i am Ccing it to the\n> pgsql-performance list i hope its ok.\n>\n>\n>\n> Pgsql 7.3.4 on an endless loop:\n>\n> SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON (\n> main.id = Groups_1.Instance)) JOIN Principals as Principals_2 ON (\n> Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as\n> CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId))\n> JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)) \n> WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( \n> ( (Users_4.EmailAddress = '[email protected]')AND(Groups_1.Domain =\n> 'RT::Ticket-Role')AND(Groups_1.Type =\n> 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) AND (\n> (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority\n> DESC LIMIT 10\n>\n>\n> But 7.4 beta5 seems to be able to handle it:\n>\n> SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON (\n> main.id = Groups_1.Instance)) JOIN Principals as Principals_2 ON (\n> Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as\n> CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId))\n> JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)) \n> WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( \n> ( (Users_4.EmailAddress = '[email protected]')AND(Groups_1.Domain =\n> 'RT::Ticket-Role')AND(Groups_1.Type =\n> 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) AND (\n> (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority\n> DESC LIMIT 10; id | effectiveid | queue | type | issuestatement |\n> resolution | owner | subject\n>\n> | initialpriority | finalpriority | priority | timeestimated |\n> | timeworked | status | timeleft\n> |\n> | told | starts | started | due 
\n> | | resolved |\n>\n> lastupdatedby | lastupdated | creator | created |\n> disabled------+-------------+-------+--------+----------------+------------\n>+-------+-------------------------+-----------------+---------------+-------\n>---+---------------+------------+--------+----------+------+----------------\n>-----+---------------------+---------------------+---------------------+----\n>-----------+---------------------+---------+---------------------+----------\n> 13 | 13 | 23 | ticket | 0 | 0 | 31122 |\n> General Discussion\n>\n> | 0 | 0 | 0 | 0 | \n> | 0 | new | 0\n> |\n> | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 1970-01-01 00:00:00\n> | | | 1970-01-01 00:00:00\n> |\n> | 31122 | 2001-11-22 04:19:10 | 31122 | 2001-11-22 04:19:07 | \n> | 0 6018 | 6018 | 19 | ticket | 0 | \n> | 0 | 10 | EYP Prospective\n>\n> Clients | 0 | 0 | 0 | 0 | \n> 0 | new | 0 | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 |\n> 2002-09-11 18:29:37 | 1970-01-01 00:00:00 | 31122 | 2002-09-11\n> 18:29:39 | 31122 | 2002-09-11 18:29:37 | 0 6336 | 6336 | \n> 19 | ticket | 0 | 0 | 10 | EYP Prospective Clients\n> | 0 | 0 | 0 | 0 | 0 |\n> new | 0 | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 2002-09-20\n> 12:31:02 | 1970-01-01 00:00:00 | 31122 | 2002-09-20 12:31:09 | \n> 31122 | 2002-09-20 12:31:02 | 0 6341 | 6341 | 19 | ticket\n> | 0 | 0 | 10 | IP Prospective Clients | \n> 0 | 0 | 0 | 0 | 0 | new | 0\n> | | 1970-01-01 00:00:00 | 1970-01-01 00:00:00 | 2002-09-20 14:34:25 |\n> 1970-01-01 00:00:00 | 31122 | 2002-09-20 14:34:26 | 31122 |\n> 2002-09-20 14:34:25 | 0(4 rows)\n>\n> Time: 900.930 ms\n>\n>\n>\n> With The explain analyze below:\n>\n> rt3=# explain analyze SELECT DISTINCT main.* FROM ((((Tickets main JOIN\n> Groups as Groups_1 ON ( main.id = Groups_1.Instance)) JOIN Principals as\n> Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) JOIN\n> CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id =\n> CachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON (\n> CachedGroupMembers_3.MemberId = Users_4.id)) WHERE ((main.EffectiveId =\n> main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (Users_4.EmailAddress =\n> '[email protected]')AND(Groups_1.Domain =\n> 'RT::Ticket-Role')AND(Groups_1.Type =\n> 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) AND (\n> (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority\n> DESC LIMIT 10; \n> \n> QUERY\n> PLAN-----------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>--------------------------------- Limit (cost=582.27..582.34 rows=1\n> width=164) (actual time=854.302..854.433 rows=4 loops=1) -> Unique \n> (cost=582.27..582.34 rows=1 width=164) (actual time=854.297..854.418 rows=4\n> loops=1) -> Sort (cost=582.27..582.28 rows=1 width=164) (actual\n> time=854.294..854.303 rows=8 loops=1) Sort Key:\n> main.priority, main.id, main.effectiveid, main.queue, main.\"type\",\n> main.issuestatement, main.resolution, main.\"owner\", main.subject,\n> main.initialpriority, main.finalpriority, main.timeestimated,\n> main.timeworked, main.status, main.timeleft, main.told, main.starts,\n> main.started, main.due, main.resolved, main.lastupdatedby,\n> main.lastupdated, main.creator, main.created, main.disabled \n> -> Hash Join 
(cost=476.18..582.26 rows=1 width=164) (actual\n> time=853.025..854.056 rows=8 loops=1) Hash Cond:\n> (\"outer\".groupid = \"inner\".id) -> Nested Loop (cost=0.00..105.97 rows=21\n> width=4) (actual time=0.372..1.073 rows=37 loops=1) \n> -> Index Scan using users4 on users users_4 (cost=0.00..3.99 rows=2\n> width=4) (actual time=0.182..0.188 rows=1 loops=1) \n> Index Cond: ((emailaddress)::text = '[email protected]'::text)\n> -> Index Scan using cachedgroupmembers2 on cachedgroupmembers\n> cachedgroupmembers_3 (cost=0.00..50.81 rows=14 width=8) (actual\n> time=0.165..0.703 rows=37 loops=1) Index\n> Cond: (cachedgroupmembers_3.memberid = \"outer\".id) -> Hash \n> (cost=476.17..476.17 rows=1 width=168) (actual time=852.267..852.267 rows=0\n> loops=1) -> Nested Loop (cost=0.00..476.17\n> rows=1 width=168) (actual time=0.684..842.401 rows=3209 loops=1) \n> -> Nested Loop (cost=0.00..471.54 rows=1 width=168)\n> (actual time=0.571..704.492 rows=3209 loops=1) \n> -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164)\n> (actual time=0.212..87.100 rows=3209 loops=1) \n> Filter: ((effectiveid = id) AND ((\"type\")::text =\n> 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text =\n> 'open'::text))) -> Index Scan using\n> groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12) (actual\n> time=0.158..0.168 rows=1 loops=3209) \n> Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND\n> ((\"outer\".id)::text = (groups_1.instance)::text) AND\n> ((groups_1.\"type\")::text = 'Requestor'::text)) \n> -> Index Scan using principals2 on principals principals_2\n> (cost=0.00..4.62 rows=1 width=8) (actual time=0.019..0.022 rows=1\n> loops=3209) Index Cond: (\"outer\".id =\n> principals_2.objectid) Filter: ((principaltype)::text = 'Group'::text)\n> Total runtime: 855.472 ms\n> (22 rows)\n>\n> Time: 895.739 ms\n> rt3=#\n>\n> > http://backpan.cpan.org/authors/id/J/JE/JESSE/DBIx-SearchBuilder-0.90.tar\n> >.gz\n> >\n> > On Thu, Oct 30, 2003 at 01:57:31AM +0530, [email protected] wrote:\n> >> > On Thu, Oct 30, 2003 at 01:30:26AM +0530, [email protected] wrote:\n> >> >> Dear Jesse,\n> >> >>\n> >> >> I really want to add a Pg specific better query builder\n> >> >> the generic one is messing with postgresql.\n> >> >\n> >> > I've removed the CC to ivan, to my knowledge, he has nothing to do\n> >> > with SB these days anymore.\n> >> >\n> >> >> i think i have to work in :\n> >> >> DBIx/SearchBuilder/Handle/Pg.pm\n> >> >>\n> >> >> i hope u read my recent emails to pgsql-performance list.\n> >>\n> >> Yes.\n> >>\n> >> > And I hope you read my reply. There _is_ a postgres specific query\n> >> > builder. And there was a bug in 0.90 that caused a possible endless\n> >> > loop. The bugfix disabled some of the optimization. If you can tell me\n> >> > that 0.90 improves your performance (as it generated more correct\n> >> > queries for pg) then we can work on just fixing the little bug.\n> >>\n> >> where is .90 ? i dont see it in\n> >>\n> >> http://www.fsck.com/pub/rt/devel/\n> >>\n> >>\n> >> regds\n> >> mallah.\n> >>\n> >> >> =====================================================================\n> >> >>============== # this code is all hacky and evil. but people\n> >> >> desperately want _something_ and I'm # super tired. 
refactoring\n> >> >> gratefully appreciated.\n> >> >> =====================================================================\n> >> >>==============\n> >> >>\n> >> >> sub _BuildJoins {\n> >> >> my $self = shift;\n> >> >> my $sb = shift;\n> >> >> my %seen_aliases;\n> >> >>\n> >> >> $seen_aliases{'main'} = 1;\n> >> >>\n> >> >> my $join_clause =$sb->{'table'} . \" main \" ;\n> >> >>\n> >> >> my @keys = ( keys %{ $sb->{'left_joins'} } );\n> >> >>\n> >> >> while ( my $join = shift @keys ) {\n> >> >> if ( $seen_aliases{ $sb->{'left_joins'}{$join}{'depends_on'}\n> >> >> } ) { $join_clause = \"(\" . $join_clause;\n> >> >> $join_clause .=\n> >> >> $sb->{'left_joins'}{$join}{'alias_string'} . \" ON (\"; $join_clause .=\n> >> >> join ( ') AND( ', values %{\n> >> >> $sb->{'left_joins'}{$join}{'criteria'} } ); $join_clause .= \")) \";\n> >> >>\n> >> >> $seen_aliases{$join} = 1;\n> >> >> }\n> >> >> else {\n> >> >> push ( @keys, $join );\n> >> >> }\n> >> >>\n> >> >> }\n> >> >> return (\n> >> >> join ( \", \", ($join_clause, @{ $sb->{'aliases'}\n> >> >> }))) ;\n> >> >>\n> >> >> }\n> >> >>\n> >> >>\n> >> >>\n> >> >>\n> >> >>\n> >> >> -----------------------------------------\n> >> >> Over 1,00,000 exporters are waiting for your order! Click below to\n> >> >> get in touch with leading Indian exporters listed in the premier\n> >> >> trade directory Exporters Yellow Pages.\n> >> >> http://www.trade-india.com/dyn/gdh/eyp/\n> >> >\n> >> > --\n> >> > jesse reed vincent -- [email protected] -- [email protected]\n> >> > 70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n> >> >\n> >> > . . . when not in doubt, get in doubt. -- Old Discordian Proveb\n> >>\n> >> -----------------------------------------\n> >> Over 1,00,000 exporters are waiting for your order! Click below to get\n> >> in touch with leading Indian exporters listed in the premier\n> >> trade directory Exporters Yellow Pages.\n> >> http://www.trade-india.com/dyn/gdh/eyp/\n> >\n> > --\n> > jesse reed vincent -- [email protected] -- [email protected]\n> > 70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90\n> >\n> > . . . when not in doubt, get in doubt. -- Old Discordian Proveb\n>\n> -----------------------------------------\n> Over 1,00,000 exporters are waiting for your order! Click below to get\n> in touch with leading Indian exporters listed in the premier\n> trade directory Exporters Yellow Pages.\n> http://www.trade-india.com/dyn/gdh/eyp/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n", "msg_date": "Thu, 30 Oct 2003 17:02:00 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine. [ with\n\tbetter indenting ]" }, { "msg_contents": "[email protected] (Rajesh Kumar Mallah) wrote:\n> Can you please have a Look at the below and suggest why it\n> apparently puts 7.3.4 on an infinite loop . the CPU utilisation of\n> the backend running it approches 99%.\n\nWhat would be useful, for this case, would be to provide the query\nplan, perhaps via\n\n EXPLAIN [Big Long Query].\n\nThe difference between that EXPLAIN and what you get on 7.4 might be\nquite interesting.\n\nI would think it quite unlikely that it is truly an \"infinite\" loop;\nit is rather more likely that the plan winds up being pretty bad and\ndoing something [a bunch of nested loops, maybe?] that run longer than\nyour patience will permit.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). 
wm('cbbrowne','acm.org').\nhttp://www3.sympatico.ca/cbbrowne/lsf.html\nRules of the Evil Overlord #81. \"If I am fighting with the hero atop a\nmoving platform, have disarmed him, and am about to finish him off and\nhe glances behind me and drops flat, I too will drop flat instead of\nquizzically turning around to find out what he saw.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Thu, 30 Oct 2003 07:52:28 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine. [ with\n\tbetter indenting ]" }, { "msg_contents": "\n\n\n> [email protected] (Rajesh Kumar Mallah) wrote:\n>> Can you please have a Look at the below and suggest why it\n>> apparently puts 7.3.4 on an infinite loop . the CPU utilisation of the backend running it\n>> approches 99%.\n>\n> What would be useful, for this case, would be to provide the query plan, perhaps via\n>\n> EXPLAIN [Big Long Query].\n>\n> The difference between that EXPLAIN and what you get on 7.4 might be quite interesting.\n>\n> I would think it quite unlikely that it is truly an \"infinite\" loop; it is rather more likely\n> that the plan winds up being pretty bad and doing something [a bunch of nested loops, maybe?]\n> that run longer than your patience will permit.\n\n:-) ok i will leave it running and try to get it.\n\nRegds\nMallah.\n\n\n\n\n\n> --\n> wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\n> http://www3.sympatico.ca/cbbrowne/lsf.html\n> Rules of the Evil Overlord #81. \"If I am fighting with the hero atop a moving platform, have\n> disarmed him, and am about to finish him off and he glances behind me and drops flat, I too\n> will drop flat instead of quizzically turning around to find out what he saw.\"\n> <http://www.eviloverlord.com/>\n>\n> ---------------------------(end of broadcast)--------------------------- TIP 6: Have you\n> searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n", "msg_date": "Thu, 30 Oct 2003 19:42:00 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine. [ with\n\tbetter indenting ]" }, { "msg_contents": "In the last exciting episode, [email protected] wrote:\n>> [email protected] (Rajesh Kumar Mallah) wrote:\n>>> Can you please have a Look at the below and suggest why it\n>>> apparently puts 7.3.4 on an infinite loop . the CPU utilisation of the backend running it\n>>> approches 99%.\n>>\n>> What would be useful, for this case, would be to provide the query plan, perhaps via\n>>\n>> EXPLAIN [Big Long Query].\n>>\n>> The difference between that EXPLAIN and what you get on 7.4 might be quite interesting.\n>>\n>> I would think it quite unlikely that it is truly an \"infinite\" loop; it is rather more likely\n>> that the plan winds up being pretty bad and doing something [a bunch of nested loops, maybe?]\n>> that run longer than your patience will permit.\n>\n> :-) ok i will leave it running and try to get it.\n\nNo, if you just do EXPLAIN (and not EXPLAIN ANALYZE), that returns\nwithout executing the query.\n\nIf the query runs for a really long time, then we _know_ that there is\nsomething troublesome. 
EXPLAIN (no ANALYZE) should provide some\ninsight without having anything run for a long time.\n\nIf EXPLAIN [big long query] turns into what you are terming an\n\"infinite loop,\" then you have a quite different problem, and it would\nbe very useful to know that.\n-- \n\"cbbrowne\",\"@\",\"ntlug.org\"\nhttp://www3.sympatico.ca/cbbrowne/oses.html\nThis is Linux country. On a quiet night, you can hear NT re-boot.\n", "msg_date": "Thu, 30 Oct 2003 10:10:47 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine. [ with\n\tbetter indenting ]" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> SELECT DISTINCT main.* FROM\n> (\n> (\n> (\n> (\n> Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance)\n> ) JOIN\n> Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n> ) JOIN\n> CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n> ) JOIN\n> Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n> ) WHERE\n> ...\n\nI think the reason for the performance difference is that 7.3 treats\nJOIN syntax as forcing a particular join order, while 7.4 doesn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2003 11:11:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine. [ with\n\tbetter indenting ]" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n\n> -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164)\n> Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n\nThis query has to read through every ticket in the system and check if the\ncurrent user has access to it? It seems like this isn't going to be a terribly\nfast query no matter how you slice it.\n\nOne thing that might help keep its run-time from growing is a partial index\n WHERE type = 'ticket' and (status = 'new' OR status = 'open')\n\n(I'm not sure what the point of the effectiveid=id clause is)\n\nThat at least might help when 99% of your tickets are old closed tickets. But\nit will still have to scan through every new and open ticket which on some\nsystems could be a large number.\n\n-- \ngreg\n\n", "msg_date": "30 Oct 2003 13:18:05 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "\nRajesh Kumar Mallah <[email protected]> writes:\n\n> rt3=# explain \n>\n> SELECT DISTINCT main.* \n> FROM (((\n> (Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))\n> JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n> ) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n> ) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n> )\n> WHERE ((main.EffectiveId = main.id))\n> AND ((main.Type = 'ticket'))\n> AND ((( (Users_4.EmailAddress = '[email protected]')\n> AND (Groups_1.Domain = 'RT::Ticket-Role')\n> AND (Groups_1.Type = 'Requestor')\n> AND (Principals_2.PrincipalType = 'Group')\n> ))\n> AND ((main.Status = 'new') OR (main.Status = 'open'))\n> )\n> ORDER BY main.Priority DESC LIMIT 10;\n\nSo this query seems to be going the long way around to do the equivalent of an\nIN clause. 
Presumably because as far as I know mysql didn't support IN\nsubqueries until recently.\n\nCan you do an \"explain analyze\" on the above query and the following rewritten\none in 7.4? The \"analyze\" is important because it'll give real timing\ninformation. And it's important that it be on 7.4 as there were improvements\nin this area specifically in 7.4.\n\nSELECT * \n FROM tickets\n WHERE id IN (\n SELECT groups.instance\n FROM groups \n JOIN principals ON (groups.id = principals.objectid) \n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE users.emailaddress = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id \n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC \nLIMIT 10;\n \n\n\n\n\n\n\n-- \ngreg\n\n", "msg_date": "30 Oct 2003 13:45:55 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n\n> Nopes the query are not Equiv , earlier one returns 4 rows and the below one\n> none,\n\nSorry, i lowercased a string constant and dropped the lower() on email. \n\nTry this:\n\nSELECT *\n FROM tickets\n WHERE id IN (\n SELECT groups.instance\n FROM groups\n JOIN principals ON (groups.id = principals.objectid)\n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE lower(users.emailaddress) = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id\n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC\nLIMIT 10;\n\n-- \ngreg\n\n", "msg_date": "30 Oct 2003 16:32:43 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "\nWell, you might want to try the EXISTS version. I'm not sure if it'll be\nfaster or slower though. In theory it should be the same.\n\nHum, I didn't realize the principals table was the largest table. But Postgres\nknew that so one would expect it to have found a better plan. The IN/EXISTS\nhandling was recently much improved but perhaps there's still room :)\n\nSELECT *\n FROM tickets\n WHERE EXISTS (\n SELECT 1\n FROM groups\n JOIN principals ON (groups.id = principals.objectid)\n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE lower(users.emailaddress) = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n AND groups.instance = tickets.id\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id\n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC\nLIMIT 10;\n\n-- \ngreg\n\n", "msg_date": "30 Oct 2003 17:38:36 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." 
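Greg's partial-index suggestion above can be written out directly. This is only a sketch using the table and column names quoted in the thread (Tickets.Type, Tickets.Status, Tickets.Priority); it has not been checked against a real RT3 schema, and putting Priority in the index is an extra assumption so the ORDER BY ... LIMIT can walk the index:

    -- Partial index restricted to live tickets, per the suggestion above.
    -- The WHERE clause mirrors the filters that every one of these queries
    -- repeats, so closed tickets never have to be scanned to find them.
    CREATE INDEX tickets_open_new_idx
        ON Tickets (Priority)
        WHERE Type = 'ticket'
          AND (Status = 'new' OR Status = 'open');

Whether the planner actually uses it for the full join still depends on the statistics, so it is worth re-running EXPLAIN ANALYZE after CREATE INDEX and ANALYZE.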
}, { "msg_contents": "Hi ,\n\nHere are the Execution Plans ,\nSorry for the delay .\n\nRegds\nMallah\n\n\n\nOn PostgreSQL 7.3.4\n\nrt3=# explain SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance)) \nJOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as CachedGroupMembers_3 \nON ( Principals_2.id = CachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)) \nWHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (Users_4.EmailAddress = '[email protected]')\nAND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) \nAND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n\n\n\nLimit (cost=2044.52..2044.58 rows=1 width=195)\n -> Unique (cost=2044.52..2044.58 rows=1 width=195)\n -> Sort (cost=2044.52..2044.52 rows=1 width=195)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Hash Join (cost=3.98..2044.51 rows=1 width=195)\n Hash Cond: (\"outer\".memberid = \"inner\".id)\n -> Nested Loop (cost=0.00..2040.51 rows=2 width=191)\n -> Nested Loop (cost=0.00..1914.41 rows=1 width=183)\n -> Nested Loop (cost=0.00..1909.67 rows=1 width=175)\n Join Filter: ((\"outer\".id)::text = (\"inner\".instance)::text)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=163)\n Filter: ((effectiveid = id) AND (\"type\" = 'ticket'::character varying) AND ((status = 'new'::character varying) OR (status = 'open'::character varying)))\n -> Index Scan using groups_domain on groups groups_1 (cost=0.00..1338.03 rows=7068 width=12)\n Index Cond: (\"domain\" = 'RT::Ticket-Role'::character varying)\n Filter: (\"type\" = 'Requestor'::character varying)\n -> Index Scan using principals2 on principals principals_2 (cost=0.00..4.73 rows=1 width=8)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: (principaltype = 'Group'::character varying)\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..125.54 rows=45 width=8)\n Index Cond: (\"outer\".id = cachedgroupmembers_3.groupid)\n -> Hash (cost=3.98..3.98 rows=1 width=4)\n -> Index Scan using users4 on users users_4 (cost=0.00..3.98 rows=1 width=4)\n Index Cond: (emailaddress = '[email protected]'::character varying)\n(23 rows)\n\n\nOn PostgreSQL 7.4 beta 5\n\n\nrt3=# explain SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance)) \nJOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as CachedGroupMembers_3 \nON ( Principals_2.id = CachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)) \nWHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (Users_4.EmailAddress = '[email protected]')\nAND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) \nAND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n QUERY 
PLAN\n---------------------------------------------------------------\n Limit (cost=582.27..582.34 rows=1 width=164)\n -> Unique (cost=582.27..582.34 rows=1 width=164)\n -> Sort (cost=582.27..582.28 rows=1 width=164)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Hash Join (cost=476.18..582.26 rows=1 width=164)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Nested Loop (cost=0.00..105.97 rows=21 width=4)\n -> Index Scan using users4 on users users_4 (cost=0.00..3.99 rows=2 width=4)\n Index Cond: ((emailaddress)::text = '[email protected]'::text)\n -> Index Scan using cachedgroupmembers2 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..50.81 rows=14 width=8)\n Index Cond: (cachedgroupmembers_3.memberid = \"outer\".id)\n -> Hash (cost=476.17..476.17 rows=1 width=168)\n -> Nested Loop (cost=0.00..476.17 rows=1 width=168)\n -> Nested Loop (cost=0.00..471.54 rows=1 width=168)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164)\n Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n -> Index Scan using groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12)\n Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text = (groups_1.instance)::text) AND ((groups_1.\"type\")::text = 'Requestor'::text))\n -> Index Scan using principals2 on principals principals_2 (cost=0.00..4.62 rows=1 width=8)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n(21 rows)\n\nrt3=#\n\n\n\nChristopher Browne wrote:\n\n>In the last exciting episode, [email protected] wrote:\n> \n>\n>>>[email protected] (Rajesh Kumar Mallah) wrote:\n>>> \n>>>\n>>>>Can you please have a Look at the below and suggest why it\n>>>>apparently puts 7.3.4 on an infinite loop . the CPU utilisation of the backend running it\n>>>>approches 99%.\n>>>> \n>>>>\n>>>What would be useful, for this case, would be to provide the query plan, perhaps via\n>>>\n>>> EXPLAIN [Big Long Query].\n>>>\n>>>The difference between that EXPLAIN and what you get on 7.4 might be quite interesting.\n>>>\n>>>I would think it quite unlikely that it is truly an \"infinite\" loop; it is rather more likely\n>>>that the plan winds up being pretty bad and doing something [a bunch of nested loops, maybe?]\n>>>that run longer than your patience will permit.\n>>> \n>>>\n>>:-) ok i will leave it running and try to get it.\n>> \n>>\n>\n>No, if you just do EXPLAIN (and not EXPLAIN ANALYZE), that returns\n>without executing the query.\n>\n>If the query runs for a really long time, then we _know_ that there is\n>something troublesome. 
EXPLAIN (no ANALYZE) should provide some\n>insight without having anything run for a long time.\n>\n>If EXPLAIN [big long query] turns into what you are terming an\n>\"infinite loop,\" then you have a quite different problem, and it would\n>be very useful to know that.\n> \n>\n", "msg_date": "Fri, 31 Oct 2003 22:08:50 -0500", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "Tom Lane wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n> \n>\n>>SELECT DISTINCT main.* FROM\n>>(\n>> (\n>> (\n>> (\n>> Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance)\n>> ) JOIN\n>> Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n>> ) JOIN\n>> CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n>> ) JOIN\n>> Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n>>) WHERE\n>> ...\n>> \n>>\n>\n>I think the reason for the performance difference is that 7.3 treats\n>JOIN syntax as forcing a particular join order, while 7.4 doesn't.\n>\n\nJust out of curiosity , how does 7.4 determine the optimal Join Order?\nis it GEQO in case of 7.4 although i did not enable it explicitly?\nThanks for the reply , I sent the EXPLAINs also just now.\n\nWhat i really want is to help improving the Pg specific Component\nfor DBIx::SearchBuilder. The module is being widely used in\nthe mod_perl world and has impact on the performance perception\nof PostgreSQL.\n\n>\n>\t\t\tregards, tom lane\n> \n>\n", "msg_date": "Fri, 31 Oct 2003 22:19:55 -0500", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." 
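On the join-order question just above: in 7.4 it is not GEQO that makes the difference, it is that explicit JOIN syntax is no longer taken as a fixed join order; the regular exhaustive planner is allowed to reorder the joins, and GEQO only takes over once the number of FROM items passes geqo_threshold. A rough sketch of the knobs involved, assuming a 7.4 psql session (the default of 8 for join_collapse_limit is quoted from memory):

    -- join_collapse_limit is new in 7.4: up to this many explicit JOINs
    -- are folded into one search problem and reordered by the planner.
    SHOW join_collapse_limit;
    -- Setting it to 1 restores the 7.3 behaviour of planning the query
    -- in exactly the order the JOINs are written.
    SET join_collapse_limit = 1;
    -- GEQO is a separate mechanism keyed to the size of the FROM list,
    -- not to whether JOIN syntax was used.
    SHOW geqo_threshold;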
}, { "msg_contents": "explain analyze of original Query:\n\nrt3=# explain analyze SELECT DISTINCT main.* FROM Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance) JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id) WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (lower(Users_4.EmailAddress) = '[email protected]')AND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) AND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=619.93..620.00 rows=1 width=164) (actual time=994.570..994.683 rows=4 loops=1)\n -> Unique (cost=619.93..620.00 rows=1 width=164) (actual time=994.565..994.672 rows=4 loops=1)\n -> Sort (cost=619.93..619.93 rows=1 width=164) (actual time=994.561..994.569 rows=8 loops=1)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Nested Loop (cost=0.00..619.92 rows=1 width=164) (actual time=1.374..993.998 rows=8 loops=1)\n -> Nested Loop (cost=0.00..610.83 rows=3 width=168) (actual time=0.691..839.633 rows=9617 loops=1)\n -> Nested Loop (cost=0.00..476.17 rows=1 width=168) (actual time=0.524..616.937 rows=3209 loops=1)\n -> Nested Loop (cost=0.00..471.54 rows=1 width=168) (actual time=0.376..503.774 rows=3209 loops=1)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164) (actual time=0.114..60.044 rows=3209 loops=1)\n Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n -> Index Scan using groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12) (actual time=0.111..0.119 rows=1 loops=3209)\n Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text = (groups_1.instance)::text) AND ((groups_1.\"type\")::text = 'Requestor'::text))\n -> Index Scan using principals2 on principals principals_2 (cost=0.00..4.62 rows=1 width=8) (actual time=0.015..0.018 rows=1 loops=3209)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..134.06 rows=47 width=8) (actual time=0.015..0.026 rows=3 loops=3209)\n Index Cond: (\"outer\".id = cachedgroupmembers_3.groupid)\n -> Index Scan using users_pkey on users users_4 (cost=0.00..3.02 rows=1 width=4) (actual time=0.013..0.013 rows=0 loops=9617)\n Index Cond: (\"outer\".memberid = users_4.id)\n Filter: (lower((emailaddress)::text) = '[email protected]'::text)\n 
Total runtime: 995.326 ms\n(21 rows)\nrt3=#\n\n999 ms is not that bad but u think it deserves this many ms?\n\n\nNopes the query are not Equiv , earlier one returns 4 rows and the below \none none,\ncan you spot any obvious and resend plz. thats why i did not do an \nexplain analyze\n\nrt3=# SELECT *\nrt3-# FROM tickets\nrt3-# WHERE id IN (\nrt3(# SELECT groups.instance\nrt3(# FROM groups\nrt3(# JOIN principals ON (groups.id = principals.objectid)\nrt3(# JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\nrt3(# JOIN users ON (cachedgroupmembers.memberid = users.id)\nrt3(# WHERE users.emailaddress = '[email protected]'\nrt3(# AND groups.domain = 'RT::Ticket-Role'\nrt3(# AND groups.type = 'Requestor'\nrt3(# AND principals.principaltype = 'group'\nrt3(# )\nrt3-# AND type = 'ticket'\nrt3-# AND effectiveid = tickets.id\nrt3-# AND (status = 'new' OR status = 'open')\nrt3-# ORDER BY priority DESC\nrt3-# LIMIT 10;\n\n id | effectiveid | queue | type | issuestatement | resolution | owner | subject | initialpriority | finalpriority | priority | timeestimated | timeworked | status | timeleft | told | starts | started | due | resolved | lastupdatedby | lastupdated | creator | created | disabled\n----+-------------+-------+------+----------------+------------+-------+---------+-----------------+---------------+----------+---------------+------------+--------+----------+------+--------+---------+-----+----------+---------------+-------------+---------+---------+----------\n(0 rows)\n\nTime: 2670.85 ms\nrt3=#\n\n\n\nWell it may be of interest to write the query in best possible way\nbut i am not sure if it really helps the RT application becoz i do\nnot know whether DBIx::SearchBuilder would currently allow\nauto generation of such arbitrary SQLs.\n\nRegds\nMallah.\n\n\n\n\nGreg Stark wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n>\n> \n>\n>>rt3=# explain \n>>\n>>SELECT DISTINCT main.* \n>> FROM (((\n>> (Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))\n>> JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n>> ) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n>> ) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n>> )\n>> WHERE ((main.EffectiveId = main.id))\n>> AND ((main.Type = 'ticket'))\n>> AND ((( (Users_4.EmailAddress = '[email protected]')\n>> AND (Groups_1.Domain = 'RT::Ticket-Role')\n>> AND (Groups_1.Type = 'Requestor')\n>> AND (Principals_2.PrincipalType = 'Group')\n>> ))\n>> AND ((main.Status = 'new') OR (main.Status = 'open'))\n>> )\n>> ORDER BY main.Priority DESC LIMIT 10;\n>> \n>>\n>\n>So this query seems to be going the long way around to do the equivalent of an\n>IN clause. Presumably because as far as I know mysql didn't support IN\n>subqueries until recently.\n>\n>Can you do an \"explain analyze\" on the above query and the following rewritten\n>one in 7.4? The \"analyze\" is important because it'll give real timing\n>information. 
And it's important that it be on 7.4 as there were improvements\n>in this area specifically in 7.4.\n>\n>SELECT * \n> FROM tickets\n> WHERE id IN (\n> SELECT groups.instance\n> FROM groups \n> JOIN principals ON (groups.id = principals.objectid) \n> JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n> JOIN users ON (cachedgroupmembers.memberid = users.id)\n> WHERE users.emailaddress = '[email protected]'\n> AND groups.domain = 'RT::Ticket-Role'\n> AND groups.type = 'Requestor'\n> AND principals.principaltype = 'group'\n> )\n> AND type = 'ticket'\n> AND effectiveid = tickets.id \n> AND (status = 'new' OR status = 'open')\n>ORDER BY priority DESC \n>LIMIT 10;\n> \n>\n>\n>\n>\n>\n>\n> \n>\n\n\n\n\n\n\n\n\n\n\nexplain analyze of original Query:\n\nrt3=# explain analyze SELECT DISTINCT main.* FROM Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance) JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id) WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (lower(Users_4.EmailAddress) = '[email protected]')AND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) ) AND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=619.93..620.00 rows=1 width=164) (actual time=994.570..994.683 rows=4 loops=1)\n -> Unique (cost=619.93..620.00 rows=1 width=164) (actual time=994.565..994.672 rows=4 loops=1)\n -> Sort (cost=619.93..619.93 rows=1 width=164) (actual time=994.561..994.569 rows=8 loops=1)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Nested Loop (cost=0.00..619.92 rows=1 width=164) (actual time=1.374..993.998 rows=8 loops=1)\n -> Nested Loop (cost=0.00..610.83 rows=3 width=168) (actual time=0.691..839.633 rows=9617 loops=1)\n -> Nested Loop (cost=0.00..476.17 rows=1 width=168) (actual time=0.524..616.937 rows=3209 loops=1)\n -> Nested Loop (cost=0.00..471.54 rows=1 width=168) (actual time=0.376..503.774 rows=3209 loops=1)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164) (actual time=0.114..60.044 rows=3209 loops=1)\n Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n -> Index Scan using groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12) (actual time=0.111..0.119 rows=1 loops=3209)\n Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text = (groups_1.instance)::text) AND ((groups_1.\"type\")::text = 'Requestor'::text))\n 
-> Index Scan using principals2 on principals principals_2 (cost=0.00..4.62 rows=1 width=8) (actual time=0.015..0.018 rows=1 loops=3209)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..134.06 rows=47 width=8) (actual time=0.015..0.026 rows=3 loops=3209)\n Index Cond: (\"outer\".id = cachedgroupmembers_3.groupid)\n -> Index Scan using users_pkey on users users_4 (cost=0.00..3.02 rows=1 width=4) (actual time=0.013..0.013 rows=0 loops=9617)\n Index Cond: (\"outer\".memberid = users_4.id)\n Filter: (lower((emailaddress)::text) = '[email protected]'::text)\n Total runtime: 995.326 ms\n(21 rows)\nrt3=#\n\n999 ms is not that bad but u think it deserves this many ms?\n\n\nNopes the query are not Equiv , earlier one returns 4 rows and the\nbelow one none,\ncan you spot any obvious and resend plz. thats why i did not do an\nexplain analyze\nrt3=# SELECT *\nrt3-# FROM tickets\nrt3-# WHERE id IN (\nrt3(# SELECT groups.instance\nrt3(# FROM groups\nrt3(# JOIN principals ON (groups.id = principals.objectid)\nrt3(# JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\nrt3(# JOIN users ON (cachedgroupmembers.memberid = users.id)\nrt3(# WHERE users.emailaddress = '[email protected]'\nrt3(# AND groups.domain = 'RT::Ticket-Role'\nrt3(# AND groups.type = 'Requestor'\nrt3(# AND principals.principaltype = 'group'\nrt3(# )\nrt3-# AND type = 'ticket'\nrt3-# AND effectiveid = tickets.id\nrt3-# AND (status = 'new' OR status = 'open')\nrt3-# ORDER BY priority DESC\nrt3-# LIMIT 10;\n\n id | effectiveid | queue | type | issuestatement | resolution | owner | subject | initialpriority | finalpriority | priority | timeestimated | timeworked | status | timeleft | told | starts | started | due | resolved | lastupdatedby | lastupdated | creator | created | disabled\n----+-------------+-------+------+----------------+------------+-------+---------+-----------------+---------------+----------+---------------+------------+--------+----------+------+--------+---------+-----+----------+---------------+-------------+---------+---------+----------\n(0 rows)\n\nTime: 2670.85 ms\nrt3=#\n\n\n\nWell it may be of interest to write the query in best possible way\nbut i am not sure if it really helps the RT application becoz i do\nnot know whether DBIx::SearchBuilder would currently allow \nauto generation of such arbitrary SQLs.\n\nRegds\nMallah.\n\n\n\n\nGreg Stark wrote:\n\nRajesh Kumar Mallah <[email protected]> writes:\n\n \n\nrt3=# explain \n\nSELECT DISTINCT main.* \n FROM (((\n (Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))\n JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)\n ) JOIN CachedGroupMembers as CachedGroupMembers_3 ON ( Principals_2.id = CachedGroupMembers_3.GroupId)\n ) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id)\n )\n WHERE ((main.EffectiveId = main.id))\n AND ((main.Type = 'ticket'))\n AND ((( (Users_4.EmailAddress = '[email protected]')\n AND (Groups_1.Domain = 'RT::Ticket-Role')\n AND (Groups_1.Type = 'Requestor')\n AND (Principals_2.PrincipalType = 'Group')\n ))\n AND ((main.Status = 'new') OR (main.Status = 'open'))\n )\n ORDER BY main.Priority DESC LIMIT 10;\n \n\n\nSo this query seems to be going the long way around to do the equivalent of an\nIN clause. 
Presumably because as far as I know mysql didn't support IN\nsubqueries until recently.\n\nCan you do an \"explain analyze\" on the above query and the following rewritten\none in 7.4? The \"analyze\" is important because it'll give real timing\ninformation. And it's important that it be on 7.4 as there were improvements\nin this area specifically in 7.4.\n\nSELECT * \n FROM tickets\n WHERE id IN (\n SELECT groups.instance\n FROM groups \n JOIN principals ON (groups.id = principals.objectid) \n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE users.emailaddress = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id \n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC \nLIMIT 10;", "msg_date": "Sat, 01 Nov 2003 00:44:07 -0500", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "\n\nWell Sorry everyone ,\n\nThe problem was tracked down to a silly\ndatatype mismatch between two join columns\nin table Groups(instance) and Tickets(id)\n(int vs varchar )\n\n7.4b5 is automatically taking care of this\nmismatch hence it was getting executed there.\n\nBut , The problem is will this behaviour not \nallow to go such mistakes unnoticed?\n\n\nRegards\nMallah.\n\n\nOn Friday 31 Oct 2003 4:08 am, Greg Stark wrote:\n> Well, you might want to try the EXISTS version. I'm not sure if it'll be\n> faster or slower though. In theory it should be the same.\n>\n> Hum, I didn't realize the principals table was the largest table. But\n> Postgres knew that so one would expect it to have found a better plan. The\n> IN/EXISTS handling was recently much improved but perhaps there's still\n> room :)\n>\n> SELECT *\n> FROM tickets\n> WHERE EXISTS (\n> SELECT 1\n> FROM groups\n> JOIN principals ON (groups.id = principals.objectid)\n> JOIN cachedgroupmembers ON (principals.id =\n> cachedgroupmembers.groupid) JOIN users ON (cachedgroupmembers.memberid =\n> users.id)\n> WHERE lower(users.emailaddress) = '[email protected]'\n> AND groups.domain = 'RT::Ticket-Role'\n> AND groups.type = 'Requestor'\n> AND principals.principaltype = 'group'\n> AND groups.instance = tickets.id\n> )\n> AND type = 'ticket'\n> AND effectiveid = tickets.id\n> AND (status = 'new' OR status = 'open')\n> ORDER BY priority DESC\n> LIMIT 10;\n\n", "msg_date": "Sat, 1 Nov 2003 11:17:02 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "[ PROBLEM SOLVED ] Re: Query puts 7.3.4 on endless loop but 7.4beta5\n\tis fine." }, { "msg_contents": "The g in group had to be uppercased, the query produced the same results\nbut performance was worse for the IN version . 
2367 ms vs 600 ms\n\nrt3=# explain analyze SELECT * from tickets where id in ( SELECT groups.instance FROM groups\n JOIN principals ON (groups.id = principals.objectid) JOIN cachedgroupmembers ON \n(principals.id = cachedgroupmembers.groupid) JOIN users ON (cachedgroupmembers.memberid = users.id) \nWHERE lower(users.emailaddress) = '[email protected]' AND groups.domain = 'RT::Ticket-Role' \nAND groups.type = 'Requestor' AND principals.principaltype = 'Group' ) AND type = 'ticket' AND \neffectiveid = tickets.id AND (status = 'new' OR status = 'open') ORDER BY priority DESC LIMIT 10;;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10078.18..10078.19 rows=1 width=164) (actual time=2367.084..2367.096 rows=4 loops=1)\n -> Sort (cost=10078.18..10078.19 rows=1 width=164) (actual time=2367.078..2367.082 rows=4 loops=1)\n Sort Key: tickets.priority\n -> Hash Join (cost=10077.65..10078.17 rows=1 width=164) (actual time=2366.870..2367.051 rows=4 loops=1)\n Hash Cond: ((\"outer\".instance)::text = (\"inner\".id)::text)\n -> HashAggregate (cost=9612.02..9612.02 rows=69 width=8) (actual time=2303.792..2303.810 rows=7 loops=1)\n -> Hash Join (cost=4892.97..9611.85 rows=69 width=8) (actual time=1427.260..2303.685 rows=14 loops=1)\n Hash Cond: (\"outer\".memberid = \"inner\".id)\n -> Hash Join (cost=4523.65..9139.45 rows=13651 width=12) (actual time=948.960..2258.529 rows=31123 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Seq Scan on cachedgroupmembers (cost=0.00..3456.51 rows=204551 width=8) (actual time=0.048..365.147 rows=204551 loops=1)\n -> Hash (cost=4509.93..4509.93 rows=5488 width=12) (actual time=948.843..948.843 rows=0 loops=1)\n -> Hash Join (cost=1409.91..4509.93 rows=5488 width=12) (actual time=315.722..930.025 rows=10431 loops=1)\n Hash Cond: (\"outer\".objectid = \"inner\".id)\n -> Seq Scan on principals (cost=0.00..1583.76 rows=62625 width=8) (actual time=0.043..251.142 rows=62097 loops=1)\n Filter: ((principaltype)::text = 'Group'::text)\n -> Hash (cost=1359.90..1359.90 rows=7204 width=12) (actual time=315.458..315.458 rows=0 loops=1)\n -> Index Scan using groups_domain on groups (cost=0.00..1359.90 rows=7204 width=12) (actual time=0.325..297.403 rows=10431 loops=1)\n Index Cond: ((\"domain\")::text = 'RT::Ticket-Role'::text)\n Filter: ((\"type\")::text = 'Requestor'::text)\n -> Hash (cost=369.08..369.08 rows=101 width=4) (actual time=0.157..0.157 rows=0 loops=1)\n -> Index Scan using users_emailaddress_lower on users (cost=0.00..369.08 rows=101 width=4) (actual time=0.139..0.143 rows=1 loops=1)\n Index Cond: (lower((emailaddress)::text) = '[email protected]'::text)\n -> Hash (cost=465.62..465.62 rows=1 width=164) (actual time=62.944..62.944 rows=0 loops=1)\n -> Seq Scan on tickets (cost=0.00..465.62 rows=1 width=164) (actual time=0.113..52.729 rows=3208 loops=1)\n Filter: (((\"type\")::text = 'ticket'::text) AND (effectiveid = id) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n Total runtime: 2367.908 ms\n(27 rows)\n\n\n\nrt3=# explain analyze SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))\nrt3(# JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as CachedGroupMembers_3\nrt3(# ON ( Principals_2.id = CachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( 
CachedGroupMembers_3.MemberId = Users_4.id))\nrt3-# WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (Users_4.EmailAddress = '[email protected]')\nrt3(# AND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) )\nrt3(# AND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=582.27..582.34 rows=1 width=164) (actual time=592.406..592.529 rows=4 loops=1)\n -> Unique (cost=582.27..582.34 rows=1 width=164) (actual time=592.401..592.516 rows=4 loops=1)\n -> Sort (cost=582.27..582.28 rows=1 width=164) (actual time=592.398..592.406 rows=8 loops=1)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Hash Join (cost=476.18..582.26 rows=1 width=164) (actual time=591.548..592.211 rows=8 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Nested Loop (cost=0.00..105.97 rows=21 width=4) (actual time=0.214..0.645 rows=37 loops=1)\n -> Index Scan using users4 on users users_4 (cost=0.00..3.99 rows=2 width=4) (actual time=0.107..0.112 rows=1 loops=1)\n Index Cond: ((emailaddress)::text = '[email protected]'::text)\n -> Index Scan using cachedgroupmembers2 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..50.81 rows=14 width=8) (actual time=0.098..0.441 rows=37 loops=1)\n Index Cond: (cachedgroupmembers_3.memberid = \"outer\".id)\n -> Hash (cost=476.17..476.17 rows=1 width=168) (actual time=591.121..591.121 rows=0 loops=1)\n -> Nested Loop (cost=0.00..476.17 rows=1 width=168) (actual time=0.391..583.085 rows=3208 loops=1)\n -> Nested Loop (cost=0.00..471.54 rows=1 width=168) (actual time=0.309..474.968 rows=3208 loops=1)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164) (actual time=0.111..56.930 rows=3208 loops=1)\n Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n -> Index Scan using groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12) (actual time=0.105..0.112 rows=1 loops=3208)\n Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text = (groups_1.instance)::text) AND ((groups_1.\"type\")::text = 'Requestor'::text))\n -> Index Scan using principals2 on principals principals_2 (cost=0.00..4.62 rows=1 width=8) (actual time=0.014..0.017 rows=1 loops=3208)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n Total runtime: 593.062 ms\n(22 rows)\n\n\n\n\nRegds\nMallah.\n\nGreg Stark wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n>\n> \n>\n>>Nopes the query are not Equiv , earlier one returns 4 rows and the below one\n>>none,\n>> \n>>\n>\n>Sorry, i lowercased a string constant 
and dropped the lower() on email. \n>\n>Try this:\n>\n>SELECT *\n> FROM tickets\n> WHERE id IN (\n> SELECT groups.instance\n> FROM groups\n> JOIN principals ON (groups.id = principals.objectid)\n> JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n> JOIN users ON (cachedgroupmembers.memberid = users.id)\n> WHERE lower(users.emailaddress) = '[email protected]'\n> AND groups.domain = 'RT::Ticket-Role'\n> AND groups.type = 'Requestor'\n> AND principals.principaltype = 'group'\n> )\n> AND type = 'ticket'\n> AND effectiveid = tickets.id\n> AND (status = 'new' OR status = 'open')\n>ORDER BY priority DESC\n>LIMIT 10;\n>\n> \n>\n\n\n\n\n\n\n\n\n\nThe g in group had to be uppercased, the query produced the same results\nbut performance was worse  for the IN version .  2367 ms vs 600 ms\nrt3=# explain analyze SELECT * from tickets where id in ( SELECT groups.instance FROM groups\n JOIN principals ON (groups.id = principals.objectid) JOIN cachedgroupmembers ON \n(principals.id = cachedgroupmembers.groupid) JOIN users ON (cachedgroupmembers.memberid = users.id) \nWHERE lower(users.emailaddress) = '[email protected]' AND groups.domain = 'RT::Ticket-Role' \nAND groups.type = 'Requestor' AND principals.principaltype = 'Group' ) AND type = 'ticket' AND \neffectiveid = tickets.id AND (status = 'new' OR status = 'open') ORDER BY priority DESC LIMIT 10;;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10078.18..10078.19 rows=1 width=164) (actual time=2367.084..2367.096 rows=4 loops=1)\n -> Sort (cost=10078.18..10078.19 rows=1 width=164) (actual time=2367.078..2367.082 rows=4 loops=1)\n Sort Key: tickets.priority\n -> Hash Join (cost=10077.65..10078.17 rows=1 width=164) (actual time=2366.870..2367.051 rows=4 loops=1)\n Hash Cond: ((\"outer\".instance)::text = (\"inner\".id)::text)\n -> HashAggregate (cost=9612.02..9612.02 rows=69 width=8) (actual time=2303.792..2303.810 rows=7 loops=1)\n -> Hash Join (cost=4892.97..9611.85 rows=69 width=8) (actual time=1427.260..2303.685 rows=14 loops=1)\n Hash Cond: (\"outer\".memberid = \"inner\".id)\n -> Hash Join (cost=4523.65..9139.45 rows=13651 width=12) (actual time=948.960..2258.529 rows=31123 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Seq Scan on cachedgroupmembers (cost=0.00..3456.51 rows=204551 width=8) (actual time=0.048..365.147 rows=204551 loops=1)\n -> Hash (cost=4509.93..4509.93 rows=5488 width=12) (actual time=948.843..948.843 rows=0 loops=1)\n -> Hash Join (cost=1409.91..4509.93 rows=5488 width=12) (actual time=315.722..930.025 rows=10431 loops=1)\n Hash Cond: (\"outer\".objectid = \"inner\".id)\n -> Seq Scan on principals (cost=0.00..1583.76 rows=62625 width=8) (actual time=0.043..251.142 rows=62097 loops=1)\n Filter: ((principaltype)::text = 'Group'::text)\n -> Hash (cost=1359.90..1359.90 rows=7204 width=12) (actual time=315.458..315.458 rows=0 loops=1)\n -> Index Scan using groups_domain on groups (cost=0.00..1359.90 rows=7204 width=12) (actual time=0.325..297.403 rows=10431 loops=1)\n Index Cond: ((\"domain\")::text = 'RT::Ticket-Role'::text)\n Filter: ((\"type\")::text = 'Requestor'::text)\n -> Hash (cost=369.08..369.08 rows=101 width=4) (actual time=0.157..0.157 rows=0 loops=1)\n -> Index Scan using users_emailaddress_lower on users (cost=0.00..369.08 rows=101 width=4) (actual time=0.139..0.143 rows=1 loops=1)\n Index Cond: 
(lower((emailaddress)::text) = '[email protected]'::text)\n -> Hash (cost=465.62..465.62 rows=1 width=164) (actual time=62.944..62.944 rows=0 loops=1)\n -> Seq Scan on tickets (cost=0.00..465.62 rows=1 width=164) (actual time=0.113..52.729 rows=3208 loops=1)\n Filter: (((\"type\")::text = 'ticket'::text) AND (effectiveid = id) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n Total runtime: 2367.908 ms\n(27 rows)\n\n\n\nrt3=# explain analyze SELECT DISTINCT main.* FROM ((((Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))\nrt3(# JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)) JOIN CachedGroupMembers as CachedGroupMembers_3\nrt3(# ON ( Principals_2.id = CachedGroupMembers_3.GroupId)) JOIN Users as Users_4 ON ( CachedGroupMembers_3.MemberId = Users_4.id))\nrt3-# WHERE ((main.EffectiveId = main.id)) AND ((main.Type = 'ticket')) AND ( ( ( (Users_4.EmailAddress = '[email protected]')\nrt3(# AND(Groups_1.Domain = 'RT::Ticket-Role')AND(Groups_1.Type = 'Requestor')AND(Principals_2.PrincipalType = 'Group') ) )\nrt3(# AND ( (main.Status = 'new')OR(main.Status = 'open') ) ) ORDER BY main.Priority DESC LIMIT 10;\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=582.27..582.34 rows=1 width=164) (actual time=592.406..592.529 rows=4 loops=1)\n -> Unique (cost=582.27..582.34 rows=1 width=164) (actual time=592.401..592.516 rows=4 loops=1)\n -> Sort (cost=582.27..582.28 rows=1 width=164) (actual time=592.398..592.406 rows=8 loops=1)\n Sort Key: main.priority, main.id, main.effectiveid, main.queue, main.\"type\", main.issuestatement, main.resolution, main.\"owner\", main.subject, main.initialpriority, main.finalpriority, main.timeestimated, main.timeworked, main.status, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n -> Hash Join (cost=476.18..582.26 rows=1 width=164) (actual time=591.548..592.211 rows=8 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Nested Loop (cost=0.00..105.97 rows=21 width=4) (actual time=0.214..0.645 rows=37 loops=1)\n -> Index Scan using users4 on users users_4 (cost=0.00..3.99 rows=2 width=4) (actual time=0.107..0.112 rows=1 loops=1)\n Index Cond: ((emailaddress)::text = '[email protected]'::text)\n -> Index Scan using cachedgroupmembers2 on cachedgroupmembers cachedgroupmembers_3 (cost=0.00..50.81 rows=14 width=8) (actual time=0.098..0.441 rows=37 loops=1)\n Index Cond: (cachedgroupmembers_3.memberid = \"outer\".id)\n -> Hash (cost=476.17..476.17 rows=1 width=168) (actual time=591.121..591.121 rows=0 loops=1)\n -> Nested Loop (cost=0.00..476.17 rows=1 width=168) (actual time=0.391..583.085 rows=3208 loops=1)\n -> Nested Loop (cost=0.00..471.54 rows=1 width=168) (actual time=0.309..474.968 rows=3208 loops=1)\n -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164) (actual time=0.111..56.930 rows=3208 loops=1)\n Filter: ((effectiveid = id) AND ((\"type\")::text = 'ticket'::text) AND (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))\n -> Index Scan using groups1 on groups 
groups_1 (cost=0.00..5.90 rows=1 width=12) (actual time=0.105..0.112 rows=1 loops=3208)\n Index Cond: (((groups_1.\"domain\")::text = 'RT::Ticket-Role'::text) AND ((\"outer\".id)::text = (groups_1.instance)::text) AND ((groups_1.\"type\")::text = 'Requestor'::text))\n -> Index Scan using principals2 on principals principals_2 (cost=0.00..4.62 rows=1 width=8) (actual time=0.014..0.017 rows=1 loops=3208)\n Index Cond: (\"outer\".id = principals_2.objectid)\n Filter: ((principaltype)::text = 'Group'::text)\n Total runtime: 593.062 ms\n(22 rows)\n\n\n\n\n\nRegds\nMallah.\n\nGreg Stark wrote:\n\nRajesh Kumar Mallah <[email protected]> writes:\n\n \n\nNopes the query are not Equiv , earlier one returns 4 rows and the below one\nnone,\n \n\n\nSorry, i lowercased a string constant and dropped the lower() on email. \n\nTry this:\n\nSELECT *\n FROM tickets\n WHERE id IN (\n SELECT groups.instance\n FROM groups\n JOIN principals ON (groups.id = principals.objectid)\n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE lower(users.emailaddress) = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id\n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC\nLIMIT 10;", "msg_date": "Sat, 01 Nov 2003 03:31:08 -0500", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." }, { "msg_contents": "But the new version at lease works on 7.3 instead of putting\nit in an infinite loop.\n\n\nrt3=# explain analyze SELECT * from tickets where id in ( SELECT \ngroups.instance FROM groups\nrt3(# JOIN principals ON (groups.id = principals.objectid) JOIN \ncachedgroupmembers ON\nrt3(# (principals.id = cachedgroupmembers.groupid) JOIN users ON \n(cachedgroupmembers.memberid = users.id)\nrt3(# WHERE lower(users.emailaddress) = '[email protected]' AND \ngroups.domain = 'RT::Ticket-Role'\nrt3(# AND groups.type = 'Requestor' AND principals.principaltype = \n'Group' ) AND type = 'ticket' AND\nrt3-# effectiveid = tickets.id AND (status = 'new' OR status = 'open') \nORDER BY priority DESC LIMIT 10;\n\n\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=88073404.73..88073404.73 rows=1 width=163) (actual \ntime=2859.05..2859.07 rows=4 loops=1)\n -> Sort (cost=88073404.73..88073404.73 rows=1 width=163) (actual \ntime=2859.05..2859.05 rows=4 loops=1)\n Sort Key: priority\n -> Seq Scan on tickets (cost=0.00..88073404.72 rows=1 \nwidth=163) (actual time=2525.48..2858.95 rows=4 loops=1)\n Filter: ((\"type\" = 'ticket'::character varying) AND \n(effectiveid = id) AND ((status = 'new'::character varying) OR (status = \n'open'::character varying)) AND (subplan))\n SubPlan\n -> Materialize (cost=8443.38..8443.38 rows=66 \nwidth=32) (actual time=0.79..0.81 rows=14 loops=3209)\n -> Hash Join (cost=3698.35..8443.38 rows=66 \nwidth=32) (actual time=1720.53..2525.07 rows=14 loops=1)\n Hash Cond: (\"outer\".memberid = \"inner\".id)\n -> Hash Join (cost=3329.03..7973.87 \nrows=13247 width=28) (actual time=1225.83..2458.48 rows=31123 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Seq Scan on cachedgroupmembers \n(cost=0.00..3456.51 rows=204551 
width=8) (actual time=0.06..638.91 \nrows=204551 loops=1)\n -> Hash (cost=3315.71..3315.71 \nrows=5325 width=20) (actual time=1225.51..1225.51 rows=0 loops=1)\n -> Hash Join \n(cost=1355.70..3315.71 rows=5325 width=20) (actual time=529.02..1191.94 \nrows=10431 loops=1)\n Hash Cond: \n(\"outer\".objectid = \"inner\".id)\n -> Seq Scan on \nprincipals (cost=0.00..1583.76 rows=61940 width=8) (actual \ntime=0.02..450.42 rows=62097 loops=1)\n Filter: \n(principaltype = 'Group'::character varying)\n -> Hash \n(cost=1338.03..1338.03 rows=7068 width=12) (actual time=528.58..528.58 \nrows=0 loops=1)\n -> Index Scan \nusing groups_domain on groups (cost=0.00..1338.03 rows=7068 width=12) \n(actual time=0.18..498.04 rows=10431 loops=1)\n Index Cond: \n(\"domain\" = 'RT::Ticket-Role'::character varying)\n Filter: \n(\"type\" = 'Requestor'::character varying)\n -> Hash (cost=369.08..369.08 rows=101 \nwidth=4) (actual time=0.10..0.10 rows=0 loops=1)\n -> Index Scan using \nusers_emailaddress on users (cost=0.00..369.08 rows=101 width=4) \n(actual time=0.09..0.10 rows=1 loops=1)\n Index Cond: \n(lower((emailaddress)::text) = '[email protected]'::text)\n Total runtime: 2859.34 msec\n(25 rows)\n\n\n\n\n\nGreg Stark wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n>\n> \n>\n>>Nopes the query are not Equiv , earlier one returns 4 rows and the below one\n>>none,\n>> \n>>\n>\n>Sorry, i lowercased a string constant and dropped the lower() on email. \n>\n>Try this:\n>\n>SELECT *\n> FROM tickets\n> WHERE id IN (\n> SELECT groups.instance\n> FROM groups\n> JOIN principals ON (groups.id = principals.objectid)\n> JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n> JOIN users ON (cachedgroupmembers.memberid = users.id)\n> WHERE lower(users.emailaddress) = '[email protected]'\n> AND groups.domain = 'RT::Ticket-Role'\n> AND groups.type = 'Requestor'\n> AND principals.principaltype = 'group'\n> )\n> AND type = 'ticket'\n> AND effectiveid = tickets.id\n> AND (status = 'new' OR status = 'open')\n>ORDER BY priority DESC\n>LIMIT 10;\n>\n> \n>\n\n\n\n\n\n\n\n\n\nBut the new version at lease works on 7.3 instead of putting\nit in an infinite loop.\n\n\nrt3=# explain analyze SELECT  * from tickets where id in ( \nSELECT groups.instance FROM groups\nrt3(#  JOIN principals ON (groups.id = principals.objectid) JOIN\ncachedgroupmembers ON\nrt3(# (principals.id = cachedgroupmembers.groupid) JOIN users ON\n(cachedgroupmembers.memberid = users.id)\nrt3(# WHERE lower(users.emailaddress) = '[email protected]' AND\ngroups.domain = 'RT::Ticket-Role'\nrt3(# AND groups.type   = 'Requestor' AND principals.principaltype =\n'Group' ) AND type = 'ticket' AND\nrt3-# effectiveid = tickets.id AND (status = 'new' OR status = 'open')\nORDER BY priority DESC LIMIT 10;\n\n\n                                                                                      \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=88073404.73..88073404.73 rows=1 width=163) (actual\ntime=2859.05..2859.07 rows=4 loops=1)\n   ->  Sort  (cost=88073404.73..88073404.73 rows=1 width=163)\n(actual time=2859.05..2859.05 rows=4 loops=1)\n         Sort Key: priority\n         ->  Seq Scan on tickets  (cost=0.00..88073404.72 rows=1\nwidth=163) (actual time=2525.48..2858.95 rows=4 loops=1)\n               Filter: ((\"type\" = 'ticket'::character varying) AND\n(effectiveid = id) AND ((status = 
'new'::character varying) OR (status\n= 'open'::character varying)) AND (subplan))\n               SubPlan\n                 ->  Materialize  (cost=8443.38..8443.38 rows=66\nwidth=32) (actual time=0.79..0.81 rows=14 loops=3209)\n                       ->  Hash Join  (cost=3698.35..8443.38 rows=66\nwidth=32) (actual time=1720.53..2525.07 rows=14 loops=1)\n                             Hash Cond: (\"outer\".memberid = \"inner\".id)\n                             ->  Hash Join  (cost=3329.03..7973.87\nrows=13247 width=28) (actual time=1225.83..2458.48 rows=31123 loops=1)\n                                   Hash Cond: (\"outer\".groupid =\n\"inner\".id)\n                                   ->  Seq Scan on\ncachedgroupmembers  (cost=0.00..3456.51 rows=204551 width=8) (actual\ntime=0.06..638.91 rows=204551 loops=1)\n                                   ->  Hash  (cost=3315.71..3315.71\nrows=5325 width=20) (actual time=1225.51..1225.51 rows=0 loops=1)\n                                         ->  Hash Join \n(cost=1355.70..3315.71 rows=5325 width=20) (actual time=529.02..1191.94\nrows=10431 loops=1)\n                                               Hash Cond:\n(\"outer\".objectid = \"inner\".id)\n                                               ->  Seq Scan on\nprincipals  (cost=0.00..1583.76 rows=61940 width=8) (actual\ntime=0.02..450.42 rows=62097 loops=1)\n                                                     Filter:\n(principaltype = 'Group'::character varying)\n                                               ->  Hash \n(cost=1338.03..1338.03 rows=7068 width=12) (actual time=528.58..528.58\nrows=0 loops=1)\n                                                     ->  Index Scan\nusing groups_domain on groups  (cost=0.00..1338.03 rows=7068 width=12)\n(actual time=0.18..498.04 rows=10431 loops=1)\n                                                           Index Cond:\n(\"domain\" = 'RT::Ticket-Role'::character varying)\n                                                           Filter:\n(\"type\" = 'Requestor'::character varying)\n                             ->  Hash  (cost=369.08..369.08 rows=101\nwidth=4) (actual time=0.10..0.10 rows=0 loops=1)\n                                   ->  Index Scan using\nusers_emailaddress on users  (cost=0.00..369.08 rows=101 width=4)\n(actual time=0.09..0.10 rows=1 loops=1)\n                                         Index Cond:\n(lower((emailaddress)::text) = '[email protected]'::text)\n Total runtime: 2859.34 msec\n(25 rows)\n\n\n\n\n\nGreg Stark wrote:\n\nRajesh Kumar Mallah <[email protected]> writes:\n\n \n\nNopes the query are not Equiv , earlier one returns 4 rows and the below one\nnone,\n \n\n\nSorry, i lowercased a string constant and dropped the lower() on email. \n\nTry this:\n\nSELECT *\n FROM tickets\n WHERE id IN (\n SELECT groups.instance\n FROM groups\n JOIN principals ON (groups.id = principals.objectid)\n JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)\n JOIN users ON (cachedgroupmembers.memberid = users.id)\n WHERE lower(users.emailaddress) = '[email protected]'\n AND groups.domain = 'RT::Ticket-Role'\n AND groups.type = 'Requestor'\n AND principals.principaltype = 'group'\n )\n AND type = 'ticket'\n AND effectiveid = tickets.id\n AND (status = 'new' OR status = 'open')\nORDER BY priority DESC\nLIMIT 10;", "msg_date": "Sat, 01 Nov 2003 03:50:59 -0500", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query puts 7.3.4 on endless loop but 7.4beta5 is fine." } ]
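For completeness, one more variant that was commonly tried on 7.3-era servers when an IN (subquery) plan came out badly is the correlated EXISTS form. The sketch below is not part of the exchange above; it assumes the same RT3 schema, the rt3 database name and the anonymised address used in the examples, and would need the usual EXPLAIN ANALYZE comparison before drawing any conclusions from it:

# illustrative only: EXISTS rewrite of the same requestor search
psql rt3 <<'EOF'
EXPLAIN ANALYZE
SELECT *
  FROM tickets
 WHERE type = 'ticket'
   AND effectiveid = tickets.id
   AND (status = 'new' OR status = 'open')
   AND EXISTS (
         SELECT 1
           FROM groups
           JOIN principals         ON (groups.id = principals.objectid)
           JOIN cachedgroupmembers ON (principals.id = cachedgroupmembers.groupid)
           JOIN users              ON (cachedgroupmembers.memberid = users.id)
          WHERE groups.instance = tickets.id
            AND lower(users.emailaddress) = '[email protected]'
            AND groups.domain = 'RT::Ticket-Role'
            AND groups.type = 'Requestor'
            AND principals.principaltype = 'Group')
 ORDER BY priority DESC
 LIMIT 10;
EOF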
[ { "msg_contents": "\nHi\n\nWe installed our Postgresql package from the RH CDROM v9.\nThe version is v7.3.2\n\nIs there a compatibility matrix for Postgresql vs OS that I can verify?\n\nI have checked the ftp sites for Postgresql software under the binary/RPMS\nfolder and discovered that v7.3.2 is not available for redhat 9.0\nOnly v7.3.3 and above is available for redhat 9.0\n\n\nThank you,\nREgards.\n\n\n\n\n", "msg_date": "Thu, 30 Oct 2003 11:45:00 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Postgresql vs OS compatibility matrix" }, { "msg_contents": "Oops! [email protected] was seen spray-painting on a wall:\n> We installed our Postgresql package from the RH CDROM v9.\n> The version is v7.3.2\n>\n> Is there a compatibility matrix for Postgresql vs OS that I can verify?\n>\n> I have checked the ftp sites for Postgresql software under the\n> binary/RPMS folder and discovered that v7.3.2 is not available for\n> redhat 9.0 Only v7.3.3 and above is available for redhat 9.0\n\nThe reason for minor releases is to fix substantial problems.\n\nNobody bothered packaging 7.3.2 for RH9.0 because by the time RH9.0\nwas available, 7.3.3 or 7.3.4 were available, and there was therefore\nno point in packaging a version KNOWN TO BE DEFECTIVE when there was a\nversion available KNOWN TO ADDRESS THOSE DEFECTS.\n\nUnless you specifically want to live with the defects remedied in\n7.3.3 and 7.3.4, then you should upgrade to 7.3.4.\n\nIt actually appears likely, based on recent discussions, that there\nwill be a 7.3.5; there might be merit in going to that.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://www3.sympatico.ca/cbbrowne/lsf.html\n\"If you pick up a starving dog and make him prosperous, he will not\nbite you; that is the principal difference between a dog and a man.\"\n-- Mark Twain\n", "msg_date": "Wed, 29 Oct 2003 23:33:21 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql vs OS compatibility matrix" }, { "msg_contents": "Chew,\n\nFirst off, this isn't the appropriate list. So if you have follow-up \nquestions, please post them to NOVICE or GENERAL.\n\n> I have checked the ftp sites for Postgresql software under the binary/RPMS\n> folder and discovered that v7.3.2 is not available for redhat 9.0\n> Only v7.3.3 and above is available for redhat 9.0\n\nAll versions of PostgreSQL from the last 3 years are compatible with RedHat as \nfar as I know. However, 7.3.3 and 7.3.4 are \"bug-fix\" releases; they fix \nsecurity problems and a few other known issues. As such, 7.3.2 is not \nrecommended by *anyone* for *any OS*, becuase it has known sercurity, backup, \nand recovery issue. \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 29 Oct 2003 20:56:35 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql vs OS compatibility matrix" } ]
[ { "msg_contents": "\nHi\n \n When I do a SELECT * FROM pg_shadow, I can have more than one user with the same id. This caused the pg_dump to \n fail. \n \n I read that it happened in v7.1.2 and I am currently using v7.3.2 on Redhat v9.0 \n \n What can be the causes and how do we rectify it? \n \n\n\n\nThank you,\nREgards.\n\n\n\n\n", "msg_date": "Thu, 30 Oct 2003 12:14:12 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Duplicate user in pg_shadow" }, { "msg_contents": "Maybe you could delete one of the users from the pg_shadow table, do the \ndump and then after the dump is restored, recreate the dropped user (and \nit will get a new sysid)\n\nChris\n\n\[email protected] wrote:\n\n> Hi\n> \n> When I do a SELECT * FROM pg_shadow, I can have more than one user with the same id. This caused the pg_dump to \n> fail. \n> \n> I read that it happened in v7.1.2 and I am currently using v7.3.2 on Redhat v9.0 \n> \n> What can be the causes and how do we rectify it? \n> \n> \n> \n> \n> Thank you,\n> REgards.\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Thu, 30 Oct 2003 12:24:50 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate user in pg_shadow" }, { "msg_contents": "[email protected] writes:\n> When I do a SELECT * FROM pg_shadow, I can have more than one user\n> with the same id. This caused the pg_dump to fail.\n> I read that it happened in v7.1.2 and I am currently using v7.3.2\n\nThis is *real* hard to believe. Versions 7.2 and later have a unique\nindex on pg_shadow.usesysid. Are you certain the server isn't 7.1 or\nolder?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Oct 2003 23:26:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate user in pg_shadow " } ]
[ { "msg_contents": "\nHi\n\nI tried to delete the user from the pg_user but couldnt. This username is\nbeing duplicated so we have the same 2 records.\n\nWhat is the cause ? Is it due to memory or wrong configuration?\n\nThank you,\nREgards.\n\n\n\n\n \n Christopher \n Kings-Lynne To: [email protected] \n <chriskl@familyhea cc: [email protected] \n lth.com.au> Subject: Re: [PERFORM] Duplicate user in pg_shadow \n \n 30/10/2003 12:24 \n PM \n \n \n\n\n\n\nMaybe you could delete one of the users from the pg_shadow table, do the\ndump and then after the dump is restored, recreate the dropped user (and\nit will get a new sysid)\n\nChris\n\n\[email protected] wrote:\n\n> Hi\n>\n\n> When I do a SELECT * FROM pg_shadow, I can have more than one user with\nthe same id. This caused the pg_dump to\n> fail.\n\n>\n\n> I read that it happened in v7.1.2 and I am currently using v7.3.2 on\nRedhat v9.0\n>\n\n> What can be the causes and how do we rectify it?\n\n>\n\n>\n>\n>\n> Thank you,\n> REgards.\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n\n\n\n", "msg_date": "Thu, 30 Oct 2003 12:26:43 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Duplicate user in pg_shadow" }, { "msg_contents": "\n> I tried to delete the user from the pg_user but couldnt. This username is\n> being duplicated so we have the same 2 records.\n> \n> What is the cause ? Is it due to memory or wrong configuration?\n\nMaybe it's an index corruption issue.\n\nTry reindexing the pg_shadow table, based on the instructions here:\n\nhttp://www.postgresql.org/docs/7.3/static/sql-reindex.html\n\nChris\n\n\n", "msg_date": "Thu, 30 Oct 2003 12:37:41 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate user in pg_shadow" } ]
[ { "msg_contents": "Does xfs_freeze work on red hat 7.3?\n \nCynthia Leon\n\n-----Original Message-----\nFrom: Murthy Kambhampaty [mailto:[email protected]]\nSent: Friday, October 17, 2003 11:34 AM\nTo: 'Tom Lane'; Murthy Kambhampaty\nCc: 'Jeff'; Josh Berkus; [email protected];\[email protected]; [email protected];\[email protected]\nSubject: [linux-lvm] RE: [ADMIN] [PERFORM] backup/restore - another\narea.\n\n\nFriday, October 17, 2003 12:05, Tom Lane [mailto:[email protected]] wrote:\n\n>Murthy Kambhampaty <[email protected]> writes:\n>> ... The script handles situations\n>> where (i) the XFS filesystem containing $PGDATA has an \n>external log and (ii)\n>> the postmaster log ($PGDATA/pg_xlog) is written to a \n>filesystem different\n>> than the one containing the $PGDATA folder.\n>\n>It does? How exactly can you ensure snapshot consistency between\n>data files and XLOG if they are on different filesystem\n\nSay, you're setup looks something like this:\n\nmount -t xfs /dev/VG1/LV_data /home/pgdata\nmount -t xfs /dev/VG1/LV_xlog /home/pgdata/pg_xlog\n\nWhen you want to take the filesystem backup, you do:\n\nStep 1:\nxfs_freeze -f /dev/VG1/LV_xlog\nxfs_freeze -f /dev/VG1/LV_data\n\tThis should finish any checkpoints that were in progress, and not\nstart any new ones\n\ttill you unfreeze. (writes to an xfs_frozen filesystem wait for the\nxfs_freeze -u, \n\tbut reads proceed; see text from xfs_freeze manpage in postcript\nbelow.)\n\n\nStep2: \ncreate snapshots of /dev/VG1/LV_xlog and /dev/VG1/LV_xlog\n\nStep 3: \nxfs_freeze -u /dev/VG1/LV_data\nxfs_freeze -u /dev/VG1/LV_xlog\n\tUnfreezing in this order should assure that checkpoints resume where\nthey left off, then log writes commence.\n\n\nStep4:\nmount the snapshots taken in Step2 somewhere; e.g. /mnt/snap_data and\n/mnt/snap_xlog. Copy (or rsync or whatever) /mnt/snap_data to /mnt/pgbackup/\nand /mnt/snap_xlog to /mnt/pgbackup/pg_xlog. Upon completion, /mnt/pgbackup/\nlooks to the postmaster like /home/pgdata would if the server had crashed at\nthe moment that Step1 was initiated. As I understand it, during recovery\n(startup) the postmaster will roll the database forward to this point,\n\"checkpoint-ing\" all the transactions that made it into the log before the\ncrash.\n\nStep5:\nremove the snapshots created in Step2.\n\nThe key is \n(i) xfs_freeze allows you to \"quiesce\" any filesystem at any point in time\nand, if I'm not mistaken, the order (LIFO) in which you freeze and unfreeze\nthe two filesystems: freeze $PGDATA/pg_xlog then $PGDATA; unfreeze $PGDATA\nthen $PGDATA/pg_xlog.\n(ii) WAL recovery assures consistency after a (file)sytem crash.\n\nPresently, the test server for my backup scripts is set-up this way, and the\nbackup works flawlessly, AFAICT. (Note that the backup script starts a\npostmaster on the filesystem copy each time, so you get early warning of\nproblems. Moreover the data in the \"production\" and \"backup\" copies are\ntested and found to be identical.\n\nComments? Any suggestions for additional tests?\n\nThanks,\n\tMurthy\n\nPS: From the xfs_freeze manpage:\n\"xfs_freeze suspends and resumes access to an XFS filesystem (see\nxfs(5)). \n\nxfs_freeze halts new access to the filesystem and creates a stable image\non disk. xfs_freeze is intended to be used with volume managers and\nhardware RAID devices that support the creation of snapshots. \n\nThe mount-point argument is the pathname of the directory where the\nfilesystem is mounted. The filesystem must be mounted to be frozen (see\nmount(8)). 
\n\nThe -f flag requests the specified XFS filesystem to be frozen from new\nmodifications. When this is selected, all ongoing transactions in the\nfilesystem are allowed to complete, new write system calls are halted,\nother calls which modify the filesystem are halted, and all dirty data,\nmetadata, and log information are written to disk. Any process\nattempting to write to the frozen filesystem will block waiting for the\nfilesystem to be unfrozen. \n\nNote that even after freezing, the on-disk filesystem can contain\ninformation on files that are still in the process of unlinking. These\nfiles will not be unlinked until the filesystem is unfrozen or a clean\nmount of the snapshot is complete. \n\nThe -u option is used to un-freeze the filesystem and allow operations\nto continue. Any filesystem modifications that were blocked by the\nfreeze are unblocked and allowed to complete.\"\n\n_______________________________________________\nlinux-lvm mailing list\[email protected]\nhttp://lists.sistina.com/mailman/listinfo/linux-lvm\nread the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/\n\n==============================================================================\n--- PRESBYTERIAN HEALTHCARE SERVICES DISCLAIMER ---\n\nThis message originates from Presbyterian Healthcare Services or one of its\naffiliated organizations. It contains information, which may be confidential\nor privileged, and is intended only for the individual or entity named above.\nIt is prohibited for anyone else to disclose, copy, distribute or use the\ncontents of this message. All personal messages express views solely of the\nsender, which are not to be attributed to Presbyterian Healthcare Services or\nany of its affiliated organizations, and may not be distributed without this\ndisclaimer. If you received this message in error, please notify us\nimmediately at [email protected]. \n==============================================================================\n\n", "msg_date": "Thu, 30 Oct 2003 10:28:10 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [linux-lvm] RE: [PERFORM] backup/restore - another" }, { "msg_contents": "On Thu, Oct 30, 2003 at 10:28:10AM -0700, [email protected] wrote:\n> Does xfs_freeze work on red hat 7.3?\n\nIt works on any kernel with XFS (it talks directly to XFS).\n\ncheers.\n\n-- \nNathan\n", "msg_date": "Fri, 31 Oct 2003 09:12:21 +1100", "msg_from": "Nathan Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [linux-lvm] RE: [PERFORM] backup/restore - another ar ea." } ]
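Put together as a script, the five steps described in that message look roughly like the sketch below. The volume group, logical volume names and mount points are the illustrative ones from the message, the snapshot sizes are placeholders that need adjusting, and the nouuid option is the usual way to mount an XFS snapshot alongside its origin filesystem:

#!/bin/sh
# freeze pg_xlog first, then the data area (unfreeze in LIFO order later)
xfs_freeze -f /home/pgdata/pg_xlog
xfs_freeze -f /home/pgdata

# take LVM snapshots of both logical volumes
lvcreate -s -L 1G -n snap_xlog /dev/VG1/LV_xlog
lvcreate -s -L 2G -n snap_data /dev/VG1/LV_data

# resume normal operation: data first, then pg_xlog
xfs_freeze -u /home/pgdata
xfs_freeze -u /home/pgdata/pg_xlog

# copy the frozen images somewhere safe, then drop the snapshots
mount -t xfs -o ro,nouuid /dev/VG1/snap_data /mnt/snap_data
mount -t xfs -o ro,nouuid /dev/VG1/snap_xlog /mnt/snap_xlog
rsync -a /mnt/snap_data/ /mnt/pgbackup/
rsync -a /mnt/snap_xlog/ /mnt/pgbackup/pg_xlog/
umount /mnt/snap_data /mnt/snap_xlog
lvremove -f /dev/VG1/snap_data /dev/VG1/snap_xlog

Starting a throwaway postmaster against the copied directory, as the message suggests, remains the real test that the backup recovers cleanly.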
[ { "msg_contents": "Hi,\n\nOld: Post 7.3.2, P4 1.8, 1 MB RAM, 2 x IDE SW RAID 1, RedHat 8\nNew: Post 7.3.4, Xeon 2.4, 1 MB RAM, 2 x SCSI 15k SW RAID 1, RedHat 9\n\nBoth use: Only postgresql on server. Buffers = 8192, effective cache = 100000\n\nIn old plataform the free and vmstat reports no use of swap.\nIn new, the swap is in constant use (40-100 MB), with a low but constant\nswap out and swap in. The cache memory ~ 830000 and buffers ~ 43000\n\nI try to reduce the buffers to 1024 with no effects.\n\nThanks,\n\nAlexandre\n\n", "msg_date": "Thu, 30 Oct 2003 17:49:08 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <[email protected]>", "msg_from_op": true, "msg_subject": "Pg+Linux swap use " }, { "msg_contents": "On Thu, 30 Oct 2003, alexandre :: aldeia digital wrote:\n\n> Hi,\n> \n> Old: Post 7.3.2, P4 1.8, 1 MB RAM, 2 x IDE SW RAID 1, RedHat 8\n> New: Post 7.3.4, Xeon 2.4, 1 MB RAM, 2 x SCSI 15k SW RAID 1, RedHat 9\n> \n> Both use: Only postgresql on server. Buffers = 8192, effective cache = 100000\n> \n> In old plataform the free and vmstat reports no use of swap.\n> In new, the swap is in constant use (40-100 MB), with a low but constant\n> swap out and swap in. The cache memory ~ 830000 and buffers ~ 43000\n\nDo both machines have the same amount of swap, and what kernels are the \ntwo running? Also, do the xeons have hyperthreading turned on. I doubt \nhyperthreading has anything to do with this problem, but I've found that \nturning off hyperthreading in the BIOS gets me a faster machine under \nPostgresql so I just thought I'd throw that out there.\n\n", "msg_date": "Thu, 30 Oct 2003 13:08:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use " }, { "msg_contents": "On Thu, 30 Oct 2003 17:49:08 -0200 (BRST)\n\"alexandre :: aldeia digital\" <[email protected]> wrote:\n\n> Both use: Only postgresql on server. Buffers = 8192, effective cache =\n> 100000\n> \n\n\nWell, I'm assuming you meant 1GB of ram, not 1MB :)\n\nCheck a ps auxw to see what is running. Perhaps X is running gobbling up\nyour precious mem. But still.. with 1GB there should be virtually no\nswap activity. \n\nHow busy is the DB? How many connections?\n\nand is sort_mem set high?\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Fri, 31 Oct 2003 07:41:14 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Jeff wrote:\n\n> On Thu, 30 Oct 2003 17:49:08 -0200 (BRST)\n> \"alexandre :: aldeia digital\" <[email protected]> wrote:\n> \n> \n>>Both use: Only postgresql on server. Buffers = 8192, effective cache =\n>>100000\n> Well, I'm assuming you meant 1GB of ram, not 1MB :)\n> \n> Check a ps auxw to see what is running. Perhaps X is running gobbling up\n> your precious mem. But still.. with 1GB there should be virtually no\n> swap activity. \n> \n> How busy is the DB? How many connections?\n> \n> and is sort_mem set high?\n\nAlso are two kernels exactly same? In my experience linux kernel behaves \nslightly different from version to version w.r.t swap aggressiveness...\n\n Shridhar\n\n", "msg_date": "Fri, 31 Oct 2003 18:22:07 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Scott, Jeff and Shridhar:\n\n1 GB RAM :)\n\nThe stock kernels are not the same, HyperThreading enabled. 80\nsimultaneous connections. 
sort_mem = 4096\n\nI will compile my own kernel on this weekend, and I will report\nto the list after.\n\nThank's all\n\nAlexandre\n\n\n> Also are two kernels exactly same? In my experience linux kernel behaves\n> slightly different from version to version w.r.t swap aggressiveness...\n>\n> Shridhar\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 31 Oct 2003 12:03:59 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "On Fri, Oct 31, 2003 at 12:03:59PM -0200, alexandre :: aldeia digital wrote:\n> Scott, Jeff and Shridhar:\n> \n> 1 GB RAM :)\n> \n> The stock kernels are not the same, HyperThreading enabled. 80\n\nSome people have reported that things actually slow down with HT\nenabled. Have you tried turning it off?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 31 Oct 2003 09:36:24 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Not being one to hijack threads, but I haven't heard of this performance hit\nwhen using HT, I have what should all rights be a pretty fast server, dual\n2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. And it is only 50% as\nfast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\narray and 1gb of ram. I have read everything I could find on Pg performance\ntweaked all the variables that were suggested and nothing. Which is why I\nsubscribed to this list, just been lurking so far but this caught my eye. \n\nRob\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andrew Sullivan\nSent: Friday, October 31, 2003 8:36 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Pg+Linux swap use\n\nOn Fri, Oct 31, 2003 at 12:03:59PM -0200, alexandre :: aldeia digital wrote:\n> Scott, Jeff and Shridhar:\n> \n> 1 GB RAM :)\n> \n> The stock kernels are not the same, HyperThreading enabled. 80\n\nSome people have reported that things actually slow down with HT\nenabled. Have you tried turning it off?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n", "msg_date": "Fri, 31 Oct 2003 09:31:19 -0600", "msg_from": "\"Rob Sell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "On Fri, 31 Oct 2003 09:31:19 -0600\n\"Rob Sell\" <[email protected]> wrote:\n\n> Not being one to hijack threads, but I haven't heard of this\n> performance hit when using HT, I have what should all rights be a\n> pretty fast server, dual 2.4 Xeons with HT 205gb raid 5 array, 1 gig\n> of memory. And it is only 50% as fast as my old server which was a\n> dual AMD MP 1400's with a 45gb raid 5 array and 1gb of ram. I have\n> read everything I could find on Pg performance tweaked all the\n> variables that were suggested and nothing. Which is why I subscribed\n> to this list, just been lurking so far but this caught my eye. 
\n> \n> Rob\n\n\nThere's benchmarks around that show in _some_ cases HT is not all it is\ncracked up to be, somtimes running slower.\n\nBut the real point of this thread isn't all this stuff about\nhyperthreading, the problem is the guy is seeing swapping..\n\nI'm guessing RH is running some useless stuff in the BG.. or maybe he's\nrunning a retarded kernel... or.. maybe.. just.. maybe.. little elves\nare doing it.\n\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Fri, 31 Oct 2003 10:39:02 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Jeff wrote:\n\n> On Fri, 31 Oct 2003 09:31:19 -0600\n> \"Rob Sell\" <[email protected]> wrote:\n> \n> \n>>Not being one to hijack threads, but I haven't heard of this\n>>performance hit when using HT, I have what should all rights be a\n>>pretty fast server, dual 2.4 Xeons with HT 205gb raid 5 array, 1 gig\n>>of memory. And it is only 50% as fast as my old server which was a\n>>dual AMD MP 1400's with a 45gb raid 5 array and 1gb of ram. I have\n>>read everything I could find on Pg performance tweaked all the\n>>variables that were suggested and nothing. Which is why I subscribed\n>>to this list, just been lurking so far but this caught my eye. \n>>\n>>Rob\n> There's benchmarks around that show in _some_ cases HT is not all it is\n> cracked up to be, somtimes running slower.\n\nTo use HT effectively on needs.\n\n1. A kernel that understands HT.\n2. A task scheduler that understands HT\n3. A CPU intensive load.\n\nSo if you are running a stock RedHat and production postgresql database, turn it \noff. It won't hurt certainly(Almost certainly)\n\n> I'm guessing RH is running some useless stuff in the BG.. or maybe he's\n> running a retarded kernel... or.. maybe.. just.. maybe.. little elves\n> are doing it.\n\nToo much..:-)\n\nI guess Alexandre can tune bdflush to be little less agressive. Comparing \nbdflush values on two machines might turn up something.\n\nHis idea of compiling kernel is also good one. He can also try tuning some \nvalues in /proc/sys/vm but I don't find any documentation offhand.\n\nI usually run slackware and a handcompiled 2.6-test4. None of them use any swap \nunless true memory starts falling low. This \ntouch-swap-even-if-oodles-of-ram-is-free is something I have't experienced on my \ndesktop for quite a while.\n\n Shridhar\n\n", "msg_date": "Fri, 31 Oct 2003 21:24:43 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Just for an additional viewpoint. I'm finishing up a project based on FreeBSD\nand PostgreSQL. The target server is a Dual 2.4G Intel machine. I have tested\nthe application with hyperthreading enabled and disabled. To all appearances,\nenabling hyperthreading makes the box act like a quad, with the expected increase\nin processing capability - _for_this_application_.\n\nI have also heard the claims and seen the tests that show hyperthreading\noccasionally decreasing performance. I think in the end, you just have to\ntest your particular application to see how it reacts.\n\nRob Sell wrote:\n> Not being one to hijack threads, but I haven't heard of this performance hit\n> when using HT, I have what should all rights be a pretty fast server, dual\n> 2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. 
And it is only 50% as\n> fast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\n> array and 1gb of ram. I have read everything I could find on Pg performance\n> tweaked all the variables that were suggested and nothing. Which is why I\n> subscribed to this list, just been lurking so far but this caught my eye. \n> \n> Rob\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Andrew Sullivan\n> Sent: Friday, October 31, 2003 8:36 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Pg+Linux swap use\n> \n> On Fri, Oct 31, 2003 at 12:03:59PM -0200, alexandre :: aldeia digital wrote:\n> \n>>Scott, Jeff and Shridhar:\n>>\n>>1 GB RAM :)\n>>\n>>The stock kernels are not the same, HyperThreading enabled. 80\n> \n> \n> Some people have reported that things actually slow down with HT\n> enabled. Have you tried turning it off?\n> \n> A\n> \n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 31 Oct 2003 10:55:33 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n\n> Just for an additional viewpoint. I'm finishing up a project based on FreeBSD\n> and PostgreSQL. The target server is a Dual 2.4G Intel machine. I have tested\n> the application with hyperthreading enabled and disabled. To all appearances,\n> enabling hyperthreading makes the box act like a quad, with the expected increase\n> in processing capability - _for_this_application_.\n> \n> I have also heard the claims and seen the tests that show hyperthreading\n> occasionally decreasing performance. I think in the end, you just have to\n> test your particular application to see how it reacts.\n\nMy understanding is that the case where HT hurts is precisely your case. When\nyou have two real processors with HT the kernel will sometimes schedule two\njobs on the two virtual processors on the same real processor leaving the two\nvirtual processors on the other real processor idle.\n\nAs far as I know a single processor machine with HT does not benefit from\ndisabling HT.\n\n\n-- \ngreg\n\n", "msg_date": "31 Oct 2003 11:37:45 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "For the record I am running on SuSE with a pretty much stock kernel. Not to\nsound naïve, but is turning of HT something done in the bios?\n\nRob\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bill Moran\nSent: Friday, October 31, 2003 9:56 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Pg+Linux swap use\n\nJust for an additional viewpoint. I'm finishing up a project based on\nFreeBSD\nand PostgreSQL. The target server is a Dual 2.4G Intel machine. I have\ntested\nthe application with hyperthreading enabled and disabled. To all\nappearances,\nenabling hyperthreading makes the box act like a quad, with the expected\nincrease\nin processing capability - _for_this_application_.\n\nI have also heard the claims and seen the tests that show hyperthreading\noccasionally decreasing performance. 
I think in the end, you just have to\ntest your particular application to see how it reacts.\n\nRob Sell wrote:\n> Not being one to hijack threads, but I haven't heard of this performance\nhit\n> when using HT, I have what should all rights be a pretty fast server, dual\n> 2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. And it is only 50%\nas\n> fast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\n> array and 1gb of ram. I have read everything I could find on Pg\nperformance\n> tweaked all the variables that were suggested and nothing. Which is why I\n> subscribed to this list, just been lurking so far but this caught my eye. \n> \n> Rob\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Andrew\nSullivan\n> Sent: Friday, October 31, 2003 8:36 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Pg+Linux swap use\n> \n> On Fri, Oct 31, 2003 at 12:03:59PM -0200, alexandre :: aldeia digital\nwrote:\n> \n>>Scott, Jeff and Shridhar:\n>>\n>>1 GB RAM :)\n>>\n>>The stock kernels are not the same, HyperThreading enabled. 80\n> \n> \n> Some people have reported that things actually slow down with HT\n> enabled. Have you tried turning it off?\n> \n> A\n> \n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Fri, 31 Oct 2003 10:42:06 -0600", "msg_from": "\"Rob Sell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Greg Stark wrote:\n> Bill Moran <[email protected]> writes:\n> \n>>Just for an additional viewpoint. I'm finishing up a project based on FreeBSD\n>>and PostgreSQL. The target server is a Dual 2.4G Intel machine. I have tested\n>>the application with hyperthreading enabled and disabled. To all appearances,\n>>enabling hyperthreading makes the box act like a quad, with the expected increase\n>>in processing capability - _for_this_application_.\n>>\n>>I have also heard the claims and seen the tests that show hyperthreading\n>>occasionally decreasing performance. I think in the end, you just have to\n>>test your particular application to see how it reacts.\n> \n> My understanding is that the case where HT hurts is precisely your case. When\n> you have two real processors with HT the kernel will sometimes schedule two\n> jobs on the two virtual processors on the same real processor leaving the two\n> virtual processors on the other real processor idle.\n\nYup, that's why I tested it.\n\nWhile more testing is probably in order, I could find no disadvantages to running\nwith HTT enabled. And when I ran many background jobs (which is likely to happen\nduring batch processing on this sytem) the system seemed to perform as if it\nreally had 4 processors.\n\nPerhaps this is an indication of the quality of the FreeBSD scheduler (maybe it's\nHTT aware and makes decisions accordingly?), but I'm not involved enough in that\nlevel of development to do any more than speculate.\n\nAgain, this is pure speculation ... 
can anyone with a more technical insight\ncomment on whether my guess is correct, or whether I'm not testing enough?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 31 Oct 2003 11:55:49 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "On Fri, Oct 31, 2003 at 10:42:06AM -0600, Rob Sell wrote:\n> For the record I am running on SuSE with a pretty much stock kernel. Not to\n> sound na?ve, but is turning of HT something done in the bios?\n\nAs far as I know, yes.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 31 Oct 2003 12:06:37 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Rob Sell wrote:\n> Not being one to hijack threads, but I haven't heard of this performance hit\n> when using HT, I have what should all rights be a pretty fast server, dual\n> 2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. And it is only 50% as\n> fast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\n> array and 1gb of ram. I have read everything I could find on Pg performance\n> tweaked all the variables that were suggested and nothing. Which is why I\n> subscribed to this list, just been lurking so far but this caught my eye. \n\nNot to get into a big Intel vs AMD argument but 50% sounds about right. \nLet's first assume that the QS rating for the MP1400 is relatively \naccurate and convert that to a 1.4GHz Xeon. 2.4/1.4 = +71%. Since \nprocessor performance does not increase linearly with clockspeed, 50% is \nin line with expectations. Then you throw in the fact that (1) QS \nratings for slower AMD chips are understated (but overstated for the \nfastest chips), (2) AMD uses a point-to-point CPU/memory interface (much \nbetter for SMP) versus the P4/Xeon's shared bus, (3) Athlon architecture \nis more suited for DB work compared to the P4, I'd say you're lucky to \nsee 50% more performance from a Xeon 2.4.\n\nAs for HT, I've seen quite a few benchmarks where HT hurts performance. \nThe problem is it's not only app and workload specific but also system \nand usage specific. As it involves the internal rescheduling of \nprocesses, adding more simultaneous processes could help to a point and \nthen start hurting or vice-versa.\n\n", "msg_date": "Fri, 31 Oct 2003 09:17:26 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "On Fri, 2003-10-31 at 11:37, Greg Stark wrote:\n> My understanding is that the case where HT hurts is precisely your case. 
When\n> you have two real processors with HT the kernel will sometimes schedule two\n> jobs on the two virtual processors on the same real processor leaving the two\n> virtual processors on the other real processor idle.\n\nIf you're seeing this behavior, it's sounds like a bug/deficiency in\nyour kernel's scheduler: if it is HT-aware, it should go to some lengths\nto avoid this kind of processor allocation.\n\n-Neil\n\n\n", "msg_date": "Fri, 31 Oct 2003 13:45:48 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "We had a problem at work that when a windows box would connect to a samba \nshare with a lot of files in it, the kswapd was going nuts, even though we \nweren't low on memory at all. Updating to the 2.4.18 or so of the later \nredhats fixed that issue. It might be related. I think the kflush daemon \ncan get fooled into thinking it needs to get hoppin right now in older \n2.4.x kernels.\n\nOn Fri, 31 Oct 2003, alexandre :: aldeia digital wrote:\n\n> Scott, Jeff and Shridhar:\n> \n> 1 GB RAM :)\n> \n> The stock kernels are not the same, HyperThreading enabled. 80\n> simultaneous connections. sort_mem = 4096\n> \n> I will compile my own kernel on this weekend, and I will report\n> to the list after.\n> \n> Thank's all\n> \n> Alexandre\n> \n> \n> > Also are two kernels exactly same? In my experience linux kernel behaves\n> > slightly different from version to version w.r.t swap aggressiveness...\n> >\n> > Shridhar\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n\n", "msg_date": "Fri, 31 Oct 2003 11:52:35 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "\nWilliam Yu <[email protected]> writes:\n\n> Rob Sell wrote:\n>\n> > Not being one to hijack threads, but I haven't heard of this performance hit\n> > when using HT, I have what should all rights be a pretty fast server, dual\n> > 2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. And it is only 50% as\n> > fast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\n> > array and 1gb of ram. \n> \n> Not to get into a big Intel vs AMD argument but 50% sounds about right. Let's\n> first assume that the QS rating for the MP1400 is relatively accurate and\n> convert that to a 1.4GHz Xeon. 2.4/1.4 = +71%. Since processor performance\n> does not increase linearly with clockspeed, 50% is in line with expectations.\n\nHm. You've read \"50% as fast\" as \"50% faster\". \nI wonder which the original poster intended.\n\n\n-- \ngreg\n\n", "msg_date": "02 Nov 2003 11:33:03 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" }, { "msg_contents": "Greg Stark wrote:\n> \n> William Yu <[email protected]> writes:\n> \n> > Rob Sell wrote:\n> >\n> > > Not being one to hijack threads, but I haven't heard of this performance hit\n> > > when using HT, I have what should all rights be a pretty fast server, dual\n> > > 2.4 Xeons with HT 205gb raid 5 array, 1 gig of memory. And it is only 50% as\n> > > fast as my old server which was a dual AMD MP 1400's with a 45gb raid 5\n> > > array and 1gb of ram. 
\n> > \n> > Not to get into a big Intel vs AMD argument but 50% sounds about right. Let's\n> > first assume that the QS rating for the MP1400 is relatively accurate and\n> > convert that to a 1.4GHz Xeon. 2.4/1.4 = +71%. Since processor performance\n> > does not increase linearly with clockspeed, 50% is in line with expectations.\n> \n> Hm. You've read \"50% as fast\" as \"50% faster\". \n> I wonder which the original poster intended.\n\nHyper-threading makes 2 cpus be 4 cpu's, but the 4 cpu's are each only\n70% as fast, so HT is taking 2x cpus and making it 4x0.70 cpu's, which\ngives 2.80 cpu's, and you get that only if you are hammering all four\ncpu's with a full load. Imagine ifd get two cpu-bound processes on the\nfirst die (first 2 cpu's of 4) and the other CPU die is idle, and you\ncan see that HT isn't all that useful unless you are sure to keep all 4\ncpu's busy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Nov 2003 17:58:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg+Linux swap use" } ]
[ { "msg_contents": "Yet another question.. thanks to everyone responding to all these so far.. ;)\n\nThis one is basically.. given I have a big table already in COPY format, \nabout 28 million rows, all keys guaranteed to be unique, I'm trying to find \nout which of the following will get the import finished the fastest:\n\na) CREATE TABLE with no indexes or keys. Run the COPY (fast, ~30min), then \nCREATE INDEX on each column it's needed on, and ALTER TABLE for the pk and \neach fk needed.\n\nb) Same as above, but instead of ALTER TABLE -- ditch the FK, and CREATE \nUNIQUE INDEX on the PK.\n\nc) CREATE TABLE with the PK/FK's in the table structure, CREATE INDEX on \nneeded columns, then run the COPY.\n\nd) .. is to c as b is to a .. Don't create PK/FK's, just CREATE UNIQUE \nINDEX after table creation, then run the COPY.\n\nMy gut instinct tells me that in order, fastest to slowest, it's going to \nbe d,b,c,a; this is what I've experienced on other DBs such as MSSQL and \nOracle.\n\nIf there isn't a significant difference between all of them, performance \nwise, I think something is dreadfully wrong here. Running \"a\", the ALTER \nTABLE to add the PK ran for 17 hours and still wasn't finished.\n\nThe table without indexes or keys is:\nCREATE TABLE foo (\nid BIGINT NOT NULL DEFAULT nextval('foo_id_sequence'),\nmaster_id BIGINT NOT NULL,\nother_id INTEGER NOT NULL,\nstatus INTEGER NOT NULL,\naddtime TIMESTAMP WITH TIME ZONE DEFAULT now()\n);\n\nDetails on machine and configuration are:\n\nThe machine is the same one I've mentioned before.. SMP AthlonMP 2800+ \n(2.1GHz), 4x18GB 15krpm SCSI RAID-0 with 256MB onboard cache on a \nquad-channel ICP-Vortex controller, 2GB physical memory. Running FreeBSD \nRELENG_4, relevant filesystems with softupdates enabled and mounted noatime.\n\nkernel options are:\nmaxusers 0\n\noptions MAXDSIZ=\"(1536UL*1024*1024)\" # maximum limit\noptions MAXSSIZ=\"(512UL*1024*1024)\" # maximum stack\noptions DFLDSIZ=\"(512UL*1024*1024)\" # default limit\noptions VM_BCACHE_SIZE_MAX=\"(384UL*1024*1024)\" # cache size upped \nfrom default 200MB\noptions SYSVSHM #SYSV-style shared memory\noptions SYSVMSG #SYSV-style message queues\noptions SYSVSEM #SYSV-style semaphores\noptions SHMMAXPGS=262144\noptions SHMALL=262144\noptions SHMSEG=256\noptions SEMMNI=384\noptions SEMMNS=768\noptions SEMMNU=384\noptions SEMMAP=384\n\npostgresql.conf settings are:\n\nshared_buffers = 30000\nmax_fsm_relations = 10000\nmax_fsm_pages = 2000000\nmax_locks_per_transaction = 64\nwal_buffers = 128\nsort_mem = 1310720 (1.2GB)\nvacuum_mem = 262144 (256MB)\ncheckpoint_segments = 64\ncheckpoint_timeout = 1200\ncommit_delay = 20000\ncommit_siblings = 2\nfsync=true\nrandom_page_cost = 1.7\ncpu_tuple_cost = 0.005\ncpu_index_tuple_cost = 0.005\ncpu_operator_cost = 0.0012\n\nstats_start_collector = true\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\n\n", "msg_date": "Fri, 31 Oct 2003 11:02:24 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "index creation order?" }, { "msg_contents": "If it is 7.4 beta 5 or later, I would definitely go with A.\n\nAdding indexes after the fact seems to be much quicker. Foreign keys use\nthe same algorithm prior to beta 5 regardless of timing. \n\nA primary key and unique index will have approx the same performance (a\ncheck for NULL isn't very costly).\n\nOn Fri, 2003-10-31 at 11:02, Allen Landsidel wrote:\n> Yet another question.. thanks to everyone responding to all these so far.. 
;)\n> \n> This one is basically.. given I have a big table already in COPY format, \n> about 28 million rows, all keys guaranteed to be unique, I'm trying to find \n> out which of the following will get the import finished the fastest:\n> \n> a) CREATE TABLE with no indexes or keys. Run the COPY (fast, ~30min), then \n> CREATE INDEX on each column it's needed on, and ALTER TABLE for the pk and \n> each fk needed.\n> \n> b) Same as above, but instead of ALTER TABLE -- ditch the FK, and CREATE \n> UNIQUE INDEX on the PK.\n> \n> c) CREATE TABLE with the PK/FK's in the table structure, CREATE INDEX on \n> needed columns, then run the COPY.\n> \n> d) .. is to c as b is to a .. Don't create PK/FK's, just CREATE UNIQUE \n> INDEX after table creation, then run the COPY.\n> \n> My gut instinct tells me that in order, fastest to slowest, it's going to \n> be d,b,c,a; this is what I've experienced on other DBs such as MSSQL and \n> Oracle.\n> \n> If there isn't a significant difference between all of them, performance \n> wise, I think something is dreadfully wrong here. Running \"a\", the ALTER \n> TABLE to add the PK ran for 17 hours and still wasn't finished.\n> \n> The table without indexes or keys is:\n> CREATE TABLE foo (\n> id BIGINT NOT NULL DEFAULT nextval('foo_id_sequence'),\n> master_id BIGINT NOT NULL,\n> other_id INTEGER NOT NULL,\n> status INTEGER NOT NULL,\n> addtime TIMESTAMP WITH TIME ZONE DEFAULT now()\n> );\n> \n> Details on machine and configuration are:\n> \n> The machine is the same one I've mentioned before.. SMP AthlonMP 2800+ \n> (2.1GHz), 4x18GB 15krpm SCSI RAID-0 with 256MB onboard cache on a \n> quad-channel ICP-Vortex controller, 2GB physical memory. Running FreeBSD \n> RELENG_4, relevant filesystems with softupdates enabled and mounted noatime.\n> \n> kernel options are:\n> maxusers 0\n> \n> options MAXDSIZ=\"(1536UL*1024*1024)\" # maximum limit\n> options MAXSSIZ=\"(512UL*1024*1024)\" # maximum stack\n> options DFLDSIZ=\"(512UL*1024*1024)\" # default limit\n> options VM_BCACHE_SIZE_MAX=\"(384UL*1024*1024)\" # cache size upped \n> from default 200MB\n> options SYSVSHM #SYSV-style shared memory\n> options SYSVMSG #SYSV-style message queues\n> options SYSVSEM #SYSV-style semaphores\n> options SHMMAXPGS=262144\n> options SHMALL=262144\n> options SHMSEG=256\n> options SEMMNI=384\n> options SEMMNS=768\n> options SEMMNU=384\n> options SEMMAP=384\n> \n> postgresql.conf settings are:\n> \n> shared_buffers = 30000\n> max_fsm_relations = 10000\n> max_fsm_pages = 2000000\n> max_locks_per_transaction = 64\n> wal_buffers = 128\n> sort_mem = 1310720 (1.2GB)\n> vacuum_mem = 262144 (256MB)\n> checkpoint_segments = 64\n> checkpoint_timeout = 1200\n> commit_delay = 20000\n> commit_siblings = 2\n> fsync=true\n> random_page_cost = 1.7\n> cpu_tuple_cost = 0.005\n> cpu_index_tuple_cost = 0.005\n> cpu_operator_cost = 0.0012\n> \n> stats_start_collector = true\n> stats_command_string = true\n> stats_row_level = true\n> stats_block_level = true\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>", "msg_date": "Fri, 31 Oct 2003 11:23:30 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "Allen,\n\n> a) CREATE TABLE with no indexes or keys. 
Run the COPY (fast, ~30min), then\n> CREATE INDEX on each column it's needed on, and ALTER TABLE for the pk and\n> each fk needed.\n\nDid you ANALYZE after the copy?\n\n> If there isn't a significant difference between all of them, performance\n> wise, I think something is dreadfully wrong here. Running \"a\", the ALTER\n> TABLE to add the PK ran for 17 hours and still wasn't finished.\n\nAdding the *primary key* locked up? This seems unlikely; we have a known \nproblem with *foreign* keys until the current beta. But I've added primary \nkeys on 20Gb tables and had it complete in a couple of hours. Ignore this \nadivice and look for Stephan Szabo's FK patch instead if what you really \nmeant was that the FK creation locked up.\n\n> shared_buffers = 30000\nhmmm ... 236MB ....\n> max_fsm_pages = 2000000\n2MB, fine ...\n> wal_buffers = 128\n1MB, also fine ...\n> sort_mem = 1310720 (1.2GB)\nProblem here. As documented everywhere, sort_mem is allocated *per sort* not \nper query, user, or shared. This means that if the \"add PK\" operation \ninvolves 2 or more sorts (not sure, haven't tested it), then you're \nallocating .7GB RAM more than you acutally have. This may be the cause of \nyour problem, particularly if *anything* is going on concurrent to the load.\n\n> checkpoint_segments = 64\nIF you have the disk space (+ 2GB) I'd raise this to 150-300 during the load \noperation.\n\n> commit_delay = 20000\n> commit_siblings = 2\nThese settings are for heavy multi-user update activity. They are not useful \nfor a single-user load, and may even lower performance.\n\n> stats_start_collector = true\n> stats_command_string = true\n> stats_row_level = true\n> stats_block_level = true\n\nIf you can do without stats collection during load, I would suggest that you \ndo so. The above add both RAM and I/O overhead to your operation.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 31 Oct 2003 09:10:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "At 12:10 10/31/2003, Josh Berkus wrote:\n>Allen,\n>\n> > a) CREATE TABLE with no indexes or keys. Run the COPY (fast, ~30min), then\n> > CREATE INDEX on each column it's needed on, and ALTER TABLE for the pk and\n> > each fk needed.\n>\n>Did you ANALYZE after the copy?\n\nNo, and this was my major mistake. I normally run analyze periodically \nfrom cron, anywhere from once an hour to ever 15 minutes depending on the \ndb.. I had disabled that for this because I didn't want anything competing \nwith this stuff for disk I/O.\n\nI followed your other suggestions as well, canceled the index that was \nrunning, analyzed the whole db, and ran the queries again. All of them are \nrunning in under 10 or so minutes after the analyze.\n\nI'll just be adding the PKs and the Indexes, I can add triggers/rules of my \nown for the RI, rather than worry about FK creation screwing up.\n\nI had no idea analyze was playing such a big role in this sense.. I really \nthought that other than saving space, it wasn't doing much for tables that \ndon't have indexes on the.\n\nThanks for the help.\n\n> > shared_buffers = 30000\n>hmmm ... 236MB ....\n> > max_fsm_pages = 2000000\n>2MB, fine ...\n> > wal_buffers = 128\n>1MB, also fine ...\n> > sort_mem = 1310720 (1.2GB)\n>Problem here. As documented everywhere, sort_mem is allocated *per sort* \n>not\n>per query, user, or shared. 
This means that if the \"add PK\" operation\n>involves 2 or more sorts (not sure, haven't tested it), then you're\n>allocating .7GB RAM more than you acutally have. This may be the cause of\n>your problem, particularly if *anything* is going on concurrent to the load.\n\nI didn't know this was per-sort per-backend, I thought it was per-backend \nfor all sorts running on that backend. I've dropped it down to 256MB.\n\n> > checkpoint_segments = 64\n>IF you have the disk space (+ 2GB) I'd raise this to 150-300 during the load\n>operation.\n\nDone, at 128, which seems to be enough for now. I'll fiddle more with this \nlater on.\n\n> > commit_delay = 20000\n> > commit_siblings = 2\n>These settings are for heavy multi-user update activity. They are not useful\n>for a single-user load, and may even lower performance.\n\nThat's what's going on.. this database I'm working on isn't the only one in \nthe system, and some things are using different schemas in the database I'm \nworking on, so this isn't something I can afford to turn off. Most of the \nactivity is heavy and transient.. many INSERT/UPDATE/DELETE cycles.\n\nAgain, thanks for the help, I really do appreciate it. It's gratifying and \ndepressing to know the last two or so days work could've been compressed \ninto 3 hours if I'd just run that damn analyze. ;)\n\n\n\n", "msg_date": "Fri, 31 Oct 2003 13:27:12 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index creation order?" }, { "msg_contents": "Nope, still 7.3.4 here.. I am very excited about 7.4 though.. almost as \nexcited as I am about FreeBSD 5.x going -STABLE.. it's a close race \nbetween the two..\n\nI'll keep this in mind for when I update though, thanks.\n\nAt 11:23 10/31/2003, Rod Taylor wrote:\n>If it is 7.4 beta 5 or later, I would definitely go with A.\n>\n>Adding indexes after the fact seems to be much quicker. Foreign keys use\n>the same algorithm prior to beta 5 regardless of timing.\n>\n>A primary key and unique index will have approx the same performance (a\n>check for NULL isn't very costly).\n>\n>On Fri, 2003-10-31 at 11:02, Allen Landsidel wrote:\n> > Yet another question.. thanks to everyone responding to all these so \n> far.. ;)\n> >\n> > This one is basically.. given I have a big table already in COPY format,\n> > about 28 million rows, all keys guaranteed to be unique, I'm trying to \n> find\n> > out which of the following will get the import finished the fastest:\n> >\n> > a) CREATE TABLE with no indexes or keys. Run the COPY (fast, ~30min), \n> then\n> > CREATE INDEX on each column it's needed on, and ALTER TABLE for the pk and\n> > each fk needed.\n> >\n> > b) Same as above, but instead of ALTER TABLE -- ditch the FK, and CREATE\n> > UNIQUE INDEX on the PK.\n> >\n> > c) CREATE TABLE with the PK/FK's in the table structure, CREATE INDEX on\n> > needed columns, then run the COPY.\n> >\n> > d) .. is to c as b is to a .. Don't create PK/FK's, just CREATE UNIQUE\n> > INDEX after table creation, then run the COPY.\n> >\n> > My gut instinct tells me that in order, fastest to slowest, it's going to\n> > be d,b,c,a; this is what I've experienced on other DBs such as MSSQL and\n> > Oracle.\n> >\n> > If there isn't a significant difference between all of them, performance\n> > wise, I think something is dreadfully wrong here. 
Running \"a\", the ALTER\n> > TABLE to add the PK ran for 17 hours and still wasn't finished.\n> >\n> > The table without indexes or keys is:\n> > CREATE TABLE foo (\n> > id BIGINT NOT NULL DEFAULT nextval('foo_id_sequence'),\n> > master_id BIGINT NOT NULL,\n> > other_id INTEGER NOT NULL,\n> > status INTEGER NOT NULL,\n> > addtime TIMESTAMP WITH TIME ZONE DEFAULT now()\n> > );\n> >\n> > Details on machine and configuration are:\n> >\n> > The machine is the same one I've mentioned before.. SMP AthlonMP 2800+\n> > (2.1GHz), 4x18GB 15krpm SCSI RAID-0 with 256MB onboard cache on a\n> > quad-channel ICP-Vortex controller, 2GB physical memory. Running FreeBSD\n> > RELENG_4, relevant filesystems with softupdates enabled and mounted \n> noatime.\n> >\n> > kernel options are:\n> > maxusers 0\n> >\n> > options MAXDSIZ=\"(1536UL*1024*1024)\" # maximum limit\n> > options MAXSSIZ=\"(512UL*1024*1024)\" # maximum stack\n> > options DFLDSIZ=\"(512UL*1024*1024)\" # default limit\n> > options VM_BCACHE_SIZE_MAX=\"(384UL*1024*1024)\" # cache size upped\n> > from default 200MB\n> > options SYSVSHM #SYSV-style shared memory\n> > options SYSVMSG #SYSV-style message queues\n> > options SYSVSEM #SYSV-style semaphores\n> > options SHMMAXPGS=262144\n> > options SHMALL=262144\n> > options SHMSEG=256\n> > options SEMMNI=384\n> > options SEMMNS=768\n> > options SEMMNU=384\n> > options SEMMAP=384\n> >\n> > postgresql.conf settings are:\n> >\n> > shared_buffers = 30000\n> > max_fsm_relations = 10000\n> > max_fsm_pages = 2000000\n> > max_locks_per_transaction = 64\n> > wal_buffers = 128\n> > sort_mem = 1310720 (1.2GB)\n> > vacuum_mem = 262144 (256MB)\n> > checkpoint_segments = 64\n> > checkpoint_timeout = 1200\n> > commit_delay = 20000\n> > commit_siblings = 2\n> > fsync=true\n> > random_page_cost = 1.7\n> > cpu_tuple_cost = 0.005\n> > cpu_index_tuple_cost = 0.005\n> > cpu_operator_cost = 0.0012\n> >\n> > stats_start_collector = true\n> > stats_command_string = true\n> > stats_row_level = true\n> > stats_block_level = true\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n\n", "msg_date": "Fri, 31 Oct 2003 13:28:44 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index creation order?" }, { "msg_contents": "On Fri, 2003-10-31 at 13:27, Allen Landsidel wrote:\n> I had no idea analyze was playing such a big role in this sense.. I really \n> thought that other than saving space, it wasn't doing much for tables that \n> don't have indexes on the.\n\nANALYZE doesn't save any space at all -- VACUUM is probably what you're\nthinking of.\n\n-Neil\n\n\n", "msg_date": "Fri, 31 Oct 2003 13:40:22 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "Allen,\n\n> I had no idea analyze was playing such a big role in this sense.. I really\n> thought that other than saving space, it wasn't doing much for tables that\n> don't have indexes on the.\n\nAmong other things, ANALYZE tells postgres how many rows are in the table. So \nif you add a PK constraint after loading 10 million rows without ANALYZE, \nPostgreSQL is likely to think that there is only one row in the table ... 
and \nchoose a nested loop or some other really inefficient method of checking for \nuniqueness.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 31 Oct 2003 10:58:19 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "At 13:40 10/31/2003, Neil Conway wrote:\n>On Fri, 2003-10-31 at 13:27, Allen Landsidel wrote:\n> > I had no idea analyze was playing such a big role in this sense.. I really\n> > thought that other than saving space, it wasn't doing much for tables that\n> > don't have indexes on the.\n>\n>ANALYZE doesn't save any space at all -- VACUUM is probably what you're\n>thinking of.\n\nActually, I was thinking VACUUM ANALYZE.. which is what I ran after the \nCOPY.. sorry for my lack of precision.\n\nI've yet to run straight-up ANALYZE AFAIK.\n\n-Allen\n\n", "msg_date": "Fri, 31 Oct 2003 14:01:28 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index creation order?" }, { "msg_contents": "is there any way to update the stats inside a transaction? what i have is\nsomething like:\n\nselect count(*) from foo;\n-> 0\n\nbegin;\n\ncopy foo from '/tmp/foo'; -- about 100k rows\n\n-- run some queries on foo which perform horribly because the stats\n-- are way off (100k rows v. 0 rows)\n\ncommit;\n\n\nit seems that you cannot run analyze inside a transaction:\n\nbegin;\nanalyze foo;\nERROR: ANALYZE cannot run inside a BEGIN/END block\n\ni am using version 7.2.3.\n\nany work-a-rounds? should i try updating pg_statistic manually?\n\nOn Fri, 31 Oct 2003, Josh Berkus wrote:\n> Among other things, ANALYZE tells postgres how many rows are in the table. So\n> if you add a PK constraint after loading 10 million rows without ANALYZE,\n> PostgreSQL is likely to think that there is only one row in the table ... and\n> choose a nested loop or some other really inefficient method of checking for\n> uniqueness.\n\n", "msg_date": "Fri, 31 Oct 2003 15:32:30 -0500 (EST)", "msg_from": "Chester Kustarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "> begin;\n> analyze foo;\n> ERROR: ANALYZE cannot run inside a BEGIN/END block\n> \n> i am using version 7.2.3.\n\nTime to upgrade. 7.3 / 7.4 allows this to happen.", "msg_date": "Fri, 31 Oct 2003 15:57:38 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order?" }, { "msg_contents": "Chester Kustarz <[email protected]> writes:\n> it seems that you cannot run analyze inside a transaction:\n\nYou can in 7.3.* ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Oct 2003 16:33:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index creation order? " } ]
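A note pulling together where the thread above lands -- COPY into a bare table, ANALYZE, then build the indexes and the primary key. The sketch below reuses the poster's table definition; the sequence, the copy file path and the index names are stand-ins added here for illustration, not details from the original setup.

CREATE SEQUENCE foo_id_sequence;

CREATE TABLE foo (
    id        BIGINT NOT NULL DEFAULT nextval('foo_id_sequence'),
    master_id BIGINT NOT NULL,
    other_id  INTEGER NOT NULL,
    status    INTEGER NOT NULL,
    addtime   TIMESTAMP WITH TIME ZONE DEFAULT now()
);

-- bulk load first, with no indexes or constraints in the way
COPY foo FROM '/tmp/foo.copy';

-- give the planner real row counts before anything has to scan or join the table
ANALYZE foo;

-- only now build the secondary indexes and the primary key
CREATE INDEX foo_master_id_idx ON foo (master_id);
CREATE INDEX foo_other_id_idx ON foo (other_id);
ALTER TABLE foo ADD PRIMARY KEY (id);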
[ { "msg_contents": "Hello all!\n\nDo anyone have experience installing Postgres 7.3.4 on Slackware 9.1? \n\nDo exist any trouble, bug, problem... or is a good MIX?\n\nI want to \"leave\" RedHat (9) because is not \"free\" anymore and i don't \nwant to use fedora BETA TEST versions.\n\nAny suggestion? \n\nTHANKS ALL.\n\n", "msg_date": "Fri, 31 Oct 2003 14:55:12 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 7.3.4 + Slackware 9.1" }, { "msg_contents": "Hi!\n\nI havenᅵt really tested it on Slackware 9.1. But i am running\nPostgresql now for over two years on various Slackware versions.\nMy current server is running on Slackware 8.0 with a lot of packages\n(especially the core libs) upgraded to slack 9.1 packages.\n\nI had never problems with postgresql related to slackware except that\nthe old 8.0 readline packages was too old for postgresql 7.3.x, but\nthat was not really a problem.\n\nBut it seems, thereᅵs no prepackaged Postgresql for Slackware, so you\nwould have to compile it yourself.\n\n\nChristoph Nelles\n\n\n\nAm Freitag, 31. Oktober 2003 um 21:55 schrieben Sie:\n\nP> Hello all!\n\nP> Do anyone have experience installing Postgres 7.3.4 on Slackware 9.1?\n\nP> Do exist any trouble, bug, problem... or is a good MIX?\n\nP> I want to \"leave\" RedHat (9) because is not \"free\" anymore and i don't\nP> want to use fedora BETA TEST versions.\n\nP> Any suggestion? \n\nP> THANKS ALL.\n\n\nP> ---------------------------(end of\nP> broadcast)---------------------------\nP> TIP 5: Have you checked our extensive FAQ?\n\nP> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n-- \nMit freundlichen Grᅵssen\nEvil Azrael mailto:[email protected]\n\n", "msg_date": "Fri, 31 Oct 2003 22:31:43 +0100", "msg_from": "Evil Azrael <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.4 + Slackware 9.1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI run Slackware 9.1 and Postgres 7.3.4. I compiled Postgres from source and I \nhave actually had fewer problems with it then on Redhat 8.\n\nOn Friday 31 October 2003 02:55 pm, PostgreSQL wrote:\n> Hello all!\n>\n> Do anyone have experience installing Postgres 7.3.4 on Slackware 9.1?\n>\n> Do exist any trouble, bug, problem... or is a good MIX?\n>\n> I want to \"leave\" RedHat (9) because is not \"free\" anymore and i don't\n> want to use fedora BETA TEST versions.\n>\n> Any suggestion?\n>\n> THANKS ALL.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/otZeqtjaBHGZBeURArn+AJ4leCrBQIm2fj01davX4n9FcMs2lgCeLisL\nC0+9VnkJn7EFelWLm4RGrRA=\n=umPm\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Fri, 31 Oct 2003 15:38:38 -0600", "msg_from": "\"Jeremy M. Guthrie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.4 + Slackware 9.1" } ]
[ { "msg_contents": "Hi, I'm trying to set these two options to tune performance but both\nreturn 'not a valid option name'. Dumping the pg_settings table confirms\nthat they are missing. I'm using the PostgreSQL packages included with\nRedHat 9 (7.3.2) and Mandrake 9.2 Beta (7.3.4). Do I need to install the\nfull tarball to get these options? Any help greatly appreciated.\n\nThanks, Lee\n", "msg_date": "Mon, 3 Nov 2003 10:20:53 -0600", "msg_from": "\"Lee Hughes\" <[email protected]>", "msg_from_op": true, "msg_subject": "join_collapse_limit, from_collapse_limit options missing" }, { "msg_contents": "\"Lee Hughes\" <[email protected]> writes:\n> Hi, I'm trying to set these two options to tune performance but both\n> return 'not a valid option name'. Dumping the pg_settings table confirms\n> that they are missing. I'm using the PostgreSQL packages included with\n> RedHat 9 (7.3.2) and Mandrake 9.2 Beta (7.3.4). Do I need to install the\n> full tarball to get these options? Any help greatly appreciated.\n\nThey don't exist in 7.3. Why are you consulting 7.4 documentation for a\n7.3 installation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Nov 2003 11:24:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join_collapse_limit, from_collapse_limit options missing " }, { "msg_contents": "Lee,\n\n> Hi, I'm trying to set these two options to tune performance but both\n> return 'not a valid option name'. Dumping the pg_settings table confirms\n> that they are missing. I'm using the PostgreSQL packages included with\n> RedHat 9 (7.3.2) and Mandrake 9.2 Beta (7.3.4). Do I need to install the\n> full tarball to get these options? Any help greatly appreciated.\n\nIf you're working from the documentation on General Bits, please notice that \nboth of those options are marked as \"new for 7.4\".\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 3 Nov 2003 10:41:04 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join_collapse_limit, from_collapse_limit options missing" } ]
[ { "msg_contents": "How do we measure the response time in postgresql?\n\nYour response would be very much appreciated.\n\nThanks and Regards,\n\nRadha\n\n\n", "msg_date": "Tue, 4 Nov 2003 08:49:39 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Response time" }, { "msg_contents": "Hello\n\nexplain analyse select * from lidi;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------\n Seq Scan on lidi (cost=0.00..1.04 rows=4 width=96) (actual \ntime=0.046..0.092 rows=4 loops=1)\n Total runtime: 0.369 ms\n\nRegards\nPavel\n\n\n\n\nOn Tue, 4 Nov 2003 [email protected] wrote:\n\n> How do we measure the response time in postgresql?\n> \n> Your response would be very much appreciated.\n> \n> Thanks and Regards,\n> \n> Radha\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Tue, 4 Nov 2003 16:02:11 +0100 (CET)", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Response time" }, { "msg_contents": "On Tue, 2003-11-04 at 09:49, [email protected] wrote:\n> How do we measure the response time in postgresql?\n\nIn addition to EXPLAIN ANALYZE, the log_min_duration_statement\nconfiguration variable and the \\timing psql command might also be\nuseful.\n\n-Neil\n\n\n", "msg_date": "Tue, 04 Nov 2003 15:12:37 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Response time" }, { "msg_contents": "The \\timing psql command gives different time for the same query executed\nrepeatedly.\n\nSo, how can we know the exact response time for any query?\n\nThanks and Regards,\n\nRadha\n\n> On Tue, 2003-11-04 at 09:49, [email protected] wrote:\n>> How do we measure the response time in postgresql?\n>\n> In addition to EXPLAIN ANALYZE, the log_min_duration_statement\n> configuration variable and the \\timing psql command might also be\n> useful.\n>\n> -Neil\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n\n", "msg_date": "Wed, 5 Nov 2003 11:35:22 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Response time" }, { "msg_contents": "<[email protected]> writes:\n> The \\timing psql command gives different time for the same query executed\n> repeatedly.\n\nThat's probably because executing the query repeatedly results in\ndifferent execution times, as one would expect. \\timing returns the\n\"exact\" query response time, nevertheless.\n\n-Neil\n\n", "msg_date": "Wed, 05 Nov 2003 12:40:18 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Response time" }, { "msg_contents": "On Wed, Nov 05, 2003 at 11:35:22AM -0600, [email protected] wrote:\n> The \\timing psql command gives different time for the same query executed\n> repeatedly.\n\nWhy do you believe that the same query will always take the same time\nto execute?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 5 Nov 2003 13:04:22 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Response time" } ]
[ { "msg_contents": "\n\nHi all,\n\n\nI have just started with PostgreSQL on Linux and in the past I've done a \ngood bit of work on Interbase (both on Windows and Linux).\n\n\nWhat I want to know here is\n\n\nWhat do people here think of Interbase/Firebird?\n\n\nHas anybody done performance metrics or could they point me to a \ncomparison between the two?\n\n\nDoes Interbase/Firebird have (as far as people here are concerned) any \nshow-stoppers in terms of functionality which they do have on \nPostgreSQL? Or, indeed, the other way round?\n\n\nI'm not interested in starting a flame war or anything like that - just \na presentation of the facts as you see them, and if you want to put in \nyour opinion also, that's fine, just make a note!\n\n\nTIA.\n\n\nPaul...\n\n\n-- \n\nplinehan__AT__yahoo__DOT__com\n\nC++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro\n\nPlease do not top-post.\n\n", "msg_date": "Wed, 5 Nov 2003 10:28:25 -0000", "msg_from": "Paul Ganainm <[email protected]>", "msg_from_op": true, "msg_subject": "Interbase/Firebird - any users out there - what's the performance\n\tlike compared to PostgreSQL?" }, { "msg_contents": "Paul Ganainm wrote:\n> Does Interbase/Firebird have (as far as people here are concerned) any \n> show-stoppers in terms of functionality which they do have on \n> PostgreSQL? Or, indeed, the other way round?\n\nPersonally I think native windows port is plus that interbase/firebird has over \npostgresql. It also has native threaded model which *could* be benefitial at times.\n\nI combed docs once or twice. It looks like the linux/unix support is failry new \nso I don't know how it stacks in production and performance.\n\nOTOH, I don't like storing entire database in one file. That could get messy. \nOtherwise two databases are on par, at least on paper. I hope SQL compliance of \ninterbase/firebird is better than mysql and closer to postgresql, +/- delta.\n\nOf course PG has it's own goodies. Rules/Create language are just tip of iceberg.\n\nCan you come up with some relative comparison? We can help you from postgresql \nside..:-) That would be great.\n\n Bye\n Shridhar\n\n", "msg_date": "Wed, 05 Nov 2003 18:45:36 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interbase/Firebird - any users out there - what's the" }, { "msg_contents": "About a year ago I programmed a php/firebird application, and I've never\nhad a problem with firebird. It's a small database (a few megabytes),\nbut it just works day after day.\n\nI've seen firebird has updatable views, they seem to work very well.\n\nI have the feeling that It's not as flexible as postgresql, but I still\nlike it. It would be my choice for win32 applications (until win32\npostgresql port become available).\n\n\nOn Wed, 2003-11-05 at 07:28, Paul Ganainm wrote:\n\n> Hi all,\n> \n> \n> I have just started with PostgreSQL on Linux and in the past I've done a \n> good bit of work on Interbase (both on Windows and Linux).\n> \n> \n> What I want to know here is\n> \n> \n> What do people here think of Interbase/Firebird?\n> \n> \n> Has anybody done performance metrics or could they point me to a \n> comparison between the two?\n> \n> \n> Does Interbase/Firebird have (as far as people here are concerned) any \n> show-stoppers in terms of functionality which they do have on \n> PostgreSQL? 
Or, indeed, the other way round?\n> \n> \n> I'm not interested in starting a flame war or anything like that - just \n> a presentation of the facts as you see them, and if you want to put in \n> your opinion also, that's fine, just make a note!\n> \n> \n> TIA.\n> \n> \n> Paul...\n>", "msg_date": "Wed, 05 Nov 2003 11:05:46 -0300", "msg_from": "Franco Bruno Borghesi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interbase/Firebird - any users out there - what's" }, { "msg_contents": "Try this benchmark on PostgreSQL, MySQL, FireBird, Oracle:\n\nhttp://go.jitbot.com/dbbench-pg-fb-mys-orcl\n\nThis page expired but if you click on \"Cache\" on the side of JitBot search\npane, you'll see the cached page:\n\nhttp://go.jitbot.com/opensource-dbs-table\n\n\nCheers,\n\n\n\"Paul Ganainm\" <[email protected]> wrote in message\nnews:[email protected]...\n>\n>\n> Hi all,\n>\n>\n> I have just started with PostgreSQL on Linux and in the past I've done a\n> good bit of work on Interbase (both on Windows and Linux).\n>\n>\n> What I want to know here is\n>\n>\n> What do people here think of Interbase/Firebird?\n>\n>\n> Has anybody done performance metrics or could they point me to a\n> comparison between the two?\n>\n>\n> Does Interbase/Firebird have (as far as people here are concerned) any\n> show-stoppers in terms of functionality which they do have on\n> PostgreSQL? Or, indeed, the other way round?\n>\n>\n> I'm not interested in starting a flame war or anything like that - just\n> a presentation of the facts as you see them, and if you want to put in\n> your opinion also, that's fine, just make a note!\n>\n>\n> TIA.\n>\n>\n> Paul...\n>\n>\n> --\n>\n> plinehan__AT__yahoo__DOT__com\n>\n> C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro\n>\n> Please do not top-post.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Thu, 6 Nov 2003 11:57:15 -0500", "msg_from": "\"Private\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interbase/Firebird - any users out there - what's the performance\n\tlike compared to PostgreSQL?" }, { "msg_contents": "\"Private\" <[email protected]> writes:\n> Try this benchmark on PostgreSQL, MySQL, FireBird, Oracle:\n>\n> http://go.jitbot.com/dbbench-pg-fb-mys-orcl\n\nIt looks like a good candidate for adding in a plpgsql stored\nprocedure to get similar speedups to what was gotten with the Oracle\nbenchmark.\n\nIt looks, from the numbers, as though Firebird is _slightly_ slower\nthan PostgreSQL, which seems not totally remarkable. I'd expect that\nreimplementing the benchmark inside a stored procedure would provide\nsimilar improvements with Firebird...\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil\" \"@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Thu, 06 Nov 2003 13:23:06 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interbase/Firebird - any users out there - what's the performance\n\tlike compared to PostgreSQL?" 
}, { "msg_contents": "Christopher Browne kirjutas N, 06.11.2003 kell 20:23:\n> \"Private\" <[email protected]> writes:\n> > Try this benchmark on PostgreSQL, MySQL, FireBird, Oracle:\n> >\n> > http://go.jitbot.com/dbbench-pg-fb-mys-orcl\n> \n> It looks like a good candidate for adding in a plpgsql stored\n> procedure to get similar speedups to what was gotten with the Oracle\n> benchmark.\n\nIt would also be interesting to see the same test run on Postgresql on\nLinux/UNIX. PgSQL on win2000 (most likely using cygwin) is probably not\nthe best you can get out of that hardware.\n\n-----------\nHannu\n\n", "msg_date": "Thu, 06 Nov 2003 21:54:13 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interbase/Firebird - any users out there - what's" } ]
[ { "msg_contents": "Hello guys !\n\nI am trying to connect to pgsql using ODBC from Visual Objects\n(for the ones who don't know what is Visual Objects, it's an IDE\nwith its own object-oriented language made by Computer Associates.\nVO's roots are related to Clipper. ).\n\nThe time it takes to get the list of the tables, and then to get the \ndata and display it way too long. It took more than 16 seconds to\nconnect, get 200 records and print them on the console.\n\nI know for sure that it's not pgsql's fault: with a small c program\nthat uses libpq i got the same data instantaneously. Also, it looks\nlike it's not VO's fault (i know that there are people who used VO\nagainst big databases). So, the problem must be related to the ODBC\n(the pgsql driver or something else?).\n\nAnyway, i've tried to open the same table from MSAccess, and the\nperformance was even worse. Just an example: for going from one record\nto the next one it lasts about 4-5 seconds.\n\nThe machine is a Pentium2 266mhz (128mb) laptop running windows NT.\n\nToday, I have tried to access pgsql from MSAccess at work. It worked \nvery fast. I've used the same ODBC drivers, but the machine is\nfaster (p4 2.4ghz), has more memory, and the OS is windows 2000.\nBut it is hard for me to believe that a Pentium2 machine with 128mb RAM\ncan't handle 200 records !\n\nI begin to think that there is something fishy about the ODBC\nsupport installed on the laptop (i have recently \"inherited\" the\nlaptop from someone else, and i haven't reinstalled the os ).\n\nWhat do you guys think? what could be done to make things go\nfaster?\n\n\nI have also two more questions. After installing the pgsql odbc\ndrivers, there are 3 drivers that appear:\n\"PostgreSQL\", \"POstgreSQL legacy\" and \"PostgreSQL unicode\".\nWhat's the difference between \"PostgreSQL\" and \"PostgreSQL lecacy\"?\nAre there any options of the odbc driver that could cause a\nperformance boost?\n\nThanks you,\n\nAdrian Maier\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 06 Nov 2003 10:43:39 +0200", "msg_from": "Adrian Maier <[email protected]>", "msg_from_op": true, "msg_subject": "horrible performance when trying to connect to PostgreSQL using ODBC" }, { "msg_contents": "Adrian Maier <[email protected]> writes:\n> I begin to think that there is something fishy about the ODBC\n> support installed on the laptop (i have recently \"inherited\" the\n> laptop from someone else, and i haven't reinstalled the os ).\n\nYou'd be better off asking these questions on pgsql-odbc. I'm not sure\nthat any of the ODBC gurus read this list.\n\nI believe that the ODBC driver has some incredibly extensive, and\nexpensive, debug-logging options. I dunno if max logging would entirely\nexplain the slowdown you see, but definitely check what you have turned\non.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Nov 2003 09:57:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horrible performance when trying to connect to PostgreSQL using\n\tODBC" } ]
[ { "msg_contents": "I have a c program called test1.pgc with some sql statements embedded in\nit. The program was preprocessed, compiled and linked. Now, I have the\nexecutable test1.\n\nWhen I run the executable it says,\n\n./test1: error while loading shared libraries: libecpg.so.3: cannot open\nshared object file: No such file or directory\n\nWhat does it mean by this error message? What should I do to correct this\nerror and run the executable successfully?\n\nYour response would be very much appreciated.\n\nThanks and Regards,\n\nRadha\n\n\n", "msg_date": "Sun, 9 Nov 2003 09:06:14 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: error while executing a c program with embedded sql" }, { "msg_contents": "<[email protected]> writes:\n> ./test1: error while loading shared libraries: libecpg.so.3: cannot open\n> shared object file: No such file or directory\n\nThe dynamic linker is failing to find either libecpg.so itself, or one\nof the shared libraries it depends on (perhaps libpq.so).\n\nIf you are on a Linux system you probably want to fix your ldconfig\nconfiguration so that all these libraries are found automatically.\nYou can run ldd on a particular executable or shared library to see what\nlibraries it references and whether those libraries are getting found\nin the proper places.\n\nNote that it is entirely possible for the program linking stage to\nsucceed but dynamic linking to fail at runtime. For various reasons the\nsearch rules are not quite the same in the two contexts ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Nov 2003 10:34:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error while executing a c program with embedded sql " }, { "msg_contents": "On Sun, 2003-11-09 at 15:06, [email protected] wrote:\n> I have a c program called test1.pgc with some sql statements embedded in\n> it. The program was preprocessed, compiled and linked. Now, I have the\n> executable test1.\n> \n> When I run the executable it says,\n> \n> ./test1: error while loading shared libraries: libecpg.so.3: cannot open\n> shared object file: No such file or directory\n> \n> What does it mean by this error message? What should I do to correct this\n> error and run the executable successfully?\n\nShared libraries are loaded from directories specified to the system by\nldconfig. Your shared library, libecpg.so.3, is in a PostgreSQL\ndirectory, such as /usr/local/pgsql/lib, which has not been added to the\ndirectories known to the loader.\n\nIf you are able to add that directory with ldconfig, that is the best\nway to do it, but it requires root privilege.\n\nOtherwise you can set the environment variable LD_LIBRARY_PATH, thus:\n\n\texport LD_LIBRARY_PATH=/usr/local/pgsql/lib\n\nbefore you run the program, or you can use LD_PRELOAD:\n\n\tLD_PRELOAD=/usr/local/pgsql/lib/libecpg.so.3 ./test1\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"O death, where is thy sting? O grave, where is \n thy victory?\" 1 Corinthians 15:55 \n\n", "msg_date": "Sun, 09 Nov 2003 17:32:25 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error while executing a c program with embedded sql" }, { "msg_contents": "Thanks a lot. IT WORKED! 
with your suggestions.\n\nRegards,\nRadha\n\n> On Sun, 2003-11-09 at 15:06, [email protected] wrote:\n>> I have a c program called test1.pgc with some sql statements embedded\n>> in it. The program was preprocessed, compiled and linked. Now, I have\n>> the executable test1.\n>>\n>> When I run the executable it says,\n>>\n>> ./test1: error while loading shared libraries: libecpg.so.3: cannot\n>> open shared object file: No such file or directory\n>>\n>> What does it mean by this error message? What should I do to correct\n>> this error and run the executable successfully?\n>\n> Shared libraries are loaded from directories specified to the system by\n> ldconfig. Your shared library, libecpg.so.3, is in a PostgreSQL\n> directory, such as /usr/local/pgsql/lib, which has not been added to the\n> directories known to the loader.\n>\n> If you are able to add that directory with ldconfig, that is the best\n> way to do it, but it requires root privilege.\n>\n> Otherwise you can set the environment variable LD_LIBRARY_PATH, thus:\n>\n> \texport LD_LIBRARY_PATH=/usr/local/pgsql/lib\n>\n> before you run the program, or you can use LD_PRELOAD:\n>\n> \tLD_PRELOAD=/usr/local/pgsql/lib/libecpg.so.3 ./test1\n>\n> --\n> Oliver Elphick [email protected]\n> Isle of Wight, UK\n> http://www.lfix.co.uk/oliver GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870\n> 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"O death, where is thy sting? O grave, where is\n> thy victory?\" 1 Corinthians 15:55\n\n\n\n", "msg_date": "Mon, 10 Nov 2003 09:00:27 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: error while executing a c program with embedded sql" }, { "msg_contents": "[email protected] wrote:\n\n>I have a c program called test1.pgc with some sql statements embedded in\n>it. The program was preprocessed, compiled and linked. Now, I have the\n>executable test1.\n>\n>When I run the executable it says,\n>\n>./test1: error while loading shared libraries: libecpg.so.3: cannot open\n>shared object file: No such file or directory\n>\ncheck where the so file is located.\n$ locate libecpg.so.3\n\nsay its /usr/local/pgsql/lib\nthen either add the folder above to /etc/ld.so.conf and run ldconfig\nas root.\n\nor\n\n$ export LD_LIBRARY_PATH=/path/to/folder/containing/the/so/file\n$ ./test1\n\nthe above assumes u are on linux. on unix also its similar.\n\n\n>\n>What does it mean by this error message? What should I do to correct this\n>error and run the executable successfully?\n>\n>Your response would be very much appreciated.\n>\n>Thanks and Regards,\n>\n>Radha\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n", "msg_date": "Mon, 10 Nov 2003 21:57:20 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] error while executing a c program with embedded sql" } ]
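Pulling the suggestions from this thread into one place: find the library first, then either point the dynamic linker at its directory for the current shell or register the directory permanently. The /usr/local/pgsql/lib path is the one assumed throughout the thread; substitute whatever locate reports on your system.

# find where libecpg actually lives
locate libecpg.so.3

# quick fix, per shell session
export LD_LIBRARY_PATH=/usr/local/pgsql/lib
./test1

# permanent fix on Linux, as root
echo /usr/local/pgsql/lib >> /etc/ld.so.conf
ldconfig

# verify that every shared library the binary needs is now resolved
ldd ./test1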
[ { "msg_contents": "\nTable structure is simple:\n\nCREATE TABLE traffic_logs (\n company_id bigint,\n ip_id bigint,\n port integer,\n bytes bigint,\n runtime timestamp without time zone\n);\n\nruntime is 'day of month' ...\n\nI need to summarize the month, per company, with a query as:\n\nexplain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic\n FROM company c, traffic_logs ts\n WHERE c.company_id = ts.company_id\n AND month_trunc(ts.runtime) = '2003-10-01'\nGROUP BY company_name,ts.company_id;\n\nand the explain looks like:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual time=32983.36..47586.17 rows=144 loops=1)\n -> Group (cost=32000.94..32062.54 rows=8213 width=41) (actual time=32957.40..42817.88 rows=462198 loops=1)\n -> Sort (cost=32000.94..32021.47 rows=8213 width=41) (actual time=32957.38..36261.31 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=31321.45..31466.92 rows=8213 width=41) (actual time=13983.07..22642.14 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.52..7.40 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.02..2.78 rows=352 loops=1)\n -> Sort (cost=31297.04..31317.57 rows=8213 width=16) (actual time=13977.49..16794.41 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213 width=16) (actual time=0.29..5562.25 rows=462198 loops=1)\n Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 47587.82 msec\n(14 rows)\n\nthe problem is that we're only taking a few months worth of data, so I\ndon't think there is much of a way of 'improve performance' on this, but\nfigured I'd ask quickly before I do something rash ...\n\nNote that without the month_trunc() index, the Total runtime more then\ndoubles:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39578.63..39660.76 rows=821 width=41) (actual time=87805.47..101251.35 rows=144 loops=1)\n -> Group (cost=39578.63..39640.23 rows=8213 width=41) (actual time=87779.56..96824.56 rows=462198 loops=1)\n -> Sort (cost=39578.63..39599.17 rows=8213 width=41) (actual time=87779.52..90781.48 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=38899.14..39044.62 rows=8213 width=41) (actual time=64073.98..72783.68 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=64.66..66.55 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=1.76..61.70 rows=352 loops=1)\n -> Sort (cost=38874.73..38895.27 rows=8213 width=16) (actual time=64009.26..66860.71 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 101277.17 msec\n(14 rows)\n\n", "msg_date": "Mon, 10 
Nov 2003 16:18:52 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "*very* slow query to summarize data for a month ..." }, { "msg_contents": "\nDo you have an index on ts.bytes? Josh had suggested this and after I put\nit on my summed fields, I saw a speed increase. I can't remember the\narticle was that Josh had written about index usage, but maybe he'll chime\nin and supply the URL for his article.\nhth\n\nPatrick Hatcher\n\n\n\n \n \"Marc G. Fournier\" \n <scrappy@postgresql \n .org> To \n Sent by: [email protected] \n pgsql-performance-o cc \n [email protected] \n Subject \n [PERFORM] *very* slow query to \n 11/10/2003 12:18 PM summarize data for a month ... \n \n \n \n \n \n \n\n\n\n\n\nTable structure is simple:\n\nCREATE TABLE traffic_logs (\n company_id bigint,\n ip_id bigint,\n port integer,\n bytes bigint,\n runtime timestamp without time zone\n);\n\nruntime is 'day of month' ...\n\nI need to summarize the month, per company, with a query as:\n\nexplain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS\ntotal_traffic\n FROM company c, traffic_logs ts\n WHERE c.company_id = ts.company_id\n AND month_trunc(ts.runtime) = '2003-10-01'\nGROUP BY company_name,ts.company_id;\n\nand the explain looks like:\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual\ntime=32983.36..47586.17 rows=144 loops=1)\n -> Group (cost=32000.94..32062.54 rows=8213 width=41) (actual\ntime=32957.40..42817.88 rows=462198 loops=1)\n -> Sort (cost=32000.94..32021.47 rows=8213 width=41) (actual\ntime=32957.38..36261.31 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=31321.45..31466.92 rows=8213 width=41)\n(actual time=13983.07..22642.14 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25)\n(actual time=5.52..7.40 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52\nrows=352 width=25) (actual time=0.02..2.78 rows=352 loops=1)\n -> Sort (cost=31297.04..31317.57 rows=8213 width=16)\n(actual time=13977.49..16794.41 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Index Scan using tl_month on traffic_logs ts\n(cost=0.00..30763.02 rows=8213 width=16) (actual time=0.29..5562.25\nrows=462198 loops=1)\n Index Cond: (month_trunc(runtime)\n= '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 47587.82 msec\n(14 rows)\n\nthe problem is that we're only taking a few months worth of data, so I\ndon't think there is much of a way of 'improve performance' on this, but\nfigured I'd ask quickly before I do something rash ...\n\nNote that without the month_trunc() index, the Total runtime more then\ndoubles:\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Aggregate (cost=39578.63..39660.76 rows=821 width=41) (actual\ntime=87805.47..101251.35 rows=144 loops=1)\n -> Group (cost=39578.63..39640.23 rows=8213 width=41) (actual\ntime=87779.56..96824.56 rows=462198 loops=1)\n -> Sort (cost=39578.63..39599.17 rows=8213 width=41) (actual\ntime=87779.52..90781.48 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=38899.14..39044.62 rows=8213 width=41)\n(actual 
time=64073.98..72783.68 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25)\n(actual time=64.66..66.55 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52\nrows=352 width=25) (actual time=1.76..61.70 rows=352 loops=1)\n -> Sort (cost=38874.73..38895.27 rows=8213 width=16)\n(actual time=64009.26..66860.71 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts\n(cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04\nrows=462198 loops=1)\n Filter: (date_trunc('month'::text,\nruntime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 101277.17 msec\n(14 rows)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n\n", "msg_date": "Mon, 10 Nov 2003 12:31:08 -0800", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "Marc,\n\nI'd say your machine is very low on available RAM, particularly sort_mem. \nThe steps which are taking a long time are:\n \n> Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual \ntime=32983.36..47586.17 rows=144 loops=1)\n> -> Group (cost=32000.94..32062.54 rows=8213 width=41) (actual \ntime=32957.40..42817.88 rows=462198 loops=1)\n\nand:\n\n> -> Merge Join (cost=31321.45..31466.92 rows=8213 width=41) \n(actual time=13983.07..22642.14 rows=462198 loops=1)\n> Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n> -> Sort (cost=24.41..25.29 rows=352 width=25) (actual \ntime=5.52..7.40 rows=348 loops=1)\n\nThere are also *large* delays between steps. Either your I/O is saturated, \nor you haven't run a VACUUM FULL ANALYZE in a while (which would also explain \nthe estimates being off).\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 10 Nov 2003 12:59:03 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\"Patrick Hatcher\" <[email protected]> writes:\n> Do you have an index on ts.bytes? Josh had suggested this and after I put\n> it on my summed fields, I saw a speed increase.\n\nWhat's the reasoning behind this? ISTM that sum() should never use an\nindex, nor would it benefit from using one.\n\n-Neil\n\n", "msg_date": "Mon, 10 Nov 2003 17:51:10 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213 width=16) (actual time=0.29..5562.25 rows=462198 loops=1)\n> Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n\nInteresting that we get the row count estimate for this index scan so\nwrong -- I believe this is the root of the problem. Hmmm... I would\nguess that the optimizer stats we have for estimating the selectivity\nof a functional index is pretty primitive, but I haven't looked into\nit at all. Tom might be able to shed some light...\n\n[ In the second EXPLAIN ANALYZE, ... 
]\n\n> -> Seq Scan on traffic_logs ts (cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04 rows=462198 loops=1)\n> Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n\nUh, what? The \"actual time\" seems to have finished far before it has\nbegun :-) Is this just a typo, or does the actual output include a\nnegative number?\n\n-Neil\n\n", "msg_date": "Mon, 10 Nov 2003 18:15:41 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Interesting that we get the row count estimate for this index scan so\n> wrong -- I believe this is the root of the problem. Hmmm... I would\n> guess that the optimizer stats we have for estimating the selectivity\n> of a functional index is pretty primitive, but I haven't looked into\n> it at all. Tom might be able to shed some light...\n\nTry \"none at all\". I have speculated in the past that it would be worth\ngathering statistics about the contents of functional indexes, but it's\nstill on the to-do-someday list.\n\n>> -> Seq Scan on traffic_logs ts (cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04 rows=462198 loops=1)\n\n> Uh, what?\n\nThat is bizarre, all right. Is it reproducible?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2003 18:42:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ... " }, { "msg_contents": "\n\nOn Mon, 10 Nov 2003, Neil Conway wrote:\n\n> \"Marc G. Fournier\" <[email protected]> writes:\n> > -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213 width=16) (actual time=0.29..5562.25 rows=462198 loops=1)\n> > Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n>\n> Interesting that we get the row count estimate for this index scan so\n> wrong -- I believe this is the root of the problem. Hmmm... I would\n> guess that the optimizer stats we have for estimating the selectivity\n> of a functional index is pretty primitive, but I haven't looked into\n> it at all. Tom might be able to shed some light...\n>\n> [ In the second EXPLAIN ANALYZE, ... ]\n>\n> > -> Seq Scan on traffic_logs ts (cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04 rows=462198 loops=1)\n> > Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n>\n> Uh, what? The \"actual time\" seems to have finished far before it has\n> begun :-) Is this just a typo, or does the actual output include a\n> negative number?\n\nThis was purely a cut-n-paste ...\n\n", "msg_date": "Mon, 10 Nov 2003 20:19:56 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\n\nOn Mon, 10 Nov 2003, Tom Lane wrote:\n\n> Neil Conway <[email protected]> writes:\n> > Interesting that we get the row count estimate for this index scan so\n> > wrong -- I believe this is the root of the problem. Hmmm... I would\n> > guess that the optimizer stats we have for estimating the selectivity\n> > of a functional index is pretty primitive, but I haven't looked into\n> > it at all. Tom might be able to shed some light...\n>\n> Try \"none at all\". 
I have speculated in the past that it would be worth\n> gathering statistics about the contents of functional indexes, but it's\n> still on the to-do-someday list.\n>\n> >> -> Seq Scan on traffic_logs ts (cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04 rows=462198 loops=1)\n>\n> > Uh, what?\n>\n> That is bizarre, all right. Is it reproducible?\n\nNope, and a subsequent run shows better results too:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39674.38..39756.70 rows=823 width=41) (actual time=35573.27..49953.47 rows=144 loops=1)\n -> Group (cost=39674.38..39736.12 rows=8232 width=41) (actual time=35547.27..45479.27 rows=462198 loops=1)\n -> Sort (cost=39674.38..39694.96 rows=8232 width=41) (actual time=35547.23..39167.90 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=38993.22..39139.02 rows=8232 width=41) (actual time=16658.23..25559.08 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.51..7.38 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.02..2.80 rows=352 loops=1)\n -> Sort (cost=38968.82..38989.40 rows=8232 width=16) (actual time=16652.66..19785.83 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38433.46 rows=8232 width=16) (actual time=0.11..8794.43 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 49955.22 msec\n\n", "msg_date": "Mon, 10 Nov 2003 20:28:07 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ... " }, { "msg_contents": "\n\nOn Mon, 10 Nov 2003, Josh Berkus wrote:\n\n> Marc,\n>\n> I'd say your machine is very low on available RAM, particularly sort_mem.\n> The steps which are taking a long time are:\n\nHere's the server:\n\nlast pid: 42651; load averages: 1.52, 0.96, 0.88\nup 28+07:43:33 20:35:44\n307 processes: 2 running, 304 sleeping, 1 zombie\nCPU states: 18.0% user, 0.0% nice, 29.1% system, 0.6% interrupt, 52.3% idle\nMem: 1203M Active, 1839M Inact, 709M Wired, 206M Cache, 199M Buf, 5608K Free\nSwap: 8192M Total, 1804K Used, 8190M Free\n\n>\n> > Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual\n> time=32983.36..47586.17 rows=144 loops=1)\n> > -> Group (cost=32000.94..32062.54 rows=8213 width=41) (actual\n> time=32957.40..42817.88 rows=462198 loops=1)\n>\n> and:\n>\n> > -> Merge Join (cost=31321.45..31466.92 rows=8213 width=41)\n> (actual time=13983.07..22642.14 rows=462198 loops=1)\n> > Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n> > -> Sort (cost=24.41..25.29 rows=352 width=25) (actual\n> time=5.52..7.40 rows=348 loops=1)\n>\n> There are also *large* delays between steps. Either your I/O is saturated,\n> or you haven't run a VACUUM FULL ANALYZE in a while (which would also explain\n> the estimates being off).\n\nthought about that before I started the thread, and ran it just in case ...\n\njust restarted the server with sort_mem set to 10M, and didn't help much on the Aggregate, or MergeJoin ... 
:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39674.38..39756.70 rows=823 width=41) (actual time=33066.25..54021.50 rows=144 loops=1)\n -> Group (cost=39674.38..39736.12 rows=8232 width=41) (actual time=33040.25..47005.57 rows=462198 loops=1)\n -> Sort (cost=39674.38..39694.96 rows=8232 width=41) (actual time=33040.22..37875.97 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=38993.22..39139.02 rows=8232 width=41) (actual time=14428.17..23568.80 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.80..7.66 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.08..3.06 rows=352 loops=1)\n -> Sort (cost=38968.82..38989.40 rows=8232 width=16) (actual time=14422.27..17429.34 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38433.46 rows=8232 width=16) (actual time=0.15..8119.72 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 54034.44 msec\n(14 rows)\n\nthe problem is that the results we are comparing with right now is the one\nthat had the - time on it :( Just restarted the server with default\nsort_mem, and here is the query with that:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39691.27..39773.61 rows=823 width=41) (actual time=35077.18..50424.74 rows=144 loops=1)\n -> Group (cost=39691.27..39753.03 rows=8234 width=41) (actual time=35051.29..-650049.84 rows=462198 loops=1)\n -> Sort (cost=39691.27..39711.86 rows=8234 width=41) (actual time=35051.26..38847.40 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=39009.92..39155.76 rows=8234 width=41) (actual time=16155.37..25439.42 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.85..7.71 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.10..3.07 rows=352 loops=1)\n -> Sort (cost=38985.51..39006.10 rows=8234 width=16) (actual time=16149.46..19437.47 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38450.00 rows=8234 width=16) (actual time=0.16..8869.37 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 50426.80 msec\n(14 rows)\n\n\nAnd, just on a whim, here it is set to 100M:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39691.27..39773.61 rows=823 width=41) (actual time=25888.20..38909.88 rows=144 loops=1)\n -> Group (cost=39691.27..39753.03 rows=8234 width=41) (actual time=25862.81..34591.76 rows=462198 loops=1)\n -> Sort (cost=39691.27..39711.86 rows=8234 width=41) (actual time=25862.77..723885.95 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=39009.92..39155.76 rows=8234 width=41) (actual time=12471.23..21855.08 rows=462198 loops=1)\n Merge Cond: 
(\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.87..7.74 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.11..3.14 rows=352 loops=1)\n -> Sort (cost=38985.51..39006.10 rows=8234 width=16) (actual time=12465.29..14941.24 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38450.00 rows=8234 width=16) (actual time=0.18..9106.16 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 39077.75 msec\n(14 rows)\n\nSo, it does give a noticeable improvement the higher the sort_mem ...\n\nAnd, @ 100M for sort_mem and using the month_trunc index:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=32089.29..32171.63 rows=823 width=41) (actual time=30822.51..57202.44 rows=144 loops=1)\n -> Group (cost=32089.29..32151.04 rows=8234 width=41) (actual time=30784.24..743396.18 rows=462198 loops=1)\n -> Sort (cost=32089.29..32109.87 rows=8234 width=41) (actual time=30784.21..36212.96 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=31407.94..31553.77 rows=8234 width=41) (actual time=11384.79..24918.56 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25) (actual time=5.92..9.55 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52 rows=352 width=25) (actual time=0.08..3.21 rows=352 loops=1)\n -> Sort (cost=31383.53..31404.12 rows=8234 width=16) (actual time=11378.81..15211.07 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30848.02 rows=8234 width=16) (actual time=0.46..7055.75 rows=462198 loops=1)\n Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 57401.72 msec\n(14 rows)\n\n", "msg_date": "Mon, 10 Nov 2003 20:49:57 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "On Mon, 10 Nov 2003, Marc G. Fournier wrote:\n\n> \n> explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic\n> FROM company c, traffic_logs ts\n> WHERE c.company_id = ts.company_id\n> AND month_trunc(ts.runtime) = '2003-10-01'\n> GROUP BY company_name,ts.company_id;\n\nWhat if you do\n\n ts.runtime >= '2003-10-01' AND ts.runtime < '2003-11-01'\n\nand add an index like (runtime, company_name, company_id)?\n\n\n-- \n/Dennis\n\n", "msg_date": "Tue, 11 Nov 2003 07:50:07 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\nDennis Bjorklund <[email protected]> writes:\n\n> On Mon, 10 Nov 2003, Marc G. 
Fournier wrote:\n> \n> > \n> > explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic\n> > FROM company c, traffic_logs ts\n> > WHERE c.company_id = ts.company_id\n> > AND month_trunc(ts.runtime) = '2003-10-01'\n> > GROUP BY company_name,ts.company_id;\n\nSo depending on how much work you're willing to do there are some more\ndramatic speedups you could get:\n\nUse partial indexes like this (you'll need one for every month):\n\ncreate index i on traffic_log (company_id) \n where month_trunc(runtime) = '2003-10-01'\n\nthen group by company_id only so it can use the index:\n\nselect * \n from company\n join (\n select company_id, sum(bytes) as total_traffic\n from traffic_log\n where month_trunc(runtime) = '2003-10-01'\n group by company_id\n ) as x using (company_id)\n order by company_name\n\n\n\nActually you might be able to get the same effect using function indexes like:\n\ncreate index i on traffic_log (month_trunc(runtime), company_id)\n\n\n-- \ngreg\n\n", "msg_date": "11 Nov 2003 12:47:06 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\nOn Tue, 11 Nov 2003, Greg Stark wrote:\n\n> Actually you might be able to get the same effect using function indexes\n> like:\n>\n> create index i on traffic_log (month_trunc(runtime), company_id)\n\nhad actually thought of that one ... is it something that is only\navailable in v7.4?\n\nams=# create index i on traffic_logs ( month_trunc(runtime), company_id );\nERROR: parser: parse error at or near \",\" at character 54\n\n", "msg_date": "Tue, 11 Nov 2003 14:14:01 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "marc,\n\n> had actually thought of that one ... is it something that is only\n> available in v7.4?\n\nYes. New feature.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Nov 2003 10:30:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\n\nOn Tue, 11 Nov 2003, Dennis Bjorklund wrote:\n\n> On Mon, 10 Nov 2003, Marc G. 
Fournier wrote:\n>\n> >\n> > explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic\n> > FROM company c, traffic_logs ts\n> > WHERE c.company_id = ts.company_id\n> > AND month_trunc(ts.runtime) = '2003-10-01'\n> > GROUP BY company_name,ts.company_id;\n>\n> What if you do\n>\n> ts.runtime >= '2003-10-01' AND ts.runtime < '2003-11-01'\n>\n> and add an index like (runtime, company_name, company_id)?\n\nGood thought, but even simplifying it to the *lowest* query possible, with\nno table joins, is painfully slow:\n\nexplain analyze SELECT ts.company_id, SUM(ts.bytes) AS total_traffic\n FROM traffic_logs ts\n WHERE month_trunc(ts.runtime) = '2003-10-01'\nGROUP BY ts.company_id;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=31630.84..31693.05 rows=829 width=16) (actual time=14862.71..26552.39 rows=144 loops=1)\n -> Group (cost=31630.84..31672.31 rows=8295 width=16) (actual time=9634.28..20967.07 rows=462198 loops=1)\n -> Sort (cost=31630.84..31651.57 rows=8295 width=16) (actual time=9634.24..12838.73 rows=462198 loops=1)\n Sort Key: company_id\n -> Index Scan using tl_month on traffic_logs ts (cost=0.00..31090.93 rows=8295 width=16) (actual time=0.26..6043.35 rows=462198 loops=1)\n Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 26659.35 msec\n(7 rows)\n\n\n\n-OR-\n\nexplain analyze SELECT ts.company_id, SUM(ts.bytes) AS total_traffic\n FROM traffic_logs ts\n WHERE ts.runtime >= '2003-10-01' AND ts.runtime < '2003-11-01'\nGROUP BY ts.company_id;\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=81044.53..84424.21 rows=45062 width=16) (actual time=13307.52..29274.66 rows=144 loops=1)\n -> Group (cost=81044.53..83297.65 rows=450625 width=16) (actual time=10809.02..-673265.13 rows=462198 loops=1)\n -> Sort (cost=81044.53..82171.09 rows=450625 width=16) (actual time=10808.99..14069.79 rows=462198 loops=1)\n Sort Key: company_id\n -> Seq Scan on traffic_logs ts (cost=0.00..38727.35 rows=450625 width=16) (actual time=0.07..6801.92 rows=462198 loops=1)\n Filter: ((runtime >= '2003-10-01 00:00:00'::timestamp without time zone) AND (runtime < '2003-11-01 00:00:00'::timestamp without time zone))\n Total runtime: 29385.97 msec\n(7 rows)\n\n\nJust as a side note, just doing a straight scan for the records, with no\nSUM()/GROUP BY involved, with the month_trunc() index is still >8k msec:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tl_month on traffic_logs ts (cost=0.00..31096.36 rows=8297 width=16) (actual time=0.96..5432.93 rows=462198 loops=1)\n Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 8092.88 msec\n(3 rows)\n\nand without the index, >15k msec:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on traffic_logs ts (cost=0.00..38719.55 rows=8297 width=16) (actual time=0.11..11354.45 rows=462198 loops=1)\n Filter: (date_trunc('month'::text, runtime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 15353.57 msec\n(3 rows)\n\nso the 
GROUP BY is affecting the overall, but even without it, its still\ntaking a helluva long time ...\n\nI'm going to modify my load script so that it dumps monthly totals to\ntraffic_logs, and 'details' to a schema.traffic_logs table ... I don't\nneed the 'per day totals' at the top level at all, only speed ... the 'per\nday totals' are only required at the 'per client' level, and by moving the\n'per day' into a client schema will shrink the table significantly ...\n\nIf it wasn't for trying to pull in that 'whole month' summary, it would be\nfine :(\n", "msg_date": "Tue, 11 Nov 2003 14:38:04 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n\n> On Tue, 11 Nov 2003, Greg Stark wrote:\n> \n> > Actually you might be able to get the same effect using function indexes\n> > like:\n> >\n> > create index i on traffic_log (month_trunc(runtime), company_id)\n> \n> had actually thought of that one ... is it something that is only\n> available in v7.4?\n\nHum, I thought you could do simple functional indexes like that in 7.3, but\nperhaps only single-column indexes.\n\nIn any case, given your situation I would seriously consider putting a\n\"month\" integer column on your table anyways. Then your index would be a\nsimple (month, company_id) index.\n\n-- \ngreg\n\n", "msg_date": "11 Nov 2003 14:51:22 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "On 11 Nov 2003, Greg Stark wrote:\n\n> \"Marc G. Fournier\" <[email protected]> writes:\n> \n> > On Tue, 11 Nov 2003, Greg Stark wrote:\n> > \n> > > Actually you might be able to get the same effect using function indexes\n> > > like:\n> > >\n> > > create index i on traffic_log (month_trunc(runtime), company_id)\n> > \n> > had actually thought of that one ... is it something that is only\n> > available in v7.4?\n> \n> Hum, I thought you could do simple functional indexes like that in 7.3, but\n> perhaps only single-column indexes.\n> \n> In any case, given your situation I would seriously consider putting a\n> \"month\" integer column on your table anyways. Then your index would be a\n> simple (month, company_id) index.\n\nIn 7.3 and before, you had to use only column names as inputs, so you \ncould cheat:\n\nalter table test add alp int;\nalter table test add omg int;\nupdate test set alp=0;\nupdate test set omg=13;\n\nand then create a functional index:\n\ncreate index test_xy on test (substr(info,alp,omg));\n\nselect * from test where substr(info,alp,omg)=='abcd';\n\n\n\n\n", "msg_date": "Tue, 11 Nov 2003 14:25:06 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\n\"Marc G. Fournier\" <[email protected]> writes:\n\n> Just as a side note, just doing a straight scan for the records, with no\n> SUM()/GROUP BY involved, with the month_trunc() index is still >8k msec:\n\nWell so the problem isn't the query at all, you just have too much data to\nmassage online. 
You can preprocess the data offline into a more managable\namount of data for your online reports.\n\nWhat I used to do for a similar situation was to do hourly queries sort of\nlike this:\n\ninsert into data_aggregate (day, hour, company_id, total_bytes)\n (select trunc(now(),'day'), trunc(now(), 'hour'), company_id, sum(bytes)\n from raw_data\n where time between trunc(now(),'hour') and trunc(now(),'hour')+'1 hour'::interval\n group by company_id\n )\n\n[this was actually on oracle and the data looked kind of different, i'm making\nthis up as i go along]\n\nThen later the reports could run quickly based on data_aggregate instead of\nslowly based on the much larger data set accumulated by the minute. Once I had\nthis schema set up it was easy to follow it for all of the rapidly growing\ndata tables.\n\nNow in my situation I had thousands of records accumulating per second, so\nhourly was already a big win. I originally chose hourly because I thought I\nmight want time-of-day reports but that never panned out. On the other hand it\nwas a win when the system broke once because I could easily see that and fix\nit before midnight when it would have actually mattered. Perhaps in your\nsituation you would want daily aggregates or something else.\n\nOne of the other advantages of these aggregate tables was that we could purge\nthe old data much sooner with much less resistance from the business. Since\nthe reports were all still available and a lot of ad-hoc queries could still\nbe done without the raw data anyways.\n\nAlternatively you can just give up on online reports. Eventually you'll have\nsome query that takes way more than 8s anyways. You can pregenerate the entire\nreport as a batch job instead. Either send it off as a nightly e-mail, store\nit as an html or csv file for the web server, or (my favourite) store the data\nfor the report as an sql table and then have multiple front-ends that do a\nsimple \"select *\" to pull the data and format it.\n\n-- \ngreg\n\n", "msg_date": "12 Nov 2003 00:52:44 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: *very* slow query to summarize data for a month ..." }, { "msg_contents": "\n\nOn Wed, 12 Nov 2003, Greg Stark wrote:\n\n>\n> \"Marc G. Fournier\" <[email protected]> writes:\n>\n> > Just as a side note, just doing a straight scan for the records, with no\n> > SUM()/GROUP BY involved, with the month_trunc() index is still >8k msec:\n>\n> One of the other advantages of these aggregate tables was that we could\n> purge the old data much sooner with much less resistance from the\n> business. Since the reports were all still available and a lot of ad-hoc\n> queries could still be done without the raw data anyways.\n\nActually, what I've done is do this at the 'load stage' ... but same\nconcept ...\n\n", "msg_date": "Wed, 12 Nov 2003 12:08:56 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." } ]
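The pre-aggregation Greg describes above is essentially what Marc ends up doing in his load script. As a rough sketch only — the summary table name, its layout, and the monthly granularity are assumptions for illustration, not something posted in the thread — the rollup against the schema discussed here could look like:

-- Hypothetical monthly summary table; name and layout are illustrative.
CREATE TABLE traffic_logs_monthly (
    company_id  bigint,
    month       timestamp without time zone,
    total_bytes bigint,
    PRIMARY KEY (company_id, month)
);

-- Run from the load script (or cron) once a month's raw rows are loaded:
INSERT INTO traffic_logs_monthly (company_id, month, total_bytes)
SELECT company_id,
       date_trunc('month', runtime),
       sum(bytes)
  FROM traffic_logs
 WHERE runtime >= '2003-10-01' AND runtime < '2003-11-01'
 GROUP BY company_id, date_trunc('month', runtime);

-- The month-end report then reduces to a cheap join:
SELECT c.company_id, c.company_name, m.total_bytes
  FROM company c
  JOIN traffic_logs_monthly m ON m.company_id = c.company_id
 WHERE m.month = '2003-10-01';

The raw per-day detail can then live in the per-client schemas Marc mentions and be purged on a much shorter schedule.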
[ { "msg_contents": "\nhere's the URL:\nhttp://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php\n\nPatrick Hatcher\nMacys.Com\nLegacy Integration Developer\n415-422-1610 office\nHatcherPT - AIM\n\n\n \n Patrick \n Hatcher/MCOM/FDD \n To \n 11/10/2003 12:31 PM \"Marc G. Fournier\" \n <[email protected]>@FDS-NOTES \n cc \n [email protected], \n [email protected] \n rg \n Subject \n Re: [PERFORM] *very* slow query to \n summarize data for a month ... \n (Document link: Patrick Hatcher) \n \n \n \n \n \n \n\n\n\nDo you have an index on ts.bytes? Josh had suggested this and after I put\nit on my summed fields, I saw a speed increase. I can't remember the\narticle was that Josh had written about index usage, but maybe he'll chime\nin and supply the URL for his article.\nhth\n\nPatrick Hatcher\n\n\n\n \n \"Marc G. Fournier\" \n <scrappy@postgresql \n .org> To \n Sent by: [email protected] \n pgsql-performance-o cc \n [email protected] \n Subject \n [PERFORM] *very* slow query to \n 11/10/2003 12:18 PM summarize data for a month ... \n \n \n \n \n \n \n\n\n\n\n\nTable structure is simple:\n\nCREATE TABLE traffic_logs (\n company_id bigint,\n ip_id bigint,\n port integer,\n bytes bigint,\n runtime timestamp without time zone\n);\n\nruntime is 'day of month' ...\n\nI need to summarize the month, per company, with a query as:\n\nexplain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS\ntotal_traffic\n FROM company c, traffic_logs ts\n WHERE c.company_id = ts.company_id\n AND month_trunc(ts.runtime) = '2003-10-01'\nGROUP BY company_name,ts.company_id;\n\nand the explain looks like:\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual\ntime=32983.36..47586.17 rows=144 loops=1)\n -> Group (cost=32000.94..32062.54 rows=8213 width=41) (actual\ntime=32957.40..42817.88 rows=462198 loops=1)\n -> Sort (cost=32000.94..32021.47 rows=8213 width=41) (actual\ntime=32957.38..36261.31 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=31321.45..31466.92 rows=8213 width=41)\n(actual time=13983.07..22642.14 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25)\n(actual time=5.52..7.40 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52\nrows=352 width=25) (actual time=0.02..2.78 rows=352 loops=1)\n -> Sort (cost=31297.04..31317.57 rows=8213 width=16)\n(actual time=13977.49..16794.41 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Index Scan using tl_month on traffic_logs ts\n(cost=0.00..30763.02 rows=8213 width=16) (actual time=0.29..5562.25\nrows=462198 loops=1)\n Index Cond: (month_trunc(runtime)\n= '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 47587.82 msec\n(14 rows)\n\nthe problem is that we're only taking a few months worth of data, so I\ndon't think there is much of a way of 'improve performance' on this, but\nfigured I'd ask quickly before I do something rash ...\n\nNote that without the month_trunc() index, the Total runtime more then\ndoubles:\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Aggregate (cost=39578.63..39660.76 rows=821 width=41) (actual\ntime=87805.47..101251.35 rows=144 loops=1)\n -> Group 
(cost=39578.63..39640.23 rows=8213 width=41) (actual\ntime=87779.56..96824.56 rows=462198 loops=1)\n -> Sort (cost=39578.63..39599.17 rows=8213 width=41) (actual\ntime=87779.52..90781.48 rows=462198 loops=1)\n Sort Key: c.company_name, ts.company_id\n -> Merge Join (cost=38899.14..39044.62 rows=8213 width=41)\n(actual time=64073.98..72783.68 rows=462198 loops=1)\n Merge Cond: (\"outer\".company_id = \"inner\".company_id)\n -> Sort (cost=24.41..25.29 rows=352 width=25)\n(actual time=64.66..66.55 rows=348 loops=1)\n Sort Key: c.company_id\n -> Seq Scan on company c (cost=0.00..9.52\nrows=352 width=25) (actual time=1.76..61.70 rows=352 loops=1)\n -> Sort (cost=38874.73..38895.27 rows=8213 width=16)\n(actual time=64009.26..66860.71 rows=462198 loops=1)\n Sort Key: ts.company_id\n -> Seq Scan on traffic_logs ts\n(cost=0.00..38340.72 rows=8213 width=16) (actual time=5.02..-645982.04\nrows=462198 loops=1)\n Filter: (date_trunc('month'::text,\nruntime) = '2003-10-01 00:00:00'::timestamp without time zone)\n Total runtime: 101277.17 msec\n(14 rows)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n\n\n", "msg_date": "Mon, 10 Nov 2003 12:32:12 -0800", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: *very* slow query to summarize data for a month ..." } ]
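For anyone trying to reproduce the two plans above: the thread never shows how the tl_month index is built. A plausible 7.3-era setup — the body of month_trunc() is an assumption, needed because 7.3 functional indexes only accept an IMMUTABLE function applied to plain column names, so date_trunc('month', runtime) cannot be indexed directly — would be:

-- Assumed wrapper function; it must be IMMUTABLE to be usable in an index.
CREATE OR REPLACE FUNCTION month_trunc(timestamp without time zone)
RETURNS timestamp without time zone AS
    'SELECT date_trunc(''month'', $1);'
LANGUAGE sql IMMUTABLE;

CREATE INDEX tl_month ON traffic_logs (month_trunc(runtime));

As Tom notes earlier in these threads, 7.3 keeps no statistics on the indexed expression itself, which is why the row estimate (8213) stays far from the actual 462198 rows whether or not the index is used.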
[ { "msg_contents": "After a long battle with technology,[email protected] (Rajesh Kumar Mallah), an earthling, wrote:\n> the error mentioned in first email has been overcome\n> by running osdb on the same machine hosting the DB server.\n\nYes, it seems unrealistic to try to run the \"client\" on a separate\nhost from the database. \n\nI got the osdb benchmark running last week, and had to separate client\nfrom server. I had to jump through a fair number of hoops including\ncopying data files over to the server. The benchmark software needs a\nbit more work...\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/lsf.html\nNobody can fix the economy. Nobody can be trusted with their finger\non the button. Nobody's perfect. VOTE FOR NOBODY.\n", "msg_date": "Tue, 11 Nov 2003 13:08:08 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions for benchmarking 7.4RC2 against 7.3" }, { "msg_contents": "Rajesh, Chris,\n\n> I got the osdb benchmark running last week, and had to separate client\n> from server. I had to jump through a fair number of hoops including\n> copying data files over to the server. The benchmark software needs a\n> bit more work...\n\nWhat about the OSDL's TPC-derivative benchmarks? That's a much more \nrespected database test, and probably less buggy than OSDB.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Nov 2003 10:25:28 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions for benchmarking 7.4RC2 against 7.3" }, { "msg_contents": "\nHi,\n\nI plan to put 7.4-RC2 in our production servers in next few hours.\n\nSince the hardware config & the performance related GUCs parameter\nare going to remain the same i am interested in seeing the performance\nimprovements in 7.4 as compared 7.3 .\n\nFor this i plan to use the OSDB 0.14 and compare the results for both the\ncases.\n\nDoes any one has suggestions for comparing 7.4 against 7.3 ?\nSince i am using OSDB for second time only any tips/guidance\non usage of that is also appreciated.\n\n\n\nH/W config:\n\nCPU: 4 X Intel(R) Xeon(TM) CPU 2.00GHz\nMEM : 2 GB\nI/O config : PGDATA on 10000 RPM Ultra160 scsi , pg_xlog on a similar\nseperate SCSI\n\nGUC:\nshared_buffers = 10000\nmax_fsm_relations = 5000\nmax_fsm_pages = 55099264\nsort_mem = 16384\nvacuum_mem = 8192\n\nAll other performance related parameter have default\nvalue eg:\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n\n\nBTW i get following error at the moment:\n-----------------------------------------\n/usr/local/bin/osdb-pg-ui --postgresql=no_hash_index\n\"osdb\"\n\"Invoked: /usr/local/bin/osdb-pg-ui --postgresql=no_hash_index\"\n\n create_tables() 0.78 seconds return value = 0\n load() 1.02 seconds return value = 0\n create_idx_uniques_key_bt() 0.64 seconds return value = 0\n create_idx_updates_key_bt() 0.61 seconds return value = 0\n create_idx_hundred_key_bt() 0.61 seconds return value = 0\n create_idx_tenpct_key_bt() 0.62 seconds return value = 0\n create_idx_tenpct_key_code_bt() 0.45 seconds return value = 0\n create_idx_tiny_key_bt() 0.46 seconds return value = 0\n create_idx_tenpct_int_bt() 0.46 seconds return value = 0\n create_idx_tenpct_signed_bt() 0.45 seconds return 
value = 0\n create_idx_uniques_code_h() 0.46 seconds return value = 0\n create_idx_tenpct_double_bt() 0.46 seconds return value = 0\n create_idx_updates_decim_bt() 0.45 seconds return value = 0\n create_idx_tenpct_float_bt() 0.46 seconds return value = 0\n create_idx_updates_int_bt() 0.46 seconds return value = 0\n create_idx_tenpct_decim_bt() 0.46 seconds return value = 0\n create_idx_hundred_code_h() 0.45 seconds return value = 0\n create_idx_tenpct_name_h() 0.46 seconds return value = 0\n create_idx_updates_code_h() 0.46 seconds return value = 0\n create_idx_tenpct_code_h() 0.45 seconds return value = 0\n create_idx_updates_double_bt() 0.46 seconds return value = 0\n create_idx_hundred_foreign() 0.41 seconds return value = 0\n populateDataBase() 11.54 seconds return value = 0\n\nError in test Counting tuples at (6746)osdb.c:294:\n... empty database -- empty results\nperror() reports: Resource temporarily unavailable\n\nsomeone sighup'd the parent\n\nAny clue?\n\n------------------------------------------\n\n\nRegards\nMallah.\n\n\n\n\n\n", "msg_date": "Wed, 12 Nov 2003 20:17:26 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Suggestions for benchmarking 7.4RC2 against 7.3" }, { "msg_contents": "\n\nthe error mentioned in first email has been overcome\nby running osdb on the same machine hosting the DB server.\n\nregds\nmallah.\n\nRajesh Kumar Mallah wrote:\n\n>\n> Hi,\n>\n> I plan to put 7.4-RC2 in our production servers in next few hours.\n>\n> Since the hardware config & the performance related GUCs parameter\n> are going to remain the same i am interested in seeing the performance\n> improvements in 7.4 as compared 7.3 .\n>\n> For this i plan to use the OSDB 0.14 and compare the results for both \n> the\n> cases.\n>\n> Does any one has suggestions for comparing 7.4 against 7.3 ?\n> Since i am using OSDB for second time only any tips/guidance\n> on usage of that is also appreciated.\n>\n>\n>\n> H/W config:\n>\n> CPU: 4 X Intel(R) Xeon(TM) CPU 2.00GHz\n> MEM : 2 GB\n> I/O config : PGDATA on 10000 RPM Ultra160 scsi , pg_xlog on a similar\n> seperate SCSI\n>\n> GUC:\n> shared_buffers = 10000\n> max_fsm_relations = 5000\n> max_fsm_pages = 55099264\n> sort_mem = 16384\n> vacuum_mem = 8192\n>\n> All other performance related parameter have default\n> value eg:\n>\n> #effective_cache_size = 1000 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch \n> cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n>\n>\n> BTW i get following error at the moment:\n> -----------------------------------------\n> /usr/local/bin/osdb-pg-ui --postgresql=no_hash_index\n> \"osdb\"\n> \"Invoked: /usr/local/bin/osdb-pg-ui --postgresql=no_hash_index\"\n>\n> create_tables() 0.78 seconds return value = 0\n> load() 1.02 seconds return value = 0\n> create_idx_uniques_key_bt() 0.64 seconds return value = 0\n> create_idx_updates_key_bt() 0.61 seconds return value = 0\n> create_idx_hundred_key_bt() 0.61 seconds return value = 0\n> create_idx_tenpct_key_bt() 0.62 seconds return value = 0\n> create_idx_tenpct_key_code_bt() 0.45 seconds return value = 0\n> create_idx_tiny_key_bt() 0.46 seconds return value = 0\n> create_idx_tenpct_int_bt() 0.46 seconds return value = 0\n> create_idx_tenpct_signed_bt() 0.45 seconds return value = 0\n> create_idx_uniques_code_h() 0.46 seconds return value = 0\n> create_idx_tenpct_double_bt() 0.46 seconds return value = 0\n> 
create_idx_updates_decim_bt() 0.45 seconds return value = 0\n> create_idx_tenpct_float_bt() 0.46 seconds return value = 0\n> create_idx_updates_int_bt() 0.46 seconds return value = 0\n> create_idx_tenpct_decim_bt() 0.46 seconds return value = 0\n> create_idx_hundred_code_h() 0.45 seconds return value = 0\n> create_idx_tenpct_name_h() 0.46 seconds return value = 0\n> create_idx_updates_code_h() 0.46 seconds return value = 0\n> create_idx_tenpct_code_h() 0.45 seconds return value = 0\n> create_idx_updates_double_bt() 0.46 seconds return value = 0\n> create_idx_hundred_foreign() 0.41 seconds return value = 0\n> populateDataBase() 11.54 seconds return value = 0\n>\n> Error in test Counting tuples at (6746)osdb.c:294:\n> ... empty database -- empty results\n> perror() reports: Resource temporarily unavailable\n>\n> someone sighup'd the parent\n>\n> Any clue?\n>\n> ------------------------------------------\n>\n>\n> Regards\n> Mallah.\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n", "msg_date": "Wed, 12 Nov 2003 20:34:53 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions for benchmarking 7.4RC2 against 7.3" }, { "msg_contents": "Josh Berkus wrote:\n\n>Rajesh, Chris,\n>\n> \n>\n>>I got the osdb benchmark running last week, and had to separate client\n>>from server. I had to jump through a fair number of hoops including\n>>copying data files over to the server. The benchmark software needs a\n>>bit more work...\n>> \n>>\n>\n>What about the OSDL's TPC-derivative benchmarks? That's a much more \n>respected database test, and probably less buggy than OSDB.\n>\n> \n>\nHmm... really sorry! my\npg_dump | psql is almost finishing in next 20 mins.\n\ncreating indexes at the moment :)\n\nReally sorry can't rollback and delay anymore becoz my\nwebsite is *unavailable* for past 30 mins.\n\nI ran OSDB .15 version and pg_bench .\n\n\nRegds\nMallah.\n\n\n\n\n\n\n\n\n\n\n\nJosh Berkus wrote:\n\nRajesh, Chris,\n\n \n\nI got the osdb benchmark running last week, and had to separate client\nfrom server. I had to jump through a fair number of hoops including\ncopying data files over to the server. The benchmark software needs a\nbit more work...\n \n\n\nWhat about the OSDL's TPC-derivative benchmarks? That's a much more \nrespected database test, and probably less buggy than OSDB.\n\n \n\nHmm... really sorry! my \npg_dump | psql is almost finishing in next 20 mins. \n\ncreating indexes at the moment :)\n\nReally sorry can't rollback and delay anymore becoz my\nwebsite is *unavailable* for past 30 mins.\n\nI ran OSDB .15 version  and pg_bench .\n\n\nRegds\nMallah.", "msg_date": "Thu, 13 Nov 2003 01:46:50 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions for benchmarking 7.4RC2 against 7.3" }, { "msg_contents": "\n\n\n\n\n\n\n\nRC2 is running in production without any apparent problems\ntill now.  Well its difficult to say at the moment how much speed\ngain is there unless the heavy duty batch SQL scripts are run by\ncron. 
\n\nCount(*) and group by on large tables are significantly (5x) faster\nand better error reporting has made it easier to spot the faulty data.\neg in fkey violation.\n\nWill post the OSDB .15 versions' results on 7.3 & 7.4 soon.\n\nRegds\nMallah.\n\nChristopher Browne wrote:\n\nAfter a long battle with technology,[email protected] (Rajesh Kumar Mallah), an earthling, wrote:\n \n\nthe error mentioned in first email has been overcome\nby running osdb on the same machine hosting the DB server.\n \n\n\nYes, it seems unrealistic to try to run the \"client\" on a separate\nhost from the database. \n\nI got the osdb benchmark running last week, and had to separate client\nfrom server. I had to jump through a fair number of hoops including\ncopying data files over to the server. The benchmark software needs a\nbit more work...\n \n\n\n\n\n", "msg_date": "Thu, 13 Nov 2003 02:55:20 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions for benchmarking 7.4RC2 against 7.3" } ]
[ { "msg_contents": "Hi-\n\nI have a query that takes too long. I haven't been able to come up with any\nideas for speeding it up, so I'm seeking some input from the list.\n\nI'm using version 7.3.2\n\nI have three tables:\n\ncase_data (1,947,386 rows)\nactor (3,385,669 rows)\nactor_case_assignment (8,668,650 rows)\n\nAs the names imply, actor_case_assignment contains records that assign an\nactor to a case. Actors such as attorneys or judges may have many cases,\nwhile the average actor (we hope) only has one.\n\nWhat I'm trying to do is link these tables to get back a single row per\nactor that shows the actor's name, the number of cases that actor is\nassigned to, and if they only have one case, I want the public_id for that\ncase. This means I have to do a group by to get the case count, but I'm then\nforced to use an aggregate function like max on the other fields.\n\nAll of the fields ending in \"_id\" have unique indexes, and\nactor_full_name_uppercase is indexed.\n\nHere's the select:\n\n select\n actor.actor_id,\n max(actor.actor_full_name),\n max(case_data.case_public_id),\n max(case_data.case_id),\n count(case_data.case_id) as case_count\n from\n actor,\n actor_case_assignment,\n case_data\n where\n actor.actor_full_name_uppercase like upper('sanders%')\n and actor.actor_id = actor_case_assignment.actor_id\n and case_data.case_id = actor_case_assignment.case_id\n group by\n actor.actor_id\n order by\n max(actor.actor_full_name),\n case_count desc\n limit\n 1000;\n\n\nHere's the explain analyze:\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n Limit (cost=2214.71..2214.72 rows=1 width=115) (actual\ntime=120034.61..120035.67 rows=1000 loops=1)\n -> Sort (cost=2214.71..2214.72 rows=1 width=115) (actual\ntime=120034.60..120034.98 rows=1001 loops=1)\n Sort Key: max((actor.actor_full_name)::text),\ncount(case_data.case_id)\n -> Aggregate (cost=2214.67..2214.70 rows=1 width=115) (actual\ntime=119962.80..120011.49 rows=3456 loops=1)\n -> Group (cost=2214.67..2214.68 rows=2 width=115) (actual\ntime=119962.76..119987.04 rows=5879 loops=1)\n -> Sort (cost=2214.67..2214.68 rows=2 width=115)\n(actual time=119962.74..119965.09 rows=5879 loops=1)\n Sort Key: actor.actor_id\n -> Nested Loop (cost=0.00..2214.66 rows=2\nwidth=115) (actual time=59.05..119929.71 rows=5879 loops=1)\n -> Nested Loop (cost=0.00..2205.26 rows=3\nwidth=76) (actual time=51.46..66089.04 rows=5882 loops=1)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n(actual time=37.62..677.44 rows=3501 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Index Scan using\nactor_case_assignment_actor_id on actor_case_assignment (cost=0.00..2165.93\nrows=2666 width=34) (actual time=16.37..18.67 rows=2 loops=3501)\n Index Cond: (\"outer\".actor_id =\nactor_case_assignment.actor_id)\n -> Index Scan using case_data_case_id on\ncase_data (cost=0.00..3.66 rows=1 width=39) (actual time=9.14..9.15 rows=1\nloops=5882)\n Index Cond: (case_data.case_id =\n\"outer\".case_id)\n Total runtime: 120038.60 msec\n(17 rows)\n\n\nAny ideas?\n\nThanks!\n -Nick\n\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 
1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n", "msg_date": "Tue, 11 Nov 2003 17:26:48 -0500", "msg_from": "\"Nick Fankhauser - Doxpop\" <[email protected]>", "msg_from_op": true, "msg_subject": "Seeking help with a query that take too long" } ]
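One restructuring that follows directly from the problem statement above — aggregate the assignments per actor first, and look up a case_public_id only for actors with exactly one case — is sketched here. It is untested against this schema; the column names are taken on faith from the post:

SELECT y.actor_id,
       y.actor_full_name,
       y.case_count,
       CASE WHEN y.case_count = 1
            THEN (SELECT cd.case_public_id
                    FROM case_data cd
                   WHERE cd.case_id = y.one_case_id)
       END AS case_public_id
  FROM (SELECT a.actor_id,
               max(a.actor_full_name) AS actor_full_name,
               count(aca.case_id)     AS case_count,
               max(aca.case_id)       AS one_case_id
          FROM actor a
          JOIN actor_case_assignment aca ON aca.actor_id = a.actor_id
         WHERE a.actor_full_name_uppercase LIKE upper('sanders%')
         GROUP BY a.actor_id) y
 ORDER BY y.actor_full_name, y.case_count DESC
 LIMIT 1000;

The intent is that case_data is no longer probed once per assignment row (roughly 5,900 index scans in the plan above) but only once per single-case actor.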
[ { "msg_contents": "Dear Gurus,\n\nWe are planning to add more db server hardware for the apps. The\nquestion is, what makes more sense regarding\nperformance/scalability/price of the hardware...\n\nThere are a couple of apps, currently on a dual-cpu Dell server. The\nusage of the apps is going to increase quite a lot, and considering the\nprices, we are looking at the following options:\n\nOption 1:\n==========\nHave each app on a separate db server (looking at 4 of these). The\nserver being a PowerEdge 2650, Dual 2.8GHz/512KB XEONS, 2GB RAM, PERC-3\nRAID-5, split back plane (2+3), and 5 x 36GB HDDs (10K RPM).\n\nNote: These servers are 1/3 the price of the Quad-cpu 6650 server.\n\nOption 2:\n==========\nHave two to three apps dbs hosted on a single server. The server being a\nPowerEdge 6650, 4 x 2GHz/1MB XEONS, 8GB RAM, PERC-3 RAID-5, split back\nplane (2+3), and 5 x 36GB HDDs (10K RPM).\n\nNote: This server is 3 times more the price of the option 1.\n\n\n\nAppreciate your guidance.\n\nThanks,\nAnjan\n", "msg_date": "Tue, 11 Nov 2003 17:57:15 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server Configs" } ]
[ { "msg_contents": "We are getting ready to spec out a new machine and are wondering about\nthe wisdom of buying a quad versus a dual processor machine. Seing as\nhow postgres in not a threaded application, and this server will only be\nused for log/transaction analysis (it will only ever have a few large\nqueries running). Is there any performance to be gained, and if so is\nit worth the large cost? Any thoughts/experience are much\nappreciated...\n\n\n\n\n-- \nChris Field\[email protected]\nAffinity Solutions Inc.\n386 Park Avenue South\nSuite 1209\nNew York, NY 10016\n(212) 685-8748 ext. 32", "msg_date": "Tue, 11 Nov 2003 18:32:47 -0500", "msg_from": "Chris Field <[email protected]>", "msg_from_op": true, "msg_subject": "Value of Quad vs. Dual Processor machine" }, { "msg_contents": "On Tue, 2003-11-11 at 18:32, Chris Field wrote:\n> We are getting ready to spec out a new machine and are wondering about\n> the wisdom of buying a quad versus a dual processor machine. Seing as\n> how postgres in not a threaded application, and this server will only be\n> used for log/transaction analysis (it will only ever have a few large\n> queries running). Is there any performance to be gained, and if so is\n> it worth the large cost? Any thoughts/experience are much\n> appreciated...\n\nSince you're asking the question, I'll assume you don't have CPU\nintensive queries or monstrous loads.\n\nI'd probably invest in a Quad system with 2 chips in it (2 empty\nsockets) and put the difference in funds into a few extra GB of Ram or\nimproved IO.\n\nIn 6 months or a year, if you start doing longer or more complex\nqueries, toss in the other 2 chips. So long as you don't hit a memory\nlimit, it'll be fine.\n\n", "msg_date": "Tue, 11 Nov 2003 18:51:28 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "On Tue, 11 Nov 2003, Rod Taylor wrote:\n\n> On Tue, 2003-11-11 at 18:32, Chris Field wrote:\n> > We are getting ready to spec out a new machine and are wondering about\n> > the wisdom of buying a quad versus a dual processor machine. Seing as\n> > how postgres in not a threaded application, and this server will only be\n> > used for log/transaction analysis (it will only ever have a few large\n> > queries running). Is there any performance to be gained, and if so is\n> > it worth the large cost? Any thoughts/experience are much\n> > appreciated...\n> \n> Since you're asking the question, I'll assume you don't have CPU\n> intensive queries or monstrous loads.\n> \n> I'd probably invest in a Quad system with 2 chips in it (2 empty\n> sockets) and put the difference in funds into a few extra GB of Ram or\n> improved IO.\n> \n> In 6 months or a year, if you start doing longer or more complex\n> queries, toss in the other 2 chips. So long as you don't hit a memory\n> limit, it'll be fine.\n\nNote that you want to carefully look at the difference in cost of the \nmotherboard versus the CPUs. It's often the motherboard that raises the \ncost, not the CPUs so much. 
Although with Xeons, the CPUs are not cheap.\n\nThe second issue is that Intel (and AMD probably) only guarantee proper \nperformance from chips int he same batch, so you may wind up replacing the \ntwo working CPUs with two new ones to go with the other two you'll be \nbuying, to make sure that they work together.\n\nMy guess is that more CPUs aren't gonna help this problem a lot, so look \nmore at fast RAM and lots of it, as well as a fast I/O subsystem.\n\n2 CPUs should be plenty.\n\n", "msg_date": "Tue, 11 Nov 2003 17:40:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "On 2003-11-11T17:40:14-0700, scott.marlowe wrote:\n> 2 CPUs should be plenty.\n\nfor everyone? No, I must have been thinking of someone else :-)\n\n\n/Allan\n-- \nAllan Wind\nP.O. Box 2022\nWoburn, MA 01888-0022\nUSA", "msg_date": "Tue, 11 Nov 2003 20:02:40 -0500", "msg_from": "[email protected] (Allan Wind)", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "> On Tue, 11 Nov 2003, Rod Taylor wrote:\n>\n>> On Tue, 2003-11-11 at 18:32, Chris Field wrote:\n>> > We are getting ready to spec out a new machine and are wondering about\n>> > the wisdom of buying a quad versus a dual processor machine. Seing as\n>> > how postgres in not a threaded application, and this server will only\n>> be\n>> > used for log/transaction analysis (it will only ever have a few large\n>> > queries running). Is there any performance to be gained, and if so is\n>> > it worth the large cost? Any thoughts/experience are much\n>> > appreciated...\n>>\n>> Since you're asking the question, I'll assume you don't have CPU\n>> intensive queries or monstrous loads.\n>>\n>> I'd probably invest in a Quad system with 2 chips in it (2 empty\n>> sockets) and put the difference in funds into a few extra GB of Ram or\n>> improved IO.\n>>\n>> In 6 months or a year, if you start doing longer or more complex\n>> queries, toss in the other 2 chips. So long as you don't hit a memory\n>> limit, it'll be fine.\n>\n> Note that you want to carefully look at the difference in cost of the\n> motherboard versus the CPUs. It's often the motherboard that raises the\n> cost, not the CPUs so much. Although with Xeons, the CPUs are not cheap.\n>\n> The second issue is that Intel (and AMD probably) only guarantee proper\n> performance from chips int he same batch, so you may wind up replacing the\n> two working CPUs with two new ones to go with the other two you'll be\n> buying, to make sure that they work together.\n>\n> My guess is that more CPUs aren't gonna help this problem a lot, so look\n> more at fast RAM and lots of it, as well as a fast I/O subsystem.\n>\n> 2 CPUs should be plenty.\nI agree that the additional cpus won't help as much since I haven't found\nany benefits in terms of individual query speed for a quad vs. an smp on\nbenchmarks I've run on test machines I was considering purchasing. Quads\nare also expensive - on similar architectures the quad was 20k vs 7k for\nthe dual.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Tue, 11 Nov 2003 17:11:20 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. 
Dual Processor machine" }, { "msg_contents": "On Tue, 2003-11-11 at 17:32, Chris Field wrote:\n> We are getting ready to spec out a new machine and are wondering about\n> the wisdom of buying a quad versus a dual processor machine. Seing as\n> how postgres in not a threaded application, and this server will only be\n> used for log/transaction analysis (it will only ever have a few large\n> queries running). Is there any performance to be gained, and if so is\n> it worth the large cost? Any thoughts/experience are much\n> appreciated...\n\nXeon or Opteron? The faster Opterons *really* blaze, especially\nin 64-bit mode. As others have said, though, RAM and I/O are most\nimportant.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"As I like to joke, I may have invented it, but Microsoft made it\npopular\"\nDavid Bradley, regarding Ctrl-Alt-Del \n\n", "msg_date": "Tue, 11 Nov 2003 19:24:51 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "we are looking at Xeon, We are currently running it on a quad sun v880\ncompiled to be 64bit and have been getting dreadful performance. I don't\nthink we really have much to gain from going 64bit.\n\n\n----- Original Message ----- \nFrom: \"Ron Johnson\" <[email protected]>\nTo: \"PgSQL Performance ML\" <[email protected]>\nSent: Tuesday, November 11, 2003 8:24 PM\nSubject: Re: [PERFORM] Value of Quad vs. Dual Processor machine\n\n\n> On Tue, 2003-11-11 at 17:32, Chris Field wrote:\n> > We are getting ready to spec out a new machine and are wondering about\n> > the wisdom of buying a quad versus a dual processor machine. Seing as\n> > how postgres in not a threaded application, and this server will only be\n> > used for log/transaction analysis (it will only ever have a few large\n> > queries running). Is there any performance to be gained, and if so is\n> > it worth the large cost? Any thoughts/experience are much\n> > appreciated...\n>\n> Xeon or Opteron? The faster Opterons *really* blaze, especially\n> in 64-bit mode. As others have said, though, RAM and I/O are most\n> important.\n>\n> -- \n> -----------------------------------------------------------------\n> Ron Johnson, Jr. [email protected]\n> Jefferson, LA USA\n>\n> \"As I like to joke, I may have invented it, but Microsoft made it\n> popular\"\n> David Bradley, regarding Ctrl-Alt-Del\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n>\n\n", "msg_date": "Tue, 11 Nov 2003 21:13:19 -0500", "msg_from": "\"Chris Field\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "One thing I learned after spending about a week comparing the Athlon (2\nghz, 333 mhz frontside bus) and Xeon (2.4 ghz, 266 mhz frontside bus)\nplatforms was that on average the select queries I was benchmarking ran\n30% faster on the Athlon (this was with data cached in memory so may not\napply to the larger data sets where I/O is the limiting factor.)\n\nI benchmarked against the Opteron 244 when it came out and it came in\nabout the same as the Athlon (makes sense since both were 333 mhz\nmemory). The results within +/- 5-10% that of the Athlon. 
From testing\nagainst a couple of other machines I noticed that the memory bus speeds\nwere almost directly proportional to the query times under these\nconditions.\n\nNot sure how these compare against the quad sun but the AMD chips\nreturned the select queries faster than the Xeons from the informal\ninvestigations I did. Definitely try it before you buy it if possible.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Chris Field\nSent: Tuesday, November 11, 2003 6:13 PM\nTo: Ron Johnson; PgSQL Performance ML\nSubject: Re: [PERFORM] Value of Quad vs. Dual Processor machine\n\n\nwe are looking at Xeon, We are currently running it on a quad sun v880\ncompiled to be 64bit and have been getting dreadful performance. I\ndon't think we really have much to gain from going 64bit.\n\n\n----- Original Message ----- \nFrom: \"Ron Johnson\" <[email protected]>\nTo: \"PgSQL Performance ML\" <[email protected]>\nSent: Tuesday, November 11, 2003 8:24 PM\nSubject: Re: [PERFORM] Value of Quad vs. Dual Processor machine\n\n\n> On Tue, 2003-11-11 at 17:32, Chris Field wrote:\n> > We are getting ready to spec out a new machine and are wondering \n> > about the wisdom of buying a quad versus a dual processor machine. \n> > Seing as how postgres in not a threaded application, and this server\n\n> > will only be used for log/transaction analysis (it will only ever \n> > have a few large queries running). Is there any performance to be \n> > gained, and if so is it worth the large cost? Any \n> > thoughts/experience are much appreciated...\n>\n> Xeon or Opteron? The faster Opterons *really* blaze, especially in \n> 64-bit mode. As others have said, though, RAM and I/O are most \n> important.\n>\n> --\n> -----------------------------------------------------------------\n> Ron Johnson, Jr. [email protected]\n> Jefferson, LA USA\n>\n> \"As I like to joke, I may have invented it, but Microsoft made it \n> popular\" David Bradley, regarding Ctrl-Alt-Del\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 11 Nov 2003 19:17:40 -0800", "msg_from": "\"Fred Moyer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "Fred Moyer wrote:\n> One thing I learned after spending about a week comparing the Athlon (2\n> ghz, 333 mhz frontside bus) and Xeon (2.4 ghz, 266 mhz frontside bus)\n> platforms was that on average the select queries I was benchmarking ran\n> 30% faster on the Athlon (this was with data cached in memory so may not\n> apply to the larger data sets where I/O is the limiting factor.)\n> \n> I benchmarked against the Opteron 244 when it came out and it came in\n> about the same as the Athlon (makes sense since both were 333 mhz\n> memory). The results within +/- 5-10% that of the Athlon. 
From testing\n> against a couple of other machines I noticed that the memory bus speeds\n> were almost directly proportional to the query times under these\n> conditions.\n\nI remember a posting here about opteron, which essentially said, even if opteron \nworks on par with athlon under few clients, as load increases it scales more \nthan 50% better than athlons.\n\nSo that could be another shot at it.Sorry, no handy URL here.\n\n Shridhar\n\n", "msg_date": "Wed, 12 Nov 2003 14:12:54 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "On Tue, 11 Nov 2003 21:13:19 -0500\n\"Chris Field\" <[email protected]> wrote:\n\n> we are looking at Xeon, We are currently running it on a quad sun v880\n> compiled to be 64bit and have been getting dreadful performance. I\n> don't think we really have much to gain from going 64bit.\n> \n> \nBy chance, are you running 7.3.4 on that sun?\nIf so, try this:\nexport CFLAGS=-02\n./configure\n\nand rebuild PG.\n\nBefore 7.4 PG was build with _no_ optimization on Solaris. \nRecompiling gives __HUGE__ (notice the underscores) performance gains.\n\nAnd onto the dual vs quad.\n\nPG will only use 1 cpu / connection / query. \n\nSo if your machine iwll have 1-2 queries running at a time those other 2\nproc's will sit around idling. However if you are going to have a bunch\ngoing, 4 cpus will be most useful. One of hte nicest things to do for\nPG is more ram and fast IO. It really loves those things.\n\ngood luck\n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Wed, 12 Nov 2003 09:28:07 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "As another post pointed out, you need to set cflags to get optimization \nunder Solaris on that flavor of Postgresql.\n\nAlso, Postgresql tends to get its best performance from the free unixes, \nLinux and BSD. those are available for Sun Sparcs, but postgresql in 64 \nbit mode on those boxes is still a bit cutting edge.\n\nIt might be worth a try to set up the sun to dual boot to either BSD or \nLinux and test Postgresql under that environment to see how it works and \ncompare it to Sun after you've set the cflags and recompiled.\n\nOn Tue, 11 Nov 2003, Chris Field wrote:\n\n> we are looking at Xeon, We are currently running it on a quad sun v880\n> compiled to be 64bit and have been getting dreadful performance. I don't\n> think we really have much to gain from going 64bit.\n> \n> \n> ----- Original Message ----- \n> From: \"Ron Johnson\" <[email protected]>\n> To: \"PgSQL Performance ML\" <[email protected]>\n> Sent: Tuesday, November 11, 2003 8:24 PM\n> Subject: Re: [PERFORM] Value of Quad vs. Dual Processor machine\n> \n> \n> > On Tue, 2003-11-11 at 17:32, Chris Field wrote:\n> > > We are getting ready to spec out a new machine and are wondering about\n> > > the wisdom of buying a quad versus a dual processor machine. Seing as\n> > > how postgres in not a threaded application, and this server will only be\n> > > used for log/transaction analysis (it will only ever have a few large\n> > > queries running). Is there any performance to be gained, and if so is\n> > > it worth the large cost? Any thoughts/experience are much\n> > > appreciated...\n> >\n> > Xeon or Opteron? The faster Opterons *really* blaze, especially\n> > in 64-bit mode. 
As others have said, though, RAM and I/O are most\n> > important.\n> >\n> > -- \n> > -----------------------------------------------------------------\n> > Ron Johnson, Jr. [email protected]\n> > Jefferson, LA USA\n> >\n> > \"As I like to joke, I may have invented it, but Microsoft made it\n> > popular\"\n> > David Bradley, regarding Ctrl-Alt-Del\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faqs/FAQ.html\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n> \n\n", "msg_date": "Wed, 12 Nov 2003 08:53:56 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" }, { "msg_contents": "On Wed, 2003-11-12 at 09:28, Jeff wrote:\n> On Tue, 11 Nov 2003 21:13:19 -0500\n> \"Chris Field\" <[email protected]> wrote:\n> \n> > we are looking at Xeon, We are currently running it on a quad sun v880\n> > compiled to be 64bit and have been getting dreadful performance. I\n> > don't think we really have much to gain from going 64bit.\n> > \n> > \n> By chance, are you running 7.3.4 on that sun?\n> If so, try this:\n> export CFLAGS=-02\n> ./configure\n> \n> and rebuild PG.\n> \n> Before 7.4 PG was build with _no_ optimization on Solaris. \n> Recompiling gives __HUGE__ (notice the underscores) performance gains.\n> \n> And onto the dual vs quad.\n> \n> PG will only use 1 cpu / connection / query. \n> \n> So if your machine iwll have 1-2 queries running at a time those other 2\n> proc's will sit around idling. However if you are going to have a bunch\n> going, 4 cpus will be most useful. One of hte nicest things to do for\n> PG is more ram and fast IO. It really loves those things.\n> \n\nWe've just started kicking around the idea of moving one of our boxes to\na quad-proc machine from a dual. Under normal circumstances the 2\nprocessors handle maybe 200 transactions per second with 90% system\nidle. However we have people who occasionally run historical reports on\nour data, and those reports are fairly CPU intensive. Usually it is not\na problem for the main web system, but when pg_dump is running, that is\nalso cpu intensive, so we end up with two highly cpu intensive items\nrunning on our machine, and we start to notice issues on the main web\nsystem. \n \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "13 Nov 2003 10:52:25 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Value of Quad vs. Dual Processor machine" } ]
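Following up the thread above: Jeff's point that pre-7.4 builds on Solaris came with no compiler optimization makes the rebuild worth spelling out. A minimal sketch, assuming a gcc toolchain and the same configure options as the original build (note the optimization flag is the capital letter O, not a zero):

    export CFLAGS=-O2
    ./configure
    gmake
    gmake install

Only the binaries change, so the existing data directory can be kept; gmake is assumed here because the stock make on Solaris is often not GNU make.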
[ { "msg_contents": "[I originally posted this using the wrong E-Mail account, so a double\nposting may occur if the first message gets released by the moderator later-\nsorry!]\n\nHi-\n\nI have a query that I'm trying to speed up. I haven't been able to come up\nwith any workable ideas for speeding it up, so I'm seeking some input from\nthe list.\n\nI'm using version 7.3.2\n\nI have three tables:\n\ncase_data (1,947,386 rows)\nactor (3,385,669 rows)\nactor_case_assignment (8,668,650 rows)\n\nAs the names imply, actor_case_assignment contains records that assign an\nactor to a case. Actors such as attorneys or judges may have many cases,\nwhile the average actor (we hope) only has one.\n\nWhat I'm trying to do is link these tables to get back a single row per\nactor that shows the actor's name, the number of cases that actor is\nassigned to, and if they only have one case, I want the public_id for that\ncase. This means I have to do a group by to get the case count, but I'm then\nforced to use an aggregate function like max on the other fields.\n\nAll of the fields ending in \"_id\" have unique indexes, and\nactor_full_name_uppercase is indexed. An analyze is done every night & the\ndatabase is fairly stable in it's composition.\n\nHere's the select:\n\n select\n actor.actor_id,\n max(actor.actor_full_name),\n max(case_data.case_public_id),\n max(case_data.case_id),\n count(case_data.case_id) as case_count\n from\n actor,\n actor_case_assignment,\n case_data\n where\n actor.actor_full_name_uppercase like upper('sanders%')\n and actor.actor_id = actor_case_assignment.actor_id\n and case_data.case_id = actor_case_assignment.case_id\n group by\n actor.actor_id\n order by\n max(actor.actor_full_name),\n case_count desc\n limit\n 1000;\n\n\nHere's the explain analyze:\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n Limit (cost=2214.71..2214.72 rows=1 width=115) (actual\ntime=120034.61..120035.67 rows=1000 loops=1)\n -> Sort (cost=2214.71..2214.72 rows=1 width=115) (actual\ntime=120034.60..120034.98 rows=1001 loops=1)\n Sort Key: max((actor.actor_full_name)::text),\ncount(case_data.case_id)\n -> Aggregate (cost=2214.67..2214.70 rows=1 width=115) (actual\ntime=119962.80..120011.49 rows=3456 loops=1)\n -> Group (cost=2214.67..2214.68 rows=2 width=115) (actual\ntime=119962.76..119987.04 rows=5879 loops=1)\n -> Sort (cost=2214.67..2214.68 rows=2 width=115)\n(actual time=119962.74..119965.09 rows=5879 loops=1)\n Sort Key: actor.actor_id\n -> Nested Loop (cost=0.00..2214.66 rows=2\nwidth=115) (actual time=59.05..119929.71 rows=5879 loops=1)\n -> Nested Loop (cost=0.00..2205.26 rows=3\nwidth=76) (actual time=51.46..66089.04 rows=5882 loops=1)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n(actual time=37.62..677.44 rows=3501 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Index Scan using\nactor_case_assignment_actor_id on actor_case_assignment (cost=0.00..2165.93\nrows=2666 width=34) (actual time=16.37..18.67 rows=2 loops=3501)\n Index Cond: (\"outer\".actor_id =\nactor_case_assignment.actor_id)\n -> Index Scan using case_data_case_id on\ncase_data (cost=0.00..3.66 rows=1 width=39) (actual time=9.14..9.15 
rows=1\nloops=5882)\n Index Cond: (case_data.case_id =\n\"outer\".case_id)\n Total runtime: 120038.60 msec\n(17 rows)\n\n\nAny ideas?\n\nThanks!\n -Nick\n\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n", "msg_date": "Wed, 12 Nov 2003 08:34:50 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Seeking help with a query that takes too long" }, { "msg_contents": "On Wed, 12 Nov 2003 08:34:50 -0500, \"Nick Fankhauser\"\n<[email protected]> wrote:\n> -> Index Scan using\n>actor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n ^^^^^^\n>(actual time=37.62..677.44 rows=3501 loops=1)\n ^^^^^^^^^\n> Index Cond:\n>((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n>(actor_full_name_uppercase < 'SANDERT'::character varying))\n> Filter:\n>(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n\nNick, can you find out why this row count estimation is so far off?\n\n\\x\nSELECT * FROM pg_stats\n WHERE tablename='actor' AND attname='actor_full_name_uppercase';\n\nBTW, there seem to be missing cases:\n> -> Nested Loop (cost=0.00..2214.66 rows=2 width=115)\n> (actual time=59.05..119929.71 rows=5879 loops=1)\n ^^^^\n> -> Nested Loop (cost=0.00..2205.26 rows=3 width=76)\n> (actual time=51.46..66089.04 rows=5882 loops=1)\n ^^^^\n\nServus\n Manfred\n", "msg_date": "Wed, 12 Nov 2003 16:04:48 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking help with a query that takes too long" }, { "msg_contents": "\n\n>(actual time=37.62..677.44 rows=3501 loops=1)\n ^^^^^^^^^\n\n> Nick, can you find out why this row count estimation is so far off?\n\nIt's actually correct:\n\nprod1=# select count(actor_id) from actor where actor_full_name_uppercase\nlike 'SANDERS%';\n count\n-------\n 3501\n(1 row)\n\nOf course, I merely chose \"SANDERS\" arbitrarily as a name that falls\nsomewhere near the middle of the frequency range for names. 
SMITH or JONES\nwould represent a worst-case, and something like KOIZAR would probably be\nunique.\n\n\nHere are the stats:\n\nprod1=# SELECT * FROM pg_stats\nprod1-# WHERE tablename='actor' AND attname='actor_full_name_uppercase';\n-[ RECORD\n1 ]-----+-------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------------------------------\nschemaname | public\ntablename | actor\nattname | actor_full_name_uppercase\nnull_frac | 0.000333333\navg_width | 21\nn_distinct | 24215\nmost_common_vals | {\"STATE OF INDIANA\",\"INDIANA DEPARTMENT OF\nREVENUE\",\"BARTH CONS SCHOOL CORP\",\"HOWARD COUNTY CLERK\",\"ADVANCED RECOVERY\nSERVICES\",\"STATE OF INDIANA-DEPT OF REVENUE\",\"ALLIED COLLECTION SERVICE\nINC\",\"CREDIT BUREAU OF LAPORTE\",\"MIDWEST COLLECTION SVC INC\",\"NCO FINANCIAL\nSYSTEMS INC\"}\nmost_common_freqs |\n{0.0153333,0.0143333,0.00433333,0.00433333,0.004,0.00366667,0.00333333,0.003\n33333,0.00266667,0.00266667}\nhistogram_bounds | {\"(POE) ESTELLE, DENISE\",\"BRIEN, LIISI\",\"COTTRELL,\nCAROL\",\"FAMILY RENTALS\",\"HAYNES, TAMIKA\",\"KESSLER, VICTORIA\",\"MEFFORD,\nVERNON L\",\"PHILLIPS, GERALD L\",\"SHELTON, ANTOINETTE\",\"TRICARICO, MELISSA\nSUE\",\"ZUEHLKE, THOMAS L\"}\ncorrelation | -0.00147395\n\n\nI think this means that the average is 357 per actor. As you can see, the\nrange of assignments varies from people with a single parking ticket to\n\"State of Indiana\", which is party to many thousands of cases.\n\n\n> BTW, there seem to be missing cases:\n> > -> Nested Loop (cost=0.00..2214.66 rows=2 width=115)\n> > (actual time=59.05..119929.71 rows=5879 loops=1)\n> ^^^^\n> > -> Nested Loop (cost=0.00..2205.26 rows=3 width=76)\n> > (actual time=51.46..66089.04 rows=5882 loops=1)\n\nThis is expected- We actually aggregate data from many county court\ndatabases, with varying levels of data \"cleanliness\".\n\nRegards,\n -Nick\n\n\n\n", "msg_date": "Wed, 12 Nov 2003 10:55:48 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking help with a query that takes too long" }, { "msg_contents": "\n> >actor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n> ^^^^^^\n> >(actual time=37.62..677.44 rows=3501 loops=1)\n> ^^^^^^^^^\n> Nick, can you find out why this row count estimation is so far off?\n^^^^^^^^^\n\nOops- I read this backward- I see what you mean now. That's a good question.\nI'm not sure what part of the stats this estimate might be pulled from. The\naverage is 357, but the most common frequency may be around 1.\n\n-Nick\n\n\n", "msg_date": "Wed, 12 Nov 2003 11:05:10 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking help with a query that takes too long" }, { "msg_contents": "\"Nick Fankhauser\" <[email protected]> writes:\n>> Nick, can you find out why this row count estimation is so far off?\n\n> It's actually correct:\n\nSure, the 3501 was the \"actual\". 
The estimate was 1 row, which was\npretty far off :-(\n\n> Here are the stats:\n\nIt looks like you are running with the default statistics target (10).\nTry boosting it to 100 or even more for this column (see ALTER TABLE\nSET STATISTICS, then re-ANALYZE) and see if the estimate gets better.\nI think the major problem is likely here:\n> n_distinct | 24215\nwhich is no doubt much too small (do you have an idea of the number\nof distinct actor_full_name_uppercase values?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Nov 2003 11:10:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "\n> It looks like you are running with the default statistics target (10).\n> Try boosting it to 100 or even more for this column (see ALTER TABLE\n> SET STATISTICS, then re-ANALYZE) and see if the estimate gets better.\n\n\nHere are the results & a few more clues:\n\nprod1=# alter table actor alter column actor_full_name_uppercase set\nstatistics 1000;\nALTER TABLE\nprod1=# analyze actor;\nANALYZE\nprod1=# select count(distinct actor_full_name_uppercase) from actor;\n count\n---------\n 1453371\n(1 row)\n\nprod1=# select count(actor_id) from actor;\n count\n---------\n 3386359\n(1 row)\n\nThis indicates to me that 1 isn't too shabby as an estimate if the whole\nname is specified, but I'm not sure how this gets altered in the case of a\n\"LIKE\"\n\n\nprod1=# \\x\nExpanded display is on.\nprod1=# SELECT * FROM pg_stats\nprod1-# WHERE tablename='actor' AND attname='actor_full_name_uppercase';\n\n<Header boilerplate snipped out>\n\nschemaname | public\ntablename | actor\nattname | actor_full_name_uppercase\nnull_frac | 0.000586667\navg_width | 21\nn_distinct | -0.14701\n\n<Long list of values and frequencies snipped out>\n\ncorrelation | -0.00211291\n\n\nQuestion: What does it mean when n_distinct is negative?\n\nNew results of explain analyze:\n\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n Limit (cost=252683.61..252683.68 rows=28 width=116) (actual\ntime=169377.32..169378.39 rows=1000 loops=1)\n -> Sort (cost=252683.61..252683.68 rows=29 width=116) (actual\ntime=169377.31..169377.69 rows=1001 loops=1)\n Sort Key: max((actor.actor_full_name)::text),\ncount(case_data.case_id)\n -> Aggregate (cost=252678.57..252682.91 rows=29 width=116)\n(actual time=169305.79..169354.50 rows=3456 loops=1)\n -> Group (cost=252678.57..252680.01 rows=289 width=116)\n(actual time=169305.76..169330.00 rows=5879 loops=1)\n -> Sort (cost=252678.57..252679.29 rows=289\nwidth=116) (actual time=169305.75..169308.15 rows=5879 loops=1)\n Sort Key: actor.actor_id\n -> Nested Loop (cost=0.00..252666.74 rows=289\nwidth=116) (actual time=89.27..169273.51 rows=5879 loops=1)\n -> Nested Loop (cost=0.00..251608.11\nrows=289 width=77) (actual time=57.73..92753.49 rows=5882 loops=1)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..456.88 rows=113 width=42)\n(actual time=32.80..3197.28 rows=3501 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n -> Index Scan using\nactor_case_assignment_actor_id on actor_case_assignment (cost=0.00..2181.29\nrows=2616 width=35) (actual 
time=22.26..25.57 rows=2 loops=3501)\n Index Cond: (\"outer\".actor_id =\nactor_case_assignment.actor_id)\n -> Index Scan using case_data_case_id on\ncase_data (cost=0.00..3.65 rows=1 width=39) (actual time=13.00..13.00\nrows=1 loops=5882)\n Index Cond: (case_data.case_id =\n\"outer\".case_id)\n Total runtime: 169381.38 msec\n(17 rows)\n\n\n", "msg_date": "Wed, 12 Nov 2003 11:52:51 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "\"Nick Fankhauser\" <[email protected]> writes:\n> This indicates to me that 1 isn't too shabby as an estimate if the whole\n> name is specified, but I'm not sure how this gets altered in the case of a\n> \"LIKE\"\n\nFor a pattern like \"SANDERS%\", the estimate is basically a range estimate\nfor this condition:\n\n> ((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n> (actor_full_name_uppercase < 'SANDERT'::character varying))\n\n> n_distinct | -0.14701\n\n> Question: What does it mean when n_distinct is negative?\n\nIt means that the number of distinct values is estimated as a fraction\nof the table size, rather than an absolute number. In this case 14.7%\nof the table size, which is a bit off compared to the correct value\nof 43% (1453371/3386359), but at least it's of the right order of\nmagnitude now ...\n\n> -> Index Scan using\n> actor_full_name_uppercase on actor (cost=0.00..456.88 rows=113 width=42)\n> (actual time=32.80..3197.28 rows=3501 loops=1)\n\nHmm. Better, but not enough better to force a different plan choice.\n\nYou might have to resort to brute force, like \"set enable_nestloop=false\".\nJust out of curiosity, what do you get if you do that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Nov 2003 12:10:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "\n> You might have to resort to brute force, like \"set enable_nestloop=false\".\n> Just out of curiosity, what do you get if you do that?\n\nI get a different plan, but similar execution time:\n\n\n Limit (cost=323437.13..323437.13 rows=1 width=115) (actual\ntime=170921.89..170922.95 rows=1000 loops=1)\n -> Sort (cost=323437.13..323437.13 rows=1 width=115) (actual\ntime=170921.89..170922.26 rows=1001 loops=1)\n Sort Key: max((actor.actor_full_name)::text),\ncount(case_data.case_id)\n -> Aggregate (cost=323437.08..323437.12 rows=1 width=115) (actual\ntime=170849.94..170898.06 rows=3457 loops=1)\n -> Group (cost=323437.08..323437.09 rows=3 width=115)\n(actual time=170849.90..170873.60 rows=5880 loops=1)\n -> Sort (cost=323437.08..323437.08 rows=3 width=115)\n(actual time=170847.97..170850.21 rows=5880 loops=1)\n Sort Key: actor.actor_id\n -> Hash Join (cost=253333.29..323437.06 rows=3\nwidth=115) (actual time=122873.80..170814.27 rows=5880 loops=1)\n Hash Cond: (\"outer\".case_id =\n\"inner\".case_id)\n -> Seq Scan on case_data\n(cost=0.00..60368.16 rows=1947116 width=39) (actual time=12.95..43542.25\nrows=1947377 loops=1)\n -> Hash (cost=253333.28..253333.28 rows=3\nwidth=76) (actual time=122844.40..122844.40 rows=0 loops=1)\n -> Hash Join (cost=6.02..253333.28\nrows=3 width=76) (actual time=24992.70..122810.32 rows=5883 loops=1)\n Hash Cond: (\"outer\".actor_id =\n\"inner\".actor_id)\n -> Seq Scan on\nactor_case_assignment (cost=0.00..209980.49 rows=8669349 width=34) (actual\ntime=9.13..85504.05 rows=8670467 loops=1)\n -> Hash 
(cost=6.01..6.01\nrows=1 width=42) (actual time=24926.56..24926.56 rows=0 loops=1)\n -> Index Scan using\nactor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n(actual time=51.67..24900.53 rows=3502 loops=1)\n Index Cond:\n((actor_full_name_uppercase >= 'SANDERS'::character varying) AND\n(actor_full_name_uppercase < 'SANDERT'::character varying))\n Filter:\n(actor_full_name_uppercase ~~ 'SANDERS%'::text)\n Total runtime: 170925.93 msec\n(19 rows)\n\n\n-Nick\n\n\n", "msg_date": "Wed, 12 Nov 2003 13:27:53 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "On Wed, 12 Nov 2003 13:27:53 -0500, \"Nick Fankhauser\"\n<[email protected]> wrote:\n>\n>> You might have to resort to brute force, like \"set enable_nestloop=false\".\n\n> -> Seq Scan on\n>actor_case_assignment (cost=0.00..209980.49 rows=8669349 width=34) (actual\n>time=9.13..85504.05 rows=8670467 loops=1)\n\nDoes actor_case_assignment contain more columns than just the two ids?\nIf yes, do these additional fields account for ca. 70 bytes per tuple?\nIf not, try\n\tVACUUM FULL ANALYSE actor_case_assignment;\n\n> -> Index Scan using\n>actor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)\n>(actual time=51.67..24900.53 rows=3502 loops=1)\n\nThis same index scan on actor has been much faster in your previous\npostings (677ms, 3200ms), probably due to caching effects. 7ms per\ntuple returned looks like a lot of disk seeks are involved. Is\nclustering actor on actor_full_name_uppercase an option or would this\nslow down other queries?\n\nServus\n Manfred\n", "msg_date": "Wed, 12 Nov 2003 23:25:54 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "\n> Does actor_case_assignment contain more columns than just the two ids?\n> If yes, do these additional fields account for ca. 70 bytes per tuple?\n> If not, try\n> \tVACUUM FULL ANALYSE actor_case_assignment;\n\nactor_case_assignment has its own primary key and a \"role\" field in addition\nto the ids you've seen, so 70 bytes sounds reasonable. (The PK is to allow a\nremote mirroring application to update these records- otherwise it would be\nunnecessary.)\n\n\n\n> 7ms per\n> tuple returned looks like a lot of disk seeks are involved. Is\n> clustering actor on actor_full_name_uppercase an option or would this\n> slow down other queries?\n\nGood question... I've never used clustering in PostgreSQL before, so I'm\nunsure. I presume this is like clustering in Oracle where the table is\nordered to match the index? If so, I think you may be onto something because\nthe only other field We regularly query on is the actor_id. Actor_id has a\nunique index with no clustering currently, so I don't think I'd lose a thing\nby clustering on actor_full_name_uppercase.\n\nI'll give this a try & let you know how it changes.\n\nBTW, you are correct that caching has a big affect on the actual time\nfigures in this case- I'm working on my development DB, so cahced info\ndoesn't get trampled as quickly by other users. 
Is there a way to flush out\nthe cache in a testing situation like this in order to start from a\nconsistent base?\n\n\nThanks!\n -Nick\n\n\n", "msg_date": "Fri, 14 Nov 2003 11:00:38 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking help with a query that takes too long " }, { "msg_contents": "On Fri, 14 Nov 2003 11:00:38 -0500, \"Nick Fankhauser\"\n<[email protected]> wrote:\n>Good question... I've never used clustering in PostgreSQL before, so I'm\n>unsure. I presume this is like clustering in Oracle where the table is\n>ordered to match the index?\n\nYes, something like that. With the exception that Postgres looses the\nclustered status, while you INSERT and UPDATE tuples. So you have to\nre-CLUSTER from time to time. Look at pg_stats.correlation to see, if\nits necessary.\n\n> Is there a way to flush out\n>the cache in a testing situation like this in order to start from a\n>consistent base?\n\nTo flush Postgres shared buffers:\n\tSELECT count(*) FROM another_large_table;\n\nTo flush your database pages from the OS cache:\n\ttar cf /dev/null /some/large/directory\n\nAnd run each of your tests at least twice to get a feeling how caching\naffects your specific queries.\n\nServus\n Manfred\n", "msg_date": "Fri, 14 Nov 2003 17:35:44 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking help with a query that takes too long " } ]
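To make Manfred's clustering suggestion concrete, here is a sketch in the 7.3 syntax, assuming the index name that appears in the plans above (actor_full_name_uppercase on actor):

    CLUSTER actor_full_name_uppercase ON actor;
    ANALYZE actor;

    -- a correlation near 1.0 means index scans now read mostly adjacent pages
    SELECT attname, correlation
    FROM pg_stats
    WHERE tablename = 'actor'
      AND attname = 'actor_full_name_uppercase';

CLUSTER rewrites the table under an exclusive lock, and as Manfred notes the ordering decays as rows are inserted and updated, so it needs to be repeated when the correlation drifts back toward zero.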
[ { "msg_contents": "Hi,\n\nI am trying PG 7.4 RC1 and RC2 and I see a superb performance\nimprovement compared with 7.3.\n\nExplaining the queries, I see a change of planner behaviour: in my case it\nprefers Nested Loops in 7.4, as opposed to the Hash or Merge Joins chosen in 7.3.\n\nTo test, I disabled Hash and Merge Joins in 7.3, and performance\nimproved a lot using nested loops...\n\nBoth systems are identical in configuration, properly vacuumed and\nanalyzed before the tests.\n\nCould something be wrong with my tests? [ I hope not :) ]\n\nAlexandre\n\n", "msg_date": "Wed, 12 Nov 2003 13:41:03 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <[email protected]>", "msg_from_op": true, "msg_subject": "Superior performance in PG 7.4 " } ]
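For anyone trying to reproduce Alexandre's comparison, the join methods can be disabled per session. This is purely a diagnostic sketch, with the query left as a placeholder; it is not a setting to leave on in production:

    SET enable_hashjoin = off;
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE <query being compared>;
    RESET enable_hashjoin;
    RESET enable_mergejoin;

Running the same EXPLAIN ANALYZE on 7.3 and 7.4, with and without these settings, should show whether the nested-loop plans really account for the improvement.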
[ { "msg_contents": "I'm moving a webmail service over to use a postgresql database for\nstorage and wanted to get any tips for optimizing performance. The\nmachine will be a multiprocessor (either 2 or 4 cpu ) system with a raid\narray. What layout should be used? I was thinking using about using a\nraid 1+0 array to hold the database but since I can use different array\ntypes, would it be better to use 1+0 for the wal logs and a raid 5 for\nthe database?\n\nThe database gets fairly heavy activity (the system handles about 500MB\nof incoming and about 750MB of outgoing emails daily). I have a fairly\nfree rein in regards to the system's layout as well as how the\napplications will interact with the database since I'm writing the\ncode. \n\n\n-- \nSuchandra Thapa <[email protected]>", "msg_date": "12 Nov 2003 11:34:41 -0600", "msg_from": "Suchandra Thapa <[email protected]>", "msg_from_op": true, "msg_subject": "performance optimzations" }, { "msg_contents": "On Wed, 2003-11-12 at 12:34, Suchandra Thapa wrote:\n> I'm moving a webmail service over to use a postgresql database for\n> storage and wanted to get any tips for optimizing performance. The\n> machine will be a multiprocessor (either 2 or 4 cpu ) system with a raid\n> array. What layout should be used? I was thinking using about using a\n> raid 1+0 array to hold the database but since I can use different array\n> types, would it be better to use 1+0 for the wal logs and a raid 5 for\n> the database?\n\nHow much in total storage? If you have (or will have) > ~6 disks, go\nfor RAID 5 otherwise 10 is probably appropriate.\n\n> The database gets fairly heavy activity (the system handles about 500MB\n> of incoming and about 750MB of outgoing emails daily). I have a fairly\n> free rein in regards to the system's layout as well as how the\n> applications will interact with the database since I'm writing the\n> code.\n\nThese are archived permanently -- ~450GB of annual data? Or is the data\nremoved upon delivery?\n\n", "msg_date": "Wed, 12 Nov 2003 13:23:01 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "On Wed, 2003-11-12 at 12:23, Rod Taylor wrote:\n> On Wed, 2003-11-12 at 12:34, Suchandra Thapa wrote:\n> > I'm moving a webmail service over to use a postgresql database for\n> > storage and wanted to get any tips for optimizing performance. The\n> > machine will be a multiprocessor (either 2 or 4 cpu ) system with a raid\n> > array. What layout should be used? I was thinking using about using a\n> > raid 1+0 array to hold the database but since I can use different array\n> > types, would it be better to use 1+0 for the wal logs and a raid 5 for\n> > the database?\n> \n> How much in total storage? If you have (or will have) > ~6 disks, go\n> for RAID 5 otherwise 10 is probably appropriate.\n\nI'm not sure but I believe there are about 6-8 10K scsi drives on the\nsystem. There is quite a bit of storage to spare currently so I think \n\n> > The database gets fairly heavy activity (the system handles about 500MB\n> > of incoming and about 750MB of outgoing emails daily). I have a fairly\n> > free rein in regards to the system's layout as well as how the\n> > applications will interact with the database since I'm writing the\n> > code.\n> \n> These are archived permanently -- ~450GB of annual data? Or is the data\n> removed upon delivery?\n\nNo, it's more like hotmail. 
Some users may keep mail for a longer term\nbut a lot of the mail probably gets deleted fairly quickly. The\ndatabase load will be mixed with a insertions due to deliveries, queries\nby the webmail system, and deletions from pop and webmail.\n\n-- \nSuchandra Thapa <[email protected]>", "msg_date": "12 Nov 2003 12:51:10 -0600", "msg_from": "Suchandra Thapa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "Suchandra Thapa <[email protected]> writes:\n> I was thinking using about using a raid 1+0 array to hold the\n> database but since I can use different array types, would it be\n> better to use 1+0 for the wal logs and a raid 5 for the database?\n\nIt has been recommended on this list that getting a RAID controller\nwith a battery-backed cache is pretty essential to getting good\nperformance. Search the list archives for lots more discussion about\nRAID configurations.\n\n-Neil\n\n", "msg_date": "Wed, 12 Nov 2003 17:29:30 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "On Wed, 2003-11-12 at 16:29, Neil Conway wrote:\n> Suchandra Thapa <[email protected]> writes:\n> > I was thinking using about using a raid 1+0 array to hold the\n> > database but since I can use different array types, would it be\n> > better to use 1+0 for the wal logs and a raid 5 for the database?\n> \n> It has been recommended on this list that getting a RAID controller\n> with a battery-backed cache is pretty essential to getting good\n> performance. Search the list archives for lots more discussion about\n> RAID configurations.\n\nThe server is already using a raid controller with battery backed ram\nand the cache set to write back (the server is on a ups so power\nfailures shouldn't cause problems). I'll look at the list archives\nfor RAID information. \n\n-- \nSuchandra Thapa <[email protected]>", "msg_date": "12 Nov 2003 18:15:58 -0600", "msg_from": "Suchandra Thapa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "> > How much in total storage? If you have (or will have) > ~6 disks, go\n> > for RAID 5 otherwise 10 is probably appropriate.\n> \n> I'm not sure but I believe there are about 6-8 10K scsi drives on the\n> system. There is quite a bit of storage to spare currently so I think \n\nI see.. With 8 drives, you'll probably want to go with RAID 5. It grows\nbeyond that point fairly well with a decent controller card. Be sure to\nhave some battery backed write cache on the raid card (128MB goes a long\nway).\n\n> > > The database gets fairly heavy activity (the system handles about 500MB\n> > > of incoming and about 750MB of outgoing emails daily). I have a fairly\n\n> No, it's more like hotmail. Some users may keep mail for a longer term\n> but a lot of the mail probably gets deleted fairly quickly. The\n> database load will be mixed with a insertions due to deliveries, queries\n> by the webmail system, and deletions from pop and webmail.\n\nYou might consider having the mailserver gzip the emails prior to\ninjection into the database (turn off compression in PostgreSQL) and\ndecompress the data on the webserver for display to the client. Now you\nhave about 7 times the number of emails in memory.\n\nIt's easier to toss a webserver at the problem than make the database\nbigger in size. Take the savings in CPU on the DB and add it to ram.\n\n1200MB of compressed mail is about 200MB? 
Assume email descriptive\nmaterial (subject, from, etc.), account structure, indexes... so about\n400MB for one days worth of information?\n\nYou may want to consider keeping the compressed email in a separate\ntable than the information describing it. It would mean descriptive\ninformation is more likely to be in RAM, where the body probably doesn't\nmatter as much (you view them 1 at a time, subjects tend to be listed\nall at once).\n\nMost clients will be interested in say the last 7 days worth of data? \nGreat.. Start out with 4GB ram on a good Dual CPU -- Opterons seem to\nwork quite well -- and make sure the motherboard can hold double that in\nmemory for an upgrade sometime next year when you've become popular.\n\nI firmly believe lots of RAM is the answer to most IO issues until you\nstart getting into large sets of active data (>50GB). 64GB ram is fairly\ncheap compared to ongoing maintenance of the 30+ drive system required\nto get decent throughput.\n\n", "msg_date": "Wed, 12 Nov 2003 23:35:33 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "On Wed, 2003-11-12 at 22:35, Rod Taylor wrote:\n> You may want to consider keeping the compressed email in a separate\n> table than the information describing it. It would mean descriptive\n> information is more likely to be in RAM, where the body probably doesn't\n> matter as much (you view them 1 at a time, subjects tend to be listed\n> all at once).\n\nThanks for the suggestions. Splitting the load between several machines\nwas the original intent of moving the storage from the file system to a\ndatabase. I believe the schema I'm already using splits out the body\ndue to the size of some attachments. Luckily the code already gzips the\nemail body and abbreviates common email headers so storing compressed\nemails isn't a problem. \n\n> Most clients will be interested in say the last 7 days worth of data? \n> Great.. Start out with 4GB ram on a good Dual CPU -- Opterons seem to\n> work quite well -- and make sure the motherboard can hold double that in\n> memory for an upgrade sometime next year when you've become popular.\n\nUnfortunately, the hardware available is pretty much fixed in regards to\nthe system. I can play around with the raid configurations and have\nsome limited choice in regards to the raid controller and number of\ndrivers but that's about all in terms of hardware. \n\n> I firmly believe lots of RAM is the answer to most IO issues until you\n> start getting into large sets of active data (>50GB). 64GB ram is fairly\n> cheap compared to ongoing maintenance of the 30+ drive system required\n> to get decent throughput.\n\nThe current file system holding the user and email information indicates\nthe current data has about 64GB (70K accounts, I'm not sure how many are\nactive but 50% might be good guess). This seems to be somewhat of a\nsteady state however.\n\n-- \nSuchandra Thapa <[email protected]>", "msg_date": "12 Nov 2003 23:58:45 -0600", "msg_from": "Suchandra Thapa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance optimzations" }, { "msg_contents": "> > Most clients will be interested in say the last 7 days worth of data? \n> > Great.. 
Start out with 4GB ram on a good Dual CPU -- Opterons seem to\n> > work quite well -- and make sure the motherboard can hold double that in\n> > memory for an upgrade sometime next year when you've become popular.\n> \n> Unfortunately, the hardware available is pretty much fixed in regards to\n> the system. I can play around with the raid configurations and have\n> some limited choice in regards to the raid controller and number of\n> drivers but that's about all in terms of hardware. \n\nGood luck then. Unless the configuration takes into account incremental\nadditions in ram and disk, sustained growth could get very expensive. I\nguess that depends on the business plan expectations.\n\nThis just puts more emphasis to offload everything you can onto machines\nthat can multiply.\n\n> The current file system holding the user and email information indicates\n> the current data has about 64GB (70K accounts, I'm not sure how many are\n> active but 50% might be good guess). This seems to be somewhat of a\n> steady state however.\n\n35k clients checking their mail daily isn't so bad. Around 10 pages per\nsecond peak load?\n\n", "msg_date": "Thu, 13 Nov 2003 08:47:33 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance optimzations" } ]
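A sketch of the split Rod describes, with the application doing the gzip and TOAST compression turned off on the body column so the server does not try to compress already-compressed data. The table and column names are invented for illustration and are not from the poster's schema:

    CREATE TABLE message (
        message_id  serial PRIMARY KEY,
        account_id  integer NOT NULL,
        sender      text,
        subject     text,
        sent_at     timestamptz
    );

    CREATE TABLE message_body (
        message_id  integer PRIMARY KEY REFERENCES message,
        body_gz     bytea NOT NULL  -- gzip-compressed by the application
    );

    -- already compressed, so store out of line without recompressing
    ALTER TABLE message_body ALTER COLUMN body_gz SET STORAGE EXTERNAL;

Listing a mailbox then touches only the small message table, and the large body rows are fetched one at a time when a message is opened.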
[ { "msg_contents": "Hi,\n\nI have coded some improvements to phpPgAdmin that I think are pretty \ncool. Basicaly, once you are browsing the results of an arbitrary \nSELECT query, you can still sort by columns, regardless of the \nunderlying ORDER BY of the SELECT.\n\nI do this like this:\n\nSELECT * FROM (arbitrary subquery) AS sub ORDER BY 1,3;\n\nNow, this all works fine, but I want to know if this is efficient or not.\n\nDoes doing a select of a select cause serious performance degradation?\n\nChris\n\n\n", "msg_date": "Thu, 13 Nov 2003 12:09:13 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": true, "msg_subject": "Query question" }, { "msg_contents": "Chris,\n\n> SELECT * FROM (arbitrary subquery) AS sub ORDER BY 1,3;\n>\n> Now, this all works fine, but I want to know if this is efficient or not.\n>\n> Does doing a select of a select cause serious performance degradation?\n\nIt would be better if you could strip out the inner sort, but I can understand \nwhy that might not be possible in all cases.\n\nThe only thing you're adding to the query is a second SORT step, so it \nshouldn't require any more time/memory than the query's first SORT did.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 12 Nov 2003 21:20:28 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query question" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> The only thing you're adding to the query is a second SORT step, so it \n> shouldn't require any more time/memory than the query's first SORT\n> did.\n\nInteresting -- I wonder if it would be possible for the optimizer to\ndetect this and avoid the redundant inner sort ... (/me muses to\nhimself)\n\n-Neil\n\n", "msg_date": "Fri, 14 Nov 2003 11:07:15 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query question" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Interesting -- I wonder if it would be possible for the optimizer to\n> detect this and avoid the redundant inner sort ... (/me muses to\n> himself)\n\nI think the ability to generate two sort steps is a feature, not a bug.\nThis has been often requested in connection with user-defined\naggregates, where it's handy to be able to control the order of arrival\nof rows at the aggregation function. If the optimizer suppressed the\ninner sort then we'd lose that ability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Nov 2003 11:40:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query question " }, { "msg_contents": ">>The only thing you're adding to the query is a second SORT step, so it \n>>shouldn't require any more time/memory than the query's first SORT\n>>did.\n> \n> \n> Interesting -- I wonder if it would be possible for the optimizer to\n> detect this and avoid the redundant inner sort ... (/me muses to\n> himself)\n\nThat's somethign I've wondered myself as well. Also - I wonder if the \noptimiser could be made smart enough to push down the outer LIMIT and \nOFFSET clauses into the subquery.\n\nChris\n\n\n", "msg_date": "Sat, 15 Nov 2003 11:35:46 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query question" } ]
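For reference, the same wrapper takes paging on the outer level without touching the user's query, so re-sorting and browsing a page at a time can be combined (a sketch, with the inner query left arbitrary as above):

    SELECT * FROM (arbitrary subquery) AS sub
    ORDER BY 1, 3
    LIMIT 30 OFFSET 60;

As noted in the thread, the inner query's own ORDER BY still runs, so the price is one extra sort over the subquery's result on top of whatever the subquery already does.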
[ { "msg_contents": "\nHi,\n\nNOT EXISTS is taking almost double time than NOT IN .\nI know IN has been optimised in 7.4 but is anything \nwrong with the NOT EXISTS?\n\nI have vaccumed , analyze and run the query many times\nstill not in is faster than exists :>\n\n\nRegds\nMallah.\n\nNOT IN PLAN\n\ntradein_clients=# explain analyze SELECT count(*) from general.profile_master where\n profile_id not in (select profile_id from general.account_profiles ) ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=32238.19..32238.19 rows=1 width=0) (actual time=5329.206..5329.207 rows=1 loops=1)\n -> Seq Scan on profile_master (cost=4458.25..31340.38 rows=359125 width=0) (actual time=1055.496..4637.908 rows=470386 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on account_profiles (cost=0.00..3817.80 rows=256180 width=4) (actual time=0.061..507.811 rows=256180 loops=1)\nTotal runtime: 5337.591 ms\n(6 rows)\n\n\ntradein_clients=# explain analyze SELECT count(*) from general.profile_master where not exists \n(select profile_id from general.account_profiles where profile_id=general.profile_master.profile_id ) ;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=1674981.97..1674981.97 rows=1 width=0) (actual time=14600.386..14600.387 rows=1 loops=1)\n -> Seq Scan on profile_master (cost=0.00..1674084.16 rows=359125 width=0) (actual time=13.687..13815.798 rows=470386 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using account_profiles_profile_id on account_profiles (cost=0.00..4.59 rows=2 width=4) (actual time=0.013..0.013 rows=0 loops=718250)\n Index Cond: (profile_id = $0)\nTotal runtime: 14600.531 ms\n\n\n", "msg_date": "Thu, 13 Nov 2003 13:23:39 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "IN surpasses NOT EXISTS in 7.4RC2 ??" }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> \n> Hi,\n> \n> NOT EXISTS is taking almost double time than NOT IN .\n> I know IN has been optimised in 7.4 but is anything \n> wrong with the NOT EXISTS?\n> \n> I have vaccumed , analyze and run the query many times\n> still not in is faster than exists :>\n\nSeems fine. In 7.4, NOT IN will often be faster that NOT EXISTS. NOT\nEXISTS didn't change --- there are restrictions on how far we can\noptimize NOT EXISTS. NOT IN has just become much faster in 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Nov 2003 09:56:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ??" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> NOT EXISTS is taking almost double time than NOT IN .\n> I know IN has been optimised in 7.4 but is anything \n> wrong with the NOT EXISTS?\n\nThat's the expected behavior in 7.4. EXISTS in the style you are using\nit effectively forces a nestloop-with-inner-indexscan implementation.\nAs of 7.4, IN can do that, but it can do several other things too,\nincluding the hash-type plan you have here. 
So assuming that the\nplanner chooses the right plan choice (not always a given ;-))\nIN should be as fast or faster than EXISTS in all cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2003 10:19:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ?? " }, { "msg_contents": "It is believed that the IN optimization can lead to faster IN times than\nEXIST times on some queries, the extent of which is still a bit of an\nunknown. (Incidentally is there an FAQ item on this that needs\nupdating?)\n\nDoes the not exist query produce worse results in 7.4 than it did in\n7.3?\n\nRobert Treat\n\nOn Thu, 2003-11-13 at 02:53, Rajesh Kumar Mallah wrote:\n> \n> Hi,\n> \n> NOT EXISTS is taking almost double time than NOT IN .\n> I know IN has been optimised in 7.4 but is anything \n> wrong with the NOT EXISTS?\n> \n> I have vaccumed , analyze and run the query many times\n> still not in is faster than exists :>\n> \n> \n> Regds\n> Mallah.\n> \n> NOT IN PLAN\n> \n> tradein_clients=# explain analyze SELECT count(*) from general.profile_master where\n> profile_id not in (select profile_id from general.account_profiles ) ;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=32238.19..32238.19 rows=1 width=0) (actual time=5329.206..5329.207 rows=1 loops=1)\n> -> Seq Scan on profile_master (cost=4458.25..31340.38 rows=359125 width=0) (actual time=1055.496..4637.908 rows=470386 loops=1)\n> Filter: (NOT (hashed subplan))\n> SubPlan\n> -> Seq Scan on account_profiles (cost=0.00..3817.80 rows=256180 width=4) (actual time=0.061..507.811 rows=256180 loops=1)\n> Total runtime: 5337.591 ms\n> (6 rows)\n> \n> \n> tradein_clients=# explain analyze SELECT count(*) from general.profile_master where not exists \n> (select profile_id from general.account_profiles where profile_id=general.profile_master.profile_id ) ;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1674981.97..1674981.97 rows=1 width=0) (actual time=14600.386..14600.387 rows=1 loops=1)\n> -> Seq Scan on profile_master (cost=0.00..1674084.16 rows=359125 width=0) (actual time=13.687..13815.798 rows=470386 loops=1)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Index Scan using account_profiles_profile_id on account_profiles (cost=0.00..4.59 rows=2 width=4) (actual time=0.013..0.013 rows=0 loops=718250)\n> Index Cond: (profile_id = $0)\n> Total runtime: 14600.531 ms\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "13 Nov 2003 10:31:42 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ??" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> Does the not exist query produce worse results in 7.4 than it did in\n> 7.3?\n\nEXISTS should work the same as before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2003 12:00:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ?? 
" }, { "msg_contents": "On Thu, 2003-11-13 at 12:00, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > Does the not exist query produce worse results in 7.4 than it did in\n> > 7.3?\n> \n> EXISTS should work the same as before.\n> \n\nright. the original poster is asking if there is \"something wrong with\nexist\" based on the comparison to IN, he needs to compare it vs. 7.3\nEXISTS to determine if something is wrong. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "13 Nov 2003 12:19:42 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ??" }, { "msg_contents": "\n\n\n\n\n\nRobert Treat wrote:\n\nIt is believed that the IN optimization can lead to faster IN times than\nEXIST times on some queries, the extent of which is still a bit of an\nunknown. (Incidentally is there an FAQ item on this that needs\nupdating?)\n \n\n\nThanks every one for clarifying. Its really a nice thing to see IN\nworking\nso well becoz its easier to read the SQL using IN. \n\nlooks like NOT IN is indifferent to indexes where is IN uses indexes ,\nis it true?\n\ndoes indexes affect the new manner in which IN works in 7.4 ?\n\n\n\n\n\n\n\nDoes the not exist query produce worse results in 7.4 than it did in\n7.3?\n\nWill surely post the overvation sometime.\n\n\n\nRegards\nMallah.\n\n\n\n\n\nRobert Treat\n\nOn Thu, 2003-11-13 at 02:53, Rajesh Kumar Mallah wrote:\n \n\nHi,\n\nNOT EXISTS is taking almost double time than NOT IN .\nI know IN has been optimised in 7.4 but is anything \nwrong with the NOT EXISTS?\n\nI have vaccumed , analyze and run the query many times\nstill not in is faster than exists :>\n\n\nRegds\nMallah.\n\nNOT IN PLAN\n\ntradein_clients=# explain analyze SELECT count(*) from general.profile_master where\n profile_id not in (select profile_id from general.account_profiles ) ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=32238.19..32238.19 rows=1 width=0) (actual time=5329.206..5329.207 rows=1 loops=1)\n -> Seq Scan on profile_master (cost=4458.25..31340.38 rows=359125 width=0) (actual time=1055.496..4637.908 rows=470386 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on account_profiles (cost=0.00..3817.80 rows=256180 width=4) (actual time=0.061..507.811 rows=256180 loops=1)\nTotal runtime: 5337.591 ms\n(6 rows)\n\n\ntradein_clients=# explain analyze SELECT count(*) from general.profile_master where not exists \n(select profile_id from general.account_profiles where profile_id=general.profile_master.profile_id ) ;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=1674981.97..1674981.97 rows=1 width=0) (actual time=14600.386..14600.387 rows=1 loops=1)\n -> Seq Scan on profile_master (cost=0.00..1674084.16 rows=359125 width=0) (actual time=13.687..13815.798 rows=470386 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using account_profiles_profile_id on account_profiles (cost=0.00..4.59 rows=2 width=4) (actual time=0.013..0.013 rows=0 loops=718250)\n Index Cond: (profile_id = $0)\nTotal runtime: 14600.531 ms\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send 
\"unregister YourEmailAddressHere\" to [email protected])\n \n\n\n \n\n\n\n\n", "msg_date": "Fri, 14 Nov 2003 21:31:42 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ??" }, { "msg_contents": "Tom Lane wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n> \n>\n>>NOT EXISTS is taking almost double time than NOT IN .\n>>I know IN has been optimised in 7.4 but is anything \n>>wrong with the NOT EXISTS?\n>> \n>>\n>\n>That's the expected behavior in 7.4. EXISTS in the style you are using\n>it effectively forces a nestloop-with-inner-indexscan implementation.\n>As of 7.4, IN can do that, but it can do several other things too,\n>including the hash-type plan you have here. So assuming that the\n>planner chooses the right plan choice (not always a given ;-))\n>\n\n>IN should be as fast or faster than EXISTS in *all* *cases.*\n>\n\nNot in this case :) , did i miss something silly?\n\ntradein_clients=# explain SELECT count(*) from user_accounts where \nemail is not null and email not in\n (select email from profile_master where email is not null) ;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Aggregate (cost=*9587726326.93..9587726326.93* rows=1 width=0)\n -> Seq Scan on user_accounts (cost=0.00..9587725473.40 rows=341412 \nwidth=0)\n Filter: ((email IS NOT NULL) AND (NOT (subplan)))\n SubPlan\n -> Seq Scan on profile_master (cost=0.00..25132.24 \nrows=674633 width=25)\n Filter: (email IS NOT NULL)\n(6 rows)\n\n*The query above does not return*\n\ntradein_clients=# explain analyze SELECT count(*) from user_accounts \nwhere email is not null and\nnot exists (select email from profile_master where \nemail=user_accounts.email) ;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2850847.55..2850847.55 rows=1 width=0) (actual \ntime=34075.100..34075.101 rows=1 loops=1)\n -> Seq Scan on user_accounts (cost=0.00..2849994.02 rows=341412 \nwidth=0) (actual time=8.066..34066.329 rows=3882 loops=1)\n Filter: ((email IS NOT NULL) AND (NOT (subplan)))\n SubPlan\n -> Index Scan using profile_master_email on profile_master \n(cost=0.00..35.60 rows=9 width=25) (actual time=0.044..0.044 rows=1 \nloops=686716)\n Index Cond: ((email)::text = ($0)::text)\n Total runtime: 34075.213 ms\n(7 rows)\n\ntradein_clients=#\n\n\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n\n-- \n\nRajesh Kumar Mallah,\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nRajesh Kumar Mallah <[email protected]> writes:\n \n\nNOT EXISTS is taking almost double time than NOT IN .\nI know IN has been optimised in 7.4 but is anything \nwrong with the NOT EXISTS?\n \n\n\nThat's the expected behavior in 7.4. EXISTS in the style you are using\nit effectively forces a nestloop-with-inner-indexscan implementation.\nAs of 7.4, IN can do that, but it can do several other things too,\nincluding the hash-type plan you have here. 
", "msg_date": "Sat, 15 Nov 2003 00:57:41 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN surpasses NOT EXISTS in 7.4RC2 ??" } ]
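One hedged reading of the last pair of plans in this thread: the earlier NOT IN on profile_id ran as a "hashed subplan", while the NOT IN on email falls back to a plain subplan that has to scan the roughly 675,000-row subquery output again for every outer row, which is what the astronomical cost estimate reflects. In 7.4 the planner only hashes the subquery result when it expects the hash table to fit in sort_mem, so one experiment (not a confirmed diagnosis) is to raise it for the session and re-run:

    SET sort_mem = 65536;  -- 64MB for this session only; assumes the RAM is available
    EXPLAIN ANALYZE
    SELECT count(*) FROM user_accounts
    WHERE email IS NOT NULL
      AND email NOT IN (SELECT email FROM profile_master WHERE email IS NOT NULL);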
[ { "msg_contents": "Hi all,\n\nI've one here that I cannot fathom. Any suggestions? \n\nWe have a table, call it tablename, where we're selecting by a range\nof dates and an identifier. (This is redacted, obviously):\n\n\\d tablename\n\n Column | Type | Modifiers \n--------------------+--------------------------+--------------------\n id | integer | not null\n transaction_date | timestamp with time zone | not null\n product_id | integer | not null\nIndexes:\n \"trans_posted_trans_date_idx\" btree (transaction_date, product_id)\n\n\nThe statistics on transaction_date and product_id are set to 1000. \nEverything is all analysed nicely. But I'm getting a poor plan,\nbecause of an estimate that the number of rows to be returned is\nabout double how many actually are:\n\nexplain analyse select * from transactions_posted where\ntransaction_date >= '2003-9-1' and transaction_date < '2003-10-1' and\nproduct_id = 2;\n\nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on transactions_posted (cost=0.00..376630.33 rows=700923\nwidth=91) (actual time=8422.253..36176.078 rows=316029 loops=1)\n Filter: ((transaction_date >= '2003-09-01 00:00:00-04'::timestamp\nwith time zone) AND (transaction_date < '2003-10-01\n00:00:00-04'::timestamp with time zone) AND (product_id = 2))\n Total runtime: 36357.630 ms\n(3 rows)\n\nSET enable_seqscan = off;\n\nexplain analyse select * from transactions_posted where\ntransaction_date >= '2003-9-1' and transaction_date < '2003-10-1' and\nproduct_id = 2;\n\nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using trans_posted_trans_date_idx on transactions_posted\n(cost=0.00..1088862.56 rows=700923 width=91) (actual\ntime=35.214..14816.257 rows=316029 loops=1)\n Index Cond: ((transaction_date >= '2003-09-01\n00:00:00-04'::timestamp with time zone) AND (transaction_date <\n'2003-10-01 00:00:00-04'::timestamp with time zone) AND (product_id =\n2))\n Total runtime: 15009.816 ms\n(3 rows)\n\nSELECT attname,null_frac,avg_width,n_distinct,correlation FROM\npg_stats where tablename = 'transactions_posted' AND attname in\n('transaction_date','product_id');\n attname | null_frac | avg_width | n_distinct | correlation \n------------------+-----------+-----------+------------+-------------\n product_id | 0 | 4 | 2 | 0.200956\n transaction_date | 0 | 8 | -0.200791 | 0.289248\n\nAny ideas? I'm loathe to recommend cluster, since the data will not\nstay clustered.\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 13 Nov 2003 13:14:15 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "strange estimate for number of rows" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> The statistics on transaction_date and product_id are set to 1000. \n> Everything is all analysed nicely. 
But I'm getting a poor plan,\n> because of an estimate that the number of rows to be returned is\n> about double how many actually are:\n\n> explain analyse select * from transactions_posted where\n> transaction_date >= '2003-9-1' and transaction_date < '2003-10-1' and\n> product_id = 2;\n\nAre the estimates accurate for queries on the two columns individually,\nie\n... where transaction_date >= '2003-9-1' and transaction_date < '2003-10-1'\n... where product_id = 2\n\nIf so, the problem is that there's a correlation between\ntransaction_date and product_id, which the system cannot model because\nit has no multi-column statistics.\n\nHowever, given that the estimate is only off by about a factor of 2,\nyou'd still be getting the wrong plan even if the estimate were perfect,\nbecause the estimated costs differ by nearly a factor of 3.\n\nGiven the actual runtimes, I'm thinking maybe you want to reduce\nrandom_page_cost. What are you using for that now?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2003 13:56:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange estimate for number of rows " }, { "msg_contents": "Hi, I'm the lead developer on the project this concerns (forgive my \nnewbiness on this list).\n\nWe tried a couple of scenarios with effective_cache_size=60000, \ncpu_index_tuple_cost=0.0001 and random_page_cost=2 with no change in the \nplan.\n\nexplain analyse select * from tablename where transaction_date >= \n'2003-9-1' and transaction_date < '2003-10-1';\n-----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tablename (cost=0.00..348199.14 rows=1180724 width=91) \n(actual time=7727.668..36286.898 rows=579238 loops=1)\n Filter: ((transaction_date >= '2003-09-01 00:00:00+00'::timestamp \nwith time zone) AND (transaction_date < '2003-10-01 \n00:00:00+00'::timestamp with time zone))\n Total runtime: 36625.351 ms\n\nexplain analyse select * from transactions_posted where product_id = 2;\n-----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on transactions_posted (cost=0.00..319767.95 rows=6785237 \nwidth=91) (actual time=0.091..35596.328 rows=5713877 loops=1)\n Filter: (product_id = 2)\n Total runtime: 38685.373 ms\n\nThe product_id alone gives a difference of a millions rows from estimate \nto actual, vs. the factor of 2 from the transaction_date.\n\nDan Manley\n\nTom Lane О©╫О©╫О©╫О©╫О©╫:\n\n>Andrew Sullivan <[email protected]> writes:\n> \n>\n>>The statistics on transaction_date and product_id are set to 1000. \n>>Everything is all analysed nicely. But I'm getting a poor plan,\n>>because of an estimate that the number of rows to be returned is\n>>about double how many actually are:\n>> \n>>\n>\n> \n>\n>>explain analyse select * from transactions_posted where\n>>transaction_date >= '2003-9-1' and transaction_date < '2003-10-1' and\n>>product_id = 2;\n>> \n>>\n>\n>Are the estimates accurate for queries on the two columns individually,\n>ie\n>... where transaction_date >= '2003-9-1' and transaction_date < '2003-10-1'\n>... 
where product_id = 2\n>\n>If so, the problem is that there's a correlation between\n>transaction_date and product_id, which the system cannot model because\n>it has no multi-column statistics.\n>\n>However, given that the estimate is only off by about a factor of 2,\n>you'd still be getting the wrong plan even if the estimate were perfect,\n>because the estimated costs differ by nearly a factor of 3.\n>\n>Given the actual runtimes, I'm thinking maybe you want to reduce\n>random_page_cost. What are you using for that now?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n", "msg_date": "Thu, 13 Nov 2003 14:35:58 -0500", "msg_from": "Daniel Manley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange estimate for number of rows" }, { "msg_contents": "Daniel Manley <[email protected]> writes:\n> The product_id alone gives a difference of a millions rows from estimate \n> to actual, vs. the factor of 2 from the transaction_date.\n\nYou should be thinking in terms of ratios, not absolute difference.\nThe rows estimate for product_id doesn't look too bad to me; the one for\ntransaction_date is much further off (a factor of 2). Which is odd,\nbecause the system can usually do all right on range estimates if you've\nlet it run an ANALYZE with enough histogram bins. Could we see the\npg_stats row for transaction_date?\n\n> We tried a couple of scenarios with effective_cache_size=60000, \n> cpu_index_tuple_cost=0.0001 and random_page_cost=2 with no change in the \n> plan.\n\nSince you need about a factor of 3 change in the cost estimate to get it to\nswitch plans, changing random_page_cost by a factor of 2 ain't gonna do\nit (the other two numbers are second-order adjustments unlikely to have\nmuch effect, I think). Try 1.5, or even less ... of course, you have to\nkeep an eye on your other queries and make sure they don't go nuts, but\nfrom what I've heard about your hardware setup a low random_page_cost\nisn't out of line with the physical realities.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2003 15:19:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange estimate for number of rows " }, { "msg_contents": "On Thu, Nov 13, 2003 at 03:19:08PM -0500, Tom Lane wrote:\n> because the system can usually do all right on range estimates if you've\n> let it run an ANALYZE with enough histogram bins. Could we see the\n> pg_stats row for transaction_date?\n\nDo you want the whole thing? I left out the really verbose bits when\nI posted this in the original:\n\nSELECT attname,null_frac,avg_width,n_distinct,correlation FROM\npg_stats where tablename = 'transactions_posted' AND attname in\n('transaction_date','product_id');\n attname | null_frac | avg_width | n_distinct | correlation\n------------------+-----------+-----------+------------+-------------\n product_id | 0 | 4 | 2 | 0.200956\n transaction_date | 0 | 8 | -0.200791 | 0.289248\n\n> \n> Since you need about a factor of 3 change in the cost estimate to get it to\n> switch plans, changing random_page_cost by a factor of 2 ain't gonna do\n> it (the other two numbers are second-order adjustments unlikely to have\n> much effect, I think). Try 1.5, or even less ... 
of course, you have to\n> keep an eye on your other queries and make sure they don't go nuts, but\n> from what I've heard about your hardware setup a low random_page_cost\n> isn't out of line with the physical realities.\n\nActually, this one's on an internal box, and I think 1.5 is too low\n-- it's really just a pair of mirrored SCSI disks on a PCI controller\nin this case. That does the trick, though, so maybe I'm just being\ntoo conservantive.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 13 Nov 2003 16:37:03 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange estimate for number of rows" }, { "msg_contents": "On Thu, Nov 13, 2003 at 04:37:03PM -0500, Andrew Sullivan wrote:\n> Actually, this one's on an internal box, and I think 1.5 is too low\n> -- it's really just a pair of mirrored SCSI disks on a PCI controller\n> in this case. That does the trick, though, so maybe I'm just being\n> too conservantive.\n\nI spoke too soon. I'd left enable_seqscan=off set. It doesn't\nactually prefer an indexscan until I set the random_page_cost to .5. \nI think that's a little unrealistic ;-)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 13 Nov 2003 16:52:51 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange estimate for number of rows" } ]
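For experiments like the random_page_cost change discussed above, the setting can be tried per session before touching postgresql.conf. A sketch using the query from the thread; the 1.5 figure is just the value under discussion, not a general recommendation:

SET random_page_cost = 1.5;

EXPLAIN ANALYZE
SELECT * FROM transactions_posted
WHERE transaction_date >= '2003-9-1'
  AND transaction_date < '2003-10-1'
  AND product_id = 2;

RESET random_page_cost;  -- back to the value from postgresql.conf

This makes it easy to watch the plan flip between seqscan and indexscan without affecting other sessions.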
[ { "msg_contents": "On 7.4 RC2, I'm seeing a case where the query planner estimates are way\nout of line after grouping the result of a union. I've tried adjusting the\nstatistics targets up to 200, and it made no difference in the planner's\nestimates. The point of the full query this came from is that it also has\nan aggregate function that produces a space-delimited list of commodity &\nfak for each id. Does anyone have any suggestions on tweaks to apply or\nways to rewrite this? Is this one of those ugly corners where the query\nplanner doesn't have a clue how to estimate this (seeing the nice round\n200 estimate makes me suspicious)?\n\nEXPLAIN ANALYZE SELECT id FROM\n(SELECT id, commodity FROM commodities WHERE commodity IS NOT NULL\n UNION\n SELECT id, fak FROM commodities WHERE fak IS NOT NULL\n) all_commodities GROUP BY id;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=15939.16..15939.16 rows=200 width=4) (actual\ntime=3537.281..3680.418 rows=83306 loops=1)\n -> Subquery Scan all_commodities (cost=14002.00..15697.02 rows=96858\nwidth=4) (actual time=2268.052..3214.996 rows=95715 loops=1)\n -> Unique (cost=14002.00..14728.44 rows=96858 width=15) (actual\ntime=2268.043..2881.688 rows=95715 loops=1)\n -> Sort (cost=14002.00..14244.15 rows=96858 width=15)\n(actual time=2268.037..2527.083 rows=100008 loops=1)\n Sort Key: id, commodity\n -> Append (cost=0.00..5034.42 rows=96858 width=15)\n(actual time=7.402..1220.320 rows=100008 loops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=0.00..2401.23 rows=36831 width=15)\n(actual time=7.398..590.004 rows=39772 loops=1)\n -> Seq Scan on commodities \n(cost=0.00..2032.92 rows=36831 width=15)\n(actual time=7.388..468.415 rows=39772\nloops=1)\n Filter: (commodity IS NOT NULL)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..2633.19 rows=60027 width=14)\n(actual time=0.016..408.160 rows=60236 loops=1)\n -> Seq Scan on commodities \n(cost=0.00..2032.92 rows=60027 width=14)\n(actual time=0.010..221.635 rows=60236\nloops=1)\n Filter: (fak IS NOT NULL)\n Total runtime: 3783.009 ms\n(13 rows)\n\n", "msg_date": "Thu, 13 Nov 2003 12:28:15 -0600 (CST)", "msg_from": "\"Arthur Ward\" <[email protected]>", "msg_from_op": true, "msg_subject": "Union+group by planner estimates way off?" }, { "msg_contents": "\"Arthur Ward\" <[email protected]> writes:\n> EXPLAIN ANALYZE SELECT id FROM\n> (SELECT id, commodity FROM commodities WHERE commodity IS NOT NULL\n> UNION\n> SELECT id, fak FROM commodities WHERE fak IS NOT NULL\n> ) all_commodities GROUP BY id;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=15939.16..15939.16 rows=200 width=4) (actual\n> time=3537.281..3680.418 rows=83306 loops=1)\n> -> Subquery Scan all_commodities (cost=14002.00..15697.02 rows=96858\n> width=4) (actual time=2268.052..3214.996 rows=95715 loops=1)\n\nIt's falling back to a default estimate because it doesn't know how to\nfind any statistics for the output of a sub-select. I have a TODO\nsomewhere about burrowing down into sub-selects to see if the output maps\ndirectly to a column that we'd have stats for ... 
but it's not done yet.\n\nIn this particular case the inaccurate estimate doesn't matter too much,\nI think, although it might be encouraging the system to select hash\naggregation since it thinks the hashtable will be pretty small. If the\nestimate were getting used to plan higher-up plan steps then it could\nbe a bigger problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2003 13:46:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Union+group by planner estimates way off? " }, { "msg_contents": "> In this particular case the inaccurate estimate doesn't matter too much,\n> I think, although it might be encouraging the system to select hash\n> aggregation since it thinks the hashtable will be pretty small. If the\n> estimate were getting used to plan higher-up plan steps then it could\n> be a bigger problem.\n\nThat's my problem; this is a subselect feeding in to a larger query. That\nwrong estimate causes the planner to select a nested-loop at the next step\nup. At 83,000 rows, the word is \"ouch!\"\n\nAt any rate, I discovered this while dissecting a giant & slow query.\nHence, while disabling nested-loop joins avoids this particular pitfall,\nit's not good for the bigger picture. I think I'm going to end up\nsplitting this larger query into smaller parts and reassemble the pieces\nin the application so I can push some smarts past other subselect\nboundaries. For my purposes, that should skirt the issue of union+group\nestimates not being calculated.\n\nAs always, thanks for the fast answers!\n", "msg_date": "Thu, 13 Nov 2003 13:19:07 -0600 (CST)", "msg_from": "\"Arthur Ward\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Union+group by planner estimates way off?" } ]
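A database-side variant of the "split it into parts" approach mentioned at the end of this thread is to materialize the union into a temporary table and ANALYZE it, so later steps plan against real statistics instead of the default distinct-values guess. A sketch using the names from the thread:

CREATE TEMP TABLE all_commodities AS
SELECT id, commodity FROM commodities WHERE commodity IS NOT NULL
UNION
SELECT id, fak FROM commodities WHERE fak IS NOT NULL;

ANALYZE all_commodities;

-- the larger query can now group/join against all_commodities
-- with reasonable row estimates
SELECT id FROM all_commodities GROUP BY id;

The extra write and ANALYZE cost only pays off when the bad estimate is steering the rest of the plan badly, as in the nested-loop case described above.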
[ { "msg_contents": "Folks,\n\nHow would I calculate storage space/required ram on a 50-digit NUMERIC?\n\nAnd the docs state that NUMERIC is slow. Is this just slow for calculations \n(due to the conversion to float & back) or slow for index lookups as well?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 13 Nov 2003 11:36:05 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Storage space, RAM for NUMERIC" } ]
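No reply to the NUMERIC question appears in this thread. Since the per-value overhead depends on the server version, one version-independent way to get a number is to measure it rather than calculate it: load a sample table and read relpages from pg_class. A sketch only — the table name is made up, 8192-byte pages and a non-empty table are assumed, and the result includes per-row tuple overhead, not just the NUMERIC column itself:

CREATE TEMP TABLE numeric_width_test (n numeric(50,0));
-- ... insert a representative sample of 50-digit values here ...
VACUUM ANALYZE numeric_width_test;
SELECT relpages, reltuples,
       relpages * 8192.0 / reltuples AS bytes_per_row
  FROM pg_class
 WHERE relname = 'numeric_width_test';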
[ { "msg_contents": "\n\nHi , \n\nmy database seems to be taking too long for a select count(*)\ni think there are lot of dead rows. I do a vacuum full it improves\nbu again the performance drops in a short while ,\ncan anyone please tell me if anything worng with my fsm settings\ncurrent fsm=55099264 (not sure how i calculated it)\n\nRegds\nMallah\n\ntradein_clients=# SELECT count(*) from data_bank.profiles ;\n\n+--------+\n| count |\n+--------+\n| 123065 |\n+--------+\n(1 row)\n\nTime: 49756.969 ms\ntradein_clients=#\ntradein_clients=#\ntradein_clients=# VACUUM full verbose analyze data_bank.profiles ;\nINFO: vacuuming \"data_bank.profiles\"\n\nINFO: \"profiles\": found 0 removable, 369195 nonremovable row versions in 43423 pages\nDETAIL: 246130 dead row versions cannot be removed yet.\nNonremovable row versions range from 136 to 2036 bytes long.\nThere were 427579 unused item pointers.\nTotal free space (including removable row versions) is 178536020 bytes.\n15934 pages are or will become empty, including 0 at the end of the table.\n38112 pages containing 178196624 free bytes are potential move destinations.\nCPU 1.51s/0.63u sec elapsed 23.52 sec.\nINFO: index \"profiles_pincode\" now contains 369195 row versions in 3353 pages\nDETAIL: 0 index row versions were removed.\n379 index pages have been deleted, 379 are currently reusable.\nCPU 0.20s/0.24u sec elapsed 22.73 sec.\nINFO: index \"profiles_city\" now contains 369195 row versions in 3411 pages\nDETAIL: 0 index row versions were removed.\n1030 index pages have been deleted, 1030 are currently reusable.\nCPU 0.17s/0.21u sec elapsed 20.67 sec.\nINFO: index \"profiles_branch\" now contains 369195 row versions in 2209 pages\nDETAIL: 0 index row versions were removed.\n783 index pages have been deleted, 783 are currently reusable.\nCPU 0.07s/0.14u sec elapsed 6.38 sec.\nINFO: index \"profiles_area_code\" now contains 369195 row versions in 2606 pages\nDETAIL: 0 index row versions were removed.\n856 index pages have been deleted, 856 are currently reusable.\nCPU 0.11s/0.17u sec elapsed 19.62 sec.\nINFO: index \"profiles_source\" now contains 369195 row versions in 3137 pages\nDETAIL: 0 index row versions were removed.\n1199 index pages have been deleted, 1199 are currently reusable.\nCPU 0.14s/0.12u sec elapsed 9.95 sec.\nINFO: index \"co_name_index_idx\" now contains 368742 row versions in 3945 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.19s/0.69u sec elapsed 11.56 sec.\nINFO: index \"address_index_idx\" now contains 368898 row versions in 4828 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.16s/0.61u sec elapsed 9.17 sec.\nINFO: index \"profiles_exp_cat\" now contains 153954 row versions in 2168 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.25u sec elapsed 3.14 sec.\nINFO: index \"profiles_imp_cat\" now contains 73476 row versions in 1030 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.11u sec elapsed 8.73 sec.\nINFO: index \"profiles_manu_cat\" now contains 86534 row versions in 1193 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.13u sec elapsed 1.44 sec.\nINFO: index \"profiles_serv_cat\" now contains 19256 row versions in 267 pages\nDETAIL: 0 index row versions were 
removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.01u sec elapsed 0.25 sec.\nINFO: index \"profiles_pid\" now contains 369195 row versions in 812 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.12u sec elapsed 0.41 sec.\nINFO: index \"profiles_pending_branch_id\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"profiles\": moved 0 row versions, truncated 43423 to 43423 pages\nDETAIL: CPU 1.76s/3.01u sec elapsed 60.39 sec.\nINFO: vacuuming \"pg_toast.pg_toast_39873340\"\nINFO: \"pg_toast_39873340\": found 0 removable, 65 nonremovable row versions in 15 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 47 to 2034 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 17672 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n14 pages containing 17636 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.77 sec.\nINFO: index \"pg_toast_39873340_index\" now contains 65 row versions in 2 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.46 sec.\nINFO: \"pg_toast_39873340\": moved 0 row versions, truncated 15 to 15 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"data_bank.profiles\"\nINFO: \"profiles\": 43423 pages, 123065 rows sampled, 123065 estimated total rows\nVACUUM\nTime: 246989.138 ms\ntradein_clients=# SELECT count(*) from data_bank.profiles ;\n+--------+\n| count |\n+--------+\n| 123065 |\n+--------+\n(1 row)\n\nTime: 4978.725 ms\ntradein_clients=#\n\nIMPORVED but still not very good.\n\nRegds\nMallah.\n\n\n\n", "msg_date": "Fri, 14 Nov 2003 12:51:38 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Help with count(*)" }, { "msg_contents": "On Friday 14 November 2003 12:51, Rajesh Kumar Mallah wrote:\n> Hi ,\n>\n> my database seems to be taking too long for a select count(*)\n> i think there are lot of dead rows. I do a vacuum full it improves\n> bu again the performance drops in a short while ,\n> can anyone please tell me if anything worng with my fsm settings\n> current fsm=55099264 (not sure how i calculated it)\n\nIf you don't need exact count, you can use statistics. 
Just analyze frequently \nand you will get the statistics.\n\nand I didn't exact;y understand this in the text.\n\nINFO:  \"profiles\": found 0 removable, 369195 nonremovable row versions in \n43423 pages\nDETAIL:  246130 dead row versions cannot be removed yet.\n\nIs there a transaction holoding up large amount of stuff?\n\n Shridhar\n\n", "msg_date": "Fri, 14 Nov 2003 13:00:34 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Rajesh Kumar Mallah) wrote:\n> INFO: \"profiles\": found 0 removable, 369195 nonremovable row versions in 43423 pages\n> DETAIL: 246130 dead row versions cannot be removed yet.\n> Nonremovable row versions range from 136 to 2036 bytes long.\n\nIt seems as though you have a transaction open that is holding onto a\nwhole lot of old rows.\n\nI have seen this happen somewhat-invisibly when a JDBC connection\nmanager opens transactions for each connection, and then no processing\nhappens to use those connections for a long time. The open\ntransactions prevent vacuums from doing any good...\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/multiplexor.html\n\"Waving away a cloud of smoke, I look up, and am blinded by a bright,\nwhite light. It's God. No, not Richard Stallman, or Linus Torvalds,\nbut God. In a booming voice, He says: \"THIS IS A SIGN. USE LINUX, THE\nFREE Unix SYSTEM FOR THE 386.\" -- Matt Welsh\n", "msg_date": "Fri, 14 Nov 2003 09:13:48 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Christopher Browne kirjutas R, 14.11.2003 kell 16:13:\n> Martha Stewart called it a Good Thing when [email protected] (Rajesh Kumar Mallah) wrote:\n> > INFO: \"profiles\": found 0 removable, 369195 nonremovable row versions in 43423 pages\n> > DETAIL: 246130 dead row versions cannot be removed yet.\n> > Nonremovable row versions range from 136 to 2036 bytes long.\n> \n> It seems as though you have a transaction open that is holding onto a\n> whole lot of old rows.\n> \n> I have seen this happen somewhat-invisibly when a JDBC connection\n> manager opens transactions for each connection, and then no processing\n> happens to use those connections for a long time. The open\n> transactions prevent vacuums from doing any good...\n\nCan't the backend be made to delay the \"real\" start of transaction until\nthe first query gets executed ?\n\n------------\nHannu\n\n", "msg_date": "Fri, 14 Nov 2003 19:43:27 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Christopher Browne kirjutas R, 14.11.2003 kell 16:13:\n>> I have seen this happen somewhat-invisibly when a JDBC connection\n>> manager opens transactions for each connection, and then no processing\n>> happens to use those connections for a long time. The open\n>> transactions prevent vacuums from doing any good...\n\n> Can't the backend be made to delay the \"real\" start of transaction until\n> the first query gets executed ?\n\nThat is on the TODO list. I looked at it briefly towards the end of the\n7.4 development cycle, and decided that it was nontrivial and I didn't\nhave time to make it happen before beta started. 
I don't recall why it\ndidn't seem trivial.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Nov 2003 13:49:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*) " }, { "msg_contents": "After a long battle with technology, [email protected] (Hannu Krosing), an earthling, wrote:\n> Christopher Browne kirjutas R, 14.11.2003 kell 16:13:\n>> Martha Stewart called it a Good Thing when [email protected] (Rajesh Kumar Mallah) wrote:\n>> > INFO: \"profiles\": found 0 removable, 369195 nonremovable row versions in 43423 pages\n>> > DETAIL: 246130 dead row versions cannot be removed yet.\n>> > Nonremovable row versions range from 136 to 2036 bytes long.\n>> \n>> It seems as though you have a transaction open that is holding onto a\n>> whole lot of old rows.\n>> \n>> I have seen this happen somewhat-invisibly when a JDBC connection\n>> manager opens transactions for each connection, and then no processing\n>> happens to use those connections for a long time. The open\n>> transactions prevent vacuums from doing any good...\n>\n> Can't the backend be made to delay the \"real\" start of transaction until\n> the first query gets executed ?\n\nOne would hope so. Some time when I have the Round Tuits, I ought to\ntake a browse of the connection pool code to notice if there's\nanything to notice. \n\nThe thing that I keep imagining would be a slick idea would be to have\na thread periodically go through once for however many connections the\npool permits and fire a short transaction through every\notherwise-unoccupied connection in the pool, in effect, doing a sort\nof \"vacuum\" of the connections. I don't get very favorable reactions\nwhen I suggest that, though...\n-- \n(reverse (concatenate 'string \"ac.notelrac.teneerf\" \"@\" \"454aa\"))\nhttp://cbbrowne.com/info/sgml.html\nRules of the Evil Overlord #80. \"If my weakest troops fail to\neliminate a hero, I will send out my best troops instead of wasting\ntime with progressively stronger ones as he gets closer and closer to\nmy fortress.\" <http://www.eviloverlord.com/>\n", "msg_date": "Fri, 14 Nov 2003 14:16:56 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Hannu Krosing wrote:\n\n>Christopher Browne kirjutas R, 14.11.2003 kell 16:13:\n> \n>\n>>Martha Stewart called it a Good Thing when [email protected] (Rajesh Kumar Mallah) wrote:\n>> \n>>\n>>>INFO: \"profiles\": found 0 removable, 369195 nonremovable row versions in 43423 pages\n>>>DETAIL: 246130 dead row versions cannot be removed yet.\n>>>Nonremovable row versions range from 136 to 2036 bytes long.\n>>> \n>>>\n>>It seems as though you have a transaction open that is holding onto a\n>>whole lot of old rows.\n>>\n>>I have seen this happen somewhat-invisibly when a JDBC connection\n>>manager opens transactions for each connection, and then no processing\n>>happens to use those connections for a long time. The open\n>>transactions prevent vacuums from doing any good...\n>> \n>>\n>\n>Can't the backend be made to delay the \"real\" start of transaction until\n>the first query gets executed ?\n> \n>\n\nThat seems counter intuitive doesn't it? 
Why write more code in the \nserver when the client is the thing that has the problem?\n\nWill\n\n", "msg_date": "Fri, 14 Nov 2003 12:40:23 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Will LaShell <[email protected]> writes:\n> Hannu Krosing wrote:\n>> Can't the backend be made to delay the \"real\" start of transaction until\n>> the first query gets executed ?\n\n> That seems counter intuitive doesn't it? Why write more code in the \n> server when the client is the thing that has the problem?\n\nBecause there are a lot of clients with the same problem :-(\n\nA more principled argument is that we already postpone the setting of\nthe transaction snapshot until the first query arrives within the\ntransaction. In a very real sense, the setting of the snapshot *is*\nthe start of the transaction. So it would make sense if incidental\nstuff like VACUUM also thought that the transaction hadn't started\nuntil the first query arrives. (I believe the previous discussion\nalso agreed that we wanted to postpone the freezing of now(), which\ncurrently also happens at BEGIN rather than the first command after\nBEGIN.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Nov 2003 15:38:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*) " }, { "msg_contents": "On Fri, Nov 14, 2003 at 02:16:56PM -0500, Christopher Browne wrote:\n> otherwise-unoccupied connection in the pool, in effect, doing a sort\n> of \"vacuum\" of the connections. I don't get very favorable reactions\n> when I suggest that, though...\n\nBecause it's a kludge on top of another kludge, perhaps? ;-) This\nneeds to be fixed properly, not through an ungraceful series of\nworkarounds.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 14 Nov 2003 16:30:16 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "On Fri, 14 Nov 2003, Tom Lane wrote:\n\n> I believe the previous discussion also agreed that we wanted to postpone\n> the freezing of now(), which currently also happens at BEGIN rather than\n> the first command after BEGIN.\n\nOr should that happen at the first call to now()?\n\n/me should ge back and try to find this previous discussion.\n\n-- \n/Dennis\n\n", "msg_date": "Sat, 15 Nov 2003 11:21:09 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*) " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> (I believe the previous discussion also agreed that we wanted to\n> postpone the freezing of now(), which currently also happens at\n> BEGIN rather than the first command after BEGIN.)\n\nThat doesn't make sense to me: from a user's perspective, the \"start\nof the transaction\" is when the BEGIN is issued, regardless of any\ntricks we may play in the backend.\n\nMaking now() return the time the current transaction started is\nreasonably logical; making now() return \"the time when the first\ncommand after the BEGIN in the current transaction was issued\" makes a\nlot less sense to me.\n\n-Neil\n\n", "msg_date": "Sat, 15 Nov 2003 15:20:43 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*)" }, { "msg_contents": "Redirected to 
-hackers\n\nNeil Conway kirjutas L, 15.11.2003 kell 22:20:\n> Tom Lane <[email protected]> writes:\n> > (I believe the previous discussion also agreed that we wanted to\n> > postpone the freezing of now(), which currently also happens at\n> > BEGIN rather than the first command after BEGIN.)\n> \n> That doesn't make sense to me: from a user's perspective, the \"start\n> of the transaction\" is when the BEGIN is issued, regardless of any\n> tricks we may play in the backend.\n\nFor me, the \"start of transaction\" is not about time, but about grouping\na set of statements into one. So making the exact moment of \"start\" be\nthe first statement that actually does something with data seems\nperfectly reasonable. If you really need to preserve time, do \"select\ncurrent_timestamp\" and use the result.\n\n> Making now() return the time the current transaction started is\n> reasonably logical; making now() return \"the time when the first\n> command after the BEGIN in the current transaction was issued\" makes a\n> lot less sense to me.\n\nfor me \"the time the current transactuion is started\" == \"the time when\nthe first command after the BEGIN in the current transaction was issued\"\nand thus I see no conflict here ;)\n\nDelaying the locking effects of transactions as long as possible can\nincrease performance overall, not just for pathological clients that sit\non idle open transactions.\n\nProbably the latest time we can start the transaction is ath the start\nof executor step after the first statement in a transaction is planned\nand optimized.\n\n---------------\nHannu\n\n\n", "msg_date": "Sun, 16 Nov 2003 13:58:08 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Probably the latest time we can start the transaction is ath the start\n> of executor step after the first statement in a transaction is planned\n> and optimized.\n\nThe transaction has to exist before it can take locks, so the above\nwould not fly.\n\nA complete example of what we have to think about is:\n\n\tBEGIN;\n\tSET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n\tLOCK TABLE foo;\n\tUPDATE foo ... -- or in general a SELECT/UPDATE/INSERT/DELETE query\n\t... etc ...\n\nThe transaction snapshot *must* be set at the time of the first query\n(here, the UPDATE). It obviously can't be later, and it cannot be\nearlier either, because in this sort of example you need the requested\nlocks to be taken before the snapshot is set.\n\nThe transaction must be created (as observed by other backends, in\nparticular VACUUM) not later than the LOCK statement, else there is\nnothing that can own the lock. In principle though, the effects of\nBEGIN and perhaps SET could be strictly local to the current backend,\nand only when we hit a LOCK or query do we create the transaction\nexternally.\n\nIn practice the problem we observe is clients that issue BEGIN and then\ngo to sleep (typically because of poorly-designed autocommit behavior in\ninterface libraries). Postponing externally-visible creation of the\ntransaction to the first command after BEGIN would be enough to get\naround the real-world issues, and it would not require code changes\nnearly as extensive as trying to let other stuff like SET happen\n\"before\" the transaction starts.\n\nThere isn't any compelling implementation reason when to freeze the\nvalue of now(). Reasonable options are\n\t1. 
at BEGIN (current behavior)\n\t2. at transaction's external creation \n\t3. at freezing of transaction snapshot\n#1 and #2 are actually the same at the moment, but could be decoupled\nas sketched above, in which case the behavior of #2 would effectively\nbecome \"at first command afte BEGIN\".\n\nIn the previous thread:\nhttp://archives.postgresql.org/pgsql-hackers/2003-03/msg01178.php\nI argued that now() should be frozen at the time of the transaction\nsnapshot, and I still think that that's a defensible behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Nov 2003 09:50:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*)) " }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> (I believe the previous discussion also agreed that we wanted to\n>> postpone the freezing of now(), which currently also happens at\n>> BEGIN rather than the first command after BEGIN.)\n\n> That doesn't make sense to me: from a user's perspective, the \"start\n> of the transaction\" is when the BEGIN is issued, regardless of any\n> tricks we may play in the backend.\n\nThat's defensible when the user issued the BEGIN himself. When the\nBEGIN is coming from some interface library's autocommit logic, it's\na lot less defensible. If you consult the archives, you will find\nactual user complaints about \"why is now() returning a very old time?\"\nthat we traced to use of interface layers that handle \"commit()\" by\nissuing \"COMMIT; BEGIN;\".\n\nWhen BEGIN actually is issued by live application logic, I'd expect it\nto be followed immediately by some kind of command --- so the user would\nbe unable to tell the difference in practice.\n\nHannu moved this thread to -hackers, please follow up there if you want\nto discuss it more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Nov 2003 10:22:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with count(*) " }, { "msg_contents": "On Sun, 16 Nov 2003, Tom Lane wrote:\n\n> There isn't any compelling implementation reason when to freeze the\n> value of now(). Reasonable options are\n> \t1. at BEGIN (current behavior)\n> \t2. at transaction's external creation \n> \t3. at freezing of transaction snapshot\n> #1 and #2 are actually the same at the moment, but could be decoupled\n> as sketched above, in which case the behavior of #2 would effectively\n> become \"at first command afte BEGIN\".\n> \n> I argued that now() should be frozen at the time of the transaction\n> snapshot, and I still think that that's a defensible behavior.\n\nIs it important exactly what value is returned as long as it's the same in \nthe whole transaction? I think not.\n\nTo me it would be just as logical to fix it at the first call to now() in\nthe transaction. The first time you call it you get the actual time as it\nis now and the next time you get the same as before since every operation\nin the transaction logically happens at the same time. 
If you don't call\nnow() at all, the system time will not be fetched at all.\n\n-- \n/Dennis\n\n", "msg_date": "Sun, 16 Nov 2003 16:51:49 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> For me, the \"start of transaction\" is not about time, but about grouping\n> a set of statements into one. So making the exact moment of \"start\" be\n> the first statement that actually does something with data seems\n> perfectly reasonable.\n\nThis might be a perfectly logical change in semantics, but what\nbenefit does it provide over the old way of doing things?\n\nWhat does BEGIN actually do now, from a user's perspective? At\npresent, it \"starts a transaction block\", which is pretty simple. If\nwe adopted the proposed change, it would \"change the state of the\nsystem so that the next command is part of a new transaction\". This is\nnaturally more complex; but more importantly, what benefit does it\nACTUALLY provide to the user?\n\n(I can't see one, but perhaps I'm missing something...)\n\n> Delaying the locking effects of transactions as long as possible can\n> increase performance overall, not just for pathological clients that sit\n> on idle open transactions.\n\nI agree, but this is irrelevant to the semantics of now().\n\n-Neil\n\n", "msg_date": "Sun, 16 Nov 2003 17:55:41 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> That's defensible when the user issued the BEGIN himself. When the\n> BEGIN is coming from some interface library's autocommit logic, it's\n> a lot less defensible. If you consult the archives, you will find\n> actual user complaints about \"why is now() returning a very old time?\"\n> that we traced to use of interface layers that handle \"commit()\" by\n> issuing \"COMMIT; BEGIN;\".\n\nHmmm... I agree this behavior isn't ideal, although I can see the case\nfor viewing this as a mistake by the application developer: they are\nassuming that they know exactly when transactions begin, which is not\na feature provided by their language interface. They should be using\ncurrent_timestamp, and/or changing their language interface's\nconfiguration.\n\nThat said, I think this is a minor irritation at best. The dual\ndrawbacks of breaking backward compatibility and making the BEGIN\nsemantics more confusing is enough to leave me satisfies with the\nstatus quo.\n\nIf we do change this, I think Dennis' idea of making now() always\nreturn the same value within a given transaction is interesting: that\nmight be a way to fix this problem without confusing the semantics of\nBEGIN.\n\n-Neil\n\n", "msg_date": "Sun, 16 Nov 2003 18:18:02 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Hmmm... I agree this behavior isn't ideal, although I can see the case\n> for viewing this as a mistake by the application developer: they are\n> assuming that they know exactly when transactions begin, which is not\n> a feature provided by their language interface.\n\nWell, actually, it's a bug in the interface IMHO. But as I said in the\nlast thread, it's a fairly widespread bug. 
We've been taking the\nposition that the interface libraries should get fixed, and that's not\nhappening. It's probably time to look at a server-side fix.\n\n> If we do change this, I think Dennis' idea of making now() always\n> return the same value within a given transaction is interesting:\n\nYou mean the time of the first now() call? I thought that was an\ninteresting idea also, but it's probably not going to look so hot\nwhen we complete the TODO item of adding access to\nthe start-of-current-statement time. Having start-of-transaction be\nlater than start-of-statement isn't gonna fly :-(. If we were willing\nto abandon that TODO item then I'd be interested in defining now() as\nDennis suggested.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Nov 2003 19:08:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*)) " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <[email protected]> writes:\n> > Hmmm... I agree this behavior isn't ideal, although I can see the case\n> > for viewing this as a mistake by the application developer: they are\n> > assuming that they know exactly when transactions begin, which is not\n> > a feature provided by their language interface.\n> \n> Well, actually, it's a bug in the interface IMHO. But as I said in the\n> last thread, it's a fairly widespread bug. We've been taking the\n> position that the interface libraries should get fixed, and that's not\n> happening. It's probably time to look at a server-side fix.\n> \n> > If we do change this, I think Dennis' idea of making now() always\n> > return the same value within a given transaction is interesting:\n> \n> You mean the time of the first now() call? I thought that was an\n> interesting idea also, but it's probably not going to look so hot\n> when we complete the TODO item of adding access to\n> the start-of-current-statement time. Having start-of-transaction be\n> later than start-of-statement isn't gonna fly :-(. If we were willing\n> to abandon that TODO item then I'd be interested in defining now() as\n> Dennis suggested.\n\nDefining now() as the first call seems pretty arbitrary to me. I can't\nthink of any time-based interface that has that API. And what if a\ntrigger called now() in an earlier query and you didn't even know about\nit.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 16 Nov 2003 19:31:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "\nNeil Conway <[email protected]> writes:\n\n> What does BEGIN actually do now, from a user's perspective? \n\nI think you're thinking about this all wrong. BEGIN doesn't \"do\" anything.\nIt's not a procedural statement, it's a declaration. It declares that the\nblock of statements form a transaction so reads should be consistent and\nfailures should be handled in a particular way to preserve data integrity.\n\nGiven that declaration and the guarantees it requires of the database it's\nthen up to the database to figure out what constraints that imposes on what\nthe database can do and still meet the guarantees the BEGIN declaration\nrequires. 
The more clever the database is about minimizing those restrictions\nthe better as it means the database can run more efficiently.\n\nFor what it's worth, this is how Oracle handles things too. On the\ncommand-line issuing a BEGIN following a COMMIT is just noise; you're _always_\nin a transaction. A COMMIT ends the previous the transaction and implicitly\nstarts the next transaction. But the snapshot isn't frozen until you first\nread from a table.\n\nI'm not sure what other databases do, but I think this is why clients behave\nlike this. They think of BEGIN as a declaration and therefore initiating a\nCOMMIT;BEGIN; at the end of every request is perfectly logical, and works fine\nin at least Oracle, and probably other databases.\n\n-- \ngreg\n\n", "msg_date": "17 Nov 2003 01:11:53 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "On Sun, 17 Nov 2003, Greg Stark wrote:\n\n> Neil Conway <[email protected]> writes:\n>\n> > What does BEGIN actually do now, from a user's perspective?\n>\n> I think you're thinking about this all wrong. BEGIN doesn't \"do\" anything.\n> It's not a procedural statement, it's a declaration. It declares that the\n> block of statements form a transaction so reads should be consistent and\n> failures should be handled in a particular way to preserve data integrity.\n>\n> Given that declaration and the guarantees it requires of the database it's\n> then up to the database to figure out what constraints that imposes on what\n> the database can do and still meet the guarantees the BEGIN declaration\n> requires. The more clever the database is about minimizing those restrictions\n> the better as it means the database can run more efficiently.\n>\n> For what it's worth, this is how Oracle handles things too. On the\n> command-line issuing a BEGIN following a COMMIT is just noise; you're _always_\n> in a transaction. A COMMIT ends the previous the transaction and implicitly\n> starts the next transaction. But the snapshot isn't frozen until you first\n> read from a table.\n\nThe earlier portion of the described behavior is AFAICS not complient to\nSQL99 at least. COMMIT (without AND CHAIN) terminates a transaction and\ndoes not begin a new one. The new transaction does not begin until a\ntransaction initiating command (for example START TRANSACTION, CREATE\nTABLE, INSERT, ...) is executed. The set of things you can do that aren't\ninitiating is fairly small admittedly, but it's not a null set.\n", "msg_date": "Sun, 16 Nov 2003 23:09:05 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with" }, { "msg_contents": "Tom Lane kirjutas E, 17.11.2003 kell 02:08:\n> Neil Conway <[email protected]> writes:\n> > Hmmm... I agree this behavior isn't ideal, although I can see the case\n> > for viewing this as a mistake by the application developer: they are\n> > assuming that they know exactly when transactions begin, which is not\n> > a feature provided by their language interface.\n> \n> Well, actually, it's a bug in the interface IMHO. But as I said in the\n> last thread, it's a fairly widespread bug. \n\nI'm not sure that it is a client-side bug. For example Oracle seems to\n_always_ have a transaction going, i.e. 
you can't be \"outside\" of\ntransaction, and you use just COMMIT to commit old _and_start_new_\ntransaction.\n\nIIRC the same is true for DB2.\n\nFor these database the BEGIN TRANSACTION command is mainly used for\nstarting nested transactions, which we don't have.\n\n> We've been taking the\n> position that the interface libraries should get fixed, and that's not\n> happening. It's probably time to look at a server-side fix.\n\nMaybe \"fixing\" the interface libraries would make them incompatible with\n*DBC's for all other databases in some subtle ways ?\n\n-----------------\nHannu\n\n", "msg_date": "Mon, 17 Nov 2003 12:16:06 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with" }, { "msg_contents": "Bruce Momjian kirjutas E, 17.11.2003 kell 02:31:\n\n> Defining now() as the first call seems pretty arbitrary to me. I can't\n> think of any time-based interface that has that API. And what if a\n> trigger called now() in an earlier query and you didn't even know about\n> it.\n\nThat would be OK. The whole point of that previous discussion was to\nhave now() that returns the same value over the span of the whole\ntransaction.\n\nIt would be even better to have now() that returns the time current\ntransaction is COMMITted as this is the time other backend become aware\nof it ;)\n\n-----------\nHannu\n\n", "msg_date": "Mon, 17 Nov 2003 12:19:06 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with" }, { "msg_contents": "Hannu Krosing wrote:\n\n> Tom Lane kirjutas E, 17.11.2003 kell 02:08:\n> \n>>Neil Conway <[email protected]> writes:\n>>\n>>>Hmmm... I agree this behavior isn't ideal, although I can see the case\n>>>for viewing this as a mistake by the application developer: they are\n>>>assuming that they know exactly when transactions begin, which is not\n>>>a feature provided by their language interface.\n>>\n>>Well, actually, it's a bug in the interface IMHO. But as I said in the\n>>last thread, it's a fairly widespread bug. \n> \n> \n> I'm not sure that it is a client-side bug. For example Oracle seems to\n> _always_ have a transaction going, i.e. you can't be \"outside\" of\n> transaction, and you use just COMMIT to commit old _and_start_new_\n> transaction.\n> \n> IIRC the same is true for DB2.\n\nActually, in oracle a new transaction starts with first DDL after a commit. That \ndoes not include DML BTW.\n\nAnd Damn.. Actually I recently fixed a \"bug\" where I had to force a start of \ntransaction in Pro*C, immediately after commit. Otherwise a real start of \ntransaction could be anywhere down the line, causing some weird concurrency \nissues. Rather than fiddling with oracle support, I would hack my source code, \nespecially this is not the first oracle bug I have worked around....:-(\n\nThe fact that I couldn't control exact transaction start was such a irritation \nto put it mildly.. I sooooo missed 'exec sql begin work' in ecpg..:-)\n\n>>We've been taking the\n>>position that the interface libraries should get fixed, and that's not\n>>happening. It's probably time to look at a server-side fix.\n\nI hope that does not compramise transaction control I have with libpq/ecpg etc.\n\nAnd when we are talking about interface libraries, how many of them are within \nPG control and how many are not? With languages maintenend by postgresql group, \nit should behave correctly, right? 
E.g pl/perl,pl/python etc.\n\nAnd for other interface libraries, what are they exactly? php? Can't we just \nsend them a stinker/patch to get that damn thing right(Whatever wrong they are \ndoing. I have kinda lost thread on it..:-) Was it exact time of transaction \nstart v/s now()?)\n\n Shridhar\n\n", "msg_date": "Mon, 17 Nov 2003 16:13:28 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction" }, { "msg_contents": "Hannu Krosing wrote:\n> Bruce Momjian kirjutas E, 17.11.2003 kell 02:31:\n> \n> > Defining now() as the first call seems pretty arbitrary to me. I can't\n> > think of any time-based interface that has that API. And what if a\n> > trigger called now() in an earlier query and you didn't even know about\n> > it.\n> \n> That would be OK. The whole point of that previous discussion was to\n> have now() that returns the same value over the span of the whole\n> transaction.\n\nI think my issue is that there isn't any predictable way for a user to\nknow when the now() time is recorded. By using start of transaction, at\nleast we know for sure the point in time it is showing.\n\n> It would be even better to have now() that returns the time current\n> transaction is COMMITted as this is the time other backend become aware\n> of it ;)\n\nTrue, but implementing that would be very hard.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Nov 2003 19:52:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*))" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Hannu Krosing wrote:\n>> It would be even better to have now() that returns the time current\n>> transaction is COMMITted as this is the time other backend become aware\n>> of it ;)\n\n> True, but implementing that would be very hard.\n\nSon, that was a *joke* ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Nov 2003 02:34:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start of transaction (was: Re: [PERFORM] Help with count(*)) " } ]
[ { "msg_contents": "Hi Everyone,\n \nI am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x onwards) \nand with all of them I noticed same problem with INSERTs when there is a\nlarge data set. Just to so you guys can compare time it takes to insert\none row into a table when there are only few rows present and when there\nare thousands:\n\nRows Present\t\tStart Time\t\tFinish Time\n------------------------------------------------------------\n100\t\t\t1068790804.12 \t1068790804.12\n1000\t\t\t1068790807.87\t \t1068790807.87\n5000\t\t\t1068790839.26\t\t1068790839.27\n10000\t\t\t1068790909.24 \t\t1068790909.26\n20000\t\t\t1068791172.82\t\t1068791172.85\n30000\t\t\t1068791664.06\t\t1068791664.09 \n40000\t\t\t1068792369.94\t\t1068792370.0\n50000\t\t\t1068793317.53\t\t1068793317.6\n60000\t\t\t1068794369.38\t\t1068794369.47\n\nAs you can see if takes awfully lots of time for me just to have those\nvalues inserted. Now to make a picture a bit clearer for you this table \nhas lots of information in there, about 25 columns. Also there are few\nindexes that I created so that the process of selecting values from there\nis faster which by the way works fine. Selecting anything takes under 5\nseconds.\n\nAny help would be greatly appreciated even pointing me in the right\ndirection where to ask this question. By the way I designed the database\nthis way as my application that uses PGSQL a lot during the execution so\nthere was a huge need for fast SELECTs. Our experiments are getting larger\nand larger every day so fast inserts would be good as well.\n\nJust to note those times above are of INSERTs only. Nothing else done that\nwould be included in those times. Machine was also free and that was the\nonly process running all the time and the machine was Intel(R) Pentium(R)\n4 CPU 2.40GHz.\n\nRegards,\nSlavisa\n\n\n\n\n", "msg_date": "Fri, 14 Nov 2003 20:38:33 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT extremely slow with large data sets (fwd)" }, { "msg_contents": "On Fri, 14 Nov 2003 20:38:33 +1100 (EST)\nSlavisa Garic <[email protected]> wrote:\n\n> Any help would be greatly appreciated even pointing me in the right\n> direction where to ask this question. By the way I designed the\n> database this way as my application that uses PGSQL a lot during the\n> execution so there was a huge need for fast SELECTs. Our experiments\n> are getting larger and larger every day so fast inserts would be good\n> as well.\n> \n\nFirst, you need to upgrade to 7.3.4, 7.4 is prefable if a dump/reload is\nnot too bad.\n\nStandard set of questions:\n\n1. Any foreign keys\n2. are these inserts batched into transactions\n3. CPU usage?\n4. OS?\n5. PG config? [shared_buffers, effective_cache_size, etc]\n6. IO saturation?\n\nAlso, try searching the archives. lots of juicy info there too.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n", "msg_date": "Fri, 14 Nov 2003 08:37:55 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT extremely slow with large data sets (fwd)" } ]
[ { "msg_contents": "Heya,\n\nFYI just spotted this and thought I would pass it on, for all those who are\nlooking at new boxes.\n\nhttp://www.theinquirer.net/?article=12665\nhttp://www.promise.com/product/product_detail_eng.asp?productId=112&familyId\n=2\n\nLooks like a four-channel hot-swap IDE (SATA) hardware RAID controller with\nup to 256Mb onboard RAM.\n\n\nNick\n\n\n\n\n", "msg_date": "Fri, 14 Nov 2003 10:11:44 -0000", "msg_from": "\"Nick Barr\" <[email protected]>", "msg_from_op": true, "msg_subject": "IDE Hardware RAID Controller" } ]
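On the "are these inserts batched into transactions" question above: with autocommit, every INSERT pays for its own commit, so wrapping batches in a single transaction — or, better, using COPY for bulk loads — usually changes the picture dramatically. A sketch with made-up table and column names:

BEGIN;
INSERT INTO experiment_results (run_id, value) VALUES (1, 0.42);
INSERT INTO experiment_results (run_id, value) VALUES (1, 0.43);
-- ... the rest of the batch ...
COMMIT;

-- for large loads, COPY avoids per-statement overhead entirely
COPY experiment_results (run_id, value) FROM '/tmp/results.dat';

This does not by itself explain why inserts slow down as the table grows — foreign-key checks against unindexed columns and index maintenance are the usual suspects there — but it is the first thing to rule out.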
[ { "msg_contents": "Dear Gurus,\n\nI have two SQL function that produce different times and I can't understand\nwhy. Here is the basic difference between them:\n\nCREATE FUNCTION test_const_1234 () RETURNS int4 AS '\n SELECT ... 1234 ... 1234 .... 1234 ...\n' LANGUAGE 'SQL';\n\nCREATE FUNCTION test_param (int4) RETURNS int4 AS '\n SELECT ... $1 .... $1 .... $1 ...\n' LANGUAGE 'SQL';\n\nSome sample times for different data:\n\ntest_const_1234() 450 msec\ntest_param(1234) 2700-4000 msec (probably disk cache)\ntest_const_5678() 13500 msec\ntest_param(5678) 14500 msec\n\nIs there a sane explanation? a solution?\nI can send more info if you wish.\n\nTIA,\nG.\n------------------------------- cut here -------------------------------\n\n", "msg_date": "Fri, 14 Nov 2003 16:08:27 +0100", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "constant vs function param differs in performance" }, { "msg_contents": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> I have two SQL function that produce different times and I can't understand\n> why.\n\nThe planner often produces different plans when there are constants in\nWHERE clauses than when there are variables, because it can get more\naccurate ideas of how many rows will be retrieved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Nov 2003 15:59:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constant vs function param differs in performance " }, { "msg_contents": "Dear Tom,\n\nThanks for your early response.\n\nAn addition: the nastier difference increased by adding an index (it was an\nessential index for this query):\n\n func with param improved from 2700ms to 2300ms\n func with constant improved from 400ms to 31ms\n inline query improved from 390ms to 2ms\n\nSo am I reading correct and it is completely normal and can't be helped?\n(couldn't have tried 7.4 yet)\n\nIn case it reveals something:\n\n------------------------------- cut here -------------------------------\nSELECT field FROM\n(SELECT field, sum(something)=0 AS boolvalue\n FROM\n (SELECT * FROM subselect1 NATURAL LEFT JOIN subselect2\n UNION\n SELECT * FROM subselect3 NATURAL LEFT JOIN subselect4\n ) AS u\n GROUP BY field) AS t\nWHERE not boolvalue\nORDER BY simple_sql_func_returns_bool(field) DESC\nLIMIT 1;\n------------------------------- cut here -------------------------------\n\nG.\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nSent: Friday, November 14, 2003 9:59 PM\n\n\n> \"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> > I have two SQL function that produce different times and I can't\nunderstand\n> > why.\n>\n> The planner often produces different plans when there are constants in\n> WHERE clauses than when there are variables, because it can get more\n> accurate ideas of how many rows will be retrieved.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Wed, 19 Nov 2003 17:04:01 +0100", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: constant vs function param differs in performance" } ]
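On the constant-versus-parameter difference in the thread just above: when the planner only sees a parameter it falls back on generic selectivity guesses, as explained in the reply. One common workaround from this era is to build the query as a string inside a plpgsql function and EXECUTE it, so each call is planned with the literal value. This is only a sketch — the thread's real query is not shown, so example_table, key and val are placeholders:

CREATE FUNCTION test_param_dyn (int4) RETURNS int4 AS '
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE
        ''SELECT val FROM example_table WHERE key = '' || $1::text
    LOOP
        RETURN rec.val;   -- first row is enough for this example
    END LOOP;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

The trade-off is that the statement is re-planned on every call, which is usually worth it when the plan is as sensitive to the actual value as the timings above suggest.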
[ { "msg_contents": "Hi-\n\nI'm seeing estimates for n_distinct that are way off for a large table\n(8,700,000 rows). They get better by setting the stats target higher, but\nare still off by a factor of 10 with the stats set to 1000. I've noticed and\nreported a similar pattern before on another table. Because this follows the\nsame very consistent pattern, I figured it was worth reporting again. This\nlooks more like a bug than randomness. If the poor result was simply due to\nhaving a small sample to work from, the estimates should be all over the\nmap, but these are consistently low, and vary in almost exact inverse\nproportion to the stats target:\n\n run 1: run2: run3:\nn_distinct estimate, statistics = 10: 3168 3187 3212\nn_distinct estimate, statistics = 100: 23828 24059 23615\nn_distinct estimate, statistics = 1000: 194690 194516 194081\nActual distinct values: 3340724\n\nOr to put it another way, if you were to take the estimate from analyze,\ndivide by the stats target and multiply by 10000, the result would be pretty\nclose to exact. (Within a factor of 2, which ought to be plenty close for\nplanning purposes.)\n\nI'm running version 7.3.2\n\nAny thoughts from folks familiar with this part of the source code?\n\nRegards,\n -Nick\n\nPS-\nHere's a log of the session that I got this from.\n\nalpha=# select count(distinct actor_id) from actor_case_assignment;\n-[ RECORD 1 ]--\ncount | 3340724\nalpha=# analyze;\nANALYZE\nalpha=# SELECT * FROM pg_stats\nalpha-# WHERE tablename='actor_case_assignment' AND attname='actor_id';\n-[ RECORD\n1 ]-----+-------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------\nschemaname | public\ntablename | actor_case_assignment\nattname | actor_id\nnull_frac | 0\navg_width | 16\nn_distinct | 3168\nmost_common_vals |\n{18105XS,18115XS,18106XS,18113JD02,18115JD02,18106J27,18113XS,18113A10656,18\n115LST,18108XS}\nmost_common_freqs |\n{0.0206667,0.0206667,0.0196667,0.019,0.0176667,0.0173333,0.0163333,0.015,0.0\n14,0.0136667}\nhistogram_bounds |\n{18067A000-07P,18067PD397SC1574-1,18105LBPD,18106A2119-49,18106PD399IF845-1,\n18108A03068-20,18108LECS207,18108PTW03737278-2,18111A19788-77,18115A50420,18\n115XC}\ncorrelation | 0.876795\nalpha=#\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 100;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# SELECT * FROM pg_stats\nalpha-# WHERE tablename='actor_case_assignment' AND attname='actor_id';\n-[ RECORD 1 ]\n<Header snipped>\nschemaname | public\ntablename | actor_case_assignment\nattname | actor_id\nnull_frac | 0\navg_width | 17\nn_distinct | 23828\nmost_common_vals | {18115XS,18113JD02,18106XS,1....\n<Rest of values snipped>\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 1000;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# SELECT * FROM pg_stats\nalpha-# WHERE tablename='actor_case_assignment' AND attname='actor_id';\n-[ RECORD 1 ]-----\n<Header snipped>\nschemaname | public\ntablename | actor_case_assignment\nattname | actor_id\nnull_frac | 0\navg_width | 16\nn_distinct | 194690\nmost_common_vals | {18106XS,18115XS,18115...\n<Rest of values snipped>\n\nalpha=# \\x\nExpanded display is off.\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 10;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# select n_distinct from pg_stats 
where\ntablename='actor_case_assignment and attname='actor_id';\nalpha'# ';\nERROR: parser: parse error at or near \"actor_id\" at character 85\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 3187\n(1 row)\n\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 10;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 3212\n(1 row)\n\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 100;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 24059\n(1 row)\n\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 100;\nALTER TABLE\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 23615\n(1 row)\n\nalpha=# alter table actor_case_assignment alter column actor_id set\nstatistics 1000;\nALTER TABLE\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 194516\n(1 row)\n\nalpha=# analyze actor_case_assignment;\nANALYZE\nalpha=# select n_distinct from pg_stats where\ntablename='actor_case_assignment' and attname='actor_id';\n n_distinct\n------------\n 194081\n(1 row)\n\nalpha=#\n\n\n", "msg_date": "Fri, 14 Nov 2003 16:27:16 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "n_distinct way off, but following a pattern." }, { "msg_contents": "\"Nick Fankhauser\" <[email protected]> writes:\n> I'm seeing estimates for n_distinct that are way off for a large table\n\nEstimating n_distinct from a small sample is inherently a hard problem.\nI'm not surprised that the estimates would get better as the sample size\nincreases. But maybe we can do better. The method we are currently\nusing is this:\n\n /*----------\n * Estimate the number of distinct values using the estimator\n * proposed by Haas and Stokes in IBM Research Report RJ 10025:\n * n*d / (n - f1 + f1*n/N)\n * where f1 is the number of distinct values that occurred\n * exactly once in our sample of n rows (from a total of N),\n * and d is the total number of distinct values in the sample.\n * This is their Duj1 estimator; the other estimators they\n * recommend are considerably more complex, and are numerically\n * very unstable when n is much smaller than N.\n\nIt would be interesting to see exactly what inputs are going into this\nequation. Do you feel like adding some debug printouts into this code?\nOr just looking at the variables with a debugger? In 7.3 it's about\nline 1060 in src/backend/commands/analyze.c.\n\nBTW, this is already our second try at this problem, the original 7.2\nequation didn't last long at all ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Nov 2003 17:12:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: n_distinct way off, but following a pattern. " }, { "msg_contents": "\n\n> It would be interesting to see exactly what inputs are going into this\n> equation. 
Do you feel like adding some debug printouts into this code?\n> Or just looking at the variables with a debugger? In 7.3 it's about\n> line 1060 in src/backend/commands/analyze.c.\n\nTom-\n\nI don't really have time to follow up at this moment, but I think this would\nbe interesting to look into, so I'll plan to dig into it over the\nThanksgiving Holiday when I'll have a little time free to follow up on some\nfun projects. Your pointers should let me get into it pretty quickly.\n\nIn the meantime, I'll just set up a cron job that runs behind my nightly\nanalyze to put the correct numbers into pg_statistic on the tables that this\naffects.\n\nThanks-\n -Nick\n\n\n\n", "msg_date": "Sun, 16 Nov 2003 08:52:29 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: n_distinct way off, but following a pattern. " } ]
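To make the Duj1 estimator above concrete, it helps to plug numbers in. The figures below are purely illustrative, not measured from actor_case_assignment: with the default statistics target of 10 the sample is roughly 3,000 rows, so suppose the sample contained d = 2,500 distinct values, f1 = 2,200 of them seen exactly once, out of N = 8,700,000 rows in the table:

    -- n*d / (n - f1 + f1*n/N) with the illustrative values above
    SELECT (3000.0 * 2500) / (3000 - 2200 + 2200 * 3000.0 / 8700000) AS duj1_estimate;
    -- comes out near 9,400

Because n is so much smaller than N, the term f1*n/N is nearly zero and the formula collapses to roughly n*d / (n - f1): the estimate is dominated by how many repeated values happen to appear in the sample. That is consistent with the pattern Nick reports, where the estimate scales almost linearly with the statistics target (i.e. with the sample size) rather than converging on the true count.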
[ { "msg_contents": "Slavisa Garic wrote:\n\n> Hi Everyone,\n \n> I am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x \n> onwards) \n> and with all of them I noticed same problem with INSERTs when there is \n> a\n> large data set. Just to so you guys can compare time it takes to insert\n> one row into a table when there are only few rows present and when \n> there\n> are thousands:\n\nTry running VACUUM ANALYZE periodically during inserts. I found this to help.\n\nGeorge Essig\n\n", "msg_date": "Fri, 14 Nov 2003 14:02:38 -0800 (PST)", "msg_from": "George Essig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT extremely slow with large data sets (fwd)" }, { "msg_contents": "Does VACUUM ANALYZE help with the analysis or it also speeds up the\nprocess. I know i could try that before I ask but experiment is running\nnow and I am too curious to wait :),\n\nAnyway thanks for the hint,\nSlavisa\n\nOn Fri, 14 Nov 2003, George Essig wrote:\n\n> Slavisa Garic wrote:\n> \n> > Hi Everyone,\n> \n> > I am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x \n> > onwards) \n> > and with all of them I noticed same problem with INSERTs when there is \n> > a\n> > large data set. Just to so you guys can compare time it takes to insert\n> > one row into a table when there are only few rows present and when \n> > there\n> > are thousands:\n> \n> Try running VACUUM ANALYZE periodically during inserts. I found this to help.\n> \n> George Essig\n> \n> \n\n", "msg_date": "Sat, 15 Nov 2003 12:20:20 +1100 (EST)", "msg_from": "Slavisa Garic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT extremely slow with large data sets (fwd)" }, { "msg_contents": "\n--- Slavisa Garic <[email protected]> wrote:\n> Does VACUUM ANALYZE help with the analysis or it also speeds up the\n> process. I know i could try that before I ask but experiment is running\n> now and I am too curious to wait :),\n> \n> Anyway thanks for the hint,\n> Slavisa\n> \n\nVACUUM ANALYZE will reclaim disk space and update statistics used by the optimizer to help execute\nqueries faster. This could speed up your inserts. See\nhttp://www.postgresql.org/docs/7.3/static/sql-vacuum.html.\n\nGeorge Essig\n", "msg_date": "Sat, 15 Nov 2003 05:13:38 -0800 (PST)", "msg_from": "George Essig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT extremely slow with large data sets (fwd)" }, { "msg_contents": "On Sat, Nov 15, 2003 at 05:13:38AM -0800, George Essig wrote:\n> \n> VACUUM ANALYZE will reclaim disk space and update statistics used\n\nStrictly speaking, it does not reclaim disk space. It merely marks\nit as available, assuming you have enough room in your free space\nmap. VACUUM FULL reclaims disk space, i.e. it compacts the data\nfiles and returns that space to the operating system.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 17 Nov 2003 09:42:43 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT extremely slow with large data sets (fwd)" } ]
[ { "msg_contents": "When I execute a transaction using embedded sql statements in a c program,\nI get the error,\n\nError in transaction processing. I could see from the documentation that\nit means, \"Postgres signalled to us that we cannot start, commit or\nrollback the transaction\"\n\nI don't find any mistakes in the transaction statements.\n\nWhat can I do to correct this error?\n\nYour response would be very much appreciated.\n\nThanks and Regards,\n\nRadha\n\n\n\n\n", "msg_date": "Fri, 14 Nov 2003 16:07:26 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Error in transaction processing" } ]
[ { "msg_contents": "When I execute a transaction using embedded sql statements in a c program,\nI get the error,\n\nError in transaction processing. I could see from the documentation that\nit means, \"Postgres signalled to us that we cannot start, commit or\nrollback the transaction\"\n\nI don't find any mistakes in the transaction statements.\n\nWhat can I do to correct this error?\n\nYour response would be very much appreciated.\n\nThanks and Regards,\n\nRadha\n\n\n\n\n\n\n", "msg_date": "Sat, 15 Nov 2003 08:52:45 -0600 (CST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Error in transaction processing" } ]
[ { "msg_contents": "Hi,\n\nI understand that it is not possible to occasionally re-plan the queries in a\nPL/pgSQL function without dropping and re-creating the function.\n\nI think it would be useful if the queries in a PL/pgSQL function could be\nre-planned on-the-fly.\n\nWhen a lot of data has been added/modified and ANALYZE is suitable to run, it\nwould also be a great idea to re-plan the queries used in PL/pgSQL functions.\nI understand that this is not possible?\nThe only way would be to DROP/CREATE the functions or to use EXECUTE.\nI don't think EXECUTE is an option, because preparing the queries every time the\nfunction is called is in my case not necessary and just a waste of\nperformance.\n\nAs a work-around, I am forced to,\n1. populate the database with a lot of test data,\n2. run ANALYZE,\n3. and finally, create the PL/pgSQL functions\nThe prepared queries in the functions will now be sufficiently optimized.\n\nI don't think this is a nice solution.\n\nI also thought of a slightly better solution, but I don't know if it is\npossible.\nMy idea was to populate the database once and then save the data in\npg_statistics generated by ANALYZE to a file. Every time the database needs to\nbe created, the statistics could then be restored thus making the planner\nproduce \"future-optimized\" queries when the PL/pgSQL functions are created,\neven though the database is empty.\n\nI would greatly appreciate any help/comments.\n\nThank you.\n\nJoel Jacobson <[email protected]>\n", "msg_date": "Mon, 17 Nov 2003 15:51:20 +0100", "msg_from": "Joel Jacobson <[email protected]>", "msg_from_op": true, "msg_subject": "Backup/restore of pg_statistics" }, { "msg_contents": "Joel Jacobson <[email protected]> writes:\n> I understand that it is not possible to occasionally re-plan the queries in a\n> PL/pgSQL function without dropping and re-creating the function.\n\nHuh? You only need to start a fresh connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Nov 2003 23:39:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/restore of pg_statistics " } ]
[ { "msg_contents": "All,\nThis is a straight SQL question, maybe not appropriate for a performance \nlist, but...\n\nI have a simple stock holdings setup:\n\n=> select * from t1;\n nam | co | num\n-----+-----------+------\n joe | ibm | 600\n abe | ibm | 1500\n joe | cisco | 1200\n abe | cisco | 800\n joe | novell | 500\n joe | microsoft | 200\n\nWhat I would like to see is a Top-n-holdings-by-name\", e.g, for n=2:\n\n nam | co | num\n----------+--------+-----\n joe | cisco | 1200\n joe | ibm | 600\n abe | ibm | 1500\n abe | cisco | 800\n\nI can get part of the way by using a LIMIT clause in a subquery, e.g,\n\n=> select 'abe', a.co, a.num from (select co, num from t1 where \nnam='abe' order by num desc limit 2) as a;\n ?column? | co | num\n----------+-------+------\n abe | ibm | 1500\n abe | cisco | 800\n\nbut I can't figure out a correlated subquery (or GROUP BY arrangement or \nanything else) that will cycle through the names. I vaguely remember \nthat these kinds or queries are hard to do in standard SQL, but I was \nhoping that PG, with its extensions...\n\n Thanks, Rich Cullingford\n [email protected]\n\n\n", "msg_date": "Mon, 17 Nov 2003 11:38:37 -0500", "msg_from": "Rich Cullingford <[email protected]>", "msg_from_op": true, "msg_subject": "Top n queries and GROUP BY" }, { "msg_contents": "Rich Cullingford wrote:\n> All,\n> This is a straight SQL question, maybe not appropriate for a performance \n> list, but...\n> \n> I have a simple stock holdings setup:\n> \n> => select * from t1;\n> nam | co | num\n> -----+-----------+------\n> joe | ibm | 600\n> abe | ibm | 1500\n> joe | cisco | 1200\n> abe | cisco | 800\n> joe | novell | 500\n> joe | microsoft | 200\n> \n> What I would like to see is a Top-n-holdings-by-name\", e.g, for n=2:\n> \n> nam | co | num\n> ----------+--------+-----\n> joe | cisco | 1200\n> joe | ibm | 600\n> abe | ibm | 1500\n> abe | cisco | 800\n> \n> I can get part of the way by using a LIMIT clause in a subquery, e.g,\n> \n> => select 'abe', a.co, a.num from (select co, num from t1 where \n> nam='abe' order by num desc limit 2) as a;\n> ?column? | co | num\n> ----------+-------+------\n> abe | ibm | 1500\n> abe | cisco | 800\n> \n> but I can't figure out a correlated subquery (or GROUP BY arrangement or \n> anything else) that will cycle through the names. 
I vaguely remember \n> that these kinds or queries are hard to do in standard SQL, but I was \n> hoping that PG, with its extensions...\n\nI forgot about row subqueries; for n=3, for example:\n\n=> SELECT * FROM t1\n WHERE (nam,co,num) IN\n (SELECT nam,co,num FROM t1 b\n where b.nam=t1.nam\n order by num desc limit 3)\n order by nam, num desc;\n\n nam | co | num\n-----+--------+------\n abe | ibm | 1500\n abe | cisco | 800\n joe | cisco | 1200\n joe | ibm | 600\n joe | novell | 500\n(5 rows)\n\nSeems to work...\n Thanks all, Rich Cullingford\n [email protected]\n\n", "msg_date": "Mon, 17 Nov 2003 12:56:23 -0500", "msg_from": "Rich Cullingford <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Top n queries and GROUP BY" }, { "msg_contents": "In article <[email protected]>,\nRich Cullingford <[email protected]> writes:\n\n> All,\n> This is a straight SQL question, maybe not appropriate for a performance \n> list, but...\n\n> I have a simple stock holdings setup:\n\n> => select * from t1;\n> nam | co | num\n> -----+-----------+------\n> joe | ibm | 600\n> abe | ibm | 1500\n> joe | cisco | 1200\n> abe | cisco | 800\n> joe | novell | 500\n> joe | microsoft | 200\n\n> What I would like to see is a Top-n-holdings-by-name\", e.g, for n=2:\n\n> nam | co | num\n> ----------+--------+-----\n> joe | cisco | 1200\n> joe | ibm | 600\n> abe | ibm | 1500\n> abe | cisco | 800\n\n> I can get part of the way by using a LIMIT clause in a subquery, e.g,\n\n> => select 'abe', a.co, a.num from (select co, num from t1 where \n> nam='abe' order by num desc limit 2) as a;\n> ?column? | co | num\n> ----------+-------+------\n> abe | ibm | 1500\n> abe | cisco | 800\n\n> but I can't figure out a correlated subquery (or GROUP BY arrangement or \n> anything else) that will cycle through the names. I vaguely remember \n> that these kinds or queries are hard to do in standard SQL, but I was \n> hoping that PG, with its extensions...\n\nHow about an outer join?\n\n SELECT x1.nam, x1.co, x1.num\n FROM t1 x1\n LEFT JOIN t1 x2 ON x2.nam = x1.nam AND x2.num > x1.num\n GROUP BY x1.nam, x1.co, x1.num\n HAVING count(*) < 2\n ORDER BY x1.nam, x1.num DESC\n\n", "msg_date": "18 Nov 2003 18:01:13 +0100", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Top n queries and GROUP BY" } ]
[ { "msg_contents": "\nHi.\n\nI'm trying to set run-time environment in pgsql7.4 so, that it prints\nall statements with duration time, but I can't understand why setting\nlog_min_duration_statement to '0' causes printing to syslog plenty of\nlines ending with 'duration: statement:', i.e. without any statement\nstring (except expected ones). Can anybody help me?\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Mon, 17 Nov 2003 23:20:58 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "duration logging setting in 7.4" }, { "msg_contents": "Ryszard Lach wrote:\n> \n> Hi.\n> \n> I'm trying to set run-time environment in pgsql7.4 so, that it prints\n> all statements with duration time, but I can't understand why setting\n> log_min_duration_statement to '0' causes printing to syslog plenty of\n> lines ending with 'duration: statement:', i.e. without any statement\n> string (except expected ones). Can anybody help me?\n\nCan you show us some of the log file? If I do:\n\t\n\ttest=> set log_min_duration_statement = 0;\n\tSET\n\ttest=> select 1;\n\t ?column?\n\t----------\n\t 1\n\t(1 row)\n\nI get:\n\n\tLOG: duration: 0.861 ms statement: select 1;\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Nov 2003 21:37:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "On Mon, Nov 17, 2003 at 09:37:07PM -0500, Bruce Momjian wrote:\n> Ryszard Lach wrote:\n> > \n> > Hi.\n> > \n> > I'm trying to set run-time environment in pgsql7.4 so, that it prints\n> > all statements with duration time, but I can't understand why setting\n> > log_min_duration_statement to '0' causes printing to syslog plenty of\n> > lines ending with 'duration: statement:', i.e. without any statement\n> > string (except expected ones). Can anybody help me?\n> \n> Can you show us some of the log file? 
If I do:\n> \t\n\nSure.\n\nNov 18 10:05:20 postgres[1348]: [318-1] LOG: duration: 0.297 ms statement:\nNov 18 10:05:20 postgres[1311]: [5477-1] LOG: duration: 0.617 ms statement:\nNov 18 10:05:20 postgres[1312]: [5134-1] LOG: duration: 0.477 ms statement:\nNov 18 10:05:20 postgres[1349]: [318-1] LOG: duration: 0.215 ms statement:\nNov 18 10:05:20 postgres[1313]: [5449-1] LOG: duration: 0.512 ms statement:\nNov 18 10:05:20 postgres[1314]: [5534-1] LOG: duration: 0.420 ms statement:\nNov 18 10:05:20 postgres[1330]: [772-1] LOG: duration: 1.386 ms statement: SELECT * FROM mytablemius WHERE id = 0;\nNov 18 10:05:20 postgres[1315]: [5757-1] LOG: duration: 0.417 ms statement:\nNov 18 10:05:20 postgres[1316]: [5885-1] LOG: duration: 0.315 ms statement:\nNov 18 10:05:20 postgres[1317]: [5914-1] LOG: duration: 0.301 ms statement:\nNov 18 10:05:20 postgres[1318]: [5990-1] LOG: duration: 0.293 ms statement:\nNov 18 10:05:20 postgres[1319]: [6009-1] LOG: duration: 0.211 ms statement:\nNov 18 10:05:20 postgres[1320]: [6039-1] LOG: duration: 0.188 ms statement:\n\n\nthis is with setting\n\nlog_duration = false\nlog_statement = false\nlog_min_duration_statement = 0\n\nThe amount of lines containing statement string is nearly the same ase before\nupgrade (from 7.3), all other lines are extra.\n\nI don't know if this can be a reason, this is on a pretty busy machine (ca. 100\nselects/second, but loadavg lower then 0.9), I'm logging postgres through syslog.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Tue, 18 Nov 2003 10:16:46 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "\nWow, that is strange. If you don't use syslog, do you see the proper\noutput? If you turn on log_statement, do you see the statements?\n\n---------------------------------------------------------------------------\n\nRyszard Lach wrote:\n> On Mon, Nov 17, 2003 at 09:37:07PM -0500, Bruce Momjian wrote:\n> > Ryszard Lach wrote:\n> > > \n> > > Hi.\n> > > \n> > > I'm trying to set run-time environment in pgsql7.4 so, that it prints\n> > > all statements with duration time, but I can't understand why setting\n> > > log_min_duration_statement to '0' causes printing to syslog plenty of\n> > > lines ending with 'duration: statement:', i.e. without any statement\n> > > string (except expected ones). Can anybody help me?\n> > \n> > Can you show us some of the log file? 
If I do:\n> > \t\n> \n> Sure.\n> \n> Nov 18 10:05:20 postgres[1348]: [318-1] LOG: duration: 0.297 ms statement:\n> Nov 18 10:05:20 postgres[1311]: [5477-1] LOG: duration: 0.617 ms statement:\n> Nov 18 10:05:20 postgres[1312]: [5134-1] LOG: duration: 0.477 ms statement:\n> Nov 18 10:05:20 postgres[1349]: [318-1] LOG: duration: 0.215 ms statement:\n> Nov 18 10:05:20 postgres[1313]: [5449-1] LOG: duration: 0.512 ms statement:\n> Nov 18 10:05:20 postgres[1314]: [5534-1] LOG: duration: 0.420 ms statement:\n> Nov 18 10:05:20 postgres[1330]: [772-1] LOG: duration: 1.386 ms statement: SELECT * FROM mytablemius WHERE id = 0;\n> Nov 18 10:05:20 postgres[1315]: [5757-1] LOG: duration: 0.417 ms statement:\n> Nov 18 10:05:20 postgres[1316]: [5885-1] LOG: duration: 0.315 ms statement:\n> Nov 18 10:05:20 postgres[1317]: [5914-1] LOG: duration: 0.301 ms statement:\n> Nov 18 10:05:20 postgres[1318]: [5990-1] LOG: duration: 0.293 ms statement:\n> Nov 18 10:05:20 postgres[1319]: [6009-1] LOG: duration: 0.211 ms statement:\n> Nov 18 10:05:20 postgres[1320]: [6039-1] LOG: duration: 0.188 ms statement:\n> \n> \n> this is with setting\n> \n> log_duration = false\n> log_statement = false\n> log_min_duration_statement = 0\n> \n> The amount of lines containing statement string is nearly the same ase before\n> upgrade (from 7.3), all other lines are extra.\n> \n> I don't know if this can be a reason, this is on a pretty busy machine (ca. 100\n> selects/second, but loadavg lower then 0.9), I'm logging postgres through syslog.\n> \n> Richard.\n> \n> -- \n> \"First they ignore you. Then they laugh at you. Then they\n> fight you. Then you win.\" - Mohandas Gandhi.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 18 Nov 2003 10:07:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "On Tue, Nov 18, 2003 at 10:07:48AM -0500, Bruce Momjian wrote:\n> \n> Wow, that is strange. If you don't use syslog, do you see the proper\n> output?\n\nI've just checked this. It behaves exactly the same way.\n\n\n> If you turn on log_statement, do you see the statements?\n\nIf I turn on log_min_duration_statement (i.e. 
set to 0), log_statement and\nlog_duration, then I receive something like that\n\nNov 17 22:33:27 postgres[22945]: [29231-1] LOG: statement:\nNov 17 22:33:27 postgres[22945]: [29232-1] LOG: duration: 0.198 ms\nNov 17 22:33:27 postgres[22945]: [29233-1] LOG: duration: 0.198 ms statement:\nNov 17 22:33:27 postgres[22946]: [29231-1] LOG: statement:\nNov 17 22:33:27 postgres[22946]: [29232-1] LOG: duration: 0.191 ms\nNov 17 22:33:27 postgres[22946]: [29233-1] LOG: duration: 0.191 ms statement:\nNov 17 22:33:27 postgres[22678]: [147134-1] LOG: statement: select * from cms where id=1465\nNov 17 22:33:27 postgres[22679]: [154907-1] LOG: statement:\nNov 17 22:33:27 postgres[22679]: [154908-1] LOG: duration: 0.867 ms\nNov 17 22:33:27 postgres[22679]: [154909-1] LOG: duration: 0.867 ms statement:\nNov 17 22:33:27 postgres[22678]: [147135-1] LOG: duration: 1.458 ms\nNov 17 22:33:27 postgres[22678]: [147136-1] LOG: duration: 1.458 ms statement: select * from cms where id=1465\nNov 17 22:33:27 postgres[22680]: [158366-1] LOG: statement:\nNov 17 22:33:27 postgres[22680]: [158367-1] LOG: duration: 0.620 ms\nNov 17 22:33:27 postgres[22680]: [158368-1] LOG: duration: 0.620 ms statement:\nNov 17 22:33:27 postgres[22681]: [161294-1] LOG: statement:\nNov 17 22:33:27 postgres[22681]: [161295-1] LOG: duration: 0.650 ms\n\nIt seems, that log_duration is responsible only for \"duration:\" lines,\nlog_statement - for \"statement:\" ones, and \"log_min_duration_statement\" - for\n\"duration: .* statement:\". I think, that the above output should exclude losing\nof data by syslog from further delibarations. Do you thing that could be\na bug? \n\nThere is another one thing: logs from the same database running on 7.3 and the same\napplication contained lines like 'select getdatabaseencoding()', 'select\ndatestyle()' and similar (not used by application explicite, probably\nadded by JDBC driver), now they are missed - maybe this is the\nproblem?\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Wed, 19 Nov 2003 19:38:24 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "Ryszard Lach wrote:\n> If I turn on log_min_duration_statement (i.e. 
set to 0), log_statement and\n> log_duration, then I receive something like that\n> \n> Nov 17 22:33:27 postgres[22945]: [29231-1] LOG: statement:\n> Nov 17 22:33:27 postgres[22945]: [29232-1] LOG: duration: 0.198 ms\n> Nov 17 22:33:27 postgres[22945]: [29233-1] LOG: duration: 0.198 ms statement:\n> Nov 17 22:33:27 postgres[22946]: [29231-1] LOG: statement:\n> Nov 17 22:33:27 postgres[22946]: [29232-1] LOG: duration: 0.191 ms\n> Nov 17 22:33:27 postgres[22946]: [29233-1] LOG: duration: 0.191 ms statement:\n> Nov 17 22:33:27 postgres[22678]: [147134-1] LOG: statement: select * from cms where id=1465\n> Nov 17 22:33:27 postgres[22679]: [154907-1] LOG: statement:\n> Nov 17 22:33:27 postgres[22679]: [154908-1] LOG: duration: 0.867 ms\n> Nov 17 22:33:27 postgres[22679]: [154909-1] LOG: duration: 0.867 ms statement:\n> Nov 17 22:33:27 postgres[22678]: [147135-1] LOG: duration: 1.458 ms\n> Nov 17 22:33:27 postgres[22678]: [147136-1] LOG: duration: 1.458 ms statement: select * from cms where id=1465\n> Nov 17 22:33:27 postgres[22680]: [158366-1] LOG: statement:\n> Nov 17 22:33:27 postgres[22680]: [158367-1] LOG: duration: 0.620 ms\n> Nov 17 22:33:27 postgres[22680]: [158368-1] LOG: duration: 0.620 ms statement:\n> Nov 17 22:33:27 postgres[22681]: [161294-1] LOG: statement:\n> Nov 17 22:33:27 postgres[22681]: [161295-1] LOG: duration: 0.650 ms\n> \n> It seems, that log_duration is responsible only for \"duration:\" lines,\n> log_statement - for \"statement:\" ones, and \"log_min_duration_statement\" - for\n> \"duration: .* statement:\". I think, that the above output should exclude losing\n> of data by syslog from further delibarations. Do you thing that could be\n> a bug? \n\nYes, the problem is not related to syslog. Are you using prepared\nqueries, perhaps? I don't think those show the query, but it seems we\nshould display something better than blanks.\n\n> There is another one thing: logs from the same database running on 7.3 and the same\n> application contained lines like 'select getdatabaseencoding()', 'select\n> datestyle()' and similar (not used by application explicite, probably\n> added by JDBC driver), now they are missed - maybe this is the\n> problem?\n\nNo, those are missing because the new 7.4 wire protocol doesn't require\nthose queries anymore --- the data is send automatically.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 19 Nov 2003 13:58:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "On Wed, Nov 19, 2003 at 01:58:27PM -0500, Bruce Momjian wrote:\n> Ryszard Lach wrote:\n> \n> > There is another one thing: logs from the same database running on 7.3 and the same\n> > application contained lines like 'select getdatabaseencoding()', 'select\n> > datestyle()' and similar (not used by application explicite, probably\n> > added by JDBC driver), now they are missed - maybe this is the\n> > problem?\n> \n> No, those are missing because the new 7.4 wire protocol doesn't require\n> those queries anymore --- the data is send automatically.\n> \n\nMayby this is a solution? Because of some\ncharset-related problems we are still using an old (AFAiR modified)\nversion of JDBC driver. 
I'm not a programmer, but I think and don't know\nwhat JDBC driver does, but maybe it sends from client side those queries\nand server doesn't know what to do with them? I'll ask our programmers\nto try with 7.4 driver and tell you about results.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Wed, 19 Nov 2003 20:28:55 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "Ryszard Lach wrote:\n> On Wed, Nov 19, 2003 at 01:58:27PM -0500, Bruce Momjian wrote:\n> > Ryszard Lach wrote:\n> > \n> > > There is another one thing: logs from the same database running on 7.3 and the same\n> > > application contained lines like 'select getdatabaseencoding()', 'select\n> > > datestyle()' and similar (not used by application explicite, probably\n> > > added by JDBC driver), now they are missed - maybe this is the\n> > > problem?\n> > \n> > No, those are missing because the new 7.4 wire protocol doesn't require\n> > those queries anymore --- the data is send automatically.\n> > \n> \n> Mayby this is a solution? Because of some\n> charset-related problems we are still using an old (AFAiR modified)\n> version of JDBC driver. I'm not a programmer, but I think and don't know\n> what JDBC driver does, but maybe it sends from client side those queries\n> and server doesn't know what to do with them? I'll ask our programmers\n> to try with 7.4 driver and tell you about results.\n\nAlso, try plain psql and issue a query and see if it appears.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 19 Nov 2003 14:42:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "Ryszard Lach <[email protected]> writes:\n> Nov 18 10:05:20 postgres[1348]: [318-1] LOG: duration: 0.297 ms statement:\n> Nov 18 10:05:20 postgres[1311]: [5477-1] LOG: duration: 0.617 ms statement:\n> Nov 18 10:05:20 postgres[1312]: [5134-1] LOG: duration: 0.477 ms statement:\n> Nov 18 10:05:20 postgres[1349]: [318-1] LOG: duration: 0.215 ms statement:\n> Nov 18 10:05:20 postgres[1313]: [5449-1] LOG: duration: 0.512 ms statement:\n> Nov 18 10:05:20 postgres[1314]: [5534-1] LOG: duration: 0.420 ms statement:\n> Nov 18 10:05:20 postgres[1330]: [772-1] LOG: duration: 1.386 ms statement: SELECT * FROM mytablemius WHERE id = 0;\n> Nov 18 10:05:20 postgres[1315]: [5757-1] LOG: duration: 0.417 ms statement:\n> Nov 18 10:05:20 postgres[1316]: [5885-1] LOG: duration: 0.315 ms statement:\n> Nov 18 10:05:20 postgres[1317]: [5914-1] LOG: duration: 0.301 ms statement:\n> Nov 18 10:05:20 postgres[1318]: [5990-1] LOG: duration: 0.293 ms statement:\n> Nov 18 10:05:20 postgres[1319]: [6009-1] LOG: duration: 0.211 ms statement:\n> Nov 18 10:05:20 postgres[1320]: [6039-1] LOG: duration: 0.188 ms statement:\n\n\nIs it possible that you're sending a lot of queries that have an initial\nnewline in the text? 
I'd expect the first line of log output for such a\nquery to look as above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Nov 2003 19:17:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4 " }, { "msg_contents": "On Thu, Nov 20, 2003 at 07:17:01PM -0500, Tom Lane wrote:\n> \n> \n> Is it possible that you're sending a lot of queries that have an initial\n> newline in the text? I'd expect the first line of log output for such a\n> query to look as above.\n\nI don't think so, but it is possible, that queries have e.g. two\nsemicolons on end - I've just noticed, that separating two queries with\ntwo or more semicolons gives one empty log entry for each redundand\nsemicolon. We'll debug our application keeping this in mind.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Fri, 21 Nov 2003 09:53:17 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "\nHi, again.\n\nI've turned on only log_connections and log_statement. See the following\nlog fragment (I've included lines only related to opening of new\nconnection);\n\nNov 21 11:06:44 postgres[3359]: [3-1] LOG: connection received: host= port=\nNov 21 11:06:44 postgres[3359]: [4-1] LOG: connection authorized: user=pracuj database=pracuj\nNov 21 11:06:44 postgres[3359]: [5-1] LOG: statement: set datestyle to 'ISO'; select version(), case when pg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else\nNov 21 11:06:44 postgres[3359]: [5-2] getdatabaseencoding() end;\nNov 21 11:06:44 postgres[3359]: [6-1] LOG: statement:\nNov 21 11:06:44 postgres[3359]: [7-1] LOG: statement: select * from ...\n\nNov 21 11:06:45 postgres[3376]: [3-1] LOG: connection received: host= port=\nNov 21 11:06:45 postgres[3376]: [4-1] LOG: connection authorized: user=pracuj database=pracuj\nNov 21 11:06:45 postgres[3376]: [5-1] LOG: statement: set datestyle to 'ISO'; select version(), case when pg_encoding_to_char(1) = 'S\nQL_ASCII' then ' else\nNov 21 11:06:45 postgres[3376]: [5-2] getdatabaseencoding() end;\nNov 21 11:06:45 postgres[3376]: [6-1] LOG: statement:\n\nIt seems, that empty statements are generated during opening of\nconnection.\n\nPlease, note also:\n\n1. We are using an older jdbc driver (pgjdbc2)\n2. We ar using encoding in URL\n(jdbc:postgresql://localhost:5432/database?charSet=iso-8859-1)\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Fri, 21 Nov 2003 11:18:54 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4" }, { "msg_contents": "Ryszard Lach <[email protected]> writes:\n> It seems, that empty statements are generated during opening of\n> connection.\n\nHmm. Try asking about that on the pgsql-jdbc list. I think the JDBC\ndriver must actually be sending empty commands.\n\nLooking at the backend code, I realize that 7.4 will emit LOG: entries\nfor empty query strings, whereas prior releases would not. This isn't\na bug IMHO, but it does explain why you are noticing output that wasn't\nthere before you updated to 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Nov 2003 09:54:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: duration logging setting in 7.4 " } ]
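A quick way to reproduce the blank entries discussed above from psql, with log_min_duration_statement = 0 as in the thread: Ryszard's observation was that each redundant statement separator produces one empty log entry, because the extra separator reaches the server as an empty query string, which 7.4 now logs. Something like the following should show one normal line and one blank one:

    SELECT 1;;
    -- the stray second semicolon should yield one of the empty
    -- "duration: ... statement:" lines seen in the logs above

So the blank entries are not data lost by syslog, just empty queries sent by the client (in this thread, apparently by the older JDBC driver during connection setup).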
[ { "msg_contents": "Shridhar,\n\nI was looking at the -V/-v and -A/-a settings in pgavd, and really don't \nunderstand how the calculation works. According to the readme, if I set -v \nto 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would \nonly vacuum after 21,000 rows had been updated. This seems wrong.\n\nCan you clear this up a little? I'd like to tweak these settings but can't \nwithout being better aquainted with the calculation.\n\nAlso, you may want to reverse your default ratio for Vacuum/analyze frequency. \nTrue, analyze is a less expensive operation than Vacuum, but it's also needed \nless often -- only when the *distribution* of data changes. I've seen \ndatabases where the optimal vacuum/analyze frequency was every 10 min/once \nper day.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 18 Nov 2003 15:58:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n> Shridhar,\n> \n> I was looking at the -V/-v and -A/-a settings in pgavd, and really don't \n> understand how the calculation works. According to the readme, if I set -v \n> to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would \n> only vacuum after 21,000 rows had been updated. This seems wrong.\n> \n> Can you clear this up a little? I'd like to tweak these settings but can't \n> without being better aquainted with the calculation.\n> \n> Also, you may want to reverse your default ratio for Vacuum/analyze frequency. \n> True, analyze is a less expensive operation than Vacuum, but it's also needed \n> less often -- only when the *distribution* of data changes. I've seen \n> databases where the optimal vacuum/analyze frequency was every 10 min/once \n> per day.\n\nWill look into it. Give me a day or so. I am planning couple of other patches as \nwell. May be over week end.\n\nIs this urgent?\n\n Shridhar\n\n", "msg_date": "Wed, 19 Nov 2003 12:02:23 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n> Shridhar,\n> \n> I was looking at the -V/-v and -A/-a settings in pgavd, and really don't \n> understand how the calculation works. According to the readme, if I set -v \n> to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would \n> only vacuum after 21,000 rows had been updated. This seems wrong.\n\nNo. that is correct.\n\nIt is calculated as\n\nthreshold = base + scale*numebr of current rows\n\nWhich translates to\n\n21,000 = 1000 + 2*1000\n\nHowever I do not agree with this logic entirely. It pegs the next vacuum w.r.t \ncurrent table size which is not always a good thing.\n\nI would rather vacuum the table at 2000 updates, which is what you probably want.\n\nFurthermore analyze threshold depends upon inserts+updates. I think it should \nalso depends upon deletes for obvious reasons.\n\n> Can you clear this up a little? I'd like to tweak these settings but can't \n> without being better aquainted with the calculation.\n\nWhat did you expected in above example? It is not difficult to tweak \npg_autovacuum calculations. For testing we can play around.\n\n> Also, you may want to reverse your default ratio for Vacuum/analyze frequency. 
\n> True, analyze is a less expensive operation than Vacuum, but it's also needed \n> less often -- only when the *distribution* of data changes. I've seen \n> databases where the optimal vacuum/analyze frequency was every 10 min/once \n> per day.\n\nOK vacuum and analyze thresholds are calculated with same formula as shown above \n but with different parameters as follows.\n\nvacthresh = vacbase + vacscale*ntuples\nanathresh = anabase + anascale*ntuples\n\nWhat you are asking for is\n\nvacthresh = vacbase*vacscale\nanathresh = anabase + anascale*ntuples\n\nWould that tilt the favour the way you want? i.e. an analyze is triggered when a \nfixed *percentage* of table changes but a vacuum is triggered when a fixed \n*number of rows* are changed.\n\nI am all for experimentation. If you have real life data to play with, I can \ngive you some patches to play around.\n\nAnd BTW, this is all brain child of Mathew O.Connor(Correct? I am not good at \neither names or spellings). The way I wrote pgavd originally, each table got to \nget separate threshold..:-). That was rather a brute force approach.\n\n Shridhar\n\n\n\n\n\n", "msg_date": "Wed, 19 Nov 2003 19:40:46 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "Shridhar,\n\n> Will look into it. Give me a day or so. I am planning couple of other\n> patches as well. May be over week end.\n\nThanks, appreciated. As I said, I don't think the settings themselves are \nwrong, I think the documentation is.\n\nWhat are you patching?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 19 Nov 2003 08:00:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "Shridhar,\n\n> However I do not agree with this logic entirely. It pegs the next vacuum\n> w.r.t current table size which is not always a good thing.\n\nNo, I think the logic's fine, it's the numbers which are wrong. We want to \nvacuum when updates reach between 5% and 15% of total rows. NOT when \nupdates reach 110% of total rows ... that's much too late.\n\nHmmm ... I also think the threshold level needs to be lowered; I guess the \npurpose was to prevent continuous re-vacuuuming of small tables? \nUnfortunately, in the current implementation, the result is tha small tables \nnever get vacuumed at all.\n\nSo for defaults, I would peg -V at 0.1 and -v at 100, so our default \ncalculation for a table with 10,000 rows is:\n\n100 + ( 0.1 * 10,000 ) = 1100 rows.\n\n> I would rather vacuum the table at 2000 updates, which is what you probably\n> want.\n\nNot necessarily. This would be painful if the table has 10,000,000 rows. It \n*should* be based on a % of rows.\n\n> Furthermore analyze threshold depends upon inserts+updates. I think it\n> should also depends upon deletes for obvious reasons.\n\nYes. Vacuum threshold is counting deletes, I hope?\n\n> What did you expected in above example? It is not difficult to tweak\n> pg_autovacuum calculations. For testing we can play around.\n\nCan I set the settings to decimals, or are they integers?\n\n> vacthresh = vacbase*vacscale\n> anathresh = anabase + anascale*ntuples\n\nNope, see above.\n\nMy comment about the frequency of vacuums vs. analyze is that currently the \n*default* is to analyze twice as often as you vacuum. 
Based on my \nexperiece as a PG admin on a variety of databases, I believe that the default \nshould be to analyze half as often as you vacuum.\n\n> I am all for experimentation. If you have real life data to play with, I\n> can give you some patches to play around.\n\nI will have real data very soon .....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 19 Nov 2003 09:06:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n> Shridhar,\n >>However I do not agree with this logic entirely. It pegs the next vacuum\n>>w.r.t current table size which is not always a good thing.\n> \n> \n> No, I think the logic's fine, it's the numbers which are wrong. We want to \n> vacuum when updates reach between 5% and 15% of total rows. NOT when \n> updates reach 110% of total rows ... that's much too late.\n\nWell, looks like thresholds below 1 should be norm rather than exception.\n\n> Hmmm ... I also think the threshold level needs to be lowered; I guess the \n> purpose was to prevent continuous re-vacuuuming of small tables? \n> Unfortunately, in the current implementation, the result is tha small tables \n> never get vacuumed at all.\n> \n> So for defaults, I would peg -V at 0.1 and -v at 100, so our default \n> calculation for a table with 10,000 rows is:\n> \n> 100 + ( 0.1 * 10,000 ) = 1100 rows.\n\nI would say -V 0.2-0.4 could be great as well. Fact to emphasize is that \nthresholds less than 1 should be used.\n\n>>Furthermore analyze threshold depends upon inserts+updates. I think it\n>>should also depends upon deletes for obvious reasons.\n> Yes. Vacuum threshold is counting deletes, I hope?\n\nIt does.\n\n> My comment about the frequency of vacuums vs. analyze is that currently the \n> *default* is to analyze twice as often as you vacuum. Based on my \n> experiece as a PG admin on a variety of databases, I believe that the default \n> should be to analyze half as often as you vacuum.\n\nOK.\n\n>>I am all for experimentation. If you have real life data to play with, I\n>>can give you some patches to play around.\n> I will have real data very soon .....\n\nI will submit a patch that would account deletes in analyze threshold. Since you \nwant to delay the analyze, I would calculate analyze count as\n\nn=updates + inserts *-* deletes\n\nRather than current \"n = updates + inserts\". Also update readme about examples \nand analyze frequency.\n\nWhat does statistics gather BTW? Just number of rows or something else as well? \nI think I would put that on Hackers separately.\n\nI am still wary of inverting vacuum analyze frequency. You think it is better to \nset inverted default rather than documenting it?\n\n Shridhar\n\n", "msg_date": "Thu, 20 Nov 2003 12:53:25 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "Shridhar Daithankar wrote:\n\n> Josh Berkus wrote:\n>\n>> Shridhar,\n>\n> >>However I do not agree with this logic entirely. It pegs the next \n> vacuum\n>\n>>> w.r.t current table size which is not always a good thing.\n>>\nOk, what do you recommend? The point of two separate variables allows \nyou to specify if you want vacuum based on a fixed number, based on \ntable size or something inbetween.\n\n>>\n>> No, I think the logic's fine, it's the numbers which are wrong. 
We \n>> want to vacuum when updates reach between 5% and 15% of total rows. \n>> NOT when updates reach 110% of total rows ... that's much too late.\n>\nFor small tables, you don't need to vacuum too often. In the testing I \ndid a small table ~100 rows, didn't really show significant performance \ndegredation until it had close to 1000 updates. For large tables, \nvacuum is so expensive, that you don't want to do it very often, and \nscanning the whole table when there is only 5% wasted space is not very \nhelpful.\n\n>> Hmmm ... I also think the threshold level needs to be lowered; I \n>> guess the purpose was to prevent continuous re-vacuuuming of small \n>> tables? Unfortunately, in the current implementation, the result is \n>> tha small tables never get vacuumed at all.\n>>\n>> So for defaults, I would peg -V at 0.1 and -v at 100, so our default \n>> calculation for a table with 10,000 rows is:\n>>\n>> 100 + ( 0.1 * 10,000 ) = 1100 rows.\n>\nYes, the I set the defaults a little high perhaps so as to err on the \nside of caution. I didn't want people to say pg_autovacuum kills the \nperformance of my server. A small table will get vacuumed, just not \nuntil it has reached the threshold. So a table with 100 rows, will get \nvacuumed after 1200 updates / deletes. In my testing it showed that \nthere was no major performance problems until you reached several \nthousand updates / deletes.\n\n>>> Furthermore analyze threshold depends upon inserts+updates. I think it\n>>> should also depends upon deletes for obvious reasons.\n>>\n>> Yes. Vacuum threshold is counting deletes, I hope?\n>\n> It does.\n>\n>> My comment about the frequency of vacuums vs. analyze is that \n>> currently the *default* is to analyze twice as often as you \n>> vacuum. Based on my experiece as a PG admin on a variety of \n>> databases, I believe that the default should be to analyze half as \n>> often as you vacuum.\n>\nHUH? analyze is very very cheap compared to vacuum. Why not do it more \noften?\n\n>>> I am all for experimentation. If you have real life data to play \n>>> with, I\n>>> can give you some patches to play around.\n>>\n>> I will have real data very soon .....\n>\n> I will submit a patch that would account deletes in analyze threshold. \n> Since you want to delay the analyze, I would calculate analyze count as\n\ndeletes are already accounted for in the analyze threshold.\n\n> I am still wary of inverting vacuum analyze frequency. You think it is \n> better to set inverted default rather than documenting it?\n\nI think inverting the vacuum and analyze frequency is wrong. \n\nWhat I think I am hearing is that people would like very much to be able \nto tweak the settings of pg_autovacuum for individual tables / databases \netc. So that you could set certain tables to be vacuumed more \nagressivly than others. I agree this would be a good and welcome \naddition. I hope have time to work on this at some point, but in the \nnear future I won't.\n\nMatthew\n\n\n", "msg_date": "Thu, 20 Nov 2003 09:30:43 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] More detail on settings for pgavd?" }, { "msg_contents": "On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:\n> Shridhar Daithankar wrote:\n> > I will submit a patch that would account deletes in analyze threshold.\n> > Since you want to delay the analyze, I would calculate analyze count as\n>\n> deletes are already accounted for in the analyze threshold.\n\nYes. My bad. 
Deletes are not accounted in initializing analyze count but later \nthey are used.\n\n> > I am still wary of inverting vacuum analyze frequency. You think it is\n> > better to set inverted default rather than documenting it?\n>\n> I think inverting the vacuum and analyze frequency is wrong.\n\nMe. Too. ATM all I can think of this patch attached. Josh, is it sufficient \nfor you?..:-)\n\nMatthew, I am confyused about one thing. Why would autovacuum count updates \nwhile checking for analyze threshold? Analyze does not change statistics \nright? ( w.r.t line 1072, pg_autovacuum.c). For updating statistics, only \ninserts+deletes should suffice, isn't it?\n\nOther than that, I think autovacuum does everything it can.\n\nComments?\n\n Shridhar", "msg_date": "Thu, 20 Nov 2003 20:29:47 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "On Thursday 20 November 2003 20:29, Shridhar Daithankar wrote:\n> On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:\n> > Shridhar Daithankar wrote:\n> > > I will submit a patch that would account deletes in analyze threshold.\n> > > Since you want to delay the analyze, I would calculate analyze count as\n> >\n> > deletes are already accounted for in the analyze threshold.\n>\n> Yes. My bad. Deletes are not accounted in initializing analyze count but\n> later they are used.\n>\n> > > I am still wary of inverting vacuum analyze frequency. You think it is\n> > > better to set inverted default rather than documenting it?\n> >\n> > I think inverting the vacuum and analyze frequency is wrong.\n>\n> Me. Too. ATM all I can think of this patch attached. Josh, is it sufficient\n> for you?..:-)\n\nuse this one. A warning added for too aggressive vacuumming. If it is OK by \neverybody, we can send it to patches list.\n\n Shridhar", "msg_date": "Thu, 20 Nov 2003 20:37:13 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Shridhar Daithankar wrote:\n\n>On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:\n> \n>\n>>Shridhar Daithankar wrote:\n>> \n>>\n>>>I am still wary of inverting vacuum analyze frequency. You think it is\n>>>better to set inverted default rather than documenting it?\n>>> \n>>>\n>>I think inverting the vacuum and analyze frequency is wrong.\n>> \n>>\n>Me. Too. ATM all I can think of this patch attached. Josh, is it sufficient \n>for you?..:-)\n> \n>\nThe patch just adds an example to the README, this looks ok to me.\n\n>Matthew, I am confyused about one thing. Why would autovacuum count updates \n>while checking for analyze threshold? Analyze does not change statistics \n>right? ( w.r.t line 1072, pg_autovacuum.c). For updating statistics, only \n>inserts+deletes should suffice, isn't it?\n> \n>\nAn update is the equivelant of an insert and a delete, so it counts \ntowards the analyze count as much as an insert.\n\n>Other than that, I think autovacuum does everything it can.\n> \n>\nIt could be more customizable.\n\n\n", "msg_date": "Thu, 20 Nov 2003 11:32:24 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] More detail on settings for pgavd?" }, { "msg_contents": "Matthew,\n\n> For small tables, you don't need to vacuum too often. 
In the testing I\n> did a small table ~100 rows, didn't really show significant performance\n> degredation until it had close to 1000 updates. \n\nThis is accounted for by using the \"threshold\" value. That way small tables \nget vacuumed less often. However, the way large tables work is very different \nand I think your strategy shows a lack of testing on large active tables.\n\n> For large tables,\n> vacuum is so expensive, that you don't want to do it very often, and\n> scanning the whole table when there is only 5% wasted space is not very\n> helpful.\n\n5% is probably too low, you're right ... in my experience, performance \ndegredation starts to set in a 10-15% updates to, for example, a 1.1 million \nrow table, particularly since users tend to request the most recently updated \nrows. As long as we have the I/O issues that Background Writer and ARC are \nintended to solve, though, I can see being less agressive on the defaults; \nperhaps 20% or 25%. If you wait until 110% of a 1.1 million row table is \nupdated, though, that vaccuum will take an hour or more.\n\nAdditionally, you are not thinking of this in terms of an overall database \nmaintanence strategy. Lazy Vacuum needs to stay below the threshold of the \nFree Space Map (max_fsm_pages) to prevent creeping bloat from setting in to \nyour databases. With proper configuration of pg_avd, vacuum_mem and FSM \nvalues, it should be possible to never run a VACUUM FULL again, and as of 7.4 \nnever run an REINDEX again either. \n\nBut this means running vacuum frequently enough that your max_fsm_pages \nthreshold is never reached. Which for a large database is going to have to \nbe more frequently than 110% updates, because setting 20,000,000 \nmax_fsm_pages will eat your RAM.\n\n> Yes, the I set the defaults a little high perhaps so as to err on the\n> side of caution. I didn't want people to say pg_autovacuum kills the\n> performance of my server. A small table will get vacuumed, just not\n> until it has reached the threshold. So a table with 100 rows, will get\n> vacuumed after 1200 updates / deletes. \n\nOk, I can see that for small tables.\n\n> In my testing it showed that\n> there was no major performance problems until you reached several\n> thousand updates / deletes.\n\nSure. But several thousand updates can be only 2% of a very large table.\n\n> HUH? analyze is very very cheap compared to vacuum. Why not do it more\n> often?\n\nBecause nothing is cheap if it's not needed. \n\nAnalyze is needed only as often as the *aggregate distribution* of data in the \ntables changes. Depending on the application, this could be frequently, but \nfar more often (in my experience running multiple databases for several \nclients) the data distribution of very large tables changes very slowly over \ntime. \n\nOne client's database, for example, that I have running VACUUM on chron \nscripts runs on this schedule for the main tables:\nVACUUM only: twice per hour\nVACUUM ANALYZE: twice per day\n\nOn the other hand, I've another client's database where most activity involves \nupdates to entire classes of records. They run ANALYZE at the end of every \ntransaction.\n\nSo if you're going to have a seperate ANALYZE schedule at all, it should be \nslightly less frequent than VACUUM for large tables. Either that, or drop \nthe idea, and simplify pg_avd by running VACUUM ANALYZE all the time instead \nof having 2 seperate schedules.\n\nBUT .... now I see how you arrived at the logic you did. 
If you're testing \nonly on small tables, and not vacuuming them until they reach 110% updates, \nthen you *would* need to analyze more frequently. This is because of your \nthreshold value ... you'd want to analyze the small table as soon as even 30% \nof its rows changed.\n\nSo the answer is to dramatically lower the threshold for the small tables.\n\n> What I think I am hearing is that people would like very much to be able\n> to tweak the settings of pg_autovacuum for individual tables / databases\n> etc. \n\nNot from me you're not. Though that would be nice, too.\n\nSo, my suggested defaults based on our conversation above:\n\nVacuum threshold: 1000 records\nVacuum scale factor: 0.2\nAnalyze threshold: 50 records\nAnalyze scale factor: 0.3\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Nov 2003 09:18:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Shridhar,\n\n> I would say -V 0.2-0.4 could be great as well. Fact to emphasize is that\n> thresholds less than 1 should be used.\n\nYes, but not thresholds, scale factors of less than 1.0. Thresholds should \nstill be in the range of 100 to 1000.\n\n> I will submit a patch that would account deletes in analyze threshold.\n> Since you want to delay the analyze, I would calculate analyze count as\n>\n> n=updates + inserts *-* deletes\n\nI'm not clear on how this is a benefit. Deletes affect the statistics, too.\n\n> What does statistics gather BTW? Just number of rows or something else as\n> well? I think I would put that on Hackers separately.\n\nNumber of tuples, degree of uniqueness, some sample values, and high/low \nvalues. Just query your pg_statistics view for an example.\n\n> I am still wary of inverting vacuum analyze frequency. You think it is\n> better to set inverted default rather than documenting it?\n\nSee my post to Matthew.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Nov 2003 10:00:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More detail on settings for pgavd?" }, { "msg_contents": "On Thu, 20 Nov 2003, Josh Berkus wrote:\n> Additionally, you are not thinking of this in terms of an overall database\n> maintanence strategy. Lazy Vacuum needs to stay below the threshold of the\n> Free Space Map (max_fsm_pages) to prevent creeping bloat from setting in to\n> your databases. With proper configuration of pg_avd, vacuum_mem and FSM\n> values, it should be possible to never run a VACUUM FULL again, and as of 7.4\n> never run an REINDEX again either.\n\nis there any command you can run to see how much of the FSM is filled? is\nthere any way to tell which tables are filling it?\n\n> Analyze is needed only as often as the *aggregate distribution* of data in the\n> tables changes. Depending on the application, this could be frequently, but\n> far more often (in my experience running multiple databases for several\n> clients) the data distribution of very large tables changes very slowly over\n> time.\n\nanalyze does 2 things for me:\n1. gets reasonable aggregate statistics\n2. generates STATISTICS # of bins for the most frequent hitters\n\n(2) is very important for me. my values typically seem to have power-law\nlike distributions. i need enough bins to reach a \"cross-over\" point where\nthe last bin is frequent enough to make an index scan useful. 
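As an aside on the statistics being discussed here, a minimal sketch of how to inspect and tune them; 'foo' and column 'a' are placeholders, not objects from this thread:

-- Row-count and page estimates the planner keeps per table:
SELECT relname, reltuples, relpages FROM pg_class WHERE relname = 'foo';

-- Per-column distribution data gathered by ANALYZE (number of tuples,
-- uniqueness, common values and their frequencies, histogram bounds):
SELECT attname, n_distinct, most_common_vals, most_common_freqs, histogram_bounds
  FROM pg_stats
 WHERE tablename = 'foo';

-- Ask for more "bins" (most-common values / histogram entries) on a skewed
-- column, then refresh the statistics:
ALTER TABLE foo ALTER COLUMN a SET STATISTICS 100;
ANALYZE foo;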
also,\ni want enough bins so that the planner can choose index a or b for:\n\tselect * from foo where a=n and b=m;\n\nthe selectivity of either index depends not only on the average selectivity\nof index a or index b, but on n and m as well. for example, 1M row table:\n\nvalue\t% of rows\nv1\t23\nv2\t12\nv3\t4.5\nv4\t4\nv5\t3.5\n...\n\nyou can see that picking an index for =v1 would be poor. picking the\n20th most common value would be 0.5% selective. much better. of course\nthis breaks down for more complex operators, but = is fairly common.\n\n> So if you're going to have a seperate ANALYZE schedule at all, it should be\n> slightly less frequent than VACUUM for large tables. Either that, or drop\n> the idea, and simplify pg_avd by running VACUUM ANALYZE all the time instead\n> of having 2 seperate schedules.\n\ni have some tables which are insert only. i do not want to vacuum them\nbecause there are never any dead tuples in them and the vacuum grows the\nindexes. plus it is very expensive (they tables grow rather large.) after they\nexpire i drop the whole table to make room for a newer one (making sort\nof a rolling log with many large tables.)\n\ni need to analyze them every so often so that the planner knows that\nthere is 1 row, 100 rows, 100k rows, 1M. the funny thing is\nthat because i never vacuum the tables, the relpages on the index never\ngrows. don't know if this affects anything (this is on 7.2.3).\n\nvacuum is to reclaim dead tuples. this means it depends on update and\ndelete. analyze depends on data values/distribution. this means it depends on\ninsert, update, and delete. thus the dependencies are slightly different\nbetween the 2 operations, an so you can come up with use-cases that\njustify running either more frequently.\n\ni am not sure how failed transactions fit into this though, not that i think\nanybody ever has very many. maybe big rollbacks during testing?\n\n\n", "msg_date": "Thu, 20 Nov 2003 13:48:21 -0500 (EST)", "msg_from": "Chester Kustarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Chester Kustarz <[email protected]> writes:\n> i have some tables which are insert only. i do not want to vacuum them\n> because there are never any dead tuples in them and the vacuum grows the\n> indexes.\n\nThose claims cannot both be true. In any case, plain vacuum cannot grow\nthe indexes --- only a VACUUM FULL that moves a significant number of\nrows could cause index growth.\n\n> vacuum is to reclaim dead tuples. this means it depends on update and\n> delete. analyze depends on data values/distribution. this means it depends on\n> insert, update, and delete. thus the dependencies are slightly different\n> between the 2 operations, an so you can come up with use-cases that\n> justify running either more frequently.\n\nAgreed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Nov 2003 14:20:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd? " }, { "msg_contents": "On Thu, 20 Nov 2003, Tom Lane wrote:\n> Those claims cannot both be true. In any case, plain vacuum cannot grow\n> the indexes --- only a VACUUM FULL that moves a significant number of\n> rows could cause index growth.\n\ner, yeah. you're right of course. 
having flashbacks of vacuum full.\n\n\n", "msg_date": "Thu, 20 Nov 2003 15:54:24 -0500 (EST)", "msg_from": "Chester Kustarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd? " }, { "msg_contents": "Tom Lane wrote:\n\n>Chester Kustarz <[email protected]> writes:\n> \n>\n>>vacuum is to reclaim dead tuples. this means it depends on update and\n>>delete. analyze depends on data values/distribution. this means it depends on\n>>insert, update, and delete. thus the dependencies are slightly different\n>>between the 2 operations, an so you can come up with use-cases that\n>>justify running either more frequently.\n>> \n>>\n>Agreed.\n> \n>\n\nAnd that is why pg_autovacuum looks at insert, update and delete when \ndeciding to do an analyze, but only looks at update and delete when \ndeciding to do a vacuum. In addition, this is why pg_autovacuum was \ngiven knobs so that the vacuum and analyze thresholds can be set \nindependently.\n\nMatthew\n\n", "msg_date": "Thu, 20 Nov 2003 18:18:27 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n>Matthew,\n> \n>\n>>For small tables, you don't need to vacuum too often. In the testing I\n>>did a small table ~100 rows, didn't really show significant performance\n>>degredation until it had close to 1000 updates. \n>> \n>>\n>This is accounted for by using the \"threshold\" value. That way small tables \n>get vacuumed less often. However, the way large tables work is very different \n>and I think your strategy shows a lack of testing on large active tables.\n> \n>\nProbably more true than I would like to think...\n\n>>For large tables,\n>>vacuum is so expensive, that you don't want to do it very often, and\n>>scanning the whole table when there is only 5% wasted space is not very\n>>helpful.\n>> \n>>\n>5% is probably too low, you're right ... in my experience, performance \n>degredation starts to set in a 10-15% updates to, for example, a 1.1 million \n>row table, particularly since users tend to request the most recently updated \n>rows. As long as we have the I/O issues that Background Writer and ARC are \n>intended to solve, though, I can see being less agressive on the defaults; \n>perhaps 20% or 25%. If you wait until 110% of a 1.1 million row table is \n>updated, though, that vaccuum will take an hour or more.\n> \n>\nTrue, but I think it would be one hour once, rather than 30 minutes 4 times.\n\n>Additionally, you are not thinking of this in terms of an overall database \n>maintanence strategy. Lazy Vacuum needs to stay below the threshold of the \n>Free Space Map (max_fsm_pages) to prevent creeping bloat from setting in to \n>your databases. With proper configuration of pg_avd, vacuum_mem and FSM \n>values, it should be possible to never run a VACUUM FULL again, and as of 7.4 \n>never run an REINDEX again either. \n> \n>\nThis is one of the things I had hoped to add to pg_autovacuum, but never \ngot to. In addition to just the information from the stats collector on \ninserts updates and deletes, pg_autovacuum should also look at the FSM, \nand make decisions based on it. Anyone looking for a project?\n\n>But this means running vacuum frequently enough that your max_fsm_pages \n>threshold is never reached. 
Which for a large database is going to have to \n>be more frequently than 110% updates, because setting 20,000,000 \n>max_fsm_pages will eat your RAM.\n> \n>\nAgain, the think the only way to do this efficiently is to look at the \nFSM. Otherwise the only way to make sure you keep the FSM populated is \nto run vacuum more than needed.\n\n>>Yes, the I set the defaults a little high perhaps so as to err on the\n>>side of caution. I didn't want people to say pg_autovacuum kills the\n>>performance of my server. A small table will get vacuumed, just not\n>>until it has reached the threshold. So a table with 100 rows, will get\n>>vacuumed after 1200 updates / deletes. \n>> \n>>\n>Ok, I can see that for small tables.\n> \n>\n>>In my testing it showed that\n>>there was no major performance problems until you reached several\n>>thousand updates / deletes.\n>> \n>>\n>Sure. But several thousand updates can be only 2% of a very large table.\n> \n>\nBut I can't imagine that 2% makes any difference on a large table. In \nfact I would think that 10-15% would hardly be noticable, beyond that \nI'm not sure.\n\n>>HUH? analyze is very very cheap compared to vacuum. Why not do it more\n>>often?\n>> \n>>\n>Because nothing is cheap if it's not needed. \n>\n>Analyze is needed only as often as the *aggregate distribution* of data in the \n>tables changes. Depending on the application, this could be frequently, but \n>far more often (in my experience running multiple databases for several \n>clients) the data distribution of very large tables changes very slowly over \n>time. \n> \n>\nValid points, and again I think this points to the fact that \npg_autovacuum needs to be more configurable. Being able to set \ndifferent thresholds for different tables will help considerably. In \nfact, you may find that some tables should have a vac threshold much \nlarger than the analyze thresold, while other tables might want the \nopposite.\n\n>One client's database, for example, that I have running VACUUM on chron \n>scripts runs on this schedule for the main tables:\n>VACUUM only: twice per hour\n>VACUUM ANALYZE: twice per day\n> \n>\nI would be surprized if you can notice the difference between a vacuum \nanalyze and a vacuum, especially on large tables.\n\n>On the other hand, I've another client's database where most activity involves \n>updates to entire classes of records. They run ANALYZE at the end of every \n>transaction.\n>\n>So if you're going to have a seperate ANALYZE schedule at all, it should be \n>slightly less frequent than VACUUM for large tables. Either that, or drop \n>the idea, and simplify pg_avd by running VACUUM ANALYZE all the time instead \n>of having 2 seperate schedules.\n> \n>\nI think you need two separate schedules. There are lots of times where \na vacuum doesn't help, and an analyze is all that is needed, and an \nanalyze is MUCH cheaper than a vacuum.\n\n>BUT .... now I see how you arrived at the logic you did. If you're testing \n>only on small tables, and not vacuuming them until they reach 110% updates, \n>then you *would* need to analyze more frequently. This is because of your \n>threshold value ... you'd want to analyze the small table as soon as even 30% \n>of its rows changed.\n>\n>So the answer is to dramatically lower the threshold for the small tables.\n> \n>\nPerhaps.\n\n>>What I think I am hearing is that people would like very much to be able\n>>to tweak the settings of pg_autovacuum for individual tables / databases\n>>etc. \n>> \n>>\n>Not from me you're not. 
Though that would be nice, too.\n>\n>So, my suggested defaults based on our conversation above:\n>\n>Vacuum threshold: 1000 records\n>Vacuum scale factor: 0.2\n>Analyze threshold: 50 records\n>Analyze scale factor: 0.3\n> \n>\nI'm open to discussion on changing the defaults. Perhaps what it would \nbe better to use some non-linear (perhaps logorithmic) scaling factor. \nSo that you wound up with something roughly like this:\n\n#tuples activity% for vacuum\n1k 100%\n10k 70%\n100k 45%\n1M 20%\n10M 10%\n100M 8%\n\nThanks for the lucid feedback / discussion. autovacuum is a feature \nthat, despite it's simple implementation, has generated a lot of \nfeedback from users, and I would really like to see it become something \ncloser to what it should be.\n\n\n", "msg_date": "Thu, 20 Nov 2003 19:40:15 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Matthew,\n\n> > 110% of a 1.1 million row table is updated, though, that vaccuum will\n> > take an hour or more.\n>\n> True, but I think it would be one hour once, rather than 30 minutes 4\n> times.\n\nWell, generally it would be about 6-8 times at 2-4 minutes each.\n\n> This is one of the things I had hoped to add to pg_autovacuum, but never\n> got to. In addition to just the information from the stats collector on\n> inserts updates and deletes, pg_autovacuum should also look at the FSM,\n> and make decisions based on it. Anyone looking for a project?\n\nHmmm ... I think that's the wrong approach. Once your database is populated, \nit's very easy to determine how to set the FSM for a given pg_avd level. If \nyou're vacuuming after 20% updates, for example, just set fsm_pages to 20% of \nthe total database pages plus growth & safety margins.\n\nI'd be really reluctant to base pv-avd frequency on the fsm settings instead. \nWhat if the user loads 8GB of data but leaves fsm_pages at the default of \n10,000? You can't do much with that; you'd have to vacuum if even 1% of the \ndata changed.\n\nThe other problem is that calculating data pages from a count of \nupdates+deletes would require pg_avd to keep more statistics and do more math \nfor every table. Do we want to do this?\n\n> But I can't imagine that 2% makes any difference on a large table. In\n> fact I would think that 10-15% would hardly be noticable, beyond that\n> I'm not sure.\n\nI've seen performance lag at 10% of records, especially in tables where both \nupdate and select activity focus on one subset of the table (calendar tables, \nfor example).\n\n> Valid points, and again I think this points to the fact that\n> pg_autovacuum needs to be more configurable. Being able to set\n> different thresholds for different tables will help considerably. In\n> fact, you may find that some tables should have a vac threshold much\n> larger than the analyze thresold, while other tables might want the\n> opposite.\n\nSure. Though I think we can make the present configuration work with a little \nadjustment of the numbers. I'll have a chance to test on production \ndatabases soon.\n\n> I would be surprized if you can notice the difference between a vacuum\n> analyze and a vacuum, especially on large tables.\n\nIt's substantial for tables with high statistics settings. A 1,000,000 row \ntable with 5 columns set to statistics=250 can take 3 minutes to analyze on a \nmedium-grade server.\n\n> I think you need two separate schedules. 
There are lots of times where\n> a vacuum doesn't help, and an analyze is all that is needed\n\nAgreed. And I've just talked to a client who may want to use pg_avd's ANALYZE \nscheduling but not use vacuum at all. BTW, I think we should have a setting \nfor this; for example, if -V is -1, don't vacuum.\n\n> I'm open to discussion on changing the defaults. Perhaps what it would\n> be better to use some non-linear (perhaps logorithmic) scaling factor.\n> So that you wound up with something roughly like this:\n>\n> #tuples activity% for vacuum\n> 1k 100%\n> 10k 70%\n> 100k 45%\n> 1M 20%\n> 10M 10%\n> 100M 8%\n\nThat would be cool, too. Though a count of data pages would be a better \nscale than a count of rows, and equally obtainable from pg_class.\n\n> Thanks for the lucid feedback / discussion. autovacuum is a feature\n> that, despite it's simple implementation, has generated a lot of\n> feedback from users, and I would really like to see it become something\n> closer to what it should be.\n\nWell, I hope to help now. Until very recently, I've not had a chance to \nseriously look at pg_avd and test it in production. Now that I do, I'm \ninterested in improving it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Nov 2003 22:24:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "On Thu, 2003-11-20 at 19:40, Matthew T. O'Connor wrote:\n> I'm open to discussion on changing the defaults. Perhaps what it would \n> be better to use some non-linear (perhaps logorithmic) scaling factor. \n> So that you wound up with something roughly like this:\n> \n> #tuples activity% for vacuum\n> 1k 100%\n> 10k 70%\n> 100k 45%\n> 1M 20%\n> 10M 10%\n> 100M 8%\n> \n\n\nJust thinking out loud here, so disregard if you think its chaff but...\nif we had a system table pg_avd_defaults that held what we generally\nconsider the best default percentages based on reltuples/pages, and\nadded a column to pg_class (could be some place better but..) which\ncould hold an overriding percentage, you could then have a column added\nto pg_stat_all_tables called vacuum_percentage, which would be a\ncoalesce of the override percentage or the default percentages based on\nrel_tuples (or rel_pages). This would give autovacuum a place to look\nfor each table as to when it should vacuum, and gives administrators the\noption to tweak it on a per table basis if they find they need a\nspecific table to vacuum at a different rate than the \"standard\". \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "21 Nov 2003 09:14:14 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Robert Treat wrote:\n\n>Just thinking out loud here, so disregard if you think its chaff but...\n>if we had a system table pg_avd_defaults \n>\n[snip]\n\nAs long as pg_autovacuum remains a contrib module, I don't think any \nchanges to the system catelogs will be make. If pg_autovacuum is \ndeemed ready to move out of contrib, then we can talk about the above.\n\n", "msg_date": "Fri, 21 Nov 2003 09:31:49 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" 
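A sketch of the scheme Robert describes, purely hypothetical: neither table below exists in PostgreSQL or pg_autovacuum, it only illustrates coalescing a per-table override with a size-based default:

-- Hypothetical defaults, one row per size class:
CREATE TABLE pg_avd_defaults (
    min_reltuples  float4,   -- lower bound of the size class
    vacuum_percent float4    -- default activity % that triggers a vacuum
);

-- Hypothetical per-table overrides:
CREATE TABLE avd_overrides (
    relid          oid,
    vacuum_percent float4
);

-- Effective setting per table: the override if present, otherwise the
-- default for the table's size class:
SELECT c.relname,
       COALESCE(o.vacuum_percent,
                (SELECT d.vacuum_percent
                   FROM pg_avd_defaults d
                  WHERE d.min_reltuples <= c.reltuples
                  ORDER BY d.min_reltuples DESC
                  LIMIT 1)) AS effective_vacuum_percent
  FROM pg_class c
  LEFT JOIN avd_overrides o ON o.relid = c.oid
 WHERE c.relkind = 'r';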
}, { "msg_contents": "Josh Berkus wrote:\n\n>Matthew,\n> \n>\n>>True, but I think it would be one hour once, rather than 30 minutes 4\n>>times.\n>> \n>>\n>Well, generally it would be about 6-8 times at 2-4 minutes each.\n> \n>\nAre you saying that you can vacuum a 1 million row table in 2-4 \nminutes? While a vacuum of the same table with an additional 1 million \ndead tuples would take an hour?\n\n>>This is one of the things I had hoped to add to pg_autovacuum, but never\n>>got to. In addition to just the information from the stats collector on\n>>inserts updates and deletes, pg_autovacuum should also look at the FSM,\n>>and make decisions based on it. Anyone looking for a project?\n>> \n>>\n>Hmmm ... I think that's the wrong approach. Once your database is populated, \n>it's very easy to determine how to set the FSM for a given pg_avd level. If \n>you're vacuuming after 20% updates, for example, just set fsm_pages to 20% of \n>the total database pages plus growth & safety margins.\n> \n>\nOk.\n\n>I'd be really reluctant to base pv-avd frequency on the fsm settings instead. \n>What if the user loads 8GB of data but leaves fsm_pages at the default of \n>10,000? You can't do much with that; you'd have to vacuum if even 1% of the \n>data changed.\n>\nOk, but as you said above it's very easy to set the FSM once you know \nyour db size.\n\n>The other problem is that calculating data pages from a count of \n>updates+deletes would require pg_avd to keep more statistics and do more math \n>for every table. Do we want to do this?\n> \n>\nI would think the math is simple enough to not be a big problem. Also, \nI did not recommend looking blindly at the FSM as our guide, rather \nconsulting it as another source of information as to when it would be \nuseful to vacuum. I don't have a good plan as to how to incorporate \nthis data, but to a large extent the FSM already tracks table activity \nand gives us the most accurate answer about storage growth (short of \nusing something like contrib/pgstattuple which takes nearly the same \namount of time as an actual vacuum)\n\n>>But I can't imagine that 2% makes any difference on a large table. In\n>>fact I would think that 10-15% would hardly be noticable, beyond that\n>>I'm not sure.\n>> \n>>\n>I've seen performance lag at 10% of records, especially in tables where both \n>update and select activity focus on one subset of the table (calendar tables, \n>for example).\n> \n>\nOk.\n\n>>Valid points, and again I think this points to the fact that\n>>pg_autovacuum needs to be more configurable. Being able to set\n>>different thresholds for different tables will help considerably. In\n>>fact, you may find that some tables should have a vac threshold much\n>>larger than the analyze thresold, while other tables might want the\n>>opposite.\n>> \n>>\n>Sure. Though I think we can make the present configuration work with a little \n>adjustment of the numbers. I'll have a chance to test on production \n>databases soon.\n> \n>\nI look forward to hearing results from your testing.\n\n>>I would be surprized if you can notice the difference between a vacuum\n>>analyze and a vacuum, especially on large tables.\n>> \n>>\n>It's substantial for tables with high statistics settings. A 1,000,000 row \n>table with 5 columns set to statistics=250 can take 3 minutes to analyze on a \n>medium-grade server.\n> \n>\nIn my testing, I never changed the default statistics settings.\n\n>>I think you need two separate schedules. 
There are lots of times where\n>>a vacuum doesn't help, and an analyze is all that is needed\n>> \n>>\n>Agreed. And I've just talked to a client who may want to use pg_avd's ANALYZE \n>scheduling but not use vacuum at all. BTW, I think we should have a setting \n>for this; for example, if -V is -1, don't vacuum.\n> \n>\nThat would be nice. Easy to add, and something I never thought of....\n\n>>I'm open to discussion on changing the defaults. Perhaps what it would\n>>be better to use some non-linear (perhaps logorithmic) scaling factor.\n>> \n>>\n>That would be cool, too. Though a count of data pages would be a better \n>scale than a count of rows, and equally obtainable from pg_class.\n> \n>\nBut we track tuples because we can compare against the count given by \nthe stats system. I don't know of a way (other than looking at the FSM, \nor contrib/pgstattuple ) to see how many dead pages exist.\n\n\n", "msg_date": "Fri, 21 Nov 2003 09:56:17 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Matthew T. O'Connor wrote:\n\n> But we track tuples because we can compare against the count given by \n> the stats system. I don't know of a way (other than looking at the FSM, \n> or contrib/pgstattuple ) to see how many dead pages exist.\n\nI think making pg_autovacuum dependent of pgstattuple is very good idea.\n\nProbably it might be a good idea to extend pgstattuple to return pages that are \nexcessively contaminated and clean them ASAP. Step by step getting closer to \ndaemonized vacuum.\n\n Shridhar\n\n", "msg_date": "Fri, 21 Nov 2003 20:36:54 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Shridhar Daithankar wrote:\n\n> Matthew T. O'Connor wrote:\n>\n>> But we track tuples because we can compare against the count given by \n>> the stats system. I don't know of a way (other than looking at the \n>> FSM, or contrib/pgstattuple ) to see how many dead pages exist.\n>\n> I think making pg_autovacuum dependent of pgstattuple is very good idea. \n\nOnly if pgstattuple can become much cheaper than it is now. Based on \nthe testing I did when I wrote pg_autovacuum, pgstattuple cost nearly \nthe same amount as a regular vacuum. Given that, what have we gained \nfrom that work? Wouldn't it just be better to run a vacuum and actually \nreclaim space rather than running pgstattuple, and just look and see if \nthere is free space to be reclaimed?\n\nPerhaps we could use pgstattuple ocasionally to see if we are going a \ngood job of keeping the amount of dead space to a reasonable level, but \nI'm still not really sure about this.\n\n> Probably it might be a good idea to extend pgstattuple to return pages \n> that are excessively contaminated and clean them ASAP. Step by step \n> getting closer to daemonized vacuum.\n\nI don't know of anyway to clean a particular set of pages. This is \nsomething that has been talked about (partial vacuums and such), but I \nthink Tom has raised several issues with it, I don't remember the \ndetails. Right now the only tool we have to reclaim space is vacuum, a \nwhole table at a time.\n\n\n", "msg_date": "Fri, 21 Nov 2003 10:17:31 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" 
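For reference, the dead-space figures under discussion can be seen with contrib/pgstattuple where it is installed, with exactly the caveat Matthew raises: it scans the whole table, so it costs roughly as much as a plain VACUUM. A sketch, with a placeholder table name and SELECT * because the exact output columns vary between versions:

-- Requires the contrib/pgstattuple functions to be loaded into the database.
SELECT * FROM pgstattuple('public.mytable');
-- The interesting figures are the dead tuple count/percentage and the free
-- space, i.e. how much a vacuum could reclaim.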
}, { "msg_contents": "Matthew,\n\n> As long as pg_autovacuum remains a contrib module, I don't think any\n> changes to the system catelogs will be make. If pg_autovacuum is\n> deemed ready to move out of contrib, then we can talk about the above.\n\nBut we could create a config file that would store stuff in a flatfile table, \nOR we could add our own \"system table\" that would be created when one \n\"initializes\" pg_avd.\n\nJust an idea. Mind you, I'm not so sure that we want to focus immediately on \nper-table settings. I think that we want to get the \"automatic\" settings \nworking fairly well first; a lot of new DBAs would use the per-table settings \nto shoot themselves in the foot. So we need to be able to make a strong \nrecommendation to \"try the automatic settings first.\"\n\n> Are you saying that you can vacuum a 1 million row table in 2-4\n> minutes? While a vacuum of the same table with an additional 1 million\n> dead tuples would take an hour?\n\nI'm probably exaggerating. I do know that I can vacuum a fairly clean 1-5 \nmillion row table in less than 4 mintues. I've never let such a table get \nto 50% dead tuples, so I don't really know how long that takes. Call me a \ncoward if you like ...\n\n> >I'd be really reluctant to base pv-avd frequency on the fsm settings\n> > instead. What if the user loads 8GB of data but leaves fsm_pages at the\n> > default of 10,000? You can't do much with that; you'd have to vacuum if\n> > even 1% of the data changed.\n>\n> Ok, but as you said above it's very easy to set the FSM once you know\n> your db size.\n\nActually, thinking about this I realize that PG_AVD and the Perl-based \npostgresql.conf configuration script I was working on (darn, who was doing \nthat with me?) need to go togther. With pg_avd, setting max_fsm_pages is \nvery easy; without it its a bit of guesswork.\n\nSo I think we can do this: for 'auto' settings:\n\nIf max_fsm_pages is between 13% and 100% of the total database pages, then set \nthe vacuum scale factor to match 3/4 of the fsm_pages setting, e.g.\ndatabase = 18,000,000 data pages;\nmax_fsm_pages = 3,600,000;\nset vacuum scale factor = 3.6mil/18mil * 3/4 = 0.15\n\nIf max_fsm_pages is less than 13% of database pages, issue a warning to the \nuser (log it, if possible) and set scale factor to 0.1. If it's greater \nthan 100% set it to 1 and leave it alone.\n\n> I don't have a good plan as to how to incorporate\n> this data, but to a large extent the FSM already tracks table activity\n> and gives us the most accurate answer about storage growth (short of\n> using something like contrib/pgstattuple which takes nearly the same\n> amount of time as an actual vacuum)\n\nI don't really think we need to do dynamic monitoring at this point. It \nwould be a lot of engineering to check data page pollution without having \nsignificant performance impact. It's doable, but something I think we \nshould hold off until version 3. It would mean hacking the FSM, which is a \nlittle beyond me right now.\n\n> In my testing, I never changed the default statistics settings.\n\nAh. Well, a lot of users do to resolve query problems.\n\n> But we track tuples because we can compare against the count given by\n> the stats system. 
I don't know of a way (other than looking at the FSM,\n> or contrib/pgstattuple ) to see how many dead pages exist.\n\nNo, but for scaling you don't need the dynamic count of tuples or of dead \ntuples; pg_class holds a reasonable accurate count of pages per table as of \nlast vacuum.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 21 Nov 2003 09:09:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Matthew,\n\n> Actually, this might be a necessary addition as pg_autovacuum currently \n> suffers from the startup transients that the FSM used to suffer from, \n> that is, it doesn't remember anything that happened the last time it \n> ran. A pg_autovacuum database could also be used to store thresholds \n> and counts from the last time it ran.\n\nI don't see how a seperate database is better than a table in the databases., \nexcept that it means scanning only one table and not one per database. For \none thing, making it a seperate database could make it hard to back up and \nmove your database+pg_avd config.\n\nBut I don't feel strongly about it.\n\n> Where are you getting 13% from? \n\n13% * 3/4 ~~ 10%\n\nAnd I think both of use agree that vacuuming tables with less than 10% changes \nis excessive and could lead to problems on its own, like overlapping vacuums.\n\n> Do you know of an easy way to get a \n> count of the total pages used by a whole cluster?\n\nSelect sum(relpages) from pg_class.\n\n> I do like the concept though as long as we find good values for \n> min_fsm_percentage and min_autovac_scaling_factor.\n\nSee above. I propose 0.13 and 0.1\n\n> Which we already keep a copy of inside of pg_autovacuum, and update \n> after we issue a vacuum.\n\nEven easier then.\n\nBTW, do we have any provisions to avoid overlapping vacuums? That is, to \nprevent a second vacuum on a table if an earlier one is still running?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 21 Nov 2003 13:23:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n>Matthew,\n> \n>\n>But we could create a config file that would store stuff in a flatfile table, \n>OR we could add our own \"system table\" that would be created when one \n>\"initializes\" pg_avd.\n> \n>\nI don't want to add tables to existing databases, as I consider that \nclutter and I never like using tools that clutter my production \ndatabases. I had considered using a pg_autovacuum database that if \nfound, would store customized settings for individual tables / \ndatabases. Dunno if this is a good idea, but it might make a good \nstopgap until people are comfortable modifying the system catalogs for \nautovacuum. \n\nActually, this might be a necessary addition as pg_autovacuum currently \nsuffers from the startup transients that the FSM used to suffer from, \nthat is, it doesn't remember anything that happened the last time it \nran. A pg_autovacuum database could also be used to store thresholds \nand counts from the last time it ran.\n\n>Just an idea. Mind you, I'm not so sure that we want to focus immediately on \n>per-table settings. I think that we want to get the \"automatic\" settings \n>working fairly well first; a lot of new DBAs would use the per-table settings \n>to shoot themselves in the foot. 
So we need to be able to make a strong \n>recommendation to \"try the automatic settings first.\"\n> \n>\nI agree in principle, question is what are the best settings, I still \nthink it will be hard to find a one size fits all, but I'm sure we can \ndo better than what we have.\n\n>Actually, thinking about this I realize that PG_AVD and the Perl-based \n>postgresql.conf configuration script I was working on (darn, who was doing \n>that with me?) need to go togther. With pg_avd, setting max_fsm_pages is \n>very easy; without it its a bit of guesswork.\n>\n>So I think we can do this: for 'auto' settings:\n>\n>If max_fsm_pages is between 13% and 100% of the total database pages, then set \n>the vacuum scale factor to match 3/4 of the fsm_pages setting, e.g.\n>database = 18,000,000 data pages;\n>max_fsm_pages = 3,600,000;\n>set vacuum scale factor = 3.6mil/18mil * 3/4 = 0.15\n> \n>\nWhere are you getting 13% from? Do you know of an easy way to get a \ncount of the total pages used by a whole cluster? I guess we can just \niterate over all the tables in all the databases and sum up the total \nnum of pages. We already iterate over them all, we just don't sum it up.\n\n>If max_fsm_pages is less than 13% of database pages, issue a warning to the \n>user (log it, if possible) and set scale factor to 0.1. If it's greater \n>than 100% set it to 1 and leave it alone.\n> \n>\nAgain I ask where 13% is coming from and also where is 0.1 coming from? \nI assume these are your best guesses right now, but not more than that. \nI do like the concept though as long as we find good values for \nmin_fsm_percentage and min_autovac_scaling_factor.\n\n>>But we track tuples because we can compare against the count given by\n>>the stats system. I don't know of a way (other than looking at the FSM,\n>>or contrib/pgstattuple ) to see how many dead pages exist.\n>> \n>>\n>No, but for scaling you don't need the dynamic count of tuples or of dead \n>tuples; pg_class holds a reasonable accurate count of pages per table as of \n>last vacuum.\n> \n>\nWhich we already keep a copy of inside of pg_autovacuum, and update \nafter we issue a vacuum.\n\n\n", "msg_date": "Fri, 21 Nov 2003 16:24:25 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Matthew,\n\n> Basically, I don't like the idea of modifying users databases, besides, \n> in the long run most of what needs to be tracked will be moved to the \n> system catalogs. I kind of consider the pg_autvacuum database to \n> equivalent to the changes that will need to be made to the system catalogs.\n\nOK. As I said, I don't feel strongly about it.\n\n> I certainly agree that less than 10% would be excessive, I still feel \n> that 10% may not be high enough though. That's why I kinda liked the \n> sliding scale I mentioned earlier, because I agree that for very large \n> tables, something as low as 10% might be useful, but most tables in a \n> database would not be that large.\n\nYes, but I thought that we were taking care of that through the \"threshold\" \nvalue?\n\nA sliding scale would also be OK. However, that would definitely require a \nleap to storing per-table pg_avd statistics and settings.\n\n> Only that pg_autovacuum isn't smart enough to kick off more than one \n> vacuum at a time. 
Basically, pg_autovacuum issues a vacuum on a table \n> and waits for it to finish, then check the next table in it's list to \n> see if it needs to be vacuumed, if so, it does it and waits for that \n> vacuum to finish. \n\nOK, then, we just need to detect the condition of the vacuums \"piling up\" \nbecause they are happening too often.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 21 Nov 2003 13:49:58 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus wrote:\n\n>Matthew,\n>\n> \n>\n>I don't see how a seperate database is better than a table in the databases., \n>except that it means scanning only one table and not one per database. For \n>one thing, making it a seperate database could make it hard to back up and \n>move your database+pg_avd config.\n> \n>\nBasically, I don't like the idea of modifying users databases, besides, \nin the long run most of what needs to be tracked will be moved to the \nsystem catalogs. I kind of consider the pg_autvacuum database to \nequivalent to the changes that will need to be made to the system catalogs.\n\nI guess it could make it harder to backup if you are moving your \ndatabase between clusters. Perhaps, if you create a pg_autovacuum \nschema inside of your database then we would could use that. I just \ndon't like tools that drop things into your database.\n\n>>Where are you getting 13% from? \n>> \n>>\n>\n>13% * 3/4 ~~ 10%\n>\n>And I think both of use agree that vacuuming tables with less than 10% changes \n>is excessive and could lead to problems on its own, like overlapping vacuums.\n>\n> \n>\nI certainly agree that less than 10% would be excessive, I still feel \nthat 10% may not be high enough though. That's why I kinda liked the \nsliding scale I mentioned earlier, because I agree that for very large \ntables, something as low as 10% might be useful, but most tables in a \ndatabase would not be that large.\n\n>> Do you know of an easy way to get a \n>>count of the total pages used by a whole cluster?\n>> \n>>\n>\n>Select sum(relpages) from pg_class.\n>\n> \n>\nduh....\n\n>BTW, do we have any provisions to avoid overlapping vacuums? That is, to \n>prevent a second vacuum on a table if an earlier one is still running?\n>\n> \n>\nOnly that pg_autovacuum isn't smart enough to kick off more than one \nvacuum at a time. Basically, pg_autovacuum issues a vacuum on a table \nand waits for it to finish, then check the next table in it's list to \nsee if it needs to be vacuumed, if so, it does it and waits for that \nvacuum to finish. There was some discussion of issuing concurrent \nvacuum against different tables, but it was decided that since vacuum is \nI/O bound, it would only make sense to issue concurrent vacuums that \nwere on different spindles, which is not something I wanted to get \ninto. Also, given the recent talk about how vacuum is still such a \nperformance hog, I can't imagine what multiple concurrent vacuums would \ndo to performance. Maybe as 7.5 develops and many of the vacuum \nperformance issues are addressed, we can revisit this question.\n\n\n", "msg_date": "Fri, 21 Nov 2003 16:52:29 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" 
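To make the page arithmetic above concrete, a small sketch; the 18,000,000 and 3,600,000 figures are just the example numbers from Josh's message, and pg_class is per-database, so covering a whole cluster means running this in each database:

-- Heap pages only (relkind = 'r'), so index pages do not inflate the count:
SELECT sum(relpages) AS table_pages
  FROM pg_class
 WHERE relkind = 'r';

-- Proposed auto-setting: vacuum scale factor = max_fsm_pages / total pages * 3/4
SELECT 3600000.0 / 18000000.0 * 0.75 AS vacuum_scale_factor;   -- 0.15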
}, { "msg_contents": "Josh Berkus wrote:\n\n>Matthew,\n> \n>\n>>I certainly agree that less than 10% would be excessive, I still feel \n>>that 10% may not be high enough though. That's why I kinda liked the \n>>sliding scale I mentioned earlier, because I agree that for very large \n>>tables, something as low as 10% might be useful, but most tables in a \n>>database would not be that large.\n>> \n>>\n>\n>Yes, but I thought that we were taking care of that through the \"threshold\" \n>value?\n> \n>\nWell the threshold is a combination of the base value and the scaling \nfactor which you are proposing is 0.1, so the threshold is base + \n(scaling factor)(num of tuples) So with the default base of 1000 and \nyour 0.1 you would have this:\n\n Num Rows threshold Percent\n 1,000 1,100 110%\n 10,000 2,000 20% \n 100,000 11,000 11%\n1,000,000 102,000 10%\n\nI don't like how that looks, hence the thought of some non-linear \nscaling factor that would still allow the percent to reach 10%, but at a \nslower rate, perhaps just a larger base value would suffice, but I think \nsmall table performance is going to suffer much above 1000. Anyone else \nhave an opinion on the table above? Good / Bad / Indifferent?\n\n>A sliding scale would also be OK. However, that would definitely require a \n>leap to storing per-table pg_avd statistics and settings.\n>\n> \n>\nI don't think it would, it would correlate the scaling factor with the \nnumber of tuples, no per-table settings required.\n\n>>Only that pg_autovacuum isn't smart enough to kick off more than one \n>>vacuum at a time. Basically, pg_autovacuum issues a vacuum on a table \n>>and waits for it to finish, then check the next table in it's list to \n>>see if it needs to be vacuumed, if so, it does it and waits for that \n>>vacuum to finish. \n>> \n>>\n>\n>OK, then, we just need to detect the condition of the vacuums \"piling up\" \n>because they are happening too often.\n>\n> \n>\nThat would be good to look into at some point, especially if vacuum is \ngoing to get slower as a result of the page loop delay patch that has \nbeen floating around.\n\n\n", "msg_date": "Fri, 21 Nov 2003 17:40:45 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> BTW, do we have any provisions to avoid overlapping vacuums? That is, to \n> prevent a second vacuum on a table if an earlier one is still running?\n\nYes, VACUUM takes a lock that prevents another VACUUM on the same table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Nov 2003 18:04:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd? " }, { "msg_contents": "On Fri, 21 Nov 2003, Matthew T. O'Connor wrote:\n> >> Do you know of an easy way to get a\n> >>count of the total pages used by a whole cluster?\n> >\n> >Select sum(relpages) from pg_class.\n\nYou might want to exclude indexes from this calculation. Some large\nread only tables might have indexes larger than the tables themselves.\n\n\n", "msg_date": "Fri, 21 Nov 2003 18:53:26 -0500 (EST)", "msg_from": "Chester Kustarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Josh Berkus <[email protected]> writes:\n> > BTW, do we have any provisions to avoid overlapping vacuums? 
That is, to \n> > prevent a second vacuum on a table if an earlier one is still running?\n> \n> Yes, VACUUM takes a lock that prevents another VACUUM on the same table.\n\nThe second vacuum waits for the lock to become available. If the situation got\nreally bad there could end up being a growing queue of vacuums waiting.\n\nI'm not sure how likely this is as the subsequent vacuums appear to finish\nquite quickly though. But then the largest table I have to play with fits\nentirely in memory.\n\n-- \ngreg\n\n", "msg_date": "21 Nov 2003 19:51:17 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] More detail on settings for pgavd?" }, { "msg_contents": "On Fri, Nov 21, 2003 at 04:24:25PM -0500, Matthew T. O'Connor wrote:\n\n> I don't want to add tables to existing databases, as I consider that \n> clutter and I never like using tools that clutter my production \n> databases. [...]\n> \n> Actually, this might be a necessary addition as pg_autovacuum currently \n> suffers from the startup transients that the FSM used to suffer from, \n> that is, it doesn't remember anything that happened the last time it \n> ran. A pg_autovacuum database could also be used to store thresholds \n> and counts from the last time it ran.\n\nYou could use the same approach the FSM uses: keep a file with the data,\nPGDATA/base/global/pg_fsm.cache. You don't need the data to be in a table\nafter all ...\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n", "msg_date": "Sat, 22 Nov 2003 11:46:37 -0300", "msg_from": "Alvaro Herrera Munoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "After a long battle with technology, [email protected] (Josh Berkus), an earthling, wrote:\n>> As long as pg_autovacuum remains a contrib module, I don't think\n>> any changes to the system catelogs will be make. If pg_autovacuum\n>> is deemed ready to move out of contrib, then we can talk about the\n>> above.\n>\n> But we could create a config file that would store stuff in a\n> flatfile table, OR we could add our own \"system table\" that would be\n> created when one \"initializes\" pg_avd.\n\nThe problem with introducing a \"config file\" is that you then have to\nintroduce a language and a parser for that language.\n\nThat introduces rather a lot of complexity. That was the BIG problem\nwith pgavd (which is a discarded project; pg_autovacuum is NOT the\nsame thing as pgavd). There was more code involved just in managing\nthe pgavd parser than there is in all of pg_autovacuum.\n\nI think the right answer for more sophisticated configuration would\ninvolve specifying a database in which to find the pg_autovacuum\ntable(s).\n\n> Just an idea. Mind you, I'm not so sure that we want to focus\n> immediately on per-table settings. 
I think that we want to get the\n> \"automatic\" settings working fairly well first; a lot of new DBAs\n> would use the per-table settings to shoot themselves in the foot.\n> So we need to be able to make a strong recommendation to \"try the\n> automatic settings first.\"\n\nYeah, it's probably a good idea to ensure that per-table settings\ninvolves some really conspicuous form of \"foot gun\" (with no kevlar\nsocks) to discourage its use except when you _know_ what you're\ndoing...\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\nQ: Can SETQ only be used with numerics?\nA: No, SETQ may also be used by Symbolics, and use it they do.\n", "msg_date": "Mon, 24 Nov 2003 23:08:08 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More detail on settings for pgavd?" }, { "msg_contents": "On Fri, Nov 21, 2003 at 07:51:17PM -0500, Greg Stark wrote:\n> The second vacuum waits for the lock to become available. If the\n> situation got really bad there could end up being a growing queue\n> of vacuums waiting.\n\nThose of us who have run into this know that \"the situation got\nreally bad\" is earlier than one might think. And it can indeed cause\nsome pretty pathological behaviour.\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 25 Nov 2003 23:27:46 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] More detail on settings for pgavd?" } ]
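To close out the thread above, the threshold arithmetic can be written out as a query. This is only an illustration of the formula with the figures under discussion (base 1000, scale factor 0.1); pg_autovacuum itself works from the statistics collector, not from pg_class:

-- threshold = base_value + scale_factor * reltuples, evaluated per table:
SELECT relname,
       reltuples,
       1000 + 0.1 * reltuples AS vacuum_threshold,
       round(((1000 + 0.1 * reltuples) * 100.0
              / nullif(reltuples, 0))::numeric, 1) AS threshold_pct_of_rows
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY reltuples DESC;

-- For tables of 1,000 / 10,000 / 100,000 / 1,000,000 rows this works out to
-- roughly 110% / 20% / 11% / 10% of the table, i.e. the table Matthew posted.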
[ { "msg_contents": "\nI am running RH 7.3 running Apache 1.3.27-2 and PostgreSQL 7.2.3-5.73.\nWhen having 100+ users connected to my server I notice that postmaster\nconsumes up wards of 90% of the processor and I hardly am higher than\n10% idle. I did notice that when I kill apache and postmaster that my\nidle processor percentage goes to 95% or higher. I am looking on ways\nthat I can find what connections are making the database processes run\nso high. If this could also tell which program is accessing it, it would\nbe helpful. I have look through the documents on performance tuning\npostgresql and have adjusted my memory with little effect. I have even\nrouted all traffic away from the Apache server so no load is on apache.\nI do have C programs that run and access the database. Any help will be\ngreatly appreciated. \n\nThanks in advance,\nBen\n\n\n\n", "msg_date": "18 Nov 2003 18:19:14 -0700", "msg_from": "Benjamin Bostow <[email protected]>", "msg_from_op": true, "msg_subject": "High Processor consumption" }, { "msg_contents": "Benjamin Bostow wrote:\n\n> I am running RH 7.3 running Apache 1.3.27-2 and PostgreSQL 7.2.3-5.73.\n> When having 100+ users connected to my server I notice that postmaster\n> consumes up wards of 90% of the processor and I hardly am higher than\n> 10% idle. I did notice that when I kill apache and postmaster that my\n> idle processor percentage goes to 95% or higher. I am looking on ways\n> that I can find what connections are making the database processes run\n> so high. If this could also tell which program is accessing it, it would\n> be helpful. I have look through the documents on performance tuning\n> postgresql and have adjusted my memory with little effect. I have even\n> routed all traffic away from the Apache server so no load is on apache.\n> I do have C programs that run and access the database. Any help will be\n> greatly appreciated. \n\nRoutinely the CPU load for postgresql translates to too much low shared buffers \nsetting for requirement.\n\nWhat are your postgresql.conf tunings? Could you please post them?\n\n Shridhar\n\n", "msg_date": "Wed, 19 Nov 2003 12:04:12 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Processor consumption" }, { "msg_contents": "Benjamin Bostow wrote:\n\n> I haven't modified any of the setting. I did try changing shmmax from\n> 32MB to 256MB but didn't see much change in the processor usage. The\n> init script that runs to start the server uses the following:\n> su -l postgres -s /bin/sh -c \"/usr/bin/pg_ctl -D $PGDATA -p\n> /usr/bin/postmaster -o \\\"-i -N 128 -B 256\\\" start > /dev/null 2>&1\" <\n> /dev/null\n\nLol.. -B 256 does not mean 256MB of buffers. It means 2MB of buffers. Each \nbuffer is of 8K. Try upping it to a 1000 or 2000\n> \n> I haven't modified postgresql.conf yet but am going through a book on\n> performance tuning the server. If you can provide any suggestions or\n> help as I am new to postgres it would be appreciated.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n HTH\n\n Shridhar\n\n", "msg_date": "Thu, 20 Nov 2003 11:28:13 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Processor consumption" } ]
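A quick illustration of the buffer arithmetic in Shridhar's reply; the 8 KB figure assumes the default block size, and the suggested value is only an example:

-- Each shared buffer is one block (8 KB by default), so:
--   -B 256   ->  256 * 8 KB =  2 MB
--   -B 2000  -> 2000 * 8 KB = 16 MB
-- Current setting (returned as a result set in 7.3 and later):
SHOW shared_buffers;
-- Equivalent postgresql.conf entry instead of the -B switch:
--   shared_buffers = 2000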
[ { "msg_contents": "\nI have this table:\n\ndb=> \\d object_property_value\n Table \"db.object_property_value\"\n Column | Type | Modifiers \n-----------------------+------------------------+--------------------\n obj_property_value_id | integer | not null default nextval(...\n obj_property_id | integer | not null\n value | text | \nIndexes:\n \"object_property_value_pkey\" primary key, btree (obj_property_value_id)\n \"opv_obj_property_id_ix\" btree (obj_property_id)\n \"opv_v_ix\" btree (substr(value, 1, 128))\nForeign-key constraints:\n \"object_property_fkey\" FOREIGN KEY (obj_property_id) \n REFERENCES object_property(obj_property_id) \n ON UPDATE CASCADE ON DELETE CASCADE\n\n(long lines edited for readability).\n\nThe table contains about 250,000 records and will grow at regular intervals. \nThe 'value' column contains text of various lengths. The table is VACUUMed\nand ANALYZEd regularly and waxed on Sunday mornings. Database encoding is\nUnicode. Server is 7.4RC1 or 7.4RC2 and will be 7.4 ASAP.\n\nI want to query this table to match a specific value along\nthe lines of:\n\nSELECT obj_property_id\n FROM object_property_value opv\n WHERE opv.value = 'foo'\n\nThere will only be a few (at the moment 2 or 3) rows exactly matching\n'foo'. This query will only be performed with values containing less\nthan around 100 characters, which account for ca. 10% of all rows in the\ntable.\n\nThe performance is of course lousy:\n\ndb=> EXPLAIN\ndb-> SELECT obj_property_id\ndb-> FROM object_property_value opv\ndb-> WHERE opv.value = 'foo';\n QUERY PLAN \n-----------------------------------------------------------------------------\n Seq Scan on object_property_value opv (cost=0.00..12258.26 rows=2 width=4)\n Filter: (value = 'foo'::text)\n(2 rows)\n\nHowever, if I create a VARCHAR field containing the first 128 characters of\nthe text field and index that, an index scan is used:\n\ndb=> EXPLAIN\ndb-> SELECT obj_property_id\ndb-> FROM object_property_value opv\ndb-> WHERE opv.opv_vc = 'foo';\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Index Scan using opv_vc_ix on object_property_value opv (cost=0.00..6.84 \nrows=2 width=4)\n Index Cond: ((opv_vc)::text = 'foo'::text)\n\nThe question is therefore: can I get an index to work on the TEXT column? It\nis currently indexed with:\n \"opv_v_ix\" btree (substr(value, 1, 128))\n\nwhich doesn't appear to have any effect. I am probably missing something\nobvious though. I can live with maintaining an extra VARCHAR column but\nwould like to keep the table as simple as possible.\n\n(For anyone wondering: yes, I can access the data using tsearch2 - via\na different table in this case - but this is not always appropriate).\n\n\nThanks for any hints.\n\n\nIan Barwick\[email protected]\n\n", "msg_date": "Wed, 19 Nov 2003 10:18:18 +0100", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": true, "msg_subject": "TEXT column and indexing" }, { "msg_contents": "On Wed, 19 Nov 2003 10:18:18 +0100, Ian Barwick <[email protected]>\nwrote:\n>Indexes:\n> [...]\n> \"opv_v_ix\" btree (substr(value, 1, 128))\n\n>SELECT obj_property_id\n> FROM object_property_value opv\n> WHERE opv.value = 'foo'\n\nTry\n\t... 
WHERE substr(opv.value, 1, 128) = 'foo'\n\nHTH.\nServus\n Manfred\n", "msg_date": "Wed, 19 Nov 2003 17:26:01 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT column and indexing" }, { "msg_contents": "\nOn Wed, 19 Nov 2003, Ian Barwick wrote:\n\n>\n> I have this table:\n>\n> db=> \\d object_property_value\n> Table \"db.object_property_value\"\n> Column | Type | Modifiers\n> -----------------------+------------------------+--------------------\n> obj_property_value_id | integer | not null default nextval(...\n> obj_property_id | integer | not null\n> value | text |\n> Indexes:\n> \"object_property_value_pkey\" primary key, btree (obj_property_value_id)\n> \"opv_obj_property_id_ix\" btree (obj_property_id)\n> \"opv_v_ix\" btree (substr(value, 1, 128))\n> Foreign-key constraints:\n> \"object_property_fkey\" FOREIGN KEY (obj_property_id)\n> REFERENCES object_property(obj_property_id)\n> ON UPDATE CASCADE ON DELETE CASCADE\n> I want to query this table to match a specific value along\n> the lines of:\n>\n> SELECT obj_property_id\n> FROM object_property_value opv\n> WHERE opv.value = 'foo'\n>\n> The question is therefore: can I get an index to work on the TEXT column? It\n> is currently indexed with:\n> \"opv_v_ix\" btree (substr(value, 1, 128))\n>\n> which doesn't appear to have any effect. I am probably missing something\n> obvious though. I can live with maintaining an extra VARCHAR column but\n\nYou probably need to be querying like:\nWHERE substr(value,1,128)='foo';\nin order to use that index.\n\nWhile substr(txtcol, 1,128) happens to have the property that it would be\nprobably be useful in a search against a short constant string, that's an\ninternal property of that function.\n", "msg_date": "Wed, 19 Nov 2003 08:35:05 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT column and indexing" }, { "msg_contents": "On Wednesday 19 November 2003 17:35, Stephan Szabo wrote:\n> On Wed, 19 Nov 2003, Ian Barwick wrote:\n> > I have this table:\n(...)\n>\n> You probably need to be querying like:\n> WHERE substr(value,1,128)='foo';\n> in order to use that index.\n>\n> While substr(txtcol, 1,128) happens to have the property that it would be\n> probably be useful in a search against a short constant string, that's an\n> internal property of that function.\n\nThat's the one :-). Thanks!\n\nIan Barwick\[email protected]\n\n", "msg_date": "Wed, 19 Nov 2003 20:13:42 +0100", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TEXT column and indexing" }, { "msg_contents": "On Wednesday 19 November 2003 17:26, you wrote:\n> On Wed, 19 Nov 2003 10:18:18 +0100, Ian Barwick <[email protected]>\n>\n> wrote:\n> >Indexes:\n> > [...]\n> > \"opv_v_ix\" btree (substr(value, 1, 128))\n> >\n> >SELECT obj_property_id\n> > FROM object_property_value opv\n> > WHERE opv.value = 'foo'\n>\n> Try\n> \t... 
WHERE substr(opv.value, 1, 128) = 'foo'\n>\n> HTH.\n\nYup:\ndb=> explain\ndb-> SELECT obj_property_id\ndb-> FROM object_property_value opv\ndb-> WHERE substr(opv.value,1,128) = 'foo';\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Index Scan using opv_v_ix on object_property_value opv (cost=0.00..4185.78 \nrows=1101 width=4)\n Index Cond: (substr(value, 1, 128) = 'foo'::text)\n(2 rows)\n\nMany thanks\n\nIan Barwick\[email protected]\n\n", "msg_date": "Wed, 19 Nov 2003 20:16:48 +0100", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TEXT column and indexing" } ]
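To recap the resolution of the thread above: the expression index was already in place, but the planner only uses it when the WHERE clause repeats the indexed expression. A minimal sketch, using the table, column, and index names taken from the messages above (PostgreSQL 7.4 syntax):

    -- expression index, as already defined on the table
    CREATE INDEX opv_v_ix ON object_property_value (substr(value, 1, 128));

    -- the WHERE clause must use the same expression for the planner
    -- to consider an index scan instead of a sequential scan
    SELECT obj_property_id
      FROM object_property_value opv
     WHERE substr(opv.value, 1, 128) = 'foo';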
[ { "msg_contents": "On Thu, Nov 20, 2003 at 07:07:30 +0530,\n Rajesh Kumar Mallah <[email protected]> wrote:\n> \n> If i dump and reload the performance improves and it takes < 1 sec. This\n> is what i have been doing since the upgrade. But its not a solution.\n> \n> The Vacuum full is at the end of a loading batch SQL file which makes lot of\n> insert , deletes and updates.\n\nIf a dump and reload fixes your problem, most likely you have a lot of\ndead tuples in the table. You might need to run vacuum more often.\nYou might have an open transaction that is preventing vacuum full\nfrom cleaning up the table.\n", "msg_date": "Wed, 19 Nov 2003 09:49:59 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with select count(*) .." }, { "msg_contents": "\nEver Since i upgraded to 7.4RC2 i am facing problem \nwith select count(*) . In 7.3 the problem was not there\nselect count(*) from data_bank.profiles used to return almost\ninstantly , but in 7.4\n\nexplain analyze SELECT count(*) from data_bank.profiles;\n QUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=48361.30..48361.30 rows=1 width=0) (actual time=23456.870..23456.871 rows=1 loops=1)\n -> Seq Scan on profiles (cost=0.00..47431.84 rows=371784 width=0) (actual time=12174.999..23262.823 rows=123928 loops=1)\n Total runtime: 23458.460 ms\n(3 rows)\n\ntradein_clients=#\n\nIf i dump and reload the performance improves and it takes < 1 sec. This\nis what i have been doing since the upgrade. But its not a solution.\n\nThe Vacuum full is at the end of a loading batch SQL file which makes lot of\ninsert , deletes and updates.\n\nRegds\nMallah.\n\n\n\n\n\n\nVACUUM FULL VERBOSE ANALYZE data_bank.profiles;\n INFO: vacuuming \"data_bank.profiles\"\n INFO: \"profiles\": found 430524 removable, 371784 nonremovable row versions in 43714 pages\n INFO: index \"profiles_pincode\" now contains 371784 row versions in 3419 pages\n INFO: index \"profiles_city\" now contains 371784 row versions in 3471 pages\n INFO: index \"profiles_branch\" now contains 371784 row versions in 2237 pages\n INFO: index \"profiles_area_code\" now contains 371784 row versions in 2611 pages\n INFO: index \"profiles_source\" now contains 371784 row versions in 3165 pages\n INFO: index \"co_name_index_idx\" now contains 371325 row versions in 3933 pages\n INFO: index \"address_index_idx\" now contains 371490 row versions in 4883 pages\n INFO: index \"profiles_exp_cat\" now contains 154836 row versions in 2181 pages\n INFO: index \"profiles_imp_cat\" now contains 73678 row versions in 1043 pages\n INFO: index \"profiles_manu_cat\" now contains 87124 row versions in 1201 pages\n INFO: index \"profiles_serv_cat\" now contains 19340 row versions in 269 pages\n INFO: index \"profiles_pid\" now contains 371784 row versions in 817 pages\n INFO: index \"profiles_pending_branch_id\" now contains 0 row versions in 1 pages\n INFO: \"profiles\": moved 0 row versions, truncated 43714 to 43714 pages\n INFO: vacuuming \"pg_toast.pg_toast_67748379\"\n INFO: \"pg_toast_67748379\": found 0 removable, 74 nonremovable row versions in 17 pages\n INFO: index \"pg_toast_67748379_index\" now contains 74 row versions in 2 pages\n INFO: \"pg_toast_67748379\": moved 1 row versions, truncated 17 to 17 pages\n INFO: index \"pg_toast_67748379_index\" now contains 74 row versions in 2 pages\n INFO: analyzing \"data_bank.profiles\"\n INFO: \"profiles\": 43714 pages, 3000 rows sampled, 3634 estimated total 
rows\nVACUUM\nTime: 1001525.19 ms\n\n\n", "msg_date": "Thu, 20 Nov 2003 07:07:30 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "problem with select count(*) .." } ]
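The VACUUM FULL output above reports 430524 removable versus 371784 nonremovable row versions, i.e. more than half the table was dead tuples, which is what Bruno's reply points at. A rough sketch of the routine maintenance suggested there, assuming the batch job can run it between loads (plain VACUUM does not take the exclusive lock that VACUUM FULL needs):

    -- run after each insert/update/delete batch to keep dead rows
    -- from accumulating, and to refresh planner statistics
    VACUUM ANALYZE data_bank.profiles;

    -- re-check the plan and timing afterwards
    EXPLAIN ANALYZE SELECT count(*) FROM data_bank.profiles;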
[ { "msg_contents": "I'm having a problem with a queyr like: INSERT INTO FACT (x,x,x,x,x,x)\nSELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a\nand x=b .... -- postgres7.4 is running out of memory. I'm not sure\nwhy this would happen -- does it buffer the subselect before doing the\ninsert?\n\nThings are pretty big scale: 3gb ram, 32768 shared buffers, 700gb disk,\nmillions of rows in the tables.\n\n\n", "msg_date": "Thu, 20 Nov 2003 13:04:29 -0800", "msg_from": "stephen farrell <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with insert into select..." }, { "msg_contents": "On Thursday 20 November 2003 21:04, stephen farrell wrote:\n> I'm having a problem with a queyr like: INSERT INTO FACT (x,x,x,x,x,x)\n> SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a\n> and x=b .... -- postgres7.4 is running out of memory.\n\nWhen this has happened to me it's always been because I've got an \nunconstrained join due to pilot error. Try an EXPLAIN on the select part and \nsee if that pops up anything.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 24 Nov 2003 19:29:40 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with insert into select..." } ]
[ { "msg_contents": "I'm having a problem with a queyr like: INSERT INTO FACT (x,x,x,x,x,x)\nSELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a\nand x=b .... -- postgres7.4 is running out of memory. I'm not sure\nwhy this would happen -- does it buffer the subselect before doing the\ninsert?\n\nThings are pretty big scale: 3gb ram, 32768 shared buffers, 700gb disk,\nmillions of rows in the tables.\n\n\n\n\n", "msg_date": "Thu, 20 Nov 2003 13:09:31 -0800", "msg_from": "stephen farrell <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with insert into select..." }, { "msg_contents": "stephen farrell <[email protected]> writes:\n> I'm having a problem with a queyr like: INSERT INTO FACT (x,x,x,x,x,x)\n> SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a\n> and x=b .... -- postgres7.4 is running out of memory. I'm not sure\n> why this would happen -- does it buffer the subselect before doing the\n> insert?\n\nWhat does EXPLAIN show for the query? And we need to see the exact\nquery and table definitions, not abstractions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Nov 2003 17:58:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with insert into select... " }, { "msg_contents": "Ok -- so we created indexes and it was able to complete successfully. \nBut why would creating indexes affect the memory footprint, and should it?\n\n\nDoes it buffer the sub-select before doing the insert, or does it do the \ninsert record-by-record?\n\n\nSee correspondence below for details:\n\n\nSteve,\n\n With the indexes created it worked. It took about 4 hours, but \nit inserted all of the records.\n\n\tstephen farrell <[email protected]>\n\n11/20/2003 05:22 PM\n\t\n To: James Rhodes/Almaden/IBM@IBMUS\n cc:\n Subject: Re: [Fwd: Re: [PERFORM] Problem with insert \ninto select...]\n\n\n\nif you do \"explain\" before the sql statement (e.g., \"explain select *\nfrom foo\"), it'll tell you the query plan.\n\nJames Rhodes wrote:\n >\n > Steve,\n >\n > Here is the detailed structure of the tables and the query that\n > is failing (the \"INSERT INTO FACT\" query) and I attached the logfile.\n > Also what is EXPLAIN???\n >\n > CREATE TABLE RAW ( RAW_KEY serial, PATNO_TEXT VARCHAR (9),\n > APPDATE_DATETIME VARCHAR (11), ISDATE_DATETIME VARCHAR (11),\n > WHATEVERSNO_TEXT VARCHAR (5), WHATEVERSNO_NUMBER VARCHAR (6), APPNO_TEXT\n > VARCHAR (10), TITLE_TEXT TEXT, USCLASS_TEXT VARCHAR (14),\n > USCLASS_TEXTLIST_TEXT TEXT, AUTHORCODE_TEXT VARCHAR (9),\n > AUTHORNORM_TEXT VARCHAR (195), AUTHOR_TEXT VARCHAR (212),\n > AUTHOR_TEXTLIST_TEXT TEXT, AUTHORADDRESS_TEXT VARCHAR (84),\n > AUTHORADDRESS_TEXTLIST_TEXT TEXT, INVENTOR_TEXT VARCHAR (50),\n > INVENTOR_TEXTLIST_TEXT TEXT, INVENTORADDRESS_TEXT VARCHAR (90),\n > INVENTORADDRESS_TEXTLIST_TEXT TEXT, AGENT_TEXT TEXT, AGENT_TEXTLIST_TEXT\n > TEXT, USSEARCHFIELD_TEXT VARCHAR (26), USSEARCHFIELD_TEXTLIST_TEXT\n > VARCHAR (150), USREFISDATE_TEXT VARCHAR (13), USREFISDATE_TEXTLIST_TEXT\n > TEXT, USREFNAME_TEXT VARCHAR (34), USREFNAME_TEXTLIST_TEXT TEXT,\n > ABSTRACT_TEXT TEXT, ABSTRACT_TEXTLIST_TEXT TEXT, ABSTRACT_RICHTEXT_PAR\n > TEXT, WHATEVERS_RICHTEXT_PAR TEXT, USREFPATNO_RICHTEXT_PAR TEXT, PRIMARY\n > KEY(RAW_KEY));\n >\n >\n > CREATE TABLE ISSUE_TIME (\n > TAB_KEY serial,\n > ISDATE_DATETIME varchar (8),\n > MONTH INT,\n > DAY INT,\n > YEAR INT\n > , PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE SOMETHING_NUMBER (\n > TAB_KEY serial,\n > PATNO_TEXT varchar (7)\n > 
, PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE APP_TIME (\n > TAB_KEY serial,\n > APPDATE_DATETIME varchar (8),\n > MONTH INT,\n > DAY INT,\n > YEAR INT\n > , PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE AUTHOR (\n > TAB_KEY serial,\n > CODE varchar (6),\n > AUTHOR text\n > , PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE APPLICATION_NUMBER (\n > TAB_KEY serial,\n > APPNO_TEXT varchar (7)\n > , PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE WHATEVERS (\n > TAB_KEY serial,\n > abstract_richtext_par text,\n > WHATEVERS_richtext_par text,\n > raw_key int,\n > title_text text\n > , PRIMARY KEY(TAB_KEY))\n >\n > CREATE TABLE FACT (DYN_DIM1 BIGINT, DYN_DIM2 BIGINT,DYN_DIM3\n > BIGINT,ISSUE_TIME BIGINT, SOMETHING_NUMBER BIGINT, APP_TIME BIGINT,\n > AUTHOR BIGINT, APPLICATION_NUMBER BIGINT, WHATEVERS BIGINT)\n >\n > INSERT INTO FACT (ISSUE_TIME, SOMETHING_NUMBER, APP_TIME, AUTHOR,\n > APPLICATION_NUMBER, WHATEVERS) SELECT ISSUE_TIME.TAB_KEY,\n > SOMETHING_NUMBER.TAB_KEY, APP_TIME.TAB_KEY, AUTHOR.TAB_KEY,\n > APPLICATION_NUMBER.TAB_KEY, WHATEVERS.TAB_KEY FROM ISSUE_TIME,\n > SOMETHING_NUMBER, APP_TIME, AUTHOR, APPLICATION_NUMBER, WHATEVERS, raw\n > WHERE ISSUE_TIME.ISDATE_DATETIME=raw.ISDATE_DATETIME AND\n > SOMETHING_NUMBER.PATNO_TEXT=raw.PATNO_TEXT AND\n > APP_TIME.APPDATE_DATETIME=raw.APPDATE_DATETIME AND\n > AUTHOR.CODE=AUTHORCODE_TEXT AND AUTHOR.AUTHOR=(AUTHOR_TEXT ||\n > ' | ' || AUTHOR_TEXTLIST_TEXT) AND\n > APPLICATION_NUMBER.APPNO_TEXT=raw.APPNO_TEXT AND\n > WHATEVERS.raw_key=raw.raw_key\n\nTom Lane wrote:\n> stephen farrell <[email protected]> writes:\n> \n>>I'm having a problem with a queyr like: INSERT INTO FACT (x,x,x,x,x,x)\n>>SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a\n>>and x=b .... -- postgres7.4 is running out of memory. I'm not sure\n>>why this would happen -- does it buffer the subselect before doing the\n>>insert?\n> \n> \n> What does EXPLAIN show for the query? And we need to see the exact\n> query and table definitions, not abstractions.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Thu, 20 Nov 2003 18:05:09 -0800", "msg_from": "stephen farrell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with insert into select..." }, { "msg_contents": "stephen farrell <[email protected]> writes:\n> With the indexes created it worked. It took about 4 hours, but it\n> inserted all of the records.\n\nHas this been satisfactorily resolved?\n\nIf not, can you post an EXPLAIN ANALYZE for the failing query, as Tom\nasked earlier?\n\n-Neil\n\n", "msg_date": "Tue, 09 Dec 2003 03:32:24 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with insert into select..." } ]
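Neil's request is for an EXPLAIN ANALYZE of the failing statement. Since EXPLAIN ANALYZE actually executes the statement, the usual approach is to wrap it in a transaction and roll it back. A sketch using the INSERT ... SELECT quoted above; the raw.* qualifiers on the AUTHOR comparison columns are an assumption, as the original leaves them unqualified:

    BEGIN;
    EXPLAIN ANALYZE
    INSERT INTO FACT (ISSUE_TIME, SOMETHING_NUMBER, APP_TIME, AUTHOR,
                      APPLICATION_NUMBER, WHATEVERS)
    SELECT ISSUE_TIME.TAB_KEY, SOMETHING_NUMBER.TAB_KEY, APP_TIME.TAB_KEY,
           AUTHOR.TAB_KEY, APPLICATION_NUMBER.TAB_KEY, WHATEVERS.TAB_KEY
      FROM ISSUE_TIME, SOMETHING_NUMBER, APP_TIME, AUTHOR,
           APPLICATION_NUMBER, WHATEVERS, raw
     WHERE ISSUE_TIME.ISDATE_DATETIME = raw.ISDATE_DATETIME
       AND SOMETHING_NUMBER.PATNO_TEXT = raw.PATNO_TEXT
       AND APP_TIME.APPDATE_DATETIME = raw.APPDATE_DATETIME
       AND AUTHOR.CODE = raw.AUTHORCODE_TEXT
       AND AUTHOR.AUTHOR = (raw.AUTHOR_TEXT || ' | ' || raw.AUTHOR_TEXTLIST_TEXT)
       AND APPLICATION_NUMBER.APPNO_TEXT = raw.APPNO_TEXT
       AND WHATEVERS.raw_key = raw.raw_key;
    ROLLBACK;   -- discard the inserted rows; only the plan output is wanted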
[ { "msg_contents": "Heya,\n\nSorry for no updates on the old pg_autoconfig script thing, been rather \nbusy at work the last week or two :-( but i suppose it pays the bills. I \ndo have a few days off work now so I can spend some time on finishing \nthe first version off. The latest version can be found at\n\nhttp://www.chuckie.co.uk/postgresql/pg_autoconfig.txt\n\nJosh have you managed to put together the rest of the calculations yet?\n\n\nThanks,\n\n\nNick\n\n", "msg_date": "Sat, 22 Nov 2003 11:56:35 +0000", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": true, "msg_subject": "pg_autoconfig.pl" }, { "msg_contents": "Nick,\n\n> Josh have you managed to put together the rest of the calculations yet?\n\nGiven that I spent most of November working on the PR for the 7.4 release, \nI've just started to think about it. As you can see, I'm thinking about \ndovetailing pg_avd and pg_autoconf.\n\nThe difficult thing is to figure out settings for \"bad\" hardware setups. Like \na 5GB database on a PIII500 + 256mb running 4 other pieces of major software \n(I've acctually seen this). Or a 500/minute OLTP database on a machine \nwith 1 fixed disk.\n\nAll the variables are going into a hash array, right?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 22 Nov 2003 09:19:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autoconfig.pl" }, { "msg_contents": "----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Nick Barr\" <[email protected]>; <[email protected]>\nSent: Saturday, November 22, 2003 5:19 PM\nSubject: Re: pg_autoconfig.pl\n\n\n> Nick,\n>\n> > Josh have you managed to put together the rest of the calculations yet?\n>\n> Given that I spent most of November working on the PR for the 7.4 release,\n> I've just started to think about it. As you can see, I'm thinking about\n> dovetailing pg_avd and pg_autoconf.\n>\n> The difficult thing is to figure out settings for \"bad\" hardware setups.\nLike\n> a 5GB database on a PIII500 + 256mb running 4 other pieces of major\nsoftware\n> (I've acctually seen this). Or a 500/minute OLTP database on a machine\n> with 1 fixed disk.\n>\n> All the variables are going into a hash array, right?\n\nYep that rights.$sys is the array name. And there are loads of different\nkeys for the hash. I will try and get a listing out of the script sometime.\n\n\nNick\n\n\n", "msg_date": "Mon, 24 Nov 2003 11:24:47 -0000", "msg_from": "\"Nick Barr\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_autoconfig.pl" } ]