[ { "msg_contents": "Any of you chaps used this controller?\n\n\n\n ___________________________________________________________ \nRise to the challenge for Sport Relief with Yahoo! For Good \n\nhttp://uk.promotions.yahoo.com/forgood/\n\n", "msg_date": "Fri, 14 Mar 2008 17:20:29 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Adaptec 5805 SAS Raid" }, { "msg_contents": "Glyn Astill wrote:\n> Any of you chaps used this controller?\n> \n\nIt looks very similar to the rebadged Adaptec that Sun shipped in the \nX4150 I ordered a few weeks ago, though the Sun model had only 256MB of \ncache RAM. I was wary of going Adaptec after my experiences with the \nPERC/3i, which couldn't even seem to manage a single disk's worth of \nread performance from a RAID-1 array, but I was pleasantly surprised by \nthis card. I'm only running a RAID-1 array on it, with 2 146GB 10krpm \nSAS drives, but I was impressed with the read performance -- it seems \nquite happy to split sequential reads across the two disks.\n\nHere are the bonnie++ numbers I took during my run-in testing:\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\nmembrane 12G 54417 89 86808 15 41489 6 59517 96 125266 10 \n629.6 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\nmembrane 16 19496 97 +++++ +++ 14220 68 7673 40 +++++ +++ \n5246 26\n\nI'm not sure if I'd yet be comfortable running a larger array for a \ndatabase on an Adaptec card, but it's definitely a great improvement on \nthe earlier Adaptec hardware I've used.\n\nThanks\nLeigh\n\n> \n> \n> ___________________________________________________________ \n> Rise to the challenge for Sport Relief with Yahoo! For Good \n> \n> http://uk.promotions.yahoo.com/forgood/\n> \n> \n", "msg_date": "Sat, 15 Mar 2008 09:34:08 +1100", "msg_from": "Leigh Dyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adaptec 5805 SAS Raid" } ]
[ { "msg_contents": "It often happens that a particular pieces of information is non-null for a\nsmall minority of cases. A superficially different manifestation of this is\nwhen two pieces of information are identical in all but a small minority of\ncases. This can be easily mapped to the previous description by defining a\nnull in one column to mean that its contents should be obtained from those\nof another column. A further variant of this is when one piece of\ninformation is a simple function of another one in all but a small minority\nof cases.\n\n(BTW, I vaguely recall that RDb theorists have a technical term for this\nparticular design issue, but I don't remember it.)\n\nIn all these cases, the design choice, at least according to RDb's 101, is\nbetween including a column in the table that will be NULL most of the time,\nor defining a second auxiliary column that references the first one and\nholds the non-redundant information for the minority of cases for which this\nis necessary (and maybe define a VIEW that includes all the columns).\n\nBut for me it is a frequent occurrence that my quaint and simple RDb's 101\nreasoning doesn't really apply for PostgreSQL. Basically, Pg is too smart\nfor it! For example, does a large proportion of NULLs really imply a lot of\nwasted space? Maybe this is true for fixed-length data types, but what\nabout for type TEXT or VARCHAR?\n\nJust to be concrete, consider the case of a customers database for some home\nshopping website. Suppose that, as it happens, for the majority of this\nsite's customers, the shipping and billing addresses are identical. Or\nconsider the scenario of a company in which, for most employees, the email\naddress can be readily computed from the first and last name using the rule\nFirst M. Last => [email protected], but the company allows some\nflexibility for special cases (e.g. for people like Yasuhiro Tanaka who's\nknown to everyone by his nickname, Yaz, the email is\[email protected] hardly anyone remembers or even knows his\nfull name.)\n\nWhat's your schema design approach for such situations? How would you go\nabout deciding whether the number of exceptional cases is small enough to\nwarrant a second table? Of course, one could do a systematic profiling of\nvarious possible scenarios, but as a first approximation what's your\nrule-of-thumb?\n\nTIA!\n\nKynn\n\nIt often happens that a particular pieces of information is non-null for a small minority of cases.  A superficially different manifestation of this is when two pieces of information are identical in all but a small minority of cases.  This can be easily mapped to the previous description by defining a null in one column to mean that its contents should be obtained from those of another column.  A further variant of this is when one piece of information is a simple function of another one in all but a small minority of cases.\n(BTW, I vaguely recall that RDb theorists have a technical term for this particular design issue, but I don't remember it.)\nIn all these cases, the design choice, at least according to RDb's 101, is between including a column in the table that will be NULL most of the time, or defining a second auxiliary column that references the first one and holds the non-redundant information for the minority of cases for which this is necessary (and maybe define a VIEW that includes all the columns).\nBut for me it is a frequent occurrence that my quaint and simple RDb's 101 reasoning doesn't really apply for PostgreSQL.  
Basically, Pg is too smart for it!  For example, does a large proportion of NULLs really imply a lot of wasted space?  Maybe this is true for fixed-length data types, but what about for type TEXT or VARCHAR?\nJust to be concrete, consider the case of a customers database for some home shopping website.  Suppose that, as it happens, for the majority of this site's customers, the shipping and billing addresses are identical.  Or consider the scenario of a company in which, for most employees, the email address can be readily computed from the first and last name using the rule First M. Last => [email protected], but the company allows some flexibility for special cases (e.g. for people like Yasuhiro Tanaka who's known to everyone by his nickname, Yaz, the email is [email protected] because hardly anyone remembers or even knows his full name.)\nWhat's your schema design approach for such situations?  How would you go about deciding whether the number of exceptional cases is small enough to warrant a second table?  Of course, one could do a systematic profiling of various possible scenarios, but as a first approximation what's your rule-of-thumb?\nTIA!Kynn", "msg_date": "Fri, 14 Mar 2008 14:05:50 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "The \"many nulls\" problem" }, { "msg_contents": "Kynn,\n\nhave you seen contrib/hstore ? You can have one table with common attributes\nand hide others in hstore\n\nOleg\nOn Fri, 14 Mar 2008, Kynn Jones wrote:\n\n> It often happens that a particular pieces of information is non-null for a\n> small minority of cases. A superficially different manifestation of this is\n> when two pieces of information are identical in all but a small minority of\n> cases. This can be easily mapped to the previous description by defining a\n> null in one column to mean that its contents should be obtained from those\n> of another column. A further variant of this is when one piece of\n> information is a simple function of another one in all but a small minority\n> of cases.\n>\n> (BTW, I vaguely recall that RDb theorists have a technical term for this\n> particular design issue, but I don't remember it.)\n>\n> In all these cases, the design choice, at least according to RDb's 101, is\n> between including a column in the table that will be NULL most of the time,\n> or defining a second auxiliary column that references the first one and\n> holds the non-redundant information for the minority of cases for which this\n> is necessary (and maybe define a VIEW that includes all the columns).\n>\n> But for me it is a frequent occurrence that my quaint and simple RDb's 101\n> reasoning doesn't really apply for PostgreSQL. Basically, Pg is too smart\n> for it! For example, does a large proportion of NULLs really imply a lot of\n> wasted space? Maybe this is true for fixed-length data types, but what\n> about for type TEXT or VARCHAR?\n>\n> Just to be concrete, consider the case of a customers database for some home\n> shopping website. Suppose that, as it happens, for the majority of this\n> site's customers, the shipping and billing addresses are identical. Or\n> consider the scenario of a company in which, for most employees, the email\n> address can be readily computed from the first and last name using the rule\n> First M. Last => [email protected], but the company allows some\n> flexibility for special cases (e.g. 
for people like Yasuhiro Tanaka who's\n> known to everyone by his nickname, Yaz, the email is\n> [email protected] hardly anyone remembers or even knows his\n> full name.)\n>\n> What's your schema design approach for such situations? How would you go\n> about deciding whether the number of exceptional cases is small enough to\n> warrant a second table? Of course, one could do a systematic profiling of\n> various possible scenarios, but as a first approximation what's your\n> rule-of-thumb?\n>\n> TIA!\n>\n> Kynn\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 14 Mar 2008 21:59:05 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The \"many nulls\" problem" }, { "msg_contents": "Kynn Jones wrote:\n> In all these cases, the design choice, at least according to RDb's 101, is\n> between including a column in the table that will be NULL most of the time,\n> or defining a second auxiliary column that references the first one and\n> holds the non-redundant information for the minority of cases for which this\n> is necessary (and maybe define a VIEW that includes all the columns).\n> \n> But for me it is a frequent occurrence that my quaint and simple RDb's 101\n> reasoning doesn't really apply for PostgreSQL. Basically, Pg is too smart\n> for it! For example, does a large proportion of NULLs really imply a lot of\n> wasted space? \n\nIt depends. If there's *any* NULLs on a row, a bitmap of the NULLs is \nstored in the tuple header. Without NULL bitmap, the tuple header is 23 \nbytes, and due to memory alignment, it's always rounded up to 24 bytes. \nThat one padding byte is \"free\" for use as NULL bitmap, so it happens \nthat if your table has eight columns or less, NULLs will take no space \nat all. If you have more columns than that, if there's *any* NULLs on a \nrow you'll waste a whole 4 or 8 bytes (or more if you have a very wide \ntable and go beyond the next 4/8 byte boundary), depending on whether \nyou're on a 32-bit or 64-bit platform, regardless of how many NULLs \nthere is.\n\nThat's on 8.3. 8.2 and earlier versions are similar, but the tuple \nheader used to be 27 bytes instead of 23, so you have either one or five \n\"free\" bytes, depending on architecture.\n\nIn any case, that's pretty good compared to many other RDBMSs.\n\n > Maybe this is true for fixed-length data types, but what\n > about for type TEXT or VARCHAR?\n\nDatatype doesn't make any difference. Neither does fixed vs variable length.\n\n> What's your schema design approach for such situations? How would you go\n> about deciding whether the number of exceptional cases is small enough to\n> warrant a second table? 
Of course, one could do a systematic profiling of\n> various possible scenarios, but as a first approximation what's your\n> rule-of-thumb?\n\n From performance point of view, I would go with a single table with \nNULL fields on PostgreSQL.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 14 Mar 2008 19:46:00 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The \"many nulls\" problem" }, { "msg_contents": "On Fri, Mar 14, 2008 at 3:46 PM, Heikki Linnakangas <[email protected]>\nwrote:\n>\n> <tons of useful info snipped>\n>\n From performance point of view, I would go with a single table with\n> NULL fields on PostgreSQL.\n\n\nWow. I'm so glad I asked! Thank you very much!\n\nKynn\n\nOn Fri, Mar 14, 2008 at 3:46 PM, Heikki Linnakangas <[email protected]> wrote:\n<tons of useful info snipped>  From performance point of view, I would go with a single table with\n\nNULL fields on PostgreSQL.Wow.  I'm so glad I asked!  Thank you very much!Kynn", "msg_date": "Fri, 14 Mar 2008 16:05:20 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The \"many nulls\" problem" }, { "msg_contents": "On Fri, Mar 14, 2008 at 2:59 PM, Oleg Bartunov <[email protected]> wrote:\n\n> have you seen contrib/hstore ? You can have one table with common\n> attributes\n> and hide others in hstore\n>\n\nThat's interesting. I'll check it out. Thanks!\n\nKynn\n\nOn Fri, Mar 14, 2008 at 2:59 PM, Oleg Bartunov <[email protected]> wrote:\nhave you seen contrib/hstore ? You can have one table with common attributes\nand hide others in hstoreThat's interesting.  I'll check it out.  Thanks!Kynn", "msg_date": "Fri, 14 Mar 2008 16:06:43 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The \"many nulls\" problem" }, { "msg_contents": "On Fri, 14 Mar 2008, Kynn Jones wrote:\n\n> On Fri, Mar 14, 2008 at 2:59 PM, Oleg Bartunov <[email protected]> wrote:\n>\n>> have you seen contrib/hstore ? You can have one table with common\n>> attributes\n>> and hide others in hstore\n>>\n>\n> That's interesting. I'll check it out. Thanks!\n\nactually, hstore was designed specially for this kind of problems.\n\n\n>\n> Kynn\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 14 Mar 2008 23:30:46 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The \"many nulls\" problem" } ]
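A minimal sketch of the contrib/hstore approach Oleg suggests in the thread above, assuming hstore has been installed from contrib; the table and column names are invented for illustration, not taken from the thread:

-- Common, always-present attributes stay as ordinary columns;
-- rarely-populated ones go into a single hstore column.
CREATE TABLE customer (
    id           serial PRIMARY KEY,
    name         text NOT NULL,
    billing_addr text NOT NULL,
    extras       hstore          -- e.g. shipping address only when it differs
);

-- Store one of the exceptional cases.
INSERT INTO customer (name, billing_addr, extras)
VALUES ('Yasuhiro Tanaka', '1 Some St.',
        'shipping_addr=>"2 Other St.", email=>"[email protected]"');

-- Fall back to the billing address when no shipping address is stored.
SELECT name,
       COALESCE(extras -> 'shipping_addr', billing_addr) AS ship_to
FROM customer;

The -> operator returns NULL for a missing key, so the COALESCE gives the same "default unless overridden" behaviour the original post describes, without a wide, mostly-NULL table.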
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 14 Mar 2008 17:00:21 -0800\r\nVinubalaji Gopal <[email protected]> wrote:\r\n\r\n> Hi all,\r\n> I have been searching for the best way to run maintenance scripts\r\n> which does a vacuum, analyze and deletes some old data. Whenever the\r\n> maintenance script runs - mainly the pg_maintenance --analyze script -\r\n> it slows down postgresql inserts and I want to avoid that. The system\r\n> is under constant load and I am not interested in the time taken to\r\n> vacuum. Is there a utility or mechanism in postgresql which helps in\r\n> reducing priority of maintenance queries?\r\n\r\nYou can use parameters such as vacuum_cost_delay to help this... see\r\nthe docs:\r\n\r\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-autovacuum.html\r\n\r\n> \r\n> Is writing a postgresql C function and setting the priority of process\r\n> the only way to change the priority of the maintenance script or is\r\n> there a better way.\r\n> http://weblog.bignerdranch.com/?p=11\r\n> \r\n> I tried using the nice command (Linux system) on the maintenance\r\n> script\r\n> - it did not have any effect - guess it does not change the niceness\r\n> of the postgresql vacuum process.\r\n> \r\n> (I am running Postgresql 8.0 on a Linux)\r\n\r\nIf you are truly running 8.0 and not something like 8.0.15 vacuum is\r\nthe least of your worries.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH2xkkATb/zqfZUUQRAsFxAJ422xFUGNwJZZVS47SwM9HJEYrb/gCePESL\r\nYZFM27b93ylhy5TuE2MCcww=\r\n=2Zpp\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 14 Mar 2008 17:32:35 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "Hi all,\n I have been searching for the best way to run maintenance scripts\nwhich does a vacuum, analyze and deletes some old data. Whenever the\nmaintenance script runs - mainly the pg_maintenance --analyze script -\nit slows down postgresql inserts and I want to avoid that. The system is\nunder constant load and I am not interested in the time taken to vacuum.\nIs there a utility or mechanism in postgresql which helps in reducing\npriority of maintenance queries?\n\nIs writing a postgresql C function and setting the priority of process\nthe only way to change the priority of the maintenance script or is\nthere a better way.\nhttp://weblog.bignerdranch.com/?p=11\n\nI tried using the nice command (Linux system) on the maintenance script\n- it did not have any effect - guess it does not change the niceness of\nthe postgresql vacuum process.\n\n(I am running Postgresql 8.0 on a Linux)\n\n--\nVinu\n", "msg_date": "Fri, 14 Mar 2008 17:00:21 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "best way to run maintenance script" }, { "msg_contents": "Vinubalaji Gopal <[email protected]> writes:\n>> If you are truly running 8.0 and not something like 8.0.15 vacuum is\n>> the least of your worries.\n\n> Its 8.0.4. \n\nThat's only a little bit better. 
Read about all the bug fixes you're\nmissing at\nhttp://www.postgresql.org/docs/8.0/static/release.html\nand then consider updating ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Mar 2008 21:37:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script " }, { "msg_contents": "Hi Joshua,\n\n> You can use parameters such as vacuum_cost_delay to help this... see\n> the docs:\n> \n> http://www.postgresql.org/docs/8.3/static/runtime-config-autovacuum.html\n\nI am checking it out. Seems to be a nice option for vacuum - but wish\nthere was a way to change the delete priority or I will try to use the C\nbased priority hack.\n\n\n> If you are truly running 8.0 and not something like 8.0.15 vacuum is\n> the least of your worries.\nIts 8.0.4. \n\nThanks.\n\n--\nVinu\n", "msg_date": "Fri, 14 Mar 2008 17:51:52 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 14 Mar 2008 17:51:52 -0800\r\nVinubalaji Gopal <[email protected]> wrote:\r\n\r\n> Hi Joshua,\r\n> \r\n> > You can use parameters such as vacuum_cost_delay to help this... see\r\n> > the docs:\r\n> > \r\n> > http://www.postgresql.org/docs/8.3/static/runtime-config-autovacuum.html\r\n> \r\n> I am checking it out. Seems to be a nice option for vacuum - but wish\r\n> there was a way to change the delete priority or I will try to use\r\n> the C based priority hack.\r\n\r\nI think you will find if you do it the right way, which is to say the\r\nway that it is meant to be done with the configurable options, your\r\nlife will be a great deal more pleasant than some one off hack.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH2zG0ATb/zqfZUUQRAtmeAKCpKUbZP63qmiAPI6x4i9sLaf3LfwCfTPwb\r\nmdS3L7JzlwarEjuu3WGFdaE=\r\n=V7wn\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 14 Mar 2008 19:17:22 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "\nOn Fri, 2008-03-14 at 18:37 -0700, Tom Lane wrote:\n> That's only a little bit better. Read about all the bug fixes you're\n\nSure - will eventually upgrade it sometime - but it has to wait for\nnow :(\n\n\n--\nVinu\n", "msg_date": "Fri, 14 Mar 2008 20:28:08 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "\n> \n> I think you will find if you do it the right way, which is to say the\n> way that it is meant to be done with the configurable options, your\n> life will be a great deal more pleasant than some one off hack.\n> \n\nyeah I agree. The pg_maintanence script which calls vacuum and analyze\nis the one of the thing that is causing more problems. I am trying out\nvarious vacuum options (vacuum_cost_limit, vacuum_cost_delay) and\nfinding it hard to understand the implications of the variables. 
What\nare the optimal values for the vacuum_* parameters - for a really active\ndatabase (writes at the rate of ~ 50 rows/seconds).\n\nI started with\nvacuum_cost_delay = 200\nvacuum_cost_limit = 400\n\nand that did not help much. \n\n--\nVinu\n\n", "msg_date": "Fri, 14 Mar 2008 21:07:45 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "\"Vinubalaji Gopal\" <[email protected]> writes:\n\n> On Fri, 2008-03-14 at 18:37 -0700, Tom Lane wrote:\n>> That's only a little bit better. Read about all the bug fixes you're\n>\n> Sure - will eventually upgrade it sometime - but it has to wait for\n> now :(\n\nWaiting for one of those bugs to bite you is a bad plan.\n\nWe're not talking about an upgrade to 8.1, 8.2, or 8.3. We're talking about\ntaking bug-fixes and security fixes for the release you're already using.\n\nNormally it's just a shutdown and immediate restart. There are exceptions\nlisted in the 8.0.6 release notes which would require a REINDEX but they don't\naffect most people.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Sat, 15 Mar 2008 12:46:04 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script" }, { "msg_contents": "Vinubalaji Gopal wrote:\n> I tried using the nice command (Linux system) on the maintenance script\n> - it did not have any effect - guess it does not change the niceness of\n> the postgresql vacuum process.\n\nYou are probably looking for the command ionice. nice only affects the CPU \npriority, and that is usually not the primary problem for vacuum. (And yes, \nyou need to nice the server process, not the client script.)\n", "msg_date": "Sun, 16 Mar 2008 19:11:48 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best way to run maintenance script" } ]
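A minimal sketch of the cost-based vacuum delay Joshua points to above. Both settings are per-session, so the maintenance script can throttle only its own VACUUM without touching postgresql.conf; the values and the table name below are illustrative assumptions, not tuned recommendations:

-- Run inside the maintenance script's connection only.
SET vacuum_cost_delay = 20;    -- sleep 20 ms each time the cost budget is used up
SET vacuum_cost_limit = 200;   -- cost budget accumulated between sleeps

VACUUM ANALYZE my_big_table;   -- hypothetical table name

RESET vacuum_cost_delay;
RESET vacuum_cost_limit;

Larger delay values throttle vacuum harder but make it take longer; the DELETE of old data is not affected by these settings, which is why the thread also discusses ionice for I/O priority.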
[ { "msg_contents": "I am going to embarkon building a music library using apache,\npostgresql and php. What is the best way to store the music files?\nWhich file type should I use?\n", "msg_date": "Sun, 16 Mar 2008 02:11:48 -0400", "msg_from": "Rich <[email protected]>", "msg_from_op": true, "msg_subject": "What is the best way to storage music files in Postgresql" }, { "msg_contents": "Rich wrote:\n> I am going to embarkon building a music library using apache,\n> postgresql and php. What is the best way to store the music files?\n> Which file type should I use?\n\nIn Postgres, its all just binary data. It's entirely up to you which particular format you use. mp2, mp3 mp4, wmv, avi, whatever, it's all the same to Postgres.\n\nA better question is: Should you store the binary data in Postgres itself, or keep it in files and only store the filenames? The Postgres archives have several discussions of this topic, and the answer is, \"it depends.\"\n\nCraig\n\n", "msg_date": "Sat, 15 Mar 2008 23:20:08 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "Rich wrote:\n> I am going to embarkon building a music library using apache,\n> postgresql and php. What is the best way to store the music files?\n\nYour options are either to use a BLOB within the database or to store \npaths to normal files in the file system in the database. I suspect \nusing normal files will make backup and management a great deal easier \nthan using in-database BLOBs, so personally I'd do it that way.\n\nStoring the audio files in the database does make it easier to keep the \ndatabase and file system backups in sync, but I'm not really sure that's \nworth the costs.\n\nI'm sure that what you're doing has been done many times before, though, \nso even if you're not going to use one of the existing options you might \nat least want to have a look at how they work.\n\n> Which file type should I use?\n\nI'm not sure I understand this question. Are you asking which audio \ncompression codec and audio container file type (like \"mp3\", \"aac\", etc) \n you should use? If so, this is really not the right place to ask that.\n\nDo you mean which file /system/ ?\n\n--\nCraig Ringer\n\n", "msg_date": "Sun, 16 Mar 2008 15:25:39 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "> > I am going to embarkon building a music library using apache,\n> > postgresql and php. What is the best way to store the music files?\n>\n> Your options are either to use a BLOB within the database or to store\n> paths to normal files in the file system in the database. I suspect\n> using normal files will make backup and management a great deal easier\n> than using in-database BLOBs, so personally I'd do it that way.\n\nI discussed something like this with some co-workers recently, and\nhere's what I had to say. 
Not all of these apply to the original\nmessage, but they are things to consider when marrying a database to a\nfile storage system.\n\nStoring the files in the database as BLOBs:\nPros:\n- The files can always be seen by the database system as long as it's\nup (there's no dependence on an external file system).\n- There is one set of locking mechanisms, meaning that the file\noperations can be atomic with the database operations.\n- There is one set of permissions to deal with.\nCons:\n- There is almost no way to access files outside of the database. If\nthe database goes down, you are screwed.\n- If you don't make good use of tablespaces and put blobs on a\nseparate disk system, the disk could thrash going between data and\nblobs, affecting performance.\n- There are stricter limits for PostgreSQL blobs (1 GB size limits, I've read).\n\nStoring files externally, storing pathnames in the database:\nPros:\n- You can access and manage files from outside the database and\npossibly using different interfaces.\n- There's a lot less to store directly in the database.\n- You can use existing file-system permissions, mechanisms, and limits.\nCons:\n- You are dealing with two storage systems and two different locking\nsystems which are unlikely to play nice with each other. Transactions\nare not guaranteed to be atomic (e.g. a database rollback will not\nrollback a file system operation, a commit will not guarantee that\ndata in a file will stay).\n- The file system has to be seen by the database system and any remote\nclients that wish to use your application, meaning that a networked FS\nis likely to be used (depending on how many clients you have and how\nyou like to separate services), with all the fun that comes from\nadministering one of those. Note that this one in particular really\nonly applies to enterprise-level installations, not smaller\ninstallations like the original poster's.\n- If you don't put files on a separate disk-system or networked FS,\nyou can get poor performance from the disk thrashing between the\ndatabase and the files.\n\nThere are a couple main points:\n1. The favorite answer in computing, \"it depends\", applies here. What\nyou decide depends on your storage system, your service and\ninstallation policies, and how important fully atomic transactions are\nto you.\n2. If you want optimal performance out of either of these basic\nmodels, you should make proper use of separate disk systems. I have no\nidea which one is faster (it depends, I'm sure) nor do I have much of\nan idea of how to benchmark this properly.\n\nPeter\n", "msg_date": "Mon, 17 Mar 2008 13:01:06 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Mon, Mar 17, 2008 at 2:01 PM, Peter Koczan <[email protected]> wrote:\n> > > I am going to embarkon building a music library using apache,\n> > > postgresql and php. What is the best way to store the music files?\n> >\n> > Your options are either to use a BLOB within the database or to store\n> > paths to normal files in the file system in the database. I suspect\n> > using normal files will make backup and management a great deal easier\n> > than using in-database BLOBs, so personally I'd do it that way.\n>\n> I discussed something like this with some co-workers recently, and\n> here's what I had to say. 
Not all of these apply to the original\n> message, but they are things to consider when marrying a database to a\n> file storage system.\n>\n> Storing the files in the database as BLOBs:\n> Pros:\n> - The files can always be seen by the database system as long as it's\n> up (there's no dependence on an external file system).\n> - There is one set of locking mechanisms, meaning that the file\n> operations can be atomic with the database operations.\n> - There is one set of permissions to deal with.\n> Cons:\n> - There is almost no way to access files outside of the database. If\n> the database goes down, you are screwed.\n> - If you don't make good use of tablespaces and put blobs on a\n> separate disk system, the disk could thrash going between data and\n> blobs, affecting performance.\n> - There are stricter limits for PostgreSQL blobs (1 GB size limits, I've read).\n>\n> Storing files externally, storing pathnames in the database:\n> Pros:\n> - You can access and manage files from outside the database and\n> possibly using different interfaces.\n> - There's a lot less to store directly in the database.\n> - You can use existing file-system permissions, mechanisms, and limits.\n> Cons:\n> - You are dealing with two storage systems and two different locking\n> systems which are unlikely to play nice with each other. Transactions\n> are not guaranteed to be atomic (e.g. a database rollback will not\n> rollback a file system operation, a commit will not guarantee that\n> data in a file will stay).\n> - The file system has to be seen by the database system and any remote\n> clients that wish to use your application, meaning that a networked FS\n> is likely to be used (depending on how many clients you have and how\n> you like to separate services), with all the fun that comes from\n> administering one of those. Note that this one in particular really\n> only applies to enterprise-level installations, not smaller\n> installations like the original poster's.\n> - If you don't put files on a separate disk-system or networked FS,\n> you can get poor performance from the disk thrashing between the\n> database and the files.\n>\n> There are a couple main points:\n> 1. The favorite answer in computing, \"it depends\", applies here. What\n> you decide depends on your storage system, your service and\n> installation policies, and how important fully atomic transactions are\n> to you.\n> 2. If you want optimal performance out of either of these basic\n> models, you should make proper use of separate disk systems. I have no\n> idea which one is faster (it depends, I'm sure) nor do I have much of\n> an idea of how to benchmark this properly.\n>\n> Peter\n> It seems to me as such a database gets larger, it will become much harder to manage with the 2 systems. I am talking mostly about music. So each song should not get too large. I have read alot on this list and on other resources and there seems to be leanings toward 1+0 raids for storage. It seems to the most flexible when it comes to speed, redundancy and recovery time. I do want my database to be fully atomic. I think that is important as this database grows. 
Are my assumptions wrong?\n", "msg_date": "Mon, 17 Mar 2008 14:39:32 -0400", "msg_from": "Rich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Sun, Mar 16, 2008 at 2:25 AM, Craig Ringer\n<[email protected]> wrote:\n> Rich wrote:\n> > I am going to embarkon building a music library using apache,\n> > postgresql and php. What is the best way to store the music files?\n>\n> Your options are either to use a BLOB within the database or to store\n> paths to normal files in the file system in the database. I suspect\n> using normal files will make backup and management a great deal easier\n> than using in-database BLOBs, so personally I'd do it that way.\n>\n> Storing the audio files in the database does make it easier to keep the\n> database and file system backups in sync, but I'm not really sure that's\n> worth the costs.\nWhat costs are to speaking of?\n>\n> I'm sure that what you're doing has been done many times before, though,\n> so even if you're not going to use one of the existing options you might\n> at least want to have a look at how they work.\n>\n>\n> > Which file type should I use?\n>\n> I'm not sure I understand this question. Are you asking which audio\n> compression codec and audio container file type (like \"mp3\", \"aac\", etc)\n> you should use? If so, this is really not the right place to ask that.\n>\n> Do you mean which file /system/ ?\n>\n> --\n> Craig Ringer\n>\n>\n", "msg_date": "Mon, 17 Mar 2008 14:40:05 -0400", "msg_from": "Rich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "> It seems to me as such a database gets larger, it will become much harder to manage with the 2 systems. I am talking mostly about music. So each song should not get too large.\n\nI was just talking about points to consider in general. Getting to\nyour specific situation...\n\nAs far as BLOBs vs. file pointers. Test it out, use what you're most\ncomfortable using.\n\nI would not set up a networked file system for the sole purpose of\nmanaging and storing files a database will point to. If you already\nhave access to a networked file system, consider that as an option,\nbut don't add more work for yourself if you don't have to. Many\napplications I work on use the database to store pathnames while the\nfiles themselves are stored in a networked file system. It's honestly\nnot a huge pain to manage this if it's already available, but as I\nmentioned before, there are caveats.\n\nAlso, in my experiences, the amount of management you do in a database\ndoesn't directly depending on the amount of data you put in. In other\nwords, your database shouldn't become much more difficult to manage\nover time if all you are doing is adding more rows to tables.\n\n> I have read alot on this list and on other resources and there seems to be leanings toward 1+0 raids for storage. It seems to the most flexible when it comes to speed, redundancy and recovery time. I do want my database to be fully atomic. I think that is important as this database grows. Are my assumptions wrong?\n>\n\nAs far as RAID levels go, RAID 10 is usually optimal for databases, so\nyour assumptions are correct. The extra cost for disks, I believe, is\npaid off by the advantages you mentioned, at least for typical\ndatabase-related workloads. 
RAID 0 doesn't allow for any disaster\nrecovery, RAID 1 is ok as long as you can handle having only 2 disks\navailable, and RAID 5 and RAID 6 are just huge pains and terribly slow\nfor writes.\n\nNote that you should go for a battery-backup if you use hardware RAID.\n\nHope this helps.\n\nPeter\n", "msg_date": "Mon, 17 Mar 2008 22:26:04 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On 18/03/2008, Peter Koczan <[email protected]> wrote:\n\n\n> available, and RAID 5 and RAID 6 are just huge pains and terribly slow\n> for writes.\nRAID 5 and RAID 6 are just huge pains and terribly slow for writes\nwith small numbers of spindles.... ;}\n\nIn my testing I found that once you hit 10 spindles in a RAID5 the\ndifferences between it and a RAID10 started to become negligible\n(around 6% slower on writes average with 10 runs of bonnie++ on\n10 spindles) while the read speed (if you're doing similar amounts\nof reads & writes it's a fair criterion) were in about the 10% region\nfaster. With 24 spindles I couldn't see any difference at all. Those\nwere 73GB 15K SCAs, btw, and the SAN connected via 2GB fibre.\n\n\n> Peter\nCheers,\nAndrej\n\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Tue, 18 Mar 2008 22:13:47 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "Andrej Ricnik-Bay wrote:\n\n> In my testing I found that once you hit 10 spindles in a RAID5 the\n> differences between it and a RAID10 started to become negligible\n> (around 6% slower on writes average with 10 runs of bonnie++ on\n> 10 spindles) while the read speed (if you're doing similar amounts\n> of reads & writes it's a fair criterion) were in about the 10% region\n> faster. With 24 spindles I couldn't see any difference at all. Those\n> were 73GB 15K SCAs, btw, and the SAN connected via 2GB fibre.\n\nIsn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\ndouble disk failure (such as during the load imposed by rebuild onto a\nspare) ?\n\nI guess if you have good backups - as you must - it's not that big a\ndeal, but I'd be pretty nervous with anything less than RAID 6 or RAID 10 .\n\n--\nCraig Ringer\n", "msg_date": "Tue, 18 Mar 2008 18:59:20 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On 18/03/2008, Craig Ringer <[email protected]> wrote:\n> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n> double disk failure (such as during the load imposed by rebuild onto a\n> spare) ?\n\nI never said that we actually USED that set-up. I just said\nI did extensive testing with varied RAID-setups. 
;} We did go\nwith the 10 in the end because of that very consideration.\n\nIt's just that the mantra \"RAID5 = slow writes\" isn't quite true.\n\n\nCheers,\nAndrej\n\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Wed, 19 Mar 2008 06:44:48 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:\n\n> \n> On 18/03/2008, Craig Ringer <[email protected]> wrote:\n>> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n>> double disk failure (such as during the load imposed by rebuild onto a\n>> spare) ?\n\nthat's why you should use raid6 (allowing for dual failures)\n\nDavid Lang\n\n", "msg_date": "Tue, 18 Mar 2008 11:04:37 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in\n Postgresql" }, { "msg_contents": "On Tue, 2008-03-18 at 11:04 -0700, [email protected] wrote:\n\n> On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:\n> \n> > \n> > On 18/03/2008, Craig Ringer <[email protected]> wrote:\n> >> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n> >> double disk failure (such as during the load imposed by rebuild onto a\n> >> spare) ?\n> \n> that's why you should use raid6 (allowing for dual failures)\n\n\nAnd, if possible, try to use drives from a couple of different batches.\nI wouldn't go crazy trying to do this, but sometimes a combination of\ndrives from two or three batches can make a difference (though not\nalways).\n\nAlso, a very informative read:\nhttp://research.google.com/archive/disk_failures.pdf\nIn short, best thing to do is watch SMART and be prepared to try and\nswap a drive out before it fails completely. :)\n\n\n\n\n\n\n\nOn Tue, 2008-03-18 at 11:04 -0700, [email protected] wrote:\n\n\nOn Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:\n\n> \n> On 18/03/2008, Craig Ringer <[email protected]> wrote:\n>> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n>> double disk failure (such as during the load imposed by rebuild onto a\n>> spare) ?\n\nthat's why you should use raid6 (allowing for dual failures)\n\n\n\nAnd, if possible, try to use drives from a couple of different batches. I wouldn't go crazy trying to do this, but sometimes a combination of drives from two or three batches can make a difference (though not always).\n\nAlso, a very informative read: http://research.google.com/archive/disk_failures.pdf\nIn short, best thing to do is watch SMART and be prepared to try and swap a drive out before it fails completely. 
:)", "msg_date": "Tue, 18 Mar 2008 11:52:16 -0700", "msg_from": "Gregory Youngblood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in\n\tPostgresql" }, { "msg_contents": "<[email protected]> writes:\n\n> On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:\n>\n>>\n>> On 18/03/2008, Craig Ringer <[email protected]> wrote:\n>>> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n>>> double disk failure (such as during the load imposed by rebuild onto a\n>>> spare) ?\n>\n> that's why you should use raid6 (allowing for dual failures)\n\nYou can have as many parity drives as you want with RAID 5 too.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Tue, 18 Mar 2008 19:46:53 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Tue, 18 Mar 2008, Gregory Stark wrote:\n\n> <[email protected]> writes:\n>\n>> On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:\n>>\n>>>\n>>> On 18/03/2008, Craig Ringer <[email protected]> wrote:\n>>>> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a\n>>>> double disk failure (such as during the load imposed by rebuild onto a\n>>>> spare) ?\n>>\n>> that's why you should use raid6 (allowing for dual failures)\n>\n> You can have as many parity drives as you want with RAID 5 too.\n\nyou can? I've never seen a raid 5 setup with more then a single parity \ndirve (or even the option of having more then one drives worth of \nredundancy). you can have hot-spare drives, but thats a different thing.\n\nwhat controller/software lets you do this?\n\nDavid Lang\n", "msg_date": "Tue, 18 Mar 2008 13:01:28 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "Gregory Youngblood wrote:\n> Also, a very informative read:\n> http://research.google.com/archive/disk_failures.pdf\n> In short, best thing to do is watch SMART and be prepared to try and\n> swap a drive out before it fails completely. :)\n> \nI currently have four brand new 1TB disks (7200RPM SATA - they're for \nour backup server). Two of them make horrible clicking noises - they're \nrapidly parking and unparking or doing constant seeks. One of those two \nalso spins up very loudly, and on spin down rattles and buzzes.\n\nTheir internal SMART \"health check\" reports the problem two to be just \nfine, and both pass a short SMART self test (smartctl -d ata -t short). \nBoth have absurdly huge seek_error_rate values, but the SMART thresholds \nsee nothing wrong with this.\n\nThe noisy spin down one is so defective that I can't even write data to \nit successfully, and the other problem disk has regular I/O errors and \nfails an extended SMART self test (the short test fails).\n\n\nI see this sort of thing regularly. Vendors are obviously setting the \nSMART health thresholds so that there's absolutely no risk of reporting \nan issue with a working drive, and in the process making it basically \nuseless for detecting failing or faulty drives.\n\nI rely on manual examination of the vendor attributes like the seek \nerror rate, ECC recovered sectors, offline uncorrectable sectors \n(usually a REALLY bad sign if this grows), etc combined with regular \nextended SMART tests (which do a surface scan). 
Just using SMART - say, \nthe basic health check - really isn't enough.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 19 Mar 2008 07:44:57 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in\tPostgresql" }, { "msg_contents": "[email protected] wrote:\n> you can? I've never seen a raid 5 setup with more then a single parity \n> dirve (or even the option of having more then one drives worth of \n> redundancy). you can have hot-spare drives, but thats a different thing.\n>\nWith RAID 4, where the \"parity drives\" are in fact dedicated to parity \ninformation, the controller could just store the parity data mirrored on \nmore than one drive. Unfortunately write performance on RAID 4 is \nabsolutely horrible, and a second or third parity disk would not help \nwith that.\n\nI suppose there's nothing stopping a controller adding a second disk's \nworth of duplicate parity information when striping a four or more disk \nRAID 5 array, but I thought that's basically what RAID 6 was.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 19 Mar 2008 07:52:32 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Wed, 19 Mar 2008, Craig Ringer wrote:\n\n> [email protected] wrote:\n>> you can? I've never seen a raid 5 setup with more then a single parity \n>> dirve (or even the option of having more then one drives worth of \n>> redundancy). you can have hot-spare drives, but thats a different thing.\n>> \n> With RAID 4, where the \"parity drives\" are in fact dedicated to parity \n> information, the controller could just store the parity data mirrored on more \n> than one drive. Unfortunately write performance on RAID 4 is absolutely \n> horrible, and a second or third parity disk would not help with that.\n>\n> I suppose there's nothing stopping a controller adding a second disk's worth \n> of duplicate parity information when striping a four or more disk RAID 5 \n> array, but I thought that's basically what RAID 6 was.\n\njust duplicating the Raid 4 or 5 pairity information will not help you if \nthe parity drive is not one of the drives that fail.\n\nraid 6 uses a different pairity algorithm so that any two drives in the \narray can fail with no data loss.\n\neven this isn't completely error proof. I just went through a scare with a \n15 disk array where it reported 3 dead drives after a power outage. one of \nthe dead drives ended up being the hot-spare, and another drive that acted \nup worked well enough to let me eventually recover all the data (seek \nerrors), but it was a very scary week while I worked through this.\n\nDavid Lang\n", "msg_date": "Tue, 18 Mar 2008 16:09:59 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "\n<[email protected]> writes:\n\n> On Tue, 18 Mar 2008, Gregory Stark wrote:\n>\n>> You can have as many parity drives as you want with RAID 5 too.\n>\n> you can? I've never seen a raid 5 setup with more then a single parity dirve\n> (or even the option of having more then one drives worth of redundancy). you\n> can have hot-spare drives, but thats a different thing.\n>\n> what controller/software lets you do this?\n\nHm, some research shows I may have completely imagined this. 
I don't see why\nyou couldn't but I can't find any evidence that this feature exists. I could\nhave sworn I've seen it before though.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Wed, 19 Mar 2008 00:30:18 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "[email protected] wrote:\n\n> just duplicating the Raid 4 or 5 pairity information will not help you\n> if the parity drive is not one of the drives that fail.\n\nGood point - and no doubt why nothing supports extra disks worth of\nparity on RAID 5, which would be entirely useless (still only protecting\nagainst a 1-disk failure but wasting more space).\n\nExcept, apparently, the earlier poster's RAID 5 controller that DOES\nsupport extra parity disks.\n\nIt must just be hot spares, nothing else makes any sense.\n\n> even this isn't completely error proof. I just went through a scare with\n> a 15 disk array where it reported 3 dead drives after a power outage.\n> one of the dead drives ended up being the hot-spare, and another drive\n> that acted up worked well enough to let me eventually recover all the\n> data (seek errors), but it was a very scary week while I worked through\n> this.\n\nAs file systems can be corrupted, files deleted, etc, I try to make sure\nthat all my data is sufficiently well backed up that a week's worth of\nrecovery effort is never needed. Dead array? Rebuild and restore from\nbackups. Admittedly this practice has arisen because of a couple of\nscares much like you describe, but at least now it happens.\n\nI even test the backups ;-)\n\nBig SATA 7200rpm disks are so cheap compared to high performance SAS or\neven 10kRPM SATA disks that it seems like a really bad idea not to have\na disk-based backup server with everything backed up quick to hand.\n\nFor that reason I'm absolutely loving PostgreSQL's archive_wal feature\nand support for a warm spare server. I can copy the WAL files to another\nmachine and immediately restore them there (providing a certain level of\ninherent testing) as well as writing them to tape. It's absolutely\nwonderful. Sure, the warm spare will run like a man in knee-deep mud,\nbut it'll do in an emergency.\n\nThe existing \"database\" used by the app I'm working to replace is an\nISAM-based single host shared-file DB where all the user processes\naccess the DB directly. Concurrency is only supported through locking,\nthere's no transaction support, referential integrity checking, data\ntyping, no SQL of any sort, AND it's prone to corruption and data loss\nif a user process is killed. User processes are killed when the user's\nterminal is closed or loses its connection. Backups are only possible\nonce a day when all users are logged off. It's not an application where\nlosing half a day of data is fun. On top of all that it runs on SCO\nOpenServer 5.0.5 (which has among other things the most broken C\ntoolchain I've ever seen).\n\nSo ... 
hooray for up-to-date, well tested backups and how easy\nPostgreSQL makes them.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 19 Mar 2008 09:31:52 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files in Postgresql" }, { "msg_contents": "On Wed, 2008-03-19 at 07:44 +0900, Craig Ringer wrote:\n\n> Gregory Youngblood wrote:\n> > Also, a very informative read:\n> > http://research.google.com/archive/disk_failures.pdf\n> > In short, best thing to do is watch SMART and be prepared to try and\n> > swap a drive out before it fails completely. :)\n> > \n> I currently have four brand new 1TB disks (7200RPM SATA - they're for \n> our backup server). Two of them make horrible clicking noises - they're \n> rapidly parking and unparking or doing constant seeks. One of those two \n> also spins up very loudly, and on spin down rattles and buzzes.\n> \n> Their internal SMART \"health check\" reports the problem two to be just \n> fine, and both pass a short SMART self test (smartctl -d ata -t short). \n> Both have absurdly huge seek_error_rate values, but the SMART thresholds \n> see nothing wrong with this.\n\n\n-->8 snip 8<--\n\nIn that Google report, one of their conclusions was that after the first\nscan error drives were 39 times more likely to fail within the next 60\ndays. And, first errors in reallocations, etc. also correlated to higher\nfailure probabilities.\n\nIn my way of thinking, and what I was referring to above, was using\nthose error conditions to identify drives to change before the reported\ncomplete failures. Yes, that will mean changing drives before SMART\nactually says there is a full failure, and you may have to fight to get\na drive replaced under warranty when you do so, but you are protecting\nyour data.\n\nI agree with you completely that waiting for SMART to actually indicate\na true failure is pointless due to the thresholds set by mfrs. But using\nSMART for early warning signs still has value IMO.\n\n\n\n\n\n\n\nOn Wed, 2008-03-19 at 07:44 +0900, Craig Ringer wrote:\n\n\nGregory Youngblood wrote:\n> Also, a very informative read:\n> http://research.google.com/archive/disk_failures.pdf\n> In short, best thing to do is watch SMART and be prepared to try and\n> swap a drive out before it fails completely. :)\n> \nI currently have four brand new 1TB disks (7200RPM SATA - they're for \nour backup server). Two of them make horrible clicking noises - they're \nrapidly parking and unparking or doing constant seeks. One of those two \nalso spins up very loudly, and on spin down rattles and buzzes.\n\nTheir internal SMART \"health check\" reports the problem two to be just \nfine, and both pass a short SMART self test (smartctl -d ata -t short). \nBoth have absurdly huge seek_error_rate values, but the SMART thresholds \nsee nothing wrong with this.\n\n\n\n-->8 snip 8<--\n\nIn that Google report, one of their conclusions was that after the first scan error drives were 39 times more likely to fail within the next 60 days. And, first errors in reallocations, etc. also correlated to higher failure probabilities.\n\nIn my way of thinking, and what I was referring to above, was using those error conditions to identify drives to change before the reported complete failures. 
Yes, that will mean changing drives before SMART actually says there is a full failure, and you may have to fight to get a drive replaced under warranty when you do so, but you are protecting your data.\n\nI agree with you completely that waiting for SMART to actually indicate a true failure is pointless due to the thresholds set by mfrs. But using SMART for early warning signs still has value IMO.", "msg_date": "Tue, 18 Mar 2008 18:03:53 -0700", "msg_from": "Gregory Youngblood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files\n\tin\tPostgresql" }, { "msg_contents": "Gregory Youngblood wrote:\n> In my way of thinking, and what I was referring to above, was using\n> those error conditions to identify drives to change before the reported\n> complete failures. Yes, that will mean changing drives before SMART\n> actually says there is a full failure, and you may have to fight to get\n> a drive replaced under warranty when you do so, but you are protecting\n> your data.\n>\n> \nI actually find it surprisingly easy to get a disk replaced based on a \nprinted SMART report showing uncorrectable sectors or just very high \nreallocated sector counts etc. Almost suspiciously easy. I would not be \nat all surprised if the disk vendors are, at least for their 7200rpm \nSATA disks, recording a \"black mark\" against the serial number, doing a \nlow level reformat and sending them back out as a new disk to another \ncustomer. Some of the \"new\" disks I've received have lifetimes and logs \nthat suggest they might be such refurbs - much longer test logs than \nmost new drives for example, as well as earlier serial numbers than \nothers ordered at the same time. They're also much, much more likely to \nbe DOA or develop defects early.\n> I agree with you completely that waiting for SMART to actually indicate\n> a true failure is pointless due to the thresholds set by mfrs. But using\n> SMART for early warning signs still has value IMO.\n> \nI could not agree more. smartmontools is right up there with tools like \nwireshark, mrt, and tcptraceroute in my most-vital toolbox, and it's \nmostly because of its ability to examine the vendor attributes and kick \noff scheduled self tests.\n\nI've saved a great deal of dead-disk-replacement hassle by ensuring that \nsmartd is configured to run extended self tests on the disks in all the \nmachines I operate at least fortnightly, and short tests at least \nweekly. Being able to plan ahead to swap a dying disk is very nice indeed.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 19 Mar 2008 10:13:44 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to storage music files\tin\tPostgresql" } ]
[ { "msg_contents": "Log Message:\n-----------\nFix TransactionIdIsCurrentTransactionId() to use binary search instead of\nlinear search when checking child-transaction XIDs. This makes for an\nimportant speedup in transactions that have large numbers of children,\nas in a recent example from Craig Ringer. We can also get rid of an\nugly kluge that represented lists of TransactionIds as lists of OIDs.\n\nHeikki Linnakangas\n\nModified Files:\n--------------\n pgsql/src/backend/access/transam:\n twophase.c (r1.39 -> r1.40)\n (http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/access/transam/twophase.c?r1=1.39&r2=1.40)\n xact.c (r1.258 -> r1.259)\n (http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/access/transam/xact.c?r1=1.258&r2=1.259)\n pgsql/src/include/nodes:\n pg_list.h (r1.57 -> r1.58)\n (http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/include/nodes/pg_list.h?r1=1.57&r2=1.58)\n", "msg_date": "Mon, 17 Mar 2008 02:18:56 +0000 (UTC)", "msg_from": "[email protected] (Tom Lane)", "msg_from_op": true, "msg_subject": "pgsql: Fix TransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "On Sunday 16 March 2008 22:18, Tom Lane wrote:\n> Log Message:\n> -----------\n> Fix TransactionIdIsCurrentTransactionId() to use binary search instead of\n> linear search when checking child-transaction XIDs. This makes for an\n> important speedup in transactions that have large numbers of children,\n> as in a recent example from Craig Ringer. We can also get rid of an\n> ugly kluge that represented lists of TransactionIds as lists of OIDs.\n>\n\nAre there any plans to backpatch this into REL8_3_STABLE? It looks like I am \nhitting a pretty serious performance regression on 8.3 with a stored \nprocedure that grabs a pretty big recordset, and loops through doing \ninsert....update on unique failures. The procedure get progressivly slower \nthe more records involved... and dbx shows me stuck in \nTransactionIdIsCurrentTransactionId(). I can provide provide more details if \nneeded (lmk what your looking for) but it certainly looks like the issue \ndiscussed here: \nhttp://archives.postgresql.org/pgsql-performance/2008-03/msg00191.php\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Thu, 27 Mar 2008 16:56:45 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql: Fix TransactionIdIsCurrentTransactionId() to\n\tuse binary search" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On Sunday 16 March 2008 22:18, Tom Lane wrote:\n>> Fix TransactionIdIsCurrentTransactionId() to use binary search instead of\n>> linear search when checking child-transaction XIDs.\n\n> Are there any plans to backpatch this into REL8_3_STABLE?\n\nNo.\n\n> It looks like I am \n> hitting a pretty serious performance regression on 8.3 with a stored \n> procedure that grabs a pretty big recordset, and loops through doing \n> insert....update on unique failures. The procedure get progressivly slower \n> the more records involved... 
and dbx shows me stuck in \n> TransactionIdIsCurrentTransactionId().\n\nIf you can convince me it's a regression I might reconsider, but I\nrather doubt that 8.2 was better,\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Mar 2008 17:11:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql: Fix TransactionIdIsCurrentTransactionId() to\n\tuse binary search" }, { "msg_contents": "On Thursday 27 March 2008 17:11, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > On Sunday 16 March 2008 22:18, Tom Lane wrote:\n> >> Fix TransactionIdIsCurrentTransactionId() to use binary search instead\n> >> of linear search when checking child-transaction XIDs.\n> >\n> > Are there any plans to backpatch this into REL8_3_STABLE?\n>\n> No.\n>\n> > It looks like I am\n> > hitting a pretty serious performance regression on 8.3 with a stored\n> > procedure that grabs a pretty big recordset, and loops through doing\n> > insert....update on unique failures. The procedure get progressivly\n> > slower the more records involved... and dbx shows me stuck in\n> > TransactionIdIsCurrentTransactionId().\n>\n> If you can convince me it's a regression I might reconsider, but I\n> rather doubt that 8.2 was better,\n>\n\nWell, I can't speak for 8.2, but I have a second system crunching the same \ndata using the same function on 8.1 (on lesser hardware in fact), and it \ndoesn't have these type of issues. Looking over the past week, the 8.3 box \naverages about 19 minutes to complete each run, and the 8.1 box averages 15 \nminutes (sample size is over 100 iterations of both). Of course this is when \nit completes, the 8.3 box often does not complete, as it falls farther behind \nduring the day and eventually cannot finish (it does periodic intra-day \nsumming, so there's a limited time frame it has to run, and it's jobs end up \ntaking hours to complete). \n\nI am open to the idea that there is some other issue going on here, but \nwhenever I look at it, it seems stuck in \nTransactionIdIsCurrentTransactionId(), progress for the function does get \nincreasingly slower as it progresses, and I can watch the process consuming \nmore and more memory as it goes on... I can probably get some dtrace output \ntommorrow if you want. 
\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Fri, 28 Mar 2008 01:51:54 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n>> If you can convince me it's a regression I might reconsider, but I\n>> rather doubt that 8.2 was better,\n\n> Well, I can't speak for 8.2, but I have a second system crunching the same \n> data using the same function on 8.1 (on lesser hardware in fact), and it \n> doesn't have these type of issues.\n\nIf you can condense it to a test case that is worse on 8.3 than 8.1,\nI'm willing to listen...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Mar 2008 01:55:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql: Fix TransactionIdIsCurrentTransactionId() to\n\tuse binary search" }, { "msg_contents": "On Thursday 27 March 2008 17:11, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > On Sunday 16 March 2008 22:18, Tom Lane wrote:\n> > > > > Fix TransactionIdIsCurrentTransactionId() to use binary \n> > > > > search instead \n> > > > > of linear search when checking child-transaction XIDs.\n> >\n> > > > Are there any plans to backpatch this into REL8_3_STABLE?\n> > >\n> > > No.\n> > >\n> > > > It looks like I am\n> > > > hitting a pretty serious performance regression on 8.3 with a stored\n> > > > procedure that grabs a pretty big recordset, and loops through doing\n> > > > insert....update on unique failures. The procedure get progressivly\n> > > > slower the more records involved... and dbx shows me stuck in\n> > > > TransactionIdIsCurrentTransactionId().\n> > >\n> > > If you can convince me it's a regression I might reconsider, but I\n> > > rather doubt that 8.2 was better,\n> > > \n\n> > Well, I can't speak for 8.2, but I have a second system crunching the\n> > same data using the same function on 8.1 (on lesser hardware in fact),\n> > and it doesn't have these type of issues.\n>\n> If you can condense it to a test case that is worse on 8.3 than 8.1,\n> I'm willing to listen...\n\nI spent some time trying to come up with a test case, but had no luck. Dtrace \nshowed that the running process was calling this function rather excessively; \nsample profiling for 30 seconds would look like this: \n\nFUNCTION COUNT PCNT\n<snip>\npostgres`LockBuffer 10 0.0%\npostgres`slot_deform_tuple 11 0.0%\npostgres`ExecEvalScalarVar 11 0.0%\npostgres`ExecMakeFunctionResultNoSets 13 0.0%\npostgres`IndexNext 14 0.0%\npostgres`slot_getattr 15 0.0%\npostgres`LWLockRelease 20 0.0%\npostgres`index_getnext 55 0.1%\npostgres`TransactionIdIsCurrentTransactionId 40074 99.4%\n\nBut I saw similar percentages on the 8.1 machine, so I am not convinced this \nis where the problem is. Unfortunatly (in some respects) the problem went \naway up untill this morning, so I haven't been looking at it since the above \nexchange. I'm still open to the idea that something inside \nTransactionIdIsCurrentTransactionId could have changed to make things worse \n(in addition to cpu, the process does consume a significant amount of \nmemory... prstat shows:\n\n PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP\n 3844 postgres 1118M 1094M cpu3 50 0 6:25:48 12% postgres/1\n\nI do wonder if the number of rows being worked on is significant in some \nway... 
by looking in the job log for the running procedure (we use \nautonoumous logging in this function), I can see that it has a much larger \nnumber of rows to be processed, so perhaps there is simply a tipping point \nthat is reached which causes it to stop performing... still it would be \ncurious that I never saw this behavior on 8.1\n\n= current job\n elapsed | status\n-----------------+--------------------------------------------------------\n 00:00:00.042895 | OK/starting with 2008-04-21 03:20:03\n 00:00:00.892663 | OK/processing 487291 hits up until 2008-04-21 05:20:03\n 05:19:26.595508 | ??/Processed 70000 aggregated rows so far\n(3 rows)\n\n= yesterdays run\n| elapsed | status\n+-----------------+--------------------------------------------------------\n| 00:00:00.680222 | OK/starting with 2008-04-20 04:20:02\n| 00:00:00.409331 | OK/processing 242142 hits up until 2008-04-20 05:20:04\n| 00:25:02.306736 | OK/Processed 35936 aggregated rows\n| 00:00:00.141179 | OK/\n(4 rows)\n\nUnfortunatly I don't have the 8.1 system to bang on anymore for this, (though \nanecdotaly speaking, I never saw this behavior in 8.1) however I do now have \na parallel 8.3 system crunching the data, and it is showing the same symptom \n(yes, 2 8.3 servers, crunching the same data, both bogged down now), so I do \nfeel this is something specific to 8.3. \n\nI am mostly wondering if anyone else has encountered behavior like this on 8.3 \n(large sets of insert....update exception block in plpgsql bogging down), or \nif anyone has any thoughts on which direction I should poke at it from here. \nTIA.\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Mon, 21 Apr 2008 12:29:00 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "Robert Treat wrote:\n\n> Unfortunatly I don't have the 8.1 system to bang on anymore for this, (though \n> anecdotaly speaking, I never saw this behavior in 8.1) however I do now have \n> a parallel 8.3 system crunching the data, and it is showing the same symptom \n> (yes, 2 8.3 servers, crunching the same data, both bogged down now), so I do \n> feel this is something specific to 8.3. \n> \n> I am mostly wondering if anyone else has encountered behavior like this on 8.3 \n> (large sets of insert....update exception block in plpgsql bogging down), or \n> if anyone has any thoughts on which direction I should poke at it from here. 
\n> TIA.\n\nPerhaps what you could do is backpatch the change and see if the problem\ngoes away.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 21 Apr 2008 12:54:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "On Monday 21 April 2008 12:54, Alvaro Herrera wrote:\n> Robert Treat wrote:\n> > Unfortunatly I don't have the 8.1 system to bang on anymore for this,\n> > (though anecdotaly speaking, I never saw this behavior in 8.1) however I\n> > do now have a parallel 8.3 system crunching the data, and it is showing\n> > the same symptom (yes, 2 8.3 servers, crunching the same data, both\n> > bogged down now), so I do feel this is something specific to 8.3.\n> >\n> > I am mostly wondering if anyone else has encountered behavior like this\n> > on 8.3 (large sets of insert....update exception block in plpgsql bogging\n> > down), or if anyone has any thoughts on which direction I should poke at\n> > it from here. TIA.\n>\n> Perhaps what you could do is backpatch the change and see if the problem\n> goes away.\n\nSo, after some more digging, we ended up backpatching the change. Results as \nfollows:\n\n= hanging job before patch\n\n elapsed | status\n-----------------+--------------------------------------------------------\n 00:00:00.024075 | OK/starting with 2008-04-25 08:20:02\n 00:00:00.611411 | OK/processing 624529 hits up until 2008-04-25 10:20:02\n 03:48:02.748319 | ??/Processed 65000 aggregated rows so far\n(3 rows)\n\n= successful job after patch\n\n elapsed | status\n-----------------+---------------------------------------------------------\n 00:00:00.026809 | OK/starting with 2008-04-25 08:20:02\n 00:00:03.921532 | OK/processing 2150115 hits up until 2008-04-25 15:00:02\n 00:24:45.439081 | OK/Processed 334139 aggregated rows\n 00:00:00.019433 | OK/\n(4 rows)\n\nNote the second run had to do all the rows from the first run, plus additional \nrows that accumulated while the first job was running. \n\nOddly some dtrace profiling gave me this, which is pretty different, but \ncertainly doesn't have concerns about TransactionIdIsCurrentTransactionId\n\n<snip>\npostgres`hash_search_with_hash_value 536 2.3%\npostgres`SearchCatCache 538 2.3%\npostgres`hash_seq_search 577 2.4%\npostgres`MemoryContextAllocZeroAligned 610 2.6%\npostgres`_bt_compare 671 2.8%\nlibc.so.1`memcpy 671 2.8%\npostgres`XLogInsert 755 3.2%\npostgres`LockReassignCurrentOwner 757 3.2%\npostgres`base_yyparse 1174 5.0%\npostgres`AllocSetAlloc 1244 5.3%\n\nWe still have one of our 8.3 servers running stock 8.3.1, so we'll see how \nlong before this bites us again. Would certainly be nice to get this fixed \nin the mainline code. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Fri, 25 Apr 2008 17:24:47 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> Oddly some dtrace profiling gave me this, which is pretty different, but \n> certainly doesn't have concerns about TransactionIdIsCurrentTransactionId\n\n... 
which seems to pretty much destroy your thesis, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Apr 2008 17:32:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "On Friday 25 April 2008 17:32, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > Oddly some dtrace profiling gave me this, which is pretty different, but\n> > certainly doesn't have concerns about TransactionIdIsCurrentTransactionId\n>\n> .... which seems to pretty much destroy your thesis, no?\n>\n\nHow so? Before the patch we bog down for hours, spending 99% of our time in \nTransactionIdIsCurrentTransactionId, after the patch everything performs well \n(really better than before) and we spend so little time in \nTransactionIdIsCurrentTransactionId it barely shows up on the radar. \n\nNote I'm open to the idea that TransactionIdIsCurrentTransactionId itself is \nnot the problem, but that something else changed between 8.1 and 8.3 that \nexposes TransactionIdIsCurrentTransactionId as a problem. Changing to a \nbinary search for TransactionIdIsCurrentTransactionId makes that a non-issue \nthough. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Sat, 26 Apr 2008 09:10:53 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On Friday 25 April 2008 17:32, Tom Lane wrote:\n>> Robert Treat <[email protected]> writes:\n>>> Oddly some dtrace profiling gave me this, which is pretty different, but\n>>> certainly doesn't have concerns about TransactionIdIsCurrentTransactionId\n>> \n>> .... which seems to pretty much destroy your thesis, no?\n\n> How so? Before the patch we bog down for hours, spending 99% of our time in \n> TransactionIdIsCurrentTransactionId, after the patch everything performs well\n> (really better than before) and we spend so little time in \n> TransactionIdIsCurrentTransactionId it barely shows up on the radar. \n\nOh, you failed to state that the dtrace output was post-patch. You need\nto show *pre* patch dtrace output if you want us to think it relevant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Apr 2008 13:26:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "On Saturday 26 April 2008 13:26, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > On Friday 25 April 2008 17:32, Tom Lane wrote:\n> >> Robert Treat <[email protected]> writes:\n> >>> Oddly some dtrace profiling gave me this, which is pretty different,\n> >>> but certainly doesn't have concerns about\n> >>> TransactionIdIsCurrentTransactionId\n> >>\n> >> .... which seems to pretty much destroy your thesis, no?\n> >\n> > How so? Before the patch we bog down for hours, spending 99% of our time\n> > in TransactionIdIsCurrentTransactionId, after the patch everything\n> > performs well (really better than before) and we spend so little time in\n> > TransactionIdIsCurrentTransactionId it barely shows up on the radar.\n>\n> Oh, you failed to state that the dtrace output was post-patch. 
You need\n> to show *pre* patch dtrace output if you want us to think it relevant.\n>\n\nPlease read up-thread. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Sat, 26 Apr 2008 14:20:15 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On Saturday 26 April 2008 13:26, Tom Lane wrote:\n>> Oh, you failed to state that the dtrace output was post-patch. You need\n>> to show *pre* patch dtrace output if you want us to think it relevant.\n\n> Please read up-thread. \n\nSorry, I'd forgotten your previous post.\n\nI poked around for calls to TransactionIdIsCurrentTransactionId that\nare in current code and weren't in 8.1. I found these:\n\nsrc/backend/commands/analyze.c: 965: \t\t\t\t\tif (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(targtuple.t_data)))\nsrc/backend/commands/analyze.c: 984: \t\t\t\t\tif (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmax(targtuple.t_data)))\nsrc/backend/commands/cluster.c: 803: \t\t\t\tif (!TransactionIdIsCurrentTransactionId(\nsrc/backend/commands/cluster.c: 816: \t\t\t\tif (!TransactionIdIsCurrentTransactionId(\nsrc/backend/storage/ipc/procarray.c: 374: \tif (TransactionIdIsCurrentTransactionId(xid))\nsrc/backend/utils/time/combocid.c: 108: \tAssert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));\nsrc/backend/utils/time/combocid.c: 123: \tAssert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmax(tup)));\nsrc/backend/utils/time/combocid.c: 156: \t\tTransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)))\n\nThe ANALYZE and CLUSTER calls are not likely to be your issue, but the\none in HeapTupleHeaderAdjustCmax could get called a lot, and the one\nin TransactionIdIsInProgress definitely will get called a lot.\nNeither of those calls existed in 8.2.\n\nSo I think that explains why TransactionIdIsCurrentTransactionId has\nbecome more performance-critical in 8.3 than it was before. Will\napply the back-patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Apr 2008 19:20:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [COMMITTERS] pgsql: Fix\n\tTransactionIdIsCurrentTransactionId() to use binary search" } ]
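For readers trying to reproduce the slowdown Robert describes, the workload is the usual plpgsql insert-or-update loop; a minimal sketch with made-up table and column names follows. Each pass through the BEGIN ... EXCEPTION block runs as a subtransaction, so a single call that touches hundreds of thousands of rows leaves the top-level transaction with a very long list of child XIDs, which is the list the commit at the top of this thread now checks with a binary search instead of a linear scan.

-- Hypothetical tables: staging_hits feeds an aggregate table keyed on page_id.
CREATE OR REPLACE FUNCTION merge_hits() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT page_id, hits FROM staging_hits LOOP
        BEGIN
            -- optimistic insert ...
            INSERT INTO agg_hits (page_id, hits)
            VALUES (r.page_id, r.hits);
        EXCEPTION WHEN unique_violation THEN
            -- ... update on unique failure, as described in the thread
            UPDATE agg_hits SET hits = hits + r.hits
             WHERE page_id = r.page_id;
        END;   -- each iteration of this block is one subtransaction
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Running SELECT merge_hits(); over a large staging table is enough to accumulate tens of thousands of subtransaction XIDs inside one top-level transaction, which is the situation the back-patched binary search is meant to keep cheap.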
[ { "msg_contents": "hi all,\nI want this mail to be continued about summary of performance tuning\ntools... or other postgres related tools..\n\nI ll start with saying there is a tool SCHEMASPY ( i got to know about this\nfrom this group only ), this will draw ER diagram and gives interesting\ninformations about our postgres database..\n\n\nWhat are all the other opensource tools available like this for seeing\ninformations about our postgres database... and tools for finetuning our\npostgres database....\n\nPlease join with me and summarize the names and usage of the tools....\n\nUse SchemaSpy a very easily installable and usable tool...\n\nhi all,I want this mail to be continued about summary of performance tuning tools... or other postgres related tools..I ll start with saying there is a tool SCHEMASPY ( i got to know about this from this group only ), this will draw ER diagram and gives interesting informations about our postgres database..\nWhat are all the other opensource tools available like this for seeing informations about our postgres database... and tools for finetuning our postgres database....Please join with me and summarize the names and usage of the tools.... \nUse SchemaSpy a very easily installable and usable tool...", "msg_date": "Mon, 17 Mar 2008 13:27:40 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance tools" }, { "msg_contents": "Toad Data Modeler from Quest Software is an E/R diagram tool that works for\nus.\n\nAnd - it has a freeware version.\n\n \n\nMark Steben\n\nSenior Database Administrator\n@utoRevenueT \nA Dominion Enterprises Company\n480 Pleasant Street\nSuite B200\nLee, MA 01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\n\nmsteben <blocked::mailto:[email protected]> @autorevenue.com\n\nVisit our new website at \n <blocked::http://www.autorevenue.com/> www.autorevenue.com\n\n \n\nIMPORTANT: The information contained in this e-mail message is confidential\nand is intended only for the named addressee(s). If the reader of this\ne-mail message is not the intended recipient (or the individual responsible\nfor the delivery of this e-mail message to the intended recipient), please\nbe advised that any re-use, dissemination, distribution or copying of this\ne-mail message is prohibited. If you have received this e-mail message in\nerror, please reply to the sender that you have received this e-mail message\nin error and then delete it. Thank you.\n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of sathiya psql\nSent: Monday, March 17, 2008 3:58 AM\nTo: [email protected]\nSubject: [PERFORM] performance tools\n\n \n\nhi all,\nI want this mail to be continued about summary of performance tuning\ntools... or other postgres related tools..\n\nI ll start with saying there is a tool SCHEMASPY ( i got to know about this\nfrom this group only ), this will draw ER diagram and gives interesting\ninformations about our postgres database..\n\n\nWhat are all the other opensource tools available like this for seeing\ninformations about our postgres database... and tools for finetuning our\npostgres database....\n\nPlease join with me and summarize the names and usage of the tools.... 
\n\nUse SchemaSpy a very easily installable and usable tool...", "msg_date": "Mon, 17 Mar 2008 11:26:54 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tools" }, { "msg_contents": "i thought many a replies will come... but only one..\n\ncommon guys... it may be helping you in tuning your database indirectly,\npost that tool also, give some informations such as\n\nTool Name: Schemaspy\nOpen Source: YES\nDatabase: Postgres\nURL: http://sourceforge.net/projects/schemaspy/1\nFollowing can be taken as optional..\nEasily Installable: YES\nPerformance TUNING tool: Partially YES ( YES / NO / Partially Yes )\nER diagram tool: Yes / No\nQuery Analysis Tool: Yes / No\n\nProbably other informations also\n\ncommon start sharing...", "msg_date": "Mon, 17 Mar 2008 21:16:33 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance tools" }, { "msg_contents": "Have you tried the archive search tool for this very mailing list? 
\nThere's a wealth of information imparted all the time, and tuning is \ngenerally about knowledge of what's happening, not blindly following \nthe output of some program.\n\n\nOn Mar 17, 2008, at 8:46 AM, sathiya psql wrote:\n\n> i thought many a replies will come... but only one..\n>\n> common guys... it may be helping you in tuning your database \n> indirectly, post that tool also, give some informations such as\n>\n> Tool Name: Schemaspy\n> Open Source: YES\n> Database: Postgres\n> URL: http://sourceforge.net/projects/schemaspy/1\n> Following can be taken as optional..\n> Easily Installable: YES\n> Performance TUNING tool: Partially YES ( YES / NO / Partially Yes )\n> ER diagram tool: Yes / No\n> Query Analysis Tool: Yes / No\n>\n> Probably other informations also\n>\n> common start sharing...\n\n\nHave you tried the archive search tool for this very mailing list? There's a wealth of information imparted all the time, and tuning is generally about knowledge of what's happening, not blindly following the output of some program.On Mar 17, 2008, at 8:46 AM, sathiya psql wrote:i thought many a replies will come... but only one.. common guys... it may be helping you in tuning your database indirectly, post that tool also, give some informations such asTool Name: SchemaspyOpen Source: YES Database: PostgresURL: http://sourceforge.net/projects/schemaspy/1Following can be taken as optional..Easily Installable: YESPerformance TUNING tool: Partially YES ( YES / NO / Partially Yes ) ER diagram tool: Yes / NoQuery Analysis Tool: Yes / NoProbably other informations alsocommon start sharing...", "msg_date": "Mon, 17 Mar 2008 09:07:56 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tools" }, { "msg_contents": "Hi,\n\nok, what about Squirrel-Sql ?\n\nTool Name: Squirrel sql\nFree software: YES\nURL: http://squirrel-sql.sourceforge.net/\n\nJava multi-rdbms sql client.\nRun on Linux, Windows, ...\n\nEasily Installable: YES\nPerformance TUNING tool: don't know what this means :)\nER diagram tool: Yes\nQuery Analysis Tool: Yes\n\n+ autocompletion, hibernate plugin (HQL), parametrized queries, ...\n\n\n\nttp://squirrel-sql.sourceforge.net/\nLe lundi 17 mars 2008 à 21:16 +0530, sathiya psql a écrit :\n> i thought many a replies will come... but only one.. \n> \n> common guys... it may be helping you in tuning your database\n> indirectly, post that tool also, give some informations such as\n> \n> Tool Name: Schemaspy\n> Open Source: YES\n> Database: Postgres\n> URL: http://sourceforge.net/projects/schemaspy/1\n> Following can be taken as optional..\n> Easily Installable: YES\n> Performance TUNING tool: Partially YES ( YES / NO / Partially Yes )\n> ER diagram tool: Yes / No\n> Query Analysis Tool: Yes / No\n> \n> Probably other informations also\n> \n> common start sharing...\n-- \nFranck Routier\nAxège Sarl - 23, rue Saint-Simon, 63000 Clermont-Ferrand (FR)\nTél : +33 463 059 540\nmèl : [email protected]\n\n\n", "msg_date": "Mon, 17 Mar 2008 17:37:42 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tools" }, { "msg_contents": "On 18/03/2008, sathiya psql <[email protected]> wrote:\n> i thought many a replies will come... but only one..\nYou *ARE* aware of the fact that many people on this planet\naren't in your time-zone, eh? 
And as Ben pointed out: there's\nbeen a good lot of similar questions - people who want to know\ncan actually find them using the search on this page:\n\nhttp://archives.postgresql.org/\n\n\nCheers,\nAndrej\n", "msg_date": "Tue, 18 Mar 2008 11:06:50 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tools" } ]
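Besides the external tools surveyed in this thread (SchemaSpy, Toad Data Modeler, SQuirreL SQL), some of the same information about what a database is doing can be read straight from PostgreSQL's built-in statistics views. The two queries below are only an illustrative sketch added here for context; they are not part of the thread and are not a substitute for the tools being discussed.

-- Tables that are read mostly by sequential scans; a common first stop
-- when hunting for missing indexes.
SELECT relname, seq_scan, seq_tup_read, idx_scan
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC
 LIMIT 20;

-- Indexes that have never been used since the statistics were last reset.
SELECT schemaname, relname, indexrelname
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0;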
[ { "msg_contents": "A number of weeks ago, I had posted a request for help regarding join\nestimates in pg 8.2.6. In moderately complex to very complex ad hoc queries\nin our system, we were consistently having the system massively\nunderestimate the number of rows coming out of join at a low level making\nthese queries very slow and inefficient. At times the mis-estimation was\n1000:1. Ie when it should have been 2000 returned rows from a join, the\nplanner assumed 1 or 2 rows. Modifying stats on the join columns up to the\nmax made little difference (y, we analyzed tables in question after each\nchange). Since the planner sees only one row coming out of the low level\njoin, it uses nested loops all the way up chain when it would be more\nefficient to use another join type. In our informal testing, we found that\nby disabling nested loops and forcing other join types, we could get\nfantastic speedups. Those queries that seem to benefit most from this have\na lot of sub-queries being built up into a final query set as well as a fair\nnumber of joins in the sub-queries. Since these are user created and are\nthen generated via our tools, they can be quite messy at times.\nAfter doing this testing, have since added some functionality in our ad hoc\nreporting tool to allow us to tune individual queries by turning on and off\nindividual join types at runtime. As we hear of slow reports, we've been\nindividually turning off the nested loops on those reports. Almost always,\nthis has increased the performance of the reports, sometimes in a completely\namazing fashion (many, many minutes to seconds at times). It of course\ndoesn't help everything and turning off nested loops in general causes\noverall slowdown in other parts of the system.\n\nAs this has gone on over the last couple of weeks, it feels like we either\nhave a misconfiguration on the server, or we are tickling a mis-estimation\nbug in the planner. I'm hoping it's the former. The db server has 8G of\nmemory and raid1 -wal, raid10- data configuration, os is linux 2.6.9, db is\n8.2.6. The db is a utf-8 db if that is of any bearing and autovac and\nbgwriter are on.\n\nNondefault settings of interest from postgresql.conf\n\n\nshared_buffers = 1024MB # min 128kB or max_connections*16kB\nwork_mem = 256MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nrandom_page_cost = 1.75 # same scale as above\neffective_cache_size = 4096MB\ndefault_statistics_target = 100 # range 1-1000\n\n\nIf nothing else, perhaps this will help somebody else who has run into the\nsame problem. If explain analyze of a query shows a large mis-estimation of\nrows returned on a join (estimate=1, actual=2k) causing the planner to\nchoose nested loops instead of another join type, you might try running the\nquery with nested loops set to off and see if that helps w/ performance.\n\nThanks,\n\n-Chris\n\nA number of weeks ago, I had posted a request for help regarding join estimates in pg 8.2.6.  In moderately complex to very complex ad hoc queries in our system, we were consistently having the system massively underestimate the number of rows coming out of join at a low level making these queries very slow and inefficient.  At times the mis-estimation was 1000:1.  Ie when it should have been 2000 returned rows from a join, the planner assumed 1 or 2 rows.  Modifying stats on the join columns up to the max made little difference (y, we analyzed tables in question after each change).  
Since the planner sees only one row coming out of the low level join, it uses nested loops all the way up chain when it would be more efficient to use another join type.  In our informal testing, we found that by disabling nested loops and forcing other join types, we could get fantastic speedups.  Those queries that seem to benefit most from this have a lot of sub-queries being built up into a final query set as well as a fair number of joins in the sub-queries.  Since these are user created and are then generated via our tools, they can be quite messy at times.  \nAfter doing this testing, have since added some functionality in our ad hoc reporting tool to allow us to tune individual queries by turning on and off individual join types at runtime.  As we hear of slow reports, we've been individually turning off the nested loops on those reports.  Almost always, this has increased the performance of the reports, sometimes in a completely amazing fashion (many, many minutes to seconds at times).  It of course doesn't help everything and turning off nested loops in general causes overall slowdown in other parts of the system.\nAs this has gone on over the last couple of weeks, it feels like we either have a misconfiguration on the server, or we are tickling a mis-estimation bug in the planner.  I'm hoping it's the former.  The db server has 8G of memory and raid1 -wal, raid10- data configuration, os is linux 2.6.9, db is 8.2.6.  The db is a utf-8 db if that is of any bearing and autovac and bgwriter are on.\nNondefault settings of interest from postgresql.conf shared_buffers = 1024MB                 # min 128kB or max_connections*16kB\nwork_mem = 256MB                                # min 64kBmaintenance_work_mem = 256MB            # min 1MBrandom_page_cost = 1.75                 # same scale as aboveeffective_cache_size = 4096MB\ndefault_statistics_target = 100         # range 1-1000 If nothing else, perhaps this will help somebody else who has run into the same problem.  If explain analyze of a query shows a large mis-estimation of rows returned on a join (estimate=1, actual=2k) causing the planner to choose nested loops instead of another join type, you might try running the query with nested loops set to off and see if that helps w/ performance.\nThanks,-Chris", "msg_date": "Tue, 18 Mar 2008 11:35:08 -0400", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Planner mis-estimation using nested loops followup" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 18 Mar 2008 11:35:08 -0400\r\n\"Chris Kratz\" <[email protected]> wrote:\r\n\r\n> Nondefault settings of interest from postgresql.conf\r\n> \r\n> \r\n> shared_buffers = 1024MB # min 128kB or\r\n> max_connections*16kB work_mem = 256MB\r\n> # min 64kB maintenance_work_mem = 256MB # min 1MB\r\n> random_page_cost = 1.75 # same scale as above\r\n> effective_cache_size = 4096MB\r\n> default_statistics_target = 100 # range 1-1000\r\n> \r\n> \r\n> If nothing else, perhaps this will help somebody else who has run\r\n> into the same problem. If explain analyze of a query shows a large\r\n> mis-estimation of rows returned on a join (estimate=1, actual=2k)\r\n> causing the planner to choose nested loops instead of another join\r\n> type, you might try running the query with nested loops set to off\r\n> and see if that helps w/ performance.\r\n\r\nDid you try that? Did it work?\r\n\r\nJoshua D. 
Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH3+TlATb/zqfZUUQRAmXUAKCjwidfW0KXjzUM26I4yTx94/wSiQCfaqWU\r\neI9i5yucBH718okW3w2UewQ=\r\n=BO3E\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Tue, 18 Mar 2008 08:50:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner mis-estimation using nested loops followup" }, { "msg_contents": "Y, turning nested loops off in specific cases has increased performance\ngreatly. It didn't fix the planner mis-estimation, just the plan it chose.\n It's certainly not a panacea, but it's something we now try early on when\ntrying to speed up a query that matches these characteristics.\n-Chris\n\nOn 3/18/08, Joshua D. Drake <[email protected]> wrote:\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n>\n> On Tue, 18 Mar 2008 11:35:08 -0400\n> \"Chris Kratz\" <[email protected]> wrote:\n>\n> > Nondefault settings of interest from postgresql.conf\n> >\n> >\n> > shared_buffers = 1024MB # min 128kB or\n> > max_connections*16kB work_mem = 256MB\n> > # min 64kB maintenance_work_mem = 256MB # min 1MB\n> > random_page_cost = 1.75 # same scale as above\n> > effective_cache_size = 4096MB\n> > default_statistics_target = 100 # range 1-1000\n> >\n> >\n> > If nothing else, perhaps this will help somebody else who has run\n> > into the same problem. If explain analyze of a query shows a large\n> > mis-estimation of rows returned on a join (estimate=1, actual=2k)\n> > causing the planner to choose nested loops instead of another join\n> > type, you might try running the query with nested loops set to off\n> > and see if that helps w/ performance.\n>\n>\n> Did you try that? Did it work?\n>\n> Joshua D. Drake\n>\n>\n> - --\n> The PostgreSQL Company since 1997: http://www.commandprompt.com/\n> PostgreSQL Community Conference: http://www.postgresqlconference.org/\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL political pundit | Mocker of Dolphins\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n>\n> iD8DBQFH3+TlATb/zqfZUUQRAmXUAKCjwidfW0KXjzUM26I4yTx94/wSiQCfaqWU\n> eI9i5yucBH718okW3w2UewQ=\n> =BO3E\n> -----END PGP SIGNATURE-----\n>\n\nY, turning nested loops off in specific cases has increased performance greatly.  It didn't fix the planner mis-estimation, just the plan it chose.  It's certainly not a panacea, but it's something we now try early on when trying to speed up a query that matches these characteristics.\n-ChrisOn 3/18/08, Joshua D. Drake <[email protected]> wrote:\n-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, 18 Mar 2008 11:35:08 -0400 \"Chris Kratz\" <[email protected]> wrote: \n > Nondefault settings of interest from postgresql.conf > > > shared_buffers = 1024MB                 # min 128kB or > max_connections*16kB work_mem = 256MB > # min 64kB maintenance_work_mem = 256MB            # min 1MB\n > random_page_cost = 1.75                 # same scale as above > effective_cache_size = 4096MB > default_statistics_target = 100         # range 1-1000 > > > If nothing else, perhaps this will help somebody else who has run\n > into the same problem.  
If explain analyze of a query shows a large > mis-estimation of rows returned on a join (estimate=1, actual=2k) > causing the planner to choose nested loops instead of another join\n > type, you might try running the query with nested loops set to off > and see if that helps w/ performance. Did you try that? Did it work? Joshua D. Drake - -- The PostgreSQL Company since 1997: http://www.commandprompt.com/\n PostgreSQL Community Conference: http://www.postgresqlconference.org/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n      PostgreSQL political pundit | Mocker of Dolphins -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQFH3+TlATb/zqfZUUQRAmXUAKCjwidfW0KXjzUM26I4yTx94/wSiQCfaqWU eI9i5yucBH718okW3w2UewQ=\n =BO3E -----END PGP SIGNATURE-----", "msg_date": "Tue, 18 Mar 2008 11:58:21 -0400", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner mis-estimation using nested loops followup" }, { "msg_contents": "On Tue, 18 Mar 2008, Chris Kratz wrote:\n> In moderately complex to very complex ad hoc queries in our system, we \n> were consistently having the system massively underestimate the number \n> of rows coming out of join at a low level making these queries very slow \n> and inefficient.\n\nI have long thought that perhaps Postgres should be a little more cautious \nabout its estimates, and assume the worst case scenario sometimes, rather \nthan blindly following the estimates from the statistics. The problem is \nthat Postgres uses the statistics to generate best estimates of the cost. \nHowever, it does not take into account the consequences of being wrong. If \nit was more clever, then it may be able to decide to use a non-optimal \nalgorithm according to the best estimate, if the optimal algorithm has the \npossibility of blowing up to 1000 times the work if the estimates are off \nby a bit.\n\nSuch cleverness would be very cool, but (I understand) a lot of work. It \nwould hopefully solve this problem.\n\nMatthew\n\n-- \n<Taking apron off> And now you can say honestly that you have been to a\nlecture where you watched paint dry.\n - Computer Graphics Lecturer\n", "msg_date": "Tue, 18 Mar 2008 16:24:27 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner mis-estimation using nested loops followup" }, { "msg_contents": "On Tue, Mar 18, 2008 at 9:58 AM, Chris Kratz <[email protected]> wrote:\n> Y, turning nested loops off in specific cases has increased performance\n> greatly. It didn't fix the planner mis-estimation, just the plan it chose.\n> It's certainly not a panacea, but it's something we now try early on when\n> trying to speed up a query that matches these characteristics.\n\nI have to admit I've had one or two reporting queries in the past that\nturning off nested_loop was the only reasonable fix due to\nmisestimation. I'd tried changing the stats targets etc and nothing\nreally worked reliably to prevent the nested_loop from showing up in\nthe wrong places.\n", "msg_date": "Tue, 18 Mar 2008 13:46:51 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner mis-estimation using nested loops followup" }, { "msg_contents": "Scott Marlowe wrote\n> On Tue, Mar 18, 2008 at 9:58 AM, Chris Kratz \n> <[email protected]> wrote:\n> > Y, turning nested loops off in specific cases has increased \n> performance\n> > greatly. 
It didn't fix the planner mis-estimation, just \n> the plan it chose.\n> > It's certainly not a panacea, but it's something we now try \n> early on when\n> > trying to speed up a query that matches these characteristics.\n> \n> I have to admit I've had one or two reporting queries in the past that\n> turning off nested_loop was the only reasonable fix due to\n> misestimation. I'd tried changing the stats targets etc and nothing\n> really worked reliably to prevent the nested_loop from showing up in\n> the wrong places.\n\nOne cause of planner mis-estimation I've seen quite frequently is when there are a number of predicates on the data that filter the results in roughly the same manner. PostgreSQL, not knowing that the filters are highly correlated, multiplies the \"fraction of selected rows\" together.\n\nMaking up an example using pseudo-code, if this is one of the subqueries:\n\nselect * from orders where\norder_date is recent\nand\norder_fulfilled is false\n\nUsed in an application where the unfulfilled orders are the recent ones.\n\nIf postgresql estimates that 1% of the orders are recent, and 1% are unfulfilled, then it will assume that 0.01% are both recent and unfulfilled. If in reality it's more like 0.9%, and your actual row count will be 90 times your estimate.\n\nThe only kind of simple behind-the-scenes fix for these situations that I know of is to add more indexes (such as a partial index on order_date where order_fulfilled is false), which slows down all your updates, and only works for the simplest situations.\n\nA general fix would need to calculate, store, and lookup a huge amount of correlation data. Probably equal to the square of the number of rows in pg_stats, though this could possibly be generated as needed.\n\nPerhaps if the analyze command was extended to be able to take a command line like:\nANALYZE CARTESIAN CORRELATION orders(order_date,order_fulfilled);\nwhich stores the fraction for each combination of most frequent value, and domain buckets from order_date and order_fulfilled.\nThe difficulty is whether the planner can quickly and easily determine whether appropriate correlation data exists for the query plan it is estimating.\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. 
If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality\n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/dmzmessaging.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Wed, 19 Mar 2008 10:26:58 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner mis-estimation using nested loops followup" }, { "msg_contents": "At 00:24 08/03/19, Matthew wrote:\n>On Tue, 18 Mar 2008, Chris Kratz wrote:\n>>In moderately complex to very complex ad hoc queries in our system, \n>>we were consistently having the system massively underestimate the \n>>number of rows coming out of join at a low level making these \n>>queries very slow and inefficient.\n>\n>I have long thought that perhaps Postgres should be a little more \n>cautious about its estimates, and assume the worst case scenario \n>sometimes, rather than blindly following the estimates from the \n>statistics. The problem is that Postgres uses the statistics to \n>generate best estimates of the cost. However, it does not take into \n>account the consequences of being wrong. If it was more clever, then \n>it may be able to decide to use a non-optimal algorithm according to \n>the best estimate, if the optimal algorithm has the possibility of \n>blowing up to 1000 times the work if the estimates are off by a bit.\n>\n>Such cleverness would be very cool, but (I understand) a lot of \n>work. It would hopefully solve this problem.\n>\n>Matthew\n\nJust a crazy thought. If Postgres could check its own estimates or \nset some limits while executing the query and, if it found that the \nestimates were way off, fall back to a less optimal plan immediately \nor the next time, that would be cool.\n\nKC \n\n", "msg_date": "Wed, 19 Mar 2008 08:00:47 +0800", "msg_from": "KC ESL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner mis-estimation using nested loops\n followup" } ]
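The per-query workaround this thread settles on, turning off nested loops only for the reports that suffer from the misestimation, can be scoped to a single transaction instead of being changed globally. The sketch below uses hypothetical table and column names; only the pattern is taken from the thread.

-- Raising the statistics target on the join columns and re-analyzing is the
-- cheap thing to try first, though per the thread it does not always help.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- If EXPLAIN ANALYZE still shows something like rows=1 estimated against
-- thousands of actual rows on a low-level join, disable nested loops for
-- this query only; SET LOCAL reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL enable_nestloop = off;
EXPLAIN ANALYZE
SELECT c.name, sum(o.total)
  FROM customers c
  JOIN orders o ON o.customer_id = c.id
 GROUP BY c.name;
COMMIT;

An application can issue the same SET LOCAL per report, which is essentially what the ad hoc reporting tool described earlier in the thread does with its per-query join-type switches.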
[ { "msg_contents": "Hi folks,\n\nWe are running Postgres 8.2.5.\n\nI have 3 tables, call them A, B, and C\n\n \n\nTable A houses info on all emails that have ever been created for the\npurpose of being delivered to our end customers.\n\nBig table. About 23 million rows.\n\n Table B, the 'holding' table is populated with Table A key information via\nan after trigger whenever Table A is updated or inserted to.\n\n Table C, the 'work' table is populated by function D from table B. It is\nconfigured exactly like table B.\n\n PLPGSQL Function D inserts a predefined number of rows from table B to\ntable C. For purposes of discussion, say 500. \n\n Function D, after it does its thing, then deletes the 500 rows it\nprocessed from table B, and ALL 500 rows from table C.\n\n \n\nThis entire process, after a sleep period of 10 seconds, repeats itself all\nday.\n\n \n\nAfter each fifth iteration of function D, we perform a VACUUM FULL on both\ntables B and C. \n\n Takes less than 5 seconds.\n\n \n\nIn terms of transaction processing:\n\n Table A is processed by many transactions (some read, some update), \n\n Table B is processed by\n\n- any transaction updating or inserting to Table A via the after\ntrigger (insert, update)\n\n- Function D (insert, update, delete)\n\n Table C is processed ONLY by function D (insert, update, delete). Nothing\nelse touches it;\n\n PG_LOCKS table verifies that that this table is totally free of any\ntransaction \n\n Between iterations of function D.\n\n \n\nSo my question is this: Shouldn't VACUUM FULL clean Table C and reclaim all\nits space?\n\nIt doesn't. It usually reports the same number of pages before and after\nthe Vacuum.\n\nWe have to resort to TRUNCATE to clean and reclaim this table, which\n\nMust be empty at the beginning of function D. \n\n \n\nAny insights appreciated. Thanks,\n\n \n\nMark Steben\n\nSenior Database Administrator\n@utoRevenueT \nA Dominion Enterprises Company\n480 Pleasant Street\nSuite B200\nLee, MA 01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\n\nmsteben <blocked::mailto:[email protected]> @autorevenue.com\n\nVisit our new website at \n <blocked::http://www.autorevenue.com/> www.autorevenue.com\n\n \n\nIMPORTANT: The information contained in this e-mail message is confidential\nand is intended only for the named addressee(s). If the reader of this\ne-mail message is not the intended recipient (or the individual responsible\nfor the delivery of this e-mail message to the intended recipient), please\nbe advised that any re-use, dissemination, distribution or copying of this\ne-mail message is prohibited. If you have received this e-mail message in\nerror, please reply to the sender that you have received this e-mail message\nin error and then delete it. Thank you.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi folks,\nWe are running Postgres 8.2.5.\nI have 3 tables, call them A, B, and C\n \nTable A houses info on all emails that have ever been created for the\npurpose of being delivered to our end customers.\nBig table.  About 23 million rows.\n  Table B, the ‘holding’ table is populated with Table\nA key information via an after trigger whenever Table A is updated or inserted\nto.\n  Table C, the ‘work’ table is populated by function D\nfrom table B.  It is configured exactly like table B.\n  PLPGSQL Function D inserts a predefined number of rows from\ntable B to table C. For purposes of discussion, say 500.  
\n  Function D, after it does its thing, then deletes the 500 rows\nit processed from table B, and ALL 500 rows from table C.\n \nThis entire process, after a sleep period of 10 seconds, repeats itself\nall day.\n \nAfter each fifth iteration of function D, we perform a VACUUM FULL on\nboth tables B and C. \n   Takes less than 5 seconds.\n \nIn terms of transaction processing:\n  Table A is processed by many transactions (some read, some\nupdate), \n  Table B is processed by\n-        \nany transaction updating or\ninserting to Table A via the after trigger (insert, update)\n-        \nFunction D (insert, update, delete)\n  Table C is processed ONLY by function D (insert, update, delete). \nNothing else touches it;\n    PG_LOCKS table verifies that that this table is\ntotally free of any transaction \n        Between iterations of\nfunction D.\n \nSo my question is this:  Shouldn’t VACUUM FULL clean Table C\nand reclaim all its space?\nIt doesn’t.  It usually reports the same number of pages\nbefore and after the Vacuum.\nWe have to resort to TRUNCATE to clean and reclaim this table, which\nMust be empty at the beginning of function D. \n \nAny insights appreciated. Thanks,\n \n\nMark Steben\nSenior Database Administrator\n@utoRevenue™ \nA Dominion Enterprises\nCompany\n480 Pleasant Street\nSuite B200\nLee, MA\n01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\n\[email protected]\n\nVisit our new website\nat \nwww.autorevenue.com\n\n\n\n \n\n\nIMPORTANT: The information contained in\nthis e-mail message is confidential and is intended only for the named\naddressee(s). If the reader of this e-mail message is not the intended\nrecipient (or the individual responsible for the delivery of this e-mail\nmessage to the intended recipient), please be advised that any re-use,\ndissemination, distribution or copying of this e-mail message is prohibited.\n If you have received this e-mail message in error, please reply to the\nsender that you have received this e-mail message in error and then delete it.\n Thank you.", "msg_date": "Tue, 18 Mar 2008 17:06:45 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "\n> \n> So my question is this: Shouldn�t VACUUM FULL clean Table C and reclaim \n> all its space?\n\nYou've got concepts mixed up.\n\nTRUNCATE deletes all of the data from a particular table (and works in \nall dbms's).\n\nhttp://www.postgresql.org/docs/8.3/interactive/sql-truncate.html\n\n\n\nVACUUM FULL is a postgres-specific thing which does work behind the \nscenes to clean up MVCC left-overs. It does not touch any current data \nor records in the table, it's purely behind the scenes work.\n\nhttp://www.postgresql.org/docs/current/interactive/sql-vacuum.html\n\n\nThe two have completely different uses and nothing to do with each other \nwhat-so-ever.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 19 Mar 2008 12:10:37 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "\nI know what Vacuum full and truncate are supposed to do.\n\nMy confusion lies in the fact that we empty table C after\nFunction D finishes. There aren't any current data or records\nTo touch on the table. The MVCC leftovers are all purely dead\nRows that should be deleted. 
Given this, I thought that \nVacuum full and truncate should provide exactly the same result.\n\nI've attached my original memo to the bottom.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Chris\nSent: Tuesday, March 18, 2008 9:11 PM\nTo: Mark Steben\nCc: [email protected]\nSubject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL\n\n\n> \n> So my question is this: Shouldn't VACUUM FULL clean Table C and reclaim \n> all its space?\n\nYou've got concepts mixed up.\n\nTRUNCATE deletes all of the data from a particular table (and works in \nall dbms's).\n\nhttp://www.postgresql.org/docs/8.3/interactive/sql-truncate.html\n\n\n\nVACUUM FULL is a postgres-specific thing which does work behind the \nscenes to clean up MVCC left-overs. It does not touch any current data \nor records in the table, it's purely behind the scenes work.\n\nhttp://www.postgresql.org/docs/current/interactive/sql-vacuum.html\n\n\nThe two have completely different uses and nothing to do with each other \nwhat-so-ever.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n[Mark Steben] \n\nTable A houses info on all emails that have ever been created for the\npurpose of being delivered to our end customers.\n\nBig table. About 23 million rows.\n\n Table B, the 'holding' table is populated with Table A key information via\nan after trigger whenever Table A is updated or inserted to.\n\n Table C, the 'work' table is populated by function D from table B. It is\nconfigured exactly like table B.\n\n PLPGSQL Function D inserts a predefined number of rows from table B to\ntable C. For purposes of discussion, say 500. \n\n Function D, after it does its thing, then deletes the 500 rows it\nprocessed from table B, and ALL 500 rows from table C.\n\n \n\nThis entire process, after a sleep period of 10 seconds, repeats itself all\nday.\n\n \n\nAfter each fifth iteration of function D, we perform a VACUUM FULL on both\ntables B and C. \n\n Takes less than 5 seconds.\n\n \n\nIn terms of transaction processing:\n\n Table A is processed by many transactions (some read, some update), \n\n Table B is processed by\n\n- any transaction updating or inserting to Table A via the after\ntrigger (insert, update)\n\n- Function D (insert, update, delete)\n\n Table C is processed ONLY by function D (insert, update, delete). Nothing\nelse touches it;\n\n PG_LOCKS table verifies that that this table is totally free of any\ntransaction \n\n Between iterations of function D.\n\n\n\n", "msg_date": "Wed, 19 Mar 2008 09:22:33 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "In response to \"Mark Steben\" <[email protected]>:\n> \n> I know what Vacuum full and truncate are supposed to do.\n\nThen why do you keep doing the vacuum full? Doesn't really make\nsense as a maintenance strategy.\n\n> My confusion lies in the fact that we empty table C after\n> Function D finishes. There aren't any current data or records\n> To touch on the table. The MVCC leftovers are all purely dead\n> Rows that should be deleted. Given this, I thought that \n> Vacuum full and truncate should provide exactly the same result.\n\nI would expect so as well. You may want to mention which version\nof PostgreSQL you are using, because it sounds like a bug. 
If it's\nan old version, you probably need to upgrade. If it's a recent\nversion and you can reproduce this behaviour, you probably need\nto approach this like a bug report.\n\n> \n> I've attached my original memo to the bottom.\n> \n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Chris\n> Sent: Tuesday, March 18, 2008 9:11 PM\n> To: Mark Steben\n> Cc: [email protected]\n> Subject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL\n> \n> \n> > \n> > So my question is this: Shouldn't VACUUM FULL clean Table C and reclaim \n> > all its space?\n> \n> You've got concepts mixed up.\n> \n> TRUNCATE deletes all of the data from a particular table (and works in \n> all dbms's).\n> \n> http://www.postgresql.org/docs/8.3/interactive/sql-truncate.html\n> \n> \n> \n> VACUUM FULL is a postgres-specific thing which does work behind the \n> scenes to clean up MVCC left-overs. It does not touch any current data \n> or records in the table, it's purely behind the scenes work.\n> \n> http://www.postgresql.org/docs/current/interactive/sql-vacuum.html\n> \n> \n> The two have completely different uses and nothing to do with each other \n> what-so-ever.\n> \n> -- \n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> [Mark Steben] \n> \n> Table A houses info on all emails that have ever been created for the\n> purpose of being delivered to our end customers.\n> \n> Big table. About 23 million rows.\n> \n> Table B, the 'holding' table is populated with Table A key information via\n> an after trigger whenever Table A is updated or inserted to.\n> \n> Table C, the 'work' table is populated by function D from table B. It is\n> configured exactly like table B.\n> \n> PLPGSQL Function D inserts a predefined number of rows from table B to\n> table C. For purposes of discussion, say 500. \n> \n> Function D, after it does its thing, then deletes the 500 rows it\n> processed from table B, and ALL 500 rows from table C.\n> \n> \n> \n> This entire process, after a sleep period of 10 seconds, repeats itself all\n> day.\n> \n> \n> \n> After each fifth iteration of function D, we perform a VACUUM FULL on both\n> tables B and C. \n> \n> Takes less than 5 seconds.\n> \n> \n> \n> In terms of transaction processing:\n> \n> Table A is processed by many transactions (some read, some update), \n> \n> Table B is processed by\n> \n> - any transaction updating or inserting to Table A via the after\n> trigger (insert, update)\n> \n> - Function D (insert, update, delete)\n> \n> Table C is processed ONLY by function D (insert, update, delete). Nothing\n> else touches it;\n> \n> PG_LOCKS table verifies that that this table is totally free of any\n> transaction \n> \n> Between iterations of function D.\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. 
If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 19 Mar 2008 09:34:50 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "Mark Steben escribi�:\n\n> My confusion lies in the fact that we empty table C after\n> Function D finishes. There aren't any current data or records\n> To touch on the table. The MVCC leftovers are all purely dead\n> Rows that should be deleted.\n\nNot if there are open transactions that might want to look at the table\nafter the VACUUM FULL is completed.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 19 Mar 2008 10:37:33 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "Bill,\nThanks for your quick response.\nWe are at version 8.2.5 - just recently upgraded from 7.4.5.\nThis strategy using truncate was just implemented yesterday.\nNow I will revisit the vacuum full strategy. Does seem to\nBe redundant. \nIs there a procedure to begin reporting a bug? Is there\nSomeone or an email address that I could bring evidence to?\n\n\nMark Steben\nSenior Database Administrator\n@utoRevenueT \nA Dominion Enterprises Company\n480 Pleasant Street\nSuite B200\nLee, MA 01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\[email protected]\n\nVisit our new website at \nwww.autorevenue.com\n \nIMPORTANT: The information contained in this e-mail message is confidential\nand is intended only for the named addressee(s). If the reader of this\ne-mail message is not the intended recipient (or the individual responsible\nfor the delivery of this e-mail message to the intended recipient), please\nbe advised that any re-use, dissemination, distribution or copying of this\ne-mail message is prohibited. If you have received this e-mail message in\nerror, please reply to the sender that you have received this e-mail message\nin error and then delete it. Thank you.\n\n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]] \nSent: Wednesday, March 19, 2008 9:35 AM\nTo: Mark Steben\nCc: 'Chris'; [email protected]\nSubject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL\n\nIn response to \"Mark Steben\" <[email protected]>:\n> \n> I know what Vacuum full and truncate are supposed to do.\n\nThen why do you keep doing the vacuum full? Doesn't really make\nsense as a maintenance strategy.\n\n> My confusion lies in the fact that we empty table C after\n> Function D finishes. There aren't any current data or records\n> To touch on the table. The MVCC leftovers are all purely dead\n> Rows that should be deleted. 
Given this, I thought that \n> Vacuum full and truncate should provide exactly the same result.\n\nI would expect so as well. You may want to mention which version\nof PostgreSQL you are using, because it sounds like a bug. If it's\nan old version, you probably need to upgrade. If it's a recent\nversion and you can reproduce this behaviour, you probably need\nto approach this like a bug report.\n\n> \n> I've attached my original memo to the bottom.\n> \n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Chris\n> Sent: Tuesday, March 18, 2008 9:11 PM\n> To: Mark Steben\n> Cc: [email protected]\n> Subject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL\n> \n> \n> > \n> > So my question is this: Shouldn't VACUUM FULL clean Table C and reclaim\n\n> > all its space?\n> \n> You've got concepts mixed up.\n> \n> TRUNCATE deletes all of the data from a particular table (and works in \n> all dbms's).\n> \n> http://www.postgresql.org/docs/8.3/interactive/sql-truncate.html\n> \n> \n> \n> VACUUM FULL is a postgres-specific thing which does work behind the \n> scenes to clean up MVCC left-overs. It does not touch any current data \n> or records in the table, it's purely behind the scenes work.\n> \n> http://www.postgresql.org/docs/current/interactive/sql-vacuum.html\n> \n> \n> The two have completely different uses and nothing to do with each other \n> what-so-ever.\n> \n> -- \n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> [Mark Steben] \n> \n> Table A houses info on all emails that have ever been created for the\n> purpose of being delivered to our end customers.\n> \n> Big table. About 23 million rows.\n> \n> Table B, the 'holding' table is populated with Table A key information\nvia\n> an after trigger whenever Table A is updated or inserted to.\n> \n> Table C, the 'work' table is populated by function D from table B. It\nis\n> configured exactly like table B.\n> \n> PLPGSQL Function D inserts a predefined number of rows from table B to\n> table C. For purposes of discussion, say 500. \n> \n> Function D, after it does its thing, then deletes the 500 rows it\n> processed from table B, and ALL 500 rows from table C.\n> \n> \n> \n> This entire process, after a sleep period of 10 seconds, repeats itself\nall\n> day.\n> \n> \n> \n> After each fifth iteration of function D, we perform a VACUUM FULL on both\n> tables B and C. 
\n> \n> Takes less than 5 seconds.\n> \n> \n> \n> In terms of transaction processing:\n> \n> Table A is processed by many transactions (some read, some update), \n> \n> Table B is processed by\n> \n> - any transaction updating or inserting to Table A via the after\n> trigger (insert, update)\n> \n> - Function D (insert, update, delete)\n> \n> Table C is processed ONLY by function D (insert, update, delete).\nNothing\n> else touches it;\n> \n> PG_LOCKS table verifies that that this table is totally free of any\n> transaction \n> \n> Between iterations of function D.\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n\n", "msg_date": "Wed, 19 Mar 2008 09:43:09 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" }, { "msg_contents": "In response to \"Mark Steben\" <[email protected]>:\n\n> Bill,\n> Thanks for your quick response.\n> We are at version 8.2.5 - just recently upgraded from 7.4.5.\n> This strategy using truncate was just implemented yesterday.\n> Now I will revisit the vacuum full strategy. Does seem to\n> Be redundant.\n> Is there a procedure to begin reporting a bug? Is there\n> Someone or an email address that I could bring evidence to?\n\nYou're kinda on the right path already. The next thing to do (if nobody\ngets back to you with an explanation or solution) is to put together a\nsimple, reproducible case that others can use to reproduce the behaviour\non systems where they can investigate it. Once you have that, use the\nbug reporting form on the web site to report it as a bug.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 19 Mar 2008 09:55:59 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question on TRUNCATE vs VACUUM FULL" } ]
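A quick way to see the difference being debated in this thread is to watch the on-disk size of a scratch table around each command. The following is only a sketch -- the table name and row count are invented, not taken from the thread -- but on 8.1/8.2 it shows whether VACUUM FULL actually returns the space, and it also illustrates Alvaro's point: if another session still holds a transaction that was open before the deletes, VACUUM FULL cannot remove those dead rows, whereas TRUNCATE simply swaps in an empty file.

CREATE TABLE work_queue (id int, payload text);    -- stand-in for 'table C'

INSERT INTO work_queue
    SELECT g, 'row ' || g FROM generate_series(1, 500) g;
DELETE FROM work_queue;                            -- what function D does each cycle

SELECT pg_relation_size('work_queue');             -- dead rows still occupy pages

VACUUM FULL VERBOSE work_queue;
SELECT pg_relation_size('work_queue');             -- shrinks only if no older open
                                                   -- transaction can still see the rows

TRUNCATE work_queue;
SELECT pg_relation_size('work_queue');             -- always 0 bytes afterwards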
[ { "msg_contents": "I have a big PG server dedicated to serve only SELECT queries.\nThe database is updated permanently using Slony.\n\nThe server has 8 Xeon cores running at 3Ghz, 24GB or RAM and the\nfollowing disk arrays:\n- one RAID1 serving the OS and the pg_xlog\n- one RAID5 serving the database and the tables (base directory)\n- one RAID5 serving the indexes (indexes have an alternate tablespace)\n\nThis server can't take anything, it writes too much.\n\nWhen I try to plug it to a client (sending 20\ntransactions/s) it works fine for like 10 minutes, then start to write\na lot in the pgdata/base directory (where the database files are, not\nthe index).\n\nIt writes so much (3MB/s randomly) that it can't serve the queries anymore, the\nload is huge.\n\nIn order to locate the problem, I stopped Slony (no updates anymore),\nmounted the database and index partitions with the sync option (no FS\nwrite cache), and the problem happens faster, like 2 minutes after\nhaving plugged the client (and the queries) to it.\nI can reproduce the problem at will.\n\nI tried to see if some file size were increasing a lot, and found\nnothing more than the usual DB increase (DB is constantly updated by\nSlony).\n\nWhat does it writes so much in the base directory ? If it's some\ntemporary table or anything, how can I locate it so I can fix the\nproblem ?\n\nHere's the PG memory configuration:\nmax_connections = 128\nshared_buffers = 2GB\ntemp_buffers = 8MB\nwork_mem = 96MB\nmaintenance_work_mem = 4GB\nmax_stack_depth = 7MB\ndefault_statistics_target = 100\neffective_cache_size = 20GB\n\nThanks a lot for your advices !\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Wed, 19 Mar 2008 12:18:16 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "PG writes a lot to the disk" }, { "msg_contents": "In response to \"Laurent Raufaste\" <[email protected]>:\n\n> I have a big PG server dedicated to serve only SELECT queries.\n> The database is updated permanently using Slony.\n> \n> The server has 8 Xeon cores running at 3Ghz, 24GB of RAM and the\n> following disk arrays:\n> - one RAID1 serving the OS and the pg_xlog\n> - one RAID5 serving the database and the tables (base directory)\n> - one RAID5 serving the indexes (indexes have an alternate tablespace)\n> \n> This server can't take anything, it writes too much.\n> \n> When I try to plug it to a client (sending 20\n> transactions/s) it works fine for like 10 minutes, then start to write\n> a lot in the pgdata/base directory (where the database files are, not\n> the index).\n> \n> It writes so much (3MB/s randomly) that it can't serve the queries anymore, the\n> load is huge.\n> \n> In order to locate the problem, I stopped Slony (no updates anymore),\n> mounted the database and index partitions with the sync option (no FS\n> write cache), and the problem happens faster, like 2 minutes after\n> having plugged the client (and the queries) to it.\n> I can reproduce the problem at will.\n> \n> I tried to see if some file size were increasing a lot, and found\n> nothing more than the usual DB increase (DB is constantly updated by\n> Slony).\n> \n> What does it writes so much in the base directory ? If it's some\n> temporary table or anything, how can I locate it so I can fix the\n> problem ?\n\nMy guess (based on the information you provided) is that it's temporary\nsort file usage. If you're using 8.3 there's a config option to log\neach time a sort file is required. 
Anything earlier than 8.3 and you'll\nhave to rely on your OS tools to track it down.\n\nHowever, what makes you so sure it's write activity? I see no evidence\nattached to this email (iostat or similar output) so I'm wondering if\nit's actually read activity.\n\nCheck your log levels, if you turn up PG's logging all the way, it generates\na LOT of write activity ... more than you might imagine under some loads.\n\nGet rid of the RAID 5. RAID 5 sucks. Have you tried running bonnie++ or\nsimilar to see if it's not just a really crappy RAID 5 controller?\n\n> Here's the PG memory configuration:\n> max_connections = 128\n> shared_buffers = 2GB\n\nHave you tuned this based on experience? Current best practices would\nrecommend that you start with ~6G (1/4 RAM) and tune up/down as experience\nwith your workload dictates.\n\n> temp_buffers = 8MB\n> work_mem = 96MB\n\nConsidering you've got 24G of RAM, you might want to try bumping this and\nsee if it helps without pushing the system into swap. If the problem\nis sort file usage, this is the option to tune it.\n\n> maintenance_work_mem = 4GB\n\nI doubt it's hurting anything, but I don't think a value this high will\nactually be used.\n\n> max_stack_depth = 7MB\n> default_statistics_target = 100\n> effective_cache_size = 20GB\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 19 Mar 2008 09:49:22 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "On Wed, 19 Mar 2008, Laurent Raufaste wrote:\n\n> When I try to plug it to a client (sending 20 transactions/s) it works \n> fine for like 10 minutes, then start to write a lot in the pgdata/base \n> directory (where the database files are, not the index). It writes so \n> much (3MB/s randomly) that it can't serve the queries anymore, the load \n> is huge.\n\nYou didn't mention adjusting any of the checkpoint parameters and you also \ndidn't say what version of PostgreSQL you're running. If you've got \nfrozen sections approximately every 5 minutes you should figure out of \nthey line up with the checkpoints on your system. How you do that varies \ndepending on version, I've covered most of what you need to get started \nat:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nIt's also worth noting that RAID5 is known to be awful on write \nperformance with some disk controllers. You didn't mention what \ncontroller you had. 
You should measure your disks to be sure they're \nperforming well at all, it's possible you might be getting performance \nthat's barely better than a single disk. \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm has \nsome ideas on how to do that.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 19 Mar 2008 10:16:19 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "Laurent Raufaste wrote:\n> I have a big PG server dedicated to serve only SELECT queries.\n> The database is updated permanently using Slony.\n> \n> [...] load is huge.\n> \n> In order to locate the problem, I stopped Slony (no updates anymore),\n> mounted the database and index partitions with the sync option (no FS\n> write cache), and the problem happens faster, like 2 minutes after\n> having plugged the client (and the queries) to it.\n> I can reproduce the problem at will.\n> \n> I tried to see if some file size were increasing a lot, and found\n> nothing more than the usual DB increase (DB is constantly updated by\n> Slony).\n> \n> What does it writes so much in the base directory ? If it's some\n> temporary table or anything, how can I locate it so I can fix the\n> problem ?\n\nIt could be a temporary file, although your work_mem setting is already\nquite large.\n\nCan you attach to the rogue backend with \"strace\" (if you have Linux, else\nsomething else maybe) to see what it does and use \"lsof\" to see what files\nit has open?\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 19 Mar 2008 16:19:34 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "2008/3/19, Laurent Raufaste <[email protected]>:\n> What does it writes so much in the base directory ? If it's some\n> temporary table or anything, how can I locate it so I can fix the\n> problem ?\n\nThanks for your help everybody ! I fixed the problem by doing an\nANALYZE to every table (yes I'm so noob ;) ).\n\nThe problem was that the optimiser didn't know how to run the queries\nwell and used millions of tuples for simple queries. For each tuple\nused it was updating some bit in the table file, resulting in a huge\nwriting activity to that file.\n\nAfter the ANALYZE, the optimiser worked smarter, used thousand time\nless tuple for each query, and PG was not required to update so much\nbits in the table files.\n\nThe server is now OK, thanks !\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Thu, 20 Mar 2008 11:48:13 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "Laurent Raufaste wrote:\n> The problem was that the optimiser didn't know how to run the queries\n> well and used millions of tuples for simple queries. 
For each tuple\n> used it was updating some bit in the table file, resulting in a huge\n> writing activity to that file.\n\nGood that you solved your problem.\n\nPostgreSQL doesn't write into the table files when it SELECTs data.\n\nWithout an EXPLAIN plan it is impossible to say what PostgreSQL\nwas doing, but most likely it was building a large hash structure\nor something similar and had to dump data into temporary files.\n\nYours,\nLaurenz Albe\n", "msg_date": "Thu, 20 Mar 2008 15:38:34 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "In response to \"Albe Laurenz\" <[email protected]>:\n\n> Laurent Raufaste wrote:\n> > The problem was that the optimiser didn't know how to run the queries\n> > well and used millions of tuples for simple queries. For each tuple\n> > used it was updating some bit in the table file, resulting in a huge\n> > writing activity to that file.\n> \n> Good that you solved your problem.\n> \n> PostgreSQL doesn't write into the table files when it SELECTs data.\n> \n> Without an EXPLAIN plan it is impossible to say what PostgreSQL\n> was doing, but most likely it was building a large hash structure\n> or something similar and had to dump data into temporary files.\n\nAs a parting comment on this topic ...\n\nBased on his previous messages, he was able to definitively tie\nfilesystem write activity to specific tables, but also claimed that\nhis PG logs showed only SELECT statements being executed.\n\nHowever, the part I wanted to comment on (and got busy yesterday so\nam only getting to it now) is that there's no guarantee that SELECT\nisn't modifying rows.\n\nSELECT nextval('some_seq');\n\nis the simplest example I can imagine of a select that modifies database\ndata, but it's hardly the only one. I suspect that the OP has procedures\nin his SELECTs that are modifying table data, or triggers that do it ON\nSELECT or something similar.\n\nOf course, without any details, this is purely speculation.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 20 Mar 2008 10:58:33 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n> However, the part I wanted to comment on (and got busy yesterday so\n> am only getting to it now) is that there's no guarantee that SELECT\n> isn't modifying rows.\n\nAnother way that SELECT can cause disk writes is if it sets hint bits on\nrecently-committed rows. 
However, if the tables aren't actively being\nmodified any more, you'd expect that sort of activity to settle out pretty\nquickly.\n\nI concur with the temporary-file theory --- it's real hard to see how\nanalyzing the tables would've fixed it otherwise.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Mar 2008 11:20:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk " }, { "msg_contents": "\n\nOn Thu, 20 Mar 2008, Albe Laurenz wrote:\n\n> PostgreSQL doesn't write into the table files when it SELECTs data.\n>\n\nIt could easily be hint bit updates that are set by selects getting \nwritten.\n\nKris Jurka\n\n", "msg_date": "Thu, 20 Mar 2008 16:22:14 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "2008/3/20, Tom Lane <[email protected]>:\n>\n> Another way that SELECT can cause disk writes is if it sets hint bits on\n> recently-committed rows. However, if the tables aren't actively being\n> modified any more, you'd expect that sort of activity to settle out pretty\n> quickly.\n>\n> I concur with the temporary-file theory --- it's real hard to see how\n> analyzing the tables would've fixed it otherwise.\n>\n\nThat's exactly it, I concur with your first explanation because:\n - We have no modification at all on SELECT simply because it's a\nslony replicated table and any update is forbidden (no nextval, no\ntrigger, nothin)\n - While monitoring the SELECT activity, write activity happened\nwithin the tables files only, and without changing their size. No\nother file was created, which eliminates the possibility of using\ntemporary files.\n- Every table was recently commited, as it was a 3 days old replicated\ndatabase from scratch.\n\nThe most problematic query was like:\n\"SELECT * FROM blah WHERE tree <@ A.B.C ;\" (more complicated but it's the idea)\nWe have millions of rows in blah, and blah was created a few hours\nago, with no ANALYZE after the injection of data.\n\nAll this make me think that PG was setting some bit on every row it\nused, which caused this massive write activity (3MB/s) in the table\nfiles. I'm talking about approx. 50 SELECT per second for a single\nserver.\n\nAnd to prove that I made a test. I switched slony off on a server (no\nupdate anymore), synced the disks, got the mtime of every file in the\nbase/ folder, executed hundreds of queries of the form:\n\nSELECT 1\nFROM _comment\nINNER JOIN _article ON _article.id = _comment.parent_id\nWHERE _comment.path <@ '%RANDOM_VALUE%'\n;\n\nDuring the massive activity, I took a new snapshot of the modified\nfiles in the base/ folder.\n\nThe only files which were modified are:\nbase/16387/1819754\nbase/16387/18567\n\n# SELECT relname FROM pg_class WHERE relfilenode IN (1819754, 18567) ;\n relname\n----------\n _comment\n _article\n\n\nSo *yes* table files are modified during SELECT, and it can result in\na lot of write if the queries plan work on a lot of rows.\n\nThansk for your help, I'm relieved =)\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Fri, 21 Mar 2008 11:49:14 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "\"Laurent Raufaste\" <[email protected]> writes:\n\n> All this make me think that PG was setting some bit on every row it\n> used, which caused this massive write activity (3MB/s) in the table\n> files. 
I'm talking about approx. 50 SELECT per second for a single\n> server.\n\nWell that's true it does. But only once per row. So analyze would have set the\nbit on every row. You could do the same thing with something ligter like\n\"select count(*) from <table>\".\n\nTom's thinking was that you would only expect a high update rate for a short\ntime until all those bits were set.\n\nSlony's inserts, updates, and deletes count as updates to the table as well.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Fri, 21 Mar 2008 13:25:43 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "2008/3/21, Gregory Stark <[email protected]>:\n>\n> Well that's true it does. But only once per row. So analyze would have set the\n> bit on every row. You could do the same thing with something ligter like\n> \"select count(*) from <table>\".\n\nWell, the table has been analyzed, I did SELECT, PG write on the\ntable. That's a fact.\n\nBut it's also true (I juste tested it) that every file of a table is\nmodified by a SELECT COUNT.\n>\n> Tom's thinking was that you would only expect a high update rate for a short\n> time until all those bits were set.\n>\n> Slony's inserts, updates, and deletes count as updates to the table as well.\n>\n\nSlony is shut down when I'm testing.\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Fri, 21 Mar 2008 14:45:28 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG writes a lot to the disk" }, { "msg_contents": "In response to \"Laurent Raufaste\" <[email protected]>:\n\n> 2008/3/21, Gregory Stark <[email protected]>:\n> >\n> > Well that's true it does. But only once per row. So analyze would have set the\n> > bit on every row. You could do the same thing with something ligter like\n> > \"select count(*) from <table>\".\n> \n> Well, the table has been analyzed, I did SELECT, PG write on the\n> table. That's a fact.\n> \n> But it's also true (I juste tested it) that every file of a table is\n> modified by a SELECT COUNT.\n\nThe real question (to verify Tom's point) is does a _second_ SELECT count()\nmodify the table again? If so, then something else is going on than\nwhat Tom suggested.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 21 Mar 2008 09:54:37 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG writes a lot to the disk" } ]
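Following Gregory's suggestion, the write burst described in this thread can also be absorbed up front: after the initial Slony subscription (or any bulk load) finishes, read each large table once so the commit hint bits are written in one controlled pass instead of by production SELECTs, then ANALYZE so the planner stops choosing plans that visit millions of rows. A minimal sketch, reusing the two table names from Laurent's test:

-- One full pass per freshly loaded table sets the hint bits;
-- a plain VACUUM (without FULL) has the same side effect.
SELECT count(*) FROM _comment;
SELECT count(*) FROM _article;

-- Then give the planner real statistics, the other half of the fix here.
ANALYZE _comment;
ANALYZE _article;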
[ { "msg_contents": "I posted this question to the admin group. I just realized that I\nshould have sent it here.\n\n \n\nI just read the following article:\n\n \n\nhttp://people.planetpostgresql.org/mha/index.php?/archives/162-PostgreSQ\nL-vs-64-bit-windows.html\n\n \n\nWould there be benefits in running PostgreSQL in a 32 bit mode on a 64\nbit version of XP? My thought is that the OS could access more of the\nmemory for the caching of the files. On my production Linux box, I\ndon't allocate more than a 2 Gig to PostgreSQL. I leave the rest of the\nmemory available for the caching of disk files. So even though\nPostgreSQL would be running in a 32 bit mode it seems like it would\nstill run better on a 64 bit XP box compared to a 32 bit version. This\nof course assumes that one does have a sizeable database and more than 3\nGig of memory.\n\n \n\nIs this a correct assumption?\n\n \n\nWould the performance be relatively similar to that of Linux? \n\n \n\nThanks,\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI posted this question to the admin\ngroup.  I just realized that I should have sent it here.\n \nI just read the following article:\n \nhttp://people.planetpostgresql.org/mha/index.php?/archives/162-PostgreSQL-vs-64-bit-windows.html\n \nWould there be benefits in running\nPostgreSQL in a 32 bit mode on a 64 bit version of XP?   My thought\nis that the OS could access more of the memory for the caching of the\nfiles.  On my production Linux box, I don’t allocate more than a 2\nGig to PostgreSQL.  I leave the rest of the memory available for the\ncaching of disk files.  So even though PostgreSQL would be running in a 32\nbit mode it seems like it would still run better on a 64 bit XP box compared to\na 32 bit version.  This of course assumes that one does have a sizeable\ndatabase and more than 3 Gig of memory.\n \nIs this a correct assumption?\n \nWould the performance be relatively similar\nto that of Linux? \n \nThanks,\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Wed, 19 Mar 2008 14:00:47 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Windows XP 64 bit" } ]
[ { "msg_contents": "I just found out that my company is planning on migrating my databases from\nour current ISCSI storage solution to NetApps connected via NFS. I knew\nabout the NetApp migration, but always assumed (and shame on me) that I\nwould have direct attachments to the servers.\n\nWell, I am very uncomfortable with the NFS attachement. I have been\nsearching the archives and see a lot of anocdotal stories of NFS horrors,\nbut so far, nothing but general stories and statements.\n\nI need to know if anyone out there is/has run their PostgreSQL on NetApp\narrays via NFS. My particular situation is RH Linux 4 servers running\nPostgresql 8.1. I need to provide our Operations manager with specific\nreasons why we should not run PostgreSQL over NetApp NFS. Otherwise, they\nwill go forward with this.\n\nIf you have any real life good or bad stories, I'd love to hear it. Given\nthe NetApp arrays supposedly being very good NFS platforms, overall, is this\na recommended way to run PostgreSQL, or is it recommended to not run this\nway.\n\n\nFeel free to reply directly if you are not comfortable talking to this on\nthe list, but list replies would be preferred so others in my shoes can find\nthis information.\n\nThanks,\n\nChris\n-- \nCome see how to SAVE money on fuel, decrease harmful emissions, and even\nmake MONEY. Visit http://colafuelguy.mybpi.com and join the revolution!\n\nI just found out that my company is planning on migrating my databases from our current ISCSI storage solution to NetApps connected via NFS.  I knew about the NetApp migration, but always assumed (and shame on me) that I would have direct attachments to the servers.\nWell, I am very uncomfortable with the NFS attachement.  I have been searching the archives and see a lot of anocdotal stories of NFS horrors, but so far, nothing but general stories and statements.I need to know if anyone out there is/has run their PostgreSQL on NetApp arrays via NFS.  My particular situation is RH Linux 4 servers running Postgresql 8.1.  I need to provide our Operations manager with specific reasons why we should not run PostgreSQL over NetApp NFS.  Otherwise, they will go forward with this.\nIf you have any real life good or bad stories, I'd love to hear it.  Given the NetApp arrays supposedly being very good NFS platforms, overall, is this a recommended way to run PostgreSQL, or is it recommended to not run this way.\nFeel free to reply directly if you are not comfortable talking to this on the list, but list replies would be preferred so others in my shoes can find this information.Thanks,Chris-- \nCome see how to SAVE money on fuel, decrease harmful emissions, and even make MONEY. Visit http://colafuelguy.mybpi.com and join the revolution!", "msg_date": "Thu, 20 Mar 2008 15:32:47 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL NetApp and NFS" }, { "msg_contents": "Chris Hoover wrote:\n> If you have any real life good or bad stories, I'd love to hear it. Given\n> the NetApp arrays supposedly being very good NFS platforms, overall, is this\n> a recommended way to run PostgreSQL, or is it recommended to not run this\n> way.\n\nWe do have an NFS section in our documentation at the bottom of this\npage:\n\n\thttp://www.postgresql.org/docs/8.3/static/creating-cluster.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 20 Mar 2008 16:09:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "> I need to know if anyone out there is/has run their PostgreSQL on NetApp\narrays via NFS. My particular situation is RH Linux 4 servers running\nPostgresql 8.1. I need \n> to provide our Operations manager with specific reasons why we should not\nrun PostgreSQL over NetApp NFS. Otherwise, they will go forward with this.\n\n > If you have any real life good or bad stories, I'd love to hear it.\nGiven the NetApp arrays supposedly being very good NFS platforms, overall,\nis this a recommended way \n> to run PostgreSQL, or is it recommended to not run this way. \n \nWe have been running Postgres over NFS to a NetApp since 7.1 and we have\nnothing but good things to say. We have 75 databases in 3 clusters all\nconnected to one netapp. We don't store a huge amount of data, currently\n~43Gig, but it is constantly updated. \n \nWe keep the pgsql/data directory on the netapp. If one of our db servers\never have a problem, we can just swap out the box, mount the drive and\nrestart postgres. \n \nWe like our support we get from them, the only issue we ever have is having\na drives fail which they get replacements to us promptly. Our NetApp has an\nuptime currently over 2 years.\n \nBy the way, I though NetApp boxes came with an iSCSI license. NetApp\ndownplayed the iSCSI with us because they said you cannot share drives\nbetween servers, but for postgres you don't want that anyway. It could\nhave also been that the NetApp is better tuned for NFS throughput and they\nwant to steer the user toward that.\n \nIf you want more specifics, feel free to ask. \n \nWoody\niGLASS Networks \n \n\n\n\n\n\n > I need to know if anyone out there is/has \nrun their PostgreSQL on NetApp arrays via NFS.  My particular situation is \nRH Linux 4 servers running Postgresql 8.1.  I need  \n> to provide our Operations manager \nwith specific reasons why we should not run PostgreSQL over NetApp NFS.  \nOtherwise, they will go forward with this. > If \nyou have any real life good or bad stories, I'd love to hear it.  Given the \nNetApp arrays supposedly being very good NFS platforms, overall, is this a \nrecommended way  \n> to run PostgreSQL, or is it \nrecommended to not run this way. \n \nWe have been running Postgres over NFS to a NetApp \nsince 7.1 and we have nothing \nbut good things to say.  We have 75 databases in 3 clusters all connected \nto one netapp.  We don't store a huge amount of data, currently ~43Gig, but \nit is constantly updated.  \n \nWe keep the pgsql/data directory on the netapp.  If \none of our db servers ever have a problem, we can just swap out the box, mount \nthe drive and restart postgres. \n \nWe like our support we get from them, the only issue we \never have is having a drives fail which they get replacements to us \npromptly.  Our NetApp has an uptime currently over 2 \nyears.\n \nBy the way, I though NetApp boxes came with an iSCSI \nlicense.  NetApp downplayed the iSCSI with us because they said you \ncannot share drives between servers, but for postgres you don't want that \nanyway.   It could have \nalso been that the NetApp is better tuned for NFS throughput and they want to \nsteer the user toward that.\n \nIf you want more specifics, feel free to \nask. 
\n \nWoody\niGLASS Networks", "msg_date": "Thu, 20 Mar 2008 16:45:14 -0400", "msg_from": "\"Woody Woodring\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "My experience postgresql work good on NFS. Of course, use NFS over TCP, and\nuse noac if you want to protect your database even more (my experience is\nNFS client caching doesn't lead to an irrecoverable database however)\n\nI've encountered problems with RHEL4 as a database server and a client of a\nNetapp filer, due to a bug in the (redhat nfs client)\nPostgresql uses BSD read/write semantics. The BSD semantics mean an IO call\n(either read or write) is atomic.\nLinux uses system V read/write semantics. The system V semantics mean an IO\nis NOT atomic and can be interrupted.\nA read call got interrupted (due to the bug in the nfs client), which meant\nthe IO call kept waiting until infinity.\nIt even caused all other IO done against the inode to be waiting, leading to\na situation where the server needed a reboot to be able to function\npropertly.\n\nfrits\n\nOn Thu, Mar 20, 2008 at 9:09 PM, Bruce Momjian <[email protected]> wrote:\n\n> Chris Hoover wrote:\n> > If you have any real life good or bad stories, I'd love to hear it.\n> Given\n> > the NetApp arrays supposedly being very good NFS platforms, overall, is\n> this\n> > a recommended way to run PostgreSQL, or is it recommended to not run\n> this\n> > way.\n>\n> We do have an NFS section in our documentation at the bottom of this\n> page:\n>\n> http://www.postgresql.org/docs/8.3/static/creating-cluster.html\n>\n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://postgres.enterprisedb.com\n>\n> + If your life is a hard drive, Christ can be your backup. +\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMy experience postgresql work good on NFS. Of course, use NFS over TCP, and use noac if you want to protect your database even more (my experience is NFS client caching doesn't lead to an irrecoverable database however)\nI've encountered problems with RHEL4 as a database server and a client of a Netapp filer, due to a bug in the (redhat nfs client) Postgresql uses BSD read/write semantics. The BSD semantics mean an IO call (either read or write) is atomic.\nLinux uses system V read/write semantics. The system V semantics mean an IO is NOT atomic and can be interrupted. A read call got interrupted (due to the bug in the nfs client), which meant the IO call kept waiting until infinity. \nIt even caused all other IO done against the inode to be waiting, leading to a situation where the server needed a reboot to be able to function propertly.fritsOn Thu, Mar 20, 2008 at 9:09 PM, Bruce Momjian <[email protected]> wrote:\nChris Hoover wrote:\n> If you have any real life good or bad stories, I'd love to hear it.  Given\n> the NetApp arrays supposedly being very good NFS platforms, overall, is this\n> a recommended way to run PostgreSQL, or is it recommended to not run this\n> way.\n\nWe do have an NFS section in our documentation at the bottom of this\npage:\n\n        http://www.postgresql.org/docs/8.3/static/creating-cluster.html\n\n--\n  Bruce Momjian  <[email protected]>        http://momjian.us\n  EnterpriseDB                             http://postgres.enterprisedb.com\n\n  + If your life is a hard drive, Christ can be your backup. 
+\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 20 Mar 2008 22:23:30 +0100", "msg_from": "\"Frits Hoogland\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Chris Hoover wrote:\n>> If you have any real life good or bad stories, I'd love to hear it. Given\n>> the NetApp arrays supposedly being very good NFS platforms, overall, is this\n>> a recommended way to run PostgreSQL, or is it recommended to not run this\n>> way.\n\n> We do have an NFS section in our documentation at the bottom of this\n> page:\n> \thttp://www.postgresql.org/docs/8.3/static/creating-cluster.html\n\nAside from what's said there, I'd note that it's a seriously bad idea\nto use a \"soft mount\" or any arrangement wherein it's possible for\nPostgres to be running while the NFS disk is not mounted. Joe Conway\ncan still show you the scars from learning that lesson, I believe.\nSee the archives...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Mar 2008 17:52:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Chris Hoover wrote:\n> >> If you have any real life good or bad stories, I'd love to hear it. Given\n> >> the NetApp arrays supposedly being very good NFS platforms, overall, is this\n> >> a recommended way to run PostgreSQL, or is it recommended to not run this\n> >> way.\n> \n> > We do have an NFS section in our documentation at the bottom of this\n> > page:\n> > \thttp://www.postgresql.org/docs/8.3/static/creating-cluster.html\n> \n> Aside from what's said there, I'd note that it's a seriously bad idea\n> to use a \"soft mount\" or any arrangement wherein it's possible for\n> Postgres to be running while the NFS disk is not mounted. Joe Conway\n> can still show you the scars from learning that lesson, I believe.\n> See the archives...\n\nDo the docs need updating for this?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 20 Mar 2008 23:30:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> Aside from what's said there, I'd note that it's a seriously bad idea\n>> to use a \"soft mount\" or any arrangement wherein it's possible for\n>> Postgres to be running while the NFS disk is not mounted.\n\n> Do the docs need updating for this?\n\nWouldn't be a bad idea to mention it, if we're going to have a section\npointing out NFS risks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Mar 2008 23:41:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS " }, { "msg_contents": "On Thu, Mar 20, 2008 at 8:32 PM, Chris Hoover <[email protected]> wrote:\n> I just found out that my company is planning on migrating my databases from\n> our current ISCSI storage solution to NetApps connected via NFS. 
I knew\n> about the NetApp migration, but always assumed (and shame on me) that I\n> would have direct attachments to the servers.\n\nIt is also possible to present block devices from NetApp over iSCSI or FC\n(I am not sure about licensing model though). You get all the goodies\nlike thin provisioning (only non-zero blocks are allocated), snapshots and\nall, but you see it as a block device. Works fine.\n\nIt is also worth to mention that NetApp utilizes somewhat \"copy on write\"\nwrite strategy -- so whenever you modify a block, new version of the block\nis written on its WAFL filesystem. In practical terms it is quite resilient to\nrandom writes (and that read performance is not stellar ;)).\n\nI didn't try putting database on NFS mount directly, but I know NetApp\nadvertises that such setups are being used with Oracle database\n(and allegedly Oracle website's databases are on such setup).\nSo I would feel quite safe with such a setup.\n\nOh, and don't forget to set rsize and wsize to 8K-32K (test and write here\nwhat gives best performance!).\n\n Regards,\n Dawid\n", "msg_date": "Fri, 21 Mar 2008 10:34:09 +0100", "msg_from": "\"Dawid Kuroczko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "\"Dawid Kuroczko\" <[email protected]> writes:\n\n> It is also possible to present block devices from NetApp over iSCSI or FC\n> (I am not sure about licensing model though). You get all the goodies\n> like thin provisioning (only non-zero blocks are allocated), snapshots and\n> all, but you see it as a block device. Works fine.\n\nNote that Postgres doesn't expect to get \"out of space\" errors on writes\nspecifically because it pre-allocates blocks. So this \"thin provisioning\"\nthing sounds kind of dangerous.\n\n> It is also worth to mention that NetApp utilizes somewhat \"copy on write\"\n> write strategy -- so whenever you modify a block, new version of the block\n> is written on its WAFL filesystem. In practical terms it is quite resilient to\n> random writes (and that read performance is not stellar ;)).\n\nAgain, Postgres goes to some effort to keep its reads sequential. So this\nsounds like it destroys that feature. Random writes don't matter so much\nbecause Postgres has its WAL which it writes sequentially. Writes to data\nfiles aren't in the critical path and can finish after the transaction is\ncommitted.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Fri, 21 Mar 2008 13:22:26 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Aside from what's said there, I'd note that it's a seriously bad idea\n> >> to use a \"soft mount\" or any arrangement wherein it's possible for\n> >> Postgres to be running while the NFS disk is not mounted.\n> \n> > Do the docs need updating for this?\n> \n> Wouldn't be a bad idea to mention it, if we're going to have a section\n> pointing out NFS risks.\n\nDocumentation mention added.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+", "msg_date": "Fri, 21 Mar 2008 10:23:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" }, { "msg_contents": "Well, we're not running PGSQL on a Netapp over NFS, but a DB2 Database.\nBut nevertheless, it runs quite well. NFS is not a bad choice for your\ndatabase, the big memory buffer that allocates the raid6 blocks makes it\nall very quick, like you're working directly on a 1+ TB ramdisk.\n\nOne important thing to keep in mind, is to make sure the NFS protocol\nused is at least V3 and to check your locking options.\nThis made our DB2 crash, because when configured wrong, the file locking\nmechanism on an NFS mount behaves differently than that of the local\nstorage. These parameters can be forced from the client side (fstab).\n\nBut still, with our 100+ GB OLTP database, I'm still quite fond of our\nnetapp.\n\n-R-\n\nChris Hoover wrote:\n> I just found out that my company is planning on migrating my databases\n> from our current ISCSI storage solution to NetApps connected via NFS. I\n> knew about the NetApp migration, but always assumed (and shame on me)\n> that I would have direct attachments to the servers.\n\n> \n> Chris\n> -- \n> Come see how to SAVE money on fuel, decrease harmful emissions, and even\n> make MONEY. Visit http://colafuelguy.mybpi.com and join the revolution!\n\n-- \nEasyflex diensten b.v.\nAcaciastraat 16\n4921 MA MADE\nT: 0162 - 690410\nF: 0162 - 690419\nE: [email protected]\nW: http://www.easyflex.nl\n", "msg_date": "Tue, 25 Mar 2008 14:33:23 +0100", "msg_from": "Jurgen Haan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NetApp and NFS" } ]
[ { "msg_contents": "Hi All,\n\nI want some clarification in the following,\n\nIn a database which we are having we have nearly 100 tables, and in 75% of\nthe tables we have 6 columns ( INT ) as standard columns. What is standard\ncolumns, if you create a table in this database you should have some default\n6 columns in there they should maintain\n 1. who is the owner of that read\n 2. when it is added\n 3. who is updating the record\n 4. when it is updated .... and other columns....\n\nBut many of the users are not doing anything with those columns, they are\nall empty always....\n\nSay in that 75 % of tables, 60 % table contains nearly 1000 records\nalways...\n\nand other 10% of tables contains less than 10000 records\n\nand 5% of table contain records nealy 5 lakh.....\n\nWhat i need is???\n\nIf you drop those columns we will gain any performance or not.....\nDefinitely i know having that columns is not useful, but i want some\nclarification that having empty columns will make performance degradation or\nnot....\n\n\nAm using Debian, and having 1 GB RAM...\nI want this informations in both postgres 7.4 and 8.1 ( don't ask me to use\n8.3 please, i want info in 7.4 and 8.1 )...\n\n\nRegards\nSathiyaMoorthy\n\nHi All,I want some clarification in the following, In a database which we are having we have nearly 100 tables, and in 75% of the tables we have 6 columns ( INT ) as standard columns. What is standard columns, if you create a table in this database you should have some default 6 columns in there they should maintain \n    1. who is the owner of that read    2. when it is added    3. who is updating the record    4. when it is updated .... and other columns....But many of the users are not doing anything with those columns, they are all empty always....\nSay in that 75 % of tables, 60 % table contains nearly 1000 records always...and other 10% of tables contains less than 10000 recordsand 5% of table contain records nealy 5 lakh.....What i need is???\nIf you drop those columns we will gain any performance or not..... Definitely i know having that columns is not useful, but i want some clarification that having empty columns will make performance degradation or not....\nAm using Debian, and having 1 GB RAM...I want this informations in both postgres 7.4 and 8.1 ( don't ask me to use 8.3 please, i want info in 7.4 and 8.1 )...RegardsSathiyaMoorthy", "msg_date": "Sat, 22 Mar 2008 09:40:02 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Having MANY MANY empty columns in database" }, { "msg_contents": "\n> In a database which we are having we have nearly 100 tables, and in 75% of\n> the tables we have 6 columns ( INT ) as standard columns. What is standard\n> columns, if you create a table in this database you should have some default\n> 6 columns in there they should maintain\n> 1. who is the owner of that read\n> 2. when it is added\n> 3. who is updating the record\n> 4. when it is updated .... and other columns....\n\nOK, so your tables all have the same fields (columns), as if you used \nCREATE TABLE new_table ( LIKE some_template_table ) ?\n\n> But many of the users are not doing anything with those columns, they are\n> all empty always....\n\nmeaning that they contain NULL values in that field for every record?\n\n> If you drop those columns we will gain any performance or not.....\n\nThe best way to find that out is to test it. 
I'd be surprised if it \ndidn't make *some* performance difference, but the question is whether \nit will be enough to be worth caring about.\n\nHowever, I recall hearing that PostgreSQL keeps a null bitmap and \ndoesn't use any storage for null fields. If that is correct then you \nprobably won't be paying much of a price in disk I/O, but there might \nstill be other costs.\n\nI can't help wondering why you have all those useless columns in the \nfirst place, and why you have so many identically structured tables.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 22 Mar 2008 17:24:17 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having MANY MANY empty columns in database" }, { "msg_contents": ">\n> OK, so your tables all have the same fields (columns), as if you used\n> CREATE TABLE new_table ( LIKE some_template_table ) ?\n\n\nIt will contain some other unique columns for each table.\n\n> meaning that they contain NULL values in that field for every record?\n\n\nwhat is the value it may contain i don't know ?? we are not filling any\nvalue !!\n\n>\n> > If you drop those columns we will gain any performance or not.....\n>\n I need to test... HOW to test the overall performance of database..\n\n> However, I recall hearing that PostgreSQL keeps a null bitmap and doesn't\n> use any storage for null fields. If that is correct then you probably won't\n> be paying much of a price in disk I/O, but there might still be other costs.\n>\nif it is sure that it will not make disk I/O then it is ok\n\n>\n> I can't help wondering why you have all those useless columns in the\n> first place, and why you have so many identically structured tables.\n>\nthese are not useless columns... it should be used to update the owner of\nthe record, updated time, created and other stuffs, but nobody is using now.\n\n\n>\n> --\n> Craig Ringer\n>\n\nOK, so your tables all have the same fields (columns), as if you used\n\nCREATE TABLE new_table ( LIKE some_template_table ) ?It will contain some other unique columns for each table. \nmeaning that they contain NULL values in that field for every record?\n what is the  value it may contain i don't know ?? we are not filling any value !!\n\n> If you drop those columns we will gain any performance or not..... I need to test... HOW to test the overall performance of database..\n\nHowever, I recall hearing that PostgreSQL keeps a null bitmap and doesn't use any storage for null fields. If that is correct then you probably won't be paying much of a price in disk I/O, but there might still be other costs.\nif it is sure that it will not make disk I/O then it is ok \n\nI can't help wondering why you have all those useless columns in the\nfirst place, and why you have so many identically structured tables.\nthese are not useless columns... it should be used to update the owner of the record, updated time, created and other stuffs, but nobody is using now. \n\n--\nCraig Ringer", "msg_date": "Sat, 22 Mar 2008 14:38:25 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Having MANY MANY empty columns in database" } ]
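Craig's advice to test it is easy to act on: build two otherwise identical tables, one with the six extra always-NULL integer columns and one without, load the same rows into both, and compare their on-disk sizes. The sketch below assumes 8.1 (7.4 has neither generate_series() nor a built-in pg_relation_size(), so there the rows would have to be generated differently and the file sizes checked on disk); all column names and the row count are invented for the test:

CREATE TABLE with_standard_cols (
    id int, data text,
    owner_id int, created_by int, created_on int,
    updated_by int, updated_on int, status_id int    -- the six unused columns
);
CREATE TABLE without_standard_cols (id int, data text);

INSERT INTO with_standard_cols (id, data)
    SELECT g, 'row ' || g FROM generate_series(1, 100000) g;
INSERT INTO without_standard_cols
    SELECT g, 'row ' || g FROM generate_series(1, 100000) g;

SELECT pg_relation_size('with_standard_cols')    AS with_null_cols,
       pg_relation_size('without_standard_cols') AS without_null_cols;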
[ { "msg_contents": "Hi all,\nmaybe it�s a naive question but I was wondering if there is any \ndifference, from a performance point of view, between a view and a \nfunction performing the same task, something like:\n\nCREATE VIEW foo AS �;\nCREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$\n\tSELECT * FROM foo WHERE fooid = $1;\n$$ LANGUAGE SQL;\n\n\nThank you\n--\nGiorgio Valoti\n\n\n\n\n", "msg_date": "Sat, 22 Mar 2008 15:01:50 +0100", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Views and functions returning sets of records" }, { "msg_contents": "Giorgio Valoti <[email protected]> writes:\n> maybe it�s a naive question but I was wondering if there is any \n> difference, from a performance point of view, between a view and a \n> function performing the same task,\n\nYes. Usually the view will win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Mar 2008 12:33:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records " }, { "msg_contents": "Giorgio Valoti <[email protected]> schrieb:\n\n> Hi all,\n> maybe it?s a naive question but I was wondering if there is any \n> difference, from a performance point of view, between a view and a \n> function performing the same task, something like:\n> \n> CREATE VIEW foo AS ?;\n> CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$\n> \tSELECT * FROM foo WHERE fooid = $1;\n> $$ LANGUAGE SQL;\n\nYes. The planner can't sometimes optimze the query, a simple example:\n\nI have ha table called 'words', it contains a few thousand simple words.\n\ntest=# \\d words\n Table \"public.words\"\n Column | Type | Modifiers\n--------+------+-----------\n w | text |\nIndexes:\n \"idx_words\" btree (lower(w) varchar_pattern_ops)\n\n\nNow i'm searching and the index is in use:\n\ntest=# explain analyse select * from words where lower(w) like lower('foo');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_words on words (cost=0.00..6.01 rows=1 width=12) (actual time=0.065..0.065 rows=0 loops=1)\n Index Cond: (lower(w) ~=~ 'foo'::character varying)\n Filter: (lower(w) ~~ 'foo'::text)\n Total runtime: 0.187 ms\n(4 rows)\n\n\n\n\n\nNow i'm writung a function for that:\n\ntest=*# create or replace function get_words(text) returns setof record as $$select * from words where lower(w) like lower($1); $$ language sql;\nCREATE FUNCTION\nTime: 4.413 ms\n\nThe query inside the function body is the same as above, let's test:\n\ntest=*# explain analyse select * from get_words('foo') as (w text);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Function Scan on get_words (cost=0.00..12.50 rows=1000 width=32) (actual time=213.947..213.947 rows=0 loops=1)\n Total runtime: 214.031 ms\n(2 rows)\n\n\nAs you can see, a slow seq. scan are used now. Because the planner don't\nknow the argument and don't know if he can use the index or not. 
In my\ncase the planner created a bad plan.\n\n\nBut a VIEW is not a function, it's only a RULE for SELECT on a virtual table:\n\ntest=*# create view view_words as select * from words;\nCREATE VIEW\nTime: 277.411 ms\ntest=*# explain analyse select * from view_words where lower(w) like lower('foo');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_words on words (cost=0.00..6.01 rows=1 width=12) (actual time=0.044..0.044 rows=0 loops=1)\n Index Cond: (lower(w) ~=~ 'foo'::character varying)\n Filter: (lower(w) ~~ 'foo'::text)\n Total runtime: 0.259 ms\n(4 rows)\n\n\nIt's the same plan as above for the source table.\n\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 22 Mar 2008 17:35:47 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records" }, { "msg_contents": "Tom Lane <[email protected]> schrieb:\n\n> Giorgio Valoti <[email protected]> writes:\n> > maybe it�s a naive question but I was wondering if there is any \n> > difference, from a performance point of view, between a view and a \n> > function performing the same task,\n> \n> Yes. Usually the view will win.\n\n*smile*\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 22 Mar 2008 17:55:51 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records" }, { "msg_contents": "\nOn 22/mar/08, at 17:55, Andreas Kretschmer wrote:\n\n> Tom Lane <[email protected]> schrieb:\n>\n>> Giorgio Valoti <[email protected]> writes:\n>>> maybe it�s a naive question but I was wondering if there is any\n>>> difference, from a performance point of view, between a view and a\n>>> function performing the same task,\n>>\n>> Yes. Usually the view will win.\n>\n> *smile*\n\n:-(\nI was thinking about using using functions as the main way to \ninteract with the database from an external application. The \n(supposed) rationale was to simplify the application code: you only \nhave to worry about in and out function parameters.\nAre there any way to pass some hints to the planner? For example, \ncould the IMMUTABLE/STABLE/VOLATILE modifiers be of some help?\n\n\nThank you\n--\nGiorgio Valoti\n\n\n\n\n", "msg_date": "Sun, 23 Mar 2008 10:37:12 +0100", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Views and functions returning sets of records" }, { "msg_contents": "Giorgio Valoti <[email protected]> writes:\n> Are there any way to pass some hints to the planner? For example, \n> could the IMMUTABLE/STABLE/VOLATILE modifiers be of some help?\n\nThose don't really do anything for set-returning functions at the\nmoment.\n\nAs of 8.3 there is a ROWS attribute for SRFs that can help with one\nof the worst problems, namely that the planner has no idea how many\nrows a SRF might return. 
It's simplistic (just an integer constant\nestimate) but better than no control at all.\n\nAs of CVS HEAD (8.4 to be) there's a capability in the planner to\n\"inline\" SRFs that are single SELECTs in SQL language, which should\npretty much eliminate the performance differential against a comparable\nview. Unfortunately 8.4 release is at least a year away, but just\nso you know. (I suppose if you were desperate enough to run a privately\nmodified copy, that patch should drop into 8.3 easily enough.) IIRC\nthe restrictions for this to happen are\n\t* single SELECT\n\t* function declared to return set\n\t* function NOT declared strict or volatile\n\t* function NOT declared SECURITY DEFINER or given any\n\t local parameter settings\nThe latter restrictions are needed so that inlining doesn't change\nthe semantics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Mar 2008 11:28:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records " }, { "msg_contents": "Can we write retrieving only 10 records from 4000 records\nplz tell me asap\n\nOn Mar 23, 8:28 pm, [email protected] (Tom Lane) wrote:\n> Giorgio Valoti <[email protected]> writes:\n> > Are there any way to pass some hints to the planner? For example,\n> > could the IMMUTABLE/STABLE/VOLATILE modifiers be of some help?\n>\n> Those don't really do anything for set-returning functions at the\n> moment.\n>\n> As of 8.3 there is a ROWS attribute for SRFs that can help with one\n> of the worst problems, namely that the planner has no idea how many\n> rows a SRF might return. It's simplistic (just an integer constant\n> estimate) but better than no control at all.\n>\n> As of CVS HEAD (8.4 to be) there's a capability in the planner to\n> \"inline\" SRFs that are single SELECTs in SQL language, which should\n> pretty much eliminate the performance differential against a comparable\n> view. Unfortunately 8.4 release is at least a year away, but just\n> so you know. (I suppose if you were desperate enough to run a privately\n> modified copy, that patch should drop into 8.3 easily enough.) IIRC\n> the restrictions for this to happen are\n> * single SELECT\n> * function declared to return set\n> * function NOT declared strict or volatile\n> * function NOT declared SECURITY DEFINER or given any\n> local parameter settings\n> The latter restrictions are needed so that inlining doesn't change\n> the semantics.\n>\n> regards, tom lane\n>\n> -\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 8 Apr 2008 04:54:19 -0700 (PDT)", "msg_from": "Rajashri Tupe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records" }, { "msg_contents": "Rajashri Tupe wrote:\n> Can we write retrieving only 10 records from 4000 records\n> plz tell me asap\n> \nIs this question connected to the previous discussion?\n\nCan you be more specific with your question?\n\nWithout having any idea what you're talking about, I'll direct you to \nthe \"LIMIT\" clause of the SELECT statement in case that's what you mean.\n\nhttp://www.postgresql.org/docs/current/static/queries-limit.html\n\n--\nCraig Ringer\n", "msg_date": "Fri, 11 Apr 2008 15:23:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views and functions returning sets of records" } ]
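The ROWS attribute and the inlining conditions described above can be combined in one SQL-language function. A sketch, reusing the foo/getfoo names from the start of this thread: ROWS needs 8.3 or later, automatic inlining needs 8.4 or later, and the row estimate is only illustrative.

-- Hedged sketch; assumes the foo relation from earlier in the thread.
CREATE OR REPLACE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
    SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE sql
   STABLE      -- not VOLATILE and not STRICT, so 8.4 can inline it
   ROWS 10;    -- 8.3+: rough row-count hint for the planner

-- Used as a table source; once inlined this should plan much like the view.
SELECT * FROM getfoo(42);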
[ { "msg_contents": "Look like the mysql people found a subquery that postgresql doesn't \nhandle as good as possible:\n\n http://s.petrunia.net/blog/\n\nIs there some deeper issue here that I fail to see or is it simply that \nit hasn't been implemented but is fairly straigt forward? In the link \nabove they do state that it isn't a very common case anyway.\n\n/Dennis\n", "msg_date": "Mon, 24 Mar 2008 08:21:32 +0100", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": true, "msg_subject": "Turn correlated in subquery into join" }, { "msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> Look like the mysql people found a subquery that postgresql doesn't \n> handle as good as possible:\n\n> http://s.petrunia.net/blog/\n\n> Is there some deeper issue here that I fail to see or is it simply that \n> it hasn't been implemented but is fairly straigt forward?\n\nI don't think it's straightforward: you'd need to do some careful\nanalysis to prove under what conditions such a transformation would be\nsafe. If the answer is \"always, it doesn't matter what the correlation\ncondition is\" then the actual implementation would probably not be\ntremendously difficult. If there are restrictions then checking whether\nthe restrictions hold could be interesting ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Mar 2008 12:30:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Turn correlated in subquery into join " } ]
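For reference, the kind of rewrite under discussion looks roughly like the sketch below. The table and column names are invented (this is not the blog post's exact example), and as noted in the thread the transformation is only safe under conditions that need careful analysis.

-- Correlated form: per-row subquery.
SELECT o.*
FROM   orders o
WHERE  o.amount > (SELECT avg(i.amount)
                   FROM   orders i
                   WHERE  i.customer_id = o.customer_id);

-- Decorrelated form: compute the aggregate once per group, then join.
SELECT o.*
FROM   orders o
JOIN  (SELECT customer_id, avg(amount) AS avg_amount
       FROM   orders
       GROUP  BY customer_id) a ON a.customer_id = o.customer_id
WHERE  o.amount > a.avg_amount;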
[ { "msg_contents": "Hi,\n\nI'm uning postgres 8.1 at P4 2.8GHz with 2GB RAM.\n(web server + database on the same server)\n\nPlease, how long takes your connectiong to postgres?\n\n$starttimer=time()+microtime();\n\n$dbconn = pg_connect(\"host=localhost port=5432 dbname=xxx user=xxx password=xxx\") \n or die(\"Couldn't Connect\".pg_last_error());\t\n\n$stoptimer = time()+microtime(); \necho \"Generated in \".round($stoptimer-$starttimer,4).\" s\";\n\nIt takes more then 0.05s :(\n\nOnly this function reduce server speed max to 20request per second.\n\nThan you for any Help!\n\nBest regards.\n\n", "msg_date": "Mon, 24 Mar 2008 08:40:15 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "slow pg_connect()" }, { "msg_contents": "[email protected] wrote:\n> It takes more then 0.05s :(\n>\n> Only this function reduce server speed max to 20request per second.\n> \nIf you need that sort of frequent database access, you might want to \nlook into:\n\n- Doing more work in each connection and reducing the number of \nconnections required;\n- Using multiple connections in parallel;\n- Pooling connections so you don't need to create a new one for every job;\n- Using a more efficient database connector and/or language;\n- Dispatching requests to a persistent database access provider that's \nalways connected\n\nHowever, your connections are indeed taking a long time. I wrote a \ntrivial test using psycopg for Python and found that the following script:\n\n#!/usr/bin/env python\nimport psycopg\nconn = pyscopg.connect(\"dbname=testdb\")\n\ngenerally took 0.035 seconds (350ms) to run on my workstation - \nincluding OS process creation, Python interpreter startup, database \ninterface loading, connection, disconnection, and process termination.\n\nA quick timing test shows that the connection/disconnection can be \nperformed 100 times in 1.2 seconds:\n\nimport psycopg\nimport timeit\nprint timeit.Timer('conn = psycopg.connect(\"dbname=craig\")', 'import \npsycopg').timeit(number=100);\n\n... and this is still with an interpreted language. I wouldn't be too \nsurprised if much better again could be achieved with the C/C++ APIs, \nthough I don't currently feel the desire to write a test for that.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 24 Mar 2008 16:58:16 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow pg_connect()" }, { "msg_contents": "Craig Ringer wrote:\n> [email protected] wrote:\n>> It takes more then 0.05s :(\n>>\n>> Only this function reduce server speed max to 20request per second.\n>> \n> If you need that sort of frequent database access, you might want to \n> look into:\n>\n> - Doing more work in each connection and reducing the number of \n> connections required;\n> - Using multiple connections in parallel;\n> - Pooling connections so you don't need to create a new one for every \n> job;\n> - Using a more efficient database connector and/or language;\n> - Dispatching requests to a persistent database access provider that's \n> always connected\n>\nOh, I missed:\n\nUse a UNIX domain socket rather than a TCP/IP local socket. 
Database \ninterfaces that support UNIX sockets (like psycopg) will normally do \nthis if you omit the host argument entirely.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 24 Mar 2008 17:04:23 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow pg_connect()" }, { "msg_contents": "[email protected] wrote:\n> Hi,\n> \n> I'm uning postgres 8.1 at P4 2.8GHz with 2GB RAM.\n> (web server + database on the same server)\n> \n> Please, how long takes your connectiong to postgres?\n> \n> It takes more then 0.05s :(\n> \n> Only this function reduce server speed max to 20request per second.\n\n\nI tried running the script a few times, and got substantially lower \nstart up times than you are getting. I'm using 8.1.11 on Debian on a 2x \nXeon CPU 2.40GHz with 3GB memory, so I don't think that would account \nfor the difference.\n\n\nGenerated in 0.0046 s\nGenerated in 0.0036 s\nGenerated in 0.0038 s\nGenerated in 0.0037 s\nGenerated in 0.0038 s\nGenerated in 0.0037 s\nGenerated in 0.0047 s\nGenerated in 0.0052 s\nGenerated in 0.005 s\n\n\n-- \nTommy Gildseth\n", "msg_date": "Mon, 24 Mar 2008 11:48:20 +0100", "msg_from": "Tommy Gildseth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow pg_connect()" }, { "msg_contents": "Hi,\n\[email protected] schrieb:\n> Please, how long takes your connectiong to postgres?\n> \n> $starttimer=time()+microtime();\n> \n> $dbconn = pg_connect(\"host=localhost port=5432 dbname=xxx user=xxx password=xxx\") \n> or die(\"Couldn't Connect\".pg_last_error());\t\n> \n> $stoptimer = time()+microtime(); \n> echo \"Generated in \".round($stoptimer-$starttimer,4).\" s\";\n> \n> It takes more then 0.05s :(\n> \n> Only this function reduce server speed max to 20request per second.\n\nTwo hints:\n* Read about configuring and using persistent database connections\n (http://www.php.net/manual/en/function.pg-pconnect.php) with PHP\n* Use a connection pooler such as pgpool-II\n (http://pgpool.projects.postgresql.org/)\n\nUsing both techniques together should boost your performance.\n\nCiao,\nThomas\n", "msg_date": "Mon, 24 Mar 2008 13:39:43 +0100", "msg_from": "Thomas Pundt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow pg_connect()" }, { "msg_contents": "\n> * Read about configuring and using persistent database connections\n> (http://www.php.net/manual/en/function.pg-pconnect.php) with PHP\n\nThough make sure you understand the ramifications of using persistent \nconnections. You can quickly exhaust your connections by using this and \nalso cause other issues for your server.\n\nIf you do this you'll probably have to adjust postgres to allow more \nconnections, which usually means lowering the amount of shared memory \neach connection can use which can also cause performance issues.\n\nI'd probably use pgpool-II and have it handle the connection stuff for \nyou rather than doing it through php.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Tue, 25 Mar 2008 15:18:00 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow pg_connect()" } ]
[ { "msg_contents": ">\n> It takes more then 0.05s :(\n>\n> Only this function reduce server speed max to 20request per second.\n\nFirst, benchmarking using only PHP is not very accurate, you're probably\nalso measuring some work that PHP needs to do just to get started in the\nfirst place.\n\nSecond, this 20r/s is not requests/sec but connections per second per PHP\nscript. One pageview in PHP needs one connection, so it will delay the\npageview by 0.05 seconds.\n\nIf you need raw speed, you can use pg_pconnect(), but be VERY carefull\nbecause that will keep one databaseconnection open for every database for\nevery webserverprocess. If you have 10 databasedriven websites running on\nthe same webserver and that server is configured to run 100 processes at\nthe same time, you will get 10x100=1000 open connections, which eats more\nRAM than you have.\n\n", "msg_date": "Mon, 24 Mar 2008 09:35:44 +0100 (CET)", "msg_from": "\"vincent\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow pg_connect()" } ]
[ { "msg_contents": "Hi friends,\n\nI am using postgresql 8.1, I have shared_buffers = 50000, now i execute the\nquery, it takes 18 seconds to do sequential scan, when i reduced to 5000, it\ntakes one 10 seconds, Why.\n\nCan anyone explain what is the reason, ( any other configuration is needed\nin postgresql.conf)\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nHi friends,I am using postgresql 8.1, I have shared_buffers = 50000, now i execute the query, it takes 18 seconds to do sequential scan, when i reduced to 5000, it takes one 10 seconds, Why.Can anyone explain what is the reason, ( any other configuration is needed in postgresql.conf)\n-- With Best Regards,Petchimuthulingam S", "msg_date": "Mon, 24 Mar 2008 14:57:59 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "increasing shared buffer slow downs query performance." }, { "msg_contents": "petchimuthu lingam <[email protected]> schrieb:\n\n> Hi friends,\n> \n> I am using postgresql 8.1, I have shared_buffers = 50000, now i execute the\n> query, it takes 18 seconds to do sequential scan, when i reduced to 5000, it\n> takes one 10 seconds, Why.\n\nWild guess: the second time the data are in the filesystem cache.\n\n> \n> Can anyone explain what is the reason, ( any other configuration is needed in\n> postgresql.conf)\n\nShow us the EXPLAIN ANALYSE - Output.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Mon, 24 Mar 2008 11:07:19 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing shared buffer slow downs query performance." }, { "msg_contents": "On Mon, Mar 24, 2008 at 3:37 PM, Andreas Kretschmer\n<[email protected]> wrote:\n> petchimuthu lingam <[email protected]> schrieb:\n>\n>\n> > Hi friends,\n> >\n> > I am using postgresql 8.1, I have shared_buffers = 50000, now i execute the\n> > query, it takes 18 seconds to do sequential scan, when i reduced to 5000, it\n> > takes one 10 seconds, Why.\n>\n> Wild guess: the second time the data are in the filesystem cache.\n>\n\n\nAnother wild possibility is that the first query sets the hint bits for the\nrows involved and hence the second time it runs fast. May be you want\nto run the query few times in both the settings and then compare.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 25 Mar 2008 17:53:58 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing shared buffer slow downs query performance." } ]
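Following the suggestion above, one way to separate filesystem-cache effects and hint-bit setting from the shared_buffers change is simply to repeat the statement a few times under each setting and compare timings; the table name below is a placeholder.

-- Hedged sketch: run under shared_buffers = 5000, then again under 50000.
EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- 1st run: cold cache, sets hint bits
EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- 2nd run
EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- 3rd run: steady-state number to compare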
[ { "msg_contents": "\n\tHello,\n\tI am using Postgres with PHP and persistent connections.\n\tFor simple queries, parsing & preparing time is often longer than actual \nquery execution time...\n\n\tI would like to execute a bunch of PREPARE statements to prepare my most \noften used small queries on connection startup, then reuse these prepared \nstatements during all the life of the persistent connection.\n\t(Simple queries in PG are actually faster than in MySQL if prepared, lol)\n\tHow do I achieve this ?\n\n\tBest way, would be of course a \"PERSISTENT PREPARE\" which would record \nthe information (name, SQL, params, not the Plan) about the prepared \nstatement in a system catalog shared by all connections ; when issuing \nEXECUTE, if the prepared statement does not exist in the current \nconnection, pg would look there, and if it finds the name of the statement \nand corresponding SQL, issue a PREPARE so the current connection would \nthen have this statement in its store, and be able to execute it faster \nfor all the times this connection is reused.\n\tIs such a feature planned someday ?\n\n\tI tried to write a function which is called by my PHP script just after \nestablishing the connection, it is a simple function which looks in \npg_prepared_statements, if it is empty it issues the PREPARE statements I \nneed. It works, no problem, but it is less elegant and needs one extra \nquery per page.\n\n\tI also tried to issue a dummy EXECUTE of a prepared \"SELECT 1\" just after \nestablishing the connection : if it fails, we prepare the plans (by \nissuing queries from PHP), if it succeeds, this means we are reusing a \nconnection with all the plans already prepared. This also works well.\n\n\tWhat do you think ?\n\tRegards,\n\tPierre\n", "msg_date": "Mon, 24 Mar 2008 12:23:29 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Preparing statements on connection startup" } ]
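At the SQL level the pattern described above is just PREPARE/EXECUTE plus a lookup in pg_prepared_statements (available from 8.2). A sketch, with an example statement name and an assumed users table:

-- Once per (re)use of a persistent connection: is the statement already there?
SELECT count(*) FROM pg_prepared_statements WHERE name = 'get_user';

-- If not, prepare the small queries this connection will keep reusing.
PREPARE get_user(int) AS
    SELECT * FROM users WHERE user_id = $1;

-- Every later page served over this connection can then skip parse/plan:
EXECUTE get_user(42);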
[ { "msg_contents": "i am using postgresql 8.1.8,\n\nFollowing configurations:\n shared_buffers = 5000\n work_mem = 65536\n maintenance_work_mem = 65536\n effective_cache_size = 16000\n random_page_cost = 0.1\n\nThe cpu is waiting percentage goes upto 50%, and query result comes later,\n\ni am using normal select query ( select * from table_name ).\n\ntable has more then 6 million records.\n\n\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\ni am using postgresql 8.1.8, Following configurations:           shared_buffers = 5000            work_mem = 65536            maintenance_work_mem = 65536            effective_cache_size = 16000\n            random_page_cost = 0.1The cpu is waiting percentage goes upto 50%, and query result comes later,i am using normal select query ( select * from table_name ).table has more then 6 million records.\n-- With Best Regards,Petchimuthulingam S", "msg_date": "Mon, 24 Mar 2008 18:35:51 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "waiting for harddisk" }, { "msg_contents": "> i am using postgresql 8.1.8,\n>\n> Following configurations:\n> shared_buffers = 5000\n> work_mem = 65536\n> maintenance_work_mem = 65536\n> effective_cache_size = 16000\n> random_page_cost = 0.1\n>\n> The cpu is waiting percentage goes upto 50%, and query result comes \n> later,\n>\n> i am using normal select query ( select * from table_name ).\n>\n> table has more then 6 million records.\n\n\n\n\tWhen you mean SELECT *, are you selecting the WHOLE 6 million records ? \nWithout WHERE ? Or just a few rows ?\n\tPlease post EXPLAIN ANALYZE of your query.\n", "msg_date": "Mon, 24 Mar 2008 14:20:56 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: waiting for harddisk" }, { "msg_contents": "On Mon, Mar 24, 2008 at 7:05 AM, petchimuthu lingam <[email protected]> wrote:\n> i am using postgresql 8.1.8,\n>\n> Following configurations:\n> shared_buffers = 5000\n> work_mem = 65536\n> maintenance_work_mem = 65536\n> effective_cache_size = 16000\n> random_page_cost = 0.1\n\nThat number, 0.1 is not logical. anything below 1.0 is generally a\nbad idea, and means that you've got some other setting wrong.\n\n> The cpu is waiting percentage goes upto 50%, and query result comes later,\n>\n> i am using normal select query ( select * from table_name ).\n>\n> table has more then 6 million records.\n\nYou need faster disks if you want sequential scans to go faster. Look\ninto a decent RAID controller (Areca, Escalade (forgot what they're\ncalled now) or LSI) with battery backed cache. Run RAID-10 on it with\nas many drives as you can afford to throw at the problem.\n", "msg_date": "Mon, 24 Mar 2008 19:13:28 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: waiting for harddisk" } ]
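Two small follow-ups to the advice above, sketched with a placeholder table name: put random_page_cost back to a sensible value (anything below 1.0 is not meaningful, as noted), and if the client really must walk all six million rows, fetch them through a cursor rather than one giant SELECT *.

-- Hedged sketch.
SET random_page_cost = 4;     -- per-session; edit postgresql.conf to make it stick

BEGIN;
DECLARE c CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM c;            -- repeat until no rows come back
CLOSE c;
COMMIT;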
[ { "msg_contents": "The owners of the animal hospital where I work at want to consider live/hot\nbackups through out the day so we're less likely to lose a whole\nday of transaction. We use Postgresql 8.0.15. We do 3AM\nbackups, using pg_dumpall, to a file when there is very little activity.\n\nThe hospital enjoys the overall performance of the veterinary\napplication running\non Postgresql. I know doing a mid-day backup when up to 60 computers\n(consistently\n35-40) are access client/patient information, it will cause some\nfrustration. I understand\nthere needs to be balance of performance and backup of current records.\n\nWhile I know that not all situations are the same, I am hoping there\nis a performance\nlatency that others have experienced when doing backups during the day and/or\nplanning for cluster (or other types of redundancy).\n\nMy animal hospital operates 24x7 and is in the south part of the San\nFrancisco Bay area. Outside\nof sharing your experiences/input with me, I would not mind if you/your company\ndo this type of consulting offline.\n\nThank you.\n\nSteve\n", "msg_date": "Mon, 24 Mar 2008 13:23:08 -0700", "msg_from": "\"Steve Poe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Planning hot/live backups?" }, { "msg_contents": "I back up around 10 Gig of data every half hour using pg_dump. I don't\nbackup the entire database at once. Instead I backup at the schema\nnamespace level. But I do all of them every half hour. It takes four\nminutes. That includes the time to copy the files to the backup server.\nI do each schema namespace backup consecutively. I also run vacuum full\nanalyze once a day. My system is up 24/7 as well. I don't backup in\nthe middle of the night. There is so little back. But I could. I am\nable to have more backups by not doing it when there are only a handful\nof transactions. \n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Steve Poe\nSent: Monday, March 24, 2008 3:23 PM\nTo: [email protected]\nSubject: [PERFORM] Planning hot/live backups?\n\nThe owners of the animal hospital where I work at want to consider\nlive/hot\nbackups through out the day so we're less likely to lose a whole\nday of transaction. We use Postgresql 8.0.15. We do 3AM\nbackups, using pg_dumpall, to a file when there is very little activity.\n\nThe hospital enjoys the overall performance of the veterinary\napplication running\non Postgresql. I know doing a mid-day backup when up to 60 computers\n(consistently\n35-40) are access client/patient information, it will cause some\nfrustration. I understand\nthere needs to be balance of performance and backup of current records.\n\nWhile I know that not all situations are the same, I am hoping there\nis a performance\nlatency that others have experienced when doing backups during the day\nand/or\nplanning for cluster (or other types of redundancy).\n\nMy animal hospital operates 24x7 and is in the south part of the San\nFrancisco Bay area. 
Outside\nof sharing your experiences/input with me, I would not mind if you/your\ncompany\ndo this type of consulting offline.\n\nThank you.\n\nSteve\n\n-\nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Mar 2008 15:39:32 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "Steve Poe wrote:\n> The owners of the animal hospital where I work at want to consider live/hot\n> backups through out the day so we're less likely to lose a whole\n> day of transaction. We use Postgresql 8.0.15. We do 3AM\n> backups, using pg_dumpall, to a file when there is very little activity.\n\n\n\nYou probably want to look into PITR, you can have a constant ongoing \nbackup of your data and never lose more than a few minutes of data. The \noverhead isn't all the big especially if you are shipping the log files \nto a separate server.\n\n", "msg_date": "Mon, 24 Mar 2008 16:43:58 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "\nMatthew T. O'Connor wrote:\n> Steve Poe wrote:\n>> The owners of the animal hospital where I work at want to consider \n>> live/hot\n>> backups through out the day so we're less likely to lose a whole\n>> day of transaction. We use Postgresql 8.0.15. We do 3AM\n>> backups, using pg_dumpall, to a file when there is very little activity.\n>\n>\n>\n> You probably want to look into PITR, you can have a constant ongoing \n> backup of your data and never lose more than a few minutes of data. \n> The overhead isn't all the big especially if you are shipping the log \n> files to a separate server.\n>\n>\n\nI'll second that. PITR is IMHO the way to go, and I believe you'll be \npleasantly surprised how easy it is to do. As always, test your backup \nstrategy by restoring. Even better, make a point of periodically \ntesting a restore of production backups to a non-production system.\n\nPaul\n\n\n\n", "msg_date": "Mon, 24 Mar 2008 14:22:49 -0700", "msg_from": "paul rivers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> Steve Poe wrote:\n>> The owners of the animal hospital where I work at want to consider live/hot\n>> backups through out the day so we're less likely to lose a whole\n>> day of transaction. We use Postgresql 8.0.15. We do 3AM\n>> backups, using pg_dumpall, to a file when there is very little activity.\n\n> You probably want to look into PITR, you can have a constant ongoing \n> backup of your data and never lose more than a few minutes of data. The \n> overhead isn't all the big especially if you are shipping the log files \n> to a separate server.\n\nBut note that you really need to update to a newer major release before\ndepending on PITR. While 8.0 nominally has support for it, it's taken\nus several releases to really get the operational gotchas sorted out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Mar 2008 18:23:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups? " }, { "msg_contents": "Steve Poe wrote:\n> At this point, I am just moving the pg_dumpall file to another server. 
Pardon\n> my question: how would you 'ship the log files'?\n> \n\n[ You should cc the mailing list so that everyone can benefit from the \nconversation. ]\n\nRTM: \nhttp://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n\n", "msg_date": "Mon, 24 Mar 2008 18:28:27 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "Tom,\n\nSo, are you saying we need to get to at least 8.1.x before considering PITR\nfor a production environment? Unfortunately, the vendor/supplier of\nour veterinary application\ndoes not support higher versions. We would be proceeding \"at our own risk\".\n\nIs there anything else we can do we 8.0.15 version?\n\nSteve\n\n\n\nOn Mon, Mar 24, 2008 at 3:23 PM, Tom Lane <[email protected]> wrote:\n>\n> \"Matthew T. O'Connor\" <[email protected]> writes:\n> > Steve Poe wrote:\n> >> The owners of the animal hospital where I work at want to consider live/hot\n> >> backups through out the day so we're less likely to lose a whole\n> >> day of transaction. We use Postgresql 8.0.15. We do 3AM\n> >> backups, using pg_dumpall, to a file when there is very little activity.\n>\n> > You probably want to look into PITR, you can have a constant ongoing\n> > backup of your data and never lose more than a few minutes of data. The\n> > overhead isn't all the big especially if you are shipping the log files\n> > to a separate server.\n>\n> But note that you really need to update to a newer major release before\n> depending on PITR. While 8.0 nominally has support for it, it's taken\n> us several releases to really get the operational gotchas sorted out.\n>\n> regards, tom lane\n>\n", "msg_date": "Mon, 24 Mar 2008 15:49:40 -0700", "msg_from": "\"Steve Poe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "Hi!\n\n Going a bit off topic, but one quick question: to avoid storing GB \nof WAL files that will probably take a lot of time to reload, how can \nthe backup be \"reset\"? I suspect that it's something like stopping the \nWAL archiving, doing a new base backup, and restart archiving, but \nI've never done it (have been using SQL dumps), so...\n\n Yours\n\nMiguel Arroz\n\nOn 2008/03/24, at 22:28, Matthew T. O'Connor wrote:\n\n> Steve Poe wrote:\n>> At this point, I am just moving the pg_dumpall file to another \n>> server. Pardon\n>> my question: how would you 'ship the log files'?\n>>\n>\n> [ You should cc the mailing list so that everyone can benefit from \n> the conversation. ]\n>\n> RTM: http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n>\n>\n> -\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nMiguel Arroz\nhttp://www.terminalapp.net\nhttp://www.ipragma.com", "msg_date": "Mon, 24 Mar 2008 23:45:38 +0000", "msg_from": "Miguel Arroz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" }, { "msg_contents": "Miguel Arroz wrote:\n> Going a bit off topic, but one quick question: to avoid storing GB of \n> WAL files that will probably take a lot of time to reload, how can the \n> backup be \"reset\"? I suspect that it's something like stopping the WAL \n> archiving, doing a new base backup, and restart archiving, but I've \n> never done it (have been using SQL dumps), so... 
\n\nBasically, you only need WAL logs from the last time you made a back-up \nof your data directory. We do nightly re-rsync of our data directory \nand then purge old WAL files that are no longer needed.\n", "msg_date": "Tue, 25 Mar 2008 00:50:41 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning hot/live backups?" } ]
[ { "msg_contents": "Dear Friends,\n I have a table with 32 lakh record in it. Table size is nearly 700 MB,\nand my machine had a 1 GB + 256 MB RAM, i had created the table space in\nRAM, and then created this table in this RAM.\n\n So now everything is in RAM, if i do a count(*) on this table it returns\n327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that\nno Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is\nhappening, swap is also not used )\n\nAny Idea on this ???\n\nI searched a lot in newsgroups ... can't find relevant things.... ( because\neverywhere they are speaking about disk access speed, here i don't want to\nworry about disk access )\n\nIf required i will give more information on this.\n\nDear Friends,     I have a table with 32 lakh record in it. Table size is nearly 700 MB, and my machine had a 1 GB + 256 MB RAM, i had created the table space in RAM, and then created this table in this RAM.    So now everything is in RAM, if i do a count(*) on this table it returns 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that no Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is happening, swap is also not used )\nAny Idea on this ???I searched a lot in newsgroups ... can't find relevant things.... ( because everywhere they are speaking about disk access speed, here i don't want to worry about disk access )\nIf required i will give more information on this.", "msg_date": "Tue, 25 Mar 2008 14:05:20 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "On Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez <\[email protected]> wrote:\n\n> It's been said zillions of times on the maillist. Using a select\n> count(*) in postgres is slow, and probably will be slow for a long\n> time. So that function is not a good way to measure perfomance.\n>\nYes, but if the data is in HDD then we can say this...\n\nbut now the data is in RAM\n\nOn Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez <[email protected]> wrote:\nIt's been said zillions of times on the maillist. Using a select\ncount(*) in postgres is slow, and probably will be slow for a long\ntime. So that function is not a good way to measure perfomance.\nYes, but if the data is in HDD then we can say this...but now the data is in RAM", "msg_date": "Tue, 25 Mar 2008 14:12:53 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:\n> Any Idea on this ???\n\nyes. dont use count(*).\n\nif you want whole-table row count, use triggers to store the count.\n\nit will be slow. regeardless of whether it's in ram or on hdd.\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Tue, 25 Mar 2008 10:08:23 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "hubert depesz lubaczewski wrote:\n> On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:\n>> Any Idea on this ???\n> \n> yes. 
dont use count(*).\n> \n> if you want whole-table row count, use triggers to store the count.\n> \n> it will be slow. regeardless of whether it's in ram or on hdd.\n\nIn other words, if you're having performance problems please provide\nEXPLAIN ANALYZE output from a more useful query that does real work,\nrather than something like count(*).\n\nCOUNT(*) can be slow due to some MVCC limitations; it's been discussed\nfrequently here so you should search the archives for information.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 25 Mar 2008 18:34:35 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in\n RAM" }, { "msg_contents": "sathiya psql escribi�:\n\n> So now everything is in RAM, if i do a count(*) on this table it returns\n> 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that\n> no Disk I/O is happening.\n\nIt has to scan every page and examine visibility for every record. Even\nif there's no I/O involved, there's a lot of work to do. I am not sure\nif with your hardware it is expected for it to take 3 seconds though.\nDo you see high CPU usage during that period?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 25 Mar 2008 08:59:21 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in\n\tRAM" }, { "msg_contents": "In response to \"sathiya psql\" <[email protected]>:\n\n> Dear Friends,\n> I have a table with 32 lakh record in it. Table size is nearly 700 MB,\n> and my machine had a 1 GB + 256 MB RAM, i had created the table space in\n> RAM, and then created this table in this RAM.\n> \n> So now everything is in RAM, if i do a count(*) on this table it returns\n> 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that\n> no Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is\n> happening, swap is also not used )\n> \n> Any Idea on this ???\n\nYes. It takes your hardware about 3 seconds to read through 700M of ram.\n\nKeep in mind that you're not just reading RAM. You're pushing system\nrequests through the VFS layer of your operating system, which is treating\nthe RAM like a disk (with cylinder groups and inodes and blocks, etc) so\nyou have all that processing overhead as well. What filesystem did you\nformat the RAM disk with?\n\nWhy are you doing this? If you have enough RAM to store the table, why\nnot just allocate it to shared buffers?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 25 Mar 2008 09:10:46 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in\n RAM" }, { "msg_contents": ">\n> Yes. It takes your hardware about 3 seconds to read through 700M of ram.\n>\n\n\n\n>\n> Keep in mind that you're not just reading RAM. You're pushing system\n> requests through the VFS layer of your operating system, which is treating\n> the RAM like a disk (with cylinder groups and inodes and blocks, etc) so\n> you have all that processing overhead as well. What filesystem did you\n> format the RAM disk with?\n>\ntmpfs\n\n>\n> Why are you doing this? 
If you have enough RAM to store the table, why\n> not just allocate it to shared buffers?\n\n\njust allocating will make read from hdd to RAM at first time, to eliminate\nthat\n\n>\n>\nare you saying it will take 3 seconds surely if i have 50 lakh record\n\nYes.  It takes your hardware about 3 seconds to read through 700M of ram.\n \nKeep in mind that you're not just reading RAM.  You're pushing system\nrequests through the VFS layer of your operating system, which is treating\nthe RAM like a disk (with cylinder groups and inodes and blocks, etc) so\nyou have all that processing overhead as well.  What filesystem did you\nformat the RAM disk with?\ntmpfs \nWhy are you doing this?  If you have enough RAM to store the table, why\nnot just allocate it to shared buffers?just allocating will  make read from hdd to RAM at first time, to eliminate that \n\nare you saying it will take 3 seconds surely if i have 50 lakh record", "msg_date": "Tue, 25 Mar 2008 18:57:12 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "In response to \"sathiya psql\" <[email protected]>:\n> >\n> > Yes. It takes your hardware about 3 seconds to read through 700M of ram.\n> >\n> >\n> > Keep in mind that you're not just reading RAM. You're pushing system\n> > requests through the VFS layer of your operating system, which is treating\n> > the RAM like a disk (with cylinder groups and inodes and blocks, etc) so\n> > you have all that processing overhead as well. What filesystem did you\n> > format the RAM disk with?\n>\n> tmpfs\n\nI'm not an expert, but according to wikipedia:\n\n\"tmpfs (previously known as shmfs) distinguishes itself from the Linux ramdisk device by allocating memory dynamically and by allowing less-used pages to be moved onto swap space.\"\n\nBoth dynamically allocating and swapping are potential problems, but I\ndon't know how to tell you to determine if they're issues or not.\n\n> > Why are you doing this? If you have enough RAM to store the table, why\n> > not just allocate it to shared buffers?\n> \n> just allocating will make read from hdd to RAM at first time, to eliminate\n> that\n\nPostgreSQL is still going to copy the data from your RAM disk into shared\nbuffers before working with it, so you still have that overhead. All\nyou're escaping is the time involved in physical disk activity, which is\nwhat shared_buffers are designed to avoid.\n\n> are you saying it will take 3 seconds surely if i have 50 lakh record\n\nNo. That is dependent on your hardware and other factors.\n\nYou are trying to use the system in a non-standard configuration. If it\ndoesn't work that way, don't be surprised.\n\nAlso, what are you expectations? Honestly, I don't see any problems with\nthe results you're getting, they're about what I would expect. Are you\ntrying to compare PostgreSQL to MySQL/MyISAM? More directly, what is\nyour purpose in starting this email conversation? 
What are you hoping\nto accomplish?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 25 Mar 2008 10:03:15 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in\n RAM" }, { "msg_contents": "Hello Sathiya,\n\n1st: you should not use a ramdisk for this, it will slow things down as\ncompared to simply having the table on disk. Scanning it the first time\nwhen on disk will load it into the OS IO cache, after which you will get\nmemory speed.\n\n2nd: you should expect the ³SELECT COUNT(*)² to run at a maximum of about\n350 ­ 600 MB/s (depending on PG version and CPU speed). It is CPU speed\nlimited to that rate of counting rows no matter how fast your IO is.\n\nSo, for your 700 MB table, you should expect a COUNT(*) to run in about 1-2\nseconds best case. This will approximate the speed at which other queries\ncan run against the table.\n\n- Luke\n\n\nOn 3/25/08 1:35 AM, \"sathiya psql\" <[email protected]> wrote:\n\n> Dear Friends,\n> I have a table with 32 lakh record in it. Table size is nearly 700 MB,\n> and my machine had a 1 GB + 256 MB RAM, i had created the table space in RAM,\n> and then created this table in this RAM.\n> \n> So now everything is in RAM, if i do a count(*) on this table it returns\n> 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that no\n> Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is\n> happening, swap is also not used )\n> \n> Any Idea on this ???\n> \n> I searched a lot in newsgroups ... can't find relevant things.... ( because\n> everywhere they are speaking about disk access speed, here i don't want to\n> worry about disk access )\n> \n> If required i will give more information on this.\n> \n> \n> \n\n\n\n\nRe: [PERFORM] postgresql is slow with larger table even it is in RAM\n\n\nHello Sathiya,\n\n1st:  you should not use a ramdisk for this, it will slow things down as compared to simply having the table on disk.  Scanning it the first time when on disk will load it into the OS IO cache, after which you will get memory speed.\n\n2nd: you should expect the “SELECT COUNT(*)” to run at a maximum of about 350 – 600 MB/s (depending on PG version and CPU speed).  It is CPU speed limited to that rate of counting rows no matter how fast your IO is.\n\nSo, for your 700 MB table, you should expect a COUNT(*) to run in about 1-2 seconds best case.  This will approximate the speed at which other queries can run against the table.\n\n- Luke\n\n\nOn 3/25/08 1:35 AM, \"sathiya psql\" <[email protected]> wrote:\n\nDear Friends,\n     I have a table with 32 lakh record in it. Table size is nearly 700 MB, and my machine had a 1 GB + 256 MB RAM, i had created the table space in RAM, and then created this table in this RAM.\n\n    So now everything is in RAM, if i do a count(*) on this table it returns 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that no Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is happening, swap is also not used )\n\nAny Idea on this ???\n\nI searched a lot in newsgroups ... can't find relevant things.... 
( because everywhere they are speaking about disk access speed, here i don't want to worry about disk access )\n\nIf required i will give more information on this.", "msg_date": "Tue, 25 Mar 2008 07:46:02 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "[email protected] (\"sathiya psql\") writes:\n> \t\t\t\t\t On Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez <[email protected]> wrote:\n>\n>\n> \t\t\t\t\t\t It's been said zillions of times on the maillist. Using a select\n> \t\t\t\t\t\t count(*) in postgres is slow, and probably will be slow for a long\n> \t\t\t\t\t\t time. So that function is not a good way to measure perfomance.\n> \n>\n>\n> \t\t\t\t\t\t\t Yes, but if the data is in HDD then we can say this...\n> \t\t\t\t\t\t\t\t\tbut now the data is in RAM\n\nEven if the data all is in RAM, it will still take some definitely\nnon-zero time to examine all of the pages, looking for tuples, and\nthen to determine which of those tuples are visible from the\nperspective of your DB connection.\n\nIf 500MB of relevant data is sitting on disk, then it will take\nwhatever time it takes to pull it from disk; if it is in memory, there\nis still work to be done...\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://linuxdatabases.info/info/finances.html\nRules of the Evil Overlord #76. \"If the hero runs up to my roof, I\nwill not run up after him and struggle with him in an attempt to push\nhim over the edge. I will also not engage him at the edge of a\ncliff. (In the middle of a rope-bridge over a river of molten lava is\nnot even worth considering.)\" <http://www.eviloverlord.com/>\n", "msg_date": "Tue, 25 Mar 2008 11:02:22 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": ">\n>\n> 1st: you should not use a ramdisk for this, it will slow things down as\n> compared to simply having the table on disk. Scanning it the first time\n> when on disk will load it into the OS IO cache, after which you will get\n> memory speed.\n>\nabsolutely....\n\nafter getting some replies, i dropped the table from ramdisk,\nand started to have that in the disk itself..\n\n>\n> 2nd: you should expect the \"SELECT COUNT(*)\" to run at a maximum of about\n> 350 – 600 MB/s (depending on PG version and CPU speed). It is CPU speed\n> limited to that rate of counting rows no matter how fast your IO is.\n>\nam using 8.1\npentium duo core\n\n>\n> So, for your 700 MB table, you should expect a COUNT(*) to run in about\n> 1-2 seconds best case. This will approximate the speed at which other\n> queries can run against the table.\n>\nok count(*) per say, but other queries is taking much time...\n\nok i ll do more experimentations and i ll be back....\n\n\nVery great thanks for all of your replies GUYZ.....\n\n>\n> - Luke\n>\n>\n>\n> On 3/25/08 1:35 AM, \"sathiya psql\" <[email protected]> wrote:\n>\n> Dear Friends,\n> I have a table with 32 lakh record in it. Table size is nearly 700\n> MB, and my machine had a 1 GB + 256 MB RAM, i had created the table space in\n> RAM, and then created this table in this RAM.\n>\n> So now everything is in RAM, if i do a count(*) on this table it\n> returns 327600 in 3 seconds, why it is taking 3 seconds ????? because am\n> sure that no Disk I/O is happening. 
( using vmstat i had confirmed, no disk\n> I/O is happening, swap is also not used )\n>\n> Any Idea on this ???\n>\n> I searched a lot in newsgroups ... can't find relevant things.... (\n> because everywhere they are speaking about disk access speed, here i don't\n> want to worry about disk access )\n>\n> If required i will give more information on this.\n>\n>\n>\n>\n\n\n\n1st:  you should not use a ramdisk for this, it will slow things down as compared to simply having the table on disk.  Scanning it the first time when on disk will load it into the OS IO cache, after which you will get memory speed.\nabsolutely....after getting  some replies, i dropped the table from ramdisk,and started to have that in the disk itself..\n\n2nd: you should expect the \"SELECT COUNT(*)\" to run at a maximum of about 350 – 600 MB/s (depending on PG version and CPU speed).  It is CPU speed limited to that rate of counting rows no matter how fast your IO is.\nam using 8.1pentium duo core \n\nSo, for your 700 MB table, you should expect a COUNT(*) to run in about 1-2 seconds best case.  This will approximate the speed at which other queries can run against the table.\nok count(*) per say, but other queries is taking much time...ok i ll do more experimentations and i ll be back....Very great thanks for all of your replies GUYZ..... \n\n\n- Luke\n\n\nOn 3/25/08 1:35 AM, \"sathiya psql\" <[email protected]> wrote:\n\nDear Friends,\n     I have a table with 32 lakh record in it. Table size is nearly 700 MB, and my machine had a 1 GB + 256 MB RAM, i had created the table space in RAM, and then created this table in this RAM.\n\n    So now everything is in RAM, if i do a count(*) on this table it returns 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that no Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is happening, swap is also not used )\n\nAny Idea on this ???\n\nI searched a lot in newsgroups ... can't find relevant things.... ( because everywhere they are speaking about disk access speed, here i don't want to worry about disk access )\n\nIf required i will give more information on this.", "msg_date": "Tue, 25 Mar 2008 20:41:47 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "On Tue, Mar 25, 2008 at 3:35 AM, sathiya psql <[email protected]> wrote:\n> Dear Friends,\n> I have a table with 32 lakh record in it. Table size is nearly 700 MB,\n> and my machine had a 1 GB + 256 MB RAM, i had created the table space in\n> RAM, and then created this table in this RAM.\n>\n> So now everything is in RAM, if i do a count(*) on this table it returns\n> 327600 in 3 seconds, why it is taking 3 seconds ????? because am sure that\n> no Disk I/O is happening. ( using vmstat i had confirmed, no disk I/O is\n> happening, swap is also not used )\n>\n> Any Idea on this ???\n>\n> I searched a lot in newsgroups ... can't find relevant things.... ( because\n> everywhere they are speaking about disk access speed, here i don't want to\n> worry about disk access )\n>\n> If required i will give more information on this.\n\nTwo things:\n\n- Are you VACUUM'ing regularly? It could be that you have a lot of\ndead rows and the table is spread out over a lot of pages of mostly\ndead space. That would cause *very* slow seq scans.\n\n- What is your shared_buffers set to? If it's really low then postgres\ncould be constantly swapping from ram-disk to memory. 
Not much would\nbe cached, and performance would suffer.\n\nFWIW, I did a select count(*) on a table with just over 300000 rows,\nand it only took 0.28 sec.\n\nPeter\n", "msg_date": "Wed, 26 Mar 2008 18:48:56 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" }, { "msg_contents": "So your table is about 80 MB in size, or perhaps 120 MB if it fits in\nshared_buffers. You can check it using ³SELECT\npg_size_pretty(pg_relation_size(Œmytable¹))²\n\n- Luke \n\n\nOn 3/26/08 4:48 PM, \"Peter Koczan\" <[email protected]> wrote:\n\n> FWIW, I did a select count(*) on a table with just over 300000 rows,\n> and it only took 0.28 sec.\n\n\n\n\nRe: [PERFORM] postgresql is slow with larger table even it is in RAM\n\n\nSo your table is about 80 MB in size, or perhaps 120 MB if it fits in shared_buffers.  You can check it using “SELECT pg_size_pretty(pg_relation_size(‘mytable’))”\n\n- Luke  \n\n\nOn 3/26/08 4:48 PM, \"Peter Koczan\" <[email protected]> wrote:\n\nFWIW, I did a select count(*) on a table with just over 300000 rows,\nand it only took 0.28 sec.", "msg_date": "Thu, 27 Mar 2008 10:32:44 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql is slow with larger table even it is in RAM" } ]
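A quick way to put numbers on the advice in the thread above is to ask the database itself how large the table really is and how much buffer cache it has to work with. This is only a sketch -- 'mytable' is a placeholder name, and the size functions shown are the ones available from 8.1 on:

-- On-disk size of the heap alone, and of heap + indexes + TOAST data:
SELECT pg_size_pretty(pg_relation_size('mytable'));
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- How much of that can the PostgreSQL buffer cache hold?
SHOW shared_buffers;

-- Keep statistics and free-space information current before timing scans:
VACUUM ANALYZE mytable;

If pg_total_relation_size() reports far more space than the live rows should need, the table is bloated with dead tuples and the sequential scan behind count(*) is reading mostly dead space; a VACUUM FULL or CLUSTER during a maintenance window will compact it so the scan has fewer pages to read.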
[ { "msg_contents": "Ok, finally am changing my question.\n\n\nDo get quick response from postgresql what is the maximum number of records\ni can have in a table in postgresql 8.1 ???\n\nOk, finally am changing my question.Do get quick response from postgresql what is the maximum number of records i can have in a table in postgresql 8.1 ???", "msg_date": "Tue, 25 Mar 2008 17:12:02 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": "Sathiya,\n\nth maximum number of records in one PostreSQL table ist unlimited:\n\nhttp://www.postgresql.org/about/\n\n[for some values of unlimited]\n\nSome further help:\n\ngoogling for:\npostgresql limits site:postgresql.org\n\nleads you to this answer quite quick, while googling for\n\nmaximum number of rows in a postgresql table\n\nleads you to a lot of misleading pages.\n\nHarald\n\n\nOn Tue, Mar 25, 2008 at 12:42 PM, sathiya psql <[email protected]> wrote:\n> Ok, finally am changing my question.\n>\n>\n> Do get quick response from postgresql what is the maximum number of records\n> i can have in a table in postgresql 8.1 ???\n>\n>\n>\n>\n\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n", "msg_date": "Tue, 25 Mar 2008 12:48:21 +0100", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": ">\n> th maximum number of records in one PostreSQL table ist unlimited:\n>\nam asking for good performance, not just limitation..\n\nIf i have half a crore record, how the performance will be ?\n\n>\n> http://www.postgresql.org/about/\n>\n> [for some values of unlimited]\n>\n> Some further help:\n>\n> googling for:\n> postgresql limits site:postgresql.org\n>\nbut i need some experimentation result...\n\nI have 1 GB RAM with Pentium Celeron.\n50 lakh records and postgres performance is not good....\n\nIt takes 30 sec for simple queries....\n\n>\n>\n\n\nth maximum number of records in one PostreSQL table ist unlimited:\nam asking for good performance, not just limitation..If i have half a crore record, how the performance will be ?\n\nhttp://www.postgresql.org/about/\n\n[for some values of unlimited]\n\nSome further help:\n\ngoogling for:\npostgresql limits site:postgresql.org\nbut i need some experimentation result... I have 1 GB RAM with Pentium Celeron.50 lakh records and postgres performance is not good....It takes 30 sec for simple queries....", "msg_date": "Tue, 25 Mar 2008 17:24:06 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": "sathiya psql escribi�:\n\n> I have 1 GB RAM with Pentium Celeron.\n> 50 lakh records and postgres performance is not good....\n> \n> It takes 30 sec for simple queries....\n\nShows us the explain analyze. 
There is no problem with a large number\nof records, as long as you're not expecting to process all of them all\nthe time.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 25 Mar 2008 09:01:01 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in\n\tpostgresql 8.1" }, { "msg_contents": ">\n>\n>\n> Shows us the explain analyze. There is no problem with a large number\n> of records, as long as you're not expecting to process all of them all\n> the time.\n\nyes many a times i need to process all the records,\n\noften i need to use count(*) ????\n\nso what to do ?? ( those trigger options i know already, but i wil l do\ncount on different parameters )\n\n>\n>\n>\n\n\n\nShows us the explain analyze.  There is no problem with a large number\nof records, as long as you're not expecting to process all of them all\nthe time.yes many a times i need to process all the records,often i need to use count(*) ????so what to do  ?? ( those trigger options i know already, but i wil l do count on different parameters )", "msg_date": "Tue, 25 Mar 2008 17:38:24 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": "sathiya psql wrote:\n>\n> yes many a times i need to process all the records,\n>\n> often i need to use count(*) ????\n>\n> so what to do ?? ( those trigger options i know already, but i wil l do\n> count on different parameters )\n*** PLEASE *** post the output of an EXPLAIN ANALYSE on one or more of \nyour queries, and POST THE QUERY TEXT TOO. For example, if your query was:\n\nSELECT COUNT(*) FROM sometable WHERE somefield > 42 ;\n\nthen you would run:\n\nANALYZE sometable;\n\nthen you would run:\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM sometable WHERE somefield > 42 ;\n\nand paste the resulting text into an email message to this list. Without \nyour query text and the EXPLAIN ANALYZE output from it it is much harder \nfor anybody to help you. You should also post the output of a psql \"\\d\" \ncommand on your main table definitions.\n\n\nAs for what you can do to improve performance, some (hardly an exclusive \nlist) of options include:\n\n\n- Maintaining a summary table using a trigger. The summary table might \ntrack counts for various commonly-searched-for criteria. Whether this is \npractical or not depends on your queries, which you have still not \nposted to the list.\n\n- Tuning your use of indexes (adding, removing, or adjusting indexes to \nbetter service your queries). Use EXPLAIN ANALYZE to help with this, and \nREAD THE MANUAL, which has excellent information on tuning index use and \nprofiling queries.\n\n- Tune the query planner parameters to make better planning decisions. \nIn particular, if your data and indexes all fit in ram you should reduce \nthe cost of index scans relative to sequential scans. There is plenty of \ninformation about that on this mailing list. Also, READ THE MANUAL, \nwhich has excellent information on tuning the planner.\n\n- Investigating table partitioning and tablespaces (this requires \nconsiderable understanding of postgresql to use successfully). 
You \nprobably want to avoid this unless you really need it, and I doubt it \nwill help much for in-memory databases anyway.\n\n- Buy a faster computer\n\n--\nCraig Ringer\n", "msg_date": "Tue, 25 Mar 2008 21:24:17 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql\n 8.1" }, { "msg_contents": "EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=\n6069.373..6069.374 rows=1 loops=1)\n -> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119\nwidth=0) (actual time=0.012..4322.345 rows=3279119 loops=1)\n Total runtime: 6069.553 ms\n(3 rows)\n\nzivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=\n6259.436..6259.437 rows=1 loops=1)\n -> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119\nwidth=0) (actual time=0.013..4448.549 rows=3279119 loops=1)\n Total runtime: 6259.543 ms\n\nEXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;                                                            QUERY PLAN                                                         ----------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)   ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)\n Total runtime: 6069.553 ms(3 rows)zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;                                                            QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6259.436..6259.437 rows=1 loops=1)\n   ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.013..4448.549 rows=3279119 loops=1) Total runtime: 6259.543 ms", "msg_date": "Tue, 25 Mar 2008 18:20:25 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": "sathiya psql wrote:\n> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n> \nAnd your usual query is:\n\nSELECT count(*) from call_log_in_ram;\n\n?\n\nIf so, you should definitely build a summary table maintained by a \ntrigger to track the row count. That's VERY well explained in the \nmailing list archives. 
This was suggested to you very early on in the \ndiscussion.\n\nIf you have problems with other queries, how about showing EXPLAIN \nANALYZE for the other queries you're having problems with?\n\n--\nCraig Ringer\n", "msg_date": "Tue, 25 Mar 2008 22:09:34 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql\n 8.1" }, { "msg_contents": "sathiya psql wrote:\n> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n> QUERY\n> PLAN \n> ------------------------------------\n> ----------------------------------------------------------------------------------------------\n> Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual\n> time=6069.373..6069.374 rows=1 loops=1)\n> -> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119\n> width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)\n> Total runtime: 6069.553 ms\n> (3 rows)\n\nYou will never get good performance automatically with COUNT(*) in\nPostgreSQL. You can either create your own infrastructure (triggers,\nstatistics tables, etc) or use an approximate result like this:\n\nCREATE OR REPLACE FUNCTION fcount(varchar) RETURNS bigint AS $$\n SELECT reltuples::bigint FROM pg_class WHERE relname=$1;\n$$ LANGUAGE 'sql';\n\n\nUse the above function as:\n\nSELECT fcount('table_name');\n fcount\n--------\n 7412\n(1 row)\n\n", "msg_date": "Tue, 25 Mar 2008 14:16:43 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in postgresql 8.1" }, { "msg_contents": "In response to \"sathiya psql\" <[email protected]>:\n\n> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=\n> 6069.373..6069.374 rows=1 loops=1)\n> -> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119\n> width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)\n> Total runtime: 6069.553 ms\n> (3 rows)\n> \n> zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=\n> 6259.436..6259.437 rows=1 loops=1)\n> -> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119\n> width=0) (actual time=0.013..4448.549 rows=3279119 loops=1)\n> Total runtime: 6259.543 ms\n\n6 seconds doesn't sound like an unreasonable amount of time to count 3\nmillion rows. I don't see any performance issue here.\n\nWhat were your expectations?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 25 Mar 2008 09:34:40 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in\n postgresql 8.1" }, { "msg_contents": ">> th maximum number of records in one PostreSQL table ist unlimited:\n>\n> am asking for good performance, not just limitation..\n>\n> If i have half a crore record, how the performance will be ?\n\nHow long is a piece of string?\n\nIt depends what you are doing, whether you have indexes, how the tables \nare arranged, and how good the statistics are. 
Postgres has available to \nit almost all of the best data handling algorithms, and generally it uses \nthem sensibly. Use the EXPLAIN tool to get Postgres to tell you how it \nwill execute a query. Read the manual.\n\nWe have people running databases with an arawb (thousand million) or more \nrows without any significant performance problems. However, if you tell \nPostgres to read the entire table (like doing SELECT COUNT(*) FROM table), \nit will obviously take time.\n\nMatthew\n\n-- \nIn the beginning was the word, and the word was unsigned,\nand the main() {} was without form and void...\n", "msg_date": "Wed, 26 Mar 2008 12:24:06 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is the maximum number of rows in a table in\n postgresql 8.1" } ]
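Several replies above point at a trigger-maintained summary table for the count(*) case without spelling it out, so here is a minimal sketch. It assumes plpgsql is installed in the database; call_log_in_ram is the table from the EXPLAIN ANALYZE earlier in the thread, and the counter table and function names are made up for the example:

CREATE TABLE call_log_rowcount (n bigint NOT NULL);
INSERT INTO call_log_rowcount SELECT count(*) FROM call_log_in_ram;

CREATE FUNCTION call_log_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE call_log_rowcount SET n = n + 1;
    ELSE
        UPDATE call_log_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;   -- AFTER trigger, the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER call_log_count
    AFTER INSERT OR DELETE ON call_log_in_ram
    FOR EACH ROW EXECUTE PROCEDURE call_log_count_trig();

-- The count is now a single-row read instead of a 3.2-million-row scan:
SELECT n FROM call_log_rowcount;

The single counter row serializes concurrent writers and collects a dead tuple on every update, so this only pays off when the table is read-mostly. For counts broken down "on different parameters", as asked above, either keep one summary row per parameter value or fall back to the reltuples approximation shown earlier in the thread, which costs nothing but is only as fresh as the last VACUUM/ANALYZE.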
[ { "msg_contents": "Hello,\n\nwe have several indexes such as:\n\ncreate index foo1 on bla (a);\ncreate index foo2 on bla (b);\ncreate index foo3 on bla (a,b);\n\nThey are all used often by frequently used queries (according to \npg_statio_user_indexes), but we need somewhat higher INSERT/UPDATE \nperformance (having tuned most other things) so we'd like to remove some.\n\nWhich of the above would generally speaking be most redundant / best to \nremove? Is a 2-dimensional index always much slower than a 1-dimensional \nwith the first column for queries on the first column? Any other \nsuggestions?\n\nThanks,\n Marinos\n\n", "msg_date": "Wed, 26 Mar 2008 15:18:53 +0100", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "am Wed, dem 26.03.2008, um 15:18:53 +0100 mailte Marinos Yannikos folgendes:\n> Hello,\n> \n> we have several indexes such as:\n> \n> create index foo1 on bla (a);\n> create index foo2 on bla (b);\n> create index foo3 on bla (a,b);\n> \n> They are all used often by frequently used queries (according to \n> pg_statio_user_indexes), but we need somewhat higher INSERT/UPDATE \n> performance (having tuned most other things) so we'd like to remove some.\n\nWhich version do you have? Since 8.1 pg can use a so called 'bitmap\nindex scan', because of this feature i guess you don't need the index\nfoo3. (if you have 8.1 or higher)\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 26 Mar 2008 16:02:08 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "A. Kretschmer schrieb:\n>> create index foo1 on bla (a);\n>> create index foo2 on bla (b);\n>> create index foo3 on bla (a,b);\n>>[...]\n> \n> Which version do you have? Since 8.1 pg can use a so called 'bitmap\n> index scan', because of this feature i guess you don't need the index\n> foo3. (if you have 8.1 or higher)\n\n8.3.1 - foo3 is being used though in presence of both foo1 and foo2, so \nI'd suppose that it's a better choice even with bitmap index scan \navailable...\n\n-mjy\n\n\n", "msg_date": "Wed, 26 Mar 2008 16:15:20 +0100", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "On Wed, 26 Mar 2008, A. Kretschmer wrote:\n>> create index foo1 on bla (a);\n>> create index foo2 on bla (b);\n>> create index foo3 on bla (a,b);\n>\n> Which version do you have? Since 8.1 pg can use a so called 'bitmap\n> index scan', because of this feature i guess you don't need the index\n> foo3. (if you have 8.1 or higher)\n\nDepending on your query, the bitmap index scan could be a good deal slower \nthan index foo3.\n\nAll of this depends on what queries you are going to be running, and how \nmuch you value insert performance compared to select performance. I know \nthat foo3 can do everything that foo1 can, so foo1 could be viewed as \nredundant. I'd be interested in hearing from the Powers That Be whether \nfoo2 is redundant too. 
It wasn't a while back.\n\nMy impression is that foo3 isn't much more expensive to alter than foo1 - \nis that correct?\n\nMatthew\n\n-- \nLord grant me patience, and I want it NOW!\n", "msg_date": "Wed, 26 Mar 2008 15:16:55 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "am Wed, dem 26.03.2008, um 16:15:20 +0100 mailte Marinos Yannikos folgendes:\n> A. Kretschmer schrieb:\n> >>create index foo1 on bla (a);\n> >>create index foo2 on bla (b);\n> >>create index foo3 on bla (a,b);\n> >>[...]\n> >\n> >Which version do you have? Since 8.1 pg can use a so called 'bitmap\n> >index scan', because of this feature i guess you don't need the index\n> >foo3. (if you have 8.1 or higher)\n> \n> 8.3.1 - foo3 is being used though in presence of both foo1 and foo2, so \n> I'd suppose that it's a better choice even with bitmap index scan \n> available...\n\nMaybe...\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 26 Mar 2008 16:25:18 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "Marinos Yannikos wrote:\n>>\n>> Which version do you have? Since 8.1 pg can use a so called 'bitmap\n>> index scan', because of this feature i guess you don't need the index\n>> foo3. (if you have 8.1 or higher)\n>\n> 8.3.1 - foo3 is being used though in presence of both foo1 and foo2, \n> so I'd suppose that it's a better choice even with bitmap index scan \n> available...\n>\nPostgreSQL can also partially use a multi-column index. For example, if \nyou dropped your index on (a) Pg could use index (a,b) to help with \nqueries for `a'. However, the index would be slower than an index on a \nalone would be.\n\nSee:\n\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html\n\nAs usual, the best answer is really \"do some testing with your queries, \nand with EXPLAIN ANALYZE, and see what works best\". Test with inserts \ntoo, because it's likely that the cost of updating each of the three \nindexes isn't equal.\n\nIt might also be worth looking into using partial indexes if some of \nyour data is \"hotter\" than others and perhaps more worth the index \nupdate cost.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 27 Mar 2008 00:27:24 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" }, { "msg_contents": "\n\n>>> create index foo1 on bla (a);\n>>> create index foo2 on bla (b);\n>>> create index foo3 on bla (a,b);\n\n\tYou say you need faster INSERT performance. Getting rid of some indexes \nis a way, but can you tell a bit more about your hardware setup ?\n\tFor instance, if you only have one HDD, put an extra HDD in the machine, \nand put the database on it, but leave the pg_xlog on the OS's disk. Or the \nreverse, depending on which disk is faster, and other factors. Since heavy \nINSERTs mean heavy log writing traffic, this almost doubles your write \nbandwidth for the cost of a disk. Cheap and efficient. 
You can also put \nthe indexes on a third disk, but separating database and log on 2 disks \nwill give you the most benefits.\n\tIf you already have a monster hardware setup, though...\n", "msg_date": "Wed, 26 Mar 2008 17:24:46 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 1-/2-dimensional indexes for common columns, rationale?" } ]
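Before dropping any of the three indexes discussed above it is worth measuring how often each one is really chosen and how big it is, since the insert/update penalty roughly follows index size. A sketch, reusing the example names foo1/foo2/foo3 on table bla from the thread; the partial-index condition at the end is purely illustrative:

-- Scan counts per index since the statistics were last reset:
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
  FROM pg_stat_user_indexes
 WHERE relname = 'bla';

-- Physical size of each index:
SELECT indexrelname, pg_size_pretty(pg_relation_size(indexrelid))
  FROM pg_stat_user_indexes
 WHERE relname = 'bla';

-- If most queries on b only touch a small, "hot" slice of the data,
-- a partial index keeps the bulk of the inserts cheap:
CREATE INDEX foo2_hot ON bla (b) WHERE b > 1000000;

A reversible way to test a candidate drop is BEGIN; DROP INDEX foo1; then EXPLAIN ANALYZE the usual queries; ROLLBACK; -- note that this holds an exclusive lock on bla for the duration, so do it on a quiet system or a copy.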
[ { "msg_contents": "Dear Sirs,\n \n I am doing this project of optimizing pg-sql query engine with compressed annealing. I would like to know if any deficiency in existing GEQO. If there are any TODO items remaining in GEQO kindly brief about the same. Awaiting discussions on this.\n \n GN\n\n \n---------------------------------\nNever miss a thing. Make Yahoo your homepage.\nDear Sirs,   I am doing this project of optimizing pg-sql query engine with compressed annealing. I would like to know if any deficiency in existing GEQO. If there are any TODO items remaining in GEQO kindly brief about the same. Awaiting discussions on this.   GN\nNever miss a thing. Make Yahoo your homepage.", "msg_date": "Wed, 26 Mar 2008 11:36:55 -0700 (PDT)", "msg_from": "Gopinath Narasimhan <[email protected]>", "msg_from_op": true, "msg_subject": "Query Optimization" } ]
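For comparing the existing GEQO against an alternative such as compressed annealing, the current implementation can at least be exercised and measured from SQL before touching the source (it lives under src/backend/optimizer/geqo/). The settings below are only a sketch and the threshold value is arbitrary:

-- Push GEQO onto smaller join problems so its plans can be compared
-- against the exhaustive planner on the same queries:
SET geqo = on;
SET geqo_threshold = 4;   -- default 12: GEQO normally kicks in at 12+ FROM items
SET geqo_effort = 5;      -- 1..10, trades planning time for plan quality
-- EXPLAIN ANALYZE the many-way join of interest here ...
SET geqo = off;           -- then repeat with the standard planner

Plan cost, plan runtime and planning time measured both ways give a baseline that any replacement search strategy would have to beat.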
[ { "msg_contents": "I'm not a DBA....but I play one at my office.\nI also have a hand in system administration, development and stairwell\nsweeping.\nSmall shop...many hats per person.\n\nWe have a postgresql (v8.2.X) database with about 75 gigabytes of\ndata.....almost half of it is represented by audit tables (changes made to\nthe other tables).\nIt's running on a 8-cpu Sun box with 32 gig of ram (no other processes\nactively run on the database server)\nThe database itself resides on a Pillar SAN (Axiom) and is ZFS mounted to\nthe database box.\nWe have upwards of 3000 active users hitting the system (via web/app\nservers) to the tune of (at peak times) of about 75-100 database\ntransactions per second (many inserts/updates but just as many reads)\nWe have a couple of un-tuned queries that can be kicked off that can take\nmultiple minutes to run.....(specifically ones that rummage through that\naudit data)\n\nThe other day, somebody kicked off 4 of these bad boys and other non-related\ntransactions started taking much longer....inserts, updates, selects...all\nmuch longer than normal......(there was no table/row locking issue that we\ncould locate). propagated to the point where the system was nearly\nuseless...the load average jumped up to almost 2.0 (normally hovers around\n.5) and all these queries were just taking too long...users started timing\nout...calls started....etc....\n\nToday...a single expensive query brought the load average up to nearly 2\nand started slowing down other transactions........\n\nis this 'normal'? (loaded question I know)\nShould I be looking to offload expensive reporting queries to read-only\nreplicants of my database?\nIs this a symptom of slow disk? imporoperly tuned postgres settings? bad\nchoice of OS, hardware, storage?\nIs this a sign of disk contention?\nHow does CPU load come into play?\n\nAny thoughts would be helpful.....\n\nPrince\n\nI'm not a DBA....but I play one at my office.I also have a hand in system administration, development and stairwell sweeping.Small shop...many hats per person.We have a postgresql (v8.2.X) database with about 75 gigabytes of data.....almost half of it is represented by audit tables (changes made to the other tables).\nIt's running on a 8-cpu Sun box with 32 gig of ram (no other processes actively run on the database server)The database itself resides on a Pillar SAN (Axiom) and is ZFS mounted to the database box.We have upwards of 3000 active users hitting the system (via web/app servers) to the tune of (at peak times) of about 75-100 database transactions per second (many inserts/updates but just as many reads)\nWe have a couple of un-tuned queries that can be kicked off that can take multiple minutes to run.....(specifically ones that rummage through that audit data)The other day, somebody kicked off 4 of these bad boys and other non-related transactions started taking much longer....inserts, updates, selects...all much longer than normal......(there was no table/row locking issue that we could locate).  propagated to the point where the system was nearly useless...the load average jumped up to almost 2.0 (normally hovers around .5) and all these queries were just taking too long...users started timing out...calls started....etc....\nToday...a single expensive query brought the load average up to nearly 2  and started slowing down other transactions........is this 'normal'? 
(loaded question I know)Should I be looking to offload expensive reporting queries to read-only replicants of my database?\nIs this a symptom of slow disk? imporoperly tuned postgres settings? bad choice of OS, hardware, storage?Is this a sign of disk contention?How does CPU load come into play?Any thoughts would be helpful.....\nPrince", "msg_date": "Wed, 26 Mar 2008 14:48:01 -0500", "msg_from": "\"p prince\" <[email protected]>", "msg_from_op": true, "msg_subject": "how can a couple of expensive queries drag my system down?" }, { "msg_contents": "On Wednesday 26 March 2008, \"p prince\" <[email protected]> wrote:\n> Is this a sign of disk contention?\n\nYes.\n\n> How does CPU load come into play?\n\nProcesses waiting for disk I/O generally show up as load.\n\n-- \nAlan\n", "msg_date": "Wed, 26 Mar 2008 13:19:06 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system down?" }, { "msg_contents": "On Wed, Mar 26, 2008 at 1:48 PM, p prince <[email protected]> wrote:\n> is this 'normal'? (loaded question I know)\n> Should I be looking to offload expensive reporting queries to read-only\n> replicants of my database?\n\nYes, definitely look into setting up something like a slony slave\nthat's used for reporting queries. The nice thing about this setup is\nyou only need to replicate the tables you run reports against.\n", "msg_date": "Wed, 26 Mar 2008 14:31:32 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system down?" }, { "msg_contents": "\n> is this 'normal'? (loaded question I know)\n\n\tDepends. If you are on the edge, disk-wise, yes a big fat query can push \nit over and make it fall.\n\n> Should I be looking to offload expensive reporting queries to read-only\n> replicants of my database?\n\n\tYou could do this, especially if the heavy queries involve reading \ngigabytes of data from disk (as reporting queries like to do). In that \ncase, you can even use a cheap machine with cheap disks for the slave \n(even striped RAID) since data is duplicated anyway and all that matters \nis megabytes/second, not IOs/second.\n\n> Is this a symptom of slow disk?\n\n\tvmstat will tell you this.\n\tIf iowait time goes through the roof, yes it's disk bound.\n\tIf cpu use goes 100%, then it's cpu bound.\n\n> imporoperly tuned postgres settings? bad\n\n\tAlso possible, you can try EXPLAIN of the problematic queries.\n\n> choice of OS, hardware, storage?\n\n\tDepends on how your SAN handles load. No idea about that.\n\n> Is this a sign of disk contention?\n\n\tMost probable.\n\n> How does CPU load come into play?\n\n\tWith 8 CPUs, less likely.\n\t(Your problem query can swamp at most 1 CPU, so if the machine grinds \nwith still 7 other cores available for the usual, it probably isn't \ncpu-bound)\n\n", "msg_date": "Wed, 26 Mar 2008 22:58:16 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system down?" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, Mar 26, 2008 at 1:48 PM, p prince <[email protected]> wrote:\n>> is this 'normal'? (loaded question I know)\n>> Should I be looking to offload expensive reporting queries to read-only\n>> replicants of my database?\n> \n> Yes, definitely look into setting up something like a slony slave\n> that's used for reporting queries. 
The nice thing about this setup is\n> you only need to replicate the tables you run reports against.\n> \n\nI would look at fixing the slow queries so that they aren't a problem first.\n\nI'm sure if you send in your queries and table defs you can get some \nuseful feedback here.\n\nIf there is no way of improving them then look at a reporting slave.\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Thu, 27 Mar 2008 14:39:13 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system\n down?" }, { "msg_contents": "On Wed, Mar 26, 2008 at 10:09 PM, Shane Ambler <[email protected]> wrote:\n> Scott Marlowe wrote:\n> > On Wed, Mar 26, 2008 at 1:48 PM, p prince <[email protected]> wrote:\n> >> is this 'normal'? (loaded question I know)\n> >> Should I be looking to offload expensive reporting queries to read-only\n> >> replicants of my database?\n> >\n> > Yes, definitely look into setting up something like a slony slave\n> > that's used for reporting queries. The nice thing about this setup is\n> > you only need to replicate the tables you run reports against.\n> >\n>\n> I would look at fixing the slow queries so that they aren't a problem first.\n\nI'm not sure you're reading the same thread as me. Or something.\nI've had reporting queries that took the better part of an hour to\nrun, and this was completely normal. When you're running millions of\nrows against each other for reporting queries it's not unusual to blow\nout the cache.\n\nMaybe the queries are inefficient, and maybe they're not. But one\nshould not be running reporting queries on a live transactional\ndatabase.\n\n\n>\n> I'm sure if you send in your queries and table defs you can get some\n> useful feedback here.\n>\n> If there is no way of improving them then look at a reporting slave.\n>\n>\n> --\n>\n> Shane Ambler\n> pgSQL (at) Sheeky (dot) Biz\n>\n> Get Sheeky @ http://Sheeky.Biz\n>\n", "msg_date": "Wed, 26 Mar 2008 22:57:11 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system down?" }, { "msg_contents": "\nOn Mar 26, 2008, at 3:31 PM, Scott Marlowe wrote:\n> On Wed, Mar 26, 2008 at 1:48 PM, p prince <[email protected]> \n> wrote:\n>> is this 'normal'? (loaded question I know)\n>> Should I be looking to offload expensive reporting queries to read- \n>> only\n>> replicants of my database?\n>\n> Yes, definitely look into setting up something like a slony slave\n> that's used for reporting queries. The nice thing about this setup is\n> you only need to replicate the tables you run reports against.\n\nFor simple two-node (i.e. no cascaded replication) I'd suggest looking \ninto Londiste. It's loads easier to wrap your head around and it's \nextremely easy to add/remove tables from replication as it doesn't \ndeal with \"table sets\" like Slony does.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Thu, 27 Mar 2008 10:13:37 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how can a couple of expensive queries drag my system down?" } ]
[ { "msg_contents": "On Wed, 26 Mar 2008 12:49:56 -0800\nVinubalaji Gopal <[email protected]> wrote:\n\n> The big table has never been reindexed and has a primary, unique key\n> with btree index and one foreign key constraint.\n\nThe slowness is likely attributed to Vacuum's use of I/O. When vacuum\nis running what does iostat -k 10 say?\n\nJoshua D. Drake\n\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Wed, 26 Mar 2008 13:02:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum in Postgresql 8.0.x slowing down the database" }, { "msg_contents": "Hey all,\n I had posted sometime back asking for the best way to perform vacuum\nwith a lower priority - I did tune it up to a lower priority\nand still noticed that the other database queries are slowing down\nwith a vacuum on one big table. I also tried to upgrade Postgresql to\n8.0.15 as suggested and I still could reproduce the problem.\n\nIt happens when I try to vacuum one particular table which has 9.5\nmillion rows and all the other inserts/selects are slowing down by a\nfactor of 10 times. I am using\nvacuum_cost_delay = 100\nvacuum_cost_limit = 200\n\nEven if I cancel the vacuum operation or let the vacuum complete - the\nslowdown continues to persist till I restart my application.\n\nWhats the best way to analyze the reason Postgresql is slowing down? I\nhad a look at pg_locks (did not know what to look for) and also tried\nstarting postgresql with the debug option using: postmaster -d 5\n-D /var/pgdata (saw a lot of output including the query being performed\nbut could not gather anything useful) \n\nThe big table has never been reindexed and has a primary, unique key\nwith btree index and one foreign key constraint.\n\n--\nVinu\n", "msg_date": "Wed, 26 Mar 2008 12:49:56 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "vacuum in Postgresql 8.0.x slowing down the database" }, { "msg_contents": "On Wed, 26 Mar 2008 13:02:13 -0700\n\"Joshua D. Drake\" <[email protected]> wrote:\n\n\n> The slowness is likely attributed to Vacuum's use of I/O. When vacuum\n> is running what does iostat -k 10 say?\n\nSeems to be higher than normal - here is the output with vacuum run\nwithout the other queries and the default vacuum taking ~1 hr:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 13.30 0.00 4.50 25.91 0.00 56.29\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nmd2 3356.94 2005.59 12945.45 20076 129584\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 16.20 0.00 6.32 24.89 0.00 52.59\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nmd2 461.70 667.20 1512.00 6672 15120\n\n\nI don't know if the output helps much since the vacuum took some time\nand I lost more than half of my iostat -k screen output. (I scrolled up\n- but got only some of the data)\n\nIf vacuum does affect the io what are the ways to reduce the io during\nvacuum (the delay and cost parameter did not help that much - should I\nconsider reducing the cost even further)? 
Should I consider\npartitioning the table?\n\nThank you.\n\n--\nVinu\n", "msg_date": "Thu, 27 Mar 2008 23:28:33 -0800", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum in Postgresql 8.0.x slowing down the database" } ]
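One more knob relevant to the cost-delay discussion above: both settings can be changed just for the session that runs the manual vacuum, so a nightly job can be throttled much harder than everything else. The numbers below are only an illustration of "more throttled than delay 100 / limit 200", not a recommendation, and 'bigtable' stands in for the real 9.5-million-row table:

-- Sleep the same 100 ms, but after a quarter as much work,
-- i.e. roughly a quarter of the I/O rate of the settings quoted above:
SET vacuum_cost_delay = 100;
SET vacuum_cost_limit = 50;
VACUUM VERBOSE ANALYZE bigtable;

The trade-off is a vacuum that runs several times longer, during which dead rows keep accumulating. If that becomes unacceptable on a table this size, splitting it into inherited child tables (partitioning) so that only the active partition needs frequent vacuuming is the usual next step.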
[ { "msg_contents": "Hi\n\nI have a table with around 10 million entries The webpage rendered hits\nat most 200 records which are distributed well in the 10m with an average\nof 2 \"references\" pr. entry.\n\nIs there anyway to speed this query more up than allready. .. yes running\nit subsequenctly it is blazingly fast, but with view of around 200/10m we\nmost\noften dont hit the same query again.\n\n\n# explain analyze SELECT \"me\".\"created\", \"me\".\"created_initials\",\n\"me\".\"updated\", \"me\".\"updated_initials\", \"me\".\"start_time\",\n\"me\".\"end_time\", \"me\".\"notes\", \"me\".\"id\", \"me\".\"sequence_id\",\n\"me\".\"database\", \"me\".\"name\", \"numbers\".\"reference_id\",\n\"numbers\".\"evidence\" FROM \"reference\" \"me\" LEFT JOIN \"number\" \"numbers\" ON\n( \"numbers\".\"reference_id\" = \"me\".\"id\" ) WHERE ( \"me\".\"sequence_id\" IN (\n34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,\n274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,\n371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,\n1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,\n5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,\n5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,\n4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,\n4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,\n4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,\n4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,\n2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,\n4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,\n3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,\n1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,\n688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,\n4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )\nORDER BY \"ecnumbers\".\"reference_id\";\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n ---------------------------------------------------------------------------------------------\n Sort (cost=56246.18..56275.20 rows=11606 width=299) (actual\ntime=2286.900..2287.215 rows=389 loops=1)\n Sort Key: numbers.reference_id\n -> Nested Loop Left Join (cost=388.48..55462.63 rows=11606 width=299)\n(actual time=475.071..2284.502 rows=389 loops=1)\n -> Bitmap Heap Scan on reference me (cost=388.48..23515.97\nrows=11606 width=191) (actual 
time=451.245..1583.966 rows=389\nloops=1)\n Recheck Cond: (sequence_id = ANY\n('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121\n 090,1121074,688659,688650}'::integer[]))\n -> Bitmap Index Scan on reference_seq_idx \n(cost=0.00..385.58 rows=11606 width=0) (actual\ntime=422.691..422.691 rows=450 loops=1)\n Index Cond: (sequence_id = ANY\n('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121\n 090,1121074,688659,688650}'::integer[]))\n -> Index Scan using ecn_ref_idx on number eumbers \n(cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795\nrows=0 loops=389)\n Index Cond: (numbers.reference_id = me.id)\n Total runtime: 2287.701 ms\n(10 rows)\n\n.. subsequent run: 32.367ms\n\nOn a X4600 server with 32GB of ram and Equalogic iSCSI SAN attached.\n\nJesper\n\n\n-- \nJesper Krogh\n\n", "msg_date": "Thu, 27 Mar 2008 16:34:28 +0100 (CET)", "msg_from": "\"Jesper Krogh\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"Slow\" query or just \"Bad hardware\"? 
" }, { "msg_contents": "On Thu, 27 Mar 2008, Jesper Krogh wrote:\n> # explain analyze SELECT \"me\".\"created\", \"me\".\"created_initials\",\n> \"me\".\"updated\", \"me\".\"updated_initials\", \"me\".\"start_time\",\n> \"me\".\"end_time\", \"me\".\"notes\", \"me\".\"id\", \"me\".\"sequence_id\",\n> \"me\".\"database\", \"me\".\"name\", \"numbers\".\"reference_id\",\n> \"numbers\".\"evidence\" FROM \"reference\" \"me\" LEFT JOIN \"number\" \"numbers\" ON\n> ( \"numbers\".\"reference_id\" = \"me\".\"id\" ) WHERE ( \"me\".\"sequence_id\" IN (\n> 34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,\n> 274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,\n> 371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,\n> 1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,\n> 5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,\n> 5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,\n> 4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,\n> 4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,\n> 4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,\n> 4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,\n> 2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,\n> 4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,\n> 3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,\n> 1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,\n> 688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,\n> 4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )\n> ORDER BY \"ecnumbers\".\"reference_id\";\n\nLooks like a very reasonable performance, given that the database is \nhaving to seek nearly a thousand times to collect the data from where it \nis scattered over the disc. We had a thread a while ago about using aio or \nfadvise to speed this sort of thing up (with some really really good \ninitial test results). Greg, is this still in active consideration?\n\nYou don't say if there is much write traffic, and what sort of order the \ndata gets written to the tables. It may be a significant benefit to \ncluster the tables on sequence id or reference id. If you have lots of \nwrite traffic make sure you recluster every now and again. Experiment with \nthat, and see if it helps.\n\nMatthew\n\n-- \nThe only secure computer is one that's unplugged, locked in a safe,\nand buried 20 feet under the ground in a secret location...and i'm not\neven too sure about that one. --Dennis Huges, FBI\n", "msg_date": "Thu, 27 Mar 2008 16:06:03 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"? " }, { "msg_contents": "Hm, so this table has 10 million entries and it does not fit in 32GB of \nRAM ?\nCould you investigate :\n- average size of rows in both tables\n- a quick description of your table columns especially the average size of \nyour TEXT fields, especially the large one(s) like comments etc (don't \nbother about INTs unless you have like 50 int columns)\n- which fields get toasted, which don't, number of accesses to TOASTed \nfields in this query, could add 1 seek per field per fetched row if \nthey're not cached\n- other stuff in your database that is using those gigabytes of RAM ? 
\n(indexes which are used often do count)\n\nI would tend to think that you are not going to display 200 kilobytes of \ntext on your listing webpage, most likely something like 100 or 200 bytes \nof text from each row, right ? If that is the case, 10M rows * 200 bytes = \n2G to keep cached in RAM, plus overhead, so it should work fast.\n\nYou may want to partition your table in two, one which holds the fields \nwhich are often used in bulk, search, and listings, especially when you \nlist 200 rows, and the other table holding the large fields which are only \ndisplayed on the \"show details\" page.\n\nNote that one (or several) large text field will not kill your \nperformance, postgres will store that offline (TOAST) for you without you \nneeding to ask, so your main table stays small and well cached. Of course \nif you grab that large 10 kB text field 200 times to display the first 80 \ncharachers of it followed by \"...\" in your listing page, then, you're \nscrewed ;) that's one of the things to avoid.\n\nHowever, if your \"comments\" field is small enough that PG doesn't want to \nTOAST it offline (say, 500 bytes), but still represents the bulk of your \ntable size (for instance you have just a few INTs beside that that you \nwant to quickly search on) then you may tell postgres to store the large \nfields offline (EXTERNAL, check the docs), and also please enable \nautomatic compression.\n\nIf however, you have something like 200 INT columns, or a few dozens of \nsmall TEXTs, or just way lots of columns, TOAST is no help and in this \ncase you you must fight bloat by identifying which columns of your table \nneed to be accessed often (for searches, listing, reporting, etc), and \nwhich are not accessed often (ie. details page only, monthly reports, \netc). If you are lucky the column in the first group will form a much \nsmaller subset of your gigabytes of data. Then, you partition your table \nin two (vertically), so the small table stays small.\n\nEXAMPLE on a community site :\n\n- members table, huge, search is slow, join to forum tables to get user's \nname horribly slow because cache is full and it seeks\n- push members' profiles and other data that only shows up in the details \npage to a second table : main members table much smaller, fits in RAM now, \nsearch is fast, joins to members are also fast.\n\nWord to remember : working set ;)\n\n", "msg_date": "Thu, 27 Mar 2008 17:45:08 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"?" }, { "msg_contents": "\nAlso, sometimes offine TOASTing is evil :\nSay you have a forum, you want the posts table to be CLUSTER'ed on \n(topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek, \nnot 30 seeks. But CLUSTER doesn't touch the data that has been pushed \noffline in the toast table. So, in that case, it can pay (big time \nactually) to disable toasting, store the data inline, and benefit from \ncluster.\n\nSo basically :\n\nData that is seldom used or used only in queries returning/examining 1 row \nbu otherwise eats cache -> push it away (toast or partition)\nData that is used very often in queries that return/examine lots of rows, \nespecially if said rows are in sequence (cluster...) -> keep it inline\n\n\n", "msg_date": "Thu, 27 Mar 2008 17:57:06 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"?" 
}, { "msg_contents": "In response to \"Jesper Krogh\" <[email protected]>:\n\n> Hi\n> \n> I have a table with around 10 million entries The webpage rendered hits\n> at most 200 records which are distributed well in the 10m with an average\n> of 2 \"references\" pr. entry.\n> \n> Is there anyway to speed this query more up than allready. .. yes running\n> it subsequenctly it is blazingly fast, but with view of around 200/10m we\n> most\n> often dont hit the same query again.\n\nWhile all the other advice is good, what you really need to do to address\nthis is figure out what's in your cache and whether it's the right things.\nOnce you _know_ that (and aren't just speculating) you can start to use\nthe solutions that others have suggested to improve on the situation. If\nyou just start trying things at random, you'll probably figure it out\neventually anyway, but I'm assuming you'll want a direct route.\n\nSo, I'm going to repeat something that I say on this mailing list about\ntwice a month: install MRTG or some equivalent and start graphing critical\ndatabase statistics.\n\nIn your case, install the pg_buffercache addon and use it to track how\nmuch of your shared buffers each table is using. Based on your\ndescription of the problem, I doubt it will take more than a few days\nto have a clear view of exactly what's going on (i.e. you'll probably\nsee table X clearing table Y out of the buffers or something ...)\n\n From there you can start making all kinds of decisions:\n* Do you need more RAM overall?\n* Is enough RAM allocated to shared_buffers (you don't provide any\n details on config settings, so I can't guess at this)\n* Are there queries that can be better optimized to not fill up the\n cache with data that they don't really need?\n* Can switching up storage methods for TEXT fields help you out?\n* Are your demands simply to high for what a SAN can provide and\n you'll be better off with a big RAID-10 of SCSI disks?\n\nHTH\n\n> # explain analyze SELECT \"me\".\"created\", \"me\".\"created_initials\",\n> \"me\".\"updated\", \"me\".\"updated_initials\", \"me\".\"start_time\",\n> \"me\".\"end_time\", \"me\".\"notes\", \"me\".\"id\", \"me\".\"sequence_id\",\n> \"me\".\"database\", \"me\".\"name\", \"numbers\".\"reference_id\",\n> \"numbers\".\"evidence\" FROM \"reference\" \"me\" LEFT JOIN \"number\" \"numbers\" ON\n> ( \"numbers\".\"reference_id\" = \"me\".\"id\" ) WHERE ( \"me\".\"sequence_id\" IN (\n> 34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,\n> 274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,\n> 371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,\n> 1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,\n> 5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,\n> 5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,\n> 4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,\n> 4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,\n> 4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,\n> 4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,\n> 2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,\n> 4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,\n> 3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,\n> 1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,\n> 688836, 688763, 688788, 1056859, 2386006, 2386015, 
2386023, 4265832,\n> 4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )\n> ORDER BY \"ecnumbers\".\"reference_id\";\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n --\n> ---------------------------------------------------------------------------------------------\n> Sort (cost=56246.18..56275.20 rows=11606 width=299) (actual\n> time=2286.900..2287.215 rows=389 loops=1)\n> Sort Key: numbers.reference_id\n> -> Nested Loop Left Join (cost=388.48..55462.63 rows=11606 width=299)\n> (actual time=475.071..2284.502 rows=389 loops=1)\n> -> Bitmap Heap Scan on reference me (cost=388.48..23515.97\n> rows=11606 width=191) (actual time=451.245..1583.966 rows=389\n> loops=1)\n> Recheck Cond: (sequence_id = ANY\n> ('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,11\n 21\n> 090,1121074,688659,688650}'::integer[]))\n> -> Bitmap Index Scan on reference_seq_idx \n> (cost=0.00..385.58 rows=11606 width=0) (actual\n> time=422.691..422.691 rows=450 loops=1)\n> Index Cond: (sequence_id = ANY\n> 
('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,11\n 21\n> 090,1121074,688659,688650}'::integer[]))\n> -> Index Scan using ecn_ref_idx on number eumbers \n> (cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795\n> rows=0 loops=389)\n> Index Cond: (numbers.reference_id = me.id)\n> Total runtime: 2287.701 ms\n> (10 rows)\n> \n> .. subsequent run: 32.367ms\n> \n> On a X4600 server with 32GB of ram and Equalogic iSCSI SAN attached.\n> \n> Jesper\n> \n> \n> -- \n> Jesper Krogh\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Thu, 27 Mar 2008 13:07:44 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"?" }, { "msg_contents": "You might try turning ³enable_bitmapscan² off, that will avoid the full\nindex scan and creation of the bitmap.\n\n- Luke\n\n\nOn 3/27/08 8:34 AM, \"Jesper Krogh\" <[email protected]> wrote:\n\n> Hi\n> \n> I have a table with around 10 million entries The webpage rendered hits\n> at most 200 records which are distributed well in the 10m with an average\n> of 2 \"references\" pr. entry.\n> \n> Is there anyway to speed this query more up than allready. .. 
yes running\n> it subsequenctly it is blazingly fast, but with view of around 200/10m we\n> most\n> often dont hit the same query again.\n> \n> \n> # explain analyze SELECT \"me\".\"created\", \"me\".\"created_initials\",\n> \"me\".\"updated\", \"me\".\"updated_initials\", \"me\".\"start_time\",\n> \"me\".\"end_time\", \"me\".\"notes\", \"me\".\"id\", \"me\".\"sequence_id\",\n> \"me\".\"database\", \"me\".\"name\", \"numbers\".\"reference_id\",\n> \"numbers\".\"evidence\" FROM \"reference\" \"me\" LEFT JOIN \"number\" \"numbers\" ON\n> ( \"numbers\".\"reference_id\" = \"me\".\"id\" ) WHERE ( \"me\".\"sequence_id\" IN (\n> 34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,\n> 274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,\n> 371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,\n> 1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,\n> 5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,\n> 5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,\n> 4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,\n> 4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,\n> 4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,\n> 4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,\n> 2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,\n> 4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,\n> 3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,\n> 1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,\n> 688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,\n> 4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )\n> ORDER BY \"ecnumbers\".\"reference_id\";\n> \n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> ------------------------------------------------------\n> \n> ------------------------------------------------------------------------------\n> ---------------\n> Sort (cost=56246.18..56275.20 rows=11606 width=299) (actual\n> time=2286.900..2287.215 rows=389 loops=1)\n> Sort Key: numbers.reference_id\n> -> Nested Loop Left Join (cost=388.48..55462.63 rows=11606 width=299)\n> (actual time=475.071..2284.502 rows=389 loops=1)\n> -> Bitmap Heap Scan on reference me (cost=388.48..23515.97\n> rows=11606 width=191) (actual time=451.245..1583.966 rows=389\n> loops=1)\n> Recheck Cond: (sequence_id = ANY\n> 
('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,3053\n> 46,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,51753\n> 7,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,\n> 5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,304868\n> 3,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970\n> 201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,31\n> 72776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,\n> 4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,483107\n> 5,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894\n> 286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,43\n> 82006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,96\n> 3535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,238601\n> 5,2386023,4265832,4231262,4265743,5302612,1121056,1121\n> 090,1121074,688659,688650}'::integer[]))\n> -> Bitmap Index Scan on reference_seq_idx\n> (cost=0.00..385.58 rows=11606 width=0) (actual\n> time=422.691..422.691 rows=450 loops=1)\n> Index Cond: (sequence_id = ANY\n> ('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,3053\n> 46,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,51753\n> 7,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,\n> 5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,304868\n> 3,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970\n> 201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,31\n> 72776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,\n> 4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,483107\n> 5,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894\n> 286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,43\n> 82006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,96\n> 3535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,238601\n> 5,2386023,4265832,4231262,4265743,5302612,1121056,1121\n> 090,1121074,688659,688650}'::integer[]))\n> -> Index Scan using ecn_ref_idx on number eumbers\n> (cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795\n> rows=0 loops=389)\n> Index Cond: (numbers.reference_id = me.id)\n> Total runtime: 2287.701 ms\n> (10 rows)\n> \n> .. subsequent run: 32.367ms\n> \n> On a X4600 server with 32GB of ram and Equalogic iSCSI SAN attached.\n> \n> Jesper\n> \n> \n> --\n> Jesper Krogh\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n\nRe: [PERFORM] \"Slow\" query or just \"Bad hardware\"? \n\n\nYou might try turning “enable_bitmapscan” off, that will avoid the full index scan and creation of the bitmap.\n\n- Luke\n\n\nOn 3/27/08 8:34 AM, \"Jesper Krogh\" <[email protected]> wrote:\n\nHi\n\nI have a table with around 10 million entries  The webpage rendered hits\nat most 200 records which are distributed well in the 10m with an average\nof 2 \"references\" pr. entry.\n\nIs there anyway to speed this query more up than allready. .. 
yes running\nit subsequenctly it is blazingly fast, but with view of around 200/10m we\nmost\noften dont hit the same query again.\n\n\n# explain analyze SELECT \"me\".\"created\", \"me\".\"created_initials\",\n\"me\".\"updated\", \"me\".\"updated_initials\", \"me\".\"start_time\",\n\"me\".\"end_time\", \"me\".\"notes\", \"me\".\"id\", \"me\".\"sequence_id\",\n\"me\".\"database\", \"me\".\"name\", \"numbers\".\"reference_id\",\n\"numbers\".\"evidence\" FROM \"reference\" \"me\" LEFT JOIN \"number\" \"numbers\" ON\n( \"numbers\".\"reference_id\" = \"me\".\"id\" ) WHERE ( \"me\".\"sequence_id\" IN (\n34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,\n274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,\n371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,\n1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,\n5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,\n5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,\n4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,\n4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,\n4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,\n4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,\n2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,\n4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,\n3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,\n1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,\n688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,\n4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )\nORDER BY \"ecnumbers\".\"reference_id\";\n                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n ---------------------------------------------------------------------------------------------\n Sort  (cost=56246.18..56275.20 rows=11606 width=299) (actual\ntime=2286.900..2287.215 rows=389 
loops=1)\n   Sort Key: numbers.reference_id\n   ->  Nested Loop Left Join  (cost=388.48..55462.63 rows=11606 width=299)\n(actual time=475.071..2284.502 rows=389 loops=1)\n         ->  Bitmap Heap Scan on reference me  (cost=388.48..23515.97\nrows=11606 width=191) (actual time=451.245..1583.966 rows=389\nloops=1)\n               Recheck Cond: (sequence_id = ANY\n('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121\n 090,1121074,688659,688650}'::integer[]))\n               ->  Bitmap Index Scan on reference_seq_idx\n(cost=0.00..385.58 rows=11606 width=0) (actual\ntime=422.691..422.691 rows=450 loops=1)\n                     Index Cond: (sequence_id = ANY\n('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121\n 090,1121074,688659,688650}'::integer[]))\n         ->  Index Scan using ecn_ref_idx on number eumbers\n(cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795\nrows=0 loops=389)\n               Index Cond: (numbers.reference_id = me.id)\n Total runtime: 2287.701 ms\n(10 rows)\n\n.. subsequent run: 32.367ms\n\nOn a X4600 server with 32GB of ram and Equalogic iSCSI SAN attached.\n\nJesper\n\n\n--\nJesper Krogh\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 27 Mar 2008 10:36:55 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"? 
" }, { "msg_contents": "PFC wrote:\n\n> Also, sometimes offine TOASTing is evil :\n> Say you have a forum, you want the posts table to be CLUSTER'ed on \n> (topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek, \n> not 30 seeks. But CLUSTER doesn't touch the data that has been pushed \n> offline in the toast table. So, in that case, it can pay (big time \n> actually) to disable toasting, store the data inline, and benefit from \n> cluster.\n\nThis claim is false -- CLUSTER does process the toast table along the\nmain heap.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 27 Mar 2008 14:54:09 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Slow\" query or just \"Bad hardware\"?" } ]
[ { "msg_contents": "I have a query which is\n\nprepare s_18 as select uid from user_profile where name like \n$1::varchar and isactive=$2 order by name asc limit 250;\n\nexplain analyze execute s_18 ('atxchery%','t');\n QUERY \n PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..7965.22 rows=250 width=14) (actual \ntime=301.714..3732.269 rows=1 loops=1)\n -> Index Scan using user_profile_name_key on user_profile \n(cost=0.00..404856.37 rows=12707 width=14) (actual \ntime=301.708..3732.259 rows=1 loops=1)\n Filter: (((name)::text ~~ $1) AND (isactive = $2))\n Total runtime: 3732.326 ms\n\nwithout prepared statements we get\n\nexplain analyze select uid from user_profile where name like 'foo%' \nand isactive='t' order by name asc limit 250;\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=293.89..294.08 rows=73 width=14) (actual \ntime=5.947..6.902 rows=250 loops=1)\n -> Sort (cost=293.89..294.08 rows=73 width=14) (actual \ntime=5.943..6.258 rows=250 loops=1)\n Sort Key: name\n Sort Method: top-N heapsort Memory: 38kB\n -> Bitmap Heap Scan on user_profile (cost=5.36..291.64 \nrows=73 width=14) (actual time=0.394..2.481 rows=627 loops=1)\n Filter: (isactive AND ((name)::text ~~ 'foo%'::text))\n -> Bitmap Index Scan on user_profile_name_idx \n(cost=0.00..5.34 rows=73 width=0) (actual time=0.307..0.307 rows=628 \nloops=1)\n Index Cond: (((name)::text ~>=~ 'foo'::text) AND \n((name)::text ~<~ 'fop'::text))\n\n\nThere are two indexes on it\n\n\"user_profile_name_idx\" UNIQUE, btree (name varchar_pattern_ops)\n\"user_profile_name_key\" UNIQUE, btree (name)\n\none for equality, one for like\n\nSo .... 
how to get the prepare to use the right index\n\nDave\n\n", "msg_date": "Thu, 27 Mar 2008 15:14:49 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "using like in a prepare doesnt' use the right index" }, { "msg_contents": "On Thu, Mar 27, 2008 at 03:14:49PM -0400, Dave Cramer wrote:\n> I have a query which is\n> \n> prepare s_18 as select uid from user_profile where name like \n> $1::varchar and isactive=$2 order by name asc limit 250;\n> \n> explain analyze execute s_18 ('atxchery%','t');\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..7965.22 rows=250 width=14) (actual \n> time=301.714..3732.269 rows=1 loops=1)\n> -> Index Scan using user_profile_name_key on user_profile \n> (cost=0.00..404856.37 rows=12707 width=14) (actual \n> time=301.708..3732.259 rows=1 loops=1)\n> Filter: (((name)::text ~~ $1) AND (isactive = $2))\n> Total runtime: 3732.326 ms\n> \n> without prepared statements we get\n> \n> explain analyze select uid from user_profile where name like 'foo%' \n> and isactive='t' order by name asc limit 250;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=293.89..294.08 rows=73 width=14) (actual \n> time=5.947..6.902 rows=250 loops=1)\n> -> Sort (cost=293.89..294.08 rows=73 width=14) (actual \n> time=5.943..6.258 rows=250 loops=1)\n> Sort Key: name\n> Sort Method: top-N heapsort Memory: 38kB\n> -> Bitmap Heap Scan on user_profile (cost=5.36..291.64 \n> rows=73 width=14) (actual time=0.394..2.481 rows=627 loops=1)\n> Filter: (isactive AND ((name)::text ~~ 'foo%'::text))\n> -> Bitmap Index Scan on user_profile_name_idx \n> (cost=0.00..5.34 rows=73 width=0) (actual time=0.307..0.307 rows=628 \n> loops=1)\n> Index Cond: (((name)::text ~>=~ 'foo'::text) AND \n> ((name)::text ~<~ 'fop'::text))\n> \n> \n> There are two indexes on it\n> \n> \"user_profile_name_idx\" UNIQUE, btree (name varchar_pattern_ops)\n> \"user_profile_name_key\" UNIQUE, btree (name)\n> \n> one for equality, one for like\n\nThis is behaving as designed because the planner transforms the\npredicate in the second query: Index Cond: (((name)::text ~>=~\n'foo'::text) AND ((name)::text ~<~ 'fop'::text)).\n\nIt cannot make this transformation for a prepared statement where the \nLIKE argument is a PREPARE parameter (the first query), since the\ntransformation depends on inspecting the actual string.\n\nYou could probably continue using prepared statements and make this\ntransformation yourself but you'll have to be careful about creating the\n'greater' string (see make_greater_string()).\n\nCome to think of it, it'd easier to just make a set returning function\nwhich executes this query, if you need to stick with prepare/execute.\n\nThanks,\n\nGavin\n", "msg_date": "Fri, 28 Mar 2008 15:53:47 +0100", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using like in a prepare doesnt' use the right index" } ]
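One way to act on the advice above while still using a prepared statement is to pass precomputed range bounds alongside the pattern, so the planner can use an ordinary index range scan while the LIKE keeps the exact semantics. This is only a sketch (the statement name s_18b is just illustrative): the upper bound has to be generated on the client side with the same care that make_greater_string() takes, and in a non-C locale a naive bound can miss rows, so it needs verifying against real data.

PREPARE s_18b (varchar, varchar, varchar, boolean) AS
SELECT uid
FROM user_profile
WHERE name LIKE $1          -- e.g. 'atxchery%'
  AND name >= $2            -- the prefix itself, e.g. 'atxchery'
  AND name <  $3            -- client-generated upper bound, e.g. 'atxcherz'
  AND isactive = $4
ORDER BY name ASC
LIMIT 250;

The extra range clauses are indexable even though they are parameters, so the unique index on name can be used at plan time while the pattern stays a parameter.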
[ { "msg_contents": "Hello,\n\nI need to install a new server for postgresql 8.3. It will run two \ndatabases, web server and some background programs. We already have a \nserver but it is becoming slow and we would like to have something that \nis faster. It is a cost sensitive application, and I would like to get \nyour opinion in some questions.\n\nThe database itself is an OLTP system. There are many smaller tables, \nand some bigger ones (biggest table with 1.2 million records, table size \n966MB, indexes size 790MB). In the bigger tables there are only a few \nrecords updated frequently, most of the other records are not changed. \nThe smaller tables are updated continuously.\n\nQuestion 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg docs \nsay that it is better to use FreeBSD because it can alter the I/O \npriority of processes dynamically. The latest legacy release is 6.3 \nwhich is probably more stable. However, folks say that 7.0 has superior \nperformance on the same hardware. Can I use 7.0 on a production server?\n\nQuestion 2. SCSI or SATA? I plan to buy a RocketRAID 3520 controller \nwith 8 SATA 2 disks. The operating system would be on another disk pair, \nconnected to the motherboard's controller. I wonder if I can get more \nperformance with SCSI, for the same amount of money? (I can spend about \n$1500 on the controller and the disks, that would cover 10 SATA 2 disks \nand the controller.)\n\nQuestion 3. FreeBSD 7.0 can use the ZFS file system. I suspect that UFS \n2 + soft updates will be better, but I'm not sure. Which is better?\n\nQuestion 4. How to make the partitions? This is the hardest question. \nHere is my plan:\n\n- the OS resides on 2 disks, RAID 1\n- the databases should go on 8 disks, RAID 0 + 1\n\nHowever, the transaction log file should be on a separate disk and maybe \nI could gain more performance by putting indexes on a separate drive, \nbut I do not want to reduce the number of disks in the RAID 0+1 array. \nShould I put indexes and transaction log on the RAID 1 array? Or should \nI invest a bit more money, add an SATA RAID controller with 16 channels \nand add more disks? Would it pay the bill? Another alternative is to put \nthe biggest tables on a separate array so that it will be faster when we \njoin these tables with other tables.\n\nI know that it is hard to answer without knowing the structure of the \ndatabases. :-( I can make tests with different configurations later, but \nI would like to know your opinion first - what should I try?\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Fri, 28 Mar 2008 10:05:58 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Planning a new server - help needed" }, { "msg_contents": "> Question 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg \n> docs say that it is better to use FreeBSD because it can alter the \n> I/O priority of processes dynamically. The latest legacy release is \n> 6.3 which is probably more stable. However, folks say that 7.0 has \n> superior performance on the same hardware. Can I use 7.0 on a \n> production server?\n\nFreeBSD 7.x is pretty stable, and it has the advantage of having the \nnew ULE and other things that can't be MFC'd to 6.x branch. And as a \nlong time FreeBSD enthusiast having Cisco sponsoring Dtrace, Nokia \nsponsoring scheduler development etc. 7.x is definitely in my opinion \nnow the branch to install and start following for ease of upgrading \nlater. 
Of course, as always check that your intended hardware is \nsupported.\n\nULE which is pretty much the key for performance boost in 7.x branch \nisn't yet the default scheduler, but will be in 7.1 and afterwards. \nThis means you have to roll custom kernel if you want to use ULE.\n\n> Question 3. FreeBSD 7.0 can use the ZFS file system. I suspect that \n> UFS 2 + soft updates will be better, but I'm not sure. Which is \n> better?\n\nFor now I'd choose between UFS+gjournal or plain UFS, although with \nbigger disks journaling is a boon compared to fsck'ing plain UFS \npartition. ZFS isn't yet ready for production I think, but it seems to \nbe getting there. This is opinion based on bug reports and discussions \nin stable and current mailing lists, not on personal testing though. \nMy experiences with gjournal have been positive so far.\n\nOn the drives and controller - I'm not sure whether SCSI/SAS will give \nany noticeable boost over SATA, but based on personal experience SCSI \nis still ahead on terms of drive reliability. Whatever technology I'd \nchoose, for production server getting decent battery backed controller \nwould be the start. And of course a controller that does the RAID's in \nhardware.\n\n-Reko \n\n", "msg_date": "Fri, 28 Mar 2008 12:00:41 +0200", "msg_from": "\"Reko Turja\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "> I need to install a new server for postgresql 8.3. It will run two\n> databases, web server and some background programs. We already have a\n> server but it is becoming slow and we would like to have something that\n> is faster. It is a cost sensitive application, and I would like to get\n> your opinion in some questions.\n>\n> The database itself is an OLTP system. There are many smaller tables,\n> and some bigger ones (biggest table with 1.2 million records, table size\n> 966MB, indexes size 790MB). In the bigger tables there are only a few\n> records updated frequently, most of the other records are not changed.\n> The smaller tables are updated continuously.\n>\n> Question 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg docs\n> say that it is better to use FreeBSD because it can alter the I/O\n> priority of processes dynamically. The latest legacy release is 6.3\n> which is probably more stable. However, folks say that 7.0 has superior\n> performance on the same hardware. Can I use 7.0 on a production server?\n\nI guess you mean postgresql 8.3.1? :-)\n\nI use FreeBSD 7 release on a 8-way HP DL360 G5 with a ciss controller.\nWorks out of the box and I haven't had any issue with 7.0 at all.\n\n> Question 2. SCSI or SATA? I plan to buy a RocketRAID 3520 controller\n> with 8 SATA 2 disks. The operating system would be on another disk pair,\n> connected to the motherboard's controller. I wonder if I can get more\n> performance with SCSI, for the same amount of money? (I can spend about\n> $1500 on the controller and the disks, that would cover 10 SATA 2 disks\n> and the controller.)\n\nSAS would probably be the way to go. I haven't tried the\nrocketraid-controller. I use the built-in p400i-controller on my\nservers using the ciss-driver. I've heard many positive remarks about\nareca.\n\n> Question 3. FreeBSD 7.0 can use the ZFS file system. I suspect that UFS\n> 2 + soft updates will be better, but I'm not sure. Which is better?\n\nI'd stick with ufs2 atm. 
There are some issues with zfs which probably\nhave been ironed out by now but ufs2 has been deployed for a longer\ntime. Performance-wise they are about the same.\n\n> Question 4. How to make the partitions? This is the hardest question.\n> Here is my plan:\n>\n> - the OS resides on 2 disks, RAID 1\n> - the databases should go on 8 disks, RAID 0 + 1\n\nIf you have enough disks raid-6 should perform almost as good as raid\n1+0. I've setup 11 disks in raid-6 plus one hotspare so I can get more\nspace out of it. \"Enough disks\" are approx. eight and up.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 28 Mar 2008 11:03:49 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "\n> I guess you mean postgresql 8.3.1? :-)\n> \nYep. Sorry.\n>> Question 3. FreeBSD 7.0 can use the ZFS file system. I suspect that UFS\n>> 2 + soft updates will be better, but I'm not sure. Which is better?\n>> \n>\n> I'd stick with ufs2 atm. There are some issues with zfs which probably\n> have been ironed out by now but ufs2 has been deployed for a longer\n> time. Performance-wise they are about the same.\n> \nThank you. I suspected the same but it was good to get positive \nconfirmation.\n>> Question 4. How to make the partitions? This is the hardest question.\n>> Here is my plan:\n>>\n>> - the OS resides on 2 disks, RAID 1\n>> - the databases should go on 8 disks, RAID 0 + 1\n>> \n>\n> If you have enough disks raid-6 should perform almost as good as raid\n> 1+0. \nHmm, I have heard that RAID 1 or RAID 1 + 0 should be used for \ndatabases, never RAID 5. I know nothing about RAID 6. I guess I must \naccept your suggestion since you have more experience than I have. :-) \nObviously, it would be easier to manage a single RAID 6 array.\n> I've setup 11 disks in raid-6 plus one hotspare so I can get more\n> space out of it. \"Enough disks\" are approx. eight and up.\n> \nThe RAID controller that I have selected can only handle 8 disks. I \nguess I need to find a different one with 16 channels and use more \ndisks. So are you saying that with all disks in a bigger RAID 6 array, I \nwill get the most out of the hardware? In that case, I'll try to get a \nbit more money from the management and build RAID 6 with 12 disks.\n\nI also feel that I need to use a separate RAID 1 array (I prefer \ngmirror) for the base system.\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Fri, 28 Mar 2008 11:47:27 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "On Fri, 28 Mar 2008, Laszlo Nagy wrote:\n\n> We already have a server but it is becoming slow and we would like to \n> have something that is faster.\n\nWhat's it slow at? Have you identified the bottlenecks and current \nsources of sluggish behavior? That sort of thing is much more informative \nto look into in regards to redesigning for new hardware than trivia like \ndisk layout. For all we know you're CPU bound.\n\n> The database itself is an OLTP system. There are many smaller tables, and \n> some bigger ones (biggest table with 1.2 million records, table size 966MB, \n> indexes size 790MB).\n\nThe total database size is the interesting number you left out here. And \nyou didn't mention how much RAM either. 
That ratio has a lot of impact on \nhow hard you'll push the disks.\n\n> Question 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg docs say \n> that it is better to use FreeBSD because it can alter the I/O priority of \n> processes dynamically.\n\nYou shouldn't make an OS decision based on a technical detail that small. \nI won't knock FreeBSD because it's a completely reasonable choice, but \nthere's no credible evidence it's a better performer for the workload you \nexpect than, say, Linux or even Solaris x64. (The benchmarks the FreeBSD \nteam posted as part of their 7.0 fanfare are not representative of real \nPostgreSQL performance, and are read-only as well).\n\nAll the reasonable OS choices here are close enough to one another (as \nlong as you get FreeBSD 7, earlier versions are really slow) that you \nshould be thinking in terms of reliability, support, and features rather \nthan viewing this from a narrow performance perspective. There's nothing \nabout what you've described that sounds like it needs bleeding-edge \nperformance to achieve. For reliability, I first look at how good the disk \ncontroller and its matching driver in the OS used is, which brings us to:\n\n> Question 2. SCSI or SATA? I plan to buy a RocketRAID 3520 controller with 8 \n> SATA 2 disks. The operating system would be on another disk pair, connected \n> to the motherboard's controller. I wonder if I can get more performance with \n> SCSI, for the same amount of money? (I can spend about $1500 on the \n> controller and the disks, that would cover 10 SATA 2 disks and the \n> controller.)\n\nHighpoint has traditionally made disk controllers that were garbage. The \n3520 is from a relatively new series of products from them, and it seems \nlike a reasonable unit. However: do you want to be be deploying your \nsystem on a new card with zero track record for reliability, and from a \ncompany that has never done a good job before? I can't think of any \nreason at all why you should take that risk.\n\nThe standard SATA RAID controller choices people suggest here are 3ware, \nAreca, and LSI Logic. Again, unless you're really pushing what the \nhardware is capable of these are all close to each other performance-wise \n(see http://femme.tweakblogs.net/blog/196/highpoint-rocketraid-3220.html \nfor something that include the Highpoint card). You should be thinking in \nterms of known reliability and stability when you select a database \ncontroller card, and Highpoint isn't even on the list of vendors to \nconsider yet by those standards.\n\nAs for SCSI vs. SATA, I collected up the usual arguments on both sides at \nhttp://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks\n\n> However, the transaction log file should be on a separate disk and maybe I \n> could gain more performance by putting indexes on a separate drive, but I do \n> not want to reduce the number of disks in the RAID 0+1 array.\n\nIf you're looking at 8+ disks and have a caching controller with a battery \nbackup, which appears to be your target configuration, there little reason \nto expect a big performance improvement from splitting the transaction log \nout onto a seperate disk. As you note, doing that will reduce the spread \nof disk for the database which may cost you more in performance than \nseperate transaction logs gain.\n\nIt is worth considering creating a seperate filesystem on the big array to \nhold the xlog data through, because that gives you more flexibility in \nterms of mount parameters there. 
For example, you can always turn off \natime updates on the transaction log filesystem, and in many cases the \nfilesystem journal updates can be optimized more usefully (the xlog \ndoesn't require them).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 28 Mar 2008 13:42:16 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "Laszlo Nagy wrote:\n>\n> Question 4. How to make the partitions? This is the hardest question. \n> Here is my plan:\n>\n> - the OS resides on 2 disks, RAID 1\n> - the databases should go on 8 disks, RAID 0 + 1\nMake sure you understand the difference between RAID 1+0 and RAID 0+1.. \nI suspect you'll end up going with 1+0 instead.\n\n-Dan\n\n\n", "msg_date": "Fri, 28 Mar 2008 12:35:56 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "Laszlo Nagy schrieb:\n[...]\n> The RAID controller that I have selected can only handle 8 disks. I \n> guess I need to find a different one with 16 channels and use more \n> disks. So are you saying that with all disks in a bigger RAID 6 array, I \n> will get the most out of the hardware? In that case, I'll try to get a \n> bit more money from the management and build RAID 6 with 12 disks.\n\nHere a good SATA-Controllers for 4/8/12/16-Disks:\nhttp://www.tekram.com/product2/product_detail.asp?pid=51\n\nStefan\n", "msg_date": "Fri, 28 Mar 2008 19:38:07 +0100", "msg_from": "Weinzierl Stefan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "Greg Smith wrote:\n> As for SCSI vs. SATA, I collected up the usual arguments on both sides \n> at http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks\n>\nWhy do you claim that 'More platters also means slower seeks\nand generally slower performance.'?\n\nOn the face of it, it should mean that the number of track step\noperations is reduced, even if the drive doesn't buffer a slice\nof tracks aross all platters (which would help if it did).\n\nI'm not entirely sure why the extra platters should really count\nas more moving parts since I think the platter assembly and\nhead assembly are both single parts in effect, albeit they will\nbe more massive with more platters. I'm not sure how much\nextra bearing friction that will mean, but it is reasonable that\nsome extra energy is going to be needed.\n\nRecent figures I've seen suggest that the increased storage\ndensity per platter, plus the extra number of platters, means\nthat the streaming speed of good 7200rpm SATA drives is\nvery close to that of good 15000rpm SAS drives - and you\ncan choose which bit of the disk to use to reduce seek time and\nmaximise transfer rate with the oversize drives. You can\nget about 100MB/s out of both technologies, streaming.\n\nIt may be worth considering an alternative approach. I suspect\nthat a god RAID1 or RAID1+0 is worthwhile for WAL, but\nyou might consider a RAID1 of a pair of SSDs for data. They\nwill use a lot of your budget, but the seek time is negligible so the\neffective usable performance is higher than you get with\nspinning disks - so you might trade a fancy controller with\nbattery-backed cache for straight SSD.\n\nI haven't done this, so YMMV. But the prices are getting\ninteresting for OLTP where most disks are massively\noversized. 
The latest Samsung and SanDisk are expensive\nin the UK but the Transcend 16GB TS16GSSD25S-S SATA\nis about $300 equiv - it can do 'only' 'up to' 28MB/s write and\nyou wouldn't want to put WAL on one, but sustaining\n15-20MB/s through random access to a real disk isn't\ntrivial. If average access is 10ms, and you write 100MB/s\nstreaming, then you have to ask yourself if you going to do\n80 or more seeks a second.\n\nJames\n\nJames\n\n\n\n\n", "msg_date": "Sat, 29 Mar 2008 10:34:07 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "\n> Why do you claim that 'More platters also means slower seeks\n> and generally slower performance.'?\n\n\tMore platters -> more heads -> heavier head assembly -> slower seek time\n\tBut..\n\tMore platters -> higher density -> less seek distance (in mm of head \nmovement) -> faster seek time\n\n\tAs usual, no clear-cut case, a real-life test would tell more interesting \nthings.\n\n> I'm not entirely sure why the extra platters should really count\n> as more moving parts since I think the platter assembly and\n> head assembly are both single parts in effect, albeit they will\n> be more massive with more platters. I'm not sure how much\n> extra bearing friction that will mean, but it is reasonable that\n> some extra energy is going to be needed.\n\n\tSince the bearings are only on one side of the axle (not both), a heavier \nplatter assembly would put more stress on the bearing if the disk is \nsubject to vibrations (like, all those RAID disks seeking together) which \nwould perhaps shorten its life. Everything with conditionals of course ;)\n\tI remember reading a paper on vibration from many RAID disks somewhere a \nyear or so ago, vibration from other disks seeking at the exact same time \nand in the same direction would cause resonances in the housing chassis \nand disturb the heads of disks, slightly worsening seek times and \nreliability. But, on the other hand, the 7 disks raided in my home storage \nserver never complained, even though the $30 computer case vibrates all \nover the place when they seek. Perhaps if they were subject to 24/7 heavy \ntorture, a heavier/better damped chassis would be a good investment.\n\n> It may be worth considering an alternative approach. I suspect\n> that a god RAID1 or RAID1+0 is worthwhile for WAL, but\n\n\tActually, now that 8.3 can sync to disk every second instead of at every \ncommit, I wonder, did someone do some enlightening benchmarks ? I remember \nbenchmarking 8.2 on a forum style load and using a separate disk for WAL \n(SATA, write cache off) made a huge difference (as expected) versus one \ndisk for everything (SATA, and write cache off). Postgres beat the crap \nout of MyISAM, lol.\n\tSeems like Postgres is one of the rare apps which gets faster and meaner \nwith every release, instead of getting slower and more bloated like \neveryone else.\n\n\tAlso, there is a thing called write barriers, which supposedly could be \nused to implement fsync-like behaviour without the penalty, if the disk, \nthe OS, the controller, and the filesystem support it (that's a lot of \nifs)...\n\n> I haven't done this, so YMMV. But the prices are getting\n> interesting for OLTP where most disks are massively\n> oversized. 
The latest Samsung and SanDisk are expensive\n> in the UK but the Transcend 16GB TS16GSSD25S-S SATA\n> is about $300 equiv - it can do 'only' 'up to' 28MB/s write and\n\n\tGigabyte should revamp their i-RAM to use ECC RAM of a larger capacity... \nand longer lasting battery backup...\n\tI wonder, how many write cycles those Flash drives can take before \nreliability becomes a problem...\n\n\n\n", "msg_date": "Sat, 29 Mar 2008 14:57:28 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "PFC wrote:\n>\n>> Why do you claim that 'More platters also means slower seeks\n>> and generally slower performance.'?\n>\n> More platters -> more heads -> heavier head assembly -> slower \n> seek time\nNote sure I've sen a lot of evidence of that in drive specifications!\n\n> Gigabyte should revamp their i-RAM to use ECC RAM of a larger \n> capacity... and longer lasting battery backup...\nYou would think a decent capacitor or rechargable button battery would \nbe enough to dump it to a flash memory.\nNo problem with flash wear then.\n> I wonder, how many write cycles those Flash drives can take before \n> reliability becomes a problem...\nHard to get data isn't it? I believe its hundreds of thousands to \nmillions now. Now each record in most OLTP\ntables is rewritten a few times unless its stuff that can go into temp \ntables etc, which should be elsewhere.\nIndex pages clearly get rewritten often.\n\nI suspect a mix of storage technologies will be handy for some time yet \n- WAL on disk, and temp tables on\ndisk with no synchronous fsync requirement.\n\nI think life is about to get interesting in DBMS storage. All good for \nus users.\n\nJames\n\n", "msg_date": "Sat, 29 Mar 2008 16:47:53 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "Laszlo Nagy wrote:\n> Question 1. We are going to use PostgreSQL 3.1 with FreeBSD. The pg docs\n> say that it is better to use FreeBSD because it can alter the I/O\n> priority of processes dynamically.\n\nWhere does it say that?\n", "msg_date": "Sun, 30 Mar 2008 00:49:05 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "On Sat, 29 Mar 2008, PFC wrote:\n\n>> Why do you claim that 'More platters also means slower seeks\n>> and generally slower performance.'?\n> \tMore platters -> more heads -> heavier head assembly -> slower seek \n> time\n\nI recall seeing many designs with more platters that have slower seek \ntimes in benchmarks, and I always presumed this was the reason. That's \nthe basis for that comment. I'll disclaim that section a bit.\n\n> Actually, now that 8.3 can sync to disk every second instead of at every \n> commit, I wonder, did someone do some enlightening benchmarks ?\n\nI've seen some really heavy workloads where using async commit helped \ngroup commits in a larger batches usefully, but I personally haven't found \nit to be all that useful if you're already got a caching controller to \naccelerate writes on the kinds of hardware most people have. 
It's a great \nsolution for situations without a usable write cache though.\n\n> Also, there is a thing called write barriers, which supposedly could be \n> used to implement fsync-like behaviour without the penalty, if the disk, \n> the OS, the controller, and the filesystem support it (that's a lot of \n> ifs)...\n\nThe database can't use fsync-like behavior for the things it calls fsync \nfor; it needs the full semantics. You're either doing the full operation, \nor you're cheating and it doesn't do what it's supposed to. Write \nbarriers aren't any improvement over a good direct I/O sync write setup \nfor the WAL. There may be some limited value to that approach for the \ndatabase writes at checkpoint time, but again there's a real fsync coming \nat the end of that and it's not satisfied until everything is on disk (or \nin a good disk controller cache).\n\n> Gigabyte should revamp their i-RAM to use ECC RAM of a larger \n> capacity... and longer lasting battery backup...\n\nI saw a rumor somewhere that they were close to having a new version of \nthat using DDR2 ready, which would make it pretty easy to have 8GB on \nthere.\n\n> I wonder, how many write cycles those Flash drives can take before \n> reliability becomes a problem...\n\nThe newer SSD drives with good write leveling should last at least as long \nas you'd expect a mechanical drive to, even in a WAL application. Lesser \ngrades of flash used as disk could be a problem though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 30 Mar 2008 00:45:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "Greg Smith wrote:\n\n>> Gigabyte should revamp their i-RAM to use ECC RAM of a larger\n>> capacity... and longer lasting battery backup...\n> \n> I saw a rumor somewhere that they were close to having a new version of\n> that using DDR2 ready, which would make it pretty easy to have 8GB on\n> there.\n\nI'm hoping - probably in vain - that they'll also include a CF/SD slot\nor some onboard flash so it can dump its contents to flash using the\nbattery backup.\n\nFor anybody wondering what the devices in question are, see:\n\nhttp://www.anandtech.com/storage/showdoc.aspx?i=2480\nhttp://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180\n\n--\nCraig Ringer\n", "msg_date": "Sun, 30 Mar 2008 13:39:59 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" }, { "msg_contents": "PFC wrote:\n> \n>> Why do you claim that 'More platters also means slower seeks\n>> and generally slower performance.'?\n> \n> More platters -> more heads -> heavier head assembly -> slower seek \n> time\n> But..\n> More platters -> higher density -> less seek distance (in mm of head \n> movement) -> faster seek time\n\nMore platters means more tracks under the read heads at a time, so \ngenerally *better* performance. All other things (like rotational \nspeed) being equal, of course.\n\n-- \nGuy Rouillier\n", "msg_date": "Sun, 30 Mar 2008 22:25:36 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planning a new server - help needed" } ]
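For reference, the "8.3 can sync to disk every second instead of at every commit" behaviour discussed above is asynchronous commit, and it can be tried per session or per transaction without editing postgresql.conf. A minimal sketch, assuming the application can afford to lose the last few hundred milliseconds of commits after a crash (database consistency itself is not at risk):

-- whole session
SET synchronous_commit = off;

-- or only for one batch of work
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... inserts / updates ...
COMMIT;

Whether this buys anything on top of a battery-backed write cache is workload-dependent, as noted above; wal_writer_delay controls how often the background flush happens.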
[ { "msg_contents": "Hello,\n\nI'm sorry if this has been discussed before, I couldn't find a discussion about \nthis problem.\n\nI've done the same query on a 8.2.5 database. The first one is prepared first \nand the other is executed directly.\n\nI understand why the there is such a great difference between the two ways of \nexecuting the query (postgres has no way of knowing that $1 will be quite big \nand that the result is not too big). \n\nI could just avoid using prepare statements, but this is done automatically with \nPerl's DBD::Pg. I know how to avoid using prepare statements (avoid having \nplaceholders in the statement), but that is not the prettiest of work arounds. \nIs there any planner hints I can use or anything happened or happening with 8.3 \nor later that I can use?\n\nThank you in advance and the following is the EXPLAIN ANALYZE for the queries.\n\nBest regards\n\n\nMartin Kjeldsen\n\n-----\n\nPREPARE test_x (INT) AS SELECT * FROM v_rt_trap_detailed WHERE guid > $1 ORDER BY created LIMIT 3000;\n\nEXPLAIN ANALYZE EXECUTE test_x (116505531);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1143870.95..1143878.45 rows=3000 width=267) (actual time=83033.101..83033.111 rows=4 loops=1)\n -> Sort (cost=1143870.95..1148074.36 rows=1681367 width=267) (actual time=83033.099..83033.103 rows=4 loops=1)\n Sort Key: rt_trap.created\n -> Merge Left Join (cost=0.00..829618.73 rows=1681367 width=267) (actual time=83032.946..83033.051 rows=4 loops=1)\n Merge Cond: (rt_trap.guid = tp.trap_guid)\n -> Index Scan using idx_rt_trap_guid on rt_trap (cost=0.00..81738.88 rows=1681367 width=192) (actual time=0.012..0.020 rows=4 loops=1)\n Index Cond: (guid > $1)\n Filter: (deleted IS NULL)\n -> Index Scan using idx_rt_trap_param_trap_guid on rt_trap_param tp (cost=0.00..706147.04 rows=4992440 width=79) (actual time=6.523..78594.750 rows=5044927 loops=1)\n Filter: (param_oid = 'snmpTrapOID.0'::text)\n Total runtime: 83033.411 ms\n(11 rows)\n\ndmon2=# EXPLAIN ANALYZE SELECT * FROM v_rt_trap_detailed WHERE guid > 116505531 ORDER BY created LIMIT 3000;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9866.45..9867.71 rows=504 width=267) (actual time=0.590..0.616 rows=12 loops=1)\n -> Sort (cost=9866.45..9867.71 rows=504 width=267) (actual time=0.587..0.596 rows=12 loops=1)\n Sort Key: rt_trap.created\n -> Nested Loop Left Join (cost=0.00..9843.83 rows=504 width=267) (actual time=0.157..0.531 rows=12 loops=1)\n -> Index Scan using idx_rt_trap_guid on rt_trap (cost=0.00..26.78 rows=504 width=192) (actual time=0.022..0.034 rows=12 loops=1)\n Index Cond: (guid > 116505531)\n Filter: (deleted IS NULL)\n -> Index Scan using idx_rt_trap_param_trap_guid on rt_trap_param tp (cost=0.00..18.36 rows=89 width=79) (actual time=0.006..0.009 rows=1 loops=12)\n Index Cond: (rt_trap.guid = tp.trap_guid)\n Filter: (param_oid = 'snmpTrapOID.0'::text)\n Total runtime: 0.733 ms\n(11 rows)\n", "msg_date": "Mon, 31 Mar 2008 13:00:13 +0200", "msg_from": "Martin Kjeldsen <[email protected]>", "msg_from_op": true, "msg_subject": "Bad prepare performance" }, { "msg_contents": "Le Monday 31 March 2008, Martin Kjeldsen a écrit :\n> Hello,\n>\n> I'm sorry if this has been discussed before, I couldn't find a discussion\n> about 
this problem.\n>\n> I've done the same query on a 8.2.5 database. The first one is prepared\n> first and the other is executed directly.\n>\n> I understand why the there is such a great difference between the two ways\n> of executing the query (postgres has no way of knowing that $1 will be\n> quite big and that the result is not too big).\n>\n> I could just avoid using prepare statements, but this is done automatically\n> with Perl's DBD::Pg. I know how to avoid using prepare statements (avoid\n> having placeholders in the statement), but that is not the prettiest of\n> work arounds. \n\nDid you saw this option :\n\n $sth = $dbh->prepare(\"SELECT id FROM mytable WHERE val = ?\",\n { pg_server_prepare => 0 });\n\nThen, *this* query will not be prepared by the server.\n\n> Is there any planner hints I can use or anything happened or \n> happening with 8.3 or later that I can use?\n>\n> Thank you in advance and the following is the EXPLAIN ANALYZE for the\n> queries.\n>\n> Best regards\n>\n>\n> Martin Kjeldsen\n>\n> -----\n>\n> PREPARE test_x (INT) AS SELECT * FROM v_rt_trap_detailed WHERE guid > $1\n> ORDER BY created LIMIT 3000;\n>\n> EXPLAIN ANALYZE EXECUTE test_x (116505531);\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>------------------------------- Limit (cost=1143870.95..1143878.45\n> rows=3000 width=267) (actual time=83033.101..83033.111 rows=4 loops=1) -> \n> Sort (cost=1143870.95..1148074.36 rows=1681367 width=267) (actual\n> time=83033.099..83033.103 rows=4 loops=1) Sort Key: rt_trap.created\n> -> Merge Left Join (cost=0.00..829618.73 rows=1681367 width=267)\n> (actual time=83032.946..83033.051 rows=4 loops=1) Merge Cond: (rt_trap.guid\n> = tp.trap_guid)\n> -> Index Scan using idx_rt_trap_guid on rt_trap \n> (cost=0.00..81738.88 rows=1681367 width=192) (actual time=0.012..0.020\n> rows=4 loops=1) Index Cond: (guid > $1)\n> Filter: (deleted IS NULL)\n> -> Index Scan using idx_rt_trap_param_trap_guid on\n> rt_trap_param tp (cost=0.00..706147.04 rows=4992440 width=79) (actual\n> time=6.523..78594.750 rows=5044927 loops=1) Filter: (param_oid =\n> 'snmpTrapOID.0'::text)\n> Total runtime: 83033.411 ms\n> (11 rows)\n>\n> dmon2=# EXPLAIN ANALYZE SELECT * FROM v_rt_trap_detailed WHERE guid >\n> 116505531 ORDER BY created LIMIT 3000; QUERY PLAN\n> ---------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>------------- Limit (cost=9866.45..9867.71 rows=504 width=267) (actual\n> time=0.590..0.616 rows=12 loops=1) -> Sort (cost=9866.45..9867.71\n> rows=504 width=267) (actual time=0.587..0.596 rows=12 loops=1) Sort Key:\n> rt_trap.created\n> -> Nested Loop Left Join (cost=0.00..9843.83 rows=504 width=267)\n> (actual time=0.157..0.531 rows=12 loops=1) -> Index Scan using\n> idx_rt_trap_guid on rt_trap (cost=0.00..26.78 rows=504 width=192) (actual\n> time=0.022..0.034 rows=12 loops=1) Index Cond: (guid > 116505531)\n> Filter: (deleted IS NULL)\n> -> Index Scan using idx_rt_trap_param_trap_guid on\n> rt_trap_param tp (cost=0.00..18.36 rows=89 width=79) (actual\n> time=0.006..0.009 rows=1 loops=12) Index Cond: (rt_trap.guid =\n> tp.trap_guid)\n> Filter: (param_oid = 'snmpTrapOID.0'::text)\n> Total runtime: 0.733 ms\n> (11 rows)\n\n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": 
"Mon, 31 Mar 2008 13:59:11 +0200", "msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad prepare performance" } ]
[ { "msg_contents": "Hi! I've got the following statement:\n\nSELECT DISTINCT sub.os,\n COUNT(sub.os) as total\nFROM (\n SELECT split_part(system.name, ' ', 1) as os\n FROM system, attacks\n WHERE 1 = 1\n AND timestamp >= 1205708400\n AND timestamp <= 1206313200\n AND attacks.source = system.ip_addr\n AND NOT attacks.source IN (\n SELECT exclusion\n FROM org_excl\n WHERE orgid=2\n )\n ) as sub\n GROUP BY sub.os\n ORDER BY total DESC LIMIT 5\n\nwhich has the following execution plan:\n\nLimit (cost=1831417.45..1831417.48 rows=5 width=34) (actual time=\n1599.915..1599.925 rows=3 loops=1)\n -> Unique (cost=1831417.45..1831417.75 rows=41 width=34) (actualtime=\n1599.912..1599.918 rows=3 loops=1)\n -> Sort (cost=1831417.45..1831417.55 rows=41 width=34) (actual\ntime=1599.911..1599.913 rows=3 loops=1)\n Sort Key: count(split_part((\"system\".name)::text, ''::text,\n1)), split_part((\"system\".name)::text, ' '::text, 1)\n -> HashAggregate (cost=1831415.63..1831416.35 rows=41\nwidth=34) (actual time=1599.870..1599.876 rows=3 loops=1)\n -> Nested Loop (cost=23.77..1829328.68 rows=417390\nwidth=34) (actual time=0.075..1474.260 rows=75609 loops=1)\n -> Index Scan using index_attacks_timestamp on\nattacks (cost=23.77..2454.92 rows=36300 width=11) (actual time=\n0.041..137.045 rows=72380 loops=1)\n Index Cond: ((\"timestamp\" >= 1205708400) AND\n(\"timestamp\" <= 1206313200))\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on org_excl\n(cost=0.00..23.75rows=6 width=32) (actual time=\n0.014..0.014 rows=0 loops=1)\n Filter: (orgid = 2)\n -> Index Scan using ip_addr_name_index on\n\"system\" (cost=0.00..50.15 rows=12 width=45) (actual\ntime=0.009..0.012rows=1 loops=72380)\n Index Cond: (\"outer\".source =\n\"system\".ip_addr)\n\nTotal runtime: 1600.056 ms\n\nthe NL (nested loop) is accountable for most of the total query time. Is\nthere any way to avoid the NL and/or speed up the query?\n\nThanks,\n\nFrits\n\nHi! 
", "msg_date": "Mon, 31 Mar 2008 13:57:08 +0200", "msg_from": "\"Frits Hoogland\" <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing query performance" } ]
[ { "msg_contents": "(Declaration of interest: I'm researching for a publication\non OLTP system design)\n\nI have a question about file writes, particularly on POSIX.\nThis arose while considering the extent to which cache memory\nand command queueing on disk\ndrives can help improve performance.\n\nIs it correct that POSIX requires that the updates to a single\nfile are serialised in the filesystem layer?\n\nSo, if we have a number of dirty pages to write back to a single\nfile in the database (whether a table or index) then we cannot\npass these through the POSIX filesystem layer into the TCQ/NCQ\nsystem on the disk drive, so it can reorder them?\n\nI have seen suggestions that on Solaris this can be relaxed.\n\nI *assume* that PostgreSQL's lack of threads or AIO and the\nsingle bgwriter means that PostgreSQL 8.x does not normally\nattempt to make any use of such a relaxation but could do so if the\nbgwriter fails to keep up and other backends initiate flushes.\n\nDoes anyone know (perhaps from other systems) whether it is\nvaluable to attempt to take advantage of such a relaxation\nwhere it is available?\n\nDoes the serialisation for file update apply in the case\nwhere the file contents have been memory-mapped and we\ntry an msync (or equivalent)?\n\n\n", "msg_date": "Mon, 31 Mar 2008 20:53:31 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "POSIX file updates" }, { "msg_contents": "James Mansion wrote:\n> (Declaration of interest: I'm researching for a publication\n> on OLTP system design)\n>\n> I have a question about file writes, particularly on POSIX.\n> This arose while considering the extent to which cache memory\n> and command queueing on disk\n> drives can help improve performance.\n>\n> Is it correct that POSIX requires that the updates to a single\n> file are serialised in the filesystem layer?\n\nIs there anything in POSIX that seems to suggest this? :-) (i.e. why are \nyou going under the assumption that the answer is yes - did you read \nsomething?)\n\n> So, if we have a number of dirty pages to write back to a single\n> file in the database (whether a table or index) then we cannot\n> pass these through the POSIX filesystem layer into the TCQ/NCQ\n> system on the disk drive, so it can reorder them?\n\nI don't believe POSIX has any restriction such as you describe - or if \nit does, and I don't know about it, then most UNIX file systems (if not \nmost file systems on any platform) are not POSIX compliant.\n\nLinux itself, even without NCQ, might choose to reorder the writes. If \nyou use ext2, the pressure to push pages out is based upon last used \ntime rather than last write time. It can choose to push out pages at any \ntime, and it's only every 5 seconds or so the the system task (bdflush?) \ntries to force out all dirty file system pages. NCQ exaggerates the \nsituation, but I believe the issue pre-exists NCQ or the SCSI equivalent \nof years past.\n\nThe rest of your email relies on the premise that POSIX enforces such a \nthing, or that systems are POSIX compliant. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Mon, 31 Mar 2008 16:02:53 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Mark Mielke wrote:\n> Is there anything in POSIX that seems to suggest this? :-) (i.e. 
why \n> are you going under the assumption that the answer is yes - did you \n> read something?)\n>\nIt was something somewhere on the Sun web site, relating to tuning Solaris\nfilesystems. Or databases. Or ZFS. :-(\n\nNeedless to say I can't find a search string that finds it now. I \nremember being surprised\nthough, since I wasn't aware of it either.\n> I don't believe POSIX has any restriction such as you describe - or if \n> it does, and I don't know about it, then most UNIX file systems (if \n> not most file systems on any platform) are not POSIX compliant.\nThat, I can believe.\n\n>\n> Linux itself, even without NCQ, might choose to reorder the writes. If \n> you use ext2, the pressure to push pages out is based upon last used \n> time rather than last write time. It can choose to push out pages at \n> any time, and it's only every 5 seconds or so the the system task \n> (bdflush?) tries to force out all dirty file system pages. NCQ \n> exaggerates the situation, but I believe the issue pre-exists NCQ or \n> the SCSI equivalent of years past.\nIndeed there do seem to be issues with Linux and fsync. Its one of \nthings I'm trying to get a\nhandle on as well - the relationship between fsync and flushes of \ncontroller and/or disk caches.\n>\n> The rest of your email relies on the premise that POSIX enforces such \n> a thing, or that systems are POSIX compliant. :-)\n>\nTrue. I'm hoping someone (Jignesh?) will be prompted to remember.\n\nIt may have been something in a blog related to ZFS vs other \nfilesystems, but so far I'm coming\nup empty in google. doesn't feel like something I imagined though.\n\nJames\n\n", "msg_date": "Mon, 31 Mar 2008 21:41:29 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "\n> I don't believe POSIX has any restriction such as you describe - or if \n> it does, and I don't know about it, then most UNIX file systems (if \n> not most file systems on any platform) are not POSIX compliant.\n>\nI suspect that indeed there are two different issues here in that the \nfile mutex relates to updates\nto the file, not passing the buffers through into the drive, which \nindeed might be delayed.\n\nBeen using direct io too much recently. :-(\n\n", "msg_date": "Mon, 31 Mar 2008 21:57:39 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Mark Mielke wrote:\n> Is there anything in POSIX that seems to suggest this? :-) (i.e. 
why \n> are you going under the assumption that the answer is yes - did you \n> read something?)\n>\nPerhaps it was just this:\n\nhttp://kevinclosson.wordpress.com/2007/01/18/yes-direct-io-means-concurrent-writes-oracle-doesnt-need-write-ordering/\n\nWhichof course isn't on Sun.\n\n", "msg_date": "Mon, 31 Mar 2008 22:28:14 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "On Mon, 31 Mar 2008, James Mansion wrote:\n\n> Is it correct that POSIX requires that the updates to a single\n> file are serialised in the filesystem layer?\n\nQuoting from Lewine's \"POSIX Programmer's Guide\":\n\n\"After a write() to a regular file has successfully returned, any \nsuccessful read() from each byte position in the file that was modified by \nthat write() will return the data that was written by the write()...a \nsimilar requirement applies to multiple write operations to the same file \nposition\"\n\nThat's the \"contract\" that has to be honored. How your filesystem \nactually implements this contract is none of a POSIX write() call's \nbusiness, so long as it does.\n\nIt is the case that multiple writers to the same file can get serialized \nsomewhere because of how this call is implemented though, so you're \ncorrect about that aspect of the practical impact being a possibility.\n\n> So, if we have a number of dirty pages to write back to a single\n> file in the database (whether a table or index) then we cannot\n> pass these through the POSIX filesystem layer into the TCQ/NCQ\n> system on the disk drive, so it can reorder them?\n\nAs long as the reordering mechanism also honors that any reads that come \nafter a write to a block reflect that write, they can be reordered. The \nfilesystem and drives are already doing elevator sorting and similar \nmechanisms underneath you to optimize things. Unless you use a sync \noperation or some sort of write barrier, you don't really know what has \nhappened.\n\n> I have seen suggestions that on Solaris this can be relaxed.\n\nThere's some good notes in this area at:\n\nhttp://www.solarisinternals.com/wiki/index.php/Direct_I/O and \nhttp://www.solarisinternals.com/wiki/index.php/ZFS_Performance\n\nIt's clear that such relaxation has benefits with some of Oracle's \nmechanisms as described. But amusingly, PostgreSQL doesn't even support \nSolaris's direct I/O method right now unless you override the filesystem \nmounting options, so you end up needing to split it out and hack at that \nlevel regardless.\n\n> I *assume* that PostgreSQL's lack of threads or AIO and the\n> single bgwriter means that PostgreSQL 8.x does not normally\n> attempt to make any use of such a relaxation but could do so if the\n> bgwriter fails to keep up and other backends initiate flushes.\n\nPostgreSQL writes transactions to the WAL. When they have reached disk, \nconfirmed by a successful f[data]sync or a completed syncronous write, \nthat transactions is now committed. Eventually the impacted items in the \nbuffer cache will be written as well. At checkpoint time, things are \nreconciled such that all dirty buffers at that point have been written, \nand now f[data]sync is called on each touched file to make sure those \nchanges have made it to disk.\n\nWrites are assumed to be lost in some memory (kernel, filesystem or disk \ncache) until they've been confirmed to be written to disk via the sync \nmechanism. 
When a backend flushes a buffer out, as soon as the OS caches \nthat write the database backend moves on without being concerned about how \nit's eventually going to get to disk one day. As long as the newly \nwritten version comes back again if it's read, the database doesn't worry \nabout what's happening until it specifically asks for a sync that proves \neverything is done. So if the backends or the background writer are \nspewing updates out, they don't care if the OS doesn't guarantee the order \nthey hit disk until checkpoint time; it's only the synchronous WAL writes \nthat do.\n\nAlso note that it's usually the case that backends write a substantial \npercentage of the buffers out themselves. You should assume that's the \ncase unless you've done some work to prove the background writer is \nhandling most writes (which is difficult to even know before 8.3, much \nless tune for).\n\nThat how I understand everything to work at least. I will add the \ndisclaimer that I haven't looked at the archive recovery code much yet. \nMaybe there's some expectation it has for general database write ordering \nin order for the WAL replay mechanism to work correctly, I can't imagine \nhow that could work though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 31 Mar 2008 18:44:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Quoting from Lewine's \"POSIX Programmer's Guide\":\n\n> \"After a write() to a regular file has successfully returned, any \n> successful read() from each byte position in the file that was modified by \n> that write() will return the data that was written by the write()...a \n> similar requirement applies to multiple write operations to the same file \n> position\"\n\nYeah, I imagine this is what the OP is thinking of. But note that what\nit describes is the behavior of concurrent write() and read() calls\nwithin a normally-functioning system. I see nothing there that\nconstrains the order in which writes hit physical disk, nor (to put it\nanother way) that promises anything much about the state of the\nfilesystem after a crash.\n\nAs you stated, PG is largely independent of these issues anyway. As\nlong as the filesystem honors its spec-required contract that it won't\nclaim fsync() is complete before all the referenced data is safely on\npersistent store, we are OK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 01:07:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates " }, { "msg_contents": "Greg Smith wrote:\n> \"After a write() to a regular file has successfully returned, any \n> successful read() from each byte position in the file that was \n> modified by that write() will return the data that was written by the \n> write()...a similar requirement applies to multiple write operations \n> to the same file position\"\n>\nYes, but that doesn't say anything about simultaneous read and write \nfrom multiple threads from\nthe same or different processes with descriptors on the same file.\n\nNo matter, I was thinking about a case with direct unbuffered IO. Too \nmany years\nusing Sybase on raw devices. 
:-(\n\nThough, some of the performance studies relating to UFS directio suggest \nthat there\nare indeed benefits to managing the write through rather than using the \nOS as a poor\nman's background thread to do it. SQLServer allows config based on deadline\nscheduling for checkpoint completion I believe. This seems to me a very \ndesirable\nfeature, but it does need more active scheduling of the write-back.\n\n>\n> It's clear that such relaxation has benefits with some of Oracle's \n> mechanisms as described. But amusingly, PostgreSQL doesn't even \n> support Solaris's direct I/O method right now unless you override the \n> filesystem mounting options, so you end up needing to split it out and \n> hack at that level regardless.\nIndeed that's a shame. Why doesn't it use the directio?\n> PostgreSQL writes transactions to the WAL. When they have reached \n> disk, confirmed by a successful f[data]sync or a completed syncronous \n> write, that transactions is now committed. Eventually the impacted \n> items in the buffer cache will be written as well. At checkpoint \n> time, things are reconciled such that all dirty buffers at that point \n> have been written, and now f[data]sync is called on each touched file \n> to make sure those changes have made it to disk.\nYes but fsync and stable on disk isn't the same thing if there is a \ncache anywhere is it?\nHence the fuss a while back about Apple's control of disk caches. \nSolaris and Windows\ndo it too.\n\nIsn't allowing the OS to accumulate an arbitrary number of dirty blocks \nwithout\ncontrol of the rate at which they spill to media just exposing a \npossibility of an IO\nstorm when it comes to checkpoint time? Does bgwriter attempt to \ncontrol this\nwith intermediate fsync (and push to media if available)?\n\nIt strikes me as odd that fsync_writethrough isn't the most preferred \noption where\nit is implemented. The postgres approach of *requiring* that there be no \ncache\nbelow the OS is problematic, especially since the battery backup on internal\narray controllers is hardly the handiest solution when you find the mobo \nhas died.\nAnd especially when the inability to flush caches on modern SATA and SAS\ndrives would appear to be more a failing in some operating systems than in\nthe drives themselves..\n\nThe links I've been accumulating into my bibliography include:\n\nhttp://www.h2database.com/html/advanced.html#transaction_isolation\nhttp://lwn.net/Articles/270891/\nhttp://article.gmane.org/gmane.linux.kernel/646040\nhttp://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html\nhttp://brad.livejournal.com/2116715.html\n\nAnd your handy document on wal tuning, of course.\n\nJames\n\n", "msg_date": "Wed, 02 Apr 2008 20:10:29 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Am Mittwoch, den 02.04.2008, 20:10 +0100 schrieb James Mansion:\n> It strikes me as odd that fsync_writethrough isn't the most preferred \n> option where\n> it is implemented. The postgres approach of *requiring* that there be no \n> cache\n> below the OS is problematic, especially since the battery backup on internal\n> array controllers is hardly the handiest solution when you find the mobo \n> has died.\n\nWell, that might sound brutal, but I'm having today a brute day.\n\nThere are some items here.\n\n1.) PostgreSQL relies on filesystem semantics. 
Which might be better or\nworse then the raw devices other RDBMS use as an interface, but in the\nend it is just an interface. How well that works out depends strongly on\nyour hardware selection, your OS selection and so on. DB tuning is an\nscientific art form ;) Worse the fact that raw devices work better on\nhardware X/os Y than say filesystems is only of limited interest, only\nif you happen to have already an investement in X or Y. In the end the\nquestions are is the performance good enough, is the data safety good\nenough, and at which cost (in money, work, ...).\n\n2.) data safety requirements vary strongly. In many (if not most) cases,\nthe recovery of the data on a failed hardware is not critical. Hint:\nbeing down till somebody figures out what failed, if the rest of the\nsystem is still stable, and so on are not acceptable at all. Meaning the\nmoment that the database server has any problem, one of the hot standbys\ntakes over. The thing you worry about is if all data has made it to the\nreplication servers, not if some data might get lost in the hardware\ncache of a controller. (Actually, talk to your local computer forensics\nguru, there are a number of way to keep the current to electronics while\nmoving them.)\n\n3.) a controller cache is an issue if you have a filesystem in your data\npath or not. If you do raw io, and the stupid hardware do cache writes,\nwell it's about as stupid as it would be if it would have cached\nfilesystem writes.\n\nAndreas\n\n\n> And especially when the inability to flush caches on modern SATA and SAS\n> drives would appear to be more a failing in some operating systems than in\n> the drives themselves..\n> \n> The links I've been accumulating into my bibliography include:\n> \n> http://www.h2database.com/html/advanced.html#transaction_isolation\n> http://lwn.net/Articles/270891/\n> http://article.gmane.org/gmane.linux.kernel/646040\n> http://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html\n> http://brad.livejournal.com/2116715.html\n> \n> And your handy document on wal tuning, of course.\n> \n> James\n> \n>", "msg_date": "Wed, 02 Apr 2008 22:24:12 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Andreas Kostyrka wrote:\n> takes over. The thing you worry about is if all data has made it to the\n> replication servers, not if some data might get lost in the hardware\n> cache of a controller. (Actually, talk to your local computer forensics\n> guru, there are a number of way to keep the current to electronics while\n> moving them.)\n> \nBut it doesn't, unless you use a synchronous rep at block level - which \nis why we have SRDF.\nLog-based reps are async and will lose committed transactions. Even if \nyou failed over, its\nstill extra-ordinarily useful to be able to see what the primary tried \nto do - its the only place\nthe e-comm transactions are stored, and the customer will still expect \ndelivery.\n\nI'm well aware that there are battery-backed caches that can be detached \nfrom controllers\nand moved. But you'd better make darn sure you move all the drives and \nplug them in in\nexactly the right order and make sure they all spin up OK with the \nreplaced cache, because\nits expecting them to be exactly as they were last time they were on the \nbus.\n\n> 3.) a controller cache is an issue if you have a filesystem in your data\n> path or not. 
If you do raw io, and the stupid hardware do cache writes,\n> well it's about as stupid as it would be if it would have cached\n> filesystem writes.\n> \nOnly if the OS doesn't know how to tell the cache to flush. SATA and \nSAS both have that facility.\nBut the semantics of *sync don't seem to be defined to require it being \nexercised, at least as far\nas many operating systems implement it.\n\nYou would think hard drives could have enough capacitor store to dump \ncache to flash or the\ndrive - if only to a special dump zone near where the heads park. They \nare spinning already\nafter all.\n\nOn small systems in SMEs its inevitable that large drives will be shared \nwith filesystem use, even\nif the database files are on their own slice. If you can allow the drive \nto run with writeback cache\nturned on, then the users will be a lot happier, even if dbms commits \nforce *all* that cache to\nflush to the platters.\n\n\n", "msg_date": "Wed, 02 Apr 2008 23:48:52 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "On Wed, 2 Apr 2008, James Mansion wrote:\n\n>> But amusingly, PostgreSQL doesn't even support Solaris's direct I/O \n>> method right now unless you override the filesystem mounting options, \n>> so you end up needing to split it out and hack at that level \n>> regardless.\n> Indeed that's a shame. Why doesn't it use the directio?\n\nYou turn on direct I/O differently under Solaris then everywhere else, and \nnobody has bothered to write the patch (trivial) and OS-specific code to \nturn it on only when appropriate (slightly tricker) to handle this case. \nThere's not a lot of pressure on PostgreSQL to handle this case correctly \nwhen Solaris admins are used to doing direct I/O tricks on filesystems \nalready, so they don't complain about it much.\n\n> Yes but fsync and stable on disk isn't the same thing if there is a \n> cache anywhere is it? Hence the fuss a while back about Apple's control \n> of disk caches. Solaris and Windows do it too.\n\nIf your caches don't honor fsync by making sure it's on disk or a \nbattery-backed cache, you can't use them and expect PostgreSQL to operate \nreliably. Back to that \"doesn't honor the contract\" case. The code that \nimplements fsync_writethrough on both Windows and Mac OS handles those two \ncases by writing with the appropriate flags to not get cached in a harmful \nway. I'm not aware of Solaris doing anything stupid here--the last two \nSolaris x64 systems I've tried that didn't have a real controller write \ncache ignored the drive cache and blocked at fsync just as expected, \nlimiting commits to the RPM of the drive. Seen it on UFS and ZFS, both \nseem to do the right thing here.\n\n> Isn't allowing the OS to accumulate an arbitrary number of dirty blocks \n> without control of the rate at which they spill to media just exposing a \n> possibility of an IO storm when it comes to checkpoint time? Does \n> bgwriter attempt to control this with intermediate fsync (and push to \n> media if available)?\n\nIt can cause exactly such a storm. If you haven't noticed my other paper \nat http://www.westnet.com/~gsmith/content/linux-pdflush.htm yet it goes \nover this exact issue as far as how Linux handles it. Now that it's easy \nto get even a home machine to have 8GB of RAM in it, Linux will gladly \nbuffer ~800MB worth of data for you and cause a serious storm at fsync \ntime. 
It's not pretty when that happens into a single SATA drive because \nthere's typically plenty of seeks in that write storm too.\n\nThere was a prototype implementation plan that wasn't followed completely \nthrough in 8.3 to spread fsyncs out a bit better to keep this from being \nas bad. That optimization might make it into 8.4 but I don't know that \nanybody is working on it. The spread checkpoints in 8.3 are so much \nbetter than 8.2 that many are happy to at least have that.\n\n> It strikes me as odd that fsync_writethrough isn't the most preferred \n> option where it is implemented.\n\nIt's only available on Win32 and Mac OS X (the OSes that might get it \nwrong without that nudge). I believe every path through the code uses it \nby default on those platforms, there's a lot of remapping in there.\n\nYou can get an idea of what code was touched by looking at the patch that \nadded the OS X version of fsync_writethrough (it was previously only \nWin32): http://archives.postgresql.org/pgsql-patches/2005-05/msg00208.php\n\n> The postgres approach of *requiring* that there be no cache below the OS \n> is problematic, especially since the battery backup on internal array \n> controllers is hardly the handiest solution when you find the mobo has \n> died.\n\nIf the battery backup cache doesn't survive being moved to another machine \nafter a motherboard failure, it's not very good. The real risk to be \nconcerned about is what happens if the card itself dies. If that happens, \nyou can't help but lose transactions.\n\nYou seem to feel that there is an alternative here that PostgreSQL could \ntake but doesn't. There is not. You either wait until writes hit disk, \nwhich by physical limitations only happens at RPM speed and therefore is \ntoo slow to commit for many cases, or you cache in the most reliable \nmemory you've got and hope for the best. No software approach can change \nany of that.\n\n> And especially when the inability to flush caches on modern SATA and SAS \n> drives would appear to be more a failing in some operating systems than \n> in the drives themselves..\n\nI think you're extrapolating too much from the Win32/Apple cases here. \nThere are plenty of cases where the so-called \"lying\" drives themselves \nare completely stupid on their own regardless of operating system.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 2 Apr 2008 19:39:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "On Wed, 2 Apr 2008, James Mansion wrote:\n\n> I'm well aware that there are battery-backed caches that can be detached \n> from controllers and moved. But you'd better make darn sure you move \n> all the drives and plug them in in exactly the right order and make sure \n> they all spin up OK with the replaced cache, because its expecting them \n> to be exactly as they were last time they were on the bus.\n\nThe better controllers tag the drives with a unique ID number so they can \nroute pending writes correctly even after such a disaster. This falls \ninto the category of tests people should do more often but don't: write \nsomething into the cache, pull the power, rearrange the drives, and see if \neverything still recovers.\n\n> You would think hard drives could have enough capacitor store to dump \n> cache to flash or the drive - if only to a special dump zone near where \n> the heads park. 
They are spinning already after all.\n\nThe free market seems to have established that the preferred design model \nfor hard drives is that they be cheap and fast rather than focused on \nreliability. I rather doubt the tiny percentage of the world who cares as \nmuch about disk write integrity as database professionals do can possibly \nmake a big enough market to bother increasing the cost and design \ncomplexity of the drive to do this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 2 Apr 2008 19:49:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "Greg Smith wrote:\n> You turn on direct I/O differently under Solaris then everywhere else, \n> and nobody has bothered to write the patch (trivial) and OS-specific \n> code to turn it on only when appropriate (slightly tricker) to handle \n> this case. There's not a lot of pressure on PostgreSQL to handle this \n> case correctly when Solaris admins are used to doing direct I/O tricks \n> on filesystems already, so they don't complain about it much.\nI'm not sure that this will survive use of PostgreSQL on Solaris with \nmore users\non Indiana though. Which I'm hoping will happen\n> RPM of the drive. Seen it on UFS and ZFS, both seem to do the right \n> thing here.\nBut ZFS *is* smart enough to manage the cache, albeit sometimes with \nunexpected\nconsequences as with the 2530 here http://milek.blogspot.com/.\n> You seem to feel that there is an alternative here that PostgreSQL \n> could take but doesn't. There is not. You either wait until writes \n> hit disk, which by physical limitations only happens at RPM speed and \n> therefore is too slow to commit for many cases, or you cache in the \n> most reliable memory you've got and hope for the best. No software \n> approach can change any of that.\nIndeed I do, but the issue I have is that the problem is that some \npopular operating\nsystems (lets try to avoid the flame war) fail to expose control of disk \ncaches and the\nso the code assumes that the onus is on the admin and the documentation \nrightly says\nso. But this is as much a failure of the POSIX API and operating \nsystems to expose\nsomething that's necessary and it seems to me rather valuable that the \napplication be\nable to work with such facilities as they become available. Exposing the \nflush cache\nmechanisms isn't dangerous and can improve performance for non-dbms users of\nthe same drives.\n\nI think manipulation of this stuff is a major concern for a DBMS that \nmight be\nused by amateur SAs, and if at all possible it should work out of the \nbox on common\nhardware. So far as I can tell, SQLServerExpress makes a pretty good \nattempt\nat it, for example It might be enough for initdb to whinge and fail if \nit thinks the\ndisks are behaving insanely unless the wouldbe dba sets a \n'my_disks_really_are_that_fast'\nflag in the config. At the moment anyone can apt-get themselves a DBMS \nwhich may\nbecome a liability.\n\nAt the moment:\n - casual use is likely to be unreliable\n - uncontrolled deferred IO can result in almost DOS-like checkpoints\n\nThese affect other systems than PostgreSQL too - but would be avoidable \nif the\ndrive cache flush was better exposed and the IO was staged to use it. 
\nThere's no\nreason to block on anything but the final IO in a WAL commit after all, \nand with\nthe deferred commit feature (which I really like for workflow engines) \nintermediate\nWAL writes of configured chunk size could let the WAL drives get on with it.\nAdmitedly I'm assuming a non-blocking write through - direct IO from a\nbackground thread (process if you must) or aio.\n\n> There are plenty of cases where the so-called \"lying\" drives \n> themselves are completely stupid on their own regardless of operating \n> system.\nWith modern NCQ capable drive firmware? Or just with older PATA stuff? \nThere's\nan awful lot of fud out there about SCSI vs IDE still.\n\nJames\n\n", "msg_date": "Thu, 03 Apr 2008 06:32:47 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POSIX file updates" }, { "msg_contents": "On Mon, 31 Mar 2008, James Mansion wrote:\n\n> I have a question about file writes, particularly on POSIX.\n\nIn other reading I just came across this informative article on this \nissue, which amusingly was written the same day you asked about this:\n\nhttp://jeffr-tech.livejournal.com/20707.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 17 Apr 2008 14:19:28 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POSIX file updates" } ]
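For anyone trying to relate this discussion back to their own server, the settings argued about above are all visible from SQL. A minimal sketch, assuming a stock 8.2/8.3 installation; checkpoint_completion_target only exists from 8.3 on, and fsync_writethrough is only accepted on the Win32 and OS X builds mentioned in the thread. The values in the commented postgresql.conf lines are illustrations, not recommendations.

SELECT name, setting, context
FROM pg_settings
WHERE name IN ('fsync', 'wal_sync_method', 'full_page_writes',
               'checkpoint_completion_target');

-- Illustrative postgresql.conf entries:
--   wal_sync_method = fsync_writethrough    # Win32 / OS X builds only
--   checkpoint_completion_target = 0.9      # 8.3+, spreads checkpoint writes

None of this changes the underlying point of the thread: if a drive or controller cache acknowledges a flush without actually persisting the data, every one of these settings is built on sand.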
[ { "msg_contents": "Hello,\n\nI have a typical many to many join table, in this instance it is\ncapturing the multiplicity described as \"one person can have many\nrooms and one room can have many persons\". Further the join expresses\nwhere in the room the person is sitting, a seat number. I am creating\na function to abstract this away, if there is no record with the same\nperson and room the insert otherwise if it already exists update the\nrecord with the new seat value.\n\ncreate table person_room (\n id serial,\n person_id int,\n room_id int,\n seat varchar(255),\n unique (person_id, room_id)\n);\n\n-- version 1\ncreate or replace function add_person_to_room(person int, room int, s\nvarchar(255)) as $$\nbegin\n insert into person_room(person_id, room_id, seat) values (person, room, s);\nexception when unique_violation then\n update person_room set seat = s where (person_id = person) and\n(room_id = room);\nend;\n$$ language 'plpgsql';\n\n-- version 2\ncreate or replace function add_person_to_room(person int, room int, s\nvarchar(255)) as $$\ndeclare\n i int;\nbegin\n select into i id from person_room where (person_id = person) and\n(room_id = room);\n if (not found) then\n insert into person_room(person_id, room_id, seat) values\n(person, room, s);\n else\n update person_room set seat = s where (person_id = person) and\n(room_id = room);\nend;\n$$ language 'plpgsql';\n\n\nWhich version is faster?\nDoes the exception mechanism add any overhead?\nWhich is more cleaner?\n\n-ravi\n", "msg_date": "Tue, 1 Apr 2008 13:20:34 +1300", "msg_from": "\"Ravi Chemudugunta\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Implications of Using Exceptions" }, { "msg_contents": "* Ravi Chemudugunta ([email protected]) wrote:\n> Which version is faster?\n\nIn general I would recommend that you benchmark them using\nas-close-to-real load as possible again as-real-as-possible data.\n\n> Does the exception mechanism add any overhead?\n\nYes, using exceptions adds a fair bit of overhead. Quote from the\ndocumentation found here:\nhttp://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html\n\nTip: A block containing an EXCEPTION clause is significantly more\nexpensive to enter and exit than a block without one. Therefore, don't\nuse EXCEPTION without need.\n\n> Which is more cleaner?\n\nThat would be in the eye of the beholder, generally. Given the lack of\ncomplexity, I don't think 'cleanness' in this case really matters all\nthat much.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Mon, 31 Mar 2008 20:52:35 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "Stephen Frost wrote\n> * Ravi Chemudugunta ([email protected]) wrote:\n> > Which version is faster?\n> \n> In general I would recommend that you benchmark them using\n> as-close-to-real load as possible again as-real-as-possible data.\n> \n> > Does the exception mechanism add any overhead?\n> \n> Yes, using exceptions adds a fair bit of overhead. Quote from the\n> documentation found here:\n> http://www.postgresql.org/docs/8.3/static/plpgsql-control-stru\n> ctures.html\n> \n> Tip: A block containing an EXCEPTION clause is significantly more\n> expensive to enter and exit than a block without one. Therefore, don't\n> use EXCEPTION without need.\n> \n> > Which is more cleaner?\n> \n> That would be in the eye of the beholder, generally. 
Given \n> the lack of\n> complexity, I don't think 'cleanness' in this case really matters all\n> that much.\n\nA third option is to update, if not found, insert.\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality\n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/dmzmessaging.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Tue, 1 Apr 2008 13:56:35 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "Hi, thanks for the quick reply.\n\n> In general I would recommend that you benchmark them using\n> as-close-to-real load as possible again as-real-as-possible data.\n\nI am running a benchmark with around 900,000 odd records (real-load on\nthe live machine :o ) ... should show hopefully some good benchmarking\nresults for the two methods.\n\n> That would be in the eye of the beholder, generally. Given the lack of\n> complexity, I don't think 'cleanness' in this case really matters all\n> that much.\n\nI would like to make a comment that is that the only downside I saw of\nusing the exception approach was that if for some reason someone\nforgot to add the unique constraint to the table, it would be a bit of\na nightmare-ness. (I am porting some code into the server where the\nschema does not have these constraints setup, only in the devel\ndatabase).\n\nWill reply back with my conclusions, I am expecting a large difference.\n\nCheers,\n\nravi\n", "msg_date": "Tue, 1 Apr 2008 14:23:00 +1300", "msg_from": "\"Ravi Chemudugunta\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "Coming to think of it.\n\nWould it fine to consider that an UPDATE query that found no records to\nupdate is (performance wise) the same as a SELECT query with the same WHERE\nclause ?\n\nAs in, does an UPDATE query perform additional overhead even before it finds\nthe record to work on ?\n\n*Robins*\n\n\nOn Tue, Apr 1, 2008 at 7:53 AM, Robins Tharakan <[email protected]> wrote:\n\n> I get into these situations quite often and use exactly what stephen\n> pointed out.\n>\n> Do an Update, but if not found, do an insert. Its (by and large) better\n> than your version 2 since here you may skip running the second query (if the\n> record exists) but in version 2, two queries are *always* run. 
And\n> considering that exception is heavy, this may be a good attempt to give a\n> try as well.\n>\n> update person_room set seat = s where (person_id = person) and (room_id =\n> room);\n> if not found then\n> insert into person_room(person_id, room_id, seat) values (person, room,\n> s);\n> end if\n>\n> Robins\n>\n>\n>\n> On Tue, Apr 1, 2008 at 6:26 AM, Stephen Denne <\n> [email protected]> wrote:\n>\n> > Stephen Frost wrote\n> > > * Ravi Chemudugunta ([email protected]) wrote:\n> > > > Which version is faster?\n> > >\n> > > In general I would recommend that you benchmark them using\n> > > as-close-to-real load as possible again as-real-as-possible data.\n> > >\n> > > > Does the exception mechanism add any overhead?\n> > >\n> > > Yes, using exceptions adds a fair bit of overhead. Quote from the\n> > > documentation found here:\n> > > http://www.postgresql.org/docs/8.3/static/plpgsql-control-stru\n> > > ctures.html\n> > >\n> > > Tip: A block containing an EXCEPTION clause is significantly more\n> > > expensive to enter and exit than a block without one. Therefore, don't\n> > > use EXCEPTION without need.\n> > >\n> > > > Which is more cleaner?\n> > >\n> > > That would be in the eye of the beholder, generally. Given\n> > > the lack of\n> > > complexity, I don't think 'cleanness' in this case really matters all\n> > > that much.\n> >\n> > A third option is to update, if not found, insert.\n> >\n> > Regards,\n> > Stephen Denne.\n> >\n> > Disclaimer:\n> > At the Datamail Group we value team commitment, respect, achievement,\n> > customer focus, and courage. This email with any attachments is confidential\n> > and may be subject to legal privilege. If it is not intended for you please\n> > advise by reply immediately, destroy it and do not copy, disclose or use it\n> > in any way.\n> > __________________________________________________________________\n> > This email has been scanned by the DMZGlobal Business Quality\n> > Electronic Messaging Suite.\n> > Please see http://www.dmzglobal.com/dmzmessaging.htm for details.\n> > __________________________________________________________________\n> >\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> > [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n>\n\nComing to think of it.Would it fine to consider that an UPDATE query that found no records to update is (performance wise) the same as a SELECT query with the same WHERE clause ?As in, does an UPDATE query perform additional overhead even before it finds the record to work on ?\nRobinsOn Tue, Apr 1, 2008 at 7:53 AM, Robins Tharakan <[email protected]> wrote:\nI get into these situations quite often and use exactly what stephen pointed out.Do an Update, but if not found, do an insert. Its (by and large) better than your version 2 since here you may skip running the second query (if the record exists) but in version 2, two queries are *always* run. 
And considering that exception is heavy, this may be a good attempt to give a try as well.\n\nupdate person_room set seat = s where (person_id = person) and \n(room_id = room);if not found then   insert into person_room(person_id, room_id, seat) values (person, room, s);end ifRobins\nOn Tue, Apr 1, 2008 at 6:26 AM, Stephen Denne <[email protected]> wrote:\nStephen Frost wrote\n> * Ravi Chemudugunta ([email protected]) wrote:\n> > Which version is faster?\n>\n> In general I would recommend that you benchmark them using\n> as-close-to-real load as possible again as-real-as-possible data.\n>\n> > Does the exception mechanism add any overhead?\n>\n> Yes, using exceptions adds a fair bit of overhead.  Quote from the\n> documentation found here:\n> http://www.postgresql.org/docs/8.3/static/plpgsql-control-stru\n> ctures.html\n>\n> Tip:  A block containing an EXCEPTION clause is significantly more\n> expensive to enter and exit than a block without one. Therefore, don't\n> use EXCEPTION without need.\n>\n> > Which is more cleaner?\n>\n> That would be in the eye of the beholder, generally.  Given\n> the lack of\n> complexity, I don't think 'cleanness' in this case really matters all\n> that much.\n\nA third option is to update, if not found, insert.\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege.  If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n\n__________________________________________________________________\n  This email has been scanned by the DMZGlobal Business Quality\n              Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/dmzmessaging.htm for details.\n__________________________________________________________________\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 1 Apr 2008 07:56:31 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "\"Robins Tharakan\" <[email protected]> writes:\n> Would it fine to consider that an UPDATE query that found no records to\n> update is (performance wise) the same as a SELECT query with the same WHERE\n> clause ?\n\n> As in, does an UPDATE query perform additional overhead even before it finds\n> the record to work on ?\n\nThe UPDATE would fire BEFORE STATEMENT and AFTER STATEMENT triggers, if\nthere are any. Also, it would take a slightly stronger lock on the\ntable, which might result in blocking either the UPDATE itself or some\nconcurrent query where a plain SELECT would not've.\n\nThere might be some other corner cases I've forgotten. But in the basic\ncase I think your assumption is correct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 00:29:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions " }, { "msg_contents": "Stephen Denne wrote:\n> A third option is to update, if not found, insert.\n>\n> \nI find myself having to do this in Sybase, but it sucks because there's\na race - if there's no row updated then there's no lock and you race\nanother thread doing the same thing. 
So you grab a row lock on a\nsacrificial row used as a mutex, or just a table lock. Or you just\naccept that sometimes you have to detect the insert fail and retry the\nwhole transaction. Which is sucky however you look at it.\n\nI think the 'update or insert' or 'merge' extensions make a degree\nof sense. At least in psql one can use the lightweight lock manager.\n\n\n", "msg_date": "Wed, 02 Apr 2008 20:19:54 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "> I find myself having to do this in Sybase, but it sucks because there's\n> a race - if there's no row updated then there's no lock and you race\n> another thread doing the same thing. So you grab a row lock on a\n> sacrificial row used as a mutex, or just a table lock. Or you just\n> accept that sometimes you have to detect the insert fail and retry the\n> whole transaction. Which is sucky however you look at it.\n\nhmm should I be worried ?\n\nI am doing an 'update if not found insert', in some cases I have found\nthat I need to select anyway, for e.g. take away 20 dollars from this\nperson;\n\n(all variables prefixed with _ are local variables)\n\nselect into _money money from person_money where person_id = _person;\nif (not found) then\n insert into person_money (person_id, money) values (_person, -\n_requested_amount);\nelse\n update person_money set money = money - _requested_amount where\nperson_id = _person;\n -- return new quantity\n return _money - _requested_quantity; -- <- i need the quantity so I\nhave to select here.\nend if;\n\nif I am not mistaken your are saying that between the select and the\nif (not found) then ... end if; block ... another concurrent process\ncould be executing the same thing and insert ... while in the first\nthread found is still 'false' and so it ends up inserting and over\nwriting / causing a unique violation or some kind?\n\nBTW, I did a benchmark with and without exceptions, the exceptions\nversion was very slow, so slow that I ended up killing it ... I am\nsure it would have taken atleast 5 hours (was already 3 hours in) ...\nversus, 25 mins! I guess the trouble was that I was using exceptions\nto overload 'normal' flow ... i.e. update if exists else update is not\nan exceptional circumstance and so exceptions are a bad choice.\n\nIt would be interesting to see how much overhead exception containing\nfunctions present when they do not throw any exceptions ... for never\nto every few records to all the time ... maybe I will try it with my\nparsing functions (which catch exceptions thrown by substring()).\n", "msg_date": "Thu, 3 Apr 2008 02:18:56 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "I think James was talking about Sybase. Postgresql on the other hand has a\nslightly better way to do this.\n\nSELECT ... FOR UPDATE allows you to lock a given row (based on the SELECT\n... WHERE clause) and update it... without worrying about a concurrent\nmodification. Of course, if the SELECT ... WHERE didn't bring up any rows,\nyou would need to do an INSERT anyway.\n\nRead more about SELECT ... 
FOR UPDATE here:\nhttp://www.postgresql.org/docs/8.3/static/sql-select.html#SQL-FOR-UPDATE-SHARE\n\n*Robins*\n\nOn Thu, Apr 3, 2008 at 2:48 PM, [email protected] <\[email protected]> wrote:\n\n> > I find myself having to do this in Sybase, but it sucks because there's\n> > a race - if there's no row updated then there's no lock and you race\n> > another thread doing the same thing. So you grab a row lock on a\n> > sacrificial row used as a mutex, or just a table lock. Or you just\n> > accept that sometimes you have to detect the insert fail and retry the\n> > whole transaction. Which is sucky however you look at it.\n>\n> hmm should I be worried ?\n>\n> I am doing an 'update if not found insert', in some cases I have found\n> that I need to select anyway, for e.g. take away 20 dollars from this\n> person;\n>\n> (all variables prefixed with _ are local variables)\n>\n> select into _money money from person_money where person_id = _person;\n> if (not found) then\n> insert into person_money (person_id, money) values (_person, -\n> _requested_amount);\n> else\n> update person_money set money = money - _requested_amount where\n> person_id = _person;\n> -- return new quantity\n> return _money - _requested_quantity; -- <- i need the quantity so I\n> have to select here.\n> end if;\n>\n> if I am not mistaken your are saying that between the select and the\n> if (not found) then ... end if; block ... another concurrent process\n> could be executing the same thing and insert ... while in the first\n> thread found is still 'false' and so it ends up inserting and over\n> writing / causing a unique violation or some kind?\n>\n> BTW, I did a benchmark with and without exceptions, the exceptions\n> version was very slow, so slow that I ended up killing it ... I am\n> sure it would have taken atleast 5 hours (was already 3 hours in) ...\n> versus, 25 mins! I guess the trouble was that I was using exceptions\n> to overload 'normal' flow ... i.e. update if exists else update is not\n> an exceptional circumstance and so exceptions are a bad choice.\n>\n> It would be interesting to see how much overhead exception containing\n> functions present when they do not throw any exceptions ... for never\n> to every few records to all the time ... maybe I will try it with my\n> parsing functions (which catch exceptions thrown by substring()).\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI think James was talking about Sybase. Postgresql on the other hand has a slightly better way to do this.SELECT ... FOR UPDATE allows you to lock a given row (based on the SELECT ... WHERE clause) and update it... without worrying about a concurrent modification. Of course, if the SELECT ... WHERE didn't bring up any rows, you would need to do an INSERT anyway.\nRead more about SELECT ... FOR UPDATE here: http://www.postgresql.org/docs/8.3/static/sql-select.html#SQL-FOR-UPDATE-SHARE\nRobinsOn Thu, Apr 3, 2008 at 2:48 PM, [email protected] <[email protected]> wrote:\n> I find myself having to do this in Sybase, but it sucks because there's\n\n> a race - if there's no row updated then there's no lock and you race\n> another thread doing the same thing. So you grab a row lock on a\n> sacrificial row used as a mutex, or just a table lock. Or you just\n> accept that sometimes you have to detect the insert fail and retry the\n> whole transaction. 
Which is sucky however you look at it.\n\nhmm should I be worried ?\n\nI am doing an 'update if not found insert', in some cases I have found\nthat I need to select anyway, for e.g. take away 20 dollars from this\nperson;\n\n(all variables prefixed with _ are local variables)\n\nselect into _money money from person_money where person_id = _person;\nif (not found) then\n  insert into person_money (person_id, money) values (_person, -\n_requested_amount);\nelse\n  update person_money set money = money - _requested_amount where\nperson_id = _person;\n  -- return new quantity\n  return _money - _requested_quantity; -- <- i need the quantity so I\nhave to select here.\nend if;\n\nif I am not mistaken your are saying that between the select and the\nif (not found) then ... end if; block ... another concurrent process\ncould be executing the same thing and insert ... while in the first\nthread found is still 'false' and so it ends up inserting and over\nwriting / causing a unique violation or some kind?\n\nBTW, I did a benchmark with and without exceptions, the exceptions\nversion was very slow, so slow that I ended up killing it ... I am\nsure it would have taken atleast 5 hours (was already 3 hours in) ...\nversus, 25 mins!  I guess the trouble was that I was using exceptions\nto overload 'normal' flow ... i.e. update if exists else update is not\nan exceptional circumstance and so exceptions are a bad choice.\n\nIt would be interesting to see how much overhead exception containing\nfunctions present when they do not throw any exceptions ... for never\nto every few records to all the time ... maybe I will try it with my\nparsing functions (which catch exceptions thrown by substring()).\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 3 Apr 2008 18:57:27 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "Robins Tharakan wrote:\n>\n> I think James was talking about Sybase. Postgresql on the other hand \n> has a slightly better way to do this.\n>\n> SELECT ... FOR UPDATE allows you to lock a given row (based on the \n> SELECT ... WHERE clause) and update it... without worrying about a \n> concurrent modification. Of course, if the SELECT ... WHERE didn't \n> bring up any rows, you would need to do an INSERT anyway.\nHow does that help?\n\nIf the matching row doesn't exist at that point - what is there to get \nlocked?\n\nThe problem is that you need to effectively assert a lock on the primary \nkey so that you can update\nthe row (if it exists) or insert a row with that key (if it doesn't) \nwithout checking and then inserting and\nfinding that some other guy you were racing performed the insert and you \nget a duplicate key error.\n\nHow does Postgresql protect against this?\n\nJames\n\n", "msg_date": "Sun, 06 Apr 2008 21:21:41 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" }, { "msg_contents": "On Mar 31, 2008, at 8:23 PM, Ravi Chemudugunta wrote:\n>> In general I would recommend that you benchmark them using\n>> as-close-to-real load as possible again as-real-as-possible data.\n>\n> I am running a benchmark with around 900,000 odd records (real-load on\n> the live machine :o ) ... 
should show hopefully some good benchmarking\n> results for the two methods.\n\n\nPlease do, and please share. I know the docs say that exception \nblocks make things \"significantly\" more expensive, but I think that \nthe community also sometimes loses the forest for the tree. Setting \nup a savepoint (AFAIK that's the actual expense in the exception \nblock) is fairly CPU-intensive, but it's not common for a database \nserver to be CPU-bound, even for OLTP. You're usually still waiting \non disk.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 9 Apr 2008 15:00:12 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Implications of Using Exceptions" } ]
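A minimal sketch of the retry-loop upsert being debated in the thread above, assuming a person_money table keyed on person_id; the function name, the table definition and the UPDATE-first ordering are illustrative here rather than taken from the original posts, and the pattern is modeled on the UPDATE/INSERT example in the PL/pgSQL documentation. The BEGIN ... EXCEPTION sub-block (and its savepoint) is only entered when the UPDATE finds no row, so the common update path pays no exception overhead, while the unique_violation handler closes the insert/insert race James describes (UPDATE ... RETURNING INTO needs 8.2 or later):

CREATE TABLE person_money (person_id bigint PRIMARY KEY, money numeric NOT NULL);

CREATE OR REPLACE FUNCTION debit_money(_person bigint, _requested_amount numeric)
RETURNS numeric AS $$
DECLARE
    _new_balance numeric;
BEGIN
    LOOP
        -- try the update first; RETURNING gives back the new balance without a SELECT
        UPDATE person_money
           SET money = money - _requested_amount
         WHERE person_id = _person
        RETURNING money INTO _new_balance;
        IF FOUND THEN
            RETURN _new_balance;
        END IF;
        -- no row yet: try to insert; a concurrent session may win the race,
        -- in which case the unique_violation is caught and the UPDATE is retried
        BEGIN
            INSERT INTO person_money (person_id, money)
            VALUES (_person, -_requested_amount);
            RETURN -_requested_amount;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- row appeared concurrently; loop back and update it
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;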
[ { "msg_contents": "Hi Martin, please CC the mailing-list, \nthen others can repply ;)\n\nCédric Villemain (13:59 2008-03-31):\n> Le Monday 31 March 2008, Martin Kjeldsen a écrit :\n> > I've done the same query on a 8.2.5 database. The first one is prepared\n> > first and the other is executed directly.\n> >\n> > I understand why the there is such a great difference between the two ways\n> > of executing the query (postgres has no way of knowing that $1 will be\n> > quite big and that the result is not too big).\n> >\n> > I could just avoid using prepare statements, but this is done \nautomatically\n> > with Perl's DBD::Pg. I know how to avoid using prepare statements (avoid\n> > having placeholders in the statement), but that is not the prettiest of\n> > work arounds. \n> \n> Did you saw this option :\n> \n> $sth = $dbh->prepare(\"SELECT id FROM mytable WHERE val = ?\",\n> { pg_server_prepare => 0 });\n> \n> Then, *this* query will not be prepared by the server.\n\nThis works very well. Thanks!\n\nStill I regard this as a work around and the optimal solution would be to \nallow the prepare statement to be prepared with an max(guid) is close to $1 \nhint or something like that. \n\nI heard something about delayed prepare, where the prepared statements is \nprepared on first use, this would solve my problem. Is this something being \nwork on right now?\n\nBest regards\n\n\nMartin Kjeldsen", "msg_date": "Tue, 1 Apr 2008 09:53:36 +0200", "msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad prepare performance" } ]
[ { "msg_contents": "Hi there,\n\nI have an application accessing a postgres database, and I need to estimate \nthe following parameters:\n\n- read / write ratio\n- reads/second on typical load / peak load\n- writes/second on typical load / peak load\n\nIs there any available tool to achieve that ?\n\nTIA,\nSabin\n\n\n\n", "msg_date": "Tue, 1 Apr 2008 14:40:13 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "check performance parameters" }, { "msg_contents": "On Tue, 1 Apr 2008, Sabin Coanda wrote:\n> I have an application accessing a postgres database, and I need to estimate\n> the following parameters:\n>\n> - read / write ratio\n> - reads/second on typical load / peak load\n> - writes/second on typical load / peak load\n>\n> Is there any available tool to achieve that ?\n\nAssuming you have a vaguely unix/linux-ish type system, then use iostat.\n\nMatthew\n\n-- \nReality is that which, when you stop believing in it, doesn't go away.\n -- Philip K. Dick\n", "msg_date": "Tue, 1 Apr 2008 14:55:51 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: check performance parameters" } ]
[ { "msg_contents": "Hi,\n\nthe following statement retrieves 16358 rows in a cursor by fetching\n1024 rows in bulks on a 8.2.4 server:\n\nDECLARE curs_285058224 CURSOR FOR SELECT objid, attrid, aggrid, lineid,\nobjval FROM atobjval WHERE objid IN\n(281479288456304,281479288456359,281479288456360,281479288456384,2814792\n88456385,281479288456403,281479288456404,281479288456406,281479288456408\n,281479288456432,281479288456433,281479288456434,281479288456438,2814792\n88456442,281479288456468,281479288456499,281479288456546,281479288456547\n,281479288456590,281479288456636,281479288456638,281479288456722,2814792\n88457111,281479288457125,281479288457126,281479288457143,281479288457229\n,281479288457230,281479288457477,281479288457478,281479288457546,2814792\n88457559,281479288457676,281479288457686,281479288457792,281479288457808\n,281479288457809,281479288457852,281479288457853,281479288457902,2814792\n88457961,281479288457962,281479288458005,281479288458097,281479288458116\n,281479288458155,281479288458156,281479288458183,281479288458516,2814792\n88458523,281479288458576,281479288458577,281479288458624,281479288458716\n,281479288458721,281479288458735,281479288458736,281479288458737,2814792\n88458758,281479288458786,281479288458788,281479288458789,281479288458794\n,281479288458806,281479288458914,281479288458957,281479288458958,2814792\n88459029,281479288459126,281479288459127,281479288459135,281479288459259\n,281479288459260,281479288459261,281479288459262,281479288459321,2814792\n88459425,281479288459426,281479288459427,281479288459428,281479288459447\n,281479288459450,281479288459453,281479288459457,281479288459462,2814792\n88459607,281479288459608,281479288459635,281479288459636,281479288459732\n,281479288459767,281479288459954,281479288459974,281479288459975,2814792\n88459976,281479288459977,281479288460034,281479288460060,281479288460070\n,281479288460073,281479288460088,281479288460162,281479288460163,2814792\n88460167,281479288460170,281479288460173,281479288460176,281479288460179\n,281479288460182,281479288460185,281479288460188,281479288460217,2814792\n88460290,281479288460292,281479288460318,281479288460325,281479288460332\n,281479288460337,281479288460339,281479288460377,281479288460378,2814792\n88460394,281479288460412,281479288460457,281479288460565,281479288460566\n,281479288460567,281479288460608,281479288460609,281479288460683,2814792\n88460684,281479288461021,281479288461024,281479288461059,281479288461091\n,281479288461281,281479288461367,281479288461368,281479288461369,2814792\n88461377,281479288461429,281479288461477,281479288461483,281479288461484\n,281479288461485,281479288461493,281479288461494,281479288461502,2814792\n88461522,281479288461570,281479288461578,281479288461654,281479288461655\n,281479288461690,281479288461711,281479288461712,281479288461747,2814792\n88461776,281479288461777,281479288461838,281479288461839,281479288461878\n,281479288461889,281479288462036,281479288462083,281479288462090,2814792\n88462096,281479288462104,281479288462129,281479288462136,281479288462276\n,281479288462277,281479288462366,281479288462367,281479288462448,2814792\n88462450,281479288462502,281479288462817,281479288462967,281479288462968\n,281479288462969,281479288463200,281479288463246,281479288463247,2814792\n88463248,281479288463255,281479288463437,281479288463441,281479288463462\n,281479288463482,281479288463642,281479288463645,281479288463782,2814792\n88463790,281479288463802,281479288463809,281479288463819,281479288463843\n,281479288463859,281479288463967,281479288463968,2
81479288463969,2814792\n88465253,281479288465396,281479288465397,281479288465417,281479288465429\n,281479288465436,281479288467191,285774255752169,285774255752181,2857742\n55752183,285774255752188,285774255752189,285774255752198,285774255753788\n,285774255753789,285774255753790,285774255753793,285774255753794,2857742\n55753808,285774255753809,285774255753811,285774255753812,285774255753828\n,285774255753893,285774255753993,285774255754091,285774255754106,2857742\n55754110,285774255754160,285774255755169,285774255755179,285774255755184\n,285774255755187,285774255755205,285774255755252,285774255755254,2857742\n55755271,285774255755481,285774255755494,285774255755514,285774255755534\n,285774255755571,285774255755597,285774255755616,285774255755622,2857742\n55755632,285774255755696,285774255755717,285774255755729,285774255755747\n,285774255755759,285774255755787,285774255755791,285774255755798,2857742\n55755802,285774255757269,285774255757270,285774255757286,285774255757287\n,285774255757518,285774255757687,285774255757797,285774255761019,2857742\n55761021,285774255761069,285774255761070,285774255764181,285774255764182\n,285774255764196,285774255764204,285774255764276,285774255764290,2857742\n55764301,285774255764312,285774255764333,285774255764334,285774255764335\n,285774255764367,285774255764369,285774255764371,285774255764382,2857742\n55764394,285774255764418,285774255764420,285774255764430,285774255764486\n,285774255764490,285774255764498,285774255764616,285774255764683,2857742\n55764787,285774255764802,285774255765031,285774255765043,285774255765052\n,285774255765066,285774255765081,285774255765145,285774255766471,2857742\n55767469,285774255767809,285774255767971,285774255768111,285774255768151\n,285774255768193,285774255768199,285774255768220,285774255769244,2857742\n55769317,285774255770269,285774255770343,285774255770373,285774255770374\n,285774255770475,285774255770488,285774255772471,285774255773974,2857742\n55773977,285774255773981,285774255774003,285774255774012,285774255774018\n,285774255774079,285774255774080,285774255774098,285774255774106,2857742\n55774110,285774255774130,285774255774775,285774255777173,285774255777188\n,285774255777205,285774255777219,285774255777241,285774255777242,2857742\n55777243,285774255777245,285774255777260,285774255777299,285774255777337\n,285774255777422,285774255777445,285774255777446,285774255778669,2857742\n55778671,285774255778672,285774255779069,285774255779070,285774255781196\n,285774255782209,285774255782221,285774255782224,285774255782226,2857742\n55782325,285774255782430,285774255783469,285774255783470,285774255783575\n,285774255783576,285774255783577,285774255785169,285774255785170,2857742\n55785173,285774255785174,285774255785177,285774255785178,285774255785189\n,285774255785190,285774255785197,285774255785198,285774255785209,2857742\n55785210,285774255785238,285774255785239,285774255788781,285774255788784\n,285774255788821,285774255788827,285774255788830,285774255788852,2857742\n55788867,285774255788889,285774255789671,285774255789852,285774255790150\n,285774255790369,285774255790370,285774255790373,285774255790569,2857742\n55790571,285774255790572,285774255790573,285774255790645,285774255790655\n,285774255793470,285774255793517,285774255793647,285774255793650,2857742\n55793687,285774255795211,285774255797003,285774255798195,285774255798234\n,285774255798242,285774255800551,285774255800689,285774255800696,2857742\n55800751,285774255800821,285774255809954,285774255809981,285774255810032\n,285774255810033,285774255812694,285774255812706,2857
74255812708,2857742\n55812713,285774255812746,285774255812747,285774255812752,285774255812761\n,285774255812765,285774255812768,285774255812771,285774255812774,2857742\n55812857,285774255813980,285774255814277,285774255814296,285774255814313\n,285774255814314,285774255814333,285774255814357,285774255814368,2857742\n55814385,285774255815169,285774255816279,285774255816675,285774255817669\n,285774255817688,285774255817699,285774255817793,285774255817874,2857742\n55817952,285774255817960,285774255817981,285774255818045,285774255818052\n,285774255818067,285774255818068,285774255818106,285774255820418,2857742\n55821169,285774255821224,285774255821232,285774255821303,285774255821387\n,285774255821393,285774255821468,285774255821481,285774255826269,2857742\n55826980,285774255826985,285774255826994,285774255827971,285774255827999\n,285774255828050,285774255828168,285774255828171,285774255828172,2857742\n55828173,285774255828174,285774255828205,285774255828213,285774255828221\n,285774255832360,285774255832381,285774255832500,285774255832534,2857742\n55832551,285774255832590,285774255832611,285774255832641,285774255832726\n,285774255832782,285774255833369,285774255833477,285774255833490,2857742\n55833532,285774255833537,285774255833669,285774255834874,285774255834950\n,285774255835014,285774255835035,285774255836198,285774255836199,2857742\n55837674) ORDER BY objid, attrid, aggrid, lineid;\n\nWhen we use 20 as default_statistics_target the retrieval of the data\ntakes 7.5 seconds - with 25 as default_statistics_target (with restart\nand analyze) it takes 0.6 seconds.\nThe query plan is identical in both situations (row estimation differs a\nlittle bit) - the query is always fast when it is executed without a\ncursor.\n\nThe problem is the last of the 16 fetches - it takes about 7 seconds\nwhen using 20 as default_statistics_target.\nDuring the fetch there is no disk IO - the backend which is running the\nquery uses 100 % of one cpu core during the seven seconds.\n\nThe following shows OProfile output for the seven seconds:\nsamples % symbol name\n200318 41.0421 ExecEvalScalarArrayOp\n106214 21.7616 int8eq\n36031 7.3822 ExecMakeFunctionResultNoSets\n17894 3.6662 ExecEvalAnd\n16518 3.3843 ExecEvalScalarVar\n13282 2.7213 ExecEvalConst\n9980 2.0448 HeapTupleSatisfiesSnapshot\n9696 1.9866 SyncOneBuffer\n8643 1.7708 slot_getattr\n6792 1.3916 int8le\n6232 1.2768 hash_search_with_hash_value\n6032 1.2359 heap_release_fetch\n5690 1.1658 ExecEvalOr\n5442 1.1150 _bt_checkkeys\n5430 1.1125 LWLockAcquire\n5228 1.0711 PinBuffer\n\nDoes anyone has an explanation for this behavior?\n\nRegards,\nRobert Hell\n", "msg_date": "Tue, 1 Apr 2008 14:14:25 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cursors and different settings for default_statistics_target" }, { "msg_contents": "\"Hell, Robert\" <[email protected]> writes:\n> When we use 20 as default_statistics_target the retrieval of the data\n> takes 7.5 seconds - with 25 as default_statistics_target (with restart\n> and analyze) it takes 0.6 seconds.\n> The query plan is identical in both situations (row estimation differs a\n> little bit) - the query is always fast when it is executed without a\n> cursor.\n\nA cursor doesn't necessarily use the same plan as a straight query does.\nTry \"EXPLAIN DECLARE curs_285058224 CURSOR FOR ...\" and see if you\naren't getting different plans in these two cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 11:30:05 -0400", "msg_from": "Tom Lane <[email 
protected]>", "msg_from_op": false, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "That's it - I found a more simple statement which has the same problem\n(0.02 seconds vs. 6 seconds):\n\nWith cursor (6 seconds):\nappcooelakdb2=> explain DECLARE curs_1 CURSOR FOR SELECT DISTINCT\nt2.objid FROM atobjval t2 WHERE t2.aggrid = 0 AND t2.attrid =\n281479288455385 ORDER BY t2.objid;\n QUERY PLAN\n------------------------------------------------------------------------\n----------------------\n Unique (cost=0.00..1404823.63 rows=538 width=8)\n -> Index Scan using atobjvalix on atobjval t2\n(cost=0.00..1404751.32 rows=28925 width=8)\n Index Cond: ((attrid = 281479288455385::bigint) AND (aggrid =\n0))\n\n\nWithout cursor (0.02 seconds)\nappcooelakdb2=> explain SELECT DISTINCT t2.objid FROM atobjval t2 WHERE\nt2.aggrid = 0 AND t2.attrid = 281479288455385 ORDER BY t2.objid;\n QUERY PLAN\n------------------------------------------------------------------------\n----------------------\n Unique (cost=151717.85..151862.48 rows=538 width=8)\n -> Sort (cost=151717.85..151790.17 rows=28925 width=8)\n Sort Key: objid\n -> Bitmap Heap Scan on atobjval t2 (cost=1692.40..149574.51\nrows=28925 width=8)\n Recheck Cond: (attrid = 281479288455385::bigint)\n Filter: (aggrid = 0)\n -> Bitmap Index Scan on ind_atobjval\n(cost=0.00..1685.16 rows=59402 width=0)\n Index Cond: (attrid = 281479288455385::bigint)\n\nWhat's the difference between plan calculation for cursors and straight\nqueries?\n\nKind regards,\nRobert\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Dienstag, 01. April 2008 17:30\nTo: Hell, Robert\nCc: [email protected]\nSubject: Re: [PERFORM] Cursors and different settings for\ndefault_statistics_target \n\n\"Hell, Robert\" <[email protected]> writes:\n> When we use 20 as default_statistics_target the retrieval of the data\n> takes 7.5 seconds - with 25 as default_statistics_target (with restart\n> and analyze) it takes 0.6 seconds.\n> The query plan is identical in both situations (row estimation differs\na\n> little bit) - the query is always fast when it is executed without a\n> cursor.\n\nA cursor doesn't necessarily use the same plan as a straight query does.\nTry \"EXPLAIN DECLARE curs_285058224 CURSOR FOR ...\" and see if you\naren't getting different plans in these two cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 1 Apr 2008 17:48:04 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "\"Hell, Robert\" <[email protected]> writes:\n> That's it - I found a more simple statement which has the same problem\n> (0.02 seconds vs. 6 seconds):\n\nThis isn't necessarily the very same problem --- what are the plans for\nyour original case with the two different stats settings?\n\n> What's the difference between plan calculation for cursors and straight\n> queries?\n\nThe planner is set up to favor fast-start plans a little bit when\nplanning a cursor, on the theory that you are probably more interested\nin getting some of the rows sooner than you are in the total runtime,\nand that you might not ever intend to fetch all the rows anyway.\nIn the example you give here, it likes the indexscan/unique plan because\nof the zero startup cost, even though the total cost is (correctly)\nestimated as much higher. 
(Looking at this example, I wonder if the\nfast-start bias isn't a bit too strong...)\n\nIt's not immediately apparent to me though how that would affect\nyour original query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 12:16:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "Here are the query plans for the original query - looks very similar (to\nme):\n\nEXPLAIN SELECT objid, attrid, aggrid, lineid, objval FROM atobjval WHERE\nobjid IN (281479288456304,<many of them>,285774255837674) ORDER BY\nobjid, attrid, aggrid, lineid;\n QUERY PLAN\n------------------------------------------------------------------------\n----------------------\nSort (cost=116851.38..117196.22 rows=137935 width=32)\n Sort Key: objid, attrid, aggrid, lineid\n -> Bitmap Heap Scan on atobjval (cost=4947.40..105076.13\nrows=137935 width=32)\n Recheck Cond: (objid = ANY ('{281479288456304,<many of\nthem>,285774255837674}'::bigint[]))\n -> Bitmap Index Scan on atobjvalix (cost=0.00..4912.92\nrows=137935 width=0)\n Index Cond: (objid = ANY ('{281479288456304,<many of\nthem>,285774255837674}'::bigint[]))\n\n\nexplain DECLARE curs_285058224 CURSOR FOR SELECT objid, attrid, aggrid,\nlineid, objval FROM atobjval WHERE objid IN (281479288456304,<many of\nthem>,285774255837674) ORDER BY objid, attrid, aggrid, lineid;\n QUERY PLAN\n------------------------------------------------------------------------\n----------------------\nIndex Scan using atobjvalix on atobjval (cost=0.00..1041413.49\nrows=137935 width=32)\n Filter: (objid = ANY ('{281479288456304,<many of\nthem>,285774255837674}'::bigint[]))\n\n\nThat's CURSOR_OPT_FAST_PLAN and isn't it? Our application reads the full\nresults of most cursors.\n\nRegards,\nRobert\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Dienstag, 01. April 2008 18:17\nTo: Hell, Robert\nCc: [email protected]\nSubject: Re: [PERFORM] Cursors and different settings for\ndefault_statistics_target \n\n\"Hell, Robert\" <[email protected]> writes:\n> That's it - I found a more simple statement which has the same problem\n> (0.02 seconds vs. 6 seconds):\n\nThis isn't necessarily the very same problem --- what are the plans for\nyour original case with the two different stats settings?\n\n> What's the difference between plan calculation for cursors and\nstraight\n> queries?\n\nThe planner is set up to favor fast-start plans a little bit when\nplanning a cursor, on the theory that you are probably more interested\nin getting some of the rows sooner than you are in the total runtime,\nand that you might not ever intend to fetch all the rows anyway.\nIn the example you give here, it likes the indexscan/unique plan because\nof the zero startup cost, even though the total cost is (correctly)\nestimated as much higher. (Looking at this example, I wonder if the\nfast-start bias isn't a bit too strong...)\n\nIt's not immediately apparent to me though how that would affect\nyour original query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 1 Apr 2008 18:33:19 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "\"Hell, Robert\" <[email protected]> writes:\n> That's CURSOR_OPT_FAST_PLAN and isn't it? 
Our application reads the full\n> results of most cursors.\n\nJust out of curiosity, why use a cursor at all then? But anyway, you\nmight want to consider running a custom build with a higher setting for\ntuple_fraction for OPT_FAST_PLAN (look into planner.c). I've\noccasionally thought about exposing that as a GUC parameter, but\nnever gotten motivated to do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 12:42:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "On Tue, Apr 01, 2008 at 12:42:03PM -0400, Tom Lane wrote:\n>> That's CURSOR_OPT_FAST_PLAN and isn't it? Our application reads the full\n>> results of most cursors.\n> Just out of curiosity, why use a cursor at all then?\n\nThis isn't the same scenario as the OP, but I've used a cursor in cases where\nI cannot keep all of the dataset in memory at the client at once, but I _can_\ncoerce it down to a more manageable size as it comes in.\n\nI don't know if a cursor is the only way to do this (short of making a custom\nfunction inside Postgres of some sort), but it seems to be the simplest way\nin libpqxx, at least.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 1 Apr 2008 21:57:44 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cursors and different settings for\n\tdefault_statistics_target" }, { "msg_contents": "Looks much better when using 0.0 for tuple_fraction in case of a cursor instead of 0.1.\n\nBut why are the first 15 fetches (15360 rows) processed in 0.5 seconds and the last fetch (998 rows) takes 7 seconds.\nAre we just unlucky that the last fetch takes that long?\n\n\nEXPLAIN SELECT objid, attrid, aggrid, lineid, objval FROM atobjval WHERE objid IN (281479288456304,<many of them>,285774255837674) ORDER BY objid, attrid, aggrid, lineid;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\nSort (cost=116851.38..117196.22 rows=137935 width=32)\n Sort Key: objid, attrid, aggrid, lineid\n -> Bitmap Heap Scan on atobjval (cost=4947.40..105076.13 rows=137935 width=32)\n Recheck Cond: (objid = ANY ('{281479288456304,<many of them>,285774255837674}'::bigint[]))\n -> Bitmap Index Scan on atobjvalix (cost=0.00..4912.92 rows=137935 width=0)\n Index Cond: (objid = ANY ('{281479288456304,<many of them>,285774255837674}'::bigint[]))\n\n\nexplain DECLARE curs_285058224 CURSOR FOR SELECT objid, attrid, aggrid, lineid, objval FROM atobjval WHERE objid IN (281479288456304,<many of them>,285774255837674) ORDER BY objid, attrid, aggrid, lineid;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\nIndex Scan using atobjvalix on atobjval (cost=0.00..1041413.49 rows=137935 width=32)\n Filter: (objid = ANY ('{281479288456304,<many of them>,285774255837674}'::bigint[]))\n\nRegards,\nRobert\n\n\n-----Ursprüngliche Nachricht-----\nVon: Tom Lane [mailto:[email protected]] \nGesendet: Dienstag, 01. April 2008 18:42\nAn: Hell, Robert\nCc: [email protected]\nBetreff: Re: [PERFORM] Cursors and different settings for default_statistics_target \n\n\"Hell, Robert\" <[email protected]> writes:\n> That's CURSOR_OPT_FAST_PLAN and isn't it? Our application reads the full\n> results of most cursors.\n\nJust out of curiosity, why use a cursor at all then? 
But anyway, you\nmight want to consider running a custom build with a higher setting for\ntuple_fraction for OPT_FAST_PLAN (look into planner.c). I've\noccasionally thought about exposing that as a GUC parameter, but\nnever gotten motivated to do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 1 Apr 2008 23:12:58 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "\"Hell, Robert\" <[email protected]> writes:\n> But why are the first 15 fetches (15360 rows) processed in 0.5 seconds and the last fetch (998 rows) takes 7 seconds.\n> Are we just unlucky that the last fetch takes that long?\n\nWell, the indexscan plan is going to scan through all the rows in objid\norder and return whichever of them happen to match the IN list. So I\nthink you're just saying that your IN list isn't uniformly dense through\nthe whole set of objids.\n\nIf the real story is that you tend to select only objids within a narrow\nrange, adding explicit \"AND objid >= x AND objid <= y\" constraints\n(where you compute x and y on the fly from the set of objids you're\nasking for) would reduce the overhead of the indexscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Apr 2008 18:23:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cursors and different settings for default_statistics_target " }, { "msg_contents": "I'm motivated to contribute a patch for that.\n\nI would prefer to make tuple_fraction for cursors configurable as GUC\nparameter cursor_tuple_fraction.\nDo you agree with that?\n\nRegards,\nRobert\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Mittwoch, 02. April 2008 00:24\nTo: Hell, Robert\nCc: [email protected]\nSubject: Re: [PERFORM] Cursors and different settings for\ndefault_statistics_target \n\n\"Hell, Robert\" <[email protected]> writes:\n> But why are the first 15 fetches (15360 rows) processed in 0.5 seconds\nand the last fetch (998 rows) takes 7 seconds.\n> Are we just unlucky that the last fetch takes that long?\n\nWell, the indexscan plan is going to scan through all the rows in objid\norder and return whichever of them happen to match the IN list. So I\nthink you're just saying that your IN list isn't uniformly dense through\nthe whole set of objids.\n\nIf the real story is that you tend to select only objids within a narrow\nrange, adding explicit \"AND objid >= x AND objid <= y\" constraints\n(where you compute x and y on the fly from the set of objids you're\nasking for) would reduce the overhead of the indexscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 2 Apr 2008 12:11:06 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cursors and different settings for default_statistics_target " } ]
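A compact way to reproduce the comparison made in this thread, reusing the thread's table and column names (the three id values below are just a shortened stand-in for the real IN list): EXPLAIN accepts a DECLARE CURSOR statement, so the fast-start cursor plan and the plain-query plan can be compared directly, and the explicit min/max bound Tom suggests can be added to the cursor form.

BEGIN;

-- plan the planner picks for the cursor (fast-start bias applies)
EXPLAIN DECLARE c1 CURSOR FOR
    SELECT objid, attrid, aggrid, lineid, objval
      FROM atobjval
     WHERE objid IN (281479288456304, 281479288456359, 285774255837674)
     ORDER BY objid, attrid, aggrid, lineid;

-- plan the same statement gets as a plain query
EXPLAIN
    SELECT objid, attrid, aggrid, lineid, objval
      FROM atobjval
     WHERE objid IN (281479288456304, 281479288456359, 285774255837674)
     ORDER BY objid, attrid, aggrid, lineid;

-- cursor with the range bound computed from the smallest and largest ids in the list
DECLARE c2 CURSOR FOR
    SELECT objid, attrid, aggrid, lineid, objval
      FROM atobjval
     WHERE objid IN (281479288456304, 281479288456359, 285774255837674)
       AND objid >= 281479288456304 AND objid <= 285774255837674
     ORDER BY objid, attrid, aggrid, lineid;
FETCH 1024 FROM c2;

ROLLBACK;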
[ { "msg_contents": "Hi everyone,\n\nI am running a test with 1 thread calling a stored\nprocedure in an endless loop. The stored procedure\ninserts 1000 records in a table that does not have\nindexes or constraints.\nIn the log file I see that the time to execute the\nprocedure sometimes it jumps from 100 ms to 700 ms.\nThe auto-vacuum is turned off.\nCan anyone give me some details about this?\n\nThanks a lot,\n\n17221%2008-04-01 09:22:53 ESTLOG: statement: select *\nfrom testinsert(100001001,1000)\n17221%2008-04-01 09:22:53 ESTLOG: duration: 111.654\nms\n17223%2008-04-01 09:22:53 ESTLOG: statement: select *\nfrom testinsert(100001001,1000)\n17223%2008-04-01 09:22:54 ESTLOG: duration: 710.426\nms\n\n\n __________________________________________________________________\nAsk a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com\n\n", "msg_date": "Tue, 1 Apr 2008 11:38:38 -0400 (EDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "Insert time" } ]
[ { "msg_contents": "Hi\nIam getting the below error when iam running my program.\nERROR: cannot have more than 2^32-1 commands in a transaction\nSQL state: 54000\nIf iam not wrong this error ocuurs when there are too many statements executing\nin one single transaction.\nBut this error is occuring in a function that iam least expecting it\nto occur in. The function it occurs in is as follows:\nBEGIN\n resultindex := 1;\n\n -- Go through summedprobs, find where rnum falls, set resultindex\n FOR i IN REVERSE (array_upper(summedprobs,1)-1)..1 LOOP\n IF rnum >= summedprobs[i] AND rnum <= summedprobs[i+1] THEN\n resultindex := i;\n END IF;\n END LOOP;\n\n\n RETURN (dobs[resultindex]);\n EXCEPTION WHEN program_limit_exceeded THEN\n RAISE NOTICE 'Exception in GETRES';\n RETURN (dobs[resultindex]);\nEND;\n\n\nIs is beacuse of the REVERSE command? or because the program is\nexecutiung many select and update statements?\nCatching the exception isnt helping here either.Can anyone explain me\nwhy this error occurs and what i can do to resolve it?\n\n\nThanks\nSam\n", "msg_date": "Tue, 1 Apr 2008 16:58:21 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Too many commands in a transaction" } ]
[ { "msg_contents": "Tried harder to find info on the write cycles: found som CFs that claim \n2million\ncycles, and found the Mtron SSDs which claim to have very advanced wear\nlevelling and a suitably long lifetime as a result even with an \nassumption that\nthe underlying flash can do 100k writes only.\n\nThe 'consumer' MTrons are not shabby on the face of it and not too \nexpensive,\nand the pro models even faster.\n\nBut ... the spec pdf shows really hight performance for average access, \nstream\nread *and* write, random read ... and absolutely pants performance for \nrandom\nwrite. Like 130/s, for .5k and 4k writes.\n\nIts so pants it looks like a misprint and it doesn't seem to square with the\nreview on tomshardware:\nhttp://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/page7.html\n\nEven there, the database IO rate does seem lower than you might hope,\nand this *might* be because the random reads are very very fast and the\nrandom writes ... aren't. Which is a shame, because that's exactly the\nbit I'd hope was fast.\n\nSo, more work to do somewhere.\n\n\n", "msg_date": "Wed, 02 Apr 2008 07:16:20 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "SSDs" }, { "msg_contents": "My colleague has tested a single Mtron Mobo's and a set of 4. He also \nmentioned the write performance was pretty bad compared to a Western \nDigital Raptor. He had a solution for that however, just plug the SSD in \na raid-controller with decent cache performance (his favorites are the \nAreca controllers) and the \"bad\" write performance is masked by the \ncontroller's cache. It wood probably be really nice if you'd get tuned \ncontrollers for ssd's so they use less cache for reads and more for writes.\n\nBest regards,\n\nArjen\n\nOn 2-4-2008 8:16, James Mansion wrote:\n> Tried harder to find info on the write cycles: found som CFs that claim \n> 2million\n> cycles, and found the Mtron SSDs which claim to have very advanced wear\n> levelling and a suitably long lifetime as a result even with an \n> assumption that\n> the underlying flash can do 100k writes only.\n> \n> The 'consumer' MTrons are not shabby on the face of it and not too \n> expensive,\n> and the pro models even faster.\n> \n> But ... the spec pdf shows really hight performance for average access, \n> stream\n> read *and* write, random read ... and absolutely pants performance for \n> random\n> write. Like 130/s, for .5k and 4k writes.\n> \n> Its so pants it looks like a misprint and it doesn't seem to square with \n> the\n> review on tomshardware:\n> http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/page7.html\n> \n> Even there, the database IO rate does seem lower than you might hope,\n> and this *might* be because the random reads are very very fast and the\n> random writes ... aren't. 
Which is a shame, because that's exactly the\n> bit I'd hope was fast.\n> \n> So, more work to do somewhere.\n> \n> \n> \n", "msg_date": "Wed, 02 Apr 2008 08:44:39 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSDs" }, { "msg_contents": "On Wed, Apr 2, 2008 at 1:16 AM, James Mansion\n<[email protected]> wrote:\n> Tried harder to find info on the write cycles: found som CFs that claim\n> 2million\n> cycles, and found the Mtron SSDs which claim to have very advanced wear\n> levelling and a suitably long lifetime as a result even with an\n> assumption that\n> the underlying flash can do 100k writes only.\n>\n> The 'consumer' MTrons are not shabby on the face of it and not too\n> expensive,\n> and the pro models even faster.\n>\n> But ... the spec pdf shows really hight performance for average access,\n> stream\n> read *and* write, random read ... and absolutely pants performance for\n> random\n> write. Like 130/s, for .5k and 4k writes.\n>\n> Its so pants it looks like a misprint and it doesn't seem to square with the\n> review on tomshardware:\n> http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/page7.html\n>\n> Even there, the database IO rate does seem lower than you might hope,\n> and this *might* be because the random reads are very very fast and the\n> random writes ... aren't. Which is a shame, because that's exactly the\n> bit I'd hope was fast.\n>\n> So, more work to do somewhere.\n\nif flash ssd random write was as good as random read, a single flash\nssd could replace a stack of 15k disks in terms of iops (!).\n\nunfortunately, the random write performance of flash SSD is indeed\ngrim. there are some technical reasons for this that are basically\nfundamental tradeoffs in how flash works, and the electronic processes\ninvolved. unfortunately even with 10% write 90% read workloads this\nmakes flash a non-starter for 'OLTP' systems (exactly the sort of\nworkloads you would want the super seek times).\n\na major contributing factor is that decades of optimization and\nresearch have gone into disk based sytems which are pretty similar in\nterms of read and write performance. since flash just behaves\ndifferently, these optimizations\n\nread this paper for a good explanation of this [pdf]:\nhttp://tinyurl.com/357zux\n\nmy personal opinion is these problems will prove correctable due to\nimprovements in flash technology, improvement of filesystems and raid\ncontrollers in terms of flash, and introduction of other non volatile\nmemory. 
so the ssd is coming...it's inevitable, just not as soon as\nsome of us had hoped.\n\nmerlin\n", "msg_date": "Wed, 2 Apr 2008 23:36:50 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSDs" }, { "msg_contents": "What can be set as max of postgreSQL shared_buffers and work_mem\n\n2008-04-03 \n\n\n\nbitaoxiao \n\n\n\n\n\n\n\n \nWhat can be set as max \r\nof postgreSQL shared_buffers and work_mem\n \n2008-04-03 \n\n\nbitaoxiao", "msg_date": "Thu, 3 Apr 2008 12:56:17 +0800", "msg_from": "\"bitaoxiao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Max shared_buffers" }, { "msg_contents": "There is NO MAX....\n\nIt is according to your hardware you have, and the db you have.\n\n2008/4/3 bitaoxiao <[email protected]>:\n\n>\n> What can be set as max of postgreSQL shared_buffers and work_mem\n>\n> 2008-04-03\n> ------------------------------\n> bitaoxiao\n>\n>\n\nThere is NO MAX....It is according to your hardware you have, and the db you have.2008/4/3 bitaoxiao <[email protected]>:\n\n \nWhat can be set as max \nof postgreSQL shared_buffers and work_mem\n \n2008-04-03 \n\n\nbitaoxiao", "msg_date": "Thu, 3 Apr 2008 15:40:57 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" }, { "msg_contents": "On Thu, Apr 3, 2008 at 4:10 AM, sathiya psql <[email protected]> wrote:\n> There is NO MAX....\n>\n> It is according to your hardware you have, and the db you have.\n\nNot entirely true. on 32 bit OS / software, the limit is just under 2\nGig. I'd imagine that the limit on 64 bit hardware / software is\ntherefore something around 2^63-somesmallnumber which is, for all\npractical purposes, unlimited.\n", "msg_date": "Thu, 3 Apr 2008 10:49:17 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" }, { "msg_contents": "On 04/04/2008, Scott Marlowe <[email protected]> wrote:\n> Not entirely true. on 32 bit OS / software, the limit is just under 2\n> Gig.\n\nWhere do you get that figure from?\n\nThere's an architectural (theoretical) limitation of RAM at 4GB,\nbut with the PAE (that pretty much any CPU since the Pentium Pro\noffers) one can happily address 64GB on 32-bit.\n\nOr are you talking about some Postgres limitation?\n\n\nCheers,\nAndrej\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Fri, 4 Apr 2008 06:16:22 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" }, { "msg_contents": "On Fri, 4 Apr 2008, Andrej Ricnik-Bay wrote:\n> On 04/04/2008, Scott Marlowe <[email protected]> wrote:\n>> Not entirely true. on 32 bit OS / software, the limit is just under 2\n>> Gig.\n>\n> Or are you talking about some Postgres limitation?\n\nSince the original question was:\n\n> What can be set as max of ¿½postgreS shared_buffers and work_mem?\n\nthat would be a \"Yes.\"\n\nMatthew\n\n-- \nI quite understand I'm doing algebra on the blackboard and the usual response\nis to throw objects... If you're going to freak out... 
wait until party time\nand invite me along -- Computer Science Lecturer", "msg_date": "Thu, 3 Apr 2008 18:20:46 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" }, { "msg_contents": "On Thu, Apr 3, 2008 at 11:16 AM, Andrej Ricnik-Bay\n<[email protected]> wrote:\n> On 04/04/2008, Scott Marlowe <[email protected]> wrote:\n> > Not entirely true. on 32 bit OS / software, the limit is just under 2\n> > Gig.\n>\n> Where do you get that figure from?\n>\n> There's an architectural (theoretical) limitation of RAM at 4GB,\n> but with the PAE (that pretty much any CPU since the Pentium Pro\n> offers) one can happily address 64GB on 32-bit.\n>\n> Or are you talking about some Postgres limitation?\n\nNote I was talking about running 32 bit postgresql (on either 32 or 64\nbit hardware, it doesn't matter) where the limit we've seen in the\nperf group over the years has been just under 2G.\n\nI'm extrapolating that on 64 bit hardware, 64 bit postgresql's limit\nwould be similar, i.e. 2^63-x where x is some small number that keeps\nus just under 2^63.\n\nSo, experience and reading here for a few years is where I get that\nnumber from. But feel free to test it. It'd be nice to know you\ncould get >2Gig shared buffer on 32 bit postgresql on some\nenvironment.\n", "msg_date": "Thu, 3 Apr 2008 11:48:58 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" }, { "msg_contents": "Andrej Ricnik-Bay wrote:\n> On 04/04/2008, Scott Marlowe <[email protected]> wrote:\n>> Not entirely true. on 32 bit OS / software, the limit is just under 2\n>> Gig.\n\nThat depends on the OS. On Linux it's AFAIK closer to 3GB because of\nless address space being consumed by the kernel, though I think free app\naddress space might be further reduced with truly *massive* amounts of\nRAM. There are patches (the \"4GB/4GB\" patches) that do dodgy address\nspace mapping to support a full 4GB application address space.\n\n> Where do you get that figure from?\n> \n> There's an architectural (theoretical) limitation of RAM at 4GB,\n> but with the PAE (that pretty much any CPU since the Pentium Pro\n> offers) one can happily address 64GB on 32-bit.\n\nThe OS can address more than 4GB of physical RAM with PAE, yes.\n\nHowever, AFAIK no single process may directly use more than (4GB -\nkernel address space requirements) of RAM without using special\nextensions like address space windowing. Of course, they still benefit\nfrom the extra RAM indirectly through bigger disk caches, less\ncompetition with other processes for free physical RAM, etc.\n\nAs Pg uses a multiprocess model I imagine individual backends can make\nuse of a large amount of RAM (as work_mem etc), though the address space\nconsumed by the shared memory will limit how much it can use.\n\nThere's a decent, if Microsoft-specific, article about PAE here:\n\nhttp://www.microsoft.com/whdc/system/platform/server/PAE/pae_os.mspx\n\n\n--\nCraig Ringer\n", "msg_date": "Fri, 04 Apr 2008 01:58:32 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max shared_buffers" } ]
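For the original question, the ceiling the server binary itself will accept can be read out of pg_settings rather than guessed from the architecture (max_val is the hard limit compiled into that build; the operating system, the 32- versus 64-bit address space issues discussed above, and available RAM all impose lower practical limits):

SELECT name, setting, unit, min_val, max_val
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem');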
[ { "msg_contents": "Hi All,\n\nWe are using solaris 10 x86/AMD Opteron and postgresql\n8.2 on SunFire X2100 , however performance is very\nslow in contrast to linux debian in the same platform.\nIs this normal?\n\nThanks & Regards\nMahi\n\n\n ____________________________________________________________________________________\nYou rock. That's why Blockbuster's offering you one month of Blockbuster Total Access, No Cost. \nhttp://tc.deals.yahoo.com/tc/blockbuster/text5.com\n", "msg_date": "Thu, 3 Apr 2008 04:45:06 -0700 (PDT)", "msg_from": "MUNAGALA REDDY <[email protected]>", "msg_from_op": true, "msg_subject": "Performance is low Postgres+Solaris" }, { "msg_contents": "MUNAGALA REDDY wrote:\n> Hi All,\n> \n> We are using solaris 10 x86/AMD Opteron and postgresql\n> 8.2 on SunFire X2100 , however performance is very\n> slow in contrast to linux debian in the same platform.\n> Is this normal?\n> \n> Thanks & Regards\n> Mahi\n\nhttp://www.google.com/search?q=postgresql+solaris+tuning&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a\n", "msg_date": "Thu, 10 Apr 2008 19:22:14 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance is low Postgres+Solaris" }, { "msg_contents": "Reid Thompson wrote:\n> MUNAGALA REDDY wrote:\n>> Hi All,\n>>\n>> We are using solaris 10 x86/AMD Opteron and postgresql\n>> 8.2 on SunFire X2100 , however performance is very\n>> slow in contrast to linux debian in the same platform.\n>> Is this normal?\n>>\n>> Thanks & Regards\n>> Mahi\n> \n> http://www.google.com/search?q=postgresql+solaris+tuning&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a \n> \n> \nhttp://wikis.sun.com/display/DBonSolaris/PostgreSQL\n\nhttp://www.sun.com/bigadmin/features/articles/postgresql_opensolaris.jsp\n\nhttp://www.google.fr/search?q=site%3Asun.com+postgresql+tuning&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a\n", "msg_date": "Thu, 10 Apr 2008 19:30:31 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance is low Postgres+Solaris" }, { "msg_contents": "On Thu, 3 Apr 2008, MUNAGALA REDDY wrote:\n\n> We are using solaris 10 x86/AMD Opteron and postgresql\n> 8.2 on SunFire X2100 , however performance is very\n> slow in contrast to linux debian in the same platform.\n\nThere are some significant differences between how Solaris caches database \nfiles in both of its major filesystems (UFS and ZFS) compared to how Linux \nhandles caching. Linux is much more aggressive in using lots of memory \neffectively for database caching compared to an untuned Solaris 10.\n\nI'd recommend the just published talk at \nhttp://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_best for an \nintroduction to the details; the information at pages 8 and 9 there I \nfound particularly informative.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Apr 2008 16:26:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance is low Postgres+Solaris" } ]
[ { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Apr 3, 2008 at 4:10 AM, sathiya psql <[email protected]> wrote:\n> \n>> There is NO MAX....\n>>\n>> It is according to your hardware you have, and the db you have.\n>> \n>\n> Not entirely true. on 32 bit OS / software, the limit is just under 2\n> Gig. I'd imagine that the limit on 64 bit hardware / software is\n> therefore something around 2^63-somesmallnumber which is, for all\n> practical purposes, unlimited.\n>\n> \nIts limited only by the hardware and OS you are running. Some OS's on \n32bit allow for up to 4 gigs but you have to leave room for the OS and \nother running processes. On 64 bit hardware it can be limited by the \nHardware along with OS where its artificially limited to 32gigs, 64 gigs \nand 2TB seem to be the most common limits set by the hardware or OS. \nBut again you have to leave room in these setting for other processes if \nnot out memory errors will occur and may crash the server.\n\nIn general these setting are only limited by the Hardware and OS\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Thu, Apr 3, 2008 at 4:10 AM, sathiya psql <[email protected]> wrote:\n \n\nThere is NO MAX....\n\nIt is according to your hardware you have, and the db you have.\n \n\n\nNot entirely true. on 32 bit OS / software, the limit is just under 2\nGig. I'd imagine that the limit on 64 bit hardware / software is\ntherefore something around 2^63-somesmallnumber which is, for all\npractical purposes, unlimited.\n\n \n\nIts limited only by the hardware and OS you are running.    Some OS's\non 32bit allow for up to 4 gigs but you have to leave room for the OS\nand other running processes.  On 64 bit hardware it can be limited by\nthe Hardware along with OS where its artificially limited to 32gigs, 64\ngigs and 2TB  seem to be the most common limits set by the hardware or\nOS.   But again you have to leave room in these setting for other\nprocesses if not out memory errors will occur and may crash the server.\n\n\nIn general these setting are only limited by the Hardware and OS", "msg_date": "Thu, 03 Apr 2008 13:21:27 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: Max shared_buffers]" } ]
[ { "msg_contents": "I'm trying to fine tune this query to return in a reasonable amount of time\nand am having difficulties getting the query to run the way I'd like. I\nhave a couple of semi-related entities that are stored in individual tables,\nsay, A and B. There is then a view created that pulls together the common\nfields from these 2 tables. These are then related through a m:m\nrelationship to a classification. Quick definitions of all of this follows:\n\nTable: ItemA\nid <- primary key\nname\ndescription\n<addtl fields for A>\n\nTable: ItemB\nid <- primary key\nname\ndescription\n<addtl fields for B>\n\n\nView: Combined\nSELECT id, name, description from ItemA\nUNION ALL\nSELECT id, name, description from ItemB\n\n\nTable: xref\nid <- primary key\nitem_id <- indexed, points to either ItemA.id or ItemB.id\nclassifcation_id <- indexed, points to classification.id\n\n\nTable: classifcation\nid <- primiary key\nname\n\nI'm trying to query from the classification, through the xref, and to the\nview to get a list of Items (either A or B) that are tied to a specific\nclassification. My query is rather simple, baiscally as follows:\n\nSELECT id, name, description\nFROM combination c\n INNER JOIN xref on c.id = xref.item_id\nWHERE xref.classifcation_id = 1\n\nThis query runs in about 2-3 minutes (I should mention that ItemA has ~18M\nrecords and xref has ~26M records - and both will continue to grow). The\nexplain text shows a disregard for the indexes on ItemA and ItemB and a\nsequence scan is done on both of them. However, if I rewrite this query to\njoin directly to ItemA rather to the view it runs in ~50ms because it now\nuses the proper index.\n\nI know it's generally requested to include the EXPLAIN text when submitting\na specific question, but I thought perhaps this was generic enough that\nsomeone might at least have some suggestions. If required I can certainly\nwork up a simpler example, or I could include my actual explain (though it\ndoesn't exactly match everything defined above as I tried to keep things\nrather generic).\n\nAny links would be nice as well, from all my searching the past few days,\nmost of the performance tuning resources I could find where about tuning the\nserver itself, not really a specific query - at least not one that dealt\nwith this issue. If you've read this far - thank you much!\n\nI'm trying to fine tune this query to return in a reasonable amount of time and am having difficulties getting the query to run the way I'd like.  I have a couple of semi-related entities that are stored in individual tables, say, A and B.  There is then a view created that pulls together the common fields from these 2 tables.  These are then related through a m:m relationship to a classification.  Quick definitions of all of this follows:\nTable: ItemAid     <- primary keynamedescription<addtl fields for A>Table: ItemBid    <- primary keynamedescription<addtl fields for B>View: Combined\nSELECT id, name, description from ItemAUNION ALLSELECT id, name, description from ItemBTable: xrefid   <- primary keyitem_id  <- indexed, points to either ItemA.id or ItemB.idclassifcation_id  <- indexed, points to classification.id\nTable: classifcationid   <- primiary keynameI'm trying to query from the classification, through the xref, and to the view to get a list of Items (either A or B) that are tied to a specific classification.  
My query is rather simple, baiscally as follows:\nSELECT id, name, descriptionFROM combination c    INNER JOIN xref on c.id = xref.item_idWHERE xref.classifcation_id = 1This query runs in about 2-3 minutes (I should mention that ItemA has ~18M records and xref has ~26M records - and both will continue to grow).  The explain text shows a disregard for the indexes on ItemA and ItemB and a sequence scan is done on both of them.  However, if I rewrite this query to join directly to ItemA rather to the view it runs in ~50ms because it now uses the proper index.\nI know it's generally requested to include the EXPLAIN text when submitting a specific question, but I thought perhaps this was generic enough that someone might at least have some suggestions.  If required I can certainly work up a simpler example, or I could include my actual explain (though it doesn't exactly match everything defined above as I tried to keep things rather generic).\nAny links would be nice as well, from all my searching the past few days, most of the performance tuning resources I could find where about tuning the server itself, not really a specific query - at least not one that dealt with this issue.  If you've read this far - thank you much!", "msg_date": "Thu, 3 Apr 2008 17:31:45 -0600", "msg_from": "\"Matt Klinker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan excluding index on view" }, { "msg_contents": "\"Matt Klinker\" <[email protected]> writes:\n> I know it's generally requested to include the EXPLAIN text when submitting\n> a specific question, but I thought perhaps this was generic enough that\n> someone might at least have some suggestions.\n\nYou're usually only going to get generic suggestions from a generic\nexplanation.\n\nOne thought here though is that it's only been since PG 8.2 that you had\nany hope of getting an indexscan on a join condition pushed down through\na UNION, which it looks like is what you're hoping for. What version\nare you running?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Apr 2008 19:50:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan excluding index on view " }, { "msg_contents": "Sorry for not including this extra bit originally. Below is the explain\ndetail from both the query to the view that takes longer and then the query\ndirectly to the single table that performs quickly.\n\nHash Join (cost=49082.96..1940745.80 rows=11412 width=76)\n Hash Cond: (outer.?column1? 
= inner.listing_fid)\n -> Append (cost=0.00..1290709.94 rows=18487347 width=252)\n -> Subquery Scan *SELECT* 1 (cost=0.00..1285922.80 rows=18384890\nwidth=251)\n -> Seq Scan on company (cost=0.00..1102073.90 rows=18384890\nwidth=251)\n -> Subquery Scan *SELECT* 2 (cost=0.00..4787.14 rows=102457\nwidth=252)\n -> Seq Scan on school (cost=0.00..3762.57 rows=102457\nwidth=252)\n -> Hash (cost=49042.64..49042.64 rows=16130 width=8)\n -> Bitmap Heap Scan on listing_node_xref xref\n(cost=102.45..49042.64 rows=16130 width=8)\n Recheck Cond: (node_fid = 173204537)\n -> Bitmap Index Scan on idx_listing_node_xref_node_fid\n(cost=0.00..102.45 rows=16130 width=0)\n Index Cond: (node_fid = 173204537)\n\n\nNested Loop (cost=102.45..98564.97 rows=11349 width=517)\n -> Bitmap Heap Scan on listing_node_xref xref (cost=102.45..49042.64\nrows=16130 width=8)\n Recheck Cond: (node_fid = 173204537)\n -> Bitmap Index Scan on idx_listing_node_xref_node_fid\n(cost=0.00..102.45 rows=16130 width=0)\n Index Cond: (node_fid = 173204537)\n -> Index Scan using idx_pki_company_id on company c (cost=0.00..3.06\nrows=1 width=517)\n Index Cond: (c.id = outer.listing_fid)\n\n\nOn Thu, Apr 3, 2008 at 7:19 PM, Tom Lane <[email protected]> wrote:\n\n> \"Matt Klinker\" <[email protected]> writes:\n> > I new I'd forget something! I've tried this on both 8.2 and 8.3 with\n> the\n> > same results.\n>\n> Then you're going to have to provide more details ...\n>\n> regards, tom lane\n>\n\nSorry for not including this extra bit originally.  Below is the explain detail from both the query to the view that takes longer and then the query directly to the single table that performs quickly.Hash Join  (cost=49082.96..1940745.80 rows=11412 width=76)\n\n  Hash Cond: (outer.?column1? = inner.listing_fid)  ->  Append  (cost=0.00..1290709.94 rows=18487347 width=252)        ->  Subquery Scan *SELECT* 1  (cost=0.00..1285922.80 rows=18384890 width=251)              ->  Seq Scan on company  (cost=0.00..1102073.90 rows=18384890 width=251)\n\n        ->  Subquery Scan *SELECT* 2  (cost=0.00..4787.14 rows=102457 width=252)              ->  Seq Scan on school  (cost=0.00..3762.57 rows=102457 width=252)  ->  Hash  (cost=49042.64..49042.64 rows=16130 width=8)\n\n        ->  Bitmap Heap Scan on listing_node_xref xref  (cost=102.45..49042.64 rows=16130 width=8)              Recheck Cond: (node_fid = 173204537)              ->  Bitmap Index Scan on idx_listing_node_xref_node_fid  (cost=0.00..102.45 rows=16130 width=0)\n\n                    Index Cond: (node_fid = 173204537)Nested Loop  (cost=102.45..98564.97 rows=11349 width=517)  ->  Bitmap Heap Scan on listing_node_xref xref  (cost=102.45..49042.64 rows=16130 width=8)\n\n        Recheck Cond: (node_fid = 173204537)        ->  Bitmap Index Scan on idx_listing_node_xref_node_fid  (cost=0.00..102.45 rows=16130 width=0)              Index Cond: (node_fid = 173204537)  ->  Index Scan using idx_pki_company_id on company c  (cost=0.00..3.06 rows=1 width=517)\n\n        Index Cond: (c.id = outer.listing_fid)On Thu, Apr 3, 2008 at 7:19 PM, Tom Lane <[email protected]> wrote:\n\"Matt Klinker\" <[email protected]> writes:\n> I new I'd forget something!  
I've tried this on both 8.2 and 8.3 with the\n> same results.\n\nThen you're going to have to provide more details ...\n\n                        regards, tom lane", "msg_date": "Thu, 3 Apr 2008 22:58:10 -0600", "msg_from": "\"Matt Klinker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan excluding index on view" }, { "msg_contents": "\"Matt Klinker\" <[email protected]> writes:\n> Sorry for not including this extra bit originally. Below is the explain\n> detail from both the query to the view that takes longer and then the query\n> directly to the single table that performs quickly.\n...\n> -> Subquery Scan *SELECT* 1 (cost=0.00..1285922.80 rows=18384890\n> width=251)\n> -> Seq Scan on company (cost=0.00..1102073.90 rows=18384890\n\nThe presence of a Subquery Scan node tells me that either this is a much\nolder PG version than you stated, or there are some interesting details\nto the query that you omitted. Please drop the fan-dance routine and\nshow us a complete reproducible test case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Apr 2008 01:49:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan excluding index on view " }, { "msg_contents": "I'm sorry for the \"fan-dance\", it was not my intention to make it difficult\nbut actually simpler in leaving out the finer details - lesson learned.\nBelow you'll find create scripts for all tables and views invlolved. Also\nI've included the explain text for both queries when ran on the 8.3 database\nwhere what was included before was from 8.1 (I was incorrect in stating I\nhad tried version 8.2, as I thought the 8.1 install was 8.2 - my apologies).\n\n--Table 1 - (Item A) ~18M records\nCREATE TABLE company\n(\n id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),\n \"name\" character varying(65) NOT NULL,\n description character varying(100),\n recordid character varying(10),\n full_address character varying(45),\n street_number character varying(10),\n street_directional character(2),\n street_name character varying(20),\n unit_designator character varying(4),\n unit_number character varying(8),\n city_name character varying(20),\n state_code character(2),\n zip character(5),\n zip_extension character(4),\n phone character varying(10),\n phone_code character(1),\n publish_date character varying(6),\n solicitation_restrictions character(1),\n business_flag character(1),\n latitude character varying(11),\n longitude character varying(11),\n precision_code character(1),\n fips character varying(16),\n is_telco_unique boolean,\n vanity_city_name character varying(20),\n book_number character varying(6),\n web_address character varying(50),\n primary_bdc_flag character(1),\n msa character varying(4),\n is_amex_accepted boolean,\n is_mastercard_accepted boolean,\n is_visa_accepted boolean,\n is_discover_accepted boolean,\n is_diners_accepted boolean,\n is_other_cc_accepted boolean,\n fax character varying(10),\n free_eac character(1),\n hours_of_operation character(1),\n is_spanish_spoken boolean,\n is_french_spoken boolean,\n is_german_spoken boolean,\n is_japanese_spoken boolean,\n is_italian_spoken boolean,\n is_korean_spoken boolean,\n is_chinese_spoken boolean,\n senior_discount_key character(1),\n listing_type_fid bigint,\n CONSTRAINT pk_company_id PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\n\n--Table 2 - (Item B) ~100k records\nCREATE TABLE school\n(\n id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),\n \"name\" character 
varying(65) NOT NULL,\n description character varying(100),\n address1 character varying(100),\n address2 character varying(100),\n city character varying(50),\n state character(2),\n CONSTRAINT pk_school_id PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\n\n--Joined View:\nCREATE OR REPLACE VIEW directory_listing AS\n SELECT school.id, school.name, school.description, 119075291 AS\nlisting_type_fid\n FROM school\nUNION ALL\n SELECT company.id, company.name, company.description, 119074833 AS\nlisting_type_fid\n FROM company;\n\n--Listing-Classification Xref: ~26M records\nCREATE TABLE listing_node_xref\n(\n id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),\n listing_fid bigint NOT NULL,\n node_fid bigint NOT NULL,\n CONSTRAINT pk_listing_node_xref PRIMARY KEY (id),\n CONSTRAINT fk_listing_node_xref_node_fid FOREIGN KEY (node_fid)\n REFERENCES node (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT uqe_listing_node_xref_listing_fid_node_fid UNIQUE (listing_fid,\nnode_fid)\n)\nWITH (OIDS=FALSE);\nALTER TABLE listing_node_xref OWNER TO vml;\n\nCREATE INDEX idx_listing_node_xref_listing_fid\n ON listing_node_xref\n USING btree\n (listing_fid);\n\nCREATE INDEX idx_listing_node_xref_node_fid\n ON listing_node_xref\n USING btree\n (node_fid);\n\nHere is the version of Postgres: PostgreSQL 8.3.1\n\nQuery:\nSELECT l.id, l.name, l.description, l.listing_type_fid\nFROM directory_listing l\n INNER JOIN listing_node_xref xref ON l.id = xref.listing_fid\nWHERE xref.node_fid = 173204537\n\nExplain:\nHash Join (cost=48449.22..1223695.46 rows=11472 width=378)\n Hash Cond: (school.id = xref.listing_fid)\n -> Append (cost=0.00..945319.40 rows=18384970 width=378)\n -> Seq Scan on school (cost=0.00..10.80 rows=80 width=374)\n -> Seq Scan on company (cost=0.00..761458.90 rows=18384890\nwidth=247)\n -> Hash (cost=48246.22..48246.22 rows=16240 width=8)\n -> Bitmap Heap Scan on listing_node_xref xref\n(cost=308.96..48246.22 rows=16240 width=8)\n Recheck Cond: (node_fid = 173204537)\n -> Bitmap Index Scan on idx_listing_node_xref_node_fid\n(cost=0.00..304.90 rows=16240 width=0)\n Index Cond: (node_fid = 173204537)\n\nQuery:\nselect c.*\nfrom company c\n inner join listing_node_xref xref on c.id = xref.listing_fid\nwhere xref.node_fid = 173204537\n\nExplain:\nNested Loop (cost=308.96..205552.40 rows=11471 width=424)\n -> Bitmap Heap Scan on listing_node_xref xref (cost=308.96..48246.22\nrows=16240 width=8)\n Recheck Cond: (node_fid = 173204537)\n -> Bitmap Index Scan on idx_listing_node_xref_node_fid\n(cost=0.00..304.90 rows=16240 width=0)\n Index Cond: (node_fid = 173204537)\n -> Index Scan using pk_company_id on company c (cost=0.00..9.67 rows=1\nwidth=424)\n Index Cond: (c.id = xref.listing_fid)\n\n\n\n\nOn Thu, Apr 3, 2008 at 11:49 PM, Tom Lane <[email protected]> wrote:\n\n> \"Matt Klinker\" <[email protected]> writes:\n> > Sorry for not including this extra bit originally. Below is the explain\n> > detail from both the query to the view that takes longer and then the\n> query\n> > directly to the single table that performs quickly.\n> ...\n> > -> Subquery Scan *SELECT* 1 (cost=0.00..1285922.80\n> rows=18384890\n> > width=251)\n> > -> Seq Scan on company (cost=0.00..1102073.90\n> rows=18384890\n>\n> The presence of a Subquery Scan node tells me that either this is a much\n> older PG version than you stated, or there are some interesting details\n> to the query that you omitted. 
Please drop the fan-dance routine and\n> show us a complete reproducible test case.\n>\n> regards, tom lane\n>\n\nI'm sorry for the \"fan-dance\", it was not my intention to make it difficult but actually simpler in leaving out the finer details - lesson learned.  Below you'll find create scripts for all tables and views invlolved.  Also I've included the explain text for both queries when ran on the 8.3 database where what was included before was from 8.1  (I was incorrect in stating I had tried version 8.2, as I thought the 8.1 install was 8.2 - my apologies).\n--Table 1 - (Item A)  ~18M recordsCREATE TABLE company(  id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),  \"name\" character varying(65) NOT NULL,  description character varying(100),\n  recordid character varying(10),  full_address character varying(45),  street_number character varying(10),  street_directional character(2),  street_name character varying(20),  unit_designator character varying(4),\n  unit_number character varying(8),  city_name character varying(20),  state_code character(2),  zip character(5),  zip_extension character(4),  phone character varying(10),  phone_code character(1),\n  publish_date character varying(6),  solicitation_restrictions character(1),  business_flag character(1),  latitude character varying(11),  longitude character varying(11),  precision_code character(1),\n  fips character varying(16),  is_telco_unique boolean,  vanity_city_name character varying(20),  book_number character varying(6),  web_address character varying(50),  primary_bdc_flag character(1),\n  msa character varying(4),  is_amex_accepted boolean,  is_mastercard_accepted boolean,  is_visa_accepted boolean,  is_discover_accepted boolean,  is_diners_accepted boolean,  is_other_cc_accepted boolean,\n  fax character varying(10),  free_eac character(1),  hours_of_operation character(1),  is_spanish_spoken boolean,  is_french_spoken boolean,  is_german_spoken boolean,  is_japanese_spoken boolean,\n  is_italian_spoken boolean,  is_korean_spoken boolean,  is_chinese_spoken boolean,  senior_discount_key character(1),  listing_type_fid bigint,  CONSTRAINT pk_company_id PRIMARY KEY (id))WITH (OIDS=FALSE);\n--Table 2 - (Item B)  ~100k recordsCREATE TABLE school(  id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),  \"name\" character varying(65) NOT NULL,  description character varying(100),\n  address1 character varying(100),  address2 character varying(100),  city character varying(50),  state character(2),  CONSTRAINT pk_school_id PRIMARY KEY (id))WITH (OIDS=FALSE);--Joined View:\nCREATE OR REPLACE VIEW directory_listing AS  SELECT school.id, school.name, school.description, 119075291 AS listing_type_fid   FROM schoolUNION ALL \n SELECT company.id, company.name, company.description, 119074833 AS listing_type_fid   FROM company;--Listing-Classification  Xref:  ~26M records\nCREATE TABLE listing_node_xref(  id bigint NOT NULL DEFAULT nextval('global_sequence'::regclass),  listing_fid bigint NOT NULL,  node_fid bigint NOT NULL,  CONSTRAINT pk_listing_node_xref PRIMARY KEY (id),\n  CONSTRAINT fk_listing_node_xref_node_fid FOREIGN KEY (node_fid)      REFERENCES node (id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION,  CONSTRAINT uqe_listing_node_xref_listing_fid_node_fid UNIQUE (listing_fid, node_fid)\n)WITH (OIDS=FALSE);ALTER TABLE listing_node_xref OWNER TO vml;CREATE INDEX idx_listing_node_xref_listing_fid  ON listing_node_xref  USING btree  (listing_fid);CREATE INDEX 
idx_listing_node_xref_node_fid\n  ON listing_node_xref  USING btree  (node_fid);Here is the version of Postgres:  PostgreSQL 8.3.1Query:SELECT l.id, l.name, l.description, l.listing_type_fid\nFROM  directory_listing l    INNER JOIN listing_node_xref xref  ON  l.id = xref.listing_fid WHERE xref.node_fid = 173204537Explain:Hash Join  (cost=48449.22..1223695.46 rows=11472 width=378)\n  Hash Cond: (school.id = xref.listing_fid)  ->  Append  (cost=0.00..945319.40 rows=18384970 width=378)        ->  Seq Scan on school  (cost=0.00..10.80 rows=80 width=374)        ->  Seq Scan on company  (cost=0.00..761458.90 rows=18384890 width=247)\n  ->  Hash  (cost=48246.22..48246.22 rows=16240 width=8)        ->  Bitmap Heap Scan on listing_node_xref xref  (cost=308.96..48246.22 rows=16240 width=8)              Recheck Cond: (node_fid = 173204537)\n              ->  Bitmap Index Scan on idx_listing_node_xref_node_fid  (cost=0.00..304.90 rows=16240 width=0)                    Index Cond: (node_fid = 173204537)Query:select c.*from company c    inner join listing_node_xref xref on c.id = xref.listing_fid\nwhere xref.node_fid = 173204537Explain:Nested Loop  (cost=308.96..205552.40 rows=11471 width=424)  ->  Bitmap Heap Scan on listing_node_xref xref  (cost=308.96..48246.22 rows=16240 width=8)        Recheck Cond: (node_fid = 173204537)\n        ->  Bitmap Index Scan on idx_listing_node_xref_node_fid  (cost=0.00..304.90 rows=16240 width=0)              Index Cond: (node_fid = 173204537)  ->  Index Scan using pk_company_id on company c  (cost=0.00..9.67 rows=1 width=424)\n        Index Cond: (c.id = xref.listing_fid)On Thu, Apr 3, 2008 at 11:49 PM, Tom Lane <[email protected]> wrote:\n\"Matt Klinker\" <[email protected]> writes:\n> Sorry for not including this extra bit originally.  Below is the explain\n> detail from both the query to the view that takes longer and then the query\n> directly to the single table that performs quickly.\n...\n>         ->  Subquery Scan *SELECT* 1  (cost=0.00..1285922.80 rows=18384890\n> width=251)\n>               ->  Seq Scan on company  (cost=0.00..1102073.90 rows=18384890\n\nThe presence of a Subquery Scan node tells me that either this is a much\nolder PG version than you stated, or there are some interesting details\nto the query that you omitted.  Please drop the fan-dance routine and\nshow us a complete reproducible test case.\n\n                        regards, tom lane", "msg_date": "Fri, 4 Apr 2008 08:26:50 -0600", "msg_from": "\"Matt Klinker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan excluding index on view" }, { "msg_contents": "\"Matt Klinker\" <[email protected]> writes:\n> --Joined View:\n> CREATE OR REPLACE VIEW directory_listing AS\n> SELECT school.id, school.name, school.description, 119075291 AS\n> listing_type_fid\n> FROM school\n> UNION ALL\n> SELECT company.id, company.name, company.description, 119074833 AS\n> listing_type_fid\n> FROM company;\n\nAh, there's the problem :-(. 
Can you get rid of the constants here?\nThe planner's currently not smart about UNION ALL subqueries unless\ntheir SELECT lists contain just simple column references.\n\n(Yes, fixing that is on the todo list, but don't hold your breath...\nit'll be 8.4 material at the earliest.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Apr 2008 00:20:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan excluding index on view " }, { "msg_contents": "Removing the constants definitely did take care of the issue on 8.3 (still\nsame query plan on 8.1). Thanks for your help in getting this resolved, and\nsorry again for not including all relevant information on my initial request\n\nOn Fri, Apr 4, 2008 at 10:20 PM, Tom Lane <[email protected]> wrote:\n\n> \"Matt Klinker\" <[email protected]> writes:\n> > --Joined View:\n> > CREATE OR REPLACE VIEW directory_listing AS\n> > SELECT school.id, school.name, school.description, 119075291 AS\n> > listing_type_fid\n> > FROM school\n> > UNION ALL\n> > SELECT company.id, company.name, company.description, 119074833 AS\n> > listing_type_fid\n> > FROM company;\n>\n> Ah, there's the problem :-(. Can you get rid of the constants here?\n> The planner's currently not smart about UNION ALL subqueries unless\n> their SELECT lists contain just simple column references.\n>\n> (Yes, fixing that is on the todo list, but don't hold your breath...\n> it'll be 8.4 material at the earliest.)\n>\n> regards, tom lane\n>\n\nRemoving the constants definitely did take care of the issue on 8.3 (still same query plan on 8.1).  Thanks for your help in getting this resolved, and sorry again for not including all relevant information on my initial request\nOn Fri, Apr 4, 2008 at 10:20 PM, Tom Lane <[email protected]> wrote:\n\"Matt Klinker\" <[email protected]> writes:\n> --Joined View:\n> CREATE OR REPLACE VIEW directory_listing AS\n>  SELECT school.id, school.name, school.description, 119075291 AS\n> listing_type_fid\n>    FROM school\n> UNION ALL\n>  SELECT company.id, company.name, company.description, 119074833 AS\n> listing_type_fid\n>    FROM company;\n\nAh, there's the problem :-(.  Can you get rid of the constants here?\nThe planner's currently not smart about UNION ALL subqueries unless\ntheir SELECT lists contain just simple column references.\n\n(Yes, fixing that is on the todo list, but don't hold your breath...\nit'll be 8.4 material at the earliest.)\n\n                        regards, tom lane", "msg_date": "Mon, 7 Apr 2008 09:46:38 -0600", "msg_from": "\"Matt Klinker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SOLVED] Query plan excluding index on view" } ]
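The thread never shows the final view definition Matt ended up with. A sketch of one possible rewrite -- assuming a real listing_type_fid column is added to school (company already has one in its DDL), so that both SELECT lists contain only simple column references as Tom suggests:

-- Hypothetical rewrite, not Matt's actual fix:
ALTER TABLE school ADD COLUMN listing_type_fid bigint NOT NULL DEFAULT 119075291;

DROP VIEW directory_listing;
CREATE VIEW directory_listing AS
 SELECT school.id, school.name, school.description, school.listing_type_fid
   FROM school
UNION ALL
 SELECT company.id, company.name, company.description, company.listing_type_fid
   FROM company;

With no constants in either arm, 8.3 can push the join condition on id down into each branch of the UNION ALL and use the per-table indexes, which matches the improvement Matt reports below.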
[ { "msg_contents": "just wondering if there's a special tweak i can do to force more usage\nof indexes to do BITMAP ands?\n\nI have a table like\n\nA int\nB int\nC int\nD int\nE int\nF int\ng int\n\nwhere A/B/C/D/E are indexes\n\nThere's ~20millions rows in the table.\n\nQuery are something like this.\n\nselect * from table \nwhere A=X\nand B = Y\nand C = Z\nand D = AA\nand E = BB\n\nthe query plan will only pick 2 indexes to do the bitmap.\nI'm not sure how to tweak the config for it to use more indexes.\n\nBox is a celeron 1.7 w/ 768MB ram with shared buffers at 250MB and\neffective cache size 350MB\n\n\n", "msg_date": "Fri, 04 Apr 2008 11:39:43 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Forcing more agressive index scans for BITMAP AND" }, { "msg_contents": "On Fri, 4 Apr 2008, Ow Mun Heng wrote:\n> select * from table\n> where A=X\n> and B = Y\n> and C = Z\n> and D = AA\n> and E = BB\n\nThis may not be the answer you're looking for, but if you create a \nmulti-coloumn index, it should be able to make your query run fast:\n\nCREATE INDEX foo ON table (A, B, C, D, E);\n\nIt'll certainly be faster than building a bitmap for the contents of five \nseparate indexes.\n\nMatthew\n\n-- \n-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-.\n||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||\n|/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/\n' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-'\n", "msg_date": "Fri, 4 Apr 2008 11:40:30 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing more agressive index scans for BITMAP AND" }, { "msg_contents": "\n> On Fri, 4 Apr 2008, Ow Mun Heng wrote:\n>> select * from table\n>> where A=X\n>> and B = Y\n>> and C = Z\n>> and D = AA\n>> and E = BB\n\n\tWith that kind of WHERE condition, Postgres will use a Bitmap Index Scan \nto combine your indices. If, however, postgres notices while looking at \nthe statistics gathered during ANALYZE, that for one of your columns, you \nrequest a value that happens in a large percentage of the rows (like 20%), \nand this value has a rather random distribution, Postgres will not bother \nscanning the index, because it is very likely that all the pages would \ncontain a row satisfying your condition anyway, so the time taken to scan \nthis huge index and mark the bitmap would be lost because it would not \nallow a better selectivity, since all the pages would get selected for \nscan anyway.\n\tI would guess that Postgres uses Bitmap Index Scan only on your columns \nthat have good selectivity (ie. 
lots of different values).\n\n\tSo :\n\n\tIf you use conditions on (a,b) or (a,b,c) or (a,b,c,d) etc, you will \nbenefit GREATLY from a multicolumn index on (a,b,c,d...).\n\tHowever, even if postgres can use some clever tricks, a multicolumn index \non (a,b,c,d) will not be optimal for a condition on (b,c,d) for instance.\n\n\tSo, if you mostly use conditions on a left-anchored subset of \n(a,b,c,d,e), the multicolumn index will be a great tool.\n\tA multicolumn index on (a,b,c,d,e) is always slightly slower than an \nindex on (a) if you only use a condition on (a), but it is immensely \nfaster when you use a multicolumn condition.\n\n\tCan you tell us more about what those columns mean and what you store in \nthem, how many distinct values, etc ?\n", "msg_date": "Fri, 04 Apr 2008 13:02:59 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing more agressive index scans for BITMAP AND" } ]
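PFC's left-anchored point can be sketched concretely (t and a..e are placeholders standing in for the real table and columns from the original post):

CREATE INDEX t_a_b_c_d_e ON t (a, b, c, d, e);

-- Conditions on a left-anchored subset -- (a), (a,b), (a,b,c), ... --
-- can use this index directly:
SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;

-- A condition that skips the leading column gets far less benefit from it
-- and may fall back to another plan:
SELECT * FROM t WHERE b = 2 AND c = 3 AND d = 4;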
[ { "msg_contents": "Hi List;\n\nI'm having some performance issues with a partitioned table. We have a \nVERY large table that we've partitioned by day.\n\nCurrently we have 17 partitions - each partition table contains > \n700million rows.\nOne of the things we need to query is the min date from the master \ntable - we may explore alternatives for this particular query, however \neven if we fix this query I think we have a fundamental issue with the \nuse of indexes (actuallt the non-use) by the planner.\n\nBelow is a sample of the DDL used to create our tables and an explain \nshowing that the planner wants to do a sequential scan on each \npartition. We do have \"constraint_elimination = on\" set in the \npostgresql.conf file.\n\nI tried removing the index from the part_master table and got the same \nresult\n\nLikewise the costs associated with the seq scans seem to be way off \n(yes I've run analyze on the master and all partition tables) - I ran \nthe actual SQL statement below and killed it after about 15min.\n\nThanks in advance for any help, advice, etc...\n\n\n\n\nTables:\n\n------------------------------------------\n-- Master Table\n------------------------------------------\nCREATE TABLE part_master (\n filename character varying(100),\n logdate date,\n ... -- about 50 more columns go here\n\tloghour date,\n url character varying(500),\n\tcustomer character varying(500)\n);\nCREATE INDEX master_logdate ON part_master USING btree (logdate);\n\n------------------------------------------\n-- Partitions:\n------------------------------------------\n\n------------------------------------------\n-- part_20080319\n------------------------------------------\nCREATE TABLE part_20080319 (CONSTRAINT part_20080319_logdate_check\n\tCHECK ((logdate = '2008-03-19'::date))\n)\nINHERITS (part_master);\n\n\nCREATE INDEX idx_part_20080319_customer ON part_20080319 USING btree \n(customer);\nCREATE INDEX idx_part_20080319_logdate ON part_20080319 USING btree \n(logdate);\nCREATE INDEX idx_part_20080319_loghour ON part_20080319 USING btree \n(loghour);\n\n\n------------------------------------------\n-- part_20080320\n------------------------------------------\nCREATE TABLE part_20080320 (CONSTRAINT part_20080320_logdate_check\n\tCHECK ((logdate = '2008-03-20'::date))\n)\nINHERITS (part_master);\n\n\nCREATE INDEX idx_part_20080320_customer ON part_20080320 USING btree \n(customer);\nCREATE INDEX idx_part_20080320_logdate ON part_20080320 USING btree \n(logdate);\nCREATE INDEX idx_part_20080320_loghour ON part_20080320 USING btree \n(loghour);\n\n\n-- And so on, thru part_20080404\n\n\n\n------------------------------------------\n-- explain plan\n------------------------------------------\n\nmyDB=# explain SELECT min(logdate) FROM part_master;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=117070810.10..117070810.11 rows=1 width=4)\n -> Append (cost=0.00..114866502.48 rows=881723048 width=4)\n -> Seq Scan on part_master (cost=0.00..85596244.18 \nrows=679385718 width=4)\n -> Seq Scan on part_20080319 part (cost=0.00..212860.86 \nrows=1674986 width=4)\n -> Seq Scan on part_20080320 part (cost=0.00..1753802.51 \nrows=13782951 width=4)\n -> Seq Scan on part_20080321 part (cost=0.00..2061636.83 \nrows=15881283 width=4)\n -> Seq Scan on part_20080322 part (cost=0.00..1965144.71 \nrows=14936971 width=4)\n -> Seq Scan on part_20080323 part (cost=0.00..1614413.18 \nrows=12345618 width=4)\n -> Seq 
Scan on part_20080324 part (cost=0.00..1926520.22 \nrows=14741022 width=4)\n -> Seq Scan on part_20080325 part (cost=0.00..2356704.22 \nrows=18477622 width=4)\n -> Seq Scan on part_20080326 part (cost=0.00..1889267.71 \nrows=14512171 width=4)\n -> Seq Scan on part_20080327 part (cost=0.00..1622100.34 \nrows=12445034 width=4)\n -> Seq Scan on part_20080328 part (cost=0.00..1711779.49 \nrows=12885749 width=4)\n -> Seq Scan on part_20080329 part (cost=0.00..1568192.94 \nrows=11958394 width=4)\n -> Seq Scan on part_20080330 part (cost=0.00..1521204.64 \nrows=11676564 width=4)\n -> Seq Scan on part_20080331 part (cost=0.00..1587138.77 \nrows=12180377 width=4)\n -> Seq Scan on part_20080401 part (cost=0.00..2324352.82 \nrows=18211382 width=4)\n -> Seq Scan on part_20080402 part (cost=0.00..2891295.04 \nrows=6693804 width=4)\n -> Seq Scan on part_20080403 part (cost=0.00..1707327.48 \nrows=5748348 width=4)\n -> Seq Scan on part_20080404 part (cost=0.00..556516.54 \nrows=4185054 width=4)\n(20 rows)\n\n\n", "msg_date": "Fri, 4 Apr 2008 16:36:07 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned tables - planner wont use indexes" }, { "msg_contents": "\n> I tried removing the index from the part_master table and got the same \n> result\n\n\tSince all the data is in the partitions, the part_master table is empty, \nso the index is not useful for your query.\n\n> myDB=# explain SELECT min(logdate) FROM part_master;\n\n\tProposals :\n\n\t1- Use plpgsql to parse the system catalogs, get the list of partitions, \nand issue a min() query against each\n\t2- Since dates tend to be incrementing, I guess the minimum date must not \nbe changing that often (unless you delete rows) ; therefore if you need \nthat information often I suggest a trigger that updates a separate table \nwhich keeps the min_date (perhaps global or for each client, you choose).\n", "msg_date": "Fri, 11 Apr 2008 12:41:15 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables - planner wont use indexes" } ]
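PFC's second proposal might look something like this (all object names here are made up for illustration): a one-row helper table holding the current minimum logdate, kept up to date by an insert trigger on each partition.

-- Hypothetical sketch, not from the original thread:
CREATE TABLE part_min_logdate (min_logdate date);
-- seed it once with the slow scan
INSERT INTO part_min_logdate SELECT min(logdate) FROM part_master;

CREATE OR REPLACE FUNCTION maintain_min_logdate() RETURNS trigger AS $$
BEGIN
    -- lower the stored minimum whenever an older row is loaded
    UPDATE part_min_logdate
       SET min_logdate = NEW.logdate
     WHERE min_logdate IS NULL OR NEW.logdate < min_logdate;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- one trigger per partition, e.g.:
CREATE TRIGGER part_20080319_min_logdate
    AFTER INSERT ON part_20080319
    FOR EACH ROW EXECUTE PROCEDURE maintain_min_logdate();

-- SELECT min_logdate FROM part_min_logdate;  -- replaces the min() scan

Reading the minimum then becomes trivial, at the cost of a small per-row overhead during loads; dropping an old partition would require recomputing the stored minimum once.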
[ { "msg_contents": "Hi List;\n\nSorry if this is a dupe, my first post never showed up...\n\nI'm having some performance issues with a partitioned table. We have a \nVERY large table that we've partitioned by day.\n\nCurrently we have 17 partitions - each partition table contains > \n700million rows.\nOne of the things we need to query is the min date from the master \ntable - we may explore alternatives for this particular query, however \neven if we fix this query I think we have a fundamental issue with the \nuse of indexes (actuallt the non-use) by the planner.\n\nBelow is a sample of the DDL used to create our tables and an explain \nshowing that the planner wants to do a sequential scan on each \npartition. We do have \"constraint_elimination = on\" set in the \npostgresql.conf file.\n\nI tried removing the index from the part_master table and got the same \nresult\n\nLikewise the costs associated with the seq scans seem to be way off \n(yes I've run analyze on the master and all partition tables) - I ran \nthe actual SQL statement below and killed it after about 15min.\n\nThanks in advance for any help, advice, etc...\n\n\n\n\nTables:\n\n------------------------------------------\n-- Master Table\n------------------------------------------\nCREATE TABLE part_master (\n filename character varying(100),\n logdate date,\n ... -- about 50 more columns go here\n\tloghour date,\n url character varying(500),\n\tcustomer character varying(500)\n);\nCREATE INDEX master_logdate ON part_master USING btree (logdate);\n\n------------------------------------------\n-- Partitions:\n------------------------------------------\n\n------------------------------------------\n-- part_20080319\n------------------------------------------\nCREATE TABLE part_20080319 (CONSTRAINT part_20080319_logdate_check\n\tCHECK ((logdate = '2008-03-19'::date))\n)\nINHERITS (part_master);\n\n\nCREATE INDEX idx_part_20080319_customer ON part_20080319 USING btree \n(customer);\nCREATE INDEX idx_part_20080319_logdate ON part_20080319 USING btree \n(logdate);\nCREATE INDEX idx_part_20080319_loghour ON part_20080319 USING btree \n(loghour);\n\n\n------------------------------------------\n-- part_20080320\n------------------------------------------\nCREATE TABLE part_20080320 (CONSTRAINT part_20080320_logdate_check\n\tCHECK ((logdate = '2008-03-20'::date))\n)\nINHERITS (part_master);\n\n\nCREATE INDEX idx_part_20080320_customer ON part_20080320 USING btree \n(customer);\nCREATE INDEX idx_part_20080320_logdate ON part_20080320 USING btree \n(logdate);\nCREATE INDEX idx_part_20080320_loghour ON part_20080320 USING btree \n(loghour);\n\n\n-- And so on, thru part_20080404\n\n\n\n------------------------------------------\n-- explain plan\n------------------------------------------\n\nmyDB=# explain SELECT min(logdate) FROM part_master;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=117070810.10..117070810.11 rows=1 width=4)\n -> Append (cost=0.00..114866502.48 rows=881723048 width=4)\n -> Seq Scan on part_master (cost=0.00..85596244.18 \nrows=679385718 width=4)\n -> Seq Scan on part_20080319 part (cost=0.00..212860.86 \nrows=1674986 width=4)\n -> Seq Scan on part_20080320 part (cost=0.00..1753802.51 \nrows=13782951 width=4)\n -> Seq Scan on part_20080321 part (cost=0.00..2061636.83 \nrows=15881283 width=4)\n -> Seq Scan on part_20080322 part (cost=0.00..1965144.71 \nrows=14936971 width=4)\n -> Seq Scan on part_20080323 part 
(cost=0.00..1614413.18 \nrows=12345618 width=4)\n -> Seq Scan on part_20080324 part (cost=0.00..1926520.22 \nrows=14741022 width=4)\n -> Seq Scan on part_20080325 part (cost=0.00..2356704.22 \nrows=18477622 width=4)\n -> Seq Scan on part_20080326 part (cost=0.00..1889267.71 \nrows=14512171 width=4)\n -> Seq Scan on part_20080327 part (cost=0.00..1622100.34 \nrows=12445034 width=4)\n -> Seq Scan on part_20080328 part (cost=0.00..1711779.49 \nrows=12885749 width=4)\n -> Seq Scan on part_20080329 part (cost=0.00..1568192.94 \nrows=11958394 width=4)\n -> Seq Scan on part_20080330 part (cost=0.00..1521204.64 \nrows=11676564 width=4)\n -> Seq Scan on part_20080331 part (cost=0.00..1587138.77 \nrows=12180377 width=4)\n -> Seq Scan on part_20080401 part (cost=0.00..2324352.82 \nrows=18211382 width=4)\n -> Seq Scan on part_20080402 part (cost=0.00..2891295.04 \nrows=6693804 width=4)\n -> Seq Scan on part_20080403 part (cost=0.00..1707327.48 \nrows=5748348 width=4)\n -> Seq Scan on part_20080404 part (cost=0.00..556516.54 \nrows=4185054 width=4)\n(20 rows)\n", "msg_date": "Fri, 4 Apr 2008 18:48:25 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned tables - planner wont use indexes" }, { "msg_contents": "kevin kempter wrote:\n> Hi List;\n>\n> Sorry if this is a dupe, my first post never showed up...\n>\n> I'm having some performance issues with a partitioned table. We have a \n> VERY large table that we've partitioned by day.\n>\n\nUnfortunately, that is the defined behavior in this case. From 5.9.6 of \nthe manual:\n\n\"Constraint exclusion only works when the query's WHERE clause contains \nconstants.\"\n\n[Where the constants are of course your partitioning column(s)]\n\n\nThe best way around this depends mostly on what you're up to. You can \nget the min tablename from the catalogs, or you can keep a table of \nactive partitions that your script which drops off old partitions and \ngenerates new ones can keep updated on the oldest/newest partition \ndates. Or some number of other solutions, whatever you find cleanest for \nyour purposes.\n\nPaul\n\n\n", "msg_date": "Fri, 04 Apr 2008 18:07:49 -0700", "msg_from": "paul rivers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables - planner wont use indexes" }, { "msg_contents": "\n\"kevin kempter\" <[email protected]> writes:\n\n> that the planner wants to do a sequential scan on each partition. We do have\n> \"constraint_elimination = on\" set in the postgresql.conf file.\n\n\"constraint_exclusion\" btw.\n\n\n> myDB=# explain SELECT min(logdate) FROM part_master;\n\nEr, yeah. Unfortunately this is just not a kind of query our planner knows how\nto optimize when dealing with a partitioned table... yet. There are several\ndifferent pieces missing to make this work. 
There's some hope some of them\nmight show up for 8.4 but no guarantees.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Sat, 05 Apr 2008 02:26:36 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables - planner wont use indexes" }, { "msg_contents": "kevin kempter wrote:\n> One of the things we need to query is the min date from the master table \n> - we may explore alternatives for this particular query, however even if \n> we fix this query I think we have a fundamental issue with the use of \n> indexes (actuallt the non-use) by the planner.\n\nWe had a similar requirement, so I've been using a function that loops \nover the child tables, and queries for the min date from each. If all \nyou need is the date, you can try a function call. Here is a modified \nversion of what I've been using:\n\nCREATE OR REPLACE function get_min_date() RETURNS DATE as $_$\nDECLARE\n x RECORD;\n min_date DATE;\n min_date_tmp DATE;\n qry TEXT;\nBEGIN\n /* can also test MIN() aggregate, rather than ORDER BY/LIMIT */\n FOR x IN EXECUTE 'select tablename from pg_tables where tablename \nlike ''part_20%''' loop\n qry := 'SELECT logdate FROM '||x.tablename||' ORDER BY logdate \nLIMIT 1';\n EXECUTE qry INTO min_date_tmp;\n IF (min_date IS NULL OR (min_date_tmp IS NOT NULL AND \nmin_date_tmp<min_date)) THEN\n min_date := min_date_tmp;\n END IF;\n END LOOP;\n RETURN min_date;\nEND;\n$_$ language plpgsql immutable;\n", "msg_date": "Mon, 07 Apr 2008 08:52:50 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables - planner wont use indexes" } ]
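If only the date of the oldest partition is needed, Paul's catalog suggestion can be sketched even more cheaply, without touching the data at all -- assuming the part_YYYYMMDD naming convention shown above is kept:

-- Oldest partition by name; the date is encoded in the table name.
SELECT substring(tablename from 6)::date AS min_logdate
  FROM pg_tables
 WHERE tablename ~ '^part_[0-9]{8}$'
 ORDER BY tablename
 LIMIT 1;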
[ { "msg_contents": "just wondering if there's a special tweak i can do to force more usage\nof indexes to do BITMAP ands?\n\nI have a table like\n\nA int\nB int\nC int\nD int\nE int\nF int\ng int\n\nwhere A/B/C/D/E are indexes\n\nThere's ~20millions rows in the table.\n\nQuery are something like this.\n\nselect * from table \nwhere A=X\nand B = Y\nand C = Z\nand D = AA\nand E = BB\n\nthe query plan will only pick 2 indexes to do the bitmap.\nI'm not sure how to tweak the config for it to use more indexes.\n\nBox is a celeron 1.7 w/ 768MB ram with shared buffers at 250MB and\neffective cache size 350MB\n\n\n", "msg_date": "Mon, 07 Apr 2008 16:15:51 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Forcing more agressive index scans for BITMAP AND" }, { "msg_contents": "On Mon, 7 Apr 2008, Ow Mun Heng wrote:\n> just wondering if there's a special tweak i can do to force more usage\n> of indexes to do BITMAP ands?\n\nThere's no need to post this again. You have already had a couple of \nuseful answers.\n\nMatthew\n\n-- \nAll of this sounds mildly turgid and messy and confusing... but what the\nheck. That's what programming's all about, really\n -- Computer Science Lecturer\n", "msg_date": "Mon, 7 Apr 2008 11:50:27 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing more agressive index scans for BITMAP AND" }, { "msg_contents": "\nOn Mon, 2008-04-07 at 11:50 +0100, Matthew wrote:\n> On Mon, 7 Apr 2008, Ow Mun Heng wrote:\n> > just wondering if there's a special tweak i can do to force more usage\n> > of indexes to do BITMAP ands?\n> \n> There's no need to post this again. You have already had a couple of \n> useful answers.\n\nSorry about this. I didn't see any responses(and my own mail) in my\nINBOX (I'm subscribed to the list and should be receiving all the\nmessages) and thus I thought that it didn't go through. 
I didn't check\nthe internet arhives as I do not have internet access at the workplace.\n\nI saw the answers from the list at home though and I'm trying to answer\nthose questions below.\n\nTo answer (based on what I see in pgadmin)\n\nindex A = 378 distinct values\nindex B = 235\nindex C = 53\nindex D = 32\nindex E = 1305\nindex F = 246993 (This is timestamp w/o timezone)\n\n(note that this is just 1 table and there are no joins whatsoever.)\n\nI moved from multicolumn indexes to individual indexes because the\nqueries does not always utilise the same few indexes, some users would\nuse \n\neg: index F, A, B or D,A,E or any other combination.\n\nwith regard to the fact that perhaps a sec scan is much IO efficient,\nthis is true when using index F (timestamp) of > 2 weeks interval, then\nit will ignore the other indexes to be searched but do a filter.\n\n\"Bitmap Heap Scan on dtt (cost=25109.93..30213.85 rows=1 width=264)\"\n\" Recheck Cond: (((A)::text = 'H3'::text) AND (F >= '2008-04-01 00:00:00'::timestamp without time zone) AND (F <= '2008-04-08 00:00:00'::timestamp without time zone))\"\n\" Filter: (((B)::text = ANY (('{P000,000}'::character varying[])::text[])) AND ((C)::text ~~ 'F8.M.Y%'::text))\"\n\" -> BitmapAnd (cost=25109.93..25109.93 rows=1299 width=0)\"\n\" -> Bitmap Index Scan on idx_dtt_A (cost=0.00..986.12 rows=47069 width=0)\"\n\" Index Cond: ((A)::text = 'H3'::text)\"\n\" -> Bitmap Index Scan on idx_dtt_date (cost=0.00..24123.56 rows=1007422 width=0)\"\n\" Index Cond: ((F >= '2008-04-01 00:00:00'::timestamp without time zone) AND (F <= '2008-04-08 00:00:00'::timestamp without time zone))\"\n\n\nChanging the date to query from 3/10 to 4/8\n\n\"Bitmap Heap Scan on dtt (cost=47624.67..59045.32 rows=1 width=264)\"\n\" Recheck Cond: (((A)::text = 'H3'::text) AND ((B)::text = 'MD'::text))\"\n\" Filter: ((F >= '2008-03-10 00:00:00'::timestamp without time zone) AND (F <= '2008-04-08 00:00:00'::timestamp without time zone) AND ((B)::text = ANY (('{P000,000}'::character varying[])::text[])) AND ((C)::text ~~ 'F8.M.Y%'::text))\"\n\" -> BitmapAnd (cost=47624.67..47624.67 rows=2944 width=0)\"\n\" -> Bitmap Index Scan on idx_d_dtt (cost=0.00..986.13 rows=47070 width=0)\"\n\" Index Cond: ((A)::text = 'H3'::text)\"\n\" -> Bitmap Index Scan on idx_dtt_B (cost=0.00..46638.29 rows=2283910 width=0)\"\n\" Index Cond: ((B)::text = 'MD'::text)\"\n\n\nI've seen many explains on my tables and IIRC never seen one in this it will use more than 2 indexes to do the query.\n", "msg_date": "Tue, 08 Apr 2008 13:42:51 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing more agressive index scans for BITMAP AND" }, { "msg_contents": "On Tue, 8 Apr 2008, Ow Mun Heng wrote:\n> I moved from multicolumn indexes to individual indexes because the\n> queries does not always utilise the same few indexes, some users would\n> use\n>\n> eg: index F, A, B or D,A,E or any other combination.\n\nYes, that does make it more tricky, but it still may be best to use \nmulticolumn indexes. You would just need to create an index for each of \nthe combinations that you are likely to use.\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n", "msg_date": "Tue, 8 Apr 2008 12:02:00 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing more agressive index scans for BITMAP AND" } ]
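A sketch of Matthew's per-combination suggestion for the two combinations Ow mentions (A..F are the placeholder column names from the original post; dtt is the table named in the plans):

-- one multicolumn index per frequently used combination
CREATE INDEX idx_dtt_f_a_b ON dtt (f, a, b);
CREATE INDEX idx_dtt_d_a_e ON dtt (d, a, e);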
[ { "msg_contents": "We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8 Intel Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). Data is stored on an EMC² CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm). \nWhen we do random I/O with a small test tool (reading random 8k blocks from big files in 200 threads) on the disk we retrieve data with about 25 MB/s.\n\nFor testing purpose a test set of about 700.000 queries (those were logged during a problem situation) are executed against the database in 180 concurrent threads.\nSome of the queries are very small and fast - other ones read more than 50000 blocks. All queries are selects (using cursors) - there is only read activity on the database.\n\nBy setting tuple_fraction for cursors to 0.0 instead of 0.1 (http://archives.postgresql.org/pgsql-performance/2008-04/msg00018.php) we reduced reads during the test (pg_statio_all_tables):\n- 8.2.7\n reads from disk: 4.395.276, reads from cache: 471.575.925\n- 8.2.7 cursor_tuple_fraction=0.0\n Reads from disk: 3.406.164, reads from cache: 37.924.625\n\nBut the duration of the test was only reduced by 18 % (from 110 minutes to 90 minutes).\n\nWhen running the test with tuple_fraction=0.0 we observe the following on the server:\n- avg read from disk is at 7 MB/s\n- when we start the random I/O tool during the test we again read data with about 25 MB/s from disk (for me it seems that disk isn't the bottleneck)\n- cpu time is divided between idle and iowait - user and system cpu are practically zero\n- there are from 5000 to 10000 context switches per second\n\nI can't see a bottleneck here. Does anyone has an explanation for that behavior?\n\nRegards,\nRobert\n\n", "msg_date": "Mon, 7 Apr 2008 14:16:11 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for bottleneck during load test" }, { "msg_contents": "Hell, Robert wrote:\n> We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8 Intel Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). Data is stored on an EMC² CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm). \n> When we do random I/O with a small test tool (reading random 8k blocks from big files in 200 threads) on the disk we retrieve data with about 25 MB/s.\n\nHow do you test random IO? Do you use this utility:\nhttp://arctic.org/~dean/randomio/ ?\n\nIf not, try using it, with same parameters. It might be that the latency\nis destroying your performance. Do you use NFS or are you accessing the\nstorage as SAN?\n\n", "msg_date": "Mon, 07 Apr 2008 14:37:32 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "I tried different other tools for random IO (including a self written one which does random lseek and read).\r\n\r\nThis tool, started during one of our tests, achieves 2 iops (8k each).\r\nStarted alone I get something about 1,500 iops with an avg latency of 100 ms.\r\n\r\nWe are using SAN (EMC CLARiiON CX 300) - are those ~7 MB/s really our bottleneck?\r\nAny other tuning ideas?\r\n\r\nRegards,\r\nRobert\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ivan Voras\r\nSent: Montag, 07. April 2008 14:38\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Looking for bottleneck during load test\r\n\r\nHell, Robert wrote:\r\n> We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8 Intel Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). 
Data is stored on an EMC² CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm). \r\n> When we do random I/O with a small test tool (reading random 8k blocks from big files in 200 threads) on the disk we retrieve data with about 25 MB/s.\r\n\r\nHow do you test random IO? Do you use this utility:\r\nhttp://arctic.org/~dean/randomio/ ?\r\n\r\nIf not, try using it, with same parameters. It might be that the latency\r\nis destroying your performance. Do you use NFS or are you accessing the\r\nstorage as SAN?\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Mon, 7 Apr 2008 22:12:23 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "In response to \"Hell, Robert\" <[email protected]>:\n\n> I tried different other tools for random IO (including a self written one which does random lseek and read).\n> \n> This tool, started during one of our tests, achieves 2 iops (8k each).\n> Started alone I get something about 1,500 iops with an avg latency of 100 ms.\n> \n> We are using SAN (EMC CLARiiON CX 300) - are those ~7 MB/s really our bottleneck?\n> Any other tuning ideas?\n\nYou know, with all the performance problems people have been bringing up\nwith regard to SANs, I'm putting SAN in the same category as RAID-5 ...\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Ivan Voras\n> Sent: Montag, 07. April 2008 14:38\n> To: [email protected]\n> Subject: Re: [PERFORM] Looking for bottleneck during load test\n> \n> Hell, Robert wrote:\n> > We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8 Intel Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). Data is stored on an EMC² CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm). \n> > When we do random I/O with a small test tool (reading random 8k blocks from big files in 200 threads) on the disk we retrieve data with about 25 MB/s.\n> \n> How do you test random IO? Do you use this utility:\n> http://arctic.org/~dean/randomio/ ?\n> \n> If not, try using it, with same parameters. It might be that the latency\n> is destroying your performance. Do you use NFS or are you accessing the\n> storage as SAN?\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Mon, 7 Apr 2008 16:20:08 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "On Mon, 7 Apr 2008, Bill Moran wrote:\n\n> You know, with all the performance problems people have been bringing up\n> with regard to SANs, I'm putting SAN in the same category as RAID-5 ...\n\nNot really fair, because unlike RAID5 it's at least *possible* to get good \nwrite performance out of a SAN. Just harder than most people think it is.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 7 Apr 2008 16:40:13 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "In response to Greg Smith <[email protected]>:\n\n> On Mon, 7 Apr 2008, Bill Moran wrote:\n> \n> > You know, with all the performance problems people have been bringing up\n> > with regard to SANs, I'm putting SAN in the same category as RAID-5 ...\n> \n> Not really fair, because unlike RAID5 it's at least *possible* to get good \n> write performance out of a SAN. Just harder than most people think it is.\n\n*shrug* \"in theory\" it's possible to get good write performance out of\nRAID-5 as well, with fast enough disks and enough cache ...\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 7 Apr 2008 16:49:54 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "Hell, Robert wrote:\n> I tried different other tools for random IO (including a self written one which does random lseek and read).\n> \n> This tool, started during one of our tests, achieves 2 iops (8k each).\n> Started alone I get something about 1,500 iops with an avg latency of 100 ms.\n\n1500 iops looks about right for 4x2 RAID 10 volume. What's your worst\nlatency (as reported by the tool)? iowait is mostly seek time.\n\n> We are using SAN (EMC CLARiiON CX 300) - are those ~7 MB/s really our bottleneck?\n\nDepending on your access pattern to the database, it could be (if you\nhave lots of random IO, and 180 concurrent database threads can make any\nIO random enough). Are your queries read-mostly or a mix?\n\n> Any other tuning ideas?\n\nOnly generic ones:\n\n- Are your queries optimized, use indexes, etc.?\n- Try PostgreSQL 8.3 - if you have sequential seeks it can in theory\nmake better use of data between connections.\n- Do you have enough memory dedicated to data caches, both in PostgreSQL\nand in the OS? (i.e. 
what is your shared_buffers setting?)\n- If the SAN can configure parameters such as prefetch (pre-read) and\nstripe size, try lowering them (should help if you have random IO).", "msg_date": "Tue, 08 Apr 2008 11:01:59 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "Hell, Robert wrote:\n> > I tried different other tools for random IO (including a self\nwritten one which does random lseek and read).\n> >\n> > This tool, started during one of our tests, achieves 2 iops (8k each).\n> > Started alone I get something about 1,500 iops with an avg latency\nof 100 ms.\n\n1500 iops looks about right for 4x2 RAID 10 volume. What's your worst\nlatency (as reported by the tool)? iowait is mostly seek time.\n\n> > We are using SAN (EMC CLARiiON CX 300) - are those ~7 MB/s really\nour bottleneck?\n\nDepending on your access pattern to the database, it could be (if you\nhave lots of random IO, and 180 concurrent database threads can make any\nIO random enough). Are your queries read-mostly or a mix?\n\n> > Any other tuning ideas?\n\nOnly generic ones:\n\n- Are your queries optimized, use indexes, etc.?\n- Try PostgreSQL 8.3 - if you have sequential seeks it can in theory\nmake better use of data between connections.\n- Do you have enough memory dedicated to data caches, both in PostgreSQL\nand in the OS? (i.e. what is your shared_buffers setting?)\n- If the SAN can configure parameters such as prefetch (pre-read) and\nstripe size, try lowering them (should help if you have random IO).\n\n", "msg_date": "Tue, 08 Apr 2008 11:02:44 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for bottleneck during load test" }, { "msg_contents": "Worst latency was at ~600 ms.\r\n\r\nIn this test case we have only selects - so it's read only.\r\n\r\nI tried 8.3 - it's better there (10-15 %) - but IO rates stay the same.\r\nI use 18 GB shared memory on a machine with 32 GB during the test, I think this shouldn't be a problem.\r\n\r\nI did some further testing and I now really think that disk is the bottleneck here. \r\nI'm really surprised that those 7 MB/s in average are the maximum for that disks.\r\n\r\nAny filesystem (ext3) settings that could help here?\r\n\r\nRegards,\r\nRobert\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ivan Voras\r\nSent: Dienstag, 08. April 2008 11:03\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Looking for bottleneck during load test\r\n\r\nHell, Robert wrote:\r\n> > I tried different other tools for random IO (including a self\r\nwritten one which does random lseek and read).\r\n> >\r\n> > This tool, started during one of our tests, achieves 2 iops (8k each).\r\n> > Started alone I get something about 1,500 iops with an avg latency\r\nof 100 ms.\r\n\r\n1500 iops looks about right for 4x2 RAID 10 volume. What's your worst\r\nlatency (as reported by the tool)? iowait is mostly seek time.\r\n\r\n> > We are using SAN (EMC CLARiiON CX 300) - are those ~7 MB/s really\r\nour bottleneck?\r\n\r\nDepending on your access pattern to the database, it could be (if you\r\nhave lots of random IO, and 180 concurrent database threads can make any\r\nIO random enough). 
Are your queries read-mostly or a mix?\r\n\r\n> > Any other tuning ideas?\r\n\r\nOnly generic ones:\r\n\r\n- Are your queries optimized, use indexes, etc.?\r\n- Try PostgreSQL 8.3 - if you have sequential seeks it can in theory\r\nmake better use of data between connections.\r\n- Do you have enough memory dedicated to data caches, both in PostgreSQL\r\nand in the OS? (i.e. what is your shared_buffers setting?)\r\n- If the SAN can configure parameters such as prefetch (pre-read) and\r\nstripe size, try lowering them (should help if you have random IO).\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Tue, 8 Apr 2008 11:16:14 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for bottleneck during load test" } ]
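The per-table split behind the pg_statio_all_tables totals Robert quotes can be pulled with a query along these lines (a diagnostic sketch; the columns are the standard ones in that view), which helps show which relations are driving the random reads:

SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
  FROM pg_statio_all_tables
 ORDER BY heap_blks_read + coalesce(idx_blks_read, 0) DESC
 LIMIT 20;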
[ { "msg_contents": "Hi folks,\n\nHere is the executive summary:\n * RHEL5 (postgresql 8.1, .conf tweaked for performance [1])\n * 2x Intel E5410 @ 2.33GHz (8 cores), 8GB RAM, 15KRPM SAS disks\n * 4.9 million records in a table (IP address info)\n * composite primary key: primary key(ipFrom, ipTo)\n * ipFrom/ipTo are int8 (see below for full schema info [2])\n * bad performance on queries of the form:\n select * from ipTable where ipFrom <= val and val <= ipTo\n\nPart of the problem is that most of the time PostgreSQL decides to\nuse seq scans on the table, resulting in queries taking many seconds\n(sometimes 3, 7, 20 sec). We did ANALYZE and enabled statistics, and\nthat sometimes fixes the problem temporarily, but overnight (without\nthe database being used), it reverts to seq scans. For example:\n\nperpedes_db=# explain ANALYZE select * from static.ipligenceipaddress where ipfrom <= 2130706433 and 2130706433 <= ipto;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on ipligenceipaddress (cost=0.00..139903.80 rows=1209530 width=145) (actual time=1233.628..2100.891 rows=1 loops=1)\n Filter: ((ipfrom <= 2130706433) AND (2130706433 <= ipto))\n Total runtime: 2100.928 ms\n(3 rows)\n\n\n\nMoreover, even when it is using the index, it is not all that fast:\nperpedes_db=# SET enable_seqscan = off;\nSET\nperpedes_db=# EXPLAIN ANALYZE select * from static.ipligenceipaddress where 3507360727 between ipfrom and ipto;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ipligenceipaddress_pkey on ipligenceipaddress (cost=0.00..148143.67 rows=806199 width=146) (actual time=351.316..351.320 rows=1 loops=1)\n Index Cond: ((3507360727::bigint >= ipfrom) AND (3507360727::bigint <= ipto))\n Total runtime: 351.355 ms\n(3 rows)\n\n\nSo, my questions are:\n * did we miss any obvious settings?\n * why does it decide all of a sudden to do seq scans?\n * adding a \"limit 1\" sometimes causes the query to be even slower, \n when in fact it should have helped the DB to return faster, no?\n * in the ideal case, what execution times should I be expecting?\n Is ~400ms reasonable? I would have hoped this to be <40ms... \n * AFAICT, the (ipFrom, ipTo) intervals should be mutually exclusive,\n so the result should be at most one row. Can this info help the\n DB do a faster query? If so, how can I express that?\n * the DB takes tens of minutes to do an ANALYZE on this table,\n which doesn't happen with the default configuration. 
Any idea\n how I can fix that?\n\nThank you!\n\n====================================================================\n[1] Changes from standard config:\n--- /var/lib/pgsql/data/postgresql.conf.orig 2008-03-21 11:51:45.000000000 -0400\n+++ /var/lib/pgsql/data/postgresql.conf 2008-03-21 21:04:38.000000000 -0400\n@@ -90,19 +90,19 @@\n \n # - Memory -\n \n-shared_buffers = 1000 # min 16 or max_connections*2, 8KB each\n-#temp_buffers = 1000 # min 100, 8KB each\n-#max_prepared_transactions = 5 # can be 0 or more\n+shared_buffers = 50000 # min 16 or max_connections*2, 8KB each\n+temp_buffers = 10000 # min 100, 8KB each\n+max_prepared_transactions = 100 # can be 0 or more\n # note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n # per transaction slot, plus lock space (see max_locks_per_transaction).\n-#work_mem = 1024 # min 64, size in KB\n-#maintenance_work_mem = 16384 # min 1024, size in KB\n+work_mem = 2048 # min 64, size in KB\n+maintenance_work_mem = 131072 # min 1024, size in KB\n #max_stack_depth = 2048 # min 100, size in KB\n \n # - Free Space Map -\n \n-#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n-#max_fsm_relations = 1000 # min 100, ~70 bytes each\n+max_fsm_pages = 200000 # min max_fsm_relations*16, 6 bytes each\n+max_fsm_relations = 10000 # min 100, ~70 bytes each\n \n # - Kernel Resource Usage -\n \n@@ -111,11 +111,11 @@\n \n # - Cost-Based Vacuum Delay -\n \n-#vacuum_cost_delay = 0 # 0-1000 milliseconds\n-#vacuum_cost_page_hit = 1 # 0-10000 credits\n+vacuum_cost_delay = 200 # 0-1000 milliseconds\n+vacuum_cost_page_hit = 6 # 0-10000 credits\n #vacuum_cost_page_miss = 10 # 0-10000 credits\n #vacuum_cost_page_dirty = 20 # 0-10000 credits\n-#vacuum_cost_limit = 200 # 0-10000 credits\n+vacuum_cost_limit = 100 # 0-10000 credits\n \n # - Background writer -\n \n@@ -141,13 +141,13 @@\n # fsync_writethrough\n # open_sync\n #full_page_writes = on # recover from partial page writes\n-#wal_buffers = 8 # min 4, 8KB each\n+wal_buffers = 128 # min 4, 8KB each\n #commit_delay = 0 # range 0-100000, in microseconds\n #commit_siblings = 5 # range 1-1000\n \n # - Checkpoints -\n \n-#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n+checkpoint_segments = 192 # in logfile segments, min 1, 16MB each\n #checkpoint_timeout = 300 # range 30-3600, in seconds\n #checkpoint_warning = 30 # in seconds, 0 is off\n \n@@ -175,12 +175,12 @@\n \n # - Planner Cost Constants -\n \n-#effective_cache_size = 1000 # typically 8KB each\n-#random_page_cost = 4 # units are one sequential page fetch \n+effective_cache_size = 393216 # typically 8KB each\n+random_page_cost = 2 # units are one sequential page fetch \n # cost\n-#cpu_tuple_cost = 0.01 # (same)\n-#cpu_index_tuple_cost = 0.001 # (same)\n-#cpu_operator_cost = 0.0025 # (same)\n+cpu_tuple_cost = 0.002 # (same)\n+cpu_index_tuple_cost = 0.0002 # (same)\n+cpu_operator_cost = 0.0005 # (same)\n \n # - Genetic Query Optimizer -\n \n@@ -329,10 +329,10 @@\n \n # - Query/Index Statistics Collector -\n \n-#stats_start_collector = on\n-#stats_command_string = off\n-#stats_block_level = off\n-#stats_row_level = off\n+stats_start_collector = on\n+stats_command_string = on\n+stats_block_level = on\n+stats_row_level = on\n #stats_reset_on_server_start = off\n \n \n@@ -340,8 +340,8 @@\n # AUTOVACUUM PARAMETERS\n #---------------------------------------------------------------------------\n \n-#autovacuum = off # enable autovacuum subprocess?\n+autovacuum = on # enable autovacuum subprocess?\n #autovacuum_naptime = 60 # time 
between autovacuum runs, in secs\n #autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n # vacuum\n #autovacuum_analyze_threshold = 500 # min # of tuple updates before \n@@ -400,7 +400,7 @@\n #---------------------------------------------------------------------------\n \n #deadlock_timeout = 1000 # in milliseconds\n-#max_locks_per_transaction = 64 # min 10\n+max_locks_per_transaction = 512 # min 10\n # note: each lock table slot uses ~220 bytes of shared memory, and there are\n # max_locks_per_transaction * (max_connections + max_prepared_transactions)\n # lock table slots.\n\n\n[2] Actual schema for the table:\ncreate table ipligenceIpAddress\n(\n ipFrom int8 not null default 0,\n ipTo int8 not null default 0,\n countryCode varchar(10) not null,\n countryName varchar(255) not null,\n continentCode varchar(10) not null,\n continentName varchar(255) not null,\n timeZone varchar(10) not null,\n regionCode varchar(10) not null,\n regionName varchar(255) not null,\n owner varchar(255) not null,\n cityName varchar(255) not null,\n countyName varchar(255) not null,\n latitude float8 not null,\n longitude float8 not null,\n createdTS timestamp with time zone default current_timestamp,\n primary key(ipFrom, ipTo)\n);\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 07 Apr 2008 11:21:59 -0400", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Severe performance problems for simple query" }, { "msg_contents": "On Mon, 7 Apr 2008, Dimi Paun wrote:\n> * bad performance on queries of the form:\n> select * from ipTable where ipFrom <= val and val <= ipTo\n\nThis type of query is very hard for a normal B-tree index to answer. For \nexample, say val is half-way between min and max values. If you have an \nindex on ipFrom, it will be able to restrict the entries to about half of \nthem, which is no real benefit over a sequential scan. Likewise, an index \non ipTo will be able to restrict the entries to half of them, with no \nbenefit. The intersection of these two halves may be just one entry, but \nfinding that out is non-trivial. An index bitmap scan would do it if you \ncan persuade Postgres to do that, but really you want an R-tree index on \nthe two columns, like I have requested in the past.\n\nYou can achieve that to some degree by using Postgres' geometric indexes, \nbut it's ugly. Note that the following is completely untested and may not \nwork with int8 values.\n\nFirstly, you need to create the index. 
The index will contain fake \"boxes\" \nthat stretch from ipFrom to ipTo.\n\nCREATE INDEX index_name ON table_name ((box '((ipFrom, 0), (ipTo, 1))'))\n\nThen, instead of querying simply for fromIp and toIp, query on whether the \nfake box overlaps with a point representing val.\n\nSELECT blah FROM table_name\n WHERE (box '((ipFrom, 0), (ipTo, 2))') @> (point '(val, 1)');\n\nOr alternatively you could adapt the \"seg\" GiST index to int8 values.\n\nHope you get this sorted out - it's something I'll have to do at some \npoint soon too.\n\nMatthew\n\n-- \nI wouldn't be so paranoid if you weren't all out to get me!!\n", "msg_date": "Mon, 7 Apr 2008 17:19:27 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "On Mon, 7 Apr 2008, Dimi Paun wrote:\n> * bad performance on queries of the form:\n> select * from ipTable where ipFrom <= val and val <= ipTo\n\nOh yes, if you can guarantee that no two entries overlap at all, then \nthere is a simpler way. Just create a B-tree index on ipFrom as usual, \nsort by ipFrom, and LIMIT to the first result:\n\nSELECT blah FROM table_name\n WHERE ipFrom <= 42 ORDER BY ipFrom DESC LIMIT 1\n\nThis should run *very* quickly. However, if any entries overlap at all \nthen you will get incorrect results.\n\nMatthew\n\n-- \nI'm always interested when [cold callers] try to flog conservatories.\nAnyone who can actually attach a conservatory to a fourth floor flat\nstands a marginally better than average chance of winning my custom.\n(Seen on Usenet)\n", "msg_date": "Mon, 7 Apr 2008 17:27:57 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "Matthew wrote:\n> On Mon, 7 Apr 2008, Dimi Paun wrote:\n>> * bad performance on queries of the form:\n>> select * from ipTable where ipFrom <= val and val <= ipTo\n> \n> This type of query is very hard for a normal B-tree index to answer. For \n> example, say val is half-way between min and max values. If you have an \n> index on ipFrom, it will be able to restrict the entries to about half \n> of them, which is no real benefit over a sequential scan. Likewise, an \n> index on ipTo will be able to restrict the entries to half of them, with \n> no benefit. The intersection of these two halves may be just one entry, \n> but finding that out is non-trivial. An index bitmap scan would do it if \n> you can persuade Postgres to do that, but really you want an R-tree \n> index on the two columns, like I have requested in the past.\n\nIf I understood the original post correctly, the ipFrom and ipTo columns \nactually split a single linear ip address space into non-overlapping \nchunks. Something like this:\n\nipFrom\tipTo\n1\t10\n10\t20\n20\t50\n50\t60\n...\n\nIn that case, a regular index on (ipFrom, ipTo) should work just fine, \nand that's what he's got. Actually, an index on just ipFrom would \nprobably work just as well. The problem is that the planner doesn't know \nabout that special relationship between ipFrom and ipTo. Perhaps it \ncould be hinted by explicitly specifying \"AND ipTo > ipFrom\" in the query?\n\nI don't know why the single index lookup took > 300ms, though. 
That does \nseem high to me.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 07 Apr 2008 17:32:22 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "\nOn Mon, 2008-04-07 at 17:32 +0100, Heikki Linnakangas wrote:\n> If I understood the original post correctly, the ipFrom and ipTo\n> columns actually split a single linear ip address space into\n> non-overlapping chunks. Something like this:\n> \n> ipFrom ipTo\n> 1 10\n> 10 20\n> 20 50\n> 50 60\n> ...\n> \n\nIndeed.\n\n> In that case, a regular index on (ipFrom, ipTo) should work just fine,\n> and that's what he's got. Actually, an index on just ipFrom would\n> probably work just as well. \n\nNo, it doesn't:\n\nperpedes_db=# CREATE INDEX temp1 ON static.ipligenceipaddress (ipFrom);\nCREATE INDEX\nperpedes_db=# explain ANALYZE select * from static.ipligenceipaddress where ipfrom <= 2130706433 and 2130706433 <= ipto limit 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.07 rows=1 width=145) (actual time=1519.526..1519.527 rows=1 loops=1)\n -> Index Scan using temp1 on ipligenceipaddress (cost=0.00..84796.50 rows=1209308 width=145) (actual time=1519.524..1519.524 rows=1 loops=1)\n Index Cond: (ipfrom <= 2130706433)\n Filter: (2130706433 <= ipto)\n Total runtime: 1519.562 ms\n(5 rows)\n\nThis is huge, I'd say...\n\n> The problem is that the planner doesn't know about that special\n> relationship between ipFrom and ipTo. Perhaps it could be hinted by\n> explicitly specifying \"AND ipTo > ipFrom\" in the query?\n\nUnfortunately, it still does a seq scan:\n\nperpedes_db=# SET enable_seqscan = on;\nSET\nperpedes_db=# explain ANALYZE select * from static.ipligenceipaddress where ipfrom <= 2130706433 and 2130706433 <= ipto AND ipTo > ipFrom limit 1;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.35 rows=1 width=145) (actual time=1245.293..1245.294 rows=1 loops=1)\n -> Seq Scan on ipligenceipaddress (cost=0.00..142343.80 rows=403103 width=145) (actual time=1245.290..1245.290 rows=1 loops=1)\n Filter: ((ipfrom <= 2130706433) AND (2130706433 <= ipto) AND (ipto > ipfrom))\n Total runtime: 1245.335 ms\n(4 rows)\n\n\n> I don't know why the single index lookup took > 300ms, though. That\n> does seem high to me.\n\nThat is my feeling. I would have expected order of magnitude faster\nexecution times, the DB runs on fairly decent hardware...\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 07 Apr 2008 12:41:25 -0400", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "\nOn Mon, 2008-04-07 at 17:27 +0100, Matthew wrote:\n> Oh yes, if you can guarantee that no two entries overlap at all, then \n> there is a simpler way. Just create a B-tree index on ipFrom as usual,\n> sort by ipFrom, and LIMIT to the first result:\n> \n> SELECT blah FROM table_name\n> WHERE ipFrom <= 42 ORDER BY ipFrom DESC LIMIT 1\n> \n> This should run *very* quickly. 
However, if any entries overlap at all\n> then you will get incorrect results.\n\nThanks Matthew, this seems to be indeed a lot faster:\n\nperpedes_db=# CREATE INDEX temp1 ON static.ipligenceipaddress (ipFrom);\nCREATE INDEX\nperpedes_db=# explain ANALYZE select * from static.ipligenceipaddress where ipfrom <= 2130706433 ORDER BY ipFrom DESC LIMIT 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.03 rows=1 width=145) (actual time=0.060..0.060 rows=1 loops=1)\n -> Index Scan Backward using temp1 on ipligenceipaddress (cost=0.00..83453.92 rows=2685155 width=145) (actual time=0.057..0.057 rows=1 loops=1)\n Index Cond: (ipfrom <= 2130706433)\n Total runtime: 0.094 ms\n(4 rows)\n\n\nHowever, it is rather disappointing that the DB can't figure out\nhow to execute such a simple query in a half decent manner (seq scan\non an indexed table for a BETWEEN filter doesn't qualify :)).\n\nMany thanks!\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 07 Apr 2008 12:45:42 -0400", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "On Mon, 7 Apr 2008, Heikki Linnakangas wrote:\n> In that case, a regular index on (ipFrom, ipTo) should work just fine, and \n> that's what he's got. Actually, an index on just ipFrom would probably work \n> just as well. The problem is that the planner doesn't know about that special \n> relationship between ipFrom and ipTo. Perhaps it could be hinted by \n> explicitly specifying \"AND ipTo > ipFrom\" in the query?\n\nActually, the problem is that the database doesn't know that the entries \ndon't overlap. For all it knows, you could have data like this:\n\n0 10\n10 20\n20 30\n... ten million rows later\n100000030 100000040\n100000040 100000050\n0 100000050\n\nSo say you wanted to search for the value of 50,000,000. The index on \nipFrom would select five million rows, all of which then have to be \nfiltered by the constraint on ipTo. Likewise, an index on ipTo would \nreturn five million rows, all of which then have to be filtered by the \nconstraint on ipFrom. If you just read the index and took the closest \nentry to the value, then you would miss out on the last entry which \noverlaps with the whole range. An R-tree on both fields will correctly \nfind the small set of entries that are relevant.\n\nIt would be very cool to be able to create an R-tree index that would just \nmake the original query run fast without needing alteration. I had a look \nat this a while back, but it is not currently possible in GiST, because \nonly one field is handed to the index at a time. So all the current R-tree \nimplementations require that you generate an object containing the two \nvalues, like the box, and then index that.\n\nSomething for 8.4?\n\nMatthew\n\n-- \n$ rm core\nSegmentation Fault (core dumped)\n", "msg_date": "Mon, 7 Apr 2008 18:02:52 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Severe performance problems for simple query" }, { "msg_contents": "* Dimi Paun:\n\n> * 4.9 million records in a table (IP address info)\n\nYou should use the ip4r type for that.\n", "msg_date": "Mon, 07 Apr 2008 20:53:37 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Severe performance problems for simple query" } ]
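A minimal SQL sketch of the two workarounds discussed in the thread above, using the static.ipligenceipaddress table and the probe value 2130706433 from the original post; the index names are illustrative. The LIMIT 1 form assumes the (ipfrom, ipto) ranges never overlap. The geometric form adds USING gist and repeats the exact box expression from the index in the WHERE clause, since an expression index can only be used when the query spells out the same expression.

-- non-overlapping ranges: plain btree plus ORDER BY ... LIMIT 1
CREATE INDEX ipligence_ipfrom_idx ON static.ipligenceipaddress (ipfrom);

SELECT *
  FROM static.ipligenceipaddress
 WHERE ipfrom <= 2130706433
 ORDER BY ipfrom DESC
 LIMIT 1;

-- possibly overlapping ranges: index a flat box per row and test overlap
CREATE INDEX ipligence_range_gist ON static.ipligenceipaddress
 USING gist ((box(point(ipfrom, 0), point(ipto, 1))));

SELECT *
  FROM static.ipligenceipaddress
 WHERE box(point(ipfrom, 0), point(ipto, 1))
       && box(point(2130706433, 0), point(2130706433, 1));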
[ { "msg_contents": "Hi. I am looking for information about using table partitions in Postgres,\nin particular w.r.t. performance when querying a partitioned table.\n\nI implemented table partitioning following the documentation, which\nis quite good and easy to follow (Chapter 5.9). I am doing some\ntesting, so at this point my test dataset has 1M records; in the table,\neach row will have an autoincrement integer index, two numerics,\none integer (DMID), and one smallint.\n\nI created five master tables and created a different number of\npartitions for each one (2, 4, 8, 16, 32). I am using a range partition\nfor the integer, DMID, which represents a file index. The values of\nDMID range from 0 to 180360. I also create and index for DMID.\n\nI don't understand the timing results that I am getting. I got these\ntimes by averaging the results of querying the database from within\na loop in a Perl script:\n\nno. of partitions constraint_exclusion off constraint_exclusion on\n 2 0.597 ms 0.427 ms\n 4 0.653 ms 0.414 ms\n 8 0.673 ms 0.654 ms\n 16 1.068 ms 1.014 ms\n 32 2.301 ms 1.537 ms\n\nI expected that the query time would decrease as the number of\npartitions increases, but that's not what I am seeing. I get better results\n(0.29 ms) if I simply index DMID and don't use the partitions.\n\nWhen I run \"explain analyze\"on a query, the results (with and without\nconstraint_exclusion set) indicate that fewer partitions are being scanned\nwhen constraint_exclusion is set to on.\n\nI am testing table partitioning in Postgres against table partitioning\nusing MySQL. The results for MySQL make sense: more partitions,\nfaster query times.\n\nThe underlying application is a file index. It is expected that groups\nof files in selected ranges of DMID values will be accessed more\noften, but this is not the key implementation issue.\n\nThis is basically a \"write once, read often\" database. We expect that\nthe database will grow to 50M records in a few years, and I thought\nthat using range partitions for the DMID value might decrease the\nquery time.\n\nShould I be using many more partitions? Am I expecting too much in\nterms of performance when using partitions? Do these results point to\nsome obvious implementation error?\n\nThank you for any help/suggestions you can give.\n\nJanet Jacobsen\n\n-- \nJanet Jacobsen\nNERSC Analytics/HPCRD Visualization Group\nLawrence Berkeley National Laboratory\n\n", "msg_date": "Mon, 07 Apr 2008 09:57:34 -0700", "msg_from": "Janet Jacobsen <[email protected]>", "msg_from_op": true, "msg_subject": "performance using table partitions in Postgres 8.2.6" }, { "msg_contents": "A Dilluns 07 Abril 2008, Janet Jacobsen va escriure:\n> no. of partitions constraint_exclusion off constraint_exclusion on\n> 2 0.597 ms 0.427 ms\n> 4 0.653 ms 0.414 ms\n> 8 0.673 ms 0.654 ms\n> 16 1.068 ms 1.014 ms\n> 32 2.301 ms 1.537 ms\n>\n> I expected that the query time would decrease as the number of\n> partitions increases, but that's not what I am seeing. I get better\n> results (0.29 ms) if I simply index DMID and don't use the partitions.\n\nI see really small times here so probably the overhead that partitioning \nimposes isn't worth yet. Maybe with 50M rows it'll help, you could try \nfeeding those 50M tuples and test again.\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ 
DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us immediately.\n====================================================================\n\n\n \n", "msg_date": "Mon, 7 Apr 2008 20:06:14 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance using table partitions in Postgres 8.2.6" } ]
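A minimal sketch of the 8.2-style range partitioning discussed in the thread above. The table and column names are hypothetical, modelled on the description in the first post (an autoincrement id, the integer DMID, two numerics and a smallint); the boundary 22545 is simply 180360 split into eight equal ranges. Each child carries a CHECK constraint on dmid so that constraint exclusion can skip the other children.

CREATE TABLE file_index (
    id    serial,
    dmid  integer NOT NULL,
    x     numeric,
    y     numeric,
    flag  smallint
);

CREATE TABLE file_index_p0 (
    CHECK (dmid >= 0 AND dmid < 22545)
) INHERITS (file_index);

CREATE INDEX file_index_p0_dmid ON file_index_p0 (dmid);

-- repeat for the remaining ranges, then verify which children are scanned:
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM file_index WHERE dmid = 12345;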
[ { "msg_contents": "Hi\nI have written a program that imputes(or rather corrects data) with in\nmy database.\nIam using a temporary table where in i put data from other partitoined\ntable. I then query this table to get the desired data.But the thing\nis this temporary table has to be craeted for every record that i need\nto correct and there are thousands of such records that need to be\ncorrected.\nSo the program necessarily creates a temporary table evrytime it has\nto correct a record. However this table is dropeed after each record\nis corrected.\nThe program works fine.....but it runs for a very long time....or it\nruns for days.\nIam particularyly finding that it takes more time during this statement:\n\nNOTICE: theQuery in createtablevolumelaneshist CREATE TEMPORARY TABLE\npredictiontable(lane_id, measurement_start, speed,volume,occupancy) AS\nSELECT lane_id, measurement_start, speed,volume,occupancy\nFROM samantha.lane_data_I_495 WHERE\nlane_id IN (1317) AND\nmeasurement_start BETWEEN '2007-11-18 09:25:00' AND 2007-11-19 01:39:06'\n\nIam not sure if i can use a cursor to replicate the functionality of\nthe temp table. Is the performance bad because of the creation and\ndeletion of the temp table?\n\n\nThanks\nSamantha\n", "msg_date": "Mon, 7 Apr 2008 14:27:21 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance with temporary table" }, { "msg_contents": "samantha mahindrakar wrote:\n> Iam using a temporary table where in i put data from other partitoined\n> table. I then query this table to get the desired data.But the thing\n> is this temporary table has to be craeted for every record that i need\n> to correct and there are thousands of such records that need to be\n> corrected.\n> So the program necessarily creates a temporary table evrytime it has\n> to correct a record. However this table is dropeed after each record\n> is corrected.\n\nWhy?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 08 Apr 2008 16:50:43 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "samantha mahindrakar escribi�:\n\n> So the program necessarily creates a temporary table evrytime it has\n> to correct a record. However this table is dropeed after each record\n> is corrected.\n\nPerhaps it would be better to truncate the temp table instead.\n\n> Iam not sure if i can use a cursor to replicate the functionality of\n> the temp table. Is the performance bad because of the creation and\n> deletion of the temp table?\n\nYes -- if you create/drop thousands of temp tables (or create/drop the\nsame temp table thousands of time), the resulting catalog bloat is\nlikely to hinder performance. Perhaps autovacuum should be at work here\n(and if not you can solve the issue with manual vacuums to the system\ncatalogs), but even then it is at best unnecessary.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 8 Apr 2008 12:17:45 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "Well instead of creating a temp table everytime i just created a\npermanant table and insert the data into it everytime and truncate it.\nI created indexes on this permanent table too. 
This did improve the\nperformance to some extent.\n\nDoes using permanant tables also bloat the catalog or hinder the performance?\n\nThanks\nSamantha\n\nOn 4/8/08, Alvaro Herrera <[email protected]> wrote:\n> samantha mahindrakar escribió:\n>\n> > So the program necessarily creates a temporary table evrytime it has\n> > to correct a record. However this table is dropeed after each record\n> > is corrected.\n>\n> Perhaps it would be better to truncate the temp table instead.\n>\n> > Iam not sure if i can use a cursor to replicate the functionality of\n> > the temp table. Is the performance bad because of the creation and\n> > deletion of the temp table?\n>\n> Yes -- if you create/drop thousands of temp tables (or create/drop the\n> same temp table thousands of time), the resulting catalog bloat is\n> likely to hinder performance. Perhaps autovacuum should be at work here\n> (and if not you can solve the issue with manual vacuums to the system\n> catalogs), but even then it is at best unnecessary.\n>\n> --\n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n>\n", "msg_date": "Tue, 8 Apr 2008 15:28:03 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "samantha mahindrakar escribi�:\n> Well instead of creating a temp table everytime i just created a\n> permanant table and insert the data into it everytime and truncate it.\n> I created indexes on this permanent table too. This did improve the\n> performance to some extent.\n> \n> Does using permanant tables also bloat the catalog or hinder the performance?\n\nIn terms of catalog usage, permanent tables behave exactly the same as\ntemp tables.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 8 Apr 2008 15:43:10 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "On Apr 7, 8:27 pm, [email protected] (\"samantha mahindrakar\")\nwrote:\n> Hi\n> I have written a program that imputes(or rather corrects data) with in\n> my database.\n> Iam using a temporary table where in i put data from other partitoined\n> table. I then query this table to get the desired data.But the thing\n> is this temporary table has to be craeted for every record that i need\n> to correct and there are thousands of such records that need to be\n> corrected.\n> So the program necessarily creates a temporary table evrytime it has\n> to correct a record. However this table is dropeed after each record\n> is corrected.\n> The program works fine.....but it runs for a very long time....or it\n> runs for days.\n> Iam particularyly finding that it takes more time during this statement:\n>\n> NOTICE: theQuery in createtablevolumelaneshist CREATE TEMPORARY TABLE\n> predictiontable(lane_id, measurement_start, speed,volume,occupancy) AS\n> SELECT lane_id, measurement_start, speed,volume,occupancy\n> FROM samantha.lane_data_I_495 WHERE\n> lane_id IN (1317) AND\n> measurement_start BETWEEN '2007-11-18 09:25:00' AND 2007-11-19 01:39:06'\n>\n> Iam not sure if i can use a cursor to replicate the functionality of\n> the temp table. 
Is the performance bad because of the creation and\n> deletion of the temp table?\n>\n> Thanks\n> Samantha\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nAnd why do you copy data from the partition tables? Did you try to\nmanipulate data directly in the needed tables? Or you are aggregating\nsome of the data there? How the partitioning is actually designed? Do\nyou use table inheritance?\n\n-- Valentine\n", "msg_date": "Wed, 9 Apr 2008 03:44:19 -0700 (PDT)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "On Apr 8, 2008, at 2:43 PM, Alvaro Herrera wrote:\n> samantha mahindrakar escribi�:\n>> Well instead of creating a temp table everytime i just created a\n>> permanant table and insert the data into it everytime and truncate \n>> it.\n>> I created indexes on this permanent table too. This did improve the\n>> performance to some extent.\n>>\n>> Does using permanant tables also bloat the catalog or hinder the \n>> performance?\n>\n> In terms of catalog usage, permanent tables behave exactly the same as\n> temp tables.\n\nTrue, but the point is that you're not bloating the catalogs with \nthousands of temp table entries.\n\nI agree with others though: it certainly doesn't sound like there's \nany reason to be using temp tables here at all. This sounds like a \ncase of trying to apply procedural programming techniques to a \ndatabase instead of using set theory (which generally doesn't work \nwell).\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 9 Apr 2008 15:09:45 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "The partitions are used to separate the data according to months. I\nhave run a query o find bad data from each such partition. The\nimputation algorithm that i use requires data from 10 previous weeks\nin order to impute the data. This historical data i store in a\ntemporary table, the i query this data so that i can take a average of\nall the historical data. Before taking average some computations are\nperformed. Since i need the historical data for every minute of data\nthat i need to impute i have to store the data in some intermediate\ntable. Hence the temporary table.\nNow i changed the code to use a permanent table that is truncated\nafter one set of data is imputed.\nI hope this makes sense.\n\n\nSamantha\n\nOn Wed, Apr 9, 2008 at 6:44 AM, valgog <[email protected]> wrote:\n> On Apr 7, 8:27 pm, [email protected] (\"samantha mahindrakar\")\n> wrote:\n>\n>\n> > Hi\n> > I have written a program that imputes(or rather corrects data) with in\n> > my database.\n> > Iam using a temporary table where in i put data from other partitoined\n> > table. I then query this table to get the desired data.But the thing\n> > is this temporary table has to be craeted for every record that i need\n> > to correct and there are thousands of such records that need to be\n> > corrected.\n> > So the program necessarily creates a temporary table evrytime it has\n> > to correct a record. 
However this table is dropeed after each record\n> > is corrected.\n> > The program works fine.....but it runs for a very long time....or it\n> > runs for days.\n> > Iam particularyly finding that it takes more time during this statement:\n> >\n> > NOTICE: theQuery in createtablevolumelaneshist CREATE TEMPORARY TABLE\n> > predictiontable(lane_id, measurement_start, speed,volume,occupancy) AS\n> > SELECT lane_id, measurement_start, speed,volume,occupancy\n> > FROM samantha.lane_data_I_495 WHERE\n> > lane_id IN (1317) AND\n> > measurement_start BETWEEN '2007-11-18 09:25:00' AND 2007-11-19 01:39:06'\n> >\n> > Iam not sure if i can use a cursor to replicate the functionality of\n> > the temp table. Is the performance bad because of the creation and\n> > deletion of the temp table?\n> >\n> > Thanks\n> > Samantha\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n>\n> And why do you copy data from the partition tables? Did you try to\n> manipulate data directly in the needed tables? Or you are aggregating\n> some of the data there? How the partitioning is actually designed? Do\n> you use table inheritance?\n>\n> -- Valentine\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 9 Apr 2008 19:33:46 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "Hi\nThe reason for using the temporary table is that i need this data\nbuffered somewhere so that i can use it for later computation. And the\nfact that for each imputation i need to have historical data from 10\nprevious weeks makes it necessary to create something that can hold\nthe data. However once the computation is done for each record i\nwouldn't need that historical data for that record. I Would be moving\non to the next record and find its own historical data.\nIs there any way i can avoid using temp table?\n\nSamantha\n\nOn Wed, Apr 9, 2008 at 4:09 PM, Decibel! <[email protected]> wrote:\n> On Apr 8, 2008, at 2:43 PM, Alvaro Herrera wrote:\n>\n> > samantha mahindrakar escribió:\n> >\n> > > Well instead of creating a temp table everytime i just created a\n> > > permanant table and insert the data into it everytime and truncate it.\n> > > I created indexes on this permanent table too. This did improve the\n> > > performance to some extent.\n> > >\n> > > Does using permanant tables also bloat the catalog or hinder the\n> performance?\n> > >\n> >\n> > In terms of catalog usage, permanent tables behave exactly the same as\n> > temp tables.\n> >\n>\n> True, but the point is that you're not bloating the catalogs with thousands\n> of temp table entries.\n>\n> I agree with others though: it certainly doesn't sound like there's any\n> reason to be using temp tables here at all. This sounds like a case of\n> trying to apply procedural programming techniques to a database instead of\n> using set theory (which generally doesn't work well).\n> --\n> Decibel!, aka Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! 
www.distributed.net Team #1828\n>\n>\n>\n", "msg_date": "Wed, 9 Apr 2008 19:41:18 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "\nOn Apr 9, 2008, at 6:41 PM, samantha mahindrakar wrote:\n> Hi\n> The reason for using the temporary table is that i need this data\n> buffered somewhere so that i can use it for later computation. And the\n> fact that for each imputation i need to have historical data from 10\n> previous weeks makes it necessary to create something that can hold\n> the data. However once the computation is done for each record i\n> wouldn't need that historical data for that record. I Would be moving\n> on to the next record and find its own historical data.\n> Is there any way i can avoid using temp table?\n\nWhat's wrong with the data in the paritions?\n\nErik Jones\n\nDBA | Emma®\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Wed, 9 Apr 2008 21:31:54 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "We store traffic data in the partitioned tables. But the problem is\nthat all this data is not correct. The data is corrupt, hence they\nneed to be corrected.\n\nOn Wed, Apr 9, 2008 at 10:31 PM, Erik Jones <[email protected]> wrote:\n>\n> On Apr 9, 2008, at 6:41 PM, samantha mahindrakar wrote:\n>\n> > Hi\n> > The reason for using the temporary table is that i need this data\n> > buffered somewhere so that i can use it for later computation. And the\n> > fact that for each imputation i need to have historical data from 10\n> > previous weeks makes it necessary to create something that can hold\n> > the data. However once the computation is done for each record i\n> > wouldn't need that historical data for that record. I Would be moving\n> > on to the next record and find its own historical data.\n> > Is there any way i can avoid using temp table?\n> >\n>\n> What's wrong with the data in the paritions?\n>\n> Erik Jones\n>\n> DBA | Emma(R)\n> [email protected]\n> 800.595.4401 or 615.292.5888\n> 615.292.0777 (fax)\n>\n> Emma helps organizations everywhere communicate & market in style.\n> Visit us online at http://www.myemma.com\n>\n>\n>\n>\n", "msg_date": "Wed, 9 Apr 2008 22:43:17 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance with temporary table" }, { "msg_contents": "\nI see, I am having practically the same problem... utilizing\npartitioning idea http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html\nby table inheritance.\n\nI have prepared a post with some trigger and rule examples for you\nhttp://valgogtech.blogspot.com/2008/04/table-partitioning-automation-triggers.html\n. So I hope you will find it useful if you are not doing it already\nyourself :-).\n\nAbout the use of the temporary table, I would say, that you actually\ncould try to add some special row status flag colum (I use \"char\" for\nsuch flags) to your partitioned tables to mark some rows as unused and\nthen create some conditional indexes that consider this flag for your\ndata operation... 
This would make it possible for you not to creating\ntemporary tables I hope...\n\nWith best regards,\n\n-- Valentine\n\nOn Apr 10, 1:33 am, [email protected] (\"samantha mahindrakar\")\nwrote:\n> The partitions are used to separate the data according to months. I\n> have run a query o find bad data from each such partition. The\n> imputation algorithm that i use requires data from 10 previous weeks\n> in order to impute the data. This historical data i store in a\n> temporary table, the i query this data so that i can take a average of\n> all the historical data. Before taking average some computations are\n> performed. Since i need the historical data for every minute of data\n> that i need to impute i have to store the data in some intermediate\n> table. Hence the temporary table.\n> Now i changed the code to use a permanent table that is truncated\n> after one set of data is imputed.\n> I hope this makes sense.\n>\n> Samantha\n>\n>\n> On Wed, Apr 9, 2008 at 6:44 AM, valgog <[email protected]> wrote:\n> > On Apr 7, 8:27 pm, [email protected] (\"samantha mahindrakar\")\n> > wrote:\n>\n> > > Hi\n> > > I have written a program that imputes(or rather corrects data) with in\n> > > my database.\n> > > Iam using a temporary table where in i put data from other partitoined\n> > > table. I then query this table to get the desired data.But the thing\n> > > is this temporary table has to be craeted for every record that i need\n> > > to correct and there are thousands of such records that need to be\n> > > corrected.\n> > > So the program necessarily creates a temporary table evrytime it has\n> > > to correct a record. However this table is dropeed after each record\n> > > is corrected.\n> > > The program works fine.....but it runs for a very long time....or it\n> > > runs for days.\n> > > Iam particularyly finding that it takes more time during this statement:\n>\n> > > NOTICE: theQuery in createtablevolumelaneshist CREATE TEMPORARY TABLE\n> > > predictiontable(lane_id, measurement_start, speed,volume,occupancy) AS\n> > > SELECT lane_id, measurement_start, speed,volume,occupancy\n> > > FROM samantha.lane_data_I_495 WHERE\n> > > lane_id IN (1317) AND\n> > > measurement_start BETWEEN '2007-11-18 09:25:00' AND 2007-11-19 01:39:06'\n>\n> > > Iam not sure if i can use a cursor to replicate the functionality of\n> > > the temp table. Is the performance bad because of the creation and\n> > > deletion of the temp table?\n>\n> > > Thanks\n> > > Samantha\n>\n> > > --\n> > > Sent via pgsql-performance mailing list ([email protected])\n> > > To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n>\n> > And why do you copy data from the partition tables? Did you try to\n> > manipulate data directly in the needed tables? Or you are aggregating\n> > some of the data there? How the partitioning is actually designed? Do\n> > you use table inheritance?\n>\n> > -- Valentine\n>\n\n", "msg_date": "Thu, 10 Apr 2008 07:58:48 -0700 (PDT)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with temporary table" } ]
[ { "msg_contents": "The http://www.1-800-save-a-pet.com site is hosted with FreeBSD and\nPostgreSQL and as a geo-spatial search as a central feature.\n\nOne thing that made a substantial performance improvement was switching\nfrom the \"geo_distance()\" search in the earthdistance contrib, to use\nthe \"cube\" based geo-spatial calculations, also in available in contrib/\nIn our case, the slight loss of precision between the two methods didn't\nmatter.\n\nThe other things that made a noticeable performance improvement was\nupgrading our servers from FreeBSD 4.x or 5.x (old, I know!) to FreeBSD\n6.2 or later. We also upgrade these systems from PostgreSQL 8.1 to 8.2\nat the same time. I assume the upgrade to 8.2 must be responsible at\nleast in part for the performance gains.\n\nThe result of these two rounds of updates is that our overall CPU\ncapacity in the cluster seems to be double or triple what it was before.\n\nAs the site grows we continue to be very happy with the performance,\nfeatures and stability of PostgreSQL.\n\n Mark\n\n", "msg_date": "Mon, 07 Apr 2008 14:35:14 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "what worked: performance improvements for geo-spatial searching on\n\tFreeBSD" }, { "msg_contents": "Mark,\n\ndo you know about our sky segmentation code Q3C,\nsee details http://www.sai.msu.su/~megera/wiki/SkyPixelization\nWe use it for billions objects in database and quite happy.\n\nOleg\n\nOn Mon, 7 Apr 2008, Mark Stosberg wrote:\n\n> The http://www.1-800-save-a-pet.com site is hosted with FreeBSD and\n> PostgreSQL and as a geo-spatial search as a central feature.\n>\n> One thing that made a substantial performance improvement was switching\n> from the \"geo_distance()\" search in the earthdistance contrib, to use\n> the \"cube\" based geo-spatial calculations, also in available in contrib/\n> In our case, the slight loss of precision between the two methods didn't\n> matter.\n>\n> The other things that made a noticeable performance improvement was\n> upgrading our servers from FreeBSD 4.x or 5.x (old, I know!) to FreeBSD\n> 6.2 or later. We also upgrade these systems from PostgreSQL 8.1 to 8.2\n> at the same time. I assume the upgrade to 8.2 must be responsible at\n> least in part for the performance gains.\n>\n> The result of these two rounds of updates is that our overall CPU\n> capacity in the cluster seems to be double or triple what it was before.\n>\n> As the site grows we continue to be very happy with the performance,\n> features and stability of PostgreSQL.\n>\n> Mark\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 7 Apr 2008 22:47:58 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what worked: performance improvements for geo-spatial\n\tsearching on FreeBSD" }, { "msg_contents": "Oleg Bartunov wrote:\n> Mark,\n> \n> do you know about our sky segmentation code Q3C,\n> see details http://www.sai.msu.su/~megera/wiki/SkyPixelization\n> We use it for billions objects in database and quite happy.\n\nOleg,\n\nThanks for the response. 
That sounds interesting, but it's not clear to \nme how I would put together a geo-spatial search calculating distances \naround the curvature of the earth using this technique. Is there is a \nSQL sample for this that you could point to?\n\nAlso, I didn't recognize the names of the techniques you were \nbenchmarking against \"RADEC\" and \"Rtree\", are either of these related to \nthe \"earthdistance\" or \"cube()\" based searches I would have used already?\n\n Mark\n\n", "msg_date": "Tue, 08 Apr 2008 09:45:07 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what worked: performance improvements for geo-spatial searching\n\ton FreeBSD" }, { "msg_contents": "On Tue, 8 Apr 2008, Mark Stosberg wrote:\n\n> Oleg Bartunov wrote:\n>> Mark,\n>> \n>> do you know about our sky segmentation code Q3C,\n>> see details http://www.sai.msu.su/~megera/wiki/SkyPixelization\n>> We use it for billions objects in database and quite happy.\n>\n> Oleg,\n>\n> Thanks for the response. That sounds interesting, but it's not clear to me \n> how I would put together a geo-spatial search calculating distances around \n> the curvature of the earth using this technique. Is there is a SQL sample for \n> this that you could point to?\n\nit's not about calculating distances, but about searching objects around\ngiven point.\n\n>\n> Also, I didn't recognize the names of the techniques you were benchmarking \n> against \"RADEC\" and \"Rtree\", are either of these related to the \n> \"earthdistance\" or \"cube()\" based searches I would have used already?\n\nRtree is a standard spatial tree, RADEC - is naive approach of \nhaving two indexes, one on ra (right ascension) and another - on dec (declination).\nBoth are an astronomical coordinates.\n\n>\n> Mark\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Tue, 8 Apr 2008 21:38:56 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: what worked: performance improvements for geo-spatial\n\tsearching on FreeBSD" } ]
[ { "msg_contents": "\nWhen traffic to our PostgreSQL-backed website spikes, the first resource\nwe see being exhausted is the DB slots on the master server (currently\nset to about 400).\n\nI expect that as new Apache/mod_perl children are being put to us, they\nare creating new database connections.\n\nI'm interested in recommendations to funnel more of that traffic through\n fewer DB slots, if that's possible. (We could also consider increasing\nthe handles available, since the DB server has some CPU and memory to\nspare).\n\nI'm particularly interested in review of DBD::Gofer, which seems like it\nwould help with this in our Perl application:\nhttp://search.cpan.org/dist/DBI/lib/DBD/Gofer.pm\n\nI realize it has limitations, like \"no transactions\", but I think we\nwould still able to use it selectively in our application.\n\n Mark\n\n", "msg_date": "Mon, 07 Apr 2008 14:36:00 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "recommendations for web/db connection pooling or DBD::Gofer reviews" }, { "msg_contents": "On Mon, 07 Apr 2008 14:36:00 -0400\nMark Stosberg <[email protected]> wrote:\n\n \n> I'm particularly interested in review of DBD::Gofer, which seems like\n> it would help with this in our Perl application:\n> http://search.cpan.org/dist/DBI/lib/DBD/Gofer.pm\n> \n> I realize it has limitations, like \"no transactions\", but I think we\n> would still able to use it selectively in our application.\n\nI would stick to proven postgresql technologies such as pgbouncer.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Mark\n> \n> \n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 7 Apr 2008 11:44:46 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: recommendations for web/db connection pooling or\n\tDBD::Gofer reviews" }, { "msg_contents": "\n> I would stick to proven postgresql technologies such as pgbouncer.\n\nThanks for the fast recommendation, Joshua. I'll consider it.\n\nOur application is Slony-replicated web/db project with two slaves.\n\nDoes this design seem sensible?\n\n- Run one pgbouncer server on the master, with settings to\n service the master and both slaves.\n\n- We already handle balancing traffic between the slaves separately, so \nthat can remain unchanged.\n\n- Use Session Pooling both both the masters and the slaves. In theory, \nthe slaves should just be doing transaction-less SELECT statements, so a \nmore aggressive setting might be possible, but I believe there might be \na \"leak\" in the logic where we create a temporary table on the slave in \none case.\n\n- Redirect all application connections through pgbouncer\n\n###\n\n From graphs we keep, we can see that the slaves currently use a max of \nabout 64 connections...they are far from maxing out what's possible. So \nI was trying to think through if made sense to bother using the \npgBouncer layer with them. 
I through of two potential reasons to still \nuse it:\n - In the event of a major traffic spike on the web servers, pgbouncer \nwould keep the number of db slots under control.\n - Potentially there's a performance gain in having PgBouncer hold the \nconnections open.\n\nDoes that analysis seem correct?\n\nFor the master's pool size, I thought I would just choose a number \nthat's a little larger that the daily max number of DB slots in use.\n\n Mark\n\n", "msg_date": "Mon, 07 Apr 2008 15:50:19 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "per-review of PgBouncer / Slony design " }, { "msg_contents": "\n> When traffic to our PostgreSQL-backed website spikes, the first resource\n> we see being exhausted is the DB slots on the master server (currently\n> set to about 400).\n>\n> I expect that as new Apache/mod_perl children are being put to us, they\n> are creating new database connections.\n>\n> I'm interested in recommendations to funnel more of that traffic through\n> fewer DB slots, if that's possible. (We could also consider increasing\n> the handles available, since the DB server has some CPU and memory to\n> spare).\n>\n> I'm particularly interested in review of DBD::Gofer, which seems like it\n> would help with this in our Perl application:\n> http://search.cpan.org/dist/DBI/lib/DBD/Gofer.pm\n>\n> I realize it has limitations, like \"no transactions\", but I think we\n> would still able to use it selectively in our application.\n\n\tUnder heavy load, Apache has the usual failure mode of spawning so many \nthreads/processes and database connections that it just exhausts all the \nmemory on the webserver and also kills the database.\n\tAs usual, I would use lighttpd as a frontend (also serving static files) \nto handle the large number of concurrent connections to clients, and then \nhave it funnel this to a reasonable number of perl backends, something \nlike 10-30. I don't know if fastcgi works with perl, but with PHP it \ncertainly works very well. If you can't use fastcgi, use lighttpd as a \nHTTP proxy and apache with mod_perl behind.\n\tRecipe for good handling of heavy load is using an asynchronous server \n(which by design can handle any number of concurrent connections up to the \nOS' limit) in front of a small number of dynamic webpage generating \nthreads/processes.\n", "msg_date": "Wed, 09 Apr 2008 00:40:16 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: recommendations for web/db connection pooling or DBD::Gofer\n\treviews" }, { "msg_contents": "\n> Under heavy load, Apache has the usual failure mode of spawning so \n> many threads/processes and database connections that it just exhausts \n> all the memory on the webserver and also kills the database.\n> As usual, I would use lighttpd as a frontend (also serving static \n> files) to handle the large number of concurrent connections to clients, \n> and then have it funnel this to a reasonable number of perl backends, \n> something like 10-30. I don't know if fastcgi works with perl, but with \n> PHP it certainly works very well. 
If you can't use fastcgi, use lighttpd \n> as a HTTP proxy and apache with mod_perl behind.\n> Recipe for good handling of heavy load is using an asynchronous \n> server (which by design can handle any number of concurrent connections \n> up to the OS' limit) in front of a small number of dynamic webpage \n> generating threads/processes.\n\nThanks for the response.\n\nTo be clear, it sounds like you are advocating solving the problem with \nscaling the number of connections with a different approach, by limiting \nthe number of web server processes.\n\nSo, the front-end proxy would have a number of max connections, say 200, \n and it would connect to another httpd/mod_perl server behind with a \nlower number of connections, say 20. If the backend httpd server was \nbusy, the proxy connection to it would just wait in a queue until it was \navailable.\n\nIs that the kind of design you had in mind?\n\nThat seems like a reasonable option as well. We already have some \nlightweight Apache servers in use on the project which currently just \nserve static content.\n\n Mark\n\n", "msg_date": "Thu, 10 Apr 2008 17:28:51 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: recommendations for web/db connection pooling or DBD::Gofer\n\treviews" }, { "msg_contents": "\nOn Apr 10, 2008, at 5:28 PM, Mark Stosberg wrote:\n> So, the front-end proxy would have a number of max connections, say \n> 200, and it would connect to another httpd/mod_perl server behind \n> with a lower number of connections, say 20. If the backend httpd \n> server was busy, the proxy connection to it would just wait in a \n> queue until it was available.\n\nIf you read the mod_perl performance tuning guide, it will tell you to \ndo exactly this. These are solved problems for many, many years now. \nThe apache mod_proxy really does wonders...\n\n", "msg_date": "Fri, 11 Apr 2008 10:41:09 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: recommendations for web/db connection pooling or DBD::Gofer\n\treviews" } ]
[ { "msg_contents": "Hi,\n\nI have a performance problem with a script that does massive bulk\ninsert in 6 tables. When the script starts the performance is really\ngood but will degrade minute after minute and take almost a day to\nfinish!\n\nI almost tried everything suggested on this list, changed our external\nraid array from raid 5 to raid 10, tweaked postgresql.conf to the best\nof my knowledge, moved pg_xlog to a different array, dropped the\ntables before running the script. But the performance gain was\nnegligible even after all these changes...\n\nIMHO the hardware that we use should be up to the task: Dell PowerEdge\n6850, 4 x 3.0Ghz Dual Core Xeon, 8GB RAM, 3 x 300GB SAS 10K in raid 5\nfor / and 6 x 300GB SAS 10K in raid 10 (MD1000) for PG data, the data\nfilesystem is ext3 mounted with noatime and data=writeback. Running on\nopenSUSE 10.3 with PostgreSQL 8.2.7. The server is dedicated for\nPostgreSQL...\n\nWe tested the same script and schema with Oracle 10g on the same\nmachine and it took only 2.5h to complete!\n\nWhat I don't understand is that with Oracle the performance seems\nalways consistent but with PG it deteriorates over time...\n\nAny idea? Is there any other improvements I could do?\n\nThanks\n\nChristian\n", "msg_date": "Mon, 7 Apr 2008 23:01:18 -0400", "msg_from": "\"Christian Bourque\" <[email protected]>", "msg_from_op": true, "msg_subject": "bulk insert performance problem" }, { "msg_contents": "Christian Bourque wrote:\n> Hi,\n>\n> I have a performance problem with a script that does massive bulk\n> insert in 6 tables. When the script starts the performance is really\n> good but will degrade minute after minute and take almost a day to\n> finish!\n> \nWould I be correct in guessing that there are foreign key relationships \nbetween those tables, and that there are significant numbers of indexes \nin use?\n\nThe foreign key checking costs will go up as the tables grow, and AFAIK \nthe indexes get a bit more expensive to maintain too.\n\nIf possible you should probably drop your foreign key relationships and \ndrop your indexes, insert your data, then re-create the indexes and \nforeign keys. The foreign keys will be rechecked when you recreate them, \nand it's *vastly* faster to do it that way. Similarly, building an index \nfrom scratch is quite a bit faster than progressively adding to it. Of \ncourse, dropping the indices is only useful if you aren't querying the \ntables as you build them.\n\nAlso, if you're loading data using stored procedures you should avoid \nthe use of exception blocks. I had some major problems with my bulk data \nconversion code due to overuse of exception blocks creating large \nnumbers of subtransactions behind the scenes and slowing everything to a \ncrawl.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 08 Apr 2008 11:18:48 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "Craig Ringer wrote:\n> Christian Bourque wrote:\n>> Hi,\n>>\n>> I have a performance problem with a script that does massive bulk\n>> insert in 6 tables. 
When the script starts the performance is really\n>> good but will degrade minute after minute and take almost a day to\n>> finish!\n>> \n> Would I be correct in guessing that there are foreign key relationships \n> between those tables, and that there are significant numbers of indexes \n> in use?\n> \n> The foreign key checking costs will go up as the tables grow, and AFAIK \n> the indexes get a bit more expensive to maintain too.\n> \n> If possible you should probably drop your foreign key relationships and \n> drop your indexes, insert your data, then re-create the indexes and \n> foreign keys. The foreign keys will be rechecked when you recreate them, \n> and it's *vastly* faster to do it that way. Similarly, building an index \n> from scratch is quite a bit faster than progressively adding to it. Of \n> course, dropping the indices is only useful if you aren't querying the \n> tables as you build them.\n\nIf you are, add \"analyze\" commands through the import, eg every 10,000 \nrows. Then your checks should be a bit faster.\n\nThe other suggestion would be to do block commits:\n\nbegin;\ndo stuff for 5000 rows;\ncommit;\n\nrepeat until finished.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Tue, 08 Apr 2008 13:32:56 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "I use 10000 rows,have big blob\r\n\r\n\r\n2008-04-08 \r\n\r\n\r\n\r\nbitaoxiao \r\n\r\n\r\n\r\n发件人: Chris \r\n发送时间: 2008-04-08 11:35:57 \r\n收件人: Christian Bourque \r\n抄送: [email protected] \r\n主题: Re: [PERFORM] bulk insert performance problem \r\n \r\nCraig Ringer wrote:\r\n> Christian Bourque wrote:\r\n>> Hi,\r\n>>\r\n>> I have a performance problem with a script that does massive bulk\r\n>> insert in 6 tables. When the script starts the performance is really\r\n>> good but will degrade minute after minute and take almost a day to\r\n>> finish!\r\n>> \r\n> Would I be correct in guessing that there are foreign key relationships \r\n> between those tables, and that there are significant numbers of indexes \r\n> in use?\r\n> \r\n> The foreign key checking costs will go up as the tables grow, and AFAIK \r\n> the indexes get a bit more expensive to maintain too.\r\n> \r\n> If possible you should probably drop your foreign key relationships and \r\n> drop your indexes, insert your data, then re-create the indexes and \r\n> foreign keys. The foreign keys will be rechecked when you recreate them, \r\n> and it's *vastly* faster to do it that way. Similarly, building an index \r\n> from scratch is quite a bit faster than progressively adding to it. Of \r\n> course, dropping the indices is only useful if you aren't querying the \r\n> tables as you build them.\r\nIf you are, add \"analyze\" commands through the import, eg every 10,000 \r\nrows. 
Then your checks should be a bit faster.\r\nThe other suggestion would be to do block commits:\r\nbegin;\r\ndo stuff for 5000 rows;\r\ncommit;\r\nrepeat until finished.\r\n-- \r\nPostgresql & php tutorials\r\nhttp://www.designmagick.com/\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n\n\n\n\n\n\nI use 10000 rows,have big \r\nblob\n \n \n2008-04-08 \n\n\nbitaoxiao\n\n\n发件人: Chris \n发送时间: 2008-04-08  11:35:57 \r\n\n收件人: Christian Bourque \r\n\n抄送: \r\[email protected] \n主题: Re: [PERFORM] bulk insert \r\nperformance problem \n \n\nCraig Ringer wrote:\n> Christian Bourque wrote:\n>> Hi,\n>>\n>> I have a performance problem with a script that does massive bulk\n>> insert in 6 tables. When the script starts the performance is really\n>> good but will degrade minute after minute and take almost a day to\n>> finish!\n>>   \n> Would I be correct in guessing that there are foreign key relationships \n> between those tables, and that there are significant numbers of indexes \n> in use?\n> \n> The foreign key checking costs will go up as the tables grow, and AFAIK \n> the indexes get a bit more expensive to maintain too.\n> \n> If possible you should probably drop your foreign key relationships and \n> drop your indexes, insert your data, then re-create the indexes and \n> foreign keys. The foreign keys will be rechecked when you recreate them, \n> and it's *vastly* faster to do it that way. Similarly, building an index \n> from scratch is quite a bit faster than progressively adding to it. Of \n> course, dropping the indices is only useful if you aren't querying the \n> tables as you build them.\n\nIf you are, add \"analyze\" commands through the import, eg every 10,000 \nrows. Then your checks should be a bit faster.\n\nThe other suggestion would be to do block commits:\n\nbegin;\ndo stuff for 5000 rows;\ncommit;\n\nrepeat until finished.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 8 Apr 2008 11:50:51 +0800", "msg_from": "\"bitaoxiao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "On Mon, Apr 07, 2008 at 11:01:18PM -0400, Christian Bourque wrote:\n> I have a performance problem with a script that does massive bulk\n> insert in 6 tables. When the script starts the performance is really\n> good but will degrade minute after minute and take almost a day to\n> finish!\n\nhow do you do this bulk insert?\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Tue, 8 Apr 2008 10:41:39 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "Christian Bourque wrote:\n> \n> Any idea? Is there any other improvements I could do?\n\nAre you using the \"COPY\" syntax in the import script or individual \ninsert statements? 
Using COPY will always be *much* faster.\n\nI believe COPY always appends to tables rather than replacing the \ncontents, you can combine this technique with the possibility of \nsplitting up the task into multiple copy statements, but that has never \nbeen necessary in my case, switching from INSERTS to a COPY statement \nalways provided the huge performance improvement I needed.\n\nIt's easy to confuse \"pg_dump -d\" with \"psql -d\" ...it's too bad they \nmean very different things.\n\nFor pg_dump, \"-d\" causes INSERT statements to be generated instead of a \nCOPY statement, and is has been a mistake I made in the past, because I \nexpected to work like \"psql -d\", where \"-d\" means \"database name\".\n\nI suppose the safe thing to do is to avoid using \"-d\" altogether!\n\n\n\tMark\n\n", "msg_date": "Tue, 08 Apr 2008 09:50:40 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "On Tue, 8 Apr 2008, Mark Stosberg wrote:\n>> Any idea? Is there any other improvements I could do?\n>\n> Are you using the \"COPY\" syntax in the import script or individual insert \n> statements? Using COPY will always be *much* faster.\n\nPostgreSQL (latest versions at least) has an optimisation if you create a \ntable in the same transaction as you load data into it. So, if you have a \ndatabase dump, load it in using psql -1, which wraps the entire operation \nin a single transaction. Of course, a COPY dump will load a lot faster \nthan a INSERT dump.\n\nMatthew\n\n-- \nWhat goes up must come down. Ask any system administrator.\n", "msg_date": "Tue, 8 Apr 2008 15:00:13 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" }, { "msg_contents": "> I have a performance problem with a script that does massive bulk\n> insert in 6 tables. When the script starts the performance is really\n> good but will degrade minute after minute and take almost a day to\n> finish!\n\n\tLooks like foreign key checks slow you down.\n\n\t- Batch INSERTS in transactions (1000-10000 per transaction)\n\t- Run ANALYZE once in a while so the FK checks use indexes\n\t- Are there any DELETEs in your script which might hit nonidexed \nREFERENCES... columns to cascade ?\n\t- Do you really need to check for FKs on the fly while inserting ?\n\tie. do you handle FK violations ?\n\tOr perhaps your data is already consistent ?\n\tIn this case, load the data without any constraints (and without any \nindexes), and add indexes and foreign key constraints after the loading is \nfinished.\n\t- Use COPY instead of INSERT.\n\n\tIf you use your script to process data, perhaps you could import raw \nunprocessed data in a table (with COPY) and process it with SQL. This is \nusually much faster than doing a zillion inserts.\n", "msg_date": "Wed, 09 Apr 2008 00:48:35 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk insert performance problem" } ]
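A minimal sketch of the load pattern suggested in this thread (drop indexes and foreign keys, COPY the data in, rebuild, then ANALYZE). The table, column and file names below are hypothetical stand-ins, not taken from the original poster's script:

-- Assumes a hypothetical table items(id, parent_id, payload) with an FK to parents(id).

-- 1. Drop the constraints and indexes that would otherwise be maintained row by row.
ALTER TABLE items DROP CONSTRAINT items_parent_id_fkey;
DROP INDEX items_parent_id_idx;

-- 2. Load the data in bulk with COPY (or \copy from psql) instead of row-by-row INSERTs.
COPY items (id, parent_id, payload) FROM '/tmp/items.csv' WITH CSV;

-- 3. Rebuild the index and re-add the foreign key; the FK is checked once, in bulk.
CREATE INDEX items_parent_id_idx ON items (parent_id);
ALTER TABLE items ADD CONSTRAINT items_parent_id_fkey
    FOREIGN KEY (parent_id) REFERENCES parents (id);

-- 4. Refresh planner statistics so later queries and FK lookups use the indexes.
ANALYZE items;

-- If the loader has to stay with INSERTs, batch them and ANALYZE as the table grows:
BEGIN;
-- ... a few thousand INSERTs ...
COMMIT;
ANALYZE items;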
[ { "msg_contents": "\nHi all,\n\nI need to do a bulk data loading around 704GB (log file size) at present  in 8 hrs (1 am - 9am). The data file size may increase 3 to 5 times in future.\n\nUsing COPY it takes 96 hrs to finish the task.\nWhat is the best way to do it ?\n\nHARDWARE: SUN THUMPER/ RAID10\nOS : SOLARIS 10.\nDB: Greenplum/Postgres\n\n\nRegards,\n\nSrikanth k Potluri\n\n+63 9177444783(philippines) \n\n\n\n\n", "msg_date": "Mon, 7 Apr 2008 22:41:47 -0500", "msg_from": "\"Potluri Srikanth\" <[email protected]>", "msg_from_op": true, "msg_subject": "bulk data loading" }, { "msg_contents": "Potluri Srikanth wrote:\n> Hi all,\n> \n> I need to do a bulk data loading around 704GB (log file size) at\n> present in 8 hrs (1 am - 9am). The data file size may increase 3 to\n> 5 times in future.\n> \n> Using COPY it takes 96 hrs to finish the task.\n> What is the best way to do it ?\n> \n> HARDWARE: SUN THUMPER/ RAID10\n> OS : SOLARIS 10.\n> DB: Greenplum/Postgres\n\nIf you're using Greenplum, you should probably be talking to the\nGreenplum folks. IIRC, they have made some fairly large changes to the\nload process, so they'll be the ones with the proper answers for you.\n\n//Magnus\n", "msg_date": "Tue, 8 Apr 2008 09:40:17 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bulk data loading" } ]
[ { "msg_contents": "Hello everyone!!\n\nI have a table with 17 columns and it has almost\n530000 records and doing just a \n\nSELECT * FROM table\n\nwith the EXPLAIN ANALYZE I get:\n\nSeq Scan on table (cost=0.00...19452.95 rows=529395\nwidth=170) (actual time=0.155...2194.294 rows=529395\nloops=1)\ntotal runtime=3679.039 ms\n\nand this table has a PK...\nDo you think is too much time for a simple select?...\n\nI guess it's a bit slow to get all those records...but\nsince I'm a newbie with PostgreSQL, what I can check\nto optimize?\n\nThanks to all!\nCiao,\nLuigi \n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Wed, 9 Apr 2008 02:51:50 -0700 (PDT)", "msg_from": "\"Luigi N. Puleio\" <[email protected]>", "msg_from_op": true, "msg_subject": "EXPLAIN detail" }, { "msg_contents": "On Wed, Apr 9, 2008 at 3:21 PM, Luigi N. Puleio <[email protected]> wrote:\n> Hello everyone!!\n>\n> I have a table with 17 columns and it has almost\n> 530000 records and doing just a\n>\n> SELECT * FROM table\n>\n> with the EXPLAIN ANALYZE I get:\n>\n> Seq Scan on table (cost=0.00...19452.95 rows=529395\n> width=170) (actual time=0.155...2194.294 rows=529395\n> loops=1)\n> total runtime=3679.039 ms\n>\n> and this table has a PK...\n> Do you think is too much time for a simple select?...\n>\n\nWell, PK won't help you here because you are selecting all rows\nfrom the table and that seq scan is the right thing for that.\nWithout knowing your hardware its difficult to judge if\nthe time taken is more or not. Anyways, I don't think there is much\ntweaking you can do for such a query except making sure that\nyour table is not bloated with dead tuples.\n\nThanks,\nPavan\n\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 9 Apr 2008 15:34:06 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "Pavan Deolasee wrote:\n\n> Anyways, I don't think there is much\n> tweaking you can do for such a query except making sure that\n> your table is not bloated with dead tuples.\n\nTo the OP:\n\nMore explicitly: Make sure you use autovacuum or run VACUUM manually on\nthe table periodically.\n\nWould I be correct in suspecting that your real problem is with a more\nmeaningful and complex query, and the one you've posted is\noversimplifying what you are trying to do? If that is the case, and\nyou're having problems with queries that do more real work than this one\ndoes, maybe you should post EXPLAIN ANALYZE output from such a real\nworld query.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 09 Apr 2008 18:28:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "On Wed, 9 Apr 2008, Pavan Deolasee wrote:\n>> I have a table with 17 columns and it has almost\n>> 530000 records and doing just a\n>>\n>> SELECT * FROM table\n\n> Well, PK won't help you here because you are selecting all rows\n> from the table and that seq scan is the right thing for that.\n\nYes. Like he said. Basically, you're asking the database to fetch all half \na million rows. That's going to take some time, whatever hardware you \nhave. The PK is completely irrelevant, because the query doesn't refer to \nit at all. 
To be honest, three seconds sounds pretty reasonable for that \nsort of query.\n\nMatthew\n\n-- \nThere once was a limerick .sig\nthat really was not very big\nIt was going quite fine\nTill it reached the fourth line\n", "msg_date": "Wed, 9 Apr 2008 11:33:43 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "--- Pavan Deolasee <[email protected]> wrote:\n\n> On Wed, Apr 9, 2008 at 3:21 PM, Luigi N. Puleio\n> <[email protected]> wrote:\n> > Hello everyone!!\n> >\n> > I have a table with 17 columns and it has almost\n> > 530000 records and doing just a\n> >\n> > SELECT * FROM table\n> >\n> > with the EXPLAIN ANALYZE I get:\n> >\n> > Seq Scan on table (cost=0.00...19452.95\n> rows=529395\n> > width=170) (actual time=0.155...2194.294\n> rows=529395\n> > loops=1)\n> > total runtime=3679.039 ms\n> >\n> > and this table has a PK...\n> > Do you think is too much time for a simple\n> select?...\n> >\n> \n> Well, PK won't help you here because you are\n> selecting all rows\n> from the table and that seq scan is the right thing\n> for that.\n> Without knowing your hardware its difficult to judge\n> if\n> the time taken is more or not. Anyways, I don't\n> think there is much\n> tweaking you can do for such a query except making\n> sure that\n> your table is not bloated with dead tuples.\n> \n\nIn effect, this simple query is a start of examination\nto check about speed for another nested query; more\nprecisely I'm tring to obtain the difference of the\ntime querying the same table with a different\ncondition, like:\n\nSELECT \n (a.column1)::date, MIN(b.column2) - a.column2\nFROM \n table a\n inner join table b \n on ((a.column1)::date = (b.column1)::date amd\nb.column3 = 'b' and (b.column1)::time without time\nzone >= (a.column1)::time without time zone)\nWHERE \n (a.column1)::date = '2008-04-09'\n a.column3 = 'a'\nGROUP BY a.column1\n\nand with this I have to obtain like 3-4 records from\nall those whole 500000 records and with the explain\nanalyze I get almost 6 seconds:\n\nNested Loop (cost=0.00...52140.83 rows=1 width=34)\n(actual time=4311.756...5951.271 rows=1 loops=1)\n\nSo its been a lot of time because I could wonder how\nlong it would take for example if I do a filter not\nfor a single day but for a month which should return\nmuch more than 1 row...\n\nActually I emailed to the responsible of the server\nwhere PostgreSQL is installed to see if he done a\nvacuum manually lately since querying the pg_settings\nor the pg_stat_all_tables I have no response about\nautovacuum...\n\nBut maybe there's a better way to query this nested\nloop for more efficience....\n\nThanks to all!\nCiao,\nLuigi\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Wed, 9 Apr 2008 04:05:54 -0700 (PDT)", "msg_from": "\"Luigi N. Puleio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "Luigi N. 
Puleio wrote:\n\n> SELECT \n> (a.column1)::date, MIN(b.column2) - a.column2\n> FROM \n> table a\n> inner join table b \n> on ((a.column1)::date = (b.column1)::date amd\n> b.column3 = 'b' and (b.column1)::time without time\n> zone >= (a.column1)::time without time zone)\n> WHERE \n> (a.column1)::date = '2008-04-09'\n> a.column3 = 'a'\n> GROUP BY a.column1\n> \n> and with this I have to obtain like 3-4 records from\n> all those whole 500000 records and with the explain\n> analyze I get almost 6 seconds:\n> \n> Nested Loop (cost=0.00...52140.83 rows=1 width=34)\n> (actual time=4311.756...5951.271 rows=1 loops=1)\n\nWith all that casting, is it possible that appropriate indexes aren't\nbeing used because your WHERE / ON clauses aren't an exact type match\nfor the index?\n\nCan you post the full EXPLAIN ANALYZE from the query? This snippet\ndoesn't even show how records are being looked up.\n\nWhat about a \\d of the table from psql, or at least a summary of the\ninvolved column data types and associated indexes?\n\n--\nCraig Ringer\n", "msg_date": "Wed, 09 Apr 2008 19:16:29 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" } ]
[ { "msg_contents": ">> SELECT \n>> (a.column1)::date, MIN(b.column2) - a.column2\n>> FROM \n>> table a\n>> inner join table b \n>> on ((a.column1)::date = (b.column1)::date amd\n>> b.column3 = 'b' and (b.column1)::time without time\n>> zone >= (a.column1)::time without time zone)\n>> WHERE \n>> (a.column1)::date = '2008-04-09'\n>> a.column3 = 'a'\n>> GROUP BY a.column1\n>> \n>> and with this I have to obtain like 3-4 records from\n>> all those whole 500000 records and with the explain\n>> analyze I get almost 6 seconds:\n>> \n>> Nested Loop (cost=0.00...52140.83 rows=1 width=34)\n>> (actual time=4311.756...5951.271 rows=1 loops=1)\n\n> With all that casting, is it possible that appropriate indexes aren't\n> being used because your WHERE / ON clauses aren't an exact type match\n> for the index?\n\nYou mean to put an index on date with timestamptz datatype column?...\n\n> Can you post the full EXPLAIN ANALYZE from the query? This snippet\n> doesn't even show how records are being looked up.\n\nHashAggregate (cost=52236.31..52236.33 rows=1 width=34) (actual time=7004.779...7004.782 rows=1 loops=1)\n -> Nested Loop (cost=0.00..52236.30 rows=1 width=34) (actual time=3939.450..7004.592 rows=1 loops=1)\n Join filter: ((\"inner\".calldate)::time without time zone => (\"outer\".calldate)::time without time zone)\n -> Seq Scan on table a (cost=0.00..27444.03 rows=1 width=26) (actual time=2479.199..2485.266 rows=3 loops=1) \n Filter: (((calldate)::date = '2008-04-09'::date) AND ((src)::text = '410'::text) AND (substr((dst)::text, 1, 4)='*100'::text) AND ((lastdata)::text ='/dati/ita/loginok'::text)) \n ->Seq Scan on table b (cost=0.00..24792.22 rows=3 width=16) (actual time=1504.508..1506.374 rows=1 loops=3)\n Filter: ((((lastdata)::text ='/dati/ita/logoutok'::text) AND ('410'::text=(src)::text) AND ('2008-04-09'::date = (calldate)::date))\nTotal runtime: 7005.706 ms\n\n> What about a \\d of the table from psql, or at least a summary of the\n> involved column data types and associated indexes?\n\nthis table has an acctid column which is PK then most of the other columns are varchar(80) or so....\n\nSo for 4 records result, 7 seconds are too way a lot I guess... but as I said before I'm gonna wait if the responsible of the server did a VACUUM on the table...\n\nWhat do you think?...\n\n\nThanks again to all.\nCiao,\nLuigi\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Wed, 9 Apr 2008 06:45:59 -0700 (PDT)", "msg_from": "\"Luigi N. Puleio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "Luigi N. Puleio wrote:\n\n>> With all that casting, is it possible that appropriate indexes aren't\n>> being used because your WHERE / ON clauses aren't an exact type match\n>> for the index?\n> \n> You mean to put an index on date with timestamptz datatype column?...\n\nEr ... I'm not quite sure what you mean. Do you mean an index on a cast\nof the column, eg:\n\nCREATE INDEX some_idx_name ON some_table ( some_timestamp_field::date )\n\nthen ... maybe. It's hard to be sure when there is so little information\navailable. It shouldn't be necessary, but there are certainly uses for\nthat sort of thing - for example, I use a couple of functional indexes\nin the schema I'm working on at the moment. It's probably a good idea to\nlook at ways to avoid doing that first, though.\n\n>> Can you post the full EXPLAIN ANALYZE from the query? 
This snippet\n>> doesn't even show how records are being looked up.\n> \n> HashAggregate (cost=52236.31..52236.33 rows=1 width=34) (actual time=7004.779...7004.782 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..52236.30 rows=1 width=34) (actual time=3939.450..7004.592 rows=1 loops=1)\n> Join filter: ((\"inner\".calldate)::time without time zone => (\"outer\".calldate)::time without time zone)\n> -> Seq Scan on table a (cost=0.00..27444.03 rows=1 width=26) (actual time=2479.199..2485.266 rows=3 loops=1) \n> Filter: (((calldate)::date = '2008-04-09'::date) AND ((src)::text = '410'::text) AND (substr((dst)::text, 1, 4)='*100'::text) AND ((lastdata)::text ='/dati/ita/loginok'::text)) \n> ->Seq Scan on table b (cost=0.00..24792.22 rows=3 width=16) (actual time=1504.508..1506.374 rows=1 loops=3)\n> Filter: ((((lastdata)::text ='/dati/ita/logoutok'::text) AND ('410'::text=(src)::text) AND ('2008-04-09'::date = (calldate)::date))\n> Total runtime: 7005.706 ms\n\nPersonally, I'd want to get rid of all those casts first. Once that's\ncleaned up I'd want to look at creating appropriate indexes on your\ntables. If necessary, I might even create a composite index on\n(lastdata,src,calldate) .\n\n>> What about a \\d of the table from psql, or at least a summary of the\n>> involved column data types and associated indexes?\n> \n> this table has an acctid column which is PK then most of the other columns are varchar(80) or so....\n\nDo you mean that the columns involved in your WHERE and ON clauses, the\nones you're casting to date, timestamp, etc, are stored as VARCHAR? If\nso, it's no surprise that the query is slow because you're forcing\nPostgreSQL to convert a string to a date, timestamp, or time datatype to\ndo anything with it ... and you're doing it many times in every query.\nThat will be VERY slow, and prevent the use of (simple) indexes on those\ncolumns.\n\nIf you're really storing dates/times as VARCHAR, you should probably\nlook at some changes to your database design, starting with the use of\nappropriate data types.\n\nThat's all guesswork, because you have not provided enough information.\n\nCan you please post the output of psql's \\d command on the table in\nquestion?\n\nIf for some reason you cannot do that, please at least include the data\ntype of the primary key and all fields involved in the query, as well as\na list of all the indexes on both tables.\n\nThe easy way to do that is to just launch \"psql\" then run:\n\n\\d table\n\nand paste the output to an email.\n\n> So for 4 records result, 7 seconds are too way a lot I guess... but as I said before I'm gonna wait if the responsible of the server did a VACUUM on the table...\n>\n> What do you think?...\n\nIf you're really casting VARCHAR to DATE, TIME, TIMESTAMP, etc on demand\nthen personally I really doubt that dead rows are your problem.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 09 Apr 2008 22:15:06 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" } ]
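A sketch of the composite index Craig mentions, using the column names and literal values that appear in the posted plan; the table name cdr comes from the poster's full query later in the discussion. It only helps if the date filter is rewritten as a range on the raw timestamptz instead of a cast, so both statements are illustrative:

-- Equality columns first, then the range column.
CREATE INDEX cdr_lastdata_src_calldate_idx ON cdr (lastdata, src, calldate);

-- Range condition on calldate itself, so the planner can use the index:
SELECT count(*)
FROM cdr
WHERE lastdata = '/dati/ita/loginok'
  AND src = '410'
  AND calldate >= DATE '2008-04-09'
  AND calldate <  DATE '2008-04-09' + 1;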
[ { "msg_contents": ">> With all that casting, is it possible that appropriate indexes aren't\n>> being used because your WHERE / ON clauses aren't an exact type match\n>> for the index?\n> \n> You mean to put an index on date with timestamptz datatype column?...\n\n> Er ... I'm not quite sure what you mean. Do you mean an index on a cast\n> of the column, eg:\n\n> CREATE INDEX some_idx_name ON some_table ( some_timestamp_field::date )\n\n> then ... maybe. It's hard to be sure when there is so little information\n> available. It shouldn't be necessary, but there are certainly uses for\n> that sort of thing - for example, I use a couple of functional indexes\n> in the schema I'm working on at the moment. It's probably a good idea to\n> look at ways to avoid doing that first, though.\n\n>> Can you post the full EXPLAIN ANALYZE from the query? This snippet\n>> doesn't even show how records are being looked up.\n> \n> HashAggregate (cost=52236.31..52236.33 rows=1 width=34) (actual time=7004.779...7004.782 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..52236.30 rows=1 width=34) (actual time=3939.450..7004.592 rows=1 loops=1)\n> Join filter: ((\"inner\".calldate)::time without time zone => (\"outer\".calldate)::time without time zone)\n> -> Seq Scan on table a (cost=0.00..27444.03 rows=1 width=26) (actual time=2479.199..2485.266 rows=3 loops=1) \n> Filter: (((calldate)::date = '2008-04-09'::date) AND ((src)::text = '410'::text) AND (substr((dst)::text, 1, 4)='*100'::text) AND ((lastdata)::text ='/dati/ita/loginok'::text)) \n> ->Seq Scan on table b (cost=0.00..24792.22 rows=3 width=16) (actual time=1504.508..1506.374 rows=1 loops=3)\n> Filter: ((((lastdata)::text ='/dati/ita/logoutok'::text) AND ('410'::text=(src)::text) AND ('2008-04-09'::date = (calldate)::date))\n> Total runtime: 7005.706 ms\n\n> Personally, I'd want to get rid of all those casts first. Once that's\n> cleaned up I'd want to look at creating appropriate indexes on your\n> tables. If necessary, I might even create a composite index on\n> (lastdata,src,calldate) .\n\n>> What about a \\d of the table from psql, or at least a summary of the\n>> involved column data types and associated indexes?\n> \n> this table has an acctid column which is PK then most of the other columns are varchar(80) or so....\n\n> Do you mean that the columns involved in your WHERE and ON clauses, the\n> ones you're casting to date, timestamp, etc, are stored as VARCHAR? If\n> so, it's no surprise that the query is slow because you're forcing\n> PostgreSQL to convert a string to a date, timestamp, or time datatype to\n> do anything with it ... and you're doing it many times in every query.\n> That will be VERY slow, and prevent the use of (simple) indexes on those\n> columns.\n\n> If you're really storing dates/times as VARCHAR, you should probably\n> look at some changes to your database design, starting with the use of\n> appropriate data types.\n\n> That's all guesswork, because you have not provided enough information.\n\n> Can you please post the output of psql's \\d command on the table in\n> question?\n\n> If for some reason you cannot do that, please at least include the data\n> type of the primary key and all fields involved in the query, as well as\n> a list of all the indexes on both tables.\n\n> The easy way to do that is to just launch \"psql\" then run:\n\n> \\d table\n\n> and paste the output to an email.\n\n> So for 4 records result, 7 seconds are too way a lot I guess... 
but as I said before I'm gonna wait if > the responsible of the server did a VACUUM on the table...\n>\n> What do you think?...\n\n> If you're really casting VARCHAR to DATE, TIME, TIMESTAMP, etc on demand\n> then personally I really doubt that dead rows are your problem.\n\n\nWell, this table has a primary key index on first column called acctid which is an integer; instead the calldate column is a TIMESTAMPTZ and in fact I'm using to do (calldate)::date in the ON clause because since the time part of that column is always different and in the nesting I have to identificate the date is the same...\n\nthe other two columns (src and lastdata) are both VARCHAR(80) and the query is this one:\n\nEXPLAIN ANALYZE\nSELECT\n (a.calldate)::date,\n a.src,\n a.dst,\n MIN(e.calldate) - a.calldate\nFROM\n cdr a\n INNER JOIN cdr e\n ON ((e.calldate)::date = (a.calldate)::date AND e.src = a.src\n AND e.lastdata = '/dati/ita/logoutok' AND e.calldate >= a.calldate)\nWHERE\n (a.calldate)::date = '2008-04-09'\n AND a.src = '410'\n AND substr(a.dst, 1, 4) = '*100'\n AND a.lastdata = '/dati/ita/loginok'\nGROUP BY\n a.calldate, a.src, a.dst\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Wed, 9 Apr 2008 07:44:23 -0700 (PDT)", "msg_from": "\"Luigi N. Puleio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "Luigi N. Puleio wrote:\n>> If for some reason you cannot do that, please at least include the data\n>> type of the primary key and all fields involved in the query, as well as\n>> a list of all the indexes on both tables.\nIf you won't show people on the list your table definitions, or at least \nthe information shown above, then it's less likely that anybody can help \nyou or will spend the time trying to help you.\n\nPersonally I think you may need some functional/cast, and possibly \ncomposite, indexes to avoid the looping sequential scan as I said \nbefore. However, that's guesswork without some more information as \nrepeatedly stated and requested. I'm not going to bother replying to any \nfurther mail just to say so again.\n\n\nTry reading the documentation chapter about indexes:\n\nhttp://www.postgresql.org/docs/current/static/indexes.html\n\nand about query optimisation:\n\nhttp://www.postgresql.org/docs/current/static/performance-tips.html\n\nthen experiment with various indexes to see what works best. Think about \nthe data types. 
Remember that you can build an index on a cast of a \nfield, on multiple fields, on function calls, or basically any other \nsimple expression or expressions, but that complex indexes will cost \nmore to build and maintain and might be bigger (and thus slower to search).\n\n\nAnyway, I'm done.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 09 Apr 2008 23:28:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "\n> Well, this table has a primary key index on first column called acctid \n> which is an integer; instead the calldate column is a TIMESTAMPTZ and in \n> fact I'm using to do (calldate)::date in the ON clause because since the \n> time part of that column is always different and in the nesting I have \n> to identificate the date is the same...\n>\n> the other two columns (src and lastdata) are both VARCHAR(80) and the \n> query is this one:\n\n\tTip for getting answers from this list :\n\tYou should just post the output of \"\\d yourtable\" from psql, it would be \nquicker than writing a paragraph... Be lazy, lol.\n\n\tSo, basically if I understand you are doing a self-join on your table, \nyou want all rows from the same day, and you're doing something with the \ndates, and...\n\n\tTip for getting answers from this list :\n\tExplain (in english) what your query actually does, someone might come up \nwith a better idea on HOW to do it.\n\n\tSnip :\n\n> EXPLAIN ANALYZE\n> SELECT\n> (a.calldate)::date,\n> a.src,\n> a.dst,\n> MIN(e.calldate) - a.calldate\n> FROM\n> cdr a\n> INNER JOIN cdr e\n> ON ((e.calldate)::date = (a.calldate)::date AND e.src = a.src\n> AND e.lastdata = '/dati/ita/logoutok' AND e.calldate >= \n> a.calldate)\n> WHERE\n> (a.calldate)::date = '2008-04-09'\n> AND a.src = '410'\n> AND substr(a.dst, 1, 4) = '*100'\n> AND a.lastdata = '/dati/ita/loginok'\n> GROUP BY\n> a.calldate, a.src, a.dst\n\n\tOK, I assume you have an index on calldate, which is a TIMESTAMPTZ ?\n\t(in that case, why is it called calldate, and not calltimestamp ?...)\n\n\tBad news, the index is useless for this condition :\n\t(a.calldate)::date = '2008-04-09'\n\tThere, you are asking postgres to scan the entire table, convert the \ncolumn to date, and test. Bad.\n\n\tIn order to use the index, you could rewrite it as something like :\n\ta.calldate >= '2008-04-09' AND a.calldate < ('2008-04-09'::DATE + '1 \nDAY'::INTERVAL)\n\tThis is a RANGE query (just like BETWEEN) which is index-friendly.\n\n\tPersonnaly, I wouldn't do it that way : since you use the date (and not \nthe time, I presume you only use the time for display purposes) I would \njust store the timestamptz in \"calltimestamp\" and the date in \"calldate\", \nwith a trigger to ensure the date is set to calltimestamp::date every time \na row is inserted/updated.\n\tThis is better than a function index since you use that column a lot in \nyour query, it will be slightly faster, and it will save a lot of \ntimestamptz->date casts hence it will save CPU cycles\n\n\tTry this last option (separate date column), and repost EXPLAIN ANALYZE \nof your query so it can be optimized further.\n\n\tAlso, PLEASE don't use substr(), use a.dst LIKE '*100%', look in the \nmanual. 
LIKE 'foo%' is indexable if you create the proper index.\n\n\n\n\n\n\n\n", "msg_date": "Wed, 09 Apr 2008 20:41:29 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" }, { "msg_contents": "On Wed, Apr 9, 2008 at 11:41 AM, PFC <[email protected]> wrote:\n> In order to use the index, you could rewrite it as something like :\n> a.calldate >= '2008-04-09' AND a.calldate < ('2008-04-09'::DATE + '1\n> DAY'::INTERVAL)\n> This is a RANGE query (just like BETWEEN) which is index-friendly.\n\nAnother option would be to create a functional index on date_trunc(\n'day', cdr.calldate)\n\nthen using a where condition like:\n\ndate_trunc(a.calldate) = '2008-04-09'\n\nwould definitely use an index.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n", "msg_date": "Wed, 9 Apr 2008 15:17:38 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN detail" } ]
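A minimal sketch of the separate-date-column idea from this thread, against the poster's cdr table; the column, trigger, function and index names are invented for illustration:

-- Keep a plain date column alongside the timestamptz, maintained by a trigger,
-- so the per-day filter becomes simple equality on an indexable column.
ALTER TABLE cdr ADD COLUMN callday date;

CREATE OR REPLACE FUNCTION cdr_set_callday() RETURNS trigger AS $$
BEGIN
    NEW.callday := NEW.calldate::date;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cdr_set_callday
    BEFORE INSERT OR UPDATE ON cdr
    FOR EACH ROW EXECUTE PROCEDURE cdr_set_callday();

-- Backfill existing rows once, then index the new column.
UPDATE cdr SET callday = calldate::date;
CREATE INDEX cdr_callday_src_idx ON cdr (callday, src);

-- The filters can then avoid casts and substr():
SELECT count(*)
FROM cdr
WHERE callday = DATE '2008-04-09'
  AND src = '410'
  AND dst LIKE '*100%';  -- LIKE 'prefix%' replaces substr(dst, 1, 4) = '*100';
                         -- an index on dst needs varchar_pattern_ops in non-C locales

The trade-off is a little extra write and storage overhead in exchange for an equality filter the planner can estimate and index directly.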
[ { "msg_contents": "Hi, I've started my first project with Postgres (after several years of \nusing Mysql), and I'm having an odd performance problem that I was \nhoping someone might be able to explain the cause of.\n\n----My query----\n - select count(*) from gene_prediction_view where gene_ref = 523\n - takes 26 seconds to execute, and returns 2400 (out of a total of \n15 million records in the table)\n \n---My problem---\n Using a single-column index to count 2400 records which are exactly \none constant value doesn't sound like something that would take 26 \nseconds. What's the slowdown? Any silver bullets that might fix this?\n\n----Steps I've taken----\n - I ran vacuum and analyze\n - I upped the shared_buffers to 58384, and I upped some of the other \npostgresql.conf values as well. Nothing seemed to help significantly, \nbut maybe I missed something that would help specifically for this query \ntype?\n - I tried to create a hash index, but gave up after more than 4 \nhours of waiting for it to finish indexing\n\n----Table stats----\n - 15 million rows; I'm expecting to have four or five times this \nnumber eventually.\n - 1.5 gigs of hard drive usage\n\n----My development environment---\n - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm \nhard drive\n - OS X 10.5.2\n - Postgres 8.3 (installed via MacPorts)\n\n----My table----\n\nCREATE TABLE gene_prediction_view\n(\n id serial NOT NULL,\n gene_ref integer NOT NULL,\n go_id integer NOT NULL,\n go_description character varying(200) NOT NULL,\n go_category character varying(50) NOT NULL,\n function_verified_exactly boolean NOT NULL,\n function_verified_with_parent_go boolean NOT NULL,\n function_verified_with_child_go boolean NOT NULL,\n score numeric(10,2) NOT NULL,\n precision_score numeric(10,2) NOT NULL,\n CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),\n CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)\n REFERENCES sgd_annotations (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)\n REFERENCES go_terms (term) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)\n)\nWITH (OIDS=FALSE);\nALTER TABLE gene_prediction_view OWNER TO postgres;\n\nCREATE INDEX ix_gene_prediction_view_gene_ref\n ON gene_prediction_view\n USING btree\n (gene_ref);\n\n\n\n", "msg_date": "Wed, 09 Apr 2008 16:58:27 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "First of all, there is the 'explain analyze' output, which is pretty \nhelpful in postgresql.\n\nMy guess is, postgresql decides to do a table scan for some reason. It \nmight not have enough statistics for this particular table or column, to \nmake a sound decision. What you can try is to increase the statistics \ntarget, which works pretty easy:\nALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;\n\nValid ranges are from 1(0?) - 1000, the default is 10, the default on my \nsystems is usually 100. For such a large table, I'd go with 200.\n\nAfter that, you'll need to re-analyze your table and you can try again.\n\nPerhaps analyze should try to establish its own best guess to how many \nsamples it should take? 
The default of 10 is rather limited for large \ntables.\n\nBest regards,\n\nArjen\n\nOn 9-4-2008 22:58 John Beaver wrote:\n> Hi, I've started my first project with Postgres (after several years of \n> using Mysql), and I'm having an odd performance problem that I was \n> hoping someone might be able to explain the cause of.\n> \n> ----My query----\n> - select count(*) from gene_prediction_view where gene_ref = 523\n> - takes 26 seconds to execute, and returns 2400 (out of a total of 15 \n> million records in the table)\n> \n> ---My problem---\n> Using a single-column index to count 2400 records which are exactly \n> one constant value doesn't sound like something that would take 26 \n> seconds. What's the slowdown? Any silver bullets that might fix this?\n> \n> ----Steps I've taken----\n> - I ran vacuum and analyze\n> - I upped the shared_buffers to 58384, and I upped some of the other \n> postgresql.conf values as well. Nothing seemed to help significantly, \n> but maybe I missed something that would help specifically for this query \n> type?\n> - I tried to create a hash index, but gave up after more than 4 hours \n> of waiting for it to finish indexing\n> \n> ----Table stats----\n> - 15 million rows; I'm expecting to have four or five times this \n> number eventually.\n> - 1.5 gigs of hard drive usage\n> \n> ----My development environment---\n> - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm hard \n> drive\n> - OS X 10.5.2\n> - Postgres 8.3 (installed via MacPorts)\n> \n> ----My table----\n> \n> CREATE TABLE gene_prediction_view\n> (\n> id serial NOT NULL,\n> gene_ref integer NOT NULL,\n> go_id integer NOT NULL,\n> go_description character varying(200) NOT NULL,\n> go_category character varying(50) NOT NULL,\n> function_verified_exactly boolean NOT NULL,\n> function_verified_with_parent_go boolean NOT NULL,\n> function_verified_with_child_go boolean NOT NULL,\n> score numeric(10,2) NOT NULL,\n> precision_score numeric(10,2) NOT NULL,\n> CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),\n> CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)\n> REFERENCES sgd_annotations (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)\n> REFERENCES go_terms (term) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)\n> )\n> WITH (OIDS=FALSE);\n> ALTER TABLE gene_prediction_view OWNER TO postgres;\n> \n> CREATE INDEX ix_gene_prediction_view_gene_ref\n> ON gene_prediction_view\n> USING btree\n> (gene_ref);\n> \n> \n> \n> \n", "msg_date": "Wed, 09 Apr 2008 23:21:20 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "\nThis is a FAQ, it comes up on an almost weekly basis. Please do a\nlittle Googling on count(*) and PostgreSQL and you'll get all the\nexplanations and suggestions on how to fix the problem you could\never want.\n\nIn response to Arjen van der Meijden <[email protected]>:\n\n> First of all, there is the 'explain analyze' output, which is pretty \n> helpful in postgresql.\n> \n> My guess is, postgresql decides to do a table scan for some reason. It \n> might not have enough statistics for this particular table or column, to \n> make a sound decision. 
What you can try is to increase the statistics \n> target, which works pretty easy:\n> ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;\n> \n> Valid ranges are from 1(0?) - 1000, the default is 10, the default on my \n> systems is usually 100. For such a large table, I'd go with 200.\n> \n> After that, you'll need to re-analyze your table and you can try again.\n> \n> Perhaps analyze should try to establish its own best guess to how many \n> samples it should take? The default of 10 is rather limited for large \n> tables.\n> \n> Best regards,\n> \n> Arjen\n> \n> On 9-4-2008 22:58 John Beaver wrote:\n> > Hi, I've started my first project with Postgres (after several years of \n> > using Mysql), and I'm having an odd performance problem that I was \n> > hoping someone might be able to explain the cause of.\n> > \n> > ----My query----\n> > - select count(*) from gene_prediction_view where gene_ref = 523\n> > - takes 26 seconds to execute, and returns 2400 (out of a total of 15 \n> > million records in the table)\n> > \n> > ---My problem---\n> > Using a single-column index to count 2400 records which are exactly \n> > one constant value doesn't sound like something that would take 26 \n> > seconds. What's the slowdown? Any silver bullets that might fix this?\n> > \n> > ----Steps I've taken----\n> > - I ran vacuum and analyze\n> > - I upped the shared_buffers to 58384, and I upped some of the other \n> > postgresql.conf values as well. Nothing seemed to help significantly, \n> > but maybe I missed something that would help specifically for this query \n> > type?\n> > - I tried to create a hash index, but gave up after more than 4 hours \n> > of waiting for it to finish indexing\n> > \n> > ----Table stats----\n> > - 15 million rows; I'm expecting to have four or five times this \n> > number eventually.\n> > - 1.5 gigs of hard drive usage\n> > \n> > ----My development environment---\n> > - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm hard \n> > drive\n> > - OS X 10.5.2\n> > - Postgres 8.3 (installed via MacPorts)\n> > \n> > ----My table----\n> > \n> > CREATE TABLE gene_prediction_view\n> > (\n> > id serial NOT NULL,\n> > gene_ref integer NOT NULL,\n> > go_id integer NOT NULL,\n> > go_description character varying(200) NOT NULL,\n> > go_category character varying(50) NOT NULL,\n> > function_verified_exactly boolean NOT NULL,\n> > function_verified_with_parent_go boolean NOT NULL,\n> > function_verified_with_child_go boolean NOT NULL,\n> > score numeric(10,2) NOT NULL,\n> > precision_score numeric(10,2) NOT NULL,\n> > CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),\n> > CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)\n> > REFERENCES sgd_annotations (id) MATCH SIMPLE\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n> > CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)\n> > REFERENCES go_terms (term) MATCH SIMPLE\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n> > CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)\n> > )\n> > WITH (OIDS=FALSE);\n> > ALTER TABLE gene_prediction_view OWNER TO postgres;\n> > \n> > CREATE INDEX ix_gene_prediction_view_gene_ref\n> > ON gene_prediction_view\n> > USING btree\n> > (gene_ref);\n> > \n> > \n> > \n> > \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion 
Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 9 Apr 2008 17:36:09 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "Bill Moran wrote:\n> This is a FAQ, it comes up on an almost weekly basis.\n\nI don't think so. \"where\".\n\n>>> - select count(*) from gene_prediction_view where gene_ref = 523\n\nCheers,\n Jeremy\n\n", "msg_date": "Wed, 09 Apr 2008 23:07:50 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "\n> Hi, I've started my first project with Postgres (after several years of \n> using Mysql), and I'm having an odd performance problem that I was \n> hoping someone might be able to explain the cause of.\n>\n> ----My query----\n> - select count(*) from gene_prediction_view where gene_ref = 523\n> - takes 26 seconds to execute, and returns 2400 (out of a total of \n> 15 million records in the table)\n> ---My problem---\n> Using a single-column index to count 2400 records which are exactly \n> one constant value doesn't sound like something that would take 26 \n> seconds. What's the slowdown? Any silver bullets that might fix this?\n\n\t* Please post an EXPLAIN ANALYZE of your query which will allow to choose \nbetween these two options :\n\t- If Postgres uses a bad plan (like a seq scan), you need to up the \nstatistics for this column\n\t- If you get the correct plan (index scan or bitmap index scan) then it \nis likely that postgres does one disk seek per row that has to be counted. \n26 seconds for 2400 rows would be consistent with a 10ms seek time. The \nunmistakable sign is that re-running the query will result in a very fast \nruntime (I'd say a couple ms for counting 2400 rows if no disk IO is \ninvolved).\n\n", "msg_date": "Thu, 10 Apr 2008 00:31:00 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "Hi John,\n\nYou don't use the same 'gene_ref ='-value, so its not a perfect \ncomparison. 
And obviously, there is the fact that the data can be in the \ndisk cache, the second time you run it, which would explain the almost \ninstantaneous result for the second query.\n\nIf repeating the query a few times with 200 still makes it do its work \nin 15 seconds and with 800 in less than 100ms, than you might have found \na bug, or it is at least something I don't know how to fix.\n\nI doubt upping the default for all tables to 1000 is a good idea. The \ndata collected is used in the query-planning-stage, where more data \nmeans more processing time. Obviously there is a tradeoff somewhere \nbetween having more statistics and thus being able to plan the query \nbetter versus requiring more time to process those statistics.\n\nBest regards,\n\nArjen\n\nOn 10-4-2008 0:24 John Beaver wrote:\n> Perfect - thanks Arjen. Using your value of 200 decreased the time to 15 \n> seconds, and using a value of 800 makes it almost instantaneous. I'm \n> really not concerned about space usage; if having more statistics \n> increases performance this much, maybe I'll just default it to 1000?\n> \n> Strangely, the steps taken in the explain analyze are all the same. The \n> only differences are the predicted costs (and execution times).\n> \n> explain analyze for a statistics of 200:\n> \"Aggregate (cost=8831.27..8831.28 rows=1 width=0) (actual \n> time=15198.407..15198.408 rows=1 loops=1)\"\n> \" -> Bitmap Heap Scan on gene_prediction_view (cost=44.16..8825.29 \n> rows=2392 width=0) (actual time=19.719..15191.875 rows=2455 loops=1)\"\n> \" Recheck Cond: (gene_ref = 500)\"\n> \" -> Bitmap Index Scan on ix_gene_prediction_view_gene_ref \n> (cost=0.00..43.56 rows=2392 width=0) (actual time=18.871..18.871 \n> rows=2455 loops=1)\"\n> \" Index Cond: (gene_ref = 500)\"\n> \"Total runtime: 15198.651 ms\"\n> \n> explain analyze for a statistics of 800:\n> \"Aggregate (cost=8873.75..8873.76 rows=1 width=0) (actual \n> time=94.473..94.473 rows=1 loops=1)\"\n> \" -> Bitmap Heap Scan on gene_prediction_view (cost=44.25..8867.74 \n> rows=2404 width=0) (actual time=39.358..93.733 rows=2455 loops=1)\"\n> \" Recheck Cond: (gene_ref = 301)\"\n> \" -> Bitmap Index Scan on ix_gene_prediction_view_gene_ref \n> (cost=0.00..43.65 rows=2404 width=0) (actual time=38.472..38.472 \n> rows=2455 loops=1)\"\n> \" Index Cond: (gene_ref = 301)\"\n> \"Total runtime: 94.622 ms\"\n> \n> \n> \n> \n> Arjen van der Meijden wrote:\n>> First of all, there is the 'explain analyze' output, which is pretty \n>> helpful in postgresql.\n>>\n>> My guess is, postgresql decides to do a table scan for some reason. It \n>> might not have enough statistics for this particular table or column, \n>> to make a sound decision. What you can try is to increase the \n>> statistics target, which works pretty easy:\n>> ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;\n>>\n>> Valid ranges are from 1(0?) - 1000, the default is 10, the default on \n>> my systems is usually 100. For such a large table, I'd go with 200.\n>>\n>> After that, you'll need to re-analyze your table and you can try again.\n>>\n>> Perhaps analyze should try to establish its own best guess to how many \n>> samples it should take? 
The default of 10 is rather limited for large \n>> tables.\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>> On 9-4-2008 22:58 John Beaver wrote:\n>>> Hi, I've started my first project with Postgres (after several years \n>>> of using Mysql), and I'm having an odd performance problem that I was \n>>> hoping someone might be able to explain the cause of.\n>>>\n>>> ----My query----\n>>> - select count(*) from gene_prediction_view where gene_ref = 523\n>>> - takes 26 seconds to execute, and returns 2400 (out of a total of \n>>> 15 million records in the table)\n>>>\n>>> ---My problem---\n>>> Using a single-column index to count 2400 records which are \n>>> exactly one constant value doesn't sound like something that would \n>>> take 26 seconds. What's the slowdown? Any silver bullets that might \n>>> fix this?\n>>>\n>>> ----Steps I've taken----\n>>> - I ran vacuum and analyze\n>>> - I upped the shared_buffers to 58384, and I upped some of the \n>>> other postgresql.conf values as well. Nothing seemed to help \n>>> significantly, but maybe I missed something that would help \n>>> specifically for this query type?\n>>> - I tried to create a hash index, but gave up after more than 4 \n>>> hours of waiting for it to finish indexing\n>>>\n>>> ----Table stats----\n>>> - 15 million rows; I'm expecting to have four or five times this \n>>> number eventually.\n>>> - 1.5 gigs of hard drive usage\n>>>\n>>> ----My development environment---\n>>> - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm \n>>> hard drive\n>>> - OS X 10.5.2\n>>> - Postgres 8.3 (installed via MacPorts)\n>>>\n>>> ----My table----\n>>>\n>>> CREATE TABLE gene_prediction_view\n>>> (\n>>> id serial NOT NULL,\n>>> gene_ref integer NOT NULL,\n>>> go_id integer NOT NULL,\n>>> go_description character varying(200) NOT NULL,\n>>> go_category character varying(50) NOT NULL,\n>>> function_verified_exactly boolean NOT NULL,\n>>> function_verified_with_parent_go boolean NOT NULL,\n>>> function_verified_with_child_go boolean NOT NULL,\n>>> score numeric(10,2) NOT NULL,\n>>> precision_score numeric(10,2) NOT NULL,\n>>> CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),\n>>> CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)\n>>> REFERENCES sgd_annotations (id) MATCH SIMPLE\n>>> ON UPDATE NO ACTION ON DELETE NO ACTION,\n>>> CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)\n>>> REFERENCES go_terms (term) MATCH SIMPLE\n>>> ON UPDATE NO ACTION ON DELETE NO ACTION,\n>>> CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)\n>>> )\n>>> WITH (OIDS=FALSE);\n>>> ALTER TABLE gene_prediction_view OWNER TO postgres;\n>>>\n>>> CREATE INDEX ix_gene_prediction_view_gene_ref\n>>> ON gene_prediction_view\n>>> USING btree\n>>> (gene_ref);\n>>>\n>>>\n>>>\n>>>\n>>\n> \n", "msg_date": "Thu, 10 Apr 2008 09:13:39 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "\n>> Perfect - thanks Arjen. Using your value of 200 decreased the time to \n>> 15 seconds, and using a value of 800 makes it almost instantaneous. I'm \n>> really not concerned about space usage; if having more statistics \n>> increases performance this much, maybe I'll just default it to 1000?\n>> Strangely, the steps taken in the explain analyze are all the same. 
\n>> The only differences are the predicted costs (and execution times).\n>> explain analyze for a statistics of 200:\n\n\n\tActually, since you got the exact same plans and the second one is a lot \nfaster, this can mean that the data is in the disk cache, or that the \nsecond query has all the rows it needs contiguous on disk whereas the \nfirst one has its rows all over the place. Therefore you are IO-bound. \nStatistics helped, perhaps (impossible to know since you don't provide the \nplan wit statistics set to 10), but your main problem is IO.\n\tUsually setting the statistics to 100 is enough...\n\n\tNow, here are some solutions to your problem in random order :\n\n\t- Install 64 bit Linux, 64 bit Postgres, and get lots of RAM, lol.\n\t- Switch to a RAID10 (4 times the IOs per second, however zero gain if \nyou're single-threaded, but massive gain when concurrent)\n\n\t- If you just need a count by gene_ref, a simple solution is to keep it \nin a separate table and update it via triggers, this is a frequently used \nsolution, it works well unless gene_ref is updated all the time (which is \nprobably not your case). Since you will be vacuuming this count-cache \ntable often, don't put the count as a field in your sgd_annotations table, \njust create a small table with 2 fields, gene_ref and count (unless you \nwant to use the count for other things and you don't like the join).\n\n\tFrom your table definition gene_ref references another table. It would \nseem that you have many rows in gene_prediction_view with the same \ngene_ref value.\n\n\t- If you often query rows with the same gene_ref, consider using CLUSTER \nto physically group those rows on disk. This way you can get all rows with \nthe same gene_ref in 1 seek instead of 2000. Clustered tables also make \nBitmap scan happy.\n\tThis one is good since it can also speed up other queries (not just the \ncount).\n\tYou could also cluster on (gene_ref,go_id) perhaps, I don't know what \nyour columns mean. Only you can decide that because clustering order has \nto be meaningful (to group rows according to something that makes sense \nand not at random).\n\n\t* Lose some weight :\n\nCREATE INDEX ix_gene_prediction_view_gene_ref\n ON gene_prediction_view\n USING btree\n (gene_ref);\n\n\t- This index is useless since you have an UNIQUE on (gene_ref, go_id) \nwhich is also an index.\n\tRemove the index on (gene_ref), it will leave space in the disk cache for \nother things.\n\n\t- Since (gene_ref, go_id) is UNIQUE NOT NULL, you might be able to use \nthat as your primary key, but only if it is never updated of course. Saves \nanother index.\n\n\t- If you often do queries that fetch many rows, but seldom fetch the \ndescription, tell PG to always store the description in offline compressed \nform (read the docs on ALTER TABLE ... SET STORAGE ..., I forgot the \nsyntax). Point being to make the main table smaller.\n\n\t- Also I see a category as VARCHAR. If you have a million different \ncategories, that's OK, but if you have 100 categories for your 15M rows, \nput them in a separate table and replace that by a category_id (normalize \n!)\n\n", "msg_date": "Thu, 10 Apr 2008 10:25:48 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "On Thu, 10 Apr 2008, PFC wrote:\n\n... Lots of useful advice ...\n\n> \t- If you often query rows with the same gene_ref, consider using \n> CLUSTER to physically group those rows on disk. 
This way you can get all rows \n> with the same gene_ref in 1 seek instead of 2000. Clustered tables also make \n> Bitmap scan happy.\n\nIn my opinion this is the one that will make the most difference. You will \nneed to run:\n\nCLUSTER gene_prediction_view USING gene_prediction_view_gene_ref_key;\n\nafter you insert significant amounts of data into the table. This \nre-orders the table according to the index, but new data is always written \nout of order, so after adding lots more data the table will need to be \nre-clustered again.\n\n> - Switch to a RAID10 (4 times the IOs per second, however zero gain if \n> you're single-threaded, but massive gain when concurrent)\n\nGreg Stark has a patch in the pipeline that will change this, for bitmap \nindex scans, by using fadvise(), so a single thread can utilise multiple \ndiscs in a RAID array.\n\nMatthew\n\n-- \nProlog doesn't have enough parentheses. -- Computer Science Lecturer\n", "msg_date": "Thu, 10 Apr 2008 10:51:13 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "Thanks a lot, all of you - this is excellent advice. With the data \nclustered and statistics at a more reasonable value of 100, it now \nreproducibly takes even less time - 20-57 ms per query.\n\nAfter reading the section on \"Statistics Used By the Planner\" in the \nmanual, I was a little concerned that, while the statistics sped up the \nqueries that I tried immeasurably, that the most_common_vals array was \nwhere the speedup was happening, and that the values which wouldn't fit \nin this array wouldn't be sped up. Though I couldn't offhand find an \nexample where this occurred, the clustering approach seems intuitively \nlike a much more complete and scalable solution, at least for a \nread-only table like this.\n\nAs to whether the entire index/table was getting into ram between my \nstatistics calls, I don't think this was the case. Here's the behavior \nthat I found:\n- With statistics at 10, the query took 25 (or so) seconds no matter how \nmany times I tried different values. The query plan was the same as for \nthe 200 and 800 statistics below.\n- Trying the same constant a second time gave an instantaneous result, \nI'm guessing because of query/result caching.\n- Immediately on increasing the statistics to 200, the query took a \nreproducibly less amount of time. I tried about 10 different values\n- Immediately on increasing the statistics to 800, the query \nreproducibly took less than a second every time. I tried about 30 \ndifferent values.\n- Decreasing the statistics to 100 and running the cluster command \nbrought it to 57 ms per query.\n- The Activity Monitor (OSX) lists the relevant postgres process as \ntaking a little less than 500 megs.\n- I didn't try decreasing the statistics back to 10 before I ran the \ncluster command, so I can't show the search times going up because of \nthat. But I tried killing the 500 meg process. The new process uses less \nthan 5 megs of ram, and still reproducibly returns a result in less than \n60 ms. Again, this is with a statistics value of 100 and the data \nclustered by gene_prediction_view_gene_ref_key.\n\nAnd I'll consider the idea of using triggers with an ancillary table for \nother purposes; seems like it could be a useful solution for something.\n\nMatthew wrote:\n> On Thu, 10 Apr 2008, PFC wrote:\n>\n> ... 
Lots of useful advice ...\n>\n>> - If you often query rows with the same gene_ref, consider using \n>> CLUSTER to physically group those rows on disk. This way you can get \n>> all rows with the same gene_ref in 1 seek instead of 2000. Clustered \n>> tables also make Bitmap scan happy.\n>\n> In my opinion this is the one that will make the most difference. You \n> will need to run:\n>\n> CLUSTER gene_prediction_view USING gene_prediction_view_gene_ref_key;\n>\n> after you insert significant amounts of data into the table. This \n> re-orders the table according to the index, but new data is always \n> written out of order, so after adding lots more data the table will \n> need to be re-clustered again.\n>\n>> - Switch to a RAID10 (4 times the IOs per second, however zero gain \n>> if you're single-threaded, but massive gain when concurrent)\n>\n> Greg Stark has a patch in the pipeline that will change this, for \n> bitmap index scans, by using fadvise(), so a single thread can utilise \n> multiple discs in a RAID array.\n>\n> Matthew\n>\n", "msg_date": "Thu, 10 Apr 2008 10:44:59 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "\nOn Apr 10, 2008, at 9:44 AM, John Beaver wrote:\n> Thanks a lot, all of you - this is excellent advice. With the data \n> clustered and statistics at a more reasonable value of 100, it now \n> reproducibly takes even less time - 20-57 ms per query.\n>\n> After reading the section on \"Statistics Used By the Planner\" in the \n> manual, I was a little concerned that, while the statistics sped up \n> the queries that I tried immeasurably, that the most_common_vals \n> array was where the speedup was happening, and that the values which \n> wouldn't fit in this array wouldn't be sped up. Though I couldn't \n> offhand find an example where this occurred, the clustering approach \n> seems intuitively like a much more complete and scalable solution, \n> at least for a read-only table like this.\n>\n> As to whether the entire index/table was getting into ram between my \n> statistics calls, I don't think this was the case. Here's the \n> behavior that I found:\n> - With statistics at 10, the query took 25 (or so) seconds no matter \n> how many times I tried different values. The query plan was the same \n> as for the 200 and 800 statistics below.\n> - Trying the same constant a second time gave an instantaneous \n> result, I'm guessing because of query/result caching.\n> - Immediately on increasing the statistics to 200, the query took a \n> reproducibly less amount of time. I tried about 10 different values\n> - Immediately on increasing the statistics to 800, the query \n> reproducibly took less than a second every time. I tried about 30 \n> different values.\n> - Decreasing the statistics to 100 and running the cluster command \n> brought it to 57 ms per query.\n> - The Activity Monitor (OSX) lists the relevant postgres process as \n> taking a little less than 500 megs.\n> - I didn't try decreasing the statistics back to 10 before I ran the \n> cluster command, so I can't show the search times going up because \n> of that. But I tried killing the 500 meg process. The new process \n> uses less than 5 megs of ram, and still reproducibly returns a \n> result in less than 60 ms. 
Again, this is with a statistics value of \n> 100 and the data clustered by gene_prediction_view_gene_ref_key.\n>\n> And I'll consider the idea of using triggers with an ancillary table \n> for other purposes; seems like it could be a useful solution for \n> something.\n\nFWIW, killing the backend process responsible for the query won't \nnecessarily clear the table's data from memory as that will be in the \nshared_buffers. If you really want to flush the data from memory you \nneed to read in data from other tables of a size total size greater \nthan your shared_buffers setting.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Thu, 10 Apr 2008 10:02:48 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "John Beaver wrote:\n\n> - Trying the same constant a second time gave an instantaneous result,\n> I'm guessing because of query/result caching.\n\nAFAIK no query/result caching is in place in postgres, what you are experiencing\nis OS disk/memory caching.\n\n\nRegards\nGaetano Mendola\n", "msg_date": "Thu, 10 Apr 2008 18:18:15 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "\n\n\n\n\n\nThanks Eric and Gaestano - interesting, and both examples of my\nnaivite. :)\n\nI tried running large select(*) queries on other tables followed by\nanother try at the offending query, and it was still fast. Just to be\nabsolutely sure this is a scalable solution, I'll try restarting my\ncomputer in a few hours to see if it affects anything cache-wise.\n\n\nGaetano Mendola wrote:\n\nJohn Beaver wrote:\n\n \n\n- Trying the same constant a second time gave an instantaneous result,\nI'm guessing because of query/result caching.\n \n\n\nAFAIK no query/result caching is in place in postgres, what you are experiencing\nis OS disk/memory caching.\n\n\nRegards\nGaetano Mendola\n\n\n \n\n\n\n", "msg_date": "Thu, 10 Apr 2008 12:37:45 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" }, { "msg_contents": "In response to John Beaver <[email protected]>:\n\n> Thanks Eric and Gaestano - interesting, and both examples of my naivite. :)\n> \n> I tried running large select(*) queries on other tables followed by another try at the offending query, and it was still fast. Just to be absolutely sure this is a scalable solution, I'll try restarting my computer in a few hours to see if it affects anything cache-wise.\n\nI say this over and over again ... because I think it's really cool and\nuseful.\n\nIf you install the pg_buffercache addon, you can actually look into\nPostgreSQL's internals and see what tables are in the buffer in real\ntime. 
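For example, a query along these lines (a rough sketch based on the sample
in the contrib docs -- adjust to taste) gives a per-relation count of
cached buffers for the current database:

  SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = c.relfilenode
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
   GROUP BY c.relname
   ORDER BY 2 DESC
   LIMIT 10;
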
If you're having trouble, it can (potentially) be a helpful\ntool.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 10 Apr 2008 13:08:54 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "On Thu, 10 Apr 2008, Bill Moran wrote:\n\n> If you install the pg_buffercache addon, you can actually look into\n> PostgreSQL's internals and see what tables are in the buffer in real\n> time.\n\nThe \"Inside the PostgreSQL Buffer Cache\" talk I did at the recent East \nconference is now on-line at \nhttp://www.westnet.com/~gsmith/content/postgresql/\n\nThe slides explain how that information gets updated and used internally, \nand the separate \"sample queries\" file there shows some more complicated \nviews I've written against pg_buffercache. Here's a sample one:\n\nrelname |buffered| buffers % | % of rel\naccounts | 306 MB | 65.3 | 24.7\naccounts_pkey | 160 MB | 34.1 | 93.2\n\nThis shows that 65.3% of the buffer cache is filled with the accounts \ntable, which is caching 24.7% of the full table. These are labeled \n\"relations\" because there's a mix of table and index data there. \naccounts_pkey is an index for example, which is why almost all of it is \nstaying inside the buffer cache.\n\nThe queries that use usage_count only work against 8.3, that one above \nshould work on older versions as well.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 10 Apr 2008 14:47:59 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using\n indexes" }, { "msg_contents": "\n> Thanks a lot, all of you - this is excellent advice. With the data \n> clustered and statistics at a more reasonable value of 100, it now \n> reproducibly takes even less time - 20-57 ms per query.\n\n\t1000x speedup with proper tuning - always impressive, lol.\n\tIO seeks are always your worst enemy.\n\n> After reading the section on \"Statistics Used By the Planner\" in the \n> manual, I was a little concerned that, while the statistics sped up the \n> queries that I tried immeasurably, that the most_common_vals array was \n> where the speedup was happening, and that the values which wouldn't fit \n> in this array wouldn't be sped up. Though I couldn't offhand find an \n> example where this occurred, the clustering approach seems intuitively \n> like a much more complete and scalable solution, at least for a \n> read-only table like this.\n\n\tActually, with statistics set to 100, then 100 values will be stored in \nmost_common_vals. This would mean that the values not in most_common_vals \nwill have less than 1% frequency, and probably much less than that. The \nchoice of plan for these rare values is pretty simple.\n\n\tWith two columns, \"interesting\" stuff can happen, like if you have col1 \nin [1...10] and col2 in [1...10] and use a condition on col1=const and \ncol2=const, the selectivity of the result depends not only on the \ndistribution of col1 and col2 but also their correlation.\n\n\tAs for the tests you did, it's hard to say without seeing the explain \nanalyze outputs. 
If you change the stats and the plan choice (EXPLAIN) \nstays the same, and you use the same values in your query, any difference \nin timing comes from caching, since postgres is executing the same plan \nand therefore doing the exact same thing. Caching (from PG and from the \nOS) can make the timings vary a lot.\n\n> - Trying the same constant a second time gave an instantaneous result, \n> I'm guessing because of query/result caching.\n\n\tPG does not cache queries or results. It caches data & index pages in its \nshared buffers, and then the OS adds another layer of the usual disk cache.\n\tA simple query like selecting one row based on PK takes about 60 \nmicroseconds of CPU time, but if it needs one seek for the index and one \nfor the data it may take 20 ms waiting for the moving parts to move... \nHence, CLUSTER is a very useful tool.\n\n\tBitmap index scans love clustered tables because all the interesting rows \nend up being grouped together, so much less pages need to be visited.\n\n> - I didn't try decreasing the statistics back to 10 before I ran the \n> cluster command, so I can't show the search times going up because of \n> that. But I tried killing the 500 meg process. The new process uses less \n> than 5 megs of ram, and still reproducibly returns a result in less than \n> 60 ms. Again, this is with a statistics value of 100 and the data \n> clustered by gene_prediction_view_gene_ref_key.\n\n\tKilling it or just restarting postgres ?\n\tIf you let postgres run (not idle) for a while, naturally it will fill \nthe RAM up to the shared_buffers setting that you specified in the \nconfiguration file. This is good, since grabbing data from postgres' own \ncache is faster than having to make a syscall to the OS to get it from the \nOS disk cache (or disk). This isn't bloat.\n\tBut what those 500 MB versus 6 MB show is that before, postgres had to \nread a lot of data for your query, so it stayed in the cache ; after \ntuning it needs to read much less data (thanks to CLUSTER) so the cache \nstays empty.\n\n", "msg_date": "Thu, 10 Apr 2008 23:37:50 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large tables and simple \"= constant\" queries using indexes" } ]
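For reference, the tuning sequence that worked in this thread boils down to
something like the following -- a sketch only, reusing the table, column and
index names quoted above; the statistics target of 100 is simply the value
the poster settled on, not a universal recommendation:

  ALTER TABLE gene_prediction_view
    ALTER COLUMN gene_ref SET STATISTICS 100;
  ANALYZE gene_prediction_view;
  CLUSTER gene_prediction_view
    USING gene_prediction_view_gene_ref_key;
  ANALYZE gene_prediction_view;

CLUSTER only reorders the rows present when it runs, so it has to be repeated
after large bulk loads, and the trailing ANALYZE keeps the planner's
statistics in step with the new physical ordering.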
[ { "msg_contents": "I'm hitting an unexpected problem with postgres 8.3 - I have some\ntables which use varchar(32) for their unique IDs which I'm attempting\nto join using some simple SQL:\n\nselect *\nfrom group_access, groups\nwhere group_access.groupid = groups.groupid and\n group_access.uid = '7275359408f44591d0717e16890ce335';\n\nthere's a unique index on group_access.groupid, and a non-unique index\non groups.groupid. both are non-null.\n\nthe problem is: if groupid (in both tables) is varchar, I cannot force\npostgres (no matter how hard I try) to do an index scan. it ends up\nreading the entire groups table (pretty large!):\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=8.89..41329.88 rows=119940 width=287) (actual\ntime=0.202..935.136 rows=981 loops=1)\n Hash Cond: ((groups.groupid)::text = (group_access.groupid)::text)\n -> Seq Scan on groups (cost=0.00..31696.48 rows=1123348\nwidth=177) (actual time=0.011..446.091 rows=1125239 loops=1)\n -> Hash (cost=8.51..8.51 rows=30 width=110) (actual\ntime=0.148..0.148 rows=30 loops=1)\n -> Seq Scan on group_access (cost=0.00..8.51 rows=30\nwidth=110) (actual time=0.014..0.126 rows=30 loops=1)\n Filter: ((uid)::text = '7275359408f44591d0717e16890ce335'::text)\n Total runtime: 935.443 ms\n(7 rows)\n\nif I disable seq_scan, I get this:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1.47..106189.61 rows=120004 width=287) (actual\ntime=0.100..1532.353 rows=981 loops=1)\n Merge Cond: ((group_access.groupid)::text = (groups.groupid)::text)\n -> Index Scan using group_access_pkey on group_access\n(cost=0.00..43.91 rows=30 width=110) (actual time=0.044..0.148 rows=30\nloops=1)\n Index Cond: ((uid)::text = '7275359408f44591d0717e16890ce335'::text)\n -> Index Scan using groups_1_idx on groups (cost=0.00..102135.71\nrows=1123952 width=177) (actual time=0.031..856.555 rows=1125827\nloops=1)\n Total runtime: 1532.880 ms\n(6 rows)\n\nit's running an index scan across the entire table (no condition applied) :-(\n\nso, just for the hell of it, I tried making groupid a char(32),\ndespite repeated assertions in this group that there's no performance\ndifference between the two:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=4.48..253.85 rows=304 width=291) (actual\ntime=0.715..22.906 rows=984 loops=1)\n -> Bitmap Heap Scan on group_access (cost=4.48..9.86 rows=30\nwidth=111) (actual time=0.372..0.570 rows=30 loops=1)\n Recheck Cond: (uid = '7275359408f44591d0717e16890ce335'::bpchar)\n -> Bitmap Index Scan on group_access_uid_key\n(cost=0.00..4.48 rows=30 width=0) (actual time=0.331..0.331 rows=30\nloops=1)\n Index Cond: (uid = '7275359408f44591d0717e16890ce335'::bpchar)\n -> Index Scan using groups_1_idx on groups (cost=0.00..7.96\nrows=14 width=180) (actual time=0.176..0.396 rows=33 loops=30)\n Index Cond: (groups.groupid = group_access.groupid)\n Total runtime: 26.837 ms\n(8 rows)\n\n(this last plan is actually against a smaller test DB, but I get the\nsame behavior with it, seq scan for varchar or index scan for char,\nand the results returned are identical for this query)\n\nthe databases are UTF-8, if that makes a difference...\n", "msg_date": "Wed, 9 Apr 2008 21:13:23 -0600", "msg_from": 
"\"Adam Gundy\" <[email protected]>", "msg_from_op": true, "msg_subject": "varchar index joins not working?" }, { "msg_contents": "Adam Gundy wrote:\n> I'm hitting an unexpected problem with postgres 8.3 - I have some\n> tables which use varchar(32) for their unique IDs which I'm attempting\n> to join using some simple SQL:\n> \n> select *\n> from group_access, groups\n> where group_access.groupid = groups.groupid and\n> group_access.uid = '7275359408f44591d0717e16890ce335';\n> \n> there's a unique index on group_access.groupid, and a non-unique index\n> on groups.groupid. both are non-null.\n\nWhat about group_access.uid - I'd have thought that + groups pkey is \nprobably the sensible combination here.\n\n> the problem is: if groupid (in both tables) is varchar, I cannot force\n> postgres (no matter how hard I try) to do an index scan. it ends up\n> reading the entire groups table (pretty large!):\n\nOK\n\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=8.89..41329.88 rows=119940 width=287) (actual\n> time=0.202..935.136 rows=981 loops=1)\n\nThat's because it's expecting 119,940 rows to match (rather than the \nactual 981 you do get). If you were getting that many results this is \nprobably a sensible plan.\n\n> Hash Cond: ((groups.groupid)::text = (group_access.groupid)::text)\n> -> Seq Scan on groups (cost=0.00..31696.48 rows=1123348\n> width=177) (actual time=0.011..446.091 rows=1125239 loops=1)\n\nIt's got a good idea of the total number of rows in groups.\n\n> -> Hash (cost=8.51..8.51 rows=30 width=110) (actual\n> time=0.148..0.148 rows=30 loops=1)\n> -> Seq Scan on group_access (cost=0.00..8.51 rows=30\n> width=110) (actual time=0.014..0.126 rows=30 loops=1)\n\nAnd also group_access. Oh, the seq-scan doesn't really matter here. It \nprobably *is* faster to read all 30 rows in one burst rather than go to \nthe index and then back to the table.\n\n> Filter: ((uid)::text = '7275359408f44591d0717e16890ce335'::text)\n> Total runtime: 935.443 ms\n> (7 rows)\n> \n> if I disable seq_scan, I get this:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=1.47..106189.61 rows=120004 width=287) (actual\n> time=0.100..1532.353 rows=981 loops=1)\n\nIt's still thinking it's going to get 120 thousand rows.\n\n> it's running an index scan across the entire table (no condition applied) :-(\n> \n> so, just for the hell of it, I tried making groupid a char(32),\n> despite repeated assertions in this group that there's no performance\n> difference between the two:\n\nThere's no performance difference between the two.\n\n> Nested Loop (cost=4.48..253.85 rows=304 width=291) (actual\n> time=0.715..22.906 rows=984 loops=1)\n\n> (this last plan is actually against a smaller test DB, but I get the\n> same behavior with it, seq scan for varchar or index scan for char,\n> and the results returned are identical for this query)\n\nThe char(32) thing isn't important here, what is important is that it's \nexpecting ~300 rows rather than 120,000. It's still wrong, but it's \nclose enough to make sense.\n\nSo - the question is - why is PG expecting so many matches to your join. 
\nHow many distinct values do you have in groups.groupid and \ngroup_access.group_id?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 10 Apr 2008 09:46:09 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Richard Huxton wrote:\n> Adam Gundy wrote:\n>> I'm hitting an unexpected problem with postgres 8.3 - I have some\n>> tables which use varchar(32) for their unique IDs which I'm attempting\n>> to join using some simple SQL:\n>>\n>> select *\n>> from group_access, groups\n>> where group_access.groupid = groups.groupid and\n>> group_access.uid = '7275359408f44591d0717e16890ce335';\n>>\n>> there's a unique index on group_access.groupid, and a non-unique index\n>> on groups.groupid. both are non-null.\n> \n> What about group_access.uid - I'd have thought that + groups pkey is \n> probably the sensible combination here.\n\nthat is an index on group_access:\n\n\"group_access_pkey\" PRIMARY KEY, btree (groupid, uid)\n\nadding the (uid, groupid) index helps the small database, it will do an \nindex join if forced to, but the full database still refuses to do an \nindex join - it does a full index scan followed by a merge.\n\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------- \n>>\n>> Hash Join (cost=8.89..41329.88 rows=119940 width=287) (actual\n>> time=0.202..935.136 rows=981 loops=1)\n> \n> That's because it's expecting 119,940 rows to match (rather than the \n> actual 981 you do get). If you were getting that many results this is \n> probably a sensible plan.\n\nsure. but it's estimate is *wildly* off\n\n>> Hash Cond: ((groups.groupid)::text = (group_access.groupid)::text)\n>> -> Seq Scan on groups (cost=0.00..31696.48 rows=1123348\n>> width=177) (actual time=0.011..446.091 rows=1125239 loops=1)\n> \n> It's got a good idea of the total number of rows in groups.\n\nyeah.\n\n>> -> Hash (cost=8.51..8.51 rows=30 width=110) (actual\n>> time=0.148..0.148 rows=30 loops=1)\n>> -> Seq Scan on group_access (cost=0.00..8.51 rows=30\n>> width=110) (actual time=0.014..0.126 rows=30 loops=1)\n> \n> And also group_access. Oh, the seq-scan doesn't really matter here. It \n> probably *is* faster to read all 30 rows in one burst rather than go to \n> the index and then back to the table.\n\nagreed.\n\n>> it's running an index scan across the entire table (no condition \n>> applied) :-(\n>>\n>> so, just for the hell of it, I tried making groupid a char(32),\n>> despite repeated assertions in this group that there's no performance\n>> difference between the two:\n> \n> There's no performance difference between the two.\n\nhah. if it makes the join with char (and runs fast), or reads the whole \ntable with varchar, then there *is* a performance difference - a big one!\n\n> The char(32) thing isn't important here, what is important is that it's \n> expecting ~300 rows rather than 120,000. It's still wrong, but it's \n> close enough to make sense.\n\n> So - the question is - why is PG expecting so many matches to your join. \n\nmore to the point, why does it get the estimate right (or close) with \nchar, but massively wrong with varchar? I've been vacuum analyzing after \neach change..\n\nwith the smaller database, and char type, it (for certain joins) still \nwants to do a seqscan because the tables are small enough, but if I \ndisable seqscan, it does an index join (usually with a small time \npenalty). 
if I switch the types back to varchar, re-analyze, re-run, it \n*will not* do an index join!\n\n> How many distinct values do you have in groups.groupid and \n> group_access.group_id?\n\nfor the small database (since it shows the same problem):\n\ngroup_access: 280/268\ngroups: 2006/139\n\nfor the large database:\n\ngroup_access: same\ngroups: 1712647/140\n\nthe groupid key is an MD5 hash, so it should be uniformly distributed. \nmaybe that throws the stats? but, again, char works, varchar doesn't :-(", "msg_date": "Thu, 10 Apr 2008 08:52:31 -0600", "msg_from": "Adam Gundy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "On Thu, Apr 10, 2008 at 8:52 AM, Adam Gundy <[email protected]> wrote:\n> Richard Huxton wrote:\n> > How many distinct values do you have in groups.groupid and\n> group_access.group_id?\n> >\n>\n> for the small database (since it shows the same problem):\n>\n> group_access: 280/268\n> groups: 2006/139\n>\n> for the large database:\n>\n> group_access: same\n> groups: 1712647/140\n>\n> the groupid key is an MD5 hash, so it should be uniformly distributed.\n> maybe that throws the stats? but, again, char works, varchar doesn't :-(\n\nOK, I'm thinking the varchar/char part is not the issue.\n\nthe database is very unbalanced, most of the groups are 1000 or less\nrecords, with one group occupying 95% of the records.\n\nI *think* that when I analyze using char instead of varchar, it is\nrecording a stat for the large group, but for some reason with varchar\ndoesn't add a stat for that one.\n\nso, the real question is, how do I fix this? I can turn the stats way\nup to 1000, but that doesn't guarantee that I'll get a stat for the\nlarge group :-(\n\ncan I turn the statistics off completely for this column? I'm guessing\nthat if I can, that will mean it takes a guess based on the number of\ndistinct values in the groups table, which is still large number of\nrecords, possibly enough to trigger the seqscan anyway.\n\ndoes postgres have a way of building a 'counted index' that the\nplanner can use for it's record counts? some way of forcibly\nmaintaining a stat for every group?\n\nthe groups are not related to one another - is it possible to\npartition them into their own indexes somehow?\n\nahh. lots of questions, no (obvious to me) answers from googling around.\n", "msg_date": "Thu, 10 Apr 2008 12:54:25 -0600", "msg_from": "\"Adam Gundy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Adam Gundy wrote:\n> On Thu, Apr 10, 2008 at 8:52 AM, Adam Gundy <[email protected]> wrote:\n>> Richard Huxton wrote:\n>>> How many distinct values do you have in groups.groupid and\n>> group_access.group_id?\n>> for the small database (since it shows the same problem):\n>>\n>> group_access: 280/268\n>> groups: 2006/139\n>>\n>> for the large database:\n>>\n>> group_access: same\n>> groups: 1712647/140\n>>\n>> the groupid key is an MD5 hash, so it should be uniformly distributed.\n>> maybe that throws the stats? 
but, again, char works, varchar doesn't :-(\n> \n> OK, I'm thinking the varchar/char part is not the issue.\n\nGood, because it's not :-)\n\n> the database is very unbalanced, most of the groups are 1000 or less\n> records, with one group occupying 95% of the records.\n\nI was wondering - that's why I asked for the stats.\n\n> I *think* that when I analyze using char instead of varchar, it is\n> recording a stat for the large group, but for some reason with varchar\n> doesn't add a stat for that one.\n> \n> so, the real question is, how do I fix this? I can turn the stats way\n> up to 1000, but that doesn't guarantee that I'll get a stat for the\n> large group :-(\n\nWell, by default it will be tracking the 10 most common values (and how \noften they occur). As you say, this can be increased to 1000 (although \nit obviously takes longer to check 1000 rather than 10).\n\nWe can have a look at the stats with something like:\nSELECT * FROM pg_stats WHERE tablename='group_access' AND attname='uid';\nYou'll be interested in n_distinct, most_common_vals and most_common_freqs.\n\nHowever, I think the problem may be that PG doesn't track cross-column \nstats, so it doesn't know that a particular uid implies one or more \nparticular groupid values.\n\n> can I turn the statistics off completely for this column? I'm guessing\n> that if I can, that will mean it takes a guess based on the number of\n> distinct values in the groups table, which is still large number of\n> records, possibly enough to trigger the seqscan anyway.\n\nNo - can't disable stats. Besides, you want it the other way around - \nindex scans for all groups except the largest.\n\n> does postgres have a way of building a 'counted index' that the\n> planner can use for it's record counts? some way of forcibly\n> maintaining a stat for every group?\n\nNo, but let's see what's in pg_stats.\n\n> the groups are not related to one another - is it possible to\n> partition them into their own indexes somehow?\n\nYes, but it will depend on having an explicit group_id=... clause in the \nquery as well as on the index. That's not going to help you here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Apr 2008 08:14:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Richard Huxton wrote:\n> Adam Gundy wrote:\n>> On Thu, Apr 10, 2008 at 8:52 AM, Adam Gundy <[email protected]> wrote:\n>>> Richard Huxton wrote:\n>>>> How many distinct values do you have in groups.groupid and\n>>> group_access.group_id?\n>>> for the small database (since it shows the same problem):\n>>>\n>>> group_access: 280/268\n>>> groups: 2006/139\n>>>\n>>> for the large database:\n>>>\n>>> group_access: same\n>>> groups: 1712647/140\n>>>\n>>> the groupid key is an MD5 hash, so it should be uniformly distributed.\n>>> maybe that throws the stats? but, again, char works, varchar doesn't :-(\n>>\n>> OK, I'm thinking the varchar/char part is not the issue.\n> \n> Good, because it's not :-)\n\nhmm. unfortunately it did turn out to be (part) of the issue. I've \ndiscovered that mixing char and varchar in a stored procedure does not \ncoerce the types, and ends up doing seq scans all the time.\n\nchanging something like this:\n\nproc x ( y char(32) )\n{\n select * from groups where groupid = y\n}\n\ninto this:\n\nproc x ( y varchar(32) )\n{\n select * from groups where groupid = y\n}\n\nand suddenly postgres does index lookups in the stored proc... 
way faster.\n\n>> I *think* that when I analyze using char instead of varchar, it is\n>> recording a stat for the large group, but for some reason with varchar\n>> doesn't add a stat for that one.\n>>\n>> so, the real question is, how do I fix this? I can turn the stats way\n>> up to 1000, but that doesn't guarantee that I'll get a stat for the\n>> large group :-(\n> \n> Well, by default it will be tracking the 10 most common values (and how \n> often they occur). As you say, this can be increased to 1000 (although \n> it obviously takes longer to check 1000 rather than 10).\n> \n> We can have a look at the stats with something like:\n> SELECT * FROM pg_stats WHERE tablename='group_access' AND attname='uid';\n> You'll be interested in n_distinct, most_common_vals and most_common_freqs.\n> \n> However, I think the problem may be that PG doesn't track cross-column \n> stats, so it doesn't know that a particular uid implies one or more \n> particular groupid values.\n\nI doubt we could get stats stable enough for this. the number of groups \nwill hopefully be much larger at some point.\n\nit's a shame the index entries can't be used to provide information to \nthe planner, eg a rough count of the number of entries for a given key \n(or subset). it would be nice to be able to create eg a counted btree \nwhen you know you have this kind of data as a hint to the planner.\n\n>> can I turn the statistics off completely for this column? I'm guessing\n>> that if I can, that will mean it takes a guess based on the number of\n>> distinct values in the groups table, which is still large number of\n>> records, possibly enough to trigger the seqscan anyway.\n> \n> No - can't disable stats. Besides, you want it the other way around - \n> index scans for all groups except the largest.\n\nactually, disabling seqscan at the server level gives extremely good \nresponse times. I ended up rewriting a few queries that were scanning \nthe whole group for no good reason, and bitmap index hashing seems to \ntake care of things nicely.\n\nqueries have gone from 30+ seconds to < 0.1 seconds.\n\n>> does postgres have a way of building a 'counted index' that the\n>> planner can use for it's record counts? some way of forcibly\n>> maintaining a stat for every group?\n> \n> No, but let's see what's in pg_stats.\n\nno real help there. either it hits the group being read, and does a good \nplan, or it doesn't, and tries to seqscan (unless I disable it). even \nforcing stats to 1000 only bandaids the situation, given the number of \ngroups will eventually exceed that..", "msg_date": "Mon, 14 Apr 2008 11:02:25 -0600", "msg_from": "Adam Gundy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Adam Gundy <[email protected]> writes:\n> hmm. unfortunately it did turn out to be (part) of the issue. 
I've \n> discovered that mixing char and varchar in a stored procedure does not \n> coerce the types, and ends up doing seq scans all the time.\n\nOh, it coerces the type all right, just not in the direction you'd like.\n\nregression=# create table v (f1 varchar(32) primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"v_pkey\" for table \"v\"\nCREATE TABLE\nregression=# explain select * from v where f1 = 'abc'::varchar;\n QUERY PLAN \n-----------------------------------------------------------------\n Index Scan using v_pkey on v (cost=0.00..8.27 rows=1 width=34)\n Index Cond: ((f1)::text = 'abc'::text)\n(2 rows)\n\nregression=# explain select * from v where f1 = 'abc'::char(3);\n QUERY PLAN \n---------------------------------------------------\n Seq Scan on v (cost=0.00..25.88 rows=1 width=34)\n Filter: ((f1)::bpchar = 'abc'::character(3))\n(2 rows)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Apr 2008 13:46:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working? " }, { "msg_contents": "Adam Gundy wrote:\n> I doubt we could get stats stable enough for this. the number of groups \n> will hopefully be much larger at some point.\n\nThe pg_stats table should be recording the n most-common values, so if \nyou have 1 million groups you track details of the 1000 most-common. \nThat gives you a maximum for how common any value not in the stats can be.\n\n>> No, but let's see what's in pg_stats.\n> \n> no real help there. either it hits the group being read, and does a good \n> plan, or it doesn't, and tries to seqscan (unless I disable it). even \n> forcing stats to 1000 only bandaids the situation, given the number of \n> groups will eventually exceed that..\n\nLike I say, that's not the point of gathering the stats. If one group \nrepresents 95% of your rows, then its group-id should be almost certain \nto occur in the stats. Are you saying that's not happening with your data?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Apr 2008 18:54:20 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Tom Lane wrote:\n> Adam Gundy <[email protected]> writes:\n>> hmm. unfortunately it did turn out to be (part) of the issue. I've \n>> discovered that mixing char and varchar in a stored procedure does not \n>> coerce the types, and ends up doing seq scans all the time.\n> \n> Oh, it coerces the type all right, just not in the direction you'd like.\n> \n> regression=# create table v (f1 varchar(32) primary key);\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"v_pkey\" for table \"v\"\n> CREATE TABLE\n> regression=# explain select * from v where f1 = 'abc'::varchar;\n> QUERY PLAN \n> -----------------------------------------------------------------\n> Index Scan using v_pkey on v (cost=0.00..8.27 rows=1 width=34)\n> Index Cond: ((f1)::text = 'abc'::text)\n> (2 rows)\n> \n> regression=# explain select * from v where f1 = 'abc'::char(3);\n> QUERY PLAN \n> ---------------------------------------------------\n> Seq Scan on v (cost=0.00..25.88 rows=1 width=34)\n> Filter: ((f1)::bpchar = 'abc'::character(3))\n> (2 rows)\n\nyeah. not terribly helpful.. 
you'd have to assume I'm not the only one \nthis has bitten..\n\nis there a reason it doesn't coerce to a type that's useful to the \nplanner (ie varchar in my case), or the planner doesn't accept any type \nof string as a valid match for index scan? I would think the benefits of \nbeing able to index scan always outweigh the cost of type conversion...\n\n\nhmm. I only saw this with stored procs, but it's obviously generic. I \nthink the reason I didn't see it with straight SQL or views is that it \nseems to work correctly with string constants.. coercing them to the \ncorrect type for the index scan. with a stored proc, all the constants \nare passed in as args, with char() type (until I fixed it, obviously!)", "msg_date": "Mon, 14 Apr 2008 15:13:13 -0600", "msg_from": "Adam Gundy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" }, { "msg_contents": "Adam Gundy <[email protected]> writes:\n> Tom Lane wrote:\n>> Oh, it coerces the type all right, just not in the direction you'd like.\n\n> is there a reason it doesn't coerce to a type that's useful to the \n> planner (ie varchar in my case),\n\nIn this case I think the choice is probably semantically correct:\nshouldn't a comparison of varchar (trailing space sensitive) and\nchar (trailing space INsensitive) follow trailing-space-insensitive\nsemantics?\n\nI wouldn't swear that the behavior is intentional ;-) as to going\nthat way rather than the other, but I'm disinclined to change it.\n\n> or the planner doesn't accept any type \n> of string as a valid match for index scan?\n\nCan't. This equality operator doesn't have the same notion of equality\nthat that index does.\n\nThe long and the short of it is that mixing char and varchar is\nhazardous.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Apr 2008 18:01:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working? " }, { "msg_contents": "Tom Lane wrote:\n> Adam Gundy <[email protected]> writes:\n>> Tom Lane wrote:\n>>> Oh, it coerces the type all right, just not in the direction you'd like.\n> \n>> is there a reason it doesn't coerce to a type that's useful to the \n>> planner (ie varchar in my case),\n> \n> In this case I think the choice is probably semantically correct:\n> shouldn't a comparison of varchar (trailing space sensitive) and\n> char (trailing space INsensitive) follow trailing-space-insensitive\n> semantics?\n> \n> I wouldn't swear that the behavior is intentional ;-) as to going\n> that way rather than the other, but I'm disinclined to change it.\n\nahh. I forgot about the trailing spaces. but you can always coerce a \nchar to a varchar safely, which would have fixed my issue. you can't \ncoerce the other way, as you say, because you'll lose the trailing spaces...\n\n\nalternatively, can the planner give warnings somehow? or suggestions? eg \nsome messages from 'explain analyze' like:\n\n 'I could make your query go much faster IF ...'\n\nor\n\n 'suggestion: create an index on ...'\n 'suggestion: convert this index to ...'\n\nor\n\n 'warning: I'd really like to use this index, BUT ...'\n\n>> or the planner doesn't accept any type \n>> of string as a valid match for index scan?\n> \n> Can't. 
This equality operator doesn't have the same notion of equality\n> that that index does.\n> \n> The long and the short of it is that mixing char and varchar is\n> hazardous.\n\nno kidding.", "msg_date": "Mon, 14 Apr 2008 16:16:41 -0600", "msg_from": "Adam Gundy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar index joins not working?" } ]
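A minimal sketch of the fix discussed above, assuming the groups table from
earlier in the thread (groupid being varchar(32)); the point is only that the
function parameter must be varchar (or text), not char, for the index on the
varchar column to be usable:

  CREATE FUNCTION lookup_group(varchar(32))
  RETURNS SETOF groups AS $$
    SELECT * FROM groups WHERE groupid = $1;
  $$ LANGUAGE sql STABLE;

Declaring the parameter as char(32) instead coerces the comparison to bpchar,
which is exactly the seq-scan case shown in the EXPLAIN output above; if a
char(32) value arrives from elsewhere, an explicit cast ($1::varchar) puts the
comparison back on the indexable side.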
[ { "msg_contents": "Hi all,\nspecifing as shared_buffers = 26800 in 8.2.x will this value accepted like\nin the 8.1.x series and then 26800*8192 bytes = 209 MB or 26800 bytes\n(not being specified the memory unit)?\n\nRegards\nGaetano Mendola\n", "msg_date": "Thu, 10 Apr 2008 14:39:28 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "shared_buffers in 8.2.x" }, { "msg_contents": "On Apr 10, 2008, at 7:39 AM, Gaetano Mendola wrote:\n> Hi all,\n> specifing as shared_buffers = 26800 in 8.2.x will this value \n> accepted like\n> in the 8.1.x series and then 26800*8192 bytes = 209 MB or 26800 bytes\n> (not being specified the memory unit)?\n\nWith no specified unit then it defaults to 8K.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Thu, 10 Apr 2008 17:33:22 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.2.x" } ]
[ { "msg_contents": "Hello,\n\nI am creating a large database of MD5 hash values. I am a relative\nnewb with PostgreSQL (or any database for that matter). The schema and\noperation will be quite simple -- only a few tables, probably no\nstored procedures -- but I may easily end up with several hundred\nmillion rows of hash values, possible even get into the billions. The\nhash values will be organized into logical sets, with a many-many\nrelationship. I have some questions before I set out on this endeavor,\nhowever, and would appreciate any and all feedback, including SWAGs,\nWAGs, and outright lies. :-) I am trying to batch up operations as\nmuch as possible, so I will largely be doing comparisons of whole\nsets, with bulk COPY importing. I hope to avoid single hash value\nlookup as much as possible.\n\n1. Which datatype should I use to represent the hash value? UUIDs are\nalso 16 bytes...\n2. Does it make sense to denormalize the hash set relationships?\n3. Should I index?\n4. What other data structure options would it make sense for me to choose?\n\nThanks in advance,\n\n\nJon\n", "msg_date": "Thu, 10 Apr 2008 10:48:59 -0400", "msg_from": "\"Jon Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Creating large database of MD5 hash values" }, { "msg_contents": "\n\n> 1. Which datatype should I use to represent the hash value? UUIDs are\n> also 16 bytes...\n\nmd5's are always 32 characters long so probably varchar(32).\n\n> 2. Does it make sense to denormalize the hash set relationships?\n\nThe general rule is normalize as much as possible then only denormalize \nwhen absolutely necessary.\n\n> 3. Should I index?\n\nWhat sort of queries are you going to be running?\n\n> 4. What other data structure options would it make sense for me to choose?\n\nWhat sort of other data will you be needing to store?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Fri, 11 Apr 2008 15:11:34 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "* Jon Stewart:\n\n> 1. Which datatype should I use to represent the hash value? UUIDs are\n> also 16 bytes...\n\nBYTEA is slower to load and a bit inconvenient to use from DBI, but\noccupies less space on disk than TEXT or VARCHAR in hex form (17 vs 33\nbytes with PostgreSQL 8.3).\n\n> 2. Does it make sense to denormalize the hash set relationships?\n\nThat depends entirely on your application.\n\n> 3. Should I index?\n\nDepends. B-tree is generally faster than Hash, even for randomly\ndistributed keys (like the output of a hash function).\n\n> 4. What other data structure options would it make sense for me to\n> choose?\n\nSee 2.\n\nIn general, hashing is bad because it destroy locality. But in some\ncases, there is no other choice.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 11 Apr 2008 16:05:00 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "Jon Stewart escribi�:\n> Hello,\n> \n> I am creating a large database of MD5 hash values. I am a relative\n> newb with PostgreSQL (or any database for that matter). 
The schema and\n> operation will be quite simple -- only a few tables, probably no\n> stored procedures -- but I may easily end up with several hundred\n> million rows of hash values, possible even get into the billions. The\n> hash values will be organized into logical sets, with a many-many\n> relationship. I have some questions before I set out on this endeavor,\n> however, and would appreciate any and all feedback, including SWAGs,\n> WAGs, and outright lies. :-) I am trying to batch up operations as\n> much as possible, so I will largely be doing comparisons of whole\n> sets, with bulk COPY importing. I hope to avoid single hash value\n> lookup as much as possible.\n\nIf MD5 values will be your primary data and you'll be storing millions\nof them, it would be wise to create your own datatype and operators with\nthe most compact and efficient representation possible.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 11 Apr 2008 10:25:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "> > 1. Which datatype should I use to represent the hash value? UUIDs are\n> > also 16 bytes...\n>\n> BYTEA is slower to load and a bit inconvenient to use from DBI, but\n> occupies less space on disk than TEXT or VARCHAR in hex form (17 vs 33\n> bytes with PostgreSQL 8.3).\n\n\nCan you clarify the \"slower to load\" point? Where is that pain point\nin the postgres architecture?\n\nStoring the values in binary makes intuitive sense to me since the\ndata is twice as dense, thus getting you more bang for the buck on\ncomparisons, caching, and streaming reads. I'm not too concerned about\nraw convenience, as there's not going to be a lot of code around my\napplication. I haven't built the thing yet so it's hard to say what\nperformance will be like, but for the users the difference between an\n8 hour query that can run overnight and a 16 hour query that they must\nwait on is significant.\n\n\n> > 2. Does it make sense to denormalize the hash set relationships?\n>\n> That depends entirely on your application.\n\n\nGeneral schema would be as such:\n\nHASH_VALUES\ndatatype md5;\nbigint id;\n\nSET_LINK\ninteger hash_value_id;\ninteger hash_set_id;\n\nHASH_SETS\ninteger id;\nvarchar name;\n// other data here\n\nThe idea is that you have named sets of hash values, and hash values\ncan be in multiple sets.\n\nThe big operations will be to calculate the unions, intersections, and\ndifferences between sets. That is, I'll load a new set into the\ndatabase and then see whether it has anything in common with another\nset (probably throw the results into a temp table and then dump it\nout). I will also periodically run queries to determine the size of\nthe intersection of two sets for all pairs of sets (in order to\ngenerate some nice graphs).\n\nThe number of sets could grow into the thousands, but will start\nsmall. One of the sets I expect to be very large (could account for\n50%-90% of all hashes); the others will all be smaller, and range from\n10,000 in size to 1,000,000. The number of hashes total could get into\nthe hundreds of millions, possibly billions.\n\nOne problem I'm worried about is the lack of concurrency in the\napplication. It will be relatively rare for more than one query to be\ninflight at a time; this is not a high connection application. 
It\ndoesn't sound like I'd get any marginal performance improvement out of\npostgres by throwing more cores at the problem (other than dualcore;\nalways nice to have a spare handling everything else).\n\nThanks very much for the comments from all. Pretty simple application\nconceptually, just one at a large scale. Other approaches (map\nreduce-ish or straightahead turnkey storage) could potentially provide\nbetter performance, but the users feel more comfortable maintaining\ndatabases and the overall convenience of a database over other systems\nis nice.\n\n\nJon\n", "msg_date": "Fri, 11 Apr 2008 11:28:55 -0400", "msg_from": "\"Jon Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "* Jon Stewart:\n\n>> BYTEA is slower to load and a bit inconvenient to use from DBI, but\n>> occupies less space on disk than TEXT or VARCHAR in hex form (17 vs 33\n>> bytes with PostgreSQL 8.3).\n\n> Can you clarify the \"slower to load\" point? Where is that pain point\n> in the postgres architecture?\n\nCOPY FROM needs to read 2.5 bytes on average, instead 2, and a complex\nform of double-decoding is necessary.\n\n> Storing the values in binary makes intuitive sense to me since the\n> data is twice as dense, thus getting you more bang for the buck on\n> comparisons, caching, and streaming reads. I'm not too concerned about\n> raw convenience, as there's not going to be a lot of code around my\n> application.\n\nThe main issue is that you can't use the parameter-providing version\nof $sth->execute (or things like $sth->selectarray, $sth->do), you\nmust use explicit binding by parameter index in order to specify the\ntype information.\n\n> The idea is that you have named sets of hash values, and hash values\n> can be in multiple sets.\n\nThe ID step is only going to help you if your sets are very large and\nyou use certain types of joins, I think. So it's better to\ndenormalize in this case (if that's what you were alluding to in your\noriginal post).\n\n> The big operations will be to calculate the unions, intersections, and\n> differences between sets. That is, I'll load a new set into the\n> database and then see whether it has anything in common with another\n> set (probably throw the results into a temp table and then dump it\n> out).\n\nIn this case, PostgreSQL's in-memory bitmap indices should give you\nmost of the effect of your hash <-> ID mapping anyway.\n\n> I will also periodically run queries to determine the size of\n> the intersection of two sets for all pairs of sets (in order to\n> generate some nice graphs).\n\nI think it's very difficult to compute that efficiently, but I haven't\nthought much about it. This type of query might benefit from your\nhash <-> ID mapping, however, because the working set is smaller.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 11 Apr 2008 19:04:00 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "On Apr 11, 2008, at 10:25 AM, Alvaro Herrera wrote:\n\nSorry, yes, I'm behind on email... 
:(\n\n> If MD5 values will be your primary data and you'll be storing millions\n> of them, it would be wise to create your own datatype and operators \n> with\n> the most compact and efficient representation possible.\n\n\nIf you do this *please* post it. I really think it would be worth \nwhile for us to have fixed-size data types for common forms of binary \ndata; MD5, SHA1 and SHA256 come to mind.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 24 May 2008 13:43:51 -0400", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "Decibel! wrote:\n> On Apr 11, 2008, at 10:25 AM, Alvaro Herrera wrote:\n> \n> Sorry, yes, I'm behind on email... :(\n> \n> > If MD5 values will be your primary data and you'll be storing millions\n> > of them, it would be wise to create your own datatype and operators \n> > with\n> > the most compact and efficient representation possible.\n> \n> \n> If you do this *please* post it. I really think it would be worth \n> while for us to have fixed-size data types for common forms of binary \n> data; MD5, SHA1 and SHA256 come to mind.\n\nWhy do you think it would be worth while?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 28 May 2008 19:37:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Decibel! wrote:\n>> If you do this *please* post it. I really think it would be worth \n>> while for us to have fixed-size data types for common forms of binary \n>> data; MD5, SHA1 and SHA256 come to mind.\n\n> Why do you think it would be worth while?\n\nGiven that the overhead for short bytea values is now only one byte\nnot four-plus-padding, the argument for such types is surely a lot\nweaker than it was before 8.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 May 2008 19:41:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating large database of MD5 hash values " } ]
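For the archives, here is one way the schema sketched in this thread might
look with the bytea suggestion applied -- purely illustrative, with the names
taken from the thread and the particular constraints and key types being
assumptions to adapt:

  CREATE TABLE hash_values (
    id  bigserial PRIMARY KEY,
    md5 bytea NOT NULL UNIQUE   -- 16 raw bytes, e.g. decode(md5('...'), 'hex')
  );

  CREATE TABLE hash_sets (
    id   serial PRIMARY KEY,
    name varchar NOT NULL
  );

  CREATE TABLE set_link (
    hash_value_id bigint  NOT NULL REFERENCES hash_values(id),
    hash_set_id   integer NOT NULL REFERENCES hash_sets(id),
    PRIMARY KEY (hash_set_id, hash_value_id)
  );

  -- size of the intersection of two sets:
  SELECT count(*)
    FROM set_link a
    JOIN set_link b ON b.hash_value_id = a.hash_value_id
   WHERE a.hash_set_id = 1 AND b.hash_set_id = 2;

The composite primary key doubles as the index needed for set-vs-set joins;
whether an extra index on (hash_value_id, hash_set_id) pays for itself depends
on which direction those joins usually run.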
[ { "msg_contents": "This refers to the performance problem reported in\r\nhttp://archives.postgresql.org/pgsql-performance/2008-04/msg00052.php\r\n\r\nAfter some time of trial and error we found that changing the I/O scheduling\r\nalgorithm to \"deadline\" improved I/O performance by a factor 4 (!) for\r\nthis specific load test.\r\n\r\nIt seems that the bottleneck in this case was actually in the Linux kernel.\r\n\r\nSince performance statements are useless without a description of\r\nthe system and the type of load, I'll send a few details to make this\r\nreport more useful for the archives:\r\n\r\nThe machine is a PC with 8 AMD Opteron 885 CPUs and 32 GB RAM, attached to\r\na HP EVA 8100 storage system with 72 disks.\r\n\r\nWe are running 64-bit Linux 2.6.18-53.1.6.el5 from RedHat Enterprise 5.1.\r\nThe I/O queue depth is set to 64.\r\n\r\nOur benchmark tools show a possible I/O performance of about 11000 transactions\r\nper second for random scattered reads of 8k blocks.\r\n\r\n\r\nPostgreSQL version is 8.2.4.\r\n\r\nThe database system is a cluster with 6 databases containing tables up\r\nto a couple of GB in size. The whole database cluster takes about\r\n200 GB of storage.\r\n\r\nThe database load is a set of read-only statements, several of which have\r\nmiserable execution plans and perform table and index scans.\r\n\r\n\r\nWith the default I/O scheduler we observe a performance of about\r\n600 I/O transactions or 7 MB per second.\r\n\r\nAfter booting with elevator=deadline both values increase by a factor\r\nof up to 4 and the query response times sink drastically.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Fri, 11 Apr 2008 13:22:04 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance increase with elevator=deadline" }, { "msg_contents": "\"Albe Laurenz\" <[email protected]> writes:\n\n> This refers to the performance problem reported in\n> http://archives.postgresql.org/pgsql-performance/2008-04/msg00052.php\n>\n> After some time of trial and error we found that changing the I/O scheduling\n> algorithm to \"deadline\" improved I/O performance by a factor 4 (!) for\n> this specific load test.\n\nWhat was the algorithm before?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Fri, 11 Apr 2008 15:47:17 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "\nOn Apr 11, 2008, at 7:22 AM, Albe Laurenz wrote:\n> After some time of trial and error we found that changing the I/O \n> scheduling\n> algorithm to \"deadline\" improved I/O performance by a factor 4 (!) for\n> this specific load test.\n>\nI was inspired once again to look into this - as I'm recently hitting \nsome glass ceilings with my machines.\n\nI have a little app I wrote called pgiosim (its on pgfoundry - http://pgfoundry.org/projects/pgiosim) \n that basically just opens some files, and does random seeks and 8kB \nreads, much like what our beloved PG does.\n\nUsing 4 of these with a dataset of about 30GB across a few files \n(Machine has 8GB mem) I went from around 100 io/sec to 330 changing to \nnoop. Quite an improvement. If you have a decent controller CFQ is \nnot what you want. I tried deadline as well and it was a touch \nslower. The controller is a 3ware 9550sx with 4 disks in a raid10.\n\nI'll be trying this out on the big array later today. 
I found it \nsuprising this info wasn't more widespread (the use of CFQ on a good \ncontroller).\n\nit also seems changing elevators on the fly works fine (echo \nschedulername > /sys/block/.../queue/scheduler I admit I sat there \nflipping back and forth going \"disk go fast.. disk go slow.. disk go \nfast... \" :)\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Fri, 11 Apr 2008 11:09:07 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "On Fri, 11 Apr 2008, Jeff wrote:\n> Using 4 of these with a dataset of about 30GB across a few files (Machine has \n> 8GB mem) I went from around 100 io/sec to 330 changing to noop. Quite an \n> improvement. If you have a decent controller CFQ is not what you want. I \n> tried deadline as well and it was a touch slower. The controller is a 3ware \n> 9550sx with 4 disks in a raid10.\n\nI ran Greg's fadvise test program a while back on a 12-disc array. The \nthree schedulers (deadline, noop, anticipatory) all performed pretty-much \nthe same, with the fourth (cfq, the default) being consistently slower.\n\n> it also seems changing elevators on the fly works fine (echo schedulername > \n> /sys/block/.../queue/scheduler I admit I sat there flipping back and forth \n> going \"disk go fast.. disk go slow.. disk go fast... \" :)\n\nOh Homer Simpson, your legacy lives on.\n\nMatthew\n\n-- \nI suppose some of you have done a Continuous Maths course. Yes? Continuous\nMaths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer\n", "msg_date": "Fri, 11 Apr 2008 17:40:02 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "Matthew wrote:\n> On Fri, 11 Apr 2008, Jeff wrote:\n>> Using 4 of these with a dataset of about 30GB across a few files \n>> (Machine has 8GB mem) I went from around 100 io/sec to 330 changing to \n>> noop. Quite an improvement. If you have a decent controller CFQ is \n>> not what you want. I tried deadline as well and it was a touch \n>> slower. The controller is a 3ware 9550sx with 4 disks in a raid10.\n> \n> I ran Greg's fadvise test program a while back on a 12-disc array. The \n> three schedulers (deadline, noop, anticipatory) all performed \n> pretty-much the same, with the fourth (cfq, the default) being \n> consistently slower.\n\nI use CFQ on some of my servers, despite the fact that it's often slower \nin total throughput terms, because it delivers much more predictable I/O \nlatencies that help prevent light I/O processes being starved by heavy \nI/O processes. In particular, an Linux terminal server used at work has \ntaken a lot of I/O tuning before it delivers even faintly acceptable I/O \nlatencies under any sort of load.\n\nBounded I/O latency at the expense of throughput is not what you usually \nwant on a DB server, where throughput is king, so I'm not at all \nsurprised that CFQ performs poorly for PostgreSQL. I've done no testing \non that myself, though, because with my DB size and the nature of my \nqueries most of them are CPU bound anyway.\n\nSpeaking of I/O performance with PostgreSQL, has anybody here done any \ntesting to compare results with LVM to results with the same filesystem \non a conventionally partitioned or raw volume? 
I'd probably use LVM even \nat a performance cost because of its admin benefits, but I'd be curious \nif there is any known cost for use with Pg. I've never been able to \nmeasure one with other workloads.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 12 Apr 2008 02:03:43 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "On Sat, 12 Apr 2008, Craig Ringer wrote:\n\n> Speaking of I/O performance with PostgreSQL, has anybody here done any \n> testing to compare results with LVM to results with the same filesystem on a \n> conventionally partitioned or raw volume?\n\nThere was some chatter on this topic last year; a quick search finds\n\nhttp://archives.postgresql.org/pgsql-performance/2007-06/msg00005.php\n\nwhich is a fair statement of the situation. I don't recall any specific \nbenchmarks.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Apr 2008 15:56:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "* Jeff:\n\n> Using 4 of these with a dataset of about 30GB across a few files\n> (Machine has 8GB mem) I went from around 100 io/sec to 330 changing to\n> noop. Quite an improvement. If you have a decent controller CFQ is\n> not what you want. I tried deadline as well and it was a touch\n> slower. The controller is a 3ware 9550sx with 4 disks in a raid10.\n>\n> I'll be trying this out on the big array later today. I found it\n> suprising this info wasn't more widespread (the use of CFQ on a good\n> controller).\n\n3ware might be a bit special because the controller has got very deep\nqueues on its own, so many assumptions of the kernel I/O schedulers do\nnot seem to apply. Toying with the kernel/controller queue depths\nmight help, but I haven't done real benchmarks to see if it's actually\na difference.\n\nA few days ago, I experienced this: On a machine with a 3ware\ncontroller, a simple getxattr call on a file in an uncontended\ndirectory took several minutes because a PostgreSQL dump process was\nrunning in the background (and some other activity of a legacy\ndatabase which caused frequent fdatasync calls). Clearly, this is\nunacceptable, and I've since switched to the deadline scheduler, too.\nSo far, this particular behavior hasn't occurred again.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Tue, 15 Apr 2008 15:27:20 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "On Tue, 15 Apr 2008, Florian Weimer wrote:\n\n> * Jeff:\n>\n>> Using 4 of these with a dataset of about 30GB across a few files\n>> (Machine has 8GB mem) I went from around 100 io/sec to 330 changing to\n>> noop. Quite an improvement. If you have a decent controller CFQ is\n>> not what you want. I tried deadline as well and it was a touch\n>> slower. The controller is a 3ware 9550sx with 4 disks in a raid10.\n>>\n>> I'll be trying this out on the big array later today. 
I found it\n>> suprising this info wasn't more widespread (the use of CFQ on a good\n>> controller).\n>\n> 3ware might be a bit special because the controller has got very deep\n> queues on its own, so many assumptions of the kernel I/O schedulers do\n> not seem to apply. Toying with the kernel/controller queue depths\n> might help, but I haven't done real benchmarks to see if it's actually\n> a difference.\n>\n> A few days ago, I experienced this: On a machine with a 3ware\n> controller, a simple getxattr call on a file in an uncontended\n> directory took several minutes because a PostgreSQL dump process was\n> running in the background (and some other activity of a legacy\n> database which caused frequent fdatasync calls). Clearly, this is\n> unacceptable, and I've since switched to the deadline scheduler, too.\n> So far, this particular behavior hasn't occurred again.\n\none other thing to watch out for. up until very recent kernels (2.6.23 or \n2.6.24) it was possible for one very busy block device to starve other \nblock devices. they added isolation of queues for different block devices, \nbut I've seen reports that the isolation can end up throttling high \nperformance devices as a result. I haven't had time to really dig into \nthis, but there are tuning knobs available to adjust the que space \navailable to different devices and the reports are significantly better \nactivity on a tuned system.\n\nDavid Lang\n", "msg_date": "Tue, 15 Apr 2008 12:08:31 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "Gregory Stark wrote:\n>> After some time of trial and error we found that changing the I/O scheduling\n>> algorithm to \"deadline\" improved I/O performance by a factor 4 (!) for\n>> this specific load test.\n>\n> What was the algorithm before?\n\nThe default algorithm, CFQ I think it is.\n \nYours,\nLaurenz Albe\n", "msg_date": "Wed, 16 Apr 2008 07:27:14 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "Hi,\n\nIl giorno 11/apr/08, alle ore 20:03, Craig Ringer ha scritto:\n>\n> Speaking of I/O performance with PostgreSQL, has anybody here done \n> any testing to compare results with LVM to results with the same \n> filesystem on a conventionally partitioned or raw volume? I'd \n> probably use LVM even at a performance cost because of its admin \n> benefits, but I'd be curious if there is any known cost for use with \n> Pg. I've never been able to measure one with other workloads.\n\nI performed some tests some time ago. 
LVM is significantly slower.\nThe disk subsystem is a HP P400/512MB battery-backed controller with 4 \ndisks in raid 10.\nSee the tests:\n\n\next3 tests:\n\nbonnie++ -s 16000 -u 0 -f -b\n= \n= \n= \n= \n= \n= \n= \n========================================================================\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\n 16000M 153637 50 78895 17 204124 \n17 700.6 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 2233 10 +++++ +++ 2606 8 2255 10 +++++ ++ \n+ 2584 7\n16000M,,,153637,50,78895,17,,,204124,17,700.6,1,16,2233,10,+++++,+++, \n2606,8,2255,10,+++++,+++,2584,7\n\n\nbonnie++ -s 16000 -u 0 -f\n= \n= \n= \n= \n= \n= \n= \n========================================================================\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\n 16000M 162223 51 77277 17 207055 \n17 765.3 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \n+++++ +++\n16000M,,,162223,51,77277,17,,,207055,17,765.3,1,16,+++++,+++,+++++,+++, \n+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\n= \n= \n= \n= \n= \n= \n= \n========================================================================\n\nLVM tests:\n\nbonnie++ -u 0 -f -s 16000 -b\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\n 16000M 153768 52 53420 13 177414 \n15 699.8 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 2158 9 +++++ +++ 2490 7 2177 9 +++++ ++ \n+ 2460 7\n16000M,,,153768,52,53420,13,,,177414,15,699.8,1,16,2158,9,+++++,+++, \n2490,7,2177,9,+++++,+++,2460,7\n\nbonnie++ -u 0 -f -s 16000\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\n 16000M 161476 53 54904 13 171693 \n14 774.3 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \n+++++ +++\n16000M,,,161476,53,54904,13,,,171693,14,774.3,1,16,+++++,+++,+++++,+++, \n+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nBye,\ne.\n\n", "msg_date": "Fri, 18 Apr 2008 16:38:05 +0200", "msg_from": "Enrico Sirola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance increase with elevator=deadline" }, { "msg_contents": "I had the opportunity to do more testing on another new server to see whether the kernel's I/O scheduling makes any difference. 
Conclusion: On a battery-backed RAID 10 system, the kernel's I/O scheduling algorithm has no effect. This makes sense, since a battery-backed cache will supercede any I/O rescheduling that the kernel tries to do.\n\nHardware:\n Dell 2950\n 8 CPU (Intel 2GHz Xeon)\n 8 GB memory\n Dell Perc 6i with battery-backed cache\n RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nSoftware:\n Linux 2.6.24, 64-bit\n XFS file system\n Postgres 8.3.0\n max_connections = 1000 \n shared_buffers = 2000MB \n work_mem = 256MB \n max_fsm_pages = 1000000 \n max_fsm_relations = 5000 \n synchronous_commit = off \n wal_buffers = 256kB \n checkpoint_segments = 30 \n effective_cache_size = 4GB \n\nEach test was run 5 times:\n drop database test\n create database test\n pgbench -i -s 20 -U test\n pgbench -c 10 -t 50000 -v -U test\n\nThe I/O scheduler was changed on-the-fly using (for example) \"echo cfq >/sys/block/sda/queue/scheduler\".\n\nAutovacuum was turned off during the test.\n\nHere are the results. The numbers are those reported as \"tps = xxxx (including connections establishing)\" (which were almost identical to the \"excluding...\" tps number).\n\nI/O Sched AVG Test1 Test2 Test3 Test4 Test5\n--------- ----- ----- ----- ----- ----- -----\ncfq 3355 3646 3207 3132 3204 3584\nnoop 3163 2901 3190 3293 3124 3308\ndeadline 3547 3923 3722 3351 3484 3254\nanticipatory 3384 3453 3916 2944 3451 3156\n\nAs you can see, the averages are very close -- closer than the \"noise\" between runs. As far as I can tell, there is no significant advantage, or even any significant difference, between the various I/O scheduler algorithms.\n\n(It also reinforces what the pgbench man page says: Short runs aren't useful. Even these two-minute runs have a lot of variability. Before I turned off AutoVacuum, the variability was more like 50% between runs.)\n\nCraig\n", "msg_date": "Mon, 05 May 2008 16:33:01 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "RAID 10 Benchmark with different I/O schedulers (was: Performance\n\tincrease with elevator=deadline)" }, { "msg_contents": "On Mon, May 5, 2008 at 5:33 PM, Craig James <[email protected]> wrote:\n>\n> (It also reinforces what the pgbench man page says: Short runs aren't\n> useful. Even these two-minute runs have a lot of variability. Before I\n> turned off AutoVacuum, the variability was more like 50% between runs.)\n\nI'd suggest a couple things for more realistic tests. Run the tests\nmuch longer, say 30 minutes to an hour. Crank up your scaling factor\nuntil your test db is larger than memory. Turn on autovacuum, maybe\nraising the cost / delay factors so it doesn't affect performance too\nnegatively. And lastly tuning the bgwriter so that checkpoints are\nshort and don't interfere too much.\n\nMy guess is if you let it run for a while, you'll get a much more\nreliable number.\n", "msg_date": "Mon, 5 May 2008 21:15:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers (was: Performance\n\tincrease with elevator=deadline)" }, { "msg_contents": "On Mon, 5 May 2008, Craig James wrote:\n\n> pgbench -i -s 20 -U test\n\nThat's way too low to expect you'll see a difference in I/O schedulers. \nA scale of 20 is giving you a 320MB database, you can fit the whole thing \nin RAM and almost all of it on your controller cache. What's there to \nschedule? 
You're just moving between buffers that are generally large \nenough to hold most of what they need.\n\n> pgbench -c 10 -t 50000 -v -U test\n\nThis is OK, because when you increase the size you're not going to be \npushing 3500 TPS anymore and this test will take quite a while.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 May 2008 23:59:30 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers (was:\n\tPerformance increase with elevator=deadline)" }, { "msg_contents": "\nOn May 5, 2008, at 7:33 PM, Craig James wrote:\n\n> I had the opportunity to do more testing on another new server to \n> see whether the kernel's I/O scheduling makes any difference. \n> Conclusion: On a battery-backed RAID 10 system, the kernel's I/O \n> scheduling algorithm has no effect. This makes sense, since a \n> battery-backed cache will supercede any I/O rescheduling that the \n> kernel tries to do.\n>\n\nthis goes against my real world experience here.\n\n> pgbench -i -s 20 -U test\n> pgbench -c 10 -t 50000 -v -U test\n>\n\nYou should use a sample size of 2x ram to get a more realistic number, \nor try out my pgiosim tool on pgfoundry which \"sort of\" simulates an \nindex scan. I posted numbers from that a month or two ago here.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 06 May 2008 08:26:03 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers (was:\n\tPerformance increase with elevator=deadline)" }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 5 May 2008, Craig James wrote:\n> \n>> pgbench -i -s 20 -U test\n> \n> That's way too low to expect you'll see a difference in I/O schedulers. \n> A scale of 20 is giving you a 320MB database, you can fit the whole \n> thing in RAM and almost all of it on your controller cache. What's \n> there to schedule? You're just moving between buffers that are \n> generally large enough to hold most of what they need.\n\nTest repeated with:\nautovacuum enabled\ndatabase destroyed and recreated between runs\npgbench -i -s 600 ...\npgbench -c 10 -t 50000 -n ...\n\nI/O Sched AVG Test1 Test2\n--------- ----- ----- -----\ncfq 705 695 715\nnoop 758 769 747\ndeadline 741 705 775\nanticipatory 494 477 511\n\nI only did two runs of each, which took about 24 minutes. Like the first round of tests, the \"noise\" in the measurements (about 10%) exceeds the difference between scheduler-algorithm performance, except that \"anticipatory\" seems to be measurably slower.\n\nSo it still looks like cfq, noop and deadline are more or less equivalent when used with a battery-backed RAID.\n\nCraig\n", "msg_date": "Tue, 06 May 2008 11:30:26 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "On Tue, 6 May 2008, Craig James wrote:\n\n> I only did two runs of each, which took about 24 minutes. Like the first \n> round of tests, the \"noise\" in the measurements (about 10%) exceeds the \n> difference between scheduler-algorithm performance, except that \n> \"anticipatory\" seems to be measurably slower.\n\nThose are much better results. 
Any test that says anticipatory is \nanything other than useless for database system use with a good controller \nI presume is broken, so that's how I know you're in the right ballpark now \nbut weren't before.\n\nIn order to actually get some useful data out of the noise that is \npgbench, you need a lot more measurements of longer runs. As perspective, \nthe last time I did something in this area, in order to get enough data to \nget a clear picture I ran tests for 12 hours. I'm hoping to repeat that \nsoon with some more common hardware that gives useful results I can give \nout.\n\n> So it still looks like cfq, noop and deadline are more or less equivalent \n> when used with a battery-backed RAID.\n\nI think it's fair to say they're within 10% of one another on raw \nthroughput. The thing you're not measuring here is worst-case latency, \nand that's where there might be a more interesting difference. Most tests \nI've seen suggest deadline is the best in that regard, cfq the worst, and \nwhere noop fits in depends on the underlying controller.\n\npgbench produces log files with latency measurements if you pass it \"-l\". \nHere's a snippet of shell that runs pgbench then looks at the resulting \nlatency results for the worst 5 numbers:\n\npgbench ... -l &\np=$!\nwait $p\nmv pgbench_log.${p} pgbench.log\necho Worst latency results:\ncat pgbench.log | cut -f 3 -d \" \" | sort -n | tail -n 5\n\nHowever, that may not give you much useful info either--in most cases \ncheckpoint issues kind of swamp the worst-base behavior in PostgreSQL, \nand to quantify I/O schedulers you need to look more complicated \nstatistics on latency.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 May 2008 16:21:08 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 6 May 2008, Craig James wrote:\n> \n>> I only did two runs of each, which took about 24 minutes. Like the \n>> first round of tests, the \"noise\" in the measurements (about 10%) \n>> exceeds the difference between scheduler-algorithm performance, except \n>> that \"anticipatory\" seems to be measurably slower.\n> \n> Those are much better results. Any test that says anticipatory is \n> anything other than useless for database system use with a good \n> controller I presume is broken, so that's how I know you're in the right \n> ballpark now but weren't before.\n> \n> In order to actually get some useful data out of the noise that is \n> pgbench, you need a lot more measurements of longer runs. As \n> perspective, the last time I did something in this area, in order to get \n> enough data to get a clear picture I ran tests for 12 hours. I'm hoping \n> to repeat that soon with some more common hardware that gives useful \n> results I can give out.\n\nThis data is good enough for what I'm doing. There were reports from non-RAID users that the I/O scheduling could make as much as a 4x difference in performance (which makes sense for non-RAID), but these tests show me that three of the four I/O schedulers are within 10% of each other. Since this matches my intuition of how battery-backed RAID will work, I'm satisfied. 
If our servers get overloaded to the point where 10% matters, then I need a much more dramatic solution, like faster machines or more machines.\n\nCraig\n\n", "msg_date": "Tue, 06 May 2008 13:43:54 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "Craig James wrote:\n> This data is good enough for what I'm doing. There were \n> reports from non-RAID users that the I/O scheduling could \n> make as much as a 4x difference in performance (which makes \n> sense for non-RAID), but these tests show me that three of \n> the four I/O schedulers are within 10% of each other. Since \n> this matches my intuition of how battery-backed RAID will \n> work, I'm satisfied. If our servers get overloaded to the \n> point where 10% matters, then I need a much more dramatic \n> solution, like faster machines or more machines.\n\nI should comment on this as I am the one who reported the\nbig performance increase with the deadline scheduler.\nI was very surprised at this increase myself as I had not seen\nany similar reports, so I thought I should share it for whatever\nit is worth.\n\nOur SAN *is* a RAID-5 with lots of cache, so there must be a flaw\nin your intuition.\n\nPerformance measures depend a lot on your hardware and\nsoftware setup (e.g. kernel version in this case) and on the\nspecific load. The load we used was a real life load, collected\nover seveal hours and extracted from the log files.\n\nMy opinion is that performance observations can rarely be\ngeneralized - I am not surprised that with a different system\nand a different load you observe hardly any difference between\n\"cfq\" and \"deadline\".\n\nFor the record, in our test case \"noop\" performed practically\nas good as \"deadline\", while the other two did way worse.\n\nLike yourself, I have wondered why different I/O scheduling\nalgorithms should make so much difference.\nHere is my home-spun theory of what may happen; tear it apart\nand replace it with a better one at your convenience:\n\nOur SAN probably (we're investigating) has its own brains to\noptimize I/O, and I guess that any optimization that the kernel\ndoes can only deteriorate performance because the two algorithms\nmight \"step on each other's toes\". This is backed by \"noop\"\nperforming well.\nI believe that caching will not make much difference, because the\ncache is way smaller than the database, and whatever is neither in\nthe shared buffer nor in the kernel filesystem cache is also not\nlikely to be in the storage system's cache. Remember that our load\nwas read-only.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 7 May 2008 09:29:05 +0200", "msg_from": "\"Albe Laurenz *EXTERN*\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "On Tue, 6 May 2008, Craig James wrote:\n> I/O Sched AVG Test1 Test2\n> --------- ----- ----- -----\n> cfq 705 695 715\n> noop 758 769 747\n> deadline 741 705 775\n> anticipatory 494 477 511\n\nInteresting. That contrasts with some tests I did a while back on a \n16-disc RAID-0, where noop, deadline, and anticipatory were all identical \nin performance, with cfq being significantly slower. Admittedly, the disc \ntest was single-process, which is probably why the anticipatory behaviour \ndidn't kick in. 
You are seeing a little bit of degradation with cfq - I \nguess it's worse the bigger the disc subsystem you have.\n\nMatthew\n\n-- \nMatthew: That's one of things about Cambridge - all the roads keep changing\n names as you walk along them, like Hills Road in particular.\nSagar: Yes, Sidney Street is a bit like that too.\nMatthew: Sidney Street *is* Hills Road.\n", "msg_date": "Wed, 7 May 2008 11:46:27 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "This seems like a bug to me, but it shows up as a performance problem. Since the column being queried is an integer, the second query (see below) can't possibly match, yet Postgres uses a typecast, forcing a full table scan for a value that can't possibly be in the table.\n\nThe application could intercept these bogus queries, but that requires building schema-specific and postgres-specific knowledge into the application (i.e. \"What is the maximum legal integer for this column?\").\n\nCraig\n\n\nexplain analyze select version_id, parent_id from version where version_id = 99999;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Index Scan using version_pkey on version (cost=0.00..9.89 rows=1 width=8) (actual time=0.054..0.054 rows=0 loops=1)\n Index Cond: (version_id = 99999)\n Total runtime: 0.130 ms\n(3 rows)\n\nemol_warehouse_1=> explain analyze select version_id, parent_id from version where version_id = 999999999999999999999999999;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------\n Seq Scan on version (cost=0.00..253431.77 rows=48393 width=8) (actual time=3135.530..3135.530 rows=0 loops=1)\n Filter: ((version_id)::numeric = 999999999999999999999999999::numeric)\n Total runtime: 3135.557 ms\n(3 rows)\n\n\n \\d version\n Table \"emol_warehouse_1.version\"\n Column | Type | Modifiers \n------------+---------+-----------\n version_id | integer | not null\n parent_id | integer | not null\n ... more columns\nIndexes:\n \"version_pkey\" PRIMARY KEY, btree (version_id)\n\n\n\n", "msg_date": "Wed, 25 Jun 2008 18:52:59 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID 10 Benchmark with different I/O schedulers" }, { "msg_contents": "This seems like a bug to me, but it shows up as a performance problem. Since the column being queried is an integer, the second query (see below) can't possibly match, yet Postgres uses a typecast, forcing a full table scan for a value that can't possibly be in the table.\n\nThe application could intercept these bogus queries, but that requires building schema-specific and postgres-specific knowledge into the application (i.e. 
\"What is the maximum legal integer for this column?\").\n\nCraig\n\n\nexplain analyze select version_id, parent_id from version where version_id = 99999;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\nIndex Scan using version_pkey on version (cost=0.00..9.89 rows=1 width=8) (actual time=0.054..0.054 rows=0 loops=1)\n Index Cond: (version_id = 99999)\nTotal runtime: 0.130 ms\n(3 rows)\n\nemol_warehouse_1=> explain analyze select version_id, parent_id from version where version_id = 999999999999999999999999999;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------\nSeq Scan on version (cost=0.00..253431.77 rows=48393 width=8) (actual time=3135.530..3135.530 rows=0 loops=1)\n Filter: ((version_id)::numeric = 999999999999999999999999999::numeric)\nTotal runtime: 3135.557 ms\n(3 rows)\n\n\n\\d version\nTable \"emol_warehouse_1.version\"\n Column | Type | Modifiers \n------------+---------+-----------\nversion_id | integer | not null\nparent_id | integer | not null\n... more columns\nIndexes:\n \"version_pkey\" PRIMARY KEY, btree (version_id)\n\n\n\n\n", "msg_date": "Wed, 25 Jun 2008 21:17:50 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Typecast bug?" }, { "msg_contents": "Craig James <[email protected]> writes:\n> This seems like a bug to me, but it shows up as a performance problem.\n\n> emol_warehouse_1=> explain analyze select version_id, parent_id from version where version_id = 999999999999999999999999999;\n\nIf you actually *need* so many 9's here as to force it out of the range\nof bigint, then why is your id column not declared numeric?\n\nThis seems to me to be about on par with complaining that \"intcol = 4.2e1\"\nwon't be indexed. We have a numeric data type hierarchy, learn to\nwork with it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jun 2008 01:33:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typecast bug? " }, { "msg_contents": "Tom Lane wrote:\n> Craig James <[email protected]> writes:\n>> This seems like a bug to me, but it shows up as a performance problem.\n> \n>> emol_warehouse_1=> explain analyze select version_id, parent_id from version where version_id = 999999999999999999999999999;\n> \n> If you actually *need* so many 9's here as to force it out of the range\n> of bigint, then why is your id column not declared numeric?\n> \n> This seems to me to be about on par with complaining that \"intcol = 4.2e1\"\n> won't be indexed. We have a numeric data type hierarchy, learn to\n> work with it ...\n\nYour suggestion of \"learn to work with it\" doesn't fly. A good design separates the database schema details from the application to the greatest extent possible. What you're suggesting is that every application that queries against a Postgres database should know the exact range of every numeric data type of every indexed column in the schema, simply because Postgres can't recognize an out-of-range numeric value.\n\nIn this case, the optimizer could have instantly returned zero results with no further work, since the query was out of range for that column.\n\nThis seems like a pretty simple optimization to me, and it seems like a helpful suggestion to make to this forum.\n\nBTW, this query came from throwing lots of junk at a web app in an effort to uncover exactly this sort of problem. 
It's not a real query, but then, hackers don't use real queries. The app checks that its input is a well-formed integer expression, but then assumes Postgres can deal with it from there.\n\nCraig\n", "msg_date": "Wed, 25 Jun 2008 23:22:20 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typecast bug?" }, { "msg_contents": "On 6/26/08, Craig James <[email protected]> wrote:\n> This seems like a bug to me, but it shows up as a performance problem.\n> Since the column being queried is an integer, the second query (see below)\n> can't possibly match, yet Postgres uses a typecast, forcing a full table\n> scan for a value that can't possibly be in the table.\n\nWhich version are you using? 8.3 removes a lot of implicit casts (all?\nnot sure), so this may already be your fix.\n\nCheers,\n\nFrank\n", "msg_date": "Thu, 26 Jun 2008 08:02:13 +0100", "msg_from": "\"Frank Joerdens\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typecast bug?" }, { "msg_contents": "On Thu, Jun 26, 2008 at 9:02 AM, Frank Joerdens <[email protected]> wrote:\n> Which version are you using? 8.3 removes a lot of implicit casts (all?\n> not sure), so this may already be your fix.\n\n8.3 only removed implicit casts from non text types to text (date ->\ntext, int -> text, interval -> text...) to avoid unexpected\nbehaviours.\n\n-- \nGuillaume\n", "msg_date": "Thu, 26 Jun 2008 09:12:56 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typecast bug?" } ]
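A sketch of the client-side guard discussed in the thread above: since comparisons between integer and bigint are indexable, casting the user-supplied literal to bigint makes genuinely out-of-range input fail fast instead of silently degrading into a numeric comparison and a sequential scan. The table and column names are the ones from the thread; the expected outcomes are noted as comments, not captured output.

-- legitimate values should still be able to use version_pkey,
-- because integer = bigint is an indexable comparison
SELECT version_id, parent_id FROM version WHERE version_id = 99999::bigint;

-- junk that cannot fit in a bigint now errors out immediately, where the
-- application can catch it, rather than forcing a full table scan
SELECT version_id, parent_id FROM version WHERE version_id = 999999999999999999999999999::bigint;
-- ERROR:  bigint out of range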
[ { "msg_contents": "> Well if you're caching per-connection then it doesn't really matter \n> whether\n> you do it on the client side or the server side, it's pretty much \n> exactly the\n> same problem.\n\n\tActually I thought about doing it on the server since it would then also \nwork with connection pooling.\n\tDoing it on the client means the client has to maintain state, which is \nnot possible in a pool...\n\n> Unsurprisingly most drivers do precisely what you're describing. In Perl \n> DBI\n> for example you just change $dbh->prepare(\"\") into \n> $dbh->prepare_cached(\"\")\n> and it does exactly what you want. I would expect the PHP drivers to have\n> something equivalent.\n\n\tWell, PHP doesn't.\n\tPerhaps I should patch PHP instead...\n\tOr perhaps this feature should be implemented in pgpool or pgbouncer.\n\n>> \tBut, using prepared statements with persistent connections is messy, \n>> because you never know if the connection is new or not,\n\n> If you were to fix *that* then both this problem and others (such as\n> setting up desired SET-parameter values) would go away.\t\n\n\tTrue. Languages that keep a long-running context (like application \nservers etc) can do this easily.\n\tAlthough in the newer versions of PHP, it's not so bad, pconnect seems to \nwork (ie. it will issue ROLLBACKs when the script dies, reset session \nvariables like enable_indexscan, etc), so the only remaining problem seems \nto be prepared statements.\n\tAnd again, adding a method for the application to know if the persistent \nconnection is new or not, will not work in a connection pool...\n\n\tPerhaps a GUC flag saying EXECUTE should raise an error but not kill the \ncurrent transaction if the requested prepared statement does not exist ? \nThen the application would issue a PREPARE. It could also raise a \nnon-fatal error when the tables have changed (column added, for instance) \nso the application can re-issue a PREPARE.\n\n\tBut I still think it would be cleaner to do it in the server.\n\n\tAlso, I rethought about what Gregory Stark said :\n> The contention on the shared cache is likely to negate much of the \n> planning\n> savings but I think it would still be a win.\n\n\tIf a shared plan cache is implemented, it will mostly be read-only, ie. \nwhen the application is started, new queries will come, so the plans will \nhave to be written to the cache, but then once the cache contains \neverything it needs, it will not be modified that often, so I wouldn't \nthink contention would be such a problem...\n\n> It's not so easy as all that. Consider search_path. Consider temp\n> tables.\n\n\tTemp tables : I thought plan revalidation took care of this ?\n\t(After testing, it does work, if a temp table is dropped and recreated, \nPG finds it, although of course if a table is altered by adding a column \nfor instance, it logically fails).\n\n\tsearch_path: I suggested to either put the search_path in the cache key \nalong with the SQL string, or force queries to specify schema.table for \nall tables.\n\tIt is also possible to shoot one's foot with the current PREPARE (ie. \nsearch_path is used to PREPARE but of course not for EXECUTE), and also \nwith plpgsql functions (ie. 
the search path used to compile the function \nis the one that is active when it is compiled, ie at its first call in the \ncurrent connection, and not the search path that was active when the \nfunction was defined)...\n\nSET search_path TO DEFAULT;\n\nCREATE SCHEMA a;\nCREATE SCHEMA b;\n\nCREATE TABLE a.test( v TEXT );\nCREATE TABLE b.test( v TEXT );\n\nINSERT INTO a.test VALUES ('This is schema a');\nINSERT INTO b.test VALUES ('This is schema b');\n\nCREATE OR REPLACE FUNCTION test_search_path()\n RETURNS SETOF TEXT\n LANGUAGE plpgsql\n AS\n$$\nDECLARE\n x TEXT;\nBEGIN\n FOR x IN SELECT v FROM test LOOP\n RETURN NEXT x;\n END LOOP;\nEND;\n$$;\n\ntest=> SET search_path TO a,public;\ntest=> SELECT * FROM test_search_path();\n test_search_path\n------------------\n This is schema a\ntest=> \\q\n$ psql test\n\ntest=> SET search_path TO b,public;\ntest=> SELECT * FROM test_search_path();\n test_search_path\n------------------\n This is schema b\n\ntest=> SET search_path TO a,public;\ntest=> SELECT * FROM test_search_path();\n test_search_path\n------------------\n This is schema b\n\n\n\n\n\n", "msg_date": "Sat, 12 Apr 2008 11:13:46 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cached Query Plans" } ]
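One client-side workaround that already works on 8.2 and later, sketched below: the pg_prepared_statements system view shows which prepared statements exist in the current session, so a pooled or persistent connection can test for its statement before preparing it, instead of waiting for a new server-side error mode. The statement name fetch_cust and the customers table are made-up examples, not anything from the thread.

-- does this (possibly reused) connection already have the statement?
SELECT 1 FROM pg_prepared_statements WHERE name = 'fetch_cust';

-- only if the query above returned no row:
PREPARE fetch_cust (int) AS
    SELECT customer_id, name FROM customers WHERE customer_id = $1;

-- either way, the statement can now be executed
EXECUTE fetch_cust (42);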
[ { "msg_contents": "Hi\n\nWe currently have a 16CPU 32GB box running postgres 8.2.\n\nWhen I do a pg_dump with the following parameters \"/usr/bin/pg_dump -E \nUTF8 -F c -b\" I get a file of 14GB in size.\n\nBut the database is 110GB in size on the disk. Why the big difference \nin size? Does this have anything to do with performance?\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Mon, 14 Apr 2008 08:29:56 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "db size" }, { "msg_contents": "Hi Adrian,\n\n\n\n>When I do a pg_dump with the following parameters \"/usr/bin/pg_dump -E\nUTF8 -F c -b\" I get a file of 14GB in size.\n\n\n>From the man page of pg_dump\n\"\n-F format, --format=format\n\n Selects the format of the output. format can be one of the following:\nc\noutput a custom archive suitable for input into pg_restore. This is the most flexible format in that it allows reordering of data load as well as schema elements. This format is also compressed by default.\n\"\n\n The output is compressed and it is a dump of the database which contain the SQL commands:\n\n\n\n>But the database is 110GB in size on the disk. Why the big difference\n>in size? Does this have anything to do with performance?\n\nVACUUM or VACUUM FULL of the entire database will reduce the size of the database by reclaiming any unused space and you can use the filesystem based backup or backup/restore strategy.\n", "msg_date": "Sun, 13 Apr 2008 23:55:14 -0700", "msg_from": "Vinubalaji Gopal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Adrian Moisey wrote:\n> Hi\n> \n> We currently have a 16CPU 32GB box running postgres 8.2.\n> \n> When I do a pg_dump with the following parameters \"/usr/bin/pg_dump -E\n> UTF8 -F c -b\" I get a file of 14GB in size.\n> \n> But the database is 110GB in size on the disk. Why the big difference\n> in size? Does this have anything to do with performance?\n\n\nReasons:\n\nYou're using a compact format designed to limit size and provide fast\ndump/restore. The database, by contrast, is designed for fast access.\n\nThe database can contain \"dead space\" that hasn't been reclaimed by a\nVACUUM. It can also have space allocated that it doesn't need, which you\ncan reclaim with VACUUM FULL. This dead space can really add up, but\nit's the price of fast updates, inserts and deletes.\n\nYour indexes take up disk space in the database, but are not dumped and\ndo not take up space in the dump file. Indexes can get very large\nespecially if you have lots of multi-column indexes.\n\nI'm told that under certain loads indexes can grow full of mostly empty\npages, and a REINDEX every now and then can be useful to shrink them -\nsee \"\\h reindex\" in psql. That won't affect your dump sizes as indexes\naren't dumped, but will affect the database size.\n\nYou can examine index (and relation) sizes using a query like:\n\nselect * from pg_class order by relpages desc\n\n\nData in the database is either not compressed, or (for larger fields) is\ncompressed with an algorithm that's very fast but doesn't achieve high\nlevels of compression. 
By contrast, the dumps are quite efficiently\ncompressed.\n\nOne of my database clusters is 571MB on disk at the moment, just after\nbeing dropped, recreated, and populated from another data source. The\nrepopulation process is quite complex. I found that running VACUUM FULL\nfollowed by REINDEX DATABASE dbname knocked 50MB off the database size,\npushing it down to 521MB. That's on a basically brand new DB. Note,\nhowever, that 130MB of that space is in pg_xlog, and much of it will be\nwasted as the DB has been under very light load but uses large xlogs\nbecause it needs to perform well under huge load spikes. The size of the\n`base' directory (the \"real data\", indexes, etc) is only 392MB.\n\nIf I dump that database using the same options you dumped yours with, I\nend up with a hilariously small 29MB dump file. That's less than 10% of\nthe size of the main DB. The difference will be entirely due to\ncompression, a more compact storage layout in the dump files, and to the\nlack of index data in the dumps. The database has quite a few indexes,\nsome of which are multicolumn indexes on tables with large numbers of\ntuples, so that bloats the \"live\" version a lot.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 14 Apr 2008 16:37:54 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "> Hi\n>\n> We currently have a 16CPU 32GB box running postgres 8.2.\n>\n> When I do a pg_dump with the following parameters \"/usr/bin/pg_dump -E \n> UTF8 -F c -b\" I get a file of 14GB in size.\n>\n> But the database is 110GB in size on the disk. Why the big difference \n> in size? Does this have anything to do with performance?\n\n\tI have a 2GB database, which dumps to a 340 MB file...\n\tTwo reasons :\n\n\t- I have lots of big fat but very necessary indexes (not included in dump)\n\t- Dump is compressed with gzip which really works well on database data.\n\n\tIf you suspect your tables or indexes are bloated, restore your dump to a \ntest box.\n\tUse fsync=off during restore, you don't care about integrity on the test \nbox.\n\tThis will avoid slowing down your production database.\n\tThen look at the size of the restored database.\n\tIf it is much smaller than your production database, then you have bloat.\n\tTime to CLUSTER, or REINDEX, or VACUUM FULL (your choice), on the tables \nthat are bloated, and take note to vacuum those more often (and perhaps \ntune the autovacuum).\n\tJudicious use of CLUSTER on that small, but extremely often updated table \ncan also be a very good option.\n\t8.3 and its new HOT feature are also a good idea.\n", "msg_date": "Mon, 14 Apr 2008 11:18:19 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n> If you suspect your tables or indexes are bloated, restore your dump \n> to a test box.\n> Use fsync=off during restore, you don't care about integrity on the \n> test box.\n> This will avoid slowing down your production database.\n> Then look at the size of the restored database.\n> If it is much smaller than your production database, then you have \n> bloat.\n\nI have done that, and I get the following:\n\nthe live one is 113G\nthe restored one is 78G\n\nHow should I get rid of the bloat?\nVACUUM FULL?\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", 
"msg_date": "Mon, 14 Apr 2008 11:21:59 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": ">\n>> If you suspect your tables or indexes are bloated, restore your \n>> dump to a test box.\n>> Use fsync=off during restore, you don't care about integrity on the \n>> test box.\n>> This will avoid slowing down your production database.\n>> Then look at the size of the restored database.\n>> If it is much smaller than your production database, then you have \n>> bloat.\n>\n> I have done that, and I get the following:\n>\n> the live one is 113G\n> the restored one is 78G\n\n\tAh.\n\tGood news for you is that you know that you can do something ;)\n\n\tNow, is the bloat in the tables (which tables ?) or in the indexes (which \nindexes ?), or in the toast tables perhaps, or in the system catalogs or \nall of the above ? Or perhaps there is a long-forgotten process that got \nzombified while holding a huge temp table ? (not very likely, but who \nknows).\n\tUse pg_relation_size() and its friends to get an idea of the size of \nstuff.\n\tPerhaps you have 1 extremely bloated table or index, or perhaps \neverything is bloated.\n\tThe solution to your problem depends on which case you have.\n", "msg_date": "Mon, 14 Apr 2008 11:33:54 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n>> the live one is 113G\n>> the restored one is 78G\n >\n> Good news for you is that you know that you can do something ;)\n\n:)\n\nWill this help with performance ?\n\n> Now, is the bloat in the tables (which tables ?) or in the indexes \n> (which indexes ?), or in the toast tables perhaps, or in the system \n> catalogs or all of the above ? Or perhaps there is a long-forgotten \n> process that got zombified while holding a huge temp table ? (not very \n> likely, but who knows).\n> Use pg_relation_size() and its friends to get an idea of the size of \n> stuff.\n\nI'll look into that, thanks\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Mon, 14 Apr 2008 11:44:12 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": "\n> Will this help with performance ?\n\n\tDepends if the bloat is in part of your working set. If debloating can \nmake the working set fit in RAM, or lower your IOs, you'll get a boost.\n\n>> Now, is the bloat in the tables (which tables ?) or in the indexes \n>> (which indexes ?), or in the toast tables perhaps, or in the system \n>> catalogs or all of the above ? Or perhaps there is a long-forgotten \n>> process that got zombified while holding a huge temp table ? 
(not very \n>> likely, but who knows).\n>> Use pg_relation_size() and its friends to get an idea of the size \n>> of stuff.\n>\n> I'll look into that, thanks\n>\n\n\n", "msg_date": "Mon, 14 Apr 2008 12:01:52 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Adrian Moisey wrote:\n> Hi\n> \n>> If you suspect your tables or indexes are bloated, restore your\n>> dump to a test box.\n>> Use fsync=off during restore, you don't care about integrity on\n>> the test box.\n>> This will avoid slowing down your production database.\n>> Then look at the size of the restored database.\n>> If it is much smaller than your production database, then you have\n>> bloat.\n> \n> I have done that, and I get the following:\n> \n> the live one is 113G\n> the restored one is 78G\n> \n> How should I get rid of the bloat?\n> VACUUM FULL?\n\nAnd/or REINDEX if you're not satisfied with the results of a VACUUM FULL.\n\nhttp://www.postgresql.org/docs/8.3/interactive/vacuum.html\nhttp://www.postgresql.org/docs/8.3/interactive/sql-reindex.html\n\nOf course, all of these will have performance consequences while they're\nrunning, and take out locks that prevent certain other operatons as\nshown in table 13-2:\n\nhttp://www.postgresql.org/docs/8.3/static/explicit-locking.html\n\nand the explanation following it.\n\nNote in particular:\n\n----\nACCESS EXCLUSIVE\n\n Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW\nEXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE,\nEXCLUSIVE, and ACCESS EXCLUSIVE). This mode guarantees that the holder\nis the only transaction accessing the table in any way.\n\n Acquired by the ALTER TABLE, DROP TABLE, TRUNCATE, REINDEX, CLUSTER,\nand VACUUM FULL commands. This is also the default lock mode for LOCK\nTABLE statements that do not specify a mode explicitly.\n\n Tip: Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR\nUPDATE/SHARE) statement.\n----\n\nIn other words, you won't be doing much with a table/index while a\nVACUUM FULL or a REINDEX is in progress on it.\n\nGiven that, you probably want to check your table/index sizes and see if\nthere are particular problem tables or indexes, rather than just using a\nsledgehammer approach.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 14 Apr 2008 18:40:39 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "In response to Adrian Moisey <[email protected]>:\n> \n> We currently have a 16CPU 32GB box running postgres 8.2.\n> \n> When I do a pg_dump with the following parameters \"/usr/bin/pg_dump -E \n> UTF8 -F c -b\" I get a file of 14GB in size.\n> \n> But the database is 110GB in size on the disk. Why the big difference \n> in size? Does this have anything to do with performance?\n\nIn a dump, indexes are a single command. In the actual database, the\nindexes actually contain all the data the indexes require, which can\nbe substantially more in size than the command to create the index.\n\nAdditionally, a running database has a certain amount of wasted space.\nIf you're running vacuum on a proper schedule, this won't get out of\nhand. 
Read this page to understand better:\nhttp://www.postgresql.org/docs/8.1/static/maintenance.html\n\nAnd lastly, I expect that the pg_dump format is able to do more aggressive\ncompression than the running database.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 14 Apr 2008 09:12:12 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n>>> Now, is the bloat in the tables (which tables ?) or in the \n>>> indexes (which indexes ?), or in the toast tables perhaps, or in the \n>>> system catalogs or all of the above ? Or perhaps there is a \n>>> long-forgotten process that got zombified while holding a huge temp \n>>> table ? (not very likely, but who knows).\n>>> Use pg_relation_size() and its friends to get an idea of the size \n>>> of stuff.\n\nCan anybody give me some advice on the above? I'm not sure where to \nstart looking or how to start looking\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Tue, 15 Apr 2008 09:33:06 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": "Adrian Moisey <[email protected]> wrote:\n>\n> Hi\n> \n> >>> Now, is the bloat in the tables (which tables ?) or in the \n> >>> indexes (which indexes ?), or in the toast tables perhaps, or in the \n> >>> system catalogs or all of the above ? Or perhaps there is a \n> >>> long-forgotten process that got zombified while holding a huge temp \n> >>> table ? (not very likely, but who knows).\n> >>> Use pg_relation_size() and its friends to get an idea of the size \n> >>> of stuff.\n> \n> Can anybody give me some advice on the above? 
I'm not sure where to \n> start looking or how to start looking\n\nRunning VACUUM VERBOSE will give you a detailed view of space usage of\neach individual table.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 15 Apr 2008 07:12:22 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n> Running VACUUM VERBOSE will give you a detailed view of space usage of\n> each individual table.\n\nI did that.\n\nNot too sure what I'm looking for, can someone tell me what this means:\n\nINFO: \"blahxxx\": scanned 27 of 27 pages, containing 1272 live rows and \n0 dead rows; 1272 rows in sample, 1272 estimated total rows\nINFO: free space map contains 4667977 pages in 1199 relations\nDETAIL: A total of 4505344 page slots are in use (including overhead).\n4505344 page slots are required to track all free space.\nCurrent limits are: 15537488 page slots, 1200 relations, using 91172 kB.\n\n\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Thu, 17 Apr 2008 08:28:42 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": "Adrian Moisey wrote:\n> Hi\n> \n>> Running VACUUM VERBOSE will give you a detailed view of space usage of\n>> each individual table.\n> \n> I did that.\n> \n> Not too sure what I'm looking for, can someone tell me what this means:\n> \n> INFO: \"blahxxx\": scanned 27 of 27 pages, containing 1272 live rows and \n> 0 dead rows; 1272 rows in sample, 1272 estimated total rows\n\nThis is a small table that takes up 27 pages and it scanned all of them. \nYou have 1272 rows in it and none of them are dead (i.e. deleted/updated \nbut still taking up space).\n\n> INFO: free space map contains 4667977 pages in 1199 relations\n> DETAIL: A total of 4505344 page slots are in use (including overhead).\n> 4505344 page slots are required to track all free space.\n> Current limits are: 15537488 page slots, 1200 relations, using 91172 kB.\n\nYou are tracking ~ 4.6 million pages and have space to track ~ 15.5 \nmillion, so that's fine. You are right up against your limit of \nrelations (tables, indexes etc) being tracked though - 1200. You'll \nprobably want to increase max_fsm_relations - see manual for details \n(server configuration / free space map).\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 17 Apr 2008 09:15:04 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n> You are tracking ~ 4.6 million pages and have space to track ~ 15.5 \n> million, so that's fine. You are right up against your limit of \n> relations (tables, indexes etc) being tracked though - 1200. 
You'll \n> probably want to increase max_fsm_relations - see manual for details \n> (server configuration / free space map).\n\nThat is helpful, thanks.\n\nI did a grep on the output to find out more about the max_fsm_relations:\n\nINFO: free space map contains 2333562 pages in 832 relations\nINFO: free space map contains 3012404 pages in 544 relations\nINFO: free space map contains 3012303 pages in 654 relations\nINFO: free space map contains 3012345 pages in 669 relations\nINFO: free space map contains 3012394 pages in 678 relations\nINFO: free space map contains 3017248 pages in 717 relations\nINFO: free space map contains 2860737 pages in 824 relations\nINFO: free space map contains 4667977 pages in 1199 relations\nINFO: free space map contains 3140238 pages in 181 relations\nINFO: free space map contains 3140322 pages in 182 relations\nINFO: free space map contains 3140387 pages in 183 relations\nINFO: free space map contains 3142781 pages in 184 relations\n\nIt doesn't go up close to 1200 often... should I still up that value?\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Thu, 17 Apr 2008 10:37:25 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": "Hi\n\n>> INFO: \"blahxxx\": scanned 27 of 27 pages, containing 1272 live rows \n>> and 0 dead rows; 1272 rows in sample, 1272 estimated total rows\n> \n> This is a small table that takes up 27 pages and it scanned all of them. \n> You have 1272 rows in it and none of them are dead (i.e. deleted/updated \n> but still taking up space).\n\nI had a look through a few other tables...:\n\nINFO: \"table1\": scanned 22988 of 22988 pages, containing 2713446 live \nrows and 895662 dead rows; 45000 rows in sample, 2713446 estimate\nd total rows\n\nINFO: \"table2\": scanned 24600 of 24600 pages, containing 270585 live \nrows and 65524 dead rows; 45000 rows in sample, 270585 estimated total rows\n\nIs that dead rows an issue? Should I try clean it out? Will it improve \nperformance ?\n\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Thu, 17 Apr 2008 10:43:47 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db size" }, { "msg_contents": "Adrian Moisey wrote:\n> Hi\n> \n>>> INFO: \"blahxxx\": scanned 27 of 27 pages, containing 1272 live rows \n>>> and 0 dead rows; 1272 rows in sample, 1272 estimated total rows\n>>\n>> This is a small table that takes up 27 pages and it scanned all of \n>> them. You have 1272 rows in it and none of them are dead (i.e. \n>> deleted/updated but still taking up space).\n> \n> I had a look through a few other tables...:\n> \n> INFO: \"table1\": scanned 22988 of 22988 pages, containing 2713446 live \n> rows and 895662 dead rows; 45000 rows in sample, 2713446 estimate\n> d total rows\n> \n> INFO: \"table2\": scanned 24600 of 24600 pages, containing 270585 live \n> rows and 65524 dead rows; 45000 rows in sample, 270585 estimated total rows\n> \n> Is that dead rows an issue? Should I try clean it out? Will it improve \n> performance ?\n\nWhat you're hoping to see is that figure remain stable. 
The point of the \nfree-space-map is to track these and allow the space to be re-used. If \nyou find that the number of dead rows is increasing then either you are:\n1. Just deleting rows\n2. Not vacuuming enough - check your autovacuum settings\n\nThe effect on performance is that when you read in a page from disk \nyou're reading dead rows along with the data you are after. Trying to \nkeep 0 dead rows in a constantly updated table isn't worth the effort \nthough - you'd end up wasting your disk I/O on maintenance rather than \nqueries.\n\nThe figures above look high to me - 90,000 out of 270,000 and 65,000 out \nof 270,000. Of course, if these tables have just had bulk \nupdates/deletes then that's fine. If there's a steady stream of updates \nthough, you probably want to up your autovacuum settings.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 17 Apr 2008 09:52:31 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db size" } ]
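The per-relation breakdown PFC suggested earlier in the thread can be had with the built-in size functions (in core since 8.1, so available on the 8.2 installation above). The sketch below lists the twenty largest tables with and without their indexes and TOAST data, which is usually enough to see where a 113GB versus 78GB difference is hiding.

-- sketch: largest tables by total on-disk footprint
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS table_only,
       pg_size_pretty(pg_total_relation_size(oid)) AS with_indexes_and_toast
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;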
[ { "msg_contents": "Hi all,\nI started to do some performance tests (using pgbench) in order to\nestimate the DRBD impact on our servers, my plan was to perform some\nbenchmarks without DRBD in order to compare the same benchmark with\nDRBD.\nI didn't perform yet the benchmark with DRBD and I'm already facing\nsomething I can not explain (I performed at the moment only reads test).\n\nI'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.\n\nI'm using pgbench with scaling factor with a range [1:500], my server\nhas 4 cores so I'm trying with 16 client and 4000 transaction per\nclient: pgbench -t 4000 -c 16 -S db_perf. I did 3 session using 3 different\nvalues of shared_buffers: 64MB, 256MB, 512MB and my server has 2GB.\n\nThe following graph reports the results:\n\nhttp://img84.imageshack.us/my.php?image=totalid7.png\n\nas you can see using 64MB as value for shared_buffers I'm obtaining better\nresults. Is this something expected or I'm looking in the wrong direction?\nI'm going to perform same tests without using the -S option in pgbench but\nbeing a time expensive operation I would like to ear your opinion first.\n\nRegards\nGaetano Mendola\n\n\n\n\n", "msg_date": "Mon, 14 Apr 2008 11:13:05 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "shared_buffers performance" }, { "msg_contents": "\"Gaetano Mendola\" <[email protected]> writes:\n\n> The following graph reports the results:\n>\n> http://img84.imageshack.us/my.php?image=totalid7.png\n\nThat's a *fascinating* graph.\n\nIt seems there are basically three domains. \n\nThe small domain where the database fits in shared buffers -- though actually\nthis domain seems to hold until the accounts table is about 1G so maybe it's\nmore that the *indexes* fit in memory. Here larger shared buffers do clearly\nwin.\n\nThe transition domain where performance drops dramatically as the database\nstarts to not fit in shared buffers but does still fit in filesystem cache.\nHere every megabyte stolen from the filesystem cache makes a *huge*\ndifference. At a scale factor of 120 or so you're talking about a factor of 4\nbetween each of the shared buffer sizes.\n\nThe large domain where the database doesn't fit in filesystem cache. Here it\ndoesn't make a large difference but the more buffers duplicated between\npostgres and the filesystem cache the lower the overall cache effectiveness.\n\nIf we used something like either mmap or directio to avoid the double\nbuffering we would be able to squeeze these into a single curve, as well as\npush the dropoff slightly to the right. In theory. \n\nIn practice it would depend on the OS's ability to handle page faults\nefficiently in the mmap case, and our ability to do read-ahead and cache\nmanagement in the directio case. And it would be a huge increase in complexity\nfor Postgres and a push into a direction which isn't our \"core competency\". We\nmight find that while in theory it should perform better our code just can't\nkeep up with Linux's and it doesn't.\n\nI'm curious about the total database size as a for each of the scaling factors\nas well as the total of the index sizes. 
And how much memory Linux says is\nbeing used for filesystem buffers.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Mon, 14 Apr 2008 11:56:47 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "Gregory Stark wrote:\n> \"Gaetano Mendola\" <[email protected]> writes:\n> \n>> The following graph reports the results:\n>>\n>> http://img84.imageshack.us/my.php?image=totalid7.png\n> \n> That's a *fascinating* graph.\n\nIt is, isn't it? Thanks Gaetano.\n\n> It seems there are basically three domains. \n> \n> The small domain where the database fits in shared buffers -- though actually\n> this domain seems to hold until the accounts table is about 1G so maybe it's\n> more that the *indexes* fit in memory. Here larger shared buffers do clearly\n> win.\n\nI think this is actually in two parts - you can see it clearly on the \nred trace (64MB), less so on the green (256MB) and not at all on the \nblue (512MB). Presumably the left-hand steeper straight-line decline \nstarts with the working-set in shared-buffers, and the \"knee\" is where \nwe're down to just indexes in shared-buffers.\n\nWith the blue I guess you just get the first part, because by the time \nyou're overflowing shared-buffers, you've not got enough disk-cache to \ntake up the slack for you.\n\nI wonder what difference 8.3 makes to this?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Apr 2008 12:25:45 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "On Mon, 14 Apr 2008, Gaetano Mendola wrote:\n\n> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.\n\n8.2.3 has a performance bug that impacts how accurate pgbench results are; \nyou really should be using a later version.\n\n> http://img84.imageshack.us/my.php?image=totalid7.png\n> as you can see using 64MB as value for shared_buffers I'm obtaining \n> better results.\n\nI'm assuming you've read my scaling article at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm \nsince you're using the graph template I suggest there.\n\nIf you look carefully at your results, you are getting better results for \nhigher shared_buffers values in the cases where performance is memory \nbound (the lower scale numbers). Things reverse so that more buffers \ngives worse performance only when your scale >100. I wouldn't conclude \ntoo much from that. 
The pgbench select test is doing a low-level \noperation that doesn't benefit as much from having more memory available \nto PostgreSQL instead of the OS as a real-world workload will.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 14 Apr 2008 11:42:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "On Mon, 14 Apr 2008, Gregory Stark wrote:\n\n> I'm curious about the total database size as a for each of the scaling factors\n> as well as the total of the index sizes.\n\nThat's all in a table at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 14 Apr 2008 11:44:44 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> The transition domain where performance drops dramatically as the database\n> starts to not fit in shared buffers but does still fit in filesystem cache.\n\nIt looks to me like the knee comes where the DB no longer fits in\nfilesystem cache. What's interesting is that there seems to be no\nsynergy at all between shared_buffers and the filesystem cache.\nIdeally, very hot pages would stay in shared buffers and drop out of the\nkernel cache, allowing you to use a database approximating all-of-RAM\nbefore you hit the performance wall. It's clear that in this example\nthat's not happening, or at least that only a small part of shared\nbuffers isn't getting duplicated in filesystem cache.\n\nOf course, that's because pgbench reads a randomly-chosen row of\n\"accounts\" in each transaction, so that there's exactly zero locality\nof access. A more realistic workload would probably have a Zipfian\ndistribution of account number touches, and might look a little better\non this type of test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Apr 2008 15:31:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance " }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> Gregory Stark <[email protected]> writes:\n>> The transition domain where performance drops dramatically as the database\n>> starts to not fit in shared buffers but does still fit in filesystem cache.\n>\n> It looks to me like the knee comes where the DB no longer fits in\n> filesystem cache. \n\nThat does seem to make a lot more sense. I think I misread the units of the\nsize of the accounts table. Reading it again it seems to be in the 1.5G-2G\nrange for the transition which with indexes and other tables might be starting\nto stress the filesystem cache -- though it still seems a little low.\n\nI think if I squint I can see another dropoff at the very small scaling\nnumbers. That must be the point where the database is comparable to the shared\nbuffers size. 
Except then I would expect the green and blue curves to be\npushed to the right a bit rather than just havin a shallower slope.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Mon, 14 Apr 2008 20:58:21 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "On Mon, 14 Apr 2008, Tom Lane wrote:\n\n> Ideally, very hot pages would stay in shared buffers and drop out of the\n> kernel cache, allowing you to use a database approximating all-of-RAM\n> before you hit the performance wall.\n\nWith \"pgbench -S\", the main hot pages that get elevated usage counts and \ncling persistantly to shared buffers are those holding data from the \nprimary key on the accounts table.\n\nHere's an example of what the buffer cache actually has after running \n\"pgbench -S -c 8 -t 10000 pgbench\" on a system with shared_buffers=256MB \nand a total of 2GB of RAM. Database scale is 100, so there's \napproximately 1.5GB worth of database, mainly a 1.3GB accounts table and \n171MB of primary key on accounts:\n\nrelname |buffered| buffers % | % of rel\naccounts | 306 MB | 65.3 | 24.7\naccounts pkey | 160 MB | 34.1 | 93.2\n\nrelname | buffers | usage\naccounts | 10223 | 0\naccounts | 25910 | 1\naccounts | 2825 | 2\naccounts | 214 | 3\naccounts | 14 | 4\naccounts pkey | 2173 | 0\naccounts pkey | 5392 | 1\naccounts pkey | 5086 | 2\naccounts pkey | 3747 | 3\naccounts pkey | 2296 | 4\naccounts pkey | 1756 | 5\n\nThis example and the queries to produce that summary are all from the \n\"Inside the PostgreSQL Buffer Cache\" talk on my web page.\n\nFor this simple workload, if you can fit the main primary key in shared \nbuffers that helps, but making that too large takes away memory that could \nbe more usefully given to the OS to manage. The fact that you can start \nto suffer from double-buffering (where the data is in the OS filesystem \ncache and shared_buffers) when making shared_buffers too large on a \nbenchmark workload is interesting. But I'd suggest considering the real \napplication, rather than drawing a conclusion about shared_buffers sizing \nbased just on that phenomenon.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 14 Apr 2008 16:08:48 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance " }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 14 Apr 2008, Gaetano Mendola wrote:\n> \n>> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.\n> \n> 8.2.3 has a performance bug that impacts how accurate pgbench results\n> are; you really should be using a later version.\n> \n>> http://img84.imageshack.us/my.php?image=totalid7.png\n>> as you can see using 64MB as value for shared_buffers I'm obtaining\n>> better results.\n> \n> I'm assuming you've read my scaling article at\n> http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm\n> since you're using the graph template I suggest there.\n> \n\nYes I was basically inspired from that page, my true goal is not to study\nthe effect of shared_buffers (this was a side effect) but to study the\nperformance lose using DRBD on our server. 
I'm producing similar graph\nusing pgperf without -S, I will post them as soon they are ready.\n\nRegards\nGaetano Mendola\n", "msg_date": "Tue, 15 Apr 2008 11:05:12 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 14 Apr 2008, Gaetano Mendola wrote:\n> \n>> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.\n> \n> 8.2.3 has a performance bug that impacts how accurate pgbench results\n> are; you really should be using a later version.\n\nThank you, I will give it a shot and performe some tests to see if\nthey change a lot, in case I will repeat the entire benchmarks.\n\nRegards\nGaetano Mendola\n", "msg_date": "Tue, 15 Apr 2008 11:11:31 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "Gaetano Mendola wrote:\n> Hi all,\n> I started to do some performance tests (using pgbench) in order to\n> estimate the DRBD impact on our servers, my plan was to perform some\n> benchmarks without DRBD in order to compare the same benchmark with\n> DRBD.\n> I didn't perform yet the benchmark with DRBD and I'm already facing\n> something I can not explain (I performed at the moment only reads test).\n> \n> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.\n> \n> I'm using pgbench with scaling factor with a range [1:500], my server\n> has 4 cores so I'm trying with 16 client and 4000 transaction per\n> client: pgbench -t 4000 -c 16 -S db_perf. I did 3 session using 3 different\n> values of shared_buffers: 64MB, 256MB, 512MB and my server has 2GB.\n> \n> The following graph reports the results:\n> \n> http://img84.imageshack.us/my.php?image=totalid7.png\n> \n> as you can see using 64MB as value for shared_buffers I'm obtaining better\n> results. 
Is this something expected or I'm looking in the wrong direction?\n> I'm going to perform same tests without using the -S option in pgbench but\n> being a time expensive operation I would like to ear your opinion first.\n\nI have complete today the other benchmarks using pgbench in write mode as well,\nand the following graph resumes the results:\n\nhttp://img440.imageshack.us/my.php?image=totalwbn0.png\n\nwhat I can say here the trend is the opposite seen on the read only mode as\nincreasing the shared_buffers increases the TPS.\n\nI still didn't upgrade to 8.2.7 as suggested by Greg Smith because I would like\nto compare the results obtained till now with the new one (simulations running\nwhile I write) using postgres on a \"DRBD partition\"; sure as soon the current\ntests terminate I will upgrade postgres.\n\nIf you have any suggestions on what you would like to see/know, just let me know.\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Tue, 15 Apr 2008 17:08:02 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared_buffers performance" }, { "msg_contents": "Hello,\n\nI need wo advice on vacuum settings.\n\nI have a quad core X5355 @ 2.66GHz with 8 Go of memory\n\n1Q) Why autovaccum does not work, I have set the value to on in \npostgresql.conf but when the server start it's still off !!!!\n\n2Q) Here are my settings for vacuum, could you help me to optimise those \nsettings, at the moment the vacuum analyse sent every night is taking \naround 18 h to run, which slow down the server performance.\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 5 # 0-1000 milliseconds\nvacuum_cost_page_hit = 1000 # 0-10000 credits\nvacuum_cost_page_miss = 1000 # 0-10000 credits\nvacuum_cost_page_dirty = 120 # 0-10000 credits\nvacuum_cost_limit = 20 # 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 50 # 10-10000 milliseconds between \nrounds\nbgwriter_lru_percent = 1.0 # 0-100% of LRU buffers \nscanned/round\nbgwriter_lru_maxpages = 25 # 0-1000 buffers max written/round\nbgwriter_all_percent = 0.333 # 0-100% of all buffers \nscanned/round\nbgwriter_all_maxpages = 50 # 0-1000 buffers max written/round\n\n\n\nThanks in advance for your helps\n\nRegards\n\nDavid\n", "msg_date": "Sun, 20 Apr 2008 19:48:53 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Vacuum settings" }, { "msg_contents": "dforums wrote:\n> Hello,\n>\n> I need wo advice on vacuum settings.\n>\n> I have a quad core X5355 @ 2.66GHz with 8 Go of memory\n>\n> 1Q) Why autovaccum does not work, I have set the value to on in \n> postgresql.conf but when the server start it's still off !!!!\n\nYou need to turn stats_row_level on too.\n\n> # - Cost-Based Vacuum Delay -\n>\n> vacuum_cost_delay = 5 # 0-1000 milliseconds\n> vacuum_cost_page_hit = 1000 # 0-10000 credits\n> vacuum_cost_page_miss = 1000 # 0-10000 credits\n> vacuum_cost_page_dirty = 120 # 0-10000 credits\n> vacuum_cost_limit = 20 # 0-10000 credits\n\nThe cost are all too high and the limit too low. I suggest resetting to\nthe default values, and figuring out a reasonable delay limit (your\ncurrent 5ms value seems a bit too low, but I think in most cases 10ms is\nthe practical limit due to sleep granularity in the kernel. 
In any\ncase, since the other values are all wrong I suggest just setting it to\n10ms and seeing what happens).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 21 Apr 2008 11:03:37 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum settings" }, { "msg_contents": "dforums <dforums 'at' vieonet.com> writes:\n\n> 2Q) Here are my settings for vacuum, could you help me to optimise\n> those settings, at the moment the vacuum analyse sent every night is\n> taking around 18 h to run, which slow down the server performance.\n\nIt's a lot of time for a daily job (and it is interesting to\nvacuum hot tables more often than daily). With typical settings,\nit's probable that autovacuum will run forever (e.g. at the end\nof run, another run will already be needed). You should first\nverify you don't have bloat in your tables (a lot of dead rows) -\nbloat can be created by too infrequent vacuuming and too low FSM\nsettings[1]. To fix the bloat, you can dump and restore your DB\nif you can afford interrupting your application, or use VACUUM\nFULL if you can afford blocking your application (disclaimer:\nmany posters here passionately disgust VACUUM FULL and keep on\nsuggesting the use of CLUSTER).\n\nRef: \n[1] to say whether you have bloat, you can use\n contrib/pgstattuple (you can easily add it to a running\n PostgreSQL). If the free_percent reported for interesting\n tables is large, and free_space is large compared to 8K, then\n you have bloat;\n\n another way is to dump your database, restore it onto another\n database, issue VACUUM VERBOSE on a given table on both\n databases (in live, and on the restore) and compare the\n reported number of pages needed. The difference is the\n bloat.\n\n live=# VACUUM VERBOSE interesting_table;\n [...]\n INFO: \"interesting_table\": found 408 removable, 64994 nonremovable row versions in 4395 pages\n\n restored=# VACUUM VERBOSE interesting_table;\n [...]\n INFO: \"interesting_table\": found 0 removable, 64977 nonremovable row versions in 628 pages\n\n => (4395-628)*8/1024.0 MB of bloat\n\n (IIRC, this VACUUM output is for 7.4, it has changed a bit\n since then)\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Mon, 21 Apr 2008 17:31:03 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum settings" }, { "msg_contents": "On Apr 14, 2008, at 3:31 PM, Tom Lane wrote:\n> Gregory Stark <[email protected]> writes:\n>> The transition domain where performance drops dramatically as the \n>> database\n>> starts to not fit in shared buffers but does still fit in \n>> filesystem cache.\n>\n> It looks to me like the knee comes where the DB no longer fits in\n> filesystem cache. What's interesting is that there seems to be no\n> synergy at all between shared_buffers and the filesystem cache.\n> Ideally, very hot pages would stay in shared buffers and drop out \n> of the\n> kernel cache, allowing you to use a database approximating all-of-RAM\n> before you hit the performance wall. It's clear that in this example\n> that's not happening, or at least that only a small part of shared\n> buffers isn't getting duplicated in filesystem cache.\n\nI suspect that we're getting double-buffering on everything because \nevery time we dirty a buffer and write it out the OS is considering \nthat as access, and keeping that data in it's cache. 
It would be \ninteresting to try and overcome that and see how it impacts things. \nWith our improvement in checkpoint handling, we might be able to just \nwrite via DIO... if not, maybe there's some way to tell the OS to \nbuffer the write for us, but target that data for removal from cache \nas soon as it's written.\n\n> Of course, that's because pgbench reads a randomly-chosen row of\n> \"accounts\" in each transaction, so that there's exactly zero locality\n> of access. A more realistic workload would probably have a Zipfian\n> distribution of account number touches, and might look a little better\n> on this type of test.\n\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 24 May 2008 13:55:15 -0400", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers performance " } ]
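For readers who want to reproduce the usage-count breakdown Greg Smith shows in the thread above, a rough equivalent (not necessarily the exact query from his talk) is sketched below. It assumes the contrib/pg_buffercache module has been installed in the database being inspected; beyond that it only touches the standard catalogs, and it ignores buffers that belong to other databases or to no relation at all.

-- buffers held in shared_buffers per relation and usage count,
-- for the current database only (requires contrib/pg_buffercache)
SELECT c.relname, count(*) AS buffers, b.usagecount AS usage
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = c.relfilenode
 AND b.reldatabase = (SELECT oid FROM pg_database
                      WHERE datname = current_database())
GROUP BY c.relname, b.usagecount
ORDER BY c.relname, b.usagecount;

Running it at a few different shared_buffers settings shows which relations (the accounts primary key, in the example above) actually benefit from a larger cache and which pages are just duplicating the filesystem cache.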
[ { "msg_contents": "Hi everyone,\n\nI have some serious performance problems on a database where some \nqueries take up to 100 (or even more) times longer occasionally. The \ndatabase itself consists of one bigger table (around 3.5GB in size and \naround 2 mio rows, 4-5 additional indexes) and some really small tables.\n\nThe queries in question (select's) occasionally take up to 5 mins even \nif they take ~2-3 sec under \"normal\" conditions, there are no \nsequencial scans done in those queries. There are not many users \nconnected (around 3, maybe) to this database usually since it's still \nin a testing phase. I tried to hunt down the problem by playing around \nwith resource usage cfg options but it didn't really made a difference.\n\nThe processes of such queries show up in 'uninterruptible sleep' state \nmore or less for the whole time afaict. When I strace(1) such a \nprocess I see tons of _llseek()'s and and quite some read()'s. \niostat(1) shows an utilization of close to 100% with iowait of 25-50% \n(4 CPU's).\n\nI assume that the server underequipped in terms of RAM. But even if \nthe the queries need to read data from the disk it seems odd to me \nthat the variance of the time spend is so enormously big. Is this \nnormal or am I correct with my assumtion that there's something wrong?\n\nHas anyone an idea what else I could do to find out what's the cause \nof my problem?\n\nThe server:\nLinux 2.6.15.6\nPostgreSQL 8.1.8\n4x Xeon CPU's\n1.5 GB Ram\n3x SCSI HD's (probably on a RAID-5 config, not quite sure though)\n\nRegards,\n\nTom\n", "msg_date": "Tue, 15 Apr 2008 21:01:49 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Oddly slow queries" }, { "msg_contents": "\n> The queries in question (select's) occasionally take up to 5 mins even \n> if they take ~2-3 sec under \"normal\" conditions, there are no sequencial \n> scans done in those queries. There are not many users connected (around \n> 3, maybe) to this database usually since it's still in a testing phase. \n> I tried to hunt down the problem by playing around with resource usage \n> cfg options but it didn't really made a difference.\n\n\tCould that be caused by a CHECKPOINT ?\n", "msg_date": "Wed, 16 Apr 2008 01:24:01 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "\nOn 16.04.2008, at 01:24, PFC wrote:\n>\n>> The queries in question (select's) occasionally take up to 5 mins \n>> even if they take ~2-3 sec under \"normal\" conditions, there are no \n>> sequencial scans done in those queries. There are not many users \n>> connected (around 3, maybe) to this database usually since it's \n>> still in a testing phase. I tried to hunt down the problem by \n>> playing around with resource usage cfg options but it didn't really \n>> made a difference.\n>\n> \tCould that be caused by a CHECKPOINT ?\n\n\nactually there are a few log (around 12 per day) entries concerning \ncheckpoints:\n\nLOG: checkpoints are occurring too frequently (10 seconds apart)\nHINT: Consider increasing the configuration parameter \n\"checkpoint_segments\".\n\nBut wouldn't that only affect write performance? 
The main problems I'm \nconcerned about affect SELECT queries.\n\nRegards,\n\nTom\n\n\nPS: WAL settings are all set to defaults.\n", "msg_date": "Wed, 16 Apr 2008 06:07:04 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On Wed, 16 Apr 2008 06:07:04 +0200, Thomas Spreng <[email protected]> wrote:\n\n>\n> On 16.04.2008, at 01:24, PFC wrote:\n>>\n>>> The queries in question (select's) occasionally take up to 5 mins even \n>>> if they take ~2-3 sec under \"normal\" conditions, there are no \n>>> sequencial scans done in those queries. There are not many users \n>>> connected (around 3, maybe) to this database usually since it's still \n>>> in a testing phase. I tried to hunt down the problem by playing around \n>>> with resource usage cfg options but it didn't really made a difference.\n>>\n>> \tCould that be caused by a CHECKPOINT ?\n>\n>\n> actually there are a few log (around 12 per day) entries concerning \n> checkpoints:\n>\n> LOG: checkpoints are occurring too frequently (10 seconds apart)\n> HINT: Consider increasing the configuration parameter \n> \"checkpoint_segments\".\n>\n> But wouldn't that only affect write performance? The main problems I'm \n> concerned about affect SELECT queries.\n\n\tOK, so if you get 12 of those per day, this means your checkpoint \ninterval isn't set to 10 seconds... I hope...\n\tThose probably correspond to some large update or insert query that comes \n from a cron or archive job ?... or a developer doing tests or filling a \ntable...\n\n\tSo, if it is checkpointing every 10 seconds it means you have a pretty \nhigh write load at that time ; and having to checkpoint and flush the \ndirty pages makes it worse, so it is possible that your disk(s) choke on \nwrites, also killing the selects in the process.\n\n\t-> Set your checkpoint log segments to a much higher value\n\t-> Set your checkpoint timeout to a higher value (5 minutes or \nsomething), to be tuned afterwards\n\t-> Tune bgwriter settings to taste (this means you need a realistic load, \nnot a test load)\n\t-> Use separate disk(s) for the xlog\n\t-> For the love of God, don't keep the RAID5 for production !\n\t(RAID5 + 1 small write = N reads + N writes, N=3 in your case)\n\tSince this is a test server I would suggest RAID1 for the OS and database \nfiles and the third disk for the xlog, if it dies you just recreate the \nDB...\n", "msg_date": "Wed, 16 Apr 2008 10:21:38 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "[email protected] (Thomas Spreng) writes:\n> On 16.04.2008, at 01:24, PFC wrote:\n>>\n>>> The queries in question (select's) occasionally take up to 5 mins\n>>> even if they take ~2-3 sec under \"normal\" conditions, there are no\n>>> sequencial scans done in those queries. There are not many users\n>>> connected (around 3, maybe) to this database usually since it's\n>>> still in a testing phase. I tried to hunt down the problem by\n>>> playing around with resource usage cfg options but it didn't really\n>>> made a difference.\n>>\n>> \tCould that be caused by a CHECKPOINT ?\n>\n> actually there are a few log (around 12 per day) entries concerning\n> checkpoints:\n>\n> LOG: checkpoints are occurring too frequently (10 seconds apart)\n> HINT: Consider increasing the configuration parameter\n> \"checkpoint_segments\".\n>\n> But wouldn't that only affect write performance? 
The main problems I'm\n> concerned about affect SELECT queries.\n\nNo, that will certainly NOT just affect write performance; if the\npostmaster is busy writing out checkpoints, that will block SELECT\nqueries that are accessing whatever is being checkpointed.\n\nWhen we were on 7.4, we would *frequently* see SELECT queries that\nshould be running Very Quick that would get blocked by the checkpoint\nflush.\n\nWe'd periodically see hordes of queries of the form:\n\n select id from some_table where unique_field = 'somevalue.something';\n\nwhich would normally run in less than 1ms running for (say) 2s.\n\nAnd the logs would show something looking rather like the following:\n\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'somevalue.something'; - 952ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'somevalue.something'; - 742ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'another.something'; - 1341ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'somevalue.something'; - 911ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'another.something'; - 1244ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'another.something'; - 2311ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'another.something'; - 1799ms\n2008-04-03 09:01:52 LOG select id from some_table where unique_field = 'somevalue.something'; - 1992ms\n\nThis was happening because the checkpoint was flushing those two\ntuples, and hence blocking 8 SELECTs that came in during the flush.\n\nThere are two things worth considering:\n\n1. If the checkpoints are taking place \"too frequently,\" then that is\nclear evidence that something is taking place that is injecting REALLY\nheavy update load on your database at those times.\n\nIf the postmaster is checkpointing every 10s, that implies Rather\nHeavy Load, so it is pretty well guaranteed that performance of other\nactivity will suck at least somewhat because this load is sucking up\nall the I/O bandwidth that it can.\n\nSo, to a degree, there may be little to be done to improve on this.\n\n2. On the other hand, if you're on 8.1 or so, you may be able to\nconfigure the Background Writer to incrementally flush checkpoint data\nearlier, and avoid the condition of 1.\n\nMind you, you'd have to set BgWr to be pretty aggressive, based on the\n\"10s periodicity\" that you describe; that may not be a nice\nconfiguration to have all the time :-(.\n-- \noutput = reverse(\"ofni.sesabatadxunil\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/multiplexor.html\nNagging is the repetition of unpalatable truths. --Baroness Edith\nSummerskill\n", "msg_date": "Wed, 16 Apr 2008 11:42:03 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "\nOn 16.04.2008, at 17:42, Chris Browne wrote:\n> [email protected] (Thomas Spreng) writes:\n>> On 16.04.2008, at 01:24, PFC wrote:\n>>>\n>>>> The queries in question (select's) occasionally take up to 5 mins\n>>>> even if they take ~2-3 sec under \"normal\" conditions, there are no\n>>>> sequencial scans done in those queries. There are not many users\n>>>> connected (around 3, maybe) to this database usually since it's\n>>>> still in a testing phase. 
I tried to hunt down the problem by\n>>>> playing around with resource usage cfg options but it didn't really\n>>>> made a difference.\n>>>\n>>> \tCould that be caused by a CHECKPOINT ?\n>>\n>> actually there are a few log (around 12 per day) entries concerning\n>> checkpoints:\n>>\n>> LOG: checkpoints are occurring too frequently (10 seconds apart)\n>> HINT: Consider increasing the configuration parameter\n>> \"checkpoint_segments\".\n>>\n>> But wouldn't that only affect write performance? The main problems \n>> I'm\n>> concerned about affect SELECT queries.\n>\n> No, that will certainly NOT just affect write performance; if the\n> postmaster is busy writing out checkpoints, that will block SELECT\n> queries that are accessing whatever is being checkpointed.\n\nWhat I meant is if there are no INSERT's or UPDATE's going on it \nshouldn't\naffect SELECT queries, or am I wrong?\n\nAll the data modification tasks usually run at night, during the day \nthere\nshouldn't be many INSERT's or UPDATE's going on.\n\n> When we were on 7.4, we would *frequently* see SELECT queries that\n> should be running Very Quick that would get blocked by the checkpoint\n> flush.\n\nHow did you actually see they were blocked by the checkpoint flushes?\nDo they show up as separate processes?\n\n> There are two things worth considering:\n>\n> 1. If the checkpoints are taking place \"too frequently,\" then that is\n> clear evidence that something is taking place that is injecting REALLY\n> heavy update load on your database at those times.\n>\n> If the postmaster is checkpointing every 10s, that implies Rather\n> Heavy Load, so it is pretty well guaranteed that performance of other\n> activity will suck at least somewhat because this load is sucking up\n> all the I/O bandwidth that it can.\n>\n> So, to a degree, there may be little to be done to improve on this.\n\nI strongly assume that those log entries showed up at night when the\nheavy insert routines are being run. I'm more concerned about the query\nperformance under \"normal\" conditions when there are very few \nmodifications\ndone.\n\n> 2. On the other hand, if you're on 8.1 or so, you may be able to\n> configure the Background Writer to incrementally flush checkpoint data\n> earlier, and avoid the condition of 1.\n>\n> Mind you, you'd have to set BgWr to be pretty aggressive, based on the\n> \"10s periodicity\" that you describe; that may not be a nice\n> configuration to have all the time :-(.\n\nI've just seen that the daily vacuum tasks didn't run, apparently. The \nDB\nhas almost doubled it's size since some days ago. I guess I'll have to\nVACUUM FULL (dump/restore might be faster, though) and check if that \nhelps\nanything.\n\nDoes a bloated DB affect the performance alot or does it only use up \ndisk\nspace?\n\nThanks for all the hints/help so far from both of you.\n\nCheers,\n\nTom\n", "msg_date": "Wed, 16 Apr 2008 23:48:21 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On Wed, Apr 16, 2008 at 11:48:21PM +0200, Thomas Spreng wrote:\n> What I meant is if there are no INSERT's or UPDATE's going on it \n> shouldn't\n> affect SELECT queries, or am I wrong?\n\nCHECKPOINTs also happen on a time basis. 
They should be short in that case,\nbut they still have to happen.\n\n", "msg_date": "Wed, 16 Apr 2008 23:19:47 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On Wed, Apr 16, 2008 at 3:48 PM, Thomas Spreng <[email protected]> wrote:\n>\n> On 16.04.2008, at 17:42, Chris Browne wrote:\n>\n> > [email protected] (Thomas Spreng) writes:\n> >\n> > > On 16.04.2008, at 01:24, PFC wrote:\n> > >\n> > > >\n> > > >\n> > > > > The queries in question (select's) occasionally take up to 5 mins\n> > > > > even if they take ~2-3 sec under \"normal\" conditions, there are no\n> > > > > sequencial scans done in those queries. There are not many users\n> > > > > connected (around 3, maybe) to this database usually since it's\n> > > > > still in a testing phase. I tried to hunt down the problem by\n> > > > > playing around with resource usage cfg options but it didn't really\n> > > > > made a difference.\n> > > > >\n> > > >\n> > > > Could that be caused by a CHECKPOINT ?\n> > > >\n> > >\n> > > actually there are a few log (around 12 per day) entries concerning\n> > > checkpoints:\n> > >\n> > > LOG: checkpoints are occurring too frequently (10 seconds apart)\n> > > HINT: Consider increasing the configuration parameter\n> > > \"checkpoint_segments\".\n> > >\n> > > But wouldn't that only affect write performance? The main problems I'm\n> > > concerned about affect SELECT queries.\n> > >\n> >\n> > No, that will certainly NOT just affect write performance; if the\n> > postmaster is busy writing out checkpoints, that will block SELECT\n> > queries that are accessing whatever is being checkpointed.\n> >\n>\n> What I meant is if there are no INSERT's or UPDATE's going on it shouldn't\n> affect SELECT queries, or am I wrong?\n\nBut checkpoints only occur every 10 seconds because of a high insert /\nupdate rate. So, there ARE inserts and updates going on, and a lot of\nthem, and they are blocking your selects when checkpoint hits.\n\nWhile adjusting your background writer might be called for, and might\nprovide you with some relief, you REALLY need to find out what's\npushing so much data into your db at once that it's causing a\ncheckpoint storm.\n", "msg_date": "Sat, 19 Apr 2008 11:04:06 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Thomas Spreng) wrote:\n> On 16.04.2008, at 17:42, Chris Browne wrote:\n>> [email protected] (Thomas Spreng) writes:\n>>> On 16.04.2008, at 01:24, PFC wrote:\n>>>>\n>>>>> The queries in question (select's) occasionally take up to 5 mins\n>>>>> even if they take ~2-3 sec under \"normal\" conditions, there are no\n>>>>> sequencial scans done in those queries. There are not many users\n>>>>> connected (around 3, maybe) to this database usually since it's\n>>>>> still in a testing phase. I tried to hunt down the problem by\n>>>>> playing around with resource usage cfg options but it didn't really\n>>>>> made a difference.\n>>>>\n>>>> \tCould that be caused by a CHECKPOINT ?\n>>>\n>>> actually there are a few log (around 12 per day) entries concerning\n>>> checkpoints:\n>>>\n>>> LOG: checkpoints are occurring too frequently (10 seconds apart)\n>>> HINT: Consider increasing the configuration parameter\n>>> \"checkpoint_segments\".\n>>>\n>>> But wouldn't that only affect write performance? 
The main problems\n>>> I'm\n>>> concerned about affect SELECT queries.\n>>\n>> No, that will certainly NOT just affect write performance; if the\n>> postmaster is busy writing out checkpoints, that will block SELECT\n>> queries that are accessing whatever is being checkpointed.\n>\n> What I meant is if there are no INSERT's or UPDATE's going on it\n> shouldn't affect SELECT queries, or am I wrong?\n\nYes, that's right. (Caveat: VACUUM would be a form of update, in this\ncontext...)\n\n> All the data modification tasks usually run at night, during the day\n> there shouldn't be many INSERT's or UPDATE's going on.\n>\n>> When we were on 7.4, we would *frequently* see SELECT queries that\n>> should be running Very Quick that would get blocked by the\n>> checkpoint flush.\n>\n> How did you actually see they were blocked by the checkpoint\n> flushes? Do they show up as separate processes?\n\nWe inferred this based on observed consistency of behaviour, and based\non having highly observant people like Andrew Sullivan around :-).\n\nIt definitely wasn't blatantly obvious. It *might* be easier to see\nin more recent versions, although BgWr makes the issue go away ;-).\n\n>> There are two things worth considering:\n>>\n>> 1. If the checkpoints are taking place \"too frequently,\" then that\n>> is clear evidence that something is taking place that is injecting\n>> REALLY heavy update load on your database at those times.\n>>\n>> If the postmaster is checkpointing every 10s, that implies Rather\n>> Heavy Load, so it is pretty well guaranteed that performance of\n>> other activity will suck at least somewhat because this load is\n>> sucking up all the I/O bandwidth that it can.\n>>\n>> So, to a degree, there may be little to be done to improve on this.\n>\n> I strongly assume that those log entries showed up at night when the\n> heavy insert routines are being run. I'm more concerned about the\n> query performance under \"normal\" conditions when there are very few\n> modifications done.\n\nI rather thought as much.\n\nYou *do* have to accept that when you get heavy update load, there\nwill be a lot of I/O, and in the absence of \"disk array fairies\" that\nmagically make bits get to the disks via automated mental telepathy\n;-), you have to live with the notion that there will be *some*\nside-effects on activity taking place at such times.\n\nOr you have to spend, spend, spend on heftier hardware. Sometimes too\nexpensive...\n\n>> 2. On the other hand, if you're on 8.1 or so, you may be able to\n>> configure the Background Writer to incrementally flush checkpoint data\n>> earlier, and avoid the condition of 1.\n>>\n>> Mind you, you'd have to set BgWr to be pretty aggressive, based on the\n>> \"10s periodicity\" that you describe; that may not be a nice\n>> configuration to have all the time :-(.\n>\n> I've just seen that the daily vacuum tasks didn't run,\n> apparently. The DB has almost doubled it's size since some days\n> ago. 
I guess I'll have to VACUUM FULL (dump/restore might be faster,\n> though) and check if that helps anything.\n\nIf you're locking out users, then it's probably a better idea to use\nCLUSTER to reorganize the tables, as that simultaneously eliminates\nempty space on tables *and indices.*\n\nIn contrast, after running VACUUM FULL, you may discover you need to\nreindex tables, because the reorganization of the *table* leads to\nbloating of the indexes.\n\nPre-8.3 (I *think*), there's a transactional issue with CLUSTER where\nit doesn't fully follow MVCC, so that \"dead, but still accessible, to\ncertain transactions\" tuples go away. That can cause surprises\n(e.g. - queries missing data) if applications are accessing the\ndatabase concurrently with the CLUSTER. It's safe as long as the DBA\ncan take over the database and block out applications. And at some\npoint, the MVCC bug got fixed.\n\nNote that you should check the output of a VACUUM VERBOSE run, and/or\nuse the contrib function pgsstattuples() to check how sparse the\nstorage usage is. There may only be a few tables that are behaving\nbadly, and cleaning up a few tables will be a lot less intrusive than\ncleaning up the whole database.\n\n> Does a bloated DB affect the performance alot or does it only use up\n> disk space?\n\nIt certainly can affect performance; if lots of pages are virtually\nempty, then you have to read more pages to find the data you're\nlooking for, and in such cases, you're mostly loading blank space into\npages in memory, cluttering memory up with \"mostly nothing.\"\n-- \nselect 'cbbrowne' || '@' || 'gmail.com';\nhttp://linuxdatabases.info/info/spreadsheets.html\nRules of the Evil Overlord #138. \"The passageways to and within my\ndomain will be well-lit with fluorescent lighting. Regrettably, the\nspooky atmosphere will be lost, but my security patrols will be more\neffective.\" <http://www.eviloverlord.com/>\n", "msg_date": "Sat, 19 Apr 2008 13:11:05 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "\nOn 19.04.2008, at 19:04, Scott Marlowe wrote:\n>>> No, that will certainly NOT just affect write performance; if the\n>>> postmaster is busy writing out checkpoints, that will block SELECT\n>>> queries that are accessing whatever is being checkpointed.\n>>>\n>>\n>> What I meant is if there are no INSERT's or UPDATE's going on it \n>> shouldn't\n>> affect SELECT queries, or am I wrong?\n>\n> But checkpoints only occur every 10 seconds because of a high insert /\n> update rate. So, there ARE inserts and updates going on, and a lot of\n> them, and they are blocking your selects when checkpoint hits.\n>\n> While adjusting your background writer might be called for, and might\n> provide you with some relief, you REALLY need to find out what's\n> pushing so much data into your db at once that it's causing a\n> checkpoint storm.\n\nthat's correct, there are nightly (at least at the moment) processes \nthat\ninsert around 2-3 mio rows and delete about the same amount. I can see \nthat\nthose 'checkpoints are occurring too frequently' messages are only \nlogged\nduring that timeframe.\n\nI assume that it's normal that so many INSERT's and DELETE's cause the\nbackground writer to choke a little bit. 
I guess I really need to \nadjust the\nprocesses to INSERT and DELETE rows in a slower pace if I want to do \nother\nqueries during the same time.\n\ncheers,\n\ntom\n", "msg_date": "Tue, 22 Apr 2008 11:48:01 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "\n> that's correct, there are nightly (at least at the moment) processes that\n> insert around 2-3 mio rows and delete about the same amount. I can see \n> that\n> those 'checkpoints are occurring too frequently' messages are only logged\n> during that timeframe.\n\n\tPerhaps you should increase the quantity of xlog PG is allowed to write \nbetween each checkpoint (this is checkpoint_segments). Checkpointing every \n10 seconds is going to slow down your inserts also, because of the need to \nfsync()'ing all those pages, not to mention nuking your IO-bound SELECTs. \nIncrease it till it checkpoints every 5 minutes or something.\n\n> I assume that it's normal that so many INSERT's and DELETE's cause the\n\n\tWell, also, do you use batch-processing or plpgsql or issue a huge mass \nof individual INSERTs via some script ?\n\tIf you use a script, make sure that each INSERT doesn't have its own \ntransaction (I think you know that since with a few millions of rows it \nwould take forever... unless you can do 10000 commits/s, in which case \neither you use 8.3 and have activated the \"one fsync every N seconds\" \nfeature, or your battery backed up cache works, or your disk is lying)...\n\tIf you use a script and the server is under heavy load you can :\n\tBEGIN\n\tProcess N rows (use multi-values INSERT and DELETE WHERE .. IN (...)), or \nexecute a prepared statement multiple times, or copy to temp table and \nprocess with SQL (usually much faster)\n\tCOMMIT\n\tSleep\n\tWash, rinse, repeat\n\n> background writer to choke a little bit. I guess I really need to adjust \n> the\n> processes to INSERT and DELETE rows in a slower pace if I want to do \n> other\n> queries during the same time.\n>\n> cheers,\n>\n> tom\n>\n\n\n", "msg_date": "Tue, 22 Apr 2008 12:29:29 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On 19.04.2008, at 19:11, Christopher Browne wrote:\n> Martha Stewart called it a Good Thing when [email protected] (Thomas \n> Spreng) wrote:\n>> On 16.04.2008, at 17:42, Chris Browne wrote:\n>> What I meant is if there are no INSERT's or UPDATE's going on it\n>> shouldn't affect SELECT queries, or am I wrong?\n>\n> Yes, that's right. (Caveat: VACUUM would be a form of update, in this\n> context...)\n\nthanks for pointing that out, at the moment we don't run autovacuum but\nVACUUM ANALYZE VERBOSE twice a day.\n\n>>> 2. On the other hand, if you're on 8.1 or so, you may be able to\n>>> configure the Background Writer to incrementally flush checkpoint \n>>> data\n>>> earlier, and avoid the condition of 1.\n>>>\n>>> Mind you, you'd have to set BgWr to be pretty aggressive, based on \n>>> the\n>>> \"10s periodicity\" that you describe; that may not be a nice\n>>> configuration to have all the time :-(.\n>>\n>> I've just seen that the daily vacuum tasks didn't run,\n>> apparently. The DB has almost doubled it's size since some days\n>> ago. 
I guess I'll have to VACUUM FULL (dump/restore might be faster,\n>> though) and check if that helps anything.\n>\n> If you're locking out users, then it's probably a better idea to use\n> CLUSTER to reorganize the tables, as that simultaneously eliminates\n> empty space on tables *and indices.*\n>\n> In contrast, after running VACUUM FULL, you may discover you need to\n> reindex tables, because the reorganization of the *table* leads to\n> bloating of the indexes.\n\nI don't VACUUM FULL but thanks for the hint.\n\n> Pre-8.3 (I *think*), there's a transactional issue with CLUSTER where\n> it doesn't fully follow MVCC, so that \"dead, but still accessible, to\n> certain transactions\" tuples go away. That can cause surprises\n> (e.g. - queries missing data) if applications are accessing the\n> database concurrently with the CLUSTER. It's safe as long as the DBA\n> can take over the database and block out applications. And at some\n> point, the MVCC bug got fixed.\n\nI think I'll upgrade PostgreSQL to the latest 8.3 version in the next\nfew days anyway, along with a memory upgrade (from 1.5GB to 4GB) and a\nnew 2x RAID-1 (instead of RAID-5) disk configuration. I hope that this\nhas already a noticeable impact on the performance.\n\n> Note that you should check the output of a VACUUM VERBOSE run, and/or\n> use the contrib function pgsstattuples() to check how sparse the\n> storage usage is. There may only be a few tables that are behaving\n> badly, and cleaning up a few tables will be a lot less intrusive than\n> cleaning up the whole database.\n\nThat surely is the case because about 90% of all data is stored in one\nbig table and most of the rows are deleted and newly INSERT'ed every\nnight.\n\ncheers,\n\ntom\n", "msg_date": "Tue, 22 Apr 2008 15:42:25 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On Tue, Apr 22, 2008 at 7:42 AM, Thomas Spreng <[email protected]> wrote:\n>\n> I think I'll upgrade PostgreSQL to the latest 8.3 version in the next\n> few days anyway, along with a memory upgrade (from 1.5GB to 4GB) and a\n> new 2x RAID-1 (instead of RAID-5) disk configuration. I hope that this\n> has already a noticeable impact on the performance.\n\nNote that if you have a good RAID controller with battery backed cache\nand write back enabled, then you're probably better or / at least as\nwell off using four disks in a RAID-10 than two separate RAID-1 sets\n(one for xlog and one for data).\n\nTest to see. I've had better performance in general with the RAID-10 setup.\n", "msg_date": "Tue, 22 Apr 2008 09:25:42 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddly slow queries" }, { "msg_contents": "On 22.04.2008, at 17:25, Scott Marlowe wrote:\n> On Tue, Apr 22, 2008 at 7:42 AM, Thomas Spreng <[email protected]> \n> wrote:\n>>\n>> I think I'll upgrade PostgreSQL to the latest 8.3 version in the next\n>> few days anyway, along with a memory upgrade (from 1.5GB to 4GB) \n>> and a\n>> new 2x RAID-1 (instead of RAID-5) disk configuration. 
I hope that \n>> this\n>> has already a noticeable impact on the performance.\n>\n> Note that if you have a good RAID controller with battery backed cache\n> and write back enabled, then you're probably better or / at least as\n> well off using four disks in a RAID-10 than two separate RAID-1 sets\n> (one for xlog and one for data).\n\nI just wanted to let you know that upgrading Postgres from 8.1 to 8.3,\nRAM from 1.5GB to 4GB and changing from a 3 disk RAID5 to 2x RAID1 (OS &\nWAL, Tablespace) led to a significant speed increase. What's especially\nimportant is that those randomly slow queries seem to be gone (for now).\n\nThanks for all the help.\n\nCheers,\n\nTom\n", "msg_date": "Sat, 26 Apr 2008 16:12:39 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddly slow queries" } ]
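The bloat check recommended in the thread above (via contrib's pgstattuple) can be run as a single query once the module is installed. In the sketch below, "big_table" is a placeholder for whichever table receives the nightly bulk deletes and inserts, not a name taken from the original posts.

-- how much of the table is live tuples, dead tuples and free space
-- (requires contrib/pgstattuple; it scans the whole table, so run it off-peak)
SELECT table_len, tuple_percent, dead_tuple_percent, free_percent
FROM pgstattuple('big_table');

A free_percent that stays high after the nightly jobs is the usual sign that routine VACUUM is not keeping up and that CLUSTER, or a dump and reload, is worth scheduling.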
[ { "msg_contents": "In 8.3.0, I'm seeing some oddities with SQL functions which I thought were\nimmune to the planner data restrictions of plpgsql functions and the sort.\n Basically I have a query which executes in 5ms but when wrapped in a SQL\nfunction, takes 500ms. I've checked all the types passed in to make sure\nthey match so there is no type conversions taking place in execution.\nI'm curious about the validity of my expectation that functions created with\nSQL as the language should be as fast as the straight SQL counterpart. I've\npreviously not run into such an order of magnitude difference in using SQL\nfunctions. Is this a change of behavior in 8.3 from 8.2? Without specific\nexamples, are there any recommendations on how to speed up these functions?\n\nThanks,\n\nGavin\n", "msg_date": "Wed, 16 Apr 2008 11:06:35 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Function Slowness, 8.3.0" }, { "msg_contents": "\"Gavin M. Roy\" <[email protected]> writes:\n> In 8.3.0, I'm seeing some oddities with SQL functions which I thought were\n> immune to the planner data restrictions of plpgsql functions and the sort.\n\nWithout a specific example this discussion is pretty content-free, but\nin general SQL functions face the same hazards of bad parameterized\nplans as plpgsql functions do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Apr 2008 11:09:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "Are you going to post the function? :-)\n\nMy PL/PGSQL functions are running fine in 8.3.x.\n\nCheers,\nmark\n\n\nGavin M. Roy wrote:\n> In 8.3.0, I'm seeing some oddities with SQL functions which I thought \n> were immune to the planner data restrictions of plpgsql functions and \n> the sort. Basically I have a query which executes in 5ms but when \n> wrapped in a SQL function, takes 500ms. I've checked all the types \n> passed in to make sure they match so there is no type conversions \n> taking place in execution.\n>\n> I'm curious about the validity of my expectation that functions \n> created with SQL as the language should be as fast as the straight SQL \n> counterpart. I've previously not run into such an order of magnitude \n> difference in using SQL functions. Is this a change of behavior in \n> 8.3 from 8.2? Without specific examples, are there any \n> recommendations on how to speed up these functions?\n>\n> Thanks,\n>\n> Gavin\n>\n\n\n-- \nMark Mielke <[email protected]>\n", "msg_date": "Wed, 16 Apr 2008 11:14:26 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "After detailed examination of pg_stat_user_indexes usage, it's clear that\nthe functions don't use the same indexes. I've casted everything to match\nthe indexes in the SQL function, to no success. Any suggestions on next\nsteps? Maybe for 8.4 we could find a way to explain analyze function\ninternals ;-)\nGavin\n\nOn Wed, Apr 16, 2008 at 11:09 AM, Tom Lane <[email protected]> wrote:\n\n> \"Gavin M. Roy\" <[email protected]> writes:\n> > In 8.3.0, I'm seeing some oddities with SQL functions which I thought\n> were\n> > immune to the planner data restrictions of plpgsql functions and the\n> sort.\n>\n> Without a specific example this discussion is pretty content-free, but\n> in general SQL functions face the same hazards of bad parameterized\n> plans as plpgsql functions do.\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 16 Apr 2008 14:44:40 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "On Wed, 16 Apr 2008 14:44:40 -0400\n\"Gavin M. Roy\" <[email protected]> wrote:\n\n> After detailed examination of pg_stat_user_indexes usage, it's clear\n> that the functions don't use the same indexes. I've casted\n> everything to match the indexes in the SQL function, to no success.\n> Any suggestions on next steps? 
Maybe for 8.4 we could find a way to\n> explain analyze function internals ;-)\n> Gavin\n\nTo quote Tom in the appropriate bottom posting method:\n\n> >\n> > Without a specific example this discussion is pretty content-free,\n> > but in general SQL functions face the same hazards of bad\n> > parameterized plans as plpgsql functions do.\n> >\n> > regards, tom lane\n> >\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 16 Apr 2008 11:58:08 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "\"Gavin M. Roy\" <[email protected]> writes:\n> After detailed examination of pg_stat_user_indexes usage, it's clear that\n> the functions don't use the same indexes. I've casted everything to match\n> the indexes in the SQL function, to no success. Any suggestions on next\n> steps? Maybe for 8.4 we could find a way to explain analyze function\n> internals ;-)\n\nYeah, this could be easier, but it's certainly possible to examine the\nplan generated for a function's parameterized statement. For instance,\nsay you're wondering about the plan for\n\n\tcreate function foo(int, text) ... as\n\t$$ select * from bar where f1 = $1 and f2 = $2 $$\n\tlanguage sql\n\nWhat you do is\n\nprepare p(int, text) as select * from bar where f1 = $1 and f2 = $2 ;\n\nexplain analyze execute p(42, 'hello world');\n\nIt works exactly the same for statements in plpgsql functions,\nremembering that both parameters and local variables of the function\nhave to become $n placeholders. Remember to make the parameters\nof the prepared statement have the same declared types as the\nfunction's parameters and variables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Apr 2008 16:24:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "On Wed, 2008-04-16 at 11:09 -0400, Tom Lane wrote:\n> \"Gavin M. Roy\" <[email protected]> writes:\n> > In 8.3.0, I'm seeing some oddities with SQL functions which I thought were\n> > immune to the planner data restrictions of plpgsql functions and the sort.\n> \n> Without a specific example this discussion is pretty content-free, but\n> in general SQL functions face the same hazards of bad parameterized\n> plans as plpgsql functions do.\n\nI think it would help if there was some way to prepare functions to\nallow them to be posted and understood more easily. 
These would help:\n\n* a name obfuscator, so people can post functions without revealing\ninner workings of their company and potentially lose intellectual\nproperty rights over code posted in that way\n\n* a pretty printer, so we can better understand them when we see 'em\n\nWithout these, I think we need to realise that many people will never\npost their SQL at all.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 17 Apr 2008 17:12:31 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I think it would help if there was some way to prepare functions to\n> allow them to be posted and understood more easily. These would help:\n\n> * a name obfuscator, so people can post functions without revealing\n> inner workings of their company and potentially lose intellectual\n> property rights over code posted in that way\n\n> * a pretty printer, so we can better understand them when we see 'em\n\nAren't these suggestions mutually contradictory?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Apr 2008 12:12:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I think it would help if there was some way to prepare functions to\n> > allow them to be posted and understood more easily. These would help:\n> \n> > * a name obfuscator, so people can post functions without revealing\n> > inner workings of their company and potentially lose intellectual\n> > property rights over code posted in that way\n> \n> > * a pretty printer, so we can better understand them when we see 'em\n> \n> Aren't these suggestions mutually contradictory?\n\nNo, they're orthogonal. The pretty printer would get the indenting and\nline feeds correct, the obfuscator would replace actual names with \"A\",\n\"B\" or \"Table1\" etc..\n\nObfuscating the names would make the code harder to understand, true,\nbut only if the code is written in English (or your language-of-choice).\nIt wouldn't damage our ability to read other language code at all.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 17 Apr 2008 17:25:57 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:\n>> Aren't these suggestions mutually contradictory?\n\n> No, they're orthogonal. The pretty printer would get the indenting and\n> line feeds correct, the obfuscator would replace actual names with \"A\",\n> \"B\" or \"Table1\" etc..\n\nHmm, that's not what I'd call an \"obfuscator\", more an \"anonymizer\".\nCode obfuscators are generally intended to make code unreadable\n(in fact de-pretty-printing is one of their chief techniques).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Apr 2008 12:41:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "On Thu, 2008-04-17 at 12:41 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:\n> >> Aren't these suggestions mutually contradictory?\n> \n> > No, they're orthogonal. 
The pretty printer would get the indenting and\n> > line feeds correct, the obfuscator would replace actual names with \"A\",\n> > \"B\" or \"Table1\" etc..\n> \n> Hmm, that's not what I'd call an \"obfuscator\", more an \"anonymizer\".\n> Code obfuscators are generally intended to make code unreadable\n> (in fact de-pretty-printing is one of their chief techniques).\n\nThat's a better term. I did say \"name obfuscator\" but I can see how that\nwas confusing, especially with the recent discussion on code\nobfuscation. Sorry about that.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 17 Apr 2008 17:50:55 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Simon Riggs wrote:\n\n> Obfuscating the names would make the code harder to understand, true,\n> but only if the code is written in English (or your language-of-choice).\n> It wouldn't damage our ability to read other language code at all.\n\nSpeaking of this sort of support tool, what I personally often wish for \nis unique error message identifiers that can be looked up (say, with a \nweb form) or a way to un/re-translate localized messages.\n\nI'm on one other mailing list where a wide variety of languages is in \nuse; however, on that list there are lots of experienced users - \nincluding most of the translators for the app - happy to help out in the \nusers preferred language or to translate. Here much of the help seems to \nbe from mostly English (only?) speakers, so a reverse message translator \nback to the English used in the sources would be pretty cool.\n\nI should have a play and see how hard it is to generate a reverse \ntranslation tool from the .po files.\n\nI do think that something that could substitute replacement generic \nvariable names consistently throughout a schema, set of queries, EXPLAIN \n/ EXPLAIN ANALYZE output, etc would be handy, though it'd be better if \npeople just posted their original code.\n\n--\nCraig Ringer\n\n", "msg_date": "Fri, 18 Apr 2008 01:00:43 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Craig Ringer wrote:\n> Simon Riggs wrote:\n>\n>> Obfuscating the names would make the code harder to understand, true,\n>> but only if the code is written in English (or your language-of-choice).\n>> It wouldn't damage our ability to read other language code at all.\n>\n> Speaking of this sort of support tool, what I personally often wish for \n> is unique error message identifiers that can be looked up (say, with a \n> web form) or a way to un/re-translate localized messages.\n\nI did spent some time a couple of years ago writing an April 1st joke\nthat proposed replacing error messages with unique error codes. I never\nsent it, but while writing the rationale part I realized that it could\nbe actually useful to drive a \"knowledge base\" kind of app. You could\nget back\n\n1. source code location where it is used\n2. occasions on which it has been reported before\n3. 
related bug fixes\n\n\n> I should have a play and see how hard it is to generate a reverse \n> translation tool from the .po files.\n\nThat would rock -- I have wished for such a thing (in fact I troll the\nPO catalogs by hand at times.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 17 Apr 2008 13:15:40 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Speaking of this sort of support tool, what I personally often wish for \n> is unique error message identifiers that can be looked up (say, with a \n> web form) or a way to un/re-translate localized messages.\n\nThe VERBOSE option already gives an exact pointer into the backend\nsources...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Apr 2008 19:38:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n> \n>> Speaking of this sort of support tool, what I personally often wish for \n>> is unique error message identifiers that can be looked up (say, with a \n>> web form) or a way to un/re-translate localized messages.\n>> \n>\n> The VERBOSE option already gives an exact pointer into the backend\n> sources...\n> \nThe trouble is getting people to use it. It's also not useful when \nyou're looking at yet another Hibernate/TopLink/whatever-generated \nbacktrace.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 18 Apr 2008 11:41:17 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Tom Lane wrote:\n>> Craig Ringer <[email protected]> writes:\n>>> Speaking of this sort of support tool, what I personally often wish for \n>>> is unique error message identifiers that can be looked up (say, with a \n>>> web form) or a way to un/re-translate localized messages.\n>> \n>> The VERBOSE option already gives an exact pointer into the backend\n>> sources...\n>> \n> The trouble is getting people to use it.\n\nSure, but what's your point? They won't provide a unique message\nidentifier without being pushed, either. (No, having such a thing\ndisplayed by default isn't going to happen.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Apr 2008 00:01:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0 " }, { "msg_contents": "Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n> \n>> Tom Lane wrote:\n>> \n>>> Craig Ringer <[email protected]> writes:\n>>> \n>>>> Speaking of this sort of support tool, what I personally often wish for \n>>>> is unique error message identifiers that can be looked up (say, with a \n>>>> web form) or a way to un/re-translate localized messages.\n>>>> \n>>> The VERBOSE option already gives an exact pointer into the backend\n>>> sources...\n>>>\n>>> \n>> The trouble is getting people to use it.\n>> \n>\n> Sure, but what's your point? They won't provide a unique message\n> identifier without being pushed, either. (No, having such a thing\n> displayed by default isn't going to happen.)\nIndeed. I was thinking of something like that appearing in default error \nmessages, otherwise I agree it'd be useless. 
It's just casual \nspeculation anyway, brought on by the previous discussion and thinking \nabout the fact that when working under Windows I actually find the VC++ \n`C' error codes _useful_.\n\nI'm more interested in re-translating messages, which I'll be having a \nbash at shortly. It's hardly important either (it's often possible to \njust figure it out given the context, or use the horribly mangled result \nfrom google translate to guess) but might be handy if it proves easy to \nput together.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 18 Apr 2008 12:21:46 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Slowness, 8.3.0" } ]
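A minimal sketch of the PREPARE / EXPLAIN ANALYZE recipe Tom describes above, for anyone who wants to see the plan a SQL function's parameterized query actually gets. The table, column and function names here (items, owner_id, state, lookup_items) are invented for illustration; substitute the real ones from the slow function and keep the parameter types identical.

-- throwaway schema so the example runs on its own
CREATE TABLE items (id serial PRIMARY KEY, owner_id integer, state text);
CREATE INDEX items_owner_state ON items (owner_id, state);

-- a SQL function whose body is one parameterized query
CREATE OR REPLACE FUNCTION lookup_items(p_owner integer, p_state text)
RETURNS SETOF items AS $$
    SELECT * FROM items WHERE owner_id = $1 AND state = $2;
$$ LANGUAGE sql STABLE;

-- prepare the same statement with parameter types matching the function's arguments ...
PREPARE p(integer, text) AS
    SELECT * FROM items WHERE owner_id = $1 AND state = $2;

-- ... then compare the parameterized plan against the plan for literal values
EXPLAIN ANALYZE EXECUTE p(42, 'active');
EXPLAIN ANALYZE SELECT * FROM items WHERE owner_id = 42 AND state = 'active';

DEALLOCATE p;

If the two plans differ (typically because the parameterized statement is planned without knowledge of the actual values and picks a less selective path), that difference, rather than function-call overhead, is usually what makes the function so much slower than the bare query.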
[ { "msg_contents": "Hi,\n\nto save some people a headache or two: I believe we just solved our \nperformance problem in the following scenario:\n\n- Linux 2.6.24.4\n- lots of RAM (32GB)\n- enough CPU power (4 cores)\n- disks with relatively slow random writes (SATA RAID-5 / 7 disks, 128K \nstripe, ext2)\n\nOur database is around 86GB, the busy parts being 20-30GB. Typical load \nis regular reads of all sizes (large joins, sequential scans on a 8GB \ntable, many small selects with few rows) interspersed with writes of \nseveral 1000s rows on the busier tables by several clients.\n\nAfter many tests and research revolving around the Linux I/O-Schedulers \n(which still have some issues one should be wary about: \nhttp://lwn.net/Articles/216853/) because we saw problems when occasional \n(intensive) writes completely starved all I/O, we discovered that \nchanging the default settings for the background writer seems to have \nsolved all these problems. Performance is much better now with fsync on \nthan it was with fsync off previously, no other configuration options \nhad a noticeable effect on performance (or these problems rather).\n\nThis helped with our configuration:\nbgwriter_delay = 10000ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n\nPreviously, our typical writes resulted in around 5-10MB/s going to disk \nand some reads stalling, now we are seeing typical disk I/O in the \n30-60MB/s range with write load present and no noticeable problems with \nreads except when autovacuum's \"analyze\" is running. Other options we \nhave tried/used were shared_buffers between 200MB and 20GB, wal_buffers \n= 256MB, wal_writer_delay=5000ms ...\n\nSo, using this is highly recommended and I would say that the \ndocumentation does not do it justice... (and yes, I could have figured \nit out earlier)\n\n-mjy\n", "msg_date": "Wed, 16 Apr 2008 17:22:50 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "Background writer underemphasized ... " }, { "msg_contents": "[email protected] (Marinos Yannikos) writes:\n> This helped with our configuration:\n> bgwriter_delay = 10000ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n\nFYI, I'd be inclined to reduce both of those numbers, as it should\nreduce the variability of behaviour.\n\nRather than cleaning 1K pages every 10s, I would rather clean 100\npages every 1s, as that will have much the same effect, but spread the\nwork more evenly. Or perhaps 10 pages every 100ms...\n\nCut the delay *too* low and this might make the background writer, in\neffect, poll *too* often, and start chewing resources, but there's\ndoubtless some \"sweet spot\" in between...\n-- \n\"cbbrowne\",\"@\",\"cbbrowne.com\"\nhttp://linuxdatabases.info/info/oses.html\n\"For systems, the analogue of a face-lift is to add to the control\ngraph an edge that creates a cycle, not just an additional node.\"\n-- Alan J. Perlis\n", "msg_date": "Wed, 16 Apr 2008 13:47:59 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." 
}, { "msg_contents": "In response to Marinos Yannikos <[email protected]>:\n\n> Hi,\n> \n> to save some people a headache or two: I believe we just solved our \n> performance problem in the following scenario:\n> \n> - Linux 2.6.24.4\n> - lots of RAM (32GB)\n> - enough CPU power (4 cores)\n> - disks with relatively slow random writes (SATA RAID-5 / 7 disks, 128K \n> stripe, ext2)\n> \n> Our database is around 86GB, the busy parts being 20-30GB. Typical load \n> is regular reads of all sizes (large joins, sequential scans on a 8GB \n> table, many small selects with few rows) interspersed with writes of \n> several 1000s rows on the busier tables by several clients.\n> \n> After many tests and research revolving around the Linux I/O-Schedulers \n> (which still have some issues one should be wary about: \n> http://lwn.net/Articles/216853/) because we saw problems when occasional \n> (intensive) writes completely starved all I/O, we discovered that \n> changing the default settings for the background writer seems to have \n> solved all these problems. Performance is much better now with fsync on \n> than it was with fsync off previously, no other configuration options \n> had a noticeable effect on performance (or these problems rather).\n> \n> This helped with our configuration:\n> bgwriter_delay = 10000ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n\nWhat other values have you tried for this? Have you watched closely\nunder load to ensure that you're not seeing a huge performance hit\nevery 10s when the bgwriter kicks off?\n\nI'm with Chris -- I would be inclined to try a range of values to find\na sweet spot, and I would be _very_ shocked to find that sweet spot\nat the values you mention. However, if that really is the demonstrable\nsweet spot, there may be something we all can learn.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 16 Apr 2008 14:43:35 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "On Wed, 16 Apr 2008, Marinos Yannikos wrote:\n\n> to save some people a headache or two: I believe we just solved our \n> performance problem in the following scenario:\n\nI was about to ask your PostgreSQL version but since I see you mention \nwal_writer_delay it must be 8.3. Knowing your settings for shared_buffers \nand checkpoint_segments in particular would make this easier to \nunderstand.\n\nYou also didn't mention what disk controller you have, or how much write \ncache it has (if any).\n\n> This helped with our configuration:\n> bgwriter_delay = 10000ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n\nThe default for bgwriter_delay is 200ms = 5 passes/second. You're \nincreasing that to 10000ms means one pass every 10 seconds instead. \nThat's almost turning the background writer off. If that's what improved \nyour situation, you might as well as turn it off altogether by setting all \nthe bgwriter_lru_maxpages parameters to be 0. 
The combination you \ndescribe here, running very infrequently but with lru_maxpages set to its \nmaximum, is a bit odd.\n\n> Other options we have tried/used were shared_buffers between 200MB and \n> 20GB, wal_buffers = 256MB, wal_writer_delay=5000ms ...\n\nThe useful range for wal_buffers tops at around 1MB, so no need to get \nextreme there. wal_writer_delay shouldn't matter here unless you turned \non asyncronous commit.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 16 Apr 2008 15:45:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ... " }, { "msg_contents": "On Wed, 16 Apr 2008, Bill Moran wrote:\n\n>> bgwriter_delay = 10000ms # 10-10000ms between rounds\n>> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n> Have you watched closely under load to ensure that you're not seeing a \n> huge performance hit every 10s when the bgwriter kicks off?\n\nbgwriter_lru_maxpages = 1000 means that any background writer pass can \nwrite at most 1000 pages = 8MB. Those are buffered writes going into the \nOS cache, which it will write out at its own pace later. That isn't going \nto cause a performance hit when it happens.\n\nThat isn't the real mystery though--where's the RAID5 rant I was expecting \nfrom you?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 16 Apr 2008 15:55:51 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "In response to Greg Smith <[email protected]>:\n\n> On Wed, 16 Apr 2008, Bill Moran wrote:\n> \n> >> bgwriter_delay = 10000ms # 10-10000ms between rounds\n> >> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round\n> > Have you watched closely under load to ensure that you're not seeing a \n> > huge performance hit every 10s when the bgwriter kicks off?\n> \n> bgwriter_lru_maxpages = 1000 means that any background writer pass can \n> write at most 1000 pages = 8MB. Those are buffered writes going into the \n> OS cache, which it will write out at its own pace later. That isn't going \n> to cause a performance hit when it happens.\n> \n> That isn't the real mystery though--where's the RAID5 rant I was expecting \n> from you?\n\nOh crap ... he _is_ using RAID-5! I completely missed an opportunity to\nrant!\n\nblah blah blah ... RAID-5 == evile, etc ...\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 16 Apr 2008 16:07:24 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "Greg Smith schrieb:\n> You also didn't mention what disk controller you have, or how much write \n> cache it has (if any).\n\n8.3.1, Controller is \nhttp://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB \ncache (writeback was enabled).\n\n> That's almost turning the background writer off. If that's what \n> improved your situation, you might as well as turn it off altogether by \n> setting all the bgwriter_lru_maxpages parameters to be 0. 
The \n> combination you describe here, running very infrequently but with \n> lru_maxpages set to its maximum, is a bit odd.\n\nPerhaps the background writer takes too long to find the required number \nof dirty pages among the 16GB shared buffers (currently), which should \nbe mostly clean. We could reduce the shared buffers to a more commonly \nused amount (<= 2GB or so) but some of our most frequently used tables \nare in the 8+ GB range and sequential scans are much faster with this \nsetting (for ~, ~* etc.).\n\n>> Other options we have tried/used were shared_buffers between 200MB and \n>> 20GB, wal_buffers = 256MB, wal_writer_delay=5000ms ...\n> \n> The useful range for wal_buffers tops at around 1MB, so no need to get \n> extreme there. wal_writer_delay shouldn't matter here unless you turned \n> on asyncronous commit.\n\nI was under the impression that wal_buffers should be kept at/above the \nsize of tyical transactions. We do have some large-ish ones that are \ntime-critical.\n\n-mjy\n", "msg_date": "Thu, 17 Apr 2008 20:46:05 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "On Thu, 17 Apr 2008, Marinos Yannikos wrote:\n\n> Controller is \n> http://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB \n> cache (writeback was enabled).\n\nAh. Sometimes these fiber channel controllers can get a little weird \n(compared with more direct storage) when the cache gets completely filled. \nIf you think about it, flushing 2GB out takes takes a pretty significant \nperiod amount of time even at 4Gbps, and once all the caches at every \nlevel are filled it's possible for that to turn into a bottleneck.\n\nUsing the background writer more assures that the cache on the controller \nis going to be written to aggressively, so it may be somewhat filled \nalready come checkpoint time. If you leave the writer off, when the \ncheckpoint comes you're much more likely to have the full 2GB available to \nabsorb a large block of writes.\n\nYou suggested a documentation update; it would be fair to suggest that \nthere are caching/storage setups where even the 8.3 BGW might just be \ngetting in the way. The right thing to do there is just turn it off \naltogether, which should work a bit better than the exact tuning you \nsuggested.\n\n> Perhaps the background writer takes too long to find the required number of \n> dirty pages among the 16GB shared buffers (currently), which should be mostly \n> clean.\n\nThat would only cause a minor increase in CPU usage. You certainly don't \nwant to reduce shared_buffers for all the reasons you list.\n\n> I was under the impression that wal_buffers should be kept at/above the size \n> of tyical transactions.\n\nIt doesn't have to be large enough to hold a whole transaction, just big \nenough that when it fills and a write is forced that write isn't trivially \nsmall (and therefore wasteful in terms of I/O size). 
There's a fairly \ngood discussion of what's actually involved here at \nhttp://archives.postgresql.org/pgsql-advocacy/2003-02/msg00053.php ; as I \nsuggested, I've seen and heard others report small improvements in raising \nfrom the tiny default value to the small MB range, but beyond that you're \njust wasting RAM that could buffer database pages instead.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 19 Apr 2008 05:22:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "Greg Smith wrote:\n> Using the background writer more assures that the cache on the \n> controller is going to be written to aggressively, so it may be \n> somewhat filled already come checkpoint time. If you leave the writer \n> off, when the checkpoint comes you're much more likely to have the \n> full 2GB available to absorb a large block of writes.\nBut isn't it the case that while using background writer might result in \n*slightly* more data to write (since data that is updated several times \nmight actually be sent several times), the total amount of data in both \ncases is much the same? And if the buffer backed up in the BGW case, \nwouldn't it also back up (more?) if the writes are deferred? And in \nfact by sending earlier, the real bottleneck (the disks) could have been \ngetting on with it and staring their IO earlier?\n\nCan you explian your reasoning a bit more?\n\nJames\n\n", "msg_date": "Sat, 19 Apr 2008 16:06:05 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "On Sat, 19 Apr 2008, James Mansion wrote:\n\n> But isn't it the case that while using background writer might result in \n> *slightly* more data to write (since data that is updated several times \n> might actually be sent several times), the total amount of data in both \n> cases is much the same?\n\nReally depends on your workload, how many wasted writes there are. It \nmight be significant, it might only be slight.\n\n> And if the buffer backed up in the BGW case, wouldn't it also back up \n> (more?) if the writes are deferred? And in fact by sending earlier, the \n> real bottleneck (the disks) could have been getting on with it and \n> staring their IO earlier?\n\nIf you write a giant block of writes, those tend to be sorted by the OS \nand possibly the controller to reduce total seeks. That's a pretty \nefficient write and it can clear relatively fast.\n\nBut if you're been trickling writes in an unstructured form and in low \nvolume, there can be a stack of them that aren't sorted well blocking the \nqueue from clearing. With a series of small writes, it's not that \ndifficult to end up in a situation where a controller cache is filled with \nwrites containing a larger seek component than you'd have gotten had you \nwritten in larger blocks that took advantage of more OS-level elevator \nsorting. 
There's actually a pending patch to try and improve this \nsituation in regards to checkpoint writes in the queue.\n\nSeeks are so slow compared to more sequential writes that you really can \nend up in the counterintuitive situation that you finish faster by \navoiding early writes, even in cases when the disk is the bottleneck.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 19 Apr 2008 15:29:38 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "Greg Smith wrote:\n> If you write a giant block of writes, those tend to be sorted by the \n> OS and possibly the controller to reduce total seeks. That's a pretty \n> efficient write and it can clear relatively fast.\n>\n> But if you're been trickling writes in an unstructured form and in low \n> volume, there can be a stack of them that aren't sorted well blocking \n> the queue from clearing. With a series of small writes, it's not that \n> difficult to end up in a situation where a controller cache is filled \n> with writes containing a larger seek component than you'd have gotten \n> had you written in larger blocks that took advantage of more OS-level \n> elevator sorting. There's actually a pending patch to try and improve \n> this situation in regards to checkpoint writes in the queue.\n>\n> Seeks are so slow compared to more sequential writes that you really \n> can end up in the counterintuitive situation that you finish faster by \n> avoiding early writes, even in cases when the disk is the bottleneck.\nI'm sorry but I am somewhat unconvinced by this.\n\nI accept that by early submission the disk subsystem may end up doing \nmore seeks and more writes in total, but when the dam breaks at the \nstart of the checkpoint, how can it help to have _more_ data write \nvolume and _more_ implied seeks offered up at that point?\n\nAre you suggesting that the disk subsystem has already decided on its \nstrategy for a set of seeks and writes and will not insert new \ninstructions into an existing elevator plan until it is completed and it \nlooks at the new requests? This sounds a bit tenuous at best - almost to \nthe point of being a bug. Do you believe this is universal?\n\nJames\n\n", "msg_date": "Sun, 20 Apr 2008 15:41:13 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." }, { "msg_contents": "On Sun, 20 Apr 2008, James Mansion wrote:\n\n> Are you suggesting that the disk subsystem has already decided on its \n> strategy for a set of seeks and writes and will not insert new \n> instructions into an existing elevator plan until it is completed and it \n> looks at the new requests?\n\nNo, just that each component only gets to sort across what it sees, and \nbecause of that the sorting horizon may not be optimized the same way \ndepending on how writes are sent.\n\nLet me try to construct a credible example of this elusive phenomenon:\n\n-We have a server with lots of RAM\n-The disk controller cache has 256MB of cache\n-We have 512MB of data to write that's spread randomly across the database \ndisk.\n\nCase 1: Write early\n\nLet's say the background writer writes a sample of 1/2 the data right now \nin anticipation of needing those buffers for something else soon. It's \nnow in the controller's cache and sorted already. The controller is \nworking on it. 
Presume it starts at the beginning of the disk and works \nits way toward the end, seeking past gaps in between as needed.\n\nThe checkpoint hits just after that happens. The remaining 256MB gets \ndumped into the OS buffer cache. This gets elevator sorted by the OS, \nwhich will now write it out to the card in sorted order, beginning to end. \nBut writes to the controller will block because most of the cache is \nfilled, so they trickle in as data writes are completed and the cache gets \nspace. Let's presume they're all ignored, because the drive is working \ntoward the end and these are closer to the beginning than the ones it's \nworking on.\n\nNow the disk is near the end of its logical space, and there's a cache \nfull of new dirty checkpoint data. But the OS has finished spooling all \nits dirty stuff into the cache so the checkpoint is over. During that \ncheckpoint the disk has to seek enough to cover the full logical \"length\" \nof the volume. The controller will continue merrily writing now until its \ncache clears again, moving from the end of the disk back to the beginning \nagain.\n\nCase 2: Delayed writes, no background writer use\n\nThe checkpoint hits. 512MB of data gets dumped into the OS cache. It \nsorts and feeds that in sorted order into the cache. Drive starts at the \nbeginning and works it way through everything. By the time it's finished \nseeking its way across half the disk, the OS is now unblocked becuase the \nremaining data is in the cache.\n\nCan you see how in this second case, it may very well be that the \ncheckpoint finishes *faster* because we waited longer to start writing? \nBecause the OS has a much larger elevator sorting capacity than the disk \ncontroller, leaving data in RAM and waiting until there's more of it \nqueued up there has approximately halved the number/size of seeks involved \nbefore the controller can say it's absorbed all the writes.\n\n> This sounds a bit tenuous at best - almost to the point of being a \n> bug. Do you believe this is universal?\n\nOf course not, or the background writer would be turned off by default. \nThere are occasional reports where it just gets in the way, typically in \nones where the controller has its own cache and there's a bad interaction \nthere.\n\nThis is not unique to this situation, so in that sense this class of \nproblems is universal. There's all kinds of operating sytems \nconfigurations that are tuned to delay writing in hopes of making those \nwrites more efficient, because the OS usually has a much larger capacity \nfor buffering pages to optimize what's going to happen than the downstream \ncontroller/disk caches do. Once you've filled a downstream cache, you may \nnot be able to influence what that device executing those requests does \nanymore until that cache clears.\n\nNote that the worst-case situation here actually gets worse in some \nrespects the larger the downstream cache is, because there's that much \nmore data you have to wait to clear before you can necessarily influence \nwhat the disks are doing if you've made a bad choice in what you asked it \nto write early. 
If the disk head is too far away from where you want to \nwrite or read to now, you can be in for quite a wait before it gets back \nyour way if the filled cache is large.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 23 Apr 2008 01:11:01 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background writer underemphasized ..." } ]
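For easier side-by-side comparison, here is a sketch of the postgresql.conf lines this thread revolves around, written for 8.3; the values are the ones discussed above rather than general recommendations:

# Marinos's setting: one large cleaning pass every 10 seconds
bgwriter_delay = 10000ms         # top of the allowed 10-10000ms range
bgwriter_lru_maxpages = 1000     # at most 1000 buffers (8MB) per pass

# Greg's alternative for hardware with a large battery-backed write cache:
# disable the LRU background writer entirely and leave the cache free
# to absorb the checkpoint writes
#bgwriter_lru_maxpages = 0

# the useful range for wal_buffers tops out around 1MB
wal_buffers = 1MB

On 8.3 the effect of either choice can be watched in the pg_stat_bgwriter view, for example:

SELECT buffers_clean, buffers_checkpoint, buffers_backend, maxwritten_clean
  FROM pg_stat_bgwriter;

where buffers_clean counts pages written by the background writer, buffers_checkpoint pages written at checkpoint time, and buffers_backend pages ordinary backends had to write for themselves.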
[ { "msg_contents": "Thinking about buying the Powervault MD3000 SAS array with 15 15k\n300GB disks for use as a postgres tablespace. Is anyone using these\n(or other LSI/Engenio rebadge jobs?). I'm interested in hearing about\nperformance of the array, and problems (if any) with Dell's SAS HBA\nthat comes bundled. Also interested in performance of the maximum\nconfig of an MD3000 with two MD1000 shelves.\n\n-Jeff\n", "msg_date": "Wed, 16 Apr 2008 13:15:12 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "Might want to check out the HP MSA70 arrays. I've had better luck with them\nand you can get 25 drives in a smaller rack unit size. I had a bad\nexperience with the MD3000 and now only buy MD1000's with Perc 6/e when I\nbuy Dell.\nGood luck!\n\nOn Wed, Apr 16, 2008 at 4:15 PM, Jeffrey Baker <[email protected]> wrote:\n\n> Thinking about buying the Powervault MD3000 SAS array with 15 15k\n> 300GB disks for use as a postgres tablespace. Is anyone using these\n> (or other LSI/Engenio rebadge jobs?). I'm interested in hearing about\n> performance of the array, and problems (if any) with Dell's SAS HBA\n> that comes bundled. Also interested in performance of the maximum\n> config of an MD3000 with two MD1000 shelves.\n>\n> -Jeff\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMight want to check out the HP MSA70 arrays.  I've had better luck with them and you can get 25 drives in a smaller rack unit size.  I had a bad experience with the MD3000 and now only buy MD1000's with Perc 6/e when I buy Dell.\nGood luck!On Wed, Apr 16, 2008 at 4:15 PM, Jeffrey Baker <[email protected]> wrote:\nThinking about buying the Powervault MD3000 SAS array with 15 15k\n300GB disks for use as a postgres tablespace.  Is anyone using these\n(or other LSI/Engenio rebadge jobs?).  I'm interested in hearing about\nperformance of the array, and problems (if any) with Dell's SAS HBA\nthat comes bundled.  Also interested in performance of the maximum\nconfig of an MD3000 with two MD1000 shelves.\n\n-Jeff\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 16 Apr 2008 16:17:10 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "On Wed, 16 Apr 2008 16:17:10 -0400\n\n> On Wed, Apr 16, 2008 at 4:15 PM, Jeffrey Baker <[email protected]>\n> wrote:\n> \n> > Thinking about buying the Powervault MD3000 SAS array with 15 15k\n> > 300GB disks for use as a postgres tablespace. Is anyone using these\n> > (or other LSI/Engenio rebadge jobs?). I'm interested in hearing\n> > about performance of the array, and problems (if any) with Dell's\n> > SAS HBA that comes bundled. Also interested in performance of the\n> > maximum config of an MD3000 with two MD1000 shelves.\n> >\n> > -Jeff\n\n<moved Gavin's reply below>\n\n\"Gavin M. Roy\" <[email protected]> wrote:\n\n> Might want to check out the HP MSA70 arrays. I've had better luck\n> with them and you can get 25 drives in a smaller rack unit size. I\n> had a bad experience with the MD3000 and now only buy MD1000's with\n> Perc 6/e when I buy Dell.\n> Good luck!\n> \n\nI can second this. 
The MSA 70 is a great unit for the money.\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 16 Apr 2008 13:20:22 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "On Wed, Apr 16, 2008 at 1:20 PM, Joshua D. Drake <[email protected]> wrote:\n> On Wed, 16 Apr 2008 16:17:10 -0400\n>\n>\n> > On Wed, Apr 16, 2008 at 4:15 PM, Jeffrey Baker <[email protected]>\n> > wrote:\n> >\n> > > Thinking about buying the Powervault MD3000 SAS array with 15 15k\n> > > 300GB disks for use as a postgres tablespace. Is anyone using these\n> > > (or other LSI/Engenio rebadge jobs?). I'm interested in hearing\n> > > about performance of the array, and problems (if any) with Dell's\n> > > SAS HBA that comes bundled. Also interested in performance of the\n> > > maximum config of an MD3000 with two MD1000 shelves.\n> > >\n> > > -Jeff\n>\n> <moved Gavin's reply below>\n> \"Gavin M. Roy\" <[email protected]> wrote:\n>\n> > Might want to check out the HP MSA70 arrays. I've had better luck\n> > with them and you can get 25 drives in a smaller rack unit size. I\n> > had a bad experience with the MD3000 and now only buy MD1000's with\n> > Perc 6/e when I buy Dell.\n> > Good luck!\n> >\n>\n> I can second this. The MSA 70 is a great unit for the money.\n\nThank you both. The MSA 70 looks like an ordinary disk shelf. What\ncontrollers do you use? Or, do you just go with a software RAID?\n", "msg_date": "Wed, 16 Apr 2008 13:37:32 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "On Wed, 16 Apr 2008 13:37:32 -0700\n\"Jeffrey Baker\" <[email protected]> wrote:\n\n> > I can second this. The MSA 70 is a great unit for the money.\n> \n> Thank you both. The MSA 70 looks like an ordinary disk shelf. What\n> controllers do you use? Or, do you just go with a software RAID?\n> \n\nP800, from HP.\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 16 Apr 2008 13:39:19 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "On Wed, Apr 16, 2008 at 4:39 PM, Joshua D. Drake <[email protected]>\nwrote:\n\n> On Wed, 16 Apr 2008 13:37:32 -0700\n> \"Jeffrey Baker\" <[email protected]> wrote:\n>\n> > > I can second this. The MSA 70 is a great unit for the money.\n> >\n> > Thank you both. The MSA 70 looks like an ordinary disk shelf. What\n> > controllers do you use? Or, do you just go with a software RAID?\n> >\n>\n> P800, from HP.\n\n\nIn a Dell box I use a Perc 6/E with a SAS to Mini SAS cable.\n\nGavin\n\nOn Wed, Apr 16, 2008 at 4:39 PM, Joshua D. Drake <[email protected]> wrote:\nOn Wed, 16 Apr 2008 13:37:32 -0700\n\"Jeffrey Baker\" <[email protected]> wrote:\n\n> >  I can second this. 
The MSA 70 is a great unit for the money.\n>\n> Thank you both.  The MSA 70 looks like an ordinary disk shelf.  What\n> controllers do you use?  Or, do you just go with a software RAID?\n>\n\nP800, from HP.In a Dell box I use a Perc 6/E with a SAS to Mini SAS cable.Gavin", "msg_date": "Wed, 16 Apr 2008 16:52:57 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" }, { "msg_contents": "Gavin M. Roy wrote:\n> On Wed, Apr 16, 2008 at 4:39 PM, Joshua D. Drake <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On Wed, 16 Apr 2008 13:37:32 -0700\n> \"Jeffrey Baker\" <[email protected] <mailto:[email protected]>> wrote:\n> \n> > > I can second this. The MSA 70 is a great unit for the money.\n> >\n> > Thank you both. The MSA 70 looks like an ordinary disk shelf. What\n> > controllers do you use? Or, do you just go with a software RAID?\n> >\n> \n> P800, from HP.\n> \n> \n> In a Dell box I use a Perc 6/E with a SAS to Mini SAS cable.\n\nThere was a fairly long recent thread discussing the Dell Perc 6 controller starting here:\n\n http://archives.postgresql.org/pgsql-performance/2008-03/msg00264.php\n\nand one relevant follow-up regarding the MD1000 box:\n\n http://archives.postgresql.org/pgsql-performance/2008-03/msg00280.php\n\n(Unfortunately, the Postgres web archive does a terrible job formatting plain-old-text messages, it doesn't seem to know that it should wrap paragraphs, so some of these are pretty hard to read as web pages.)\n\nCraig\n", "msg_date": "Wed, 16 Apr 2008 15:47:08 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anybody using the Dell Powervault MD3000 array?" } ]
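Whatever enclosure and controller combination ends up being chosen, a quick sequential-throughput burn-in before putting a tablespace on the array tends to catch misconfigured controllers or cache settings early. This is only a rough sketch; the mount point and sizes are placeholders, and the test file should be comfortably larger than RAM for the read pass to mean anything:

# write roughly 16GB through the filesystem and force it out to the array
dd if=/dev/zero of=/mnt/newarray/ddtest bs=8k count=2000000 conv=fdatasync

# read it back
dd if=/mnt/newarray/ddtest of=/dev/null bs=8k

Results far below what the disks should deliver on paper usually point at the controller, stripe or cache configuration rather than anything PostgreSQL will later be able to fix.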
[ { "msg_contents": "Hi....\nIam finding the following query is working a bit slow:\nEXECUTE '(SELECT ARRAY(SELECT DISTINCT date_part(''day'', measurement_start)\nFROM ' || gettablestring(dates)|| '\nWHERE lane_id IN (' || lanesidarr || ')))'\nINTO temparr;\n\nThis function is trying to find all the days in a prticular month\nwhihc has data for the particular lane and put it in an array...which\ncan be used later.\ngettablestring(dates) returns the partition name from which the data\nneeds to be extracted. These partitions have index on the\nmeasurement_start field.\nlanesidarr is a lane number. The partition has an index on this field to.\nCould anyone give me some hints???/\n\nThanks\nSam\n", "msg_date": "Wed, 16 Apr 2008 17:14:11 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query running slow" }, { "msg_contents": "On Wed, 16 Apr 2008 23:14:11 +0200, samantha mahindrakar \n<[email protected]> wrote:\n\n> Hi....\n> Iam finding the following query is working a bit slow:\n> EXECUTE '(SELECT ARRAY(SELECT DISTINCT date_part(''day'', \n> measurement_start)\n> FROM ' || gettablestring(dates)|| '\n> WHERE lane_id IN (' || lanesidarr || ')))'\n> INTO temparr;\n>\n> This function is trying to find all the days in a prticular month\n> whihc has data for the particular lane and put it in an array...which\n> can be used later.\n> gettablestring(dates) returns the partition name from which the data\n> needs to be extracted. These partitions have index on the\n> measurement_start field.\n> lanesidarr is a lane number. The partition has an index on this field to.\n> Could anyone give me some hints???/\n\n\tOK so I guess you have one partition per month since there is no month in \nyour WHERE.\n\tIf this is a table which hasn't got much write activity (probably the \ncase for last month's partition, for instance), CLUSTER it on something \nappropriate that you use often in queries, like lane_id here.\n\tAnd you can use SELECT foo GROUP BY foo, this will use a hash, it is \nfaster than a sort.\n\tExample :\n\nCREATE TABLE blop AS SELECT '2008-01-01'::TIMESTAMP + ((n%30)*'1 \nDAY'::INTERVAL) AS t FROM generate_series(1,100000) AS n;\nALTER TABLE blop ADD d DATE NULL;\nUPDATE blop SET d=t;\nVACUUM FULL ANALYZE blop;\n\n-- Now blop contains 100K timestamps and 100K dates from the month 2008-01\n\nEXPLAIN ANALYZE SELECT DISTINCT EXTRACT( DAY from t ) FROM blop;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Unique (cost=10051.82..10551.82 rows=30 width=8) (actual \ntime=221.740..289.801 rows=30 loops=1)\n -> Sort (cost=10051.82..10301.82 rows=100000 width=8) (actual \ntime=221.737..250.911 rows=100000 loops=1)\n Sort Key: (date_part('day'::text, t))\n Sort Method: quicksort Memory: 5955kB\n -> Seq Scan on blop (cost=0.00..1747.00 rows=100000 width=8) \n(actual time=0.021..115.254 rows=100000 loops=1)\n Total runtime: 290.237 ms\n(6 lignes)\n\nTemps : 290,768 ms\n\nEXPLAIN ANALYZE SELECT EXTRACT( DAY from t ) AS day FROM blop GROUP BY day;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1997.00..1997.38 rows=30 width=8) (actual \ntime=198.375..198.390 rows=30 loops=1)\n -> Seq Scan on blop (cost=0.00..1747.00 rows=100000 width=8) (actual \ntime=0.021..129.779 rows=100000 loops=1)\n Total runtime: 198.437 ms\n(3 lignes)\n\nTemps : 198,894 ms\n\n==> Hash is 
faster than Sort\n\nEXPLAIN ANALYZE SELECT d FROM blop GROUP BY d;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1747.00..1747.30 rows=30 width=4) (actual \ntime=101.829..101.842 rows=30 loops=1)\n -> Seq Scan on blop (cost=0.00..1497.00 rows=100000 width=4) (actual \ntime=0.012..33.428 rows=100000 loops=1)\n Total runtime: 101.905 ms\n(3 lignes)\n\nTemps : 102,516 ms\n\n==> Not computing the EXTRACT is faster obviously\n\n(actually EXPLAIN ANALYZE adds some overhead, the query really takes 60 ms)\n\n\n\tIf you have an index lane_id, measurement_date, you can always do :\n\nfor day in 1..31:\n\tfind 1 row with which has this day\nreutrn the days you found\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 17 Apr 2008 01:19:38 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query running slow" } ]
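PFC's closing suggestion can be written as one query. This is only a sketch with an invented partition name, lane id and month; the point is that, with a two-column index on (lane_id, measurement_start), each candidate day turns into a single cheap index probe instead of another pass over the month's data:

SELECT ARRAY(
    SELECT d
    FROM generate_series(1, 31) AS d
    WHERE EXISTS (
        SELECT 1
        FROM data_2008_03                       -- the month's partition
        WHERE lane_id = 1234                    -- the lane being checked
          AND measurement_start >= date '2008-03-01' + (d - 1)
          AND measurement_start <  date '2008-03-01' + d
    )
);

Days past the end of a shorter month simply find no rows, so the same shape works for any month once the partition name and base date are substituted, and the result is already an array of day numbers like the temparr the original function builds.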
[ { "msg_contents": "This week I've finished building and installing OSes on some new hardware \nat home. I have a pretty standard validation routine I go through to make \nsure PostgreSQL performance is good on any new system I work with. Found \na really strange behavior this time around that seems related to changes \nin Linux. Don't expect any help here, but if someone wanted to replicate \nmy tests I'd be curious to see if that can be done. I tell the story \nmostly because I think it's an interesting tale in hardware and software \nvalidation paranoia, but there's a serious warning here as well for Linux \nPostgreSQL users.\n\nThe motherboard is fairly new, and I couldn't get CentOS 5.1, which ships \nwith kernel 2.6.18, to install with the default settings. I had to drop \nback to \"legacy IDE\" mode to install. But it was running everything in \nold-school IDE mode, no DMA or antyhing. \"hdparm -Tt\" showed a whopping \n3MB/s on reads.\n\nI pulled down the latest (at the time--only a few hours and I'm already \nbehind) Linux kernel, 2.6.24-4, and compiled that with the right modules \nincluded. Now I'm getting 70MB/s on simple reads. Everything looked fine \nfrom there until I got to the pgbench select-only tests running PG 8.2.7 \n(I do 8.2 then 8.3 separately because the checkpoint behavior on \nwrite-heavy stuff is so different and I want to see both results).\n\nHere's the regular thing I do to see how fast pgbench executes against \nthings in memory (but bigger than the CPU's cache):\n\n-Set shared_buffers=256MB, start the server\n-dropdb pgbench (if it's already there)\n-createdb pgbench\n-pgbench -i -s 10 pgbench\t(makes about a 160MB database)\n-pgbench -S -c <2*cores> -t 10000 pgbench\n\nSince the database was just written out, the whole thing will still be in \nthe shared_buffers cache, so this should execute really fast. This was an \nIntel quad-core system, I used -c 8, and that got me around 25K \ntransactions/second. Curious to see how high I could push this, I started \nstepping up the number of clients.\n\nThere's where the weird thing happened. Just by going to 12 clients \ninstead of 8, I dropped to 8.5K TPS, about 1/3 of what I get from 8 \nclients. It was like that on every test run. When I use 10 clients, it's \nabout 50/50; sometimes I get 25K, sometimes 8.5K. The only thing it \nseemed to correlate with is that vmstat on the 25K runs showed ~60K \ncontext switches/second, while the 8.5K ones had ~44K.\n\nSince I've never seen this before, I went back to my old benchmark system \nwith a dual-core AMD processor. That started with CentOS 4 and kernel \n2.6.9, but I happened to install kernel 2.6.24-3 on there to get better \nsupport for my Areca card (it goes bonkers regularly on x64 2.6.9). \nNever did a thorough perforance test of the new kernel though. Sure \nenough, the same behavior was there, except without a flip-flop point, \njust a sharp decline. 
Check this out:\n\n-bash-3.00$ pgbench -S -c 8 -t 10000 pgbench | grep excluding\ntps = 15787.684067 (excluding connections establishing)\ntps = 15551.963484 (excluding connections establishing)\ntps = 14904.218043 (excluding connections establishing)\ntps = 15330.519289 (excluding connections establishing)\ntps = 15606.683484 (excluding connections establishing)\n\n-bash-3.00$ pgbench -S -c 12 -t 10000 pgbench | grep excluding\ntps = 7593.572749 (excluding connections establishing)\ntps = 7870.053868 (excluding connections establishing)\ntps = 7714.047956 (excluding connections establishing)\n\nResults are consistant, right? Summarizing that and extending out, here's \nwhat the median TPS numbers look like with 3 tests at each client load:\n\n-c4: 16621\t(increased -t to 20000 here)\n-c8: 15551\t(all these with t=10000)\n-c9: 13269\n-c10: 10832\n-c11: 8993\n-c12: 7714\n-c16: 7311\n-c32: 7141\t(cut -t to 5000 here)\n\nNow, somewhere around here I start thinking about CPU cache coherency, I \nplay with forcing tasks to particular CPUs, I try the deadline scheduler \ninstead of the default CFQ, but nothing makes a difference.\n\nWanna guess what did? An earlier kernel. These results are the same test \nas above, same hardware, only difference is I used the standard CentOS 4 \n2.6.9-67.0.4 kernel instead of 2.6.24-3.\n\n-c4: 18388\n-c8: 15760\n-c9: 15814\t(one result of 12623)\n-c12: 14339 \t(one result of 11105)\n-c16: 14148\n-c32: 13647\t(one result of 10062)\n\nWe get the usual bit of pgbench flakiness, but using the earlier kernel is \nfaster in every case, only degrades slowly as clients increase, and is \nalmost twice as fast here in a typical high-client load case.\n\nSo in the case of this simple benchmark, I see an enormous performance \nregression from the newest Linux kernel compared to a much older one. I \nneed to do some version bisection to nail it down for sure, but my guess \nis it's the change to the Completely Fair Scheduler in 2.6.23 that's to \nblame. The recent FreeBSD 7.0 PostgreSQL benchmarks at \nhttp://people.freebsd.org/~kris/scaling/7.0%20and%20beyond.pdf showed an \nequally brutal performance drop going from 2.6.22 to 2.6.23 (see page 16) \nin around the same client load on a read-only test. My initial guess is \nthat I'm getting nailed by a similar issue here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 17 Apr 2008 03:58:43 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, Greg Smith wrote:\n> So in the case of this simple benchmark, I see an enormous performance \n> regression from the newest Linux kernel compared to a much older one. I need \n> to do some version bisection to nail it down for sure, but my guess is it's \n> the change to the Completely Fair Scheduler in 2.6.23 that's to blame.\n\nThat's a bit sad. From Documentation/sched-design-CFS.txt (2.6.23):\n\n> There is only one\n> central tunable (you have to switch on CONFIG_SCHED_DEBUG):\n>\n> /proc/sys/kernel/sched_granularity_ns\n>\n> which can be used to tune the scheduler from 'desktop' (low\n> latencies) to 'server' (good batching) workloads. It defaults to a\n> setting suitable for desktop workloads. SCHED_BATCH is handled by the\n> CFS scheduler module too.\n\nSo it'd be worth compiling a kernel with CONFIG_SCHED_DEBUG switched on \nand try increasing that value, and see if that fixes the problem. 
\nAlternatively, use sched_setscheduler to set SCHED_BATCH, which should \nincrease the timeslice (a Linux-only option).\n\nMatthew\n\n-- \nPsychotics are consistently inconsistent. The essence of sanity is to\nbe inconsistently inconsistent.\n", "msg_date": "Thu, 17 Apr 2008 16:52:06 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, Apr 17, 2008 at 12:58 AM, Greg Smith <[email protected]> wrote:\n> So in the case of this simple benchmark, I see an enormous performance\n> regression from the newest Linux kernel compared to a much older one.\n\nThis has been discussed recently on linux-kernel. It's definitely a\nregression. Instead of getting a nice, flat overload behavior when\nthe # of busy threads exceeds the number of CPUs, you get the\ndeclining performance you noted.\n\nPoor PostgreSQL scaling on Linux 2.6.25-rc5 (vs 2.6.22)\nhttp://marc.info/?l=linux-kernel&m=120521826111587&w=2\n\n-jwb\n", "msg_date": "Thu, 17 Apr 2008 10:09:09 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, Jeffrey Baker wrote:\n> On Thu, Apr 17, 2008 at 12:58 AM, Greg Smith <[email protected]> wrote:\n>> So in the case of this simple benchmark, I see an enormous performance\n>> regression from the newest Linux kernel compared to a much older one.\n>\n> This has been discussed recently on linux-kernel. It's definitely a\n> regression. Instead of getting a nice, flat overload behavior when\n> the # of busy threads exceeds the number of CPUs, you get the\n> declining performance you noted.\n>\n> Poor PostgreSQL scaling on Linux 2.6.25-rc5 (vs 2.6.22)\n> http://marc.info/?l=linux-kernel&m=120521826111587&w=2\n\nThe last message in the thread says that 2.6.25-rc6 has the problem \nnailed. That was a month ago. So I guess, upgrade to 2.6.25, which was \nreleased today.\n\nMatthew\n\n-- \n\"Prove to thyself that all circuits that radiateth and upon which thou worketh\n are grounded, lest they lift thee to high-frequency potential and cause thee\n to radiate also. \" -- The Ten Commandments of Electronics\n", "msg_date": "Thu, 17 Apr 2008 18:48:25 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, Jeffrey Baker wrote:\n\n> This has been discussed recently on linux-kernel.\n\nExcellent pointer, here's direct to the interesting link there: \nhttp://marc.info/?l=linux-kernel&m=120574906013029&w=2\n\nIngo's test system has 16 cores and dives hard at >32 clients; my 4-core \nsystem has trouble with >8 clients; looks like the same behavior. And it \nseems to be fixed in 2.6.25, which just \"shipped\" literally in the middle \nof my testing last night. Had I waited until today to grab a kernel I \nprobably would have missed the whole thing.\n\nI'll have to re-run to be sure (I just love running a kernel with the \npaint still wet) but it looks like the conclusion here is \"don't run \nPostgreSQL on kernels 2.6.23 or 2.6.24\". 
Good thing I already hated FC8.\n\nIf all these kernel developers are using sysbench, we really should get \nthat thing cleaned up so it runs well with PostgreSQL.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 17 Apr 2008 14:15:31 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, Matthew wrote:\n\n> The last message in the thread says that 2.6.25-rc6 has the problem nailed. \n> That was a month ago. So I guess, upgrade to 2.6.25, which was released \n> today.\n\nAh, even more support for me to distrust everything I read. The change \nhas flattened out things, so now the pgbench results are awful everywhere. \nOn this benchmark 2.6.25 is the worst kernel yet:\n\n-bash-3.00$ pgbench -S -c 4 -t 10000 pgbench | grep excluding\ntps = 8619.710649 (excluding connections establishing)\ntps = 8664.321235 (excluding connections establishing)\ntps = 8671.973915 (excluding connections establishing)\n(was 18388 in 2.6.9 and 16621 in 2.6.23-3)\n\n-bash-3.00$ pgbench -S -c 8 -t 10000 pgbench | grep excluding\ntps = 9011.728765 (excluding connections establishing)\ntps = 9039.441796 (excluding connections establishing)\ntps = 9206.574000 (excluding connections establishing)\n(was 15760 in 2.6.9 and 15551 in 2.6.23-3)\n\n-bash-3.00$ pgbench -S -c 16 -t 10000 pgbench | grep excluding\ntps = 7063.710786 (excluding connections establishing)\ntps = 6956.266777 (excluding connections establishing)\ntps = 7120.971600 (excluding connections establishing)\n(was 14148 in 2.6.9 and 7311 in 2.6.23-3)\n\n-bash-3.00$ pgbench -S -c 32 -t 10000 pgbench | grep excluding\ntps = 7006.311636 (excluding connections establishing)\ntps = 6971.305909 (excluding connections establishing)\ntps = 7002.820583 (excluding connections establishing)\n(was 13647 in 2.6.9 and 7141 in 2.6.23-3)\n\nThis is what happens when the kernel developers are using results from a \nMySQL tool to optimize things I guess. It seems I have a lot of work \nahead of me here to nail down and report what's going on here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 17 Apr 2008 20:26:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, Greg Smith wrote:\n\n> On Thu, 17 Apr 2008, Matthew wrote:\n>\n>> The last message in the thread says that 2.6.25-rc6 has the problem nailed. \n>> That was a month ago. So I guess, upgrade to 2.6.25, which was released \n>> today.\n>\n> Ah, even more support for me to distrust everything I read. The change has \n> flattened out things, so now the pgbench results are awful everywhere. 
On \n> this benchmark 2.6.25 is the worst kernel yet:\n>\n> -bash-3.00$ pgbench -S -c 4 -t 10000 pgbench | grep excluding\n> tps = 8619.710649 (excluding connections establishing)\n> tps = 8664.321235 (excluding connections establishing)\n> tps = 8671.973915 (excluding connections establishing)\n> (was 18388 in 2.6.9 and 16621 in 2.6.23-3)\n>\n> -bash-3.00$ pgbench -S -c 8 -t 10000 pgbench | grep excluding\n> tps = 9011.728765 (excluding connections establishing)\n> tps = 9039.441796 (excluding connections establishing)\n> tps = 9206.574000 (excluding connections establishing)\n> (was 15760 in 2.6.9 and 15551 in 2.6.23-3)\n>\n> -bash-3.00$ pgbench -S -c 16 -t 10000 pgbench | grep excluding\n> tps = 7063.710786 (excluding connections establishing)\n> tps = 6956.266777 (excluding connections establishing)\n> tps = 7120.971600 (excluding connections establishing)\n> (was 14148 in 2.6.9 and 7311 in 2.6.23-3)\n>\n> -bash-3.00$ pgbench -S -c 32 -t 10000 pgbench | grep excluding\n> tps = 7006.311636 (excluding connections establishing)\n> tps = 6971.305909 (excluding connections establishing)\n> tps = 7002.820583 (excluding connections establishing)\n> (was 13647 in 2.6.9 and 7141 in 2.6.23-3)\n>\n> This is what happens when the kernel developers are using results from a \n> MySQL tool to optimize things I guess. It seems I have a lot of work ahead \n> of me here to nail down and report what's going on here.\n\nreport this to the kernel list so that they know, and be ready to test \nfixes. the kernel developers base sucess or failure on the results of \ntests. if the only people providing test results are MySQL people, how \nwould they know there is a problem?\n\nDavid Lang\n", "msg_date": "Thu, 17 Apr 2008 17:41:39 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Thu, 17 Apr 2008, [email protected] wrote:\n\n> report this to the kernel list so that they know, and be ready to test fixes.\n\nDon't worry, I'm on that. I'm already having enough problems with \ndatabase performance under Linux, if they start killing results on the \neasy benchmarks I'll really be in trouble.\n\n> if the only people providing test results are MySQL people, how would \n> they know there is a problem?\n\nThe thing I was alluding to is that both FreeBSD and Linux kernel \ndevelopers are now doing all their PostgreSQL tests with sysbench, a MySQL \ntool with rudimentary PostgreSQL support bolted on (badly). I think \nrather than complain about it someone (and I fear this will be me) needs \nto just fix that so it works well. It really handy for people to have \nsomething they can get familiar with that runs against both databases in a \nway they can be compared fairly. Right now PG beats MySQL scalability \ndespite that on read tests, the big problems with PG+sysbench are when you \ntry to write with it.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 17 Apr 2008 23:25:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> This is what happens when the kernel developers are using results from a \n> MySQL tool to optimize things I guess. 
It seems I have a lot of work \n> ahead of me here to nail down and report what's going on here.\n\nYeah, it's starting to be obvious that we'd better not ignore sysbench\nas \"not our problem\". Do you have any roadmap on what needs to be done\nto it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Apr 2008 01:18:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels " }, { "msg_contents": "On Fri, 18 Apr 2008, Tom Lane wrote:\n\n> Yeah, it's starting to be obvious that we'd better not ignore sysbench \n> as \"not our problem\". Do you have any roadmap on what needs to be done \n> to it?\n\nJust dug into this code again for a minute and it goes something like \nthis:\n\n1) Wrap the write statements into transactions properly so the OLTP code \nworks. There's a BEGIN/COMMIT in there, but last time I tried that test \nit just deadlocked on me (I got a report of the same from someone else as \nwell). There's some FIXME items in the code for PostgreSQL already that \nmight be related here.\n\n2) Make sure the implementation is running statistics correctly (they \ncreate a table and index, but there's certainly no ANALYZE in there).\n\n3) Implement the part of the driver wrapper that haven't been done yet.\n\n4) Try to cut down on crashes (I recall a lot of these when I tried to use \nall the features).\n\n5) Compare performance on some simple operations to pgbench to see if it's \ncompetitive. Look into whether there's code in the PG wrapper they use \nthat can be optimized usefully.\n\nThere's two performance-related things that jump right out as things I'd \nwant to confirm aren't causing issues:\n\n-It's a threaded design\n-The interesting tests look like they use prepared statements.\n\nI think the overall approach sysbench uses is good, it just needs some \nadjustments to work right against a PG database.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 18 Apr 2008 02:47:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels " }, { "msg_contents": "On Thu, 17 Apr 2008, I wrote:\n>> There is only one\n>> central tunable (you have to switch on CONFIG_SCHED_DEBUG):\n>>\n>> /proc/sys/kernel/sched_granularity_ns\n>>\n>> which can be used to tune the scheduler from 'desktop' (low\n>> latencies) to 'server' (good batching) workloads. It defaults to a\n>> setting suitable for desktop workloads. SCHED_BATCH is handled by the\n>> CFS scheduler module too.\n>\n> So it'd be worth compiling a kernel with CONFIG_SCHED_DEBUG switched on and \n> try increasing that value, and see if that fixes the problem. Alternatively, \n> use sched_setscheduler to set SCHED_BATCH, which should increase the \n> timeslice (a Linux-only option).\n\nLooking at the problem a bit closer, it's obvious to me that larger \ntimeslices would not have fixed this problem, so ignore my suggestion.\n\nIt appears that the problem is caused by inter-process communication \nblocking and causing processes to be put right to the back of the run \nqueue, therefore causing a very fine-grained round-robin of the runnable \nprocesses, which trashes the CPU caches. You may also be seeing processes \nforced to switch between CPUs, which breaks the caches even more. So what \nhappens if you run pgbench on a separate machine to the server? 
Does the \nproblem still exist in that case?\n\nMatthew\n\n-- \nX's book explains this very well, but, poor bloke, he did the Cambridge Maths \nTripos... -- Computer Science Lecturer\n", "msg_date": "Fri, 18 Apr 2008 12:01:11 +0100 (BST)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Fri, 18 Apr 2008, Matthew wrote:\n\n> You may also be seeing processes forced to switch between CPUs, which \n> breaks the caches even more. So what happens if you run pgbench on a \n> separate machine to the server? Does the problem still exist in that \n> case?\n\nI haven't run that test yet but will before I submit a report. I did \nhowever try running things with the pgbench executable itself bound to a \nsingle CPU with no improvement.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 18 Apr 2008 13:19:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Fri, 18 Apr 2008, Matthew wrote:\n\n> So what happens if you run pgbench on a separate machine to the server? \n> Does the problem still exist in that case?\n\nIt does not. At the low client counts, there's a big drop-off relative to \nrunning on localhost just because of running over the network. But once I \nget to 4 clients the remote pgbench setup is even with the localhost one. \nAt 50 clients, the all local setup is at 8100 tps while the remote pgbench \nis at 26000.\n\nSo it's pretty clear to me now that the biggest problem here is the \npgbench client itself not working well at all with the newer kernels. \nIt's difficult to see through that to tell for sure how well each kernel \nversion is handling the server portion of the job underneath. I hope to \nhave time this week to finally submit all this to lkml.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 18 May 2008 12:25:39 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": ">>> On Thu, Apr 17, 2008 at 7:26 PM, Greg Smith wrote: \n \n> On this benchmark 2.6.25 is the worst kernel yet:\n \n> It seems I have a lot of work ahead of me here\n> to nail down and report what's going on here.\n \nI don't remember seeing a follow-up on this issue from last year.\nAre there still any particular kernels to avoid based on this?\n \nThanks,\n \n-Kevin\n", "msg_date": "Tue, 31 Mar 2009 16:35:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Tue, 31 Mar 2009, Kevin Grittner wrote:\n\n>>>> On Thu, Apr 17, 2008 at 7:26 PM, Greg Smith wrote:\n>\n>> On this benchmark 2.6.25 is the worst kernel yet:\n>\n>> It seems I have a lot of work ahead of me here\n>> to nail down and report what's going on here.\n>\n> I don't remember seeing a follow-up on this issue from last year.\n> Are there still any particular kernels to avoid based on this?\n\nI never got any confirmation that the patches that came out of my \ndiscussions with the kernel developers were ever merged. I'm in the \nmiddle of a bunch of pgbench tests this week, and one of the things I \nplanned to try was seeing if the behavior has changed in 2.6.28 or 2.6.29. 
\nI'm speaking about pgbench at the PostgreSQL East conference this weekend \nand will have an update by then (along with a new toolchain for automating \nlarge quantities of pgbench tests).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 31 Mar 2009 19:35:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Tue, 31 Mar 2009, Kevin Grittner wrote:\n\n>>>> On Thu, Apr 17, 2008 at 7:26 PM, Greg Smith wrote:\n>\n>> On this benchmark 2.6.25 is the worst kernel yet:\n>\n> I don't remember seeing a follow-up on this issue from last year.\n> Are there still any particular kernels to avoid based on this?\n\nI just discovered something really fascinating here. The problem is \nstrictly limited to when you're connecting via Unix-domain sockets; use \nTCP/IP instead, and it goes away.\n\nTo refresh everyone's memory here, I reported a problem to the LKML here: \nhttp://lkml.org/lkml/2008/5/21/292 Got some patches and some kernel tweaks \nfor the scheduler but never a clear resolution for the cause, which kept \nanybody from getting too excited about merging anything. Test results \ncomparing various tweaks on the hardware I'm still using now are at \nhttp://lkml.org/lkml/2008/5/26/288\n\nFor example, here's kernel 2.6.25 running pgbench with 50 clients with a \nQ6000 processor, demonstrating poor performance--I'd get >20K TPS here \nwith a pre-CFS kernel:\n\n$ pgbench -S -t 4000 -c 50 -n pgbench\ntransaction type: SELECT only\nscaling factor: 10\nquery mode: simple\nnumber of clients: 50\nnumber of transactions per client: 4000\nnumber of transactions actually processed: 200000/200000\ntps = 8288.047442 (including connections establishing)\ntps = 8319.702195 (excluding connections establishing)\n\nIf I now execute exactly the same test, but using localhost, performance \nreturns to normal:\n\n$ pgbench -S -t 4000 -c 50 -n -h localhost pgbench\ntransaction type: SELECT only\nscaling factor: 10\nquery mode: simple\nnumber of clients: 50\nnumber of transactions per client: 4000\nnumber of transactions actually processed: 200000/200000\ntps = 17575.277771 (including connections establishing)\ntps = 17724.651090 (excluding connections establishing)\n\nThat's 100% repeatable, I ran each test several times each way.\n\nSo the new summary here of what I've found is that if:\n\n1) You're running Linux 2.6.23 or greater (confirmed in up to 2.6.26)\n2) You connect over a Unix-domain socket\n3) Your client count is relatively high (>8 clients/core)\n\nYou can expect your pgbench results to tank. 
Switch to connecting over \nTCP/IP to localhost, and everything is fine; it's not quite as fast as the \npre-CFS kernels in some cases, in others it's faster though.\n\nI haven't gotten to testing kernels newer than 2.6.26 yet, when I saw a \n17K TPS result during one of my tests on 2.6.25 I screeched to a halt to \nisolate this instead.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 4 Apr 2009 12:07:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On 4/4/09 9:07 AM, Greg Smith wrote:\n> On Tue, 31 Mar 2009, Kevin Grittner wrote:\n>\n>>>>> On Thu, Apr 17, 2008 at 7:26 PM, Greg Smith wrote:\n>>\n>>> On this benchmark 2.6.25 is the worst kernel yet:\n>>\n>> I don't remember seeing a follow-up on this issue from last year.\n>> Are there still any particular kernels to avoid based on this?\n>\n> I just discovered something really fascinating here. The problem is\n> strictly limited to when you're connecting via Unix-domain sockets; use\n> TCP/IP instead, and it goes away.\n\nHave you sent this to any Linux kernel engineers? My experience is that \nthey're fairly responsive to this sort of thing.\n\n--Josh\n", "msg_date": "Sat, 04 Apr 2009 11:55:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" }, { "msg_contents": "On Sat, 4 Apr 2009, Josh Berkus wrote:\n\n> Have you sent this to any Linux kernel engineers? My experience is that \n> they're fairly responsive to this sort of thing.\n\nI'm going to submit an updated report to LKML once I get back from East, I \nwant to test against the latest kernel first.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 4 Apr 2009 16:11:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange behavior: pgbench and new Linux kernels" } ]
[ { "msg_contents": "Hi there,\n\nI have a table which looks similar to:\n\nCREATE TABLE accounting\n(\n id text NOT NULL,\n time timestamp with time zone,\n data1 int,\n data2 int,\n data3 int,\n data4 int,\n data5 int,\n data6 int,\n data7 int,\n data8 int,\n state int\n CONSTRAINT accounting_pkey PRIMARY KEY (id),\n)\n\nThe table has about 300k rows but is growing steadily. The usage of this \ntable is few selects and inserts, tons of updates and no deletes ever. \nRatios are roughly\nselect:insert = 1:1\ninsert:update = 1:60\n\nNow it turns out that almost all reporting queries use the time field \nand without any additional indexes it ends up doing slow and expensive \nsequential scans (10-20 seconds). Therefore I'd like to create and index \non time to speed this up, yet I'm not entirely sure what overhead that \nintroduces. Clearly there's some overhead during insertion of a new row \nwhich I can live with but what's not clear is the overhead during \nupdates, and the postgresql manual doesn't make that explicit.\n\nYou see, all updates change most of the data fields but never ever touch \nthe time field. Assuming correct and efficient behaviour of postgresql \nit should then also never touch the time index and incur zero overhead \nin its presence, but is this really the case? If it somehow does update \nthe index too even though the value hasn't changed by some weird \nimplementation detail I'd rather not have that index and live with slow \nqueries for the few times a day that reporting is run.\n\nGunther\n", "msg_date": "Thu, 17 Apr 2008 11:27:35 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Exact index overhead" }, { "msg_contents": "Gunther Mayer wrote:\n> You see, all updates change most of the data fields but never ever touch \n> the time field. Assuming correct and efficient behaviour of postgresql \n> it should then also never touch the time index and incur zero overhead \n> in its presence, but is this really the case? If it somehow does update \n> the index too even though the value hasn't changed by some weird \n> implementation detail I'd rather not have that index and live with slow \n> queries for the few times a day that reporting is run.\n\nWell, until 8.3 PG does indeed update the index. That's because with \nMVCC an update is basically a delete+insert, so you'll end up with two \nversions (the V in MVCC) of the row.\n\nWith 8.3 there's a new feature called HOT which means updates that don't \nchange an index can be more efficient.\n\nSo - if you are running 8.3, I'd say try the index and see what \ndifference it makes.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 17 Apr 2008 11:00:14 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exact index overhead" }, { "msg_contents": "Gunther Mayer wrote:\n> You see, all updates change most of the data fields but never ever touch \n> the time field. Assuming correct and efficient behaviour of postgresql \n> it should then also never touch the time index and incur zero overhead \n> in its presence, but is this really the case? If it somehow does update \n> the index too even though the value hasn't changed by some weird \n> implementation detail I'd rather not have that index and live with slow \n> queries for the few times a day that reporting is run.\n\nUpdates do generally modify the indexes as well. The way MVCC has been \nimplemented in PostgreSQL, UPDATE is internally very much like \nDELETE+INSERT. 
A new row version is inserted, new index pointers are \nadded for the new row version, and the old row version is marked as deleted.\n\nIn version 8.3, however, the new HOT feature reduces the need for that. \nIn a nutshell, if the new row version fits on the same page as the old \none, no new index pointers need to be created.\n\nI would suggest just testing how much additional overhead the new index \nincurs. It might be less expensive than you think.\n\nYou didn't mention how often the inserts happen, in other words, how \nfast you expect the table to grow. If the table is expected to grow \norders of magnitude larger, you might want to partition the table by date.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 17 Apr 2008 13:02:47 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exact index overhead" }, { "msg_contents": "On Thu, Apr 17, 2008 at 2:57 PM, Gunther Mayer\n<[email protected]> wrote:\n>\n>\n> You see, all updates change most of the data fields but never ever touch\n> the time field. Assuming correct and efficient behaviour of postgresql it\n> should then also never touch the time index and incur zero overhead in its\n> presence, but is this really the case?\n\nNormally, whenever a row is updated, Postgres inserts a new index entry in each\nof the index. So to answer your question, there is certainly index\noverhead during\nupdates, even if you are not changing the indexed column.\n\nBut if you are using 8.3 then HOT may help you here, assuming you are\nnot updating\nany index keys. HOT optimizes the case by *not* inserting a new index entry and\nalso by performing retail vacuuming. The two necessary conditions for HOT are:\n\n1. Update should not change any of the index keys. So if you have two\nindexes, one\non column A and other on column B, update must not be modifying either A or B.\n\n2. The existing block should have enough free space to accommodate the\nnew version\nA less than 100 fillfactor may help you given your rate of updates.\n\nIf your application satisfies 1, then I would suggest you to upgrade\nto 8.3 (if you are\nnot using it already) and then you can create the index without\nbothering much about\noverheads.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 17 Apr 2008 15:46:05 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exact index overhead" }, { "msg_contents": "Pavan Deolasee wrote:\n> On Thu, Apr 17, 2008 at 2:57 PM, Gunther Mayer\n> <[email protected]> wrote:\n> \n>> You see, all updates change most of the data fields but never ever touch\n>> the time field. Assuming correct and efficient behaviour of postgresql it\n>> should then also never touch the time index and incur zero overhead in its\n>> presence, but is this really the case?\n>> \n>\n> Normally, whenever a row is updated, Postgres inserts a new index entry in each\n> of the index. So to answer your question, there is certainly index\n> overhead during\n> updates, even if you are not changing the indexed column.\n> \nAh, I knew these \"obvious\" assumptions wouldn't necessarily hold. Good \nthat I checked.\n> But if you are using 8.3 then HOT may help you here, assuming you are\n> not updating\n> any index keys. HOT optimizes the case by *not* inserting a new index entry and\n> also by performing retail vacuuming. The two necessary conditions for HOT are:\n>\n> 1. 
Update should not change any of the index keys. So if you have two\n> indexes, one\n> on column A and other on column B, update must not be modifying either A or B.\n> \nThat condition is always satisfied.\n> 2. The existing block should have enough free space to accommodate the\n> new version\n> A less than 100 fillfactor may help you given your rate of updates.\n> \nI see, as soon as a new block is required for the new version the index \npointer needs updating too, I understand now. But at least in the common \ncase of space being available the index overhead is reduced to zero. I \ncan live with that.\n> If your application satisfies 1, then I would suggest you to upgrade\n> to 8.3 (if you are\n> not using it already) and then you can create the index without\n> bothering much about\n> overheads.\n> \nI'm still running 8.2.7 but I guess here's a compelling reason to \nupgrade ;-) Will do so soon.\n\nThanks a lot to everyone who responded (and at what pace!). I love this \ncommunity, it beats commercial support hands down.\n\nGunther\n", "msg_date": "Thu, 17 Apr 2008 17:42:05 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Exact index overhead" }, { "msg_contents": "On Thu, Apr 17, 2008 at 9:42 AM, Gunther Mayer\n<[email protected]> wrote:\n> Pavan Deolasee wrote:\n>\n\n> > 2. The existing block should have enough free space to accommodate the\n> > new version\n> > A less than 100 fillfactor may help you given your rate of updates.\n> >\n> >\n> I see, as soon as a new block is required for the new version the index\n> pointer needs updating too, I understand now. But at least in the common\n> case of space being available the index overhead is reduced to zero. I can\n> live with that.\n\nQuick clarification, it's the table, not the index that has to have\nfree space for the new row version. This rewards good normalization\npractices (narrower rows) and a lower fill factor.\n", "msg_date": "Sat, 19 Apr 2008 10:48:42 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exact index overhead" } ]
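To make the advice in the thread above concrete, here is a sketch of the 8.3 
setup for the accounting table from the original post. The 90% fillfactor is an 
assumption rather than something the posters specified, and 
pg_stat_user_tables.n_tup_hot_upd (new in 8.3) is the counter that shows whether 
HOT is actually being used:

-- leave some free space in each heap page so an updated row can stay on the
-- same page, which is the second HOT condition discussed above
ALTER TABLE accounting SET (fillfactor = 90);

-- the reporting index; it only gets a new entry when an update cannot be done HOT
CREATE INDEX accounting_time_idx ON accounting ("time");

-- after running the normal workload for a while, compare total vs. HOT updates
SELECT n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 WHERE relname = 'accounting';

If n_tup_hot_upd stays close to n_tup_upd, the index is costing almost nothing 
on the update path.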
[ { "msg_contents": "Hi,\n\nI need to change the name of the constraint.,\n\nWhat will be the best way to do this.\n\nAm using postgres 8.1.\n\nIs it possible to do the rename constraint( like renaming a column), i don't\nknow how to do this ?\n\nOr i need to drop the constraint, and i need to create constraint with new\nname, how the impact of this in performance, because these constraint\nchanges am going to do in a table which has 10 million records....\n\nHi,I need to change the name of the constraint.,What will be the best way to do this.Am using postgres 8.1.Is it possible to do the rename constraint( like renaming a column), i don't know how to do this ?\nOr i need to drop the constraint, and i need to create constraint with new name, how the impact of this in performance, because these constraint changes am going to do in a table which has 10 million records....", "msg_date": "Thu, 17 Apr 2008 17:33:48 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "rename constraint" }, { "msg_contents": "\nOn 17.04.2008, at 14:03, sathiya psql wrote:\n> Hi,\n>\n> I need to change the name of the constraint.,\n>\n> Or i need to drop the constraint, and i need to create constraint \n> with new\n> name, how the impact of this in performance, because these constraint\n> changes am going to do in a table which has 10 million records....\n\nThat's how I would do it, create the new constriaint, then drop the \nold one.\nI'n not aware of any syntax that would allow to rename a constraint.\n\nCheers,\n\ntom\n\n", "msg_date": "Thu, 17 Apr 2008 14:14:38 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rename constraint" } ]
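A concrete version of Thomas's suggestion above, with hypothetical table and 
constraint names since the original post does not show them. 8.1 has no 
ALTER TABLE ... RENAME CONSTRAINT, so the workable path is to add the 
replacement first and drop the old one afterwards, inside one transaction:

BEGIN;
ALTER TABLE orders ADD CONSTRAINT orders_amount_positive CHECK (amount > 0);
ALTER TABLE orders DROP CONSTRAINT orders_amount_check;
COMMIT;

For a CHECK constraint the cost on a 10 million row table is one validation scan 
while the ALTER holds its exclusive lock; nothing is rewritten, so it is mostly 
a question of scheduling a quiet moment.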
[ { "msg_contents": "Hi List;\n\nI have a large tble (playback_device) with 6million rows in it. The \naff_id_tmp1 table has 600,000 rows.\n\nI also have this query:\nselect distinct\ntmp1.affiliate_id,\ntmp1.name,\ntmp1.description,\ntmp1.create_dt,\ntmp1.playback_device_id,\npf.segment_id\nfrom\naff_id_tmp1 tmp1,\nplayback_fragment pf\nwhere\ntmp1.playback_device_id = pf.playback_device_id ;\n\n\nThe Primary Key for playback_device is the playback_device_id\nthere is also an index on playback_device_id on the aff_id_tmp1 table.\nThe only join condition I have is on this key pair (I've posted my \nexplain plan below)\n\n\n- why am I still getting a seq scan ?\n\nThanks in advance.\n\n\n\n\n\n\n============\nExplain PLan\n============\n\nexplain\nselect distinct\ntmp1.affiliate_id,\ntmp1.name,\ntmp1.description,\ntmp1.create_dt,\ntmp1.playback_device_id,\npf.segment_id\nfrom\naff_id_tmp1 tmp1,\nplayback_fragment pf\nwhere\ntmp1.playback_device_id = pf.playback_device_id ;\n\n\n Unique (cost=2966361.56..3194555.91 rows=10104496 width=97)\n -> Sort (cost=2966361.56..2998960.76 rows=13039677 width=97)\n Sort Key: tmp1.affiliate_id, tmp1.name, tmp1.description, \ntmp1.create_dt,\ntmp1.playback_device_id, pf.segment_id\n -> Hash Join (cost=23925.45..814071.14 rows=13039677 \nwidth=97)\n Hash Cond: (pf.playback_device_id = \ntmp1.playback_device_id)\n -> Seq Scan on playback_fragment pf \n(cost=0.00..464153.77 rows=130\n39677 width=16)\n -> Hash (cost=16031.31..16031.31 rows=631531 width=89)\n -> Seq Scan on aff_id_tmp1 tmp1 \n(cost=0.00..16031.31 rows=63\n1531 width=89)\n(1068 rows)\n\n", "msg_date": "Thu, 17 Apr 2008 12:24:26 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "seq scan issue..." }, { "msg_contents": "On Thu, Apr 17, 2008 at 11:24 AM, kevin kempter\n<[email protected]> wrote:\n> Hi List;\n>\n> I have a large tble (playback_device) with 6million rows in it. The\n> aff_id_tmp1 table has 600,000 rows.\n> - why am I still getting a seq scan ?\n>\n\nYou're selecting almost all the rows in the product of aff_id_tmp1 *\nplayback_fragment. A sequential scan will be far faster than an index\nscan. You can prove this to yourself using 'set enable_seqscan to\nfalse' and running the query again. It should be much slower.\n", "msg_date": "Thu, 17 Apr 2008 11:30:15 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan issue..." }, { "msg_contents": "kevin kempter escribió:\n> Hi List;\n>\n> I have a large tble (playback_device) with 6million rows in it. 
The \n> aff_id_tmp1 table has 600,000 rows.\n>\n> I also have this query:\n> select distinct\n> tmp1.affiliate_id,\n> tmp1.name,\n> tmp1.description,\n> tmp1.create_dt,\n> tmp1.playback_device_id,\n> pf.segment_id\n> from\n> aff_id_tmp1 tmp1,\n> playback_fragment pf\n> where\n> tmp1.playback_device_id = pf.playback_device_id ;\n>\n>\n> The Primary Key for playback_device is the playback_device_id\n> there is also an index on playback_device_id on the aff_id_tmp1 table.\n> The only join condition I have is on this key pair (I've posted my \n> explain plan below)\n>\n>\n> - why am I still getting a seq scan ?\n>\n> Thanks in advance.\n>\n>\n>\n>\n>\n>\n> ============\n> Explain PLan\n> ============\n>\n> explain\n> select distinct\n> tmp1.affiliate_id,\n> tmp1.name,\n> tmp1.description,\n> tmp1.create_dt,\n> tmp1.playback_device_id,\n> pf.segment_id\n> from\n> aff_id_tmp1 tmp1,\n> playback_fragment pf\n> where\n> tmp1.playback_device_id = pf.playback_device_id ;\n>\n>\n> Unique (cost=2966361.56..3194555.91 rows=10104496 width=97)\n> -> Sort (cost=2966361.56..2998960.76 rows=13039677 width=97)\n> Sort Key: tmp1.affiliate_id, tmp1.name, tmp1.description, \n> tmp1.create_dt,\n> tmp1.playback_device_id, pf.segment_id\n> -> Hash Join (cost=23925.45..814071.14 rows=13039677 width=97)\n> Hash Cond: (pf.playback_device_id = \n> tmp1.playback_device_id)\n> -> Seq Scan on playback_fragment pf \n> (cost=0.00..464153.77 rows=130\n> 39677 width=16)\n> -> Hash (cost=16031.31..16031.31 rows=631531 width=89)\n> -> Seq Scan on aff_id_tmp1 tmp1 \n> (cost=0.00..16031.31 rows=63\n> 1531 width=89)\n> (1068 rows)\n>\n>\nCause you are getting all the rows so pgsql need to scan all the table...", "msg_date": "Thu, 17 Apr 2008 15:31:10 -0300", "msg_from": "Rodrigo Gonzalez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan issue..." }, { "msg_contents": "\n> - why am I still getting a seq scan ?\n\nYou'll seq scan tmp1 obviously, and also the other table since you fetch a \nvery large part of it in the process.\nIt's the only way to do this query since there is no WHERE to restrict the \nnumber of rows and the DISTINCT applies on columns from both tables.\n\nYou might want to simplify your query. For instance perhaps you can get \npf.segment_id out of the DISTINCT, in which case you can put the distinct \nin a subquery on tmp1.\n\n>\n> Thanks in advance.\n>\n>\n>\n>\n>\n>\n> ============\n> Explain PLan\n> ============\n>\n> explain\n> select distinct\n> tmp1.affiliate_id,\n> tmp1.name,\n> tmp1.description,\n> tmp1.create_dt,\n> tmp1.playback_device_id,\n> pf.segment_id\n> from\n> aff_id_tmp1 tmp1,\n> playback_fragment pf\n> where\n> tmp1.playback_device_id = pf.playback_device_id ;\n>\n>\n> Unique (cost=2966361.56..3194555.91 rows=10104496 width=97)\n> -> Sort (cost=2966361.56..2998960.76 rows=13039677 width=97)\n> Sort Key: tmp1.affiliate_id, tmp1.name, tmp1.description, \n> tmp1.create_dt,\n> tmp1.playback_device_id, pf.segment_id\n> -> Hash Join (cost=23925.45..814071.14 rows=13039677 \n> width=97)\n> Hash Cond: (pf.playback_device_id = \n> tmp1.playback_device_id)\n> -> Seq Scan on playback_fragment pf \n> (cost=0.00..464153.77 rows=130\n> 39677 width=16)\n> -> Hash (cost=16031.31..16031.31 rows=631531 width=89)\n> -> Seq Scan on aff_id_tmp1 tmp1 \n> (cost=0.00..16031.31 rows=63\n> 1531 width=89)\n> (1068 rows)\n>\n>\n\n\n", "msg_date": "Fri, 18 Apr 2008 01:53:19 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan issue..." } ]
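Jeffrey's "prove it to yourself" experiment from the thread above, written out; 
enable_seqscan is toggled only for the current session, purely to compare plans, 
and is not meant as a production setting:

SET enable_seqscan TO off;

EXPLAIN ANALYZE
SELECT DISTINCT
       tmp1.affiliate_id, tmp1.name, tmp1.description,
       tmp1.create_dt, tmp1.playback_device_id, pf.segment_id
  FROM aff_id_tmp1 tmp1
  JOIN playback_fragment pf ON pf.playback_device_id = tmp1.playback_device_id;

RESET enable_seqscan;

If the forced index plan comes out slower, as both replies predict for a join 
that touches most of both tables, the sequential scans are the planner doing its 
job; PFC's idea of pushing the DISTINCT into a subquery over aff_id_tmp1 is then 
the more promising direction, provided segment_id can be left out of it.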
[ { "msg_contents": "I am trying to get a distinct set of rows from 2 tables.\nAfter looking at someone else's query I noticed they were doing a group by \nto obtain the unique list.\n\nAfter comparing on multiple machines with several tables, it seems using \ngroup by to obtain a distinct list is substantially faster than using \nselect distinct.\n\nIs there any dissadvantage of using \"group by\" to obtain a unique list?\n\nOn a small dataset the difference was about 20% percent.\n\nGroup by\n HashAggregate (cost=369.61..381.12 rows=1151 width=8) (actual \ntime=76.641..85.167 rows=2890 loops=1)\n\nDistinct\n Unique (cost=1088.23..1174.53 rows=1151 width=8) (actual \ntime=90.516..140.123 rows=2890 loops=1)\n\nAlthough I don't have the numbers here with me, a simmilar result was \nobtaining against a query that would return 100,000 rows. 20% and more \nspeed differnce between \"group by\" over \"select distinct\". \n", "msg_date": "Thu, 17 Apr 2008 23:46:08 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Group by more efficient than distinct?" }, { "msg_contents": "On Freitag, 18. April 2008, Francisco Reyes wrote:\n| I am trying to get a distinct set of rows from 2 tables.\n| After looking at someone else's query I noticed they were doing a group by\n| to obtain the unique list.\n|\n| After comparing on multiple machines with several tables, it seems using\n| group by to obtain a distinct list is substantially faster than using\n| select distinct.\n|\n| Is there any dissadvantage of using \"group by\" to obtain a unique list?\n\nSearching the archives suggests that the code related to \"group by\" is much\nnewer than the one related to \"distinct\" and thus might benefit from more\noptimization paths.\n\nCiao,\nThomas\n\n-- \nThomas Pundt <[email protected]> ---- http://rp-online.de/ ----\n", "msg_date": "Fri, 18 Apr 2008 09:25:04 +0100", "msg_from": "Thomas Pundt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "\"Francisco Reyes\" <[email protected]> writes:\n\n> Is there any dissadvantage of using \"group by\" to obtain a unique list?\n>\n> On a small dataset the difference was about 20% percent.\n>\n> Group by\n> HashAggregate (cost=369.61..381.12 rows=1151 width=8) (actual\n> time=76.641..85.167 rows=2890 loops=1)\n\nHashAggregate needs to store more values in memory at the same time so it's\nnot a good plan if you have a lot of distinct values.\n\nBut the planner knows that and so as long as your work_mem is set to a\nreasonable size (keeping in mind each sort or other operation feels free to\nuse that much itself even if there are several in the query) and the rows\nestimate is reasonable accurate -- here it's mediocre but not dangerously bad\n-- then if the planner is picking it it's probably a good idea.\n\nI'm not sure but I think there are cases where the DISTINCT method wins too.\nThis is basically a bug, in an ideal world both queries would generate\nprecisely the same plans since they're equivalent. It's just not a high\npriority since we have so many more interesting improvements competing for\ntime.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Fri, 18 Apr 2008 10:36:02 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" 
}, { "msg_contents": "On Fri, 18 Apr 2008 11:36:02 +0200, Gregory Stark <[email protected]> \nwrote:\n\n> \"Francisco Reyes\" <[email protected]> writes:\n>\n>> Is there any dissadvantage of using \"group by\" to obtain a unique list?\n>>\n>> On a small dataset the difference was about 20% percent.\n>>\n>> Group by\n>> HashAggregate (cost=369.61..381.12 rows=1151 width=8) (actual\n>> time=76.641..85.167 rows=2890 loops=1)\n\n\tBasically :\n\n\t- If you process up to some percentage of your RAM worth of data, hashing \nis going to be a lot faster\n\t- If the size of the hash grows larger than your RAM, hashing will fail \nmiserably and sorting will be much faster since PG's disksort is really \ngood\n\t- GROUP BY knows this and acts accordingly\n\t- DISTINCT doesn't know this, it only knows sorting, so it sorts\n\t- If you need DISTINCT x ORDER BY x, sorting may be faster too (depending \non the % of distinct rows)\n\t- If you need DISTINCT ON, well, you're stuck with the Sort\n\t- So, for the time being, you can replace DISTINCT with GROUP BY...\n", "msg_date": "Fri, 18 Apr 2008 12:35:04 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "Gregory Stark writes:\n\n> HashAggregate needs to store more values in memory at the same time so it's\n> not a good plan if you have a lot of distinct values.\n\nSo far the resulting number of rows where in the thousands and the source \ndata were in there hundreds of thousands and the group by was faster.\n\nWhen you say \"a lot of distinct values\" you mean unique values as part of \nthe result data set?\n\nIn other words the HashAggregate will store in memory the resulting rows or \nwill be used for processing the source rows?\n\n> But the planner knows that and so as long as your work_mem is set to a\n> reasonable size (keeping in mind each sort or other operation feels free to\n\nIf I make sure to have vacuum analyze on a table will it be reasonable to \ntrust the explain to see whether distinct or group by is better? I started \na new job and still don't have a good feeling for the sizes or distributions \nof the data. Soon I will be given access to the test DB so I will be able to \ndo counts and explore the data without affecting production.\n", "msg_date": "Sun, 20 Apr 2008 11:12:10 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "PFC writes:\n\n>- If you process up to some percentage of your RAM worth of data, hashing \n> is going to be a lot faster\n\nThanks for the excellent breakdown and explanation. I will try and get sizes \nof the tables in question and how much memory the machines have. \n \n> \t- If you need DISTINCT ON, well, you're stuck with the Sort\n> \t- So, for the time being, you can replace DISTINCT with GROUP BY...\n\nHave seen a few of those already on some code (new job..) so for those it is \na matter of having a good disk subsystem?\n", "msg_date": "Sun, 20 Apr 2008 11:15:36 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "Hi Francisco,\n\nGenerally, PG sorting is much slower than hash aggregation for performing\nthe distinct operation. 
There may be small sizes where this isn¹t true, but\nfor large amounts of data (in-memory or not), hash agg (used most often, but\nnot always by GROUP BY) is faster.\n\nWe¹ve implemented a special optimization to PG sorting that does the\ndistinct processing within the sort, instead of afterward, but it¹s limited\nto some small-ish number (10,000) of distinct values due to it¹s use of a\nmemory and processing intensive heap.\n\nSo, you¹re better off using GROUP BY and making sure that the planner is\nusing hash agg to do the work.\n\n- Luke \n\n\nOn 4/17/08 8:46 PM, \"Francisco Reyes\" <[email protected]> wrote:\n\n> I am trying to get a distinct set of rows from 2 tables.\n> After looking at someone else's query I noticed they were doing a group by\n> to obtain the unique list.\n> \n> After comparing on multiple machines with several tables, it seems using\n> group by to obtain a distinct list is substantially faster than using\n> select distinct.\n> \n> Is there any dissadvantage of using \"group by\" to obtain a unique list?\n> \n> On a small dataset the difference was about 20% percent.\n> \n> Group by\n> HashAggregate (cost=369.61..381.12 rows=1151 width=8) (actual\n> time=76.641..85.167 rows=2890 loops=1)\n> \n> Distinct\n> Unique (cost=1088.23..1174.53 rows=1151 width=8) (actual\n> time=90.516..140.123 rows=2890 loops=1)\n> \n> Although I don't have the numbers here with me, a simmilar result was\n> obtaining against a query that would return 100,000 rows. 20% and more\n> speed differnce between \"group by\" over \"select distinct\".\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n\nRe: [PERFORM] Group by more efficient than distinct?\n\n\nHi Francisco,\n\nGenerally, PG sorting is much slower than hash aggregation for performing the distinct operation.  There may be small sizes where this isn’t true, but for large amounts of data (in-memory or not), hash agg (used most often, but not always by GROUP BY) is faster.\n\nWe’ve implemented a special optimization to PG sorting that does the distinct processing within the sort, instead of afterward, but it’s limited to some small-ish number (10,000) of distinct values due to it’s use of a memory and processing intensive heap.\n\nSo, you’re better off using GROUP BY and making sure that the planner is using hash agg to do the work.\n\n- Luke \n\n\nOn 4/17/08 8:46 PM, \"Francisco Reyes\" <[email protected]> wrote:\n\nI am trying to get a distinct set of rows from 2 tables.\nAfter looking at someone else's query I noticed they were doing a group by\nto obtain the unique list.\n\nAfter comparing on multiple machines with several tables, it seems using\ngroup by to obtain a distinct list is substantially faster than using\nselect distinct.\n\nIs there any dissadvantage of using \"group by\" to obtain a unique list?\n\nOn a small dataset the difference was about 20% percent.\n\nGroup by\n HashAggregate  (cost=369.61..381.12 rows=1151 width=8) (actual\ntime=76.641..85.167 rows=2890 loops=1)\n\nDistinct\n Unique  (cost=1088.23..1174.53 rows=1151 width=8) (actual\ntime=90.516..140.123 rows=2890 loops=1)\n\nAlthough I don't have the numbers here with me, a simmilar result was\nobtaining against a query that would return 100,000 rows. 20% and more\nspeed differnce between \"group by\" over \"select distinct\".  
\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 20 Apr 2008 22:35:58 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "On Sun, 20 Apr 2008 17:15:36 +0200, Francisco Reyes \n<[email protected]> wrote:\n\n> PFC writes:\n>\n>> - If you process up to some percentage of your RAM worth of data, \n>> hashing is going to be a lot faster\n>\n> Thanks for the excellent breakdown and explanation. I will try and get \n> sizes of the tables in question and how much memory the machines have.\n\n\tActually, the memory used by the hash depends on the number of distinct \nvalues, not the number of rows which are processed...\n\tConsider :\n\nSELECT a GROUP BY a\nSELECT a,count(*) GROUP BY a\n\n\tIn both cases the hash only holds discinct values. So if you have 1 \nmillion rows to process but only 10 distinct values of \"a\", the hash will \nonly contain those 10 values (and the counts), so it will be very small \nand fast, it will absorb a huge seq scan without problem. If however, you \nhave (say) 100 million distinct values for a, using a hash would be a bad \nidea. As usual, divide the size of your RAM by the number of concurrent \nconnections or something.\n\tNote that \"a\" could be a column, several columns, anything, the size of \nthe hash will be proportional to the number of distinct values, ie. the \nnumber of rows returned by the query, not the number of rows processed \n(read) by the query. Same with hash joins etc, that's why when you join a \nvery small table to a large one Postgres likes to use seq scan + hash join \non the small table.\n\n\n>> \t- If you need DISTINCT ON, well, you're stuck with the Sort\n>> \t- So, for the time being, you can replace DISTINCT with GROUP BY...\n>\n> Have seen a few of those already on some code (new job..) so for those \n> it is a matter of having a good disk subsystem?\n\n\tDepends on your RAM, sorting in RAM is always faster than sorting on disk \nof course, unless you eat all the RAM and trash the other processes. \nTradeoffs...\n\n\n", "msg_date": "Tue, 22 Apr 2008 01:34:40 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "PFC wrote:\n> Actually, the memory used by the hash depends on the number of \n> distinct values, not the number of rows which are processed...\n> Consider :\n>\n> SELECT a GROUP BY a\n> SELECT a,count(*) GROUP BY a\n>\n> In both cases the hash only holds discinct values. So if you have \n> 1 million rows to process but only 10 distinct values of \"a\", the hash \n> will only contain those 10 values (and the counts), so it will be very \n> small and fast, it will absorb a huge seq scan without problem. If \n> however, you have (say) 100 million distinct values for a, using a \n> hash would be a bad idea. As usual, divide the size of your RAM by the \n> number of concurrent connections or something.\n> Note that \"a\" could be a column, several columns, anything, the \n> size of the hash will be proportional to the number of distinct \n> values, ie. the number of rows returned by the query, not the number \n> of rows processed (read) by the query. 
Same with hash joins etc, \n> that's why when you join a very small table to a large one Postgres \n> likes to use seq scan + hash join on the small table.\n\nThis surprises me - hash values are lossy, so it must still need to \nconfirm against the real list of values, which at a minimum should \nrequire references to the rows to check against?\n\nIs PostgreSQL doing something beyond my imagination? :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Mon, 21 Apr 2008 19:50:22 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "Mark Mielke wrote:\n> PFC wrote:\n>> Actually, the memory used by the hash depends on the number of \n>> distinct values, not the number of rows which are processed...\n>> Consider :\n>>\n>> SELECT a GROUP BY a\n>> SELECT a,count(*) GROUP BY a\n>>\n>> In both cases the hash only holds discinct values. So if you have \n>> 1 million rows to process but only 10 distinct values of \"a\", the \n>> hash will only contain those 10 values (and the counts), so it will \n>> be very small and fast, it will absorb a huge seq scan without \n>> problem. If however, you have (say) 100 million distinct values for \n>> a, using a hash would be a bad idea. As usual, divide the size of \n>> your RAM by the number of concurrent connections or something.\n>> Note that \"a\" could be a column, several columns, anything, the \n>> size of the hash will be proportional to the number of distinct \n>> values, ie. the number of rows returned by the query, not the number \n>> of rows processed (read) by the query. Same with hash joins etc, \n>> that's why when you join a very small table to a large one Postgres \n>> likes to use seq scan + hash join on the small table.\n>\n> This surprises me - hash values are lossy, so it must still need to \n> confirm against the real list of values, which at a minimum should \n> require references to the rows to check against?\n>\n> Is PostgreSQL doing something beyond my imagination? :-)\n\nHmmm... You did say distinct values, so I can see how that would work \nfor distinct. What about seq scan + hash join, though? To complete the \njoin, wouldn't it need to have a reference to each of the rows to join \nagainst? If there is 20 distinct values and 200 rows in the small table \n- wouldn't it need 200 references to be stored?\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Mon, 21 Apr 2008 21:39:15 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "On Mon, 21 Apr 2008, Mark Mielke wrote:\n> This surprises me - hash values are lossy, so it must still need to confirm \n> against the real list of values, which at a minimum should require references \n> to the rows to check against?\n>\n> Is PostgreSQL doing something beyond my imagination? :-)\n\nNot too far beyond your imagination, I hope.\n\nIt's simply your assumption that the hash table is lossy. Sure, hash \nvalues are lossy, but a hash table isn't. Postgres stores in memory not \nonly the hash values, but the rows they refer to as well, having checked \nthem all on disc beforehand. That way, it doesn't need to look up anything \non disc for that branch of the join again, and it has a rapid in-memory \nlookup for each row.\n\nMatthew\n\n-- \nX's book explains this very well, but, poor bloke, he did the Cambridge Maths \nTripos... 
-- Computer Science Lecturer\n", "msg_date": "Tue, 22 Apr 2008 11:34:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Mon, 21 Apr 2008, Mark Mielke wrote:\n>> This surprises me - hash values are lossy, so it must still need to \n>> confirm against the real list of values, which at a minimum should \n>> require references to the rows to check against?\n>>\n>> Is PostgreSQL doing something beyond my imagination? :-)\n>\n> Not too far beyond your imagination, I hope.\n>\n> It's simply your assumption that the hash table is lossy. Sure, hash \n> values are lossy, but a hash table isn't. Postgres stores in memory \n> not only the hash values, but the rows they refer to as well, having \n> checked them all on disc beforehand. That way, it doesn't need to look \n> up anything on disc for that branch of the join again, and it has a \n> rapid in-memory lookup for each row.\n\nI said hash *values* are lossy. I did not say hash table is lossy.\n\nThe poster I responded to said that the memory required for a hash join \nwas relative to the number of distinct values, not the number of rows. \nThey gave an example of millions of rows, but only a few distinct \nvalues. Above, you agree with me that it it would include the rows (or \nat least references to the rows) as well. If it stores rows, or \nreferences to rows, then memory *is* relative to the number of rows, and \nmillions of records would require millions of rows (or row references).\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Tue, 22 Apr 2008 08:01:20 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "On Tue, 22 Apr 2008, Mark Mielke wrote:\n> The poster I responded to said that the memory required for a hash join was \n> relative to the number of distinct values, not the number of rows. They gave \n> an example of millions of rows, but only a few distinct values. Above, you \n> agree with me that it it would include the rows (or at least references to \n> the rows) as well. If it stores rows, or references to rows, then memory *is* \n> relative to the number of rows, and millions of records would require \n> millions of rows (or row references).\n\nYeah, I think we're talking at cross-purposes, due to hash tables being \nused in two completely different places in Postgres. Firstly, you have \nhash joins, where Postgres loads the references to the actual rows, and \nputs those in the hash table. For that situation, you want a small number \nof rows. Secondly, you have hash aggregates, where Postgres stores an \nentry for each \"group\" in the hash table, and does not store the actual \nrows. For that situation, you can have a bazillion individual rows, but \nonly a small number of distinct groups.\n\nMatthew\n\n-- \nFirst law of computing: Anything can go wro\nsig: Segmentation fault. core dumped.\n", "msg_date": "Tue, 22 Apr 2008 13:22:20 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Tue, 22 Apr 2008, Mark Mielke wrote:\n>> The poster I responded to said that the memory required for a hash \n>> join was relative to the number of distinct values, not the number of \n>> rows. 
They gave an example of millions of rows, but only a few \n>> distinct values. Above, you agree with me that it it would include \n>> the rows (or at least references to the rows) as well. If it stores \n>> rows, or references to rows, then memory *is* relative to the number \n>> of rows, and millions of records would require millions of rows (or \n>> row references).\n>\n> Yeah, I think we're talking at cross-purposes, due to hash tables \n> being used in two completely different places in Postgres. Firstly, \n> you have hash joins, where Postgres loads the references to the actual \n> rows, and puts those in the hash table. For that situation, you want a \n> small number of rows. Secondly, you have hash aggregates, where \n> Postgres stores an entry for each \"group\" in the hash table, and does \n> not store the actual rows. For that situation, you can have a \n> bazillion individual rows, but only a small number of distinct groups.\n\nThat makes sense with my reality. :-)\n\nThanks,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Tue, 22 Apr 2008 09:04:30 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group by more efficient than distinct?" } ]
[ { "msg_contents": "This autovacuum has been hammering my server with purely random i/o\nfor half a week. The table is only 20GB and the i/o subsystem is good\nfor 250MB/s sequential and a solid 5kiops. When should I expect it to\nend (if ever)?\n\ncurrent_query: VACUUM reuters.value\nquery_start: 2008-04-15 20:12:48.806885-04\nthink=# select * from pg_class where relname = 'value';\n-[ RECORD 1 ]--+---------------------------------\nrelname | value\nrelfilenode | 191425\nrelpages | 1643518\nreltuples | 1.37203e+08\n# find -name 191425\\*\n./16579/191425\n./16579/191425.1\n./16579/191425.10\n./16579/191425.11\n./16579/191425.12\n./16579/191425.13\n./16579/191425.14\n./16579/191425.15\n./16579/191425.16\n./16579/191425.17\n./16579/191425.18\n./16579/191425.19\n./16579/191425.2\n./16579/191425.3\n./16579/191425.4\n./16579/191425.5\n./16579/191425.6\n./16579/191425.7\n./16579/191425.8\n./16579/191425.9\n# vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 30336 46264 60 7882356 0 0 250 299 1 1 6 2 87 5\n 0 1 30336 47412 60 7881308 0 0 2896 48 944 4861 3 2 71 24\n 0 2 30336 46696 60 7882188 0 0 816 4 840 5019 1 0 75 24\n 0 1 30336 49228 60 7879868 0 0 1888 164 971 5687 1 1 74 24\n 0 1 30336 49688 60 7878916 0 0 2640 48 1047 5751 1 0 75 23\n autovacuum | on\n autovacuum_vacuum_cost_delay | -1\n autovacuum_vacuum_cost_limit | -1\n vacuum_cost_delay | 0\n vacuum_cost_limit | 200\n vacuum_cost_page_dirty | 20\n vacuum_cost_page_hit | 1\n vacuum_cost_page_miss | 10\n", "msg_date": "Fri, 18 Apr 2008 09:35:42 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "3-days-long vacuum of 20GB table" }, { "msg_contents": "\"Jeffrey Baker\" <[email protected]> writes:\n> This autovacuum has been hammering my server with purely random i/o\n> for half a week. The table is only 20GB and the i/o subsystem is good\n> for 250MB/s sequential and a solid 5kiops. When should I expect it to\n> end (if ever)?\n\nWhat have you got maintenance_work_mem set to? Which PG version\nexactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Apr 2008 13:03:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3-days-long vacuum of 20GB table " }, { "msg_contents": "On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane <[email protected]> wrote:\n> \"Jeffrey Baker\" <[email protected]> writes:\n> > This autovacuum has been hammering my server with purely random i/o\n> > for half a week. The table is only 20GB and the i/o subsystem is good\n> > for 250MB/s sequential and a solid 5kiops. When should I expect it to\n> > end (if ever)?\n>\n> What have you got maintenance_work_mem set to? Which PG version\n> exactly?\n\nThis is 8.1.9 on Linux x86_64,\n\n# show maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 16384\n", "msg_date": "Fri, 18 Apr 2008 10:32:05 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3-days-long vacuum of 20GB table" }, { "msg_contents": "On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker <[email protected]> wrote:\n>\n> On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane <[email protected]> wrote:\n> > \"Jeffrey Baker\" <[email protected]> writes:\n> > > This autovacuum has been hammering my server with purely random i/o\n> > > for half a week. The table is only 20GB and the i/o subsystem is good\n> > > for 250MB/s sequential and a solid 5kiops. 
When should I expect it to\n> > > end (if ever)?\n> >\n> > What have you got maintenance_work_mem set to? Which PG version\n> > exactly?\n>\n> This is 8.1.9 on Linux x86_64,\n>\n> # show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 16384\n\nThat appears to be the default. I will try increasing this. Can I\nincrease it globally from a single backend, so that all other backends\npick up the change, or do I have to restart the instance?\n\n-jwb\n", "msg_date": "Fri, 18 Apr 2008 10:34:57 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3-days-long vacuum of 20GB table" }, { "msg_contents": "On Fri, Apr 18, 2008 at 10:34 AM, Jeffrey Baker <[email protected]> wrote:\n>\n> On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker <[email protected]> wrote:\n> >\n> > On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane <[email protected]> wrote:\n> > > \"Jeffrey Baker\" <[email protected]> writes:\n> > > > This autovacuum has been hammering my server with purely random i/o\n> > > > for half a week. The table is only 20GB and the i/o subsystem is good\n> > > > for 250MB/s sequential and a solid 5kiops. When should I expect it to\n> > > > end (if ever)?\n> > >\n> > > What have you got maintenance_work_mem set to? Which PG version\n> > > exactly?\n> >\n> > This is 8.1.9 on Linux x86_64,\n> >\n> > # show maintenance_work_mem ;\n> > maintenance_work_mem\n> > ----------------------\n> > 16384\n>\n> That appears to be the default. I will try increasing this. Can I\n> increase it globally from a single backend, so that all other backends\n> pick up the change, or do I have to restart the instance?\n\nI increased it to 1GB, restarted the vacuum, and system performance\nseems the same. The root of the problem, that an entire CPU is in the\niowait state and the storage device is doing random i/o, is unchanged:\n\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 1 1 30328 53632 60 6914716 0 0 904 2960 1216 4720 1 1 74 23\n 0 1 30328 52492 60 6916036 0 0 1152 1380 948 3637 0 0 75 24\n 0 1 30328 49600 60 6917680 0 0 1160 1420 1055 4191 1 1 75 24\n 0 1 30328 49404 60 6919000 0 0 1048 1308 1133 5054 2 2 73 23\n 0 1 30328 47844 60 6921096 0 0 1552 1788 1002 3701 1 1 75 23\n\nAt that rate it will take a month. Compare the load generated by\ncreate table foo as select * from bar:\n\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 2 2 30328 46580 60 6911024 0 0 145156 408 2006 10729 52 8 17 23\n 3 1 30328 46240 60 6900976 0 0 133312 224 1834 10005 23 12 42 23\n 1 3 30328 60700 60 6902056 0 0 121480 172 1538 10629 22 14 32 32\n 1 2 30328 49520 60 6914204 0 0 122344 256 1408 14374 13 17 41 28\n 1 2 30328 47844 60 6915960 0 0 127752 248 1313 9452 16 15 42 27\n\nThat's rather more like it. I guess I always imagined that VACUUM was\na sort of linear process, not random, and that it should proceed at\nsequential scan speeds.\n\n-jwb\n", "msg_date": "Fri, 18 Apr 2008 10:54:24 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3-days-long vacuum of 20GB table" }, { "msg_contents": "Jeffrey Baker escribi�:\n\n> That's rather more like it. I guess I always imagined that VACUUM was\n> a sort of linear process, not random, and that it should proceed at\n> sequential scan speeds.\n\nIt's linear for the table, but there are passes for indexes which are\nrandom in 8.1. 
That code was rewritten by Heikki Linnakangas to do\nlinear passes for indexes in 8.2 AFAIR.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 18 Apr 2008 14:24:22 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3-days-long vacuum of 20GB table" }, { "msg_contents": "\"Jeffrey Baker\" <[email protected]> writes:\n> I increased it to 1GB, restarted the vacuum, and system performance\n> seems the same. The root of the problem, that an entire CPU is in the\n> iowait state and the storage device is doing random i/o, is unchanged:\n\nYeah, but you just reduced the number of index scans that will be needed\nby a factor of 1GB/16MB. Hang in there ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Apr 2008 15:09:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3-days-long vacuum of 20GB table " }, { "msg_contents": "Jeffrey Baker wrote:\n> On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker <[email protected]> wrote:\n>> # show maintenance_work_mem ;\n>> maintenance_work_mem\n>> ----------------------\n>> 16384\n> \n> That appears to be the default. I will try increasing this. Can I\n> increase it globally from a single backend, so that all other backends\n> pick up the change, or do I have to restart the instance?\n\nYou can change it in the config file, and send postmaster the HUP \nsignal, which tells all backends to reload the file. \"killall -HUP \npostmaster\" or similar.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 19 Apr 2008 08:57:11 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3-days-long vacuum of 20GB table" } ]
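To make the fix above concrete: a larger maintenance_work_mem lets VACUUM remember many more dead tuple pointers per pass, so each index is revisited far fewer times. A sketch assuming the 8.1 server from this thread, where the value is given in kilobytes (unit suffixes such as '1GB' only appear in later releases). The config-file route is maintenance_work_mem = 1048576 followed by a reload (pg_ctl reload, or the killall -HUP postmaster that Heikki mentions); a session-level override also works for a manual run:

-- Session-level override for a manual vacuum (value in kB, roughly 1GB).
SET maintenance_work_mem = 1048576;
VACUUM VERBOSE reuters.value;
-- Fewer index passes means far less of the random I/O visible in the
-- vmstat output earlier in the thread.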
[ { "msg_contents": "\nHi.\n\nI have this \"message queue\" table.. currently with 8m+ records. Picking \nthe top priority messages seem to take quite long.. it is just a matter \nof searching the index.. (just as explain analyze tells me it does).\n\nCan anyone digest further optimizations out of this output? (All records \nhave funcid=4)\n\n# explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey, \njob.insert_time, job.run_after, job.grabbed_until, job.priority, \njob.coalesce FROM workqueue.job WHERE (job.funcid = 4) AND \n(job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668) AND \n(job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1\n;\n \n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.09 rows=1 width=106) (actual \ntime=245.273..245.274 rows=1 loops=1)\n -> Index Scan using workqueue_job_funcid_priority_idx on job \n(cost=0.00..695291.80 rows=8049405 width=106) (actual \ntime=245.268..245.268 rows=1 loops=1)\n Index Cond: (funcid = 4)\n Filter: ((run_after <= 1208442668) AND (grabbed_until <= \n1208442668) AND (\"coalesce\" = 'Efam'::text))\n Total runtime: 245.330 ms\n(5 rows)\n\n-- \nJesper\n", "msg_date": "Fri, 18 Apr 2008 19:49:39 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Message queue table.. " }, { "msg_contents": "Jesper Krogh wrote:\n> \n> Hi.\n> \n> I have this \"message queue\" table.. currently with 8m+ records. Picking \n> the top priority messages seem to take quite long.. it is just a matter \n> of searching the index.. (just as explain analyze tells me it does).\n> \n> Can anyone digest further optimizations out of this output? (All records \n> have funcid=4)\n\nYou mean all records of interest, right, not all records in the table?\n\nWhat indexes do you have in place? What's the schema? Can you post a \"\\d \ntablename\" from psql?\n\n> # explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey, \n> job.insert_time, job.run_after, job.grabbed_until, job.priority, \n> job.coalesce FROM workqueue.job WHERE (job.funcid = 4) AND \n> (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668) AND \n> (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1\n> ;\n> \n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Limit (cost=0.00..0.09 rows=1 width=106) (actual time=245.273..245.274 \n> rows=1 loops=1)\n> -> Index Scan using workqueue_job_funcid_priority_idx on job \n> (cost=0.00..695291.80 rows=8049405 width=106) (actual \n> time=245.268..245.268 rows=1 loops=1)\n> Index Cond: (funcid = 4)\n> Filter: ((run_after <= 1208442668) AND (grabbed_until <= \n> 1208442668) AND (\"coalesce\" = 'Efam'::text))\n> Total runtime: 245.330 ms\n> (5 rows)\n\nWithout seeing the schema and index definitions ... maybe you'd benefit \nfrom a multiple column index. I'd experiment with an index on \n(funcid,priority) first.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 19 Apr 2008 02:18:40 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Message queue table.." }, { "msg_contents": "Craig Ringer wrote:\n> Jesper Krogh wrote:\n>>\n>> Hi.\n>>\n>> I have this \"message queue\" table.. currently with 8m+ records. 
\n>> Picking the top priority messages seem to take quite long.. it is just \n>> a matter of searching the index.. (just as explain analyze tells me it \n>> does).\n>>\n>> Can anyone digest further optimizations out of this output? (All \n>> records have funcid=4)\n> \n> You mean all records of interest, right, not all records in the table?\n\nActually all the records.. since all the other virtual queues currently \nare empty.\n\n> What indexes do you have in place? What's the schema? Can you post a \"\\d \n> tablename\" from psql?\n> \n>> # explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey, \n>> job.insert_time, job.run_after, job.grabbed_until, job.priority, \n>> job.coalesce FROM workqueue.job WHERE (job.funcid = 4) AND \n>> (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668) \n>> AND (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1\n\nI found that removing the funcid from the order by made it use a better \nindex. (priority, run_after, grabbed_until) that probably makes sense \nsince the funcid doesnt give any value in the index at all.\n\nthanks for leading me back on track.\n\nJesper\n\n-- \nJesper\n", "msg_date": "Fri, 18 Apr 2008 21:23:09 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Message queue table.." }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> I have this \"message queue\" table.. currently with 8m+ records. Picking \n> the top priority messages seem to take quite long.. it is just a matter \n> of searching the index.. (just as explain analyze tells me it does).\n\n> Limit (cost=0.00..0.09 rows=1 width=106) (actual \n> time=245.273..245.274 rows=1 loops=1)\n> -> Index Scan using workqueue_job_funcid_priority_idx on job \n> (cost=0.00..695291.80 rows=8049405 width=106) (actual \n> time=245.268..245.268 rows=1 loops=1)\n> Index Cond: (funcid = 4)\n> Filter: ((run_after <= 1208442668) AND (grabbed_until <= \n> 1208442668) AND (\"coalesce\" = 'Efam'::text))\n> Total runtime: 245.330 ms\n\nWell, what that's doing in English is: scan all the rows with funcid =\n4, in priority order, until we hit the first one satisfying the filter\nconditions. Apparently there are a lot of low-priority rows that have\nfuncid = 4 but not the other conditions.\n\nIf it's the \"coalesce\" condition that's the problem, an index on\n(funcid, coalesce, priority) --- or (coalesce, funcid, priority) ---\nwould probably help. I'm not sure there's a simple fix if it's\nthe other conditions that are really selective.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Apr 2008 15:27:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Message queue table.. " }, { "msg_contents": "[email protected] (Jesper Krogh) writes:\n> I have this \"message queue\" table.. currently with 8m+\n> records. Picking the top priority messages seem to take quite\n> long.. it is just a matter of searching the index.. (just as explain\n> analyze tells me it does).\n>\n> Can anyone digest further optimizations out of this output? 
(All\n> records have funcid=4)\n>\n> # explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey,\n> job.insert_time, job.run_after, job.grabbed_until, job.priority,\n> job.coalesce FROM workqueue.job WHERE (job.funcid = 4) AND\n> (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668)\n> AND (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1\n> ;\n\nThere might be value in having one or more extra indices...\n\nHere are *plausible* candidates:\n\n1. If \"funcid = 4\" is highly significant (e.g. - you are always\nrunning this query, and funcid often <> 4), then you might add a\nfunctional index such as:\n\n create index job_funcid_run_after on workqueue.job (run_after) where funcid = 4;\n create index job_funcid_grabbeduntil on workqueue.job (grabbed_until) where funcid = 4;\n\n2. Straight indices like the following:\n\n create index job_run_after on workqueue.job(run_after);\n create index job_grabbed_until on workqueue.job(grabbed_until);\n create index job_funcid on workqueue.job(funcid);\n create index job_coalesce on workqueue.job(coalesce);\n\nNote that it is _possible_ (though by no means guaranteed) that all\nthree might prove useful, if you're running 8.1+ where PostgreSQL\nsupports bitmap index scans.\n\nAnother possibility...\n\n3. You might change your process to process multiple records in a\n\"run\" so that you might instead run the query (perhaps via a cursor?)\n\nwith LIMIT [Something Bigger than 1].\n\nIt does seem mighty expensive to run a 245ms query to find just one\nrecord. It seems quite likely that you could return the top 100 rows\n(LIMIT 100) without necessarily finding it runs in any more time.\n\nReturning 100 tuples in 245ms seems rather more acceptable, no? :-)\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"linuxfinances.info\")\nhttp://linuxdatabases.info/info/linuxdistributions.html\nRules of the Evil Overlord #32. \"I will not fly into a rage and kill a\nmessenger who brings me bad news just to illustrate how evil I really\nam. Good messengers are hard to come by.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Fri, 18 Apr 2008 15:57:10 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Message queue table.." } ]
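Pulling the indexing advice from the replies above into one sketch (index names are invented; columns are taken from the query in the thread):

-- Tom's composite index: match the selective filter and the sort order.
CREATE INDEX job_funcid_coalesce_priority_idx
    ON workqueue.job (funcid, "coalesce", priority);

-- A partial variant of the (priority, run_after, grabbed_until) index the
-- original poster ended up using, along the lines of Chris's suggestion.
CREATE INDEX job_funcid4_priority_idx
    ON workqueue.job (priority, run_after, grabbed_until)
    WHERE funcid = 4;

-- Per Chris's batching idea, fetch a block of messages at once so the
-- lookup cost is paid once per hundred rows rather than once per row.
SELECT jobid, priority
FROM workqueue.job
WHERE funcid = 4
  AND run_after <= 1208442668
  AND grabbed_until <= 1208442668
  AND "coalesce" = 'Efam'
ORDER BY priority
LIMIT 100;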
[ { "msg_contents": "Has there ever been any analysis regarding the redundant write overhead \nof full page writes?\n\nI'm wondering if one could regard an 8k page as being 64 off 128-byte \nparagraphs or\n32 off 256-byte paragraphs, each represented by a bit in a word. And, \nwhen a page is dirtied\nby changes, some record is kept of this based on the paragraphs \naffected. Then you could\njust incrementally dump the pre-image of newly dirtied paragraphs as you \ngo, and the cost\nin terms of dirtied pages would be much lower for the case of scattered \nupdates.\n\n(I was also wondering about just doing pre-images based on changed byte \nranges, but the\napproach above is probably faster, doesn't dump the same range twice, \nand may fit\nthe existing flow more directly.)\n\nAlso - has any attempt been made to push log writes through a cheap \ncompressor, such\nas zlib on its lowest setting, or one like Jeff Bonwick's for ZFS\n(http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/compress.c)?\n\nThat would work well for largely textual tables (and I suspect a lot of \ninteger data too).\n\nJames\n\n", "msg_date": "Fri, 18 Apr 2008 20:55:31 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "full_page_write and also compressed logging" } ]
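The message above is a design proposal rather than something that can be switched on today, but the knobs it is trying to improve on can at least be inspected; a small illustration, with GUC names as they appear in the 8.x postgresql.conf shown elsewhere in this archive:

SHOW full_page_writes;   -- whole-page images go to WAL after each checkpoint
SHOW wal_buffers;
-- Turning full_page_writes off already removes the redundant page images,
-- but is only safe on storage that guarantees atomic 8kB writes; the
-- sub-page pre-image and WAL-compression ideas above aim at the same
-- saving without taking on that risk.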
[ { "msg_contents": "Hi All!\n\nWe have a database running smoothly for months. 2 days ago I get this \nerror message. I tried a restore, a full restore (deleting the old \ndatabase and recovering from backup all the information) but we are \ngetting this error every time.\n\nIn this case I got this error when I was trying to recover the database \nfrom a backup. Sometimes I can recover it and restart the application, \nbut after a while I get the same error message again and again.\n\nAny ideas what's going on? I checked filesystem and it is ok.\n\nRegards\n\nPablo\n\n\n2008-04-20 12:01:13 EDT [5309] [2758789251] [480b68c9.14bd-1] ERROR: \ndate/time field value out of range: \"2008-01-275:51:08.631\"\n2008-04-20 12:01:13 EDT [5309] [2758789251] [480b68c9.14bd-2] HINT: \nPerhaps you need a different \"datestyle\" setting.\n2008-04-20 12:01:13 EDT [5309] [2758789251] [480b68c9.14bd-3] CONTEXT: \nCOPY venta_00015, line 11626137, column dmodi: \"2008-01-275:51:08.631\"\n2008-04-20 12:01:13 EDT [5309] [2758789251] [480b68c9.14bd-4] \nSTATEMENT: COPY venta_00015 (idventa, idventapadre, estado, dmodi, \npuntuacion, moviseria\nlizados) FROM stdin;\n2008-04-20 12:01:13 EDT [5309] [2758789324] [480b68c9.14bd-5] ERROR: \ninvalid input syntax for integer: \"15820584,\"\n2008-04-20 12:01:13 EDT [5309] [2758789324] [480b68c9.14bd-6] CONTEXT: \nCOPY venta_00018, line 8388339, column idventapadre: \"15820584,\"\n2008-04-20 12:01:13 EDT [5309] [2758789324] [480b68c9.14bd-7] \nSTATEMENT: COPY venta_00018 (idventa, idventapadre, estado, dmodi, \nevaluacion, moviseria\nlizados) FROM stdin;\n [5995] [] [-1] PANIC: corrupted item pointer: offset = 7152, size = 8208\n [5148] [] [-1] LOG: autovacuum process (PID 5995) was terminated by \nsignal 6\n [5148] [] [-2] LOG: terminating any other active server processes\n2008-04-20 11:50:36 EDT [5166] [0] [480b664c.142e-1] WARNING: \nterminating connection because of crash of another server process\n2008-04-20 11:50:36 EDT [5166] [0] [480b664c.142e-2] DETAIL: The \npostmaster has commanded this server process to roll back the current \ntransaction and exit,\n because another server process exited abnormally and possibly corrupted \nshared memory.\n2008-04-20 11:50:36 EDT [5166] [0] [480b664c.142e-3] HINT: In a moment \nyou should be able to reconnect to the database and repeat your command.\n2008-04-20 12:02:35 EDT [5328] [0] [480b691b.14d0-1] WARNING: \nterminating connection because of crash of another server process\n2008-04-20 12:02:35 EDT [5328] [0] [480b691b.14d0-2] DETAIL: The \npostmaster has commanded this server process to roll back the current \ntransaction and exit,\n because another server process exited abnormally and possibly corrupted \nshared memory.\n2008-04-20 12:02:35 EDT [5328] [0] [480b691b.14d0-3] HINT: In a moment \nyou should be able to reconnect to the database and repeat your command.\n2008-04-20 11:58:49 EDT [5269] [0] [480b6839.1495-1] WARNING: \nterminating connection because of crash of another server process\n2008-04-20 11:58:49 EDT [5269] [0] [480b6839.1495-2] DETAIL: The \npostmaster has commanded this server process to roll back the current \ntransaction and exit,\n because another server process exited abnormally and possibly corrupted \nshared memory.\n2008-04-20 11:58:49 EDT [5269] [0] [480b6839.1495-3] HINT: In a moment \nyou should be able to reconnect to the database and repeat your command.\n2008-04-20 12:01:13 EDT [5309] [2758789363] [480b68c9.14bd-8] WARNING: \nterminating connection because of crash of another server 
process\n/\n", "msg_date": "Sun, 20 Apr 2008 16:53:21 -0300", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "corrupted shared memory message" }, { "msg_contents": "Pablo Alcaraz <[email protected]> writes:\n> We have a database running smoothly for months. 2 days ago I get this \n> error message. I tried a restore, a full restore (deleting the old \n> database and recovering from backup all the information) but we are \n> getting this error every time.\n\nI think you've got hardware problems. Run some memory and disk\ndiagnostics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Apr 2008 21:30:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: corrupted shared memory message " }, { "msg_contents": "Tom Lane wrote:\n> Pablo Alcaraz <[email protected]> writes:\n> \n>> We have a database running smoothly for months. 2 days ago I get this \n>> error message. I tried a restore, a full restore (deleting the old \n>> database and recovering from backup all the information) but we are \n>> getting this error every time.\n>> \n>\n> I think you've got hardware problems. Run some memory and disk\n> diagnostics.\n>\n> \t\t\tregards, tom lane\n>\n>\n> \nBingo! Thanks!\nPablo\n\n", "msg_date": "Tue, 29 Apr 2008 09:12:36 -0300", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: corrupted shared memory message" } ]
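A rough checklist in the spirit of the diagnosis above; the tools and device names are just the usual Linux ones and need adjusting for the machine in question:

# RAM: boot memtest86+ and let it run several complete passes.
# Disk: check SMART health, then a read-only surface scan while the
# volume is idle or unmounted.
smartctl -a /dev/sda
badblocks -sv /dev/sda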
[ { "msg_contents": "Hi\n\n# ps -ef | grep idle | wc -l\n87\n# ps -ef | grep SELECT | wc -l\n5\n\n\nI have 2 web servers which connect to PGPool which connects to our \npostgres db. I have noticed that idle connections seem to take up CPU \nand RAM (according to top). Could this in any way cause things to slow \ndown?\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Mon, 21 Apr 2008 11:50:31 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "connections slowing everything down?" }, { "msg_contents": "\nOn Apr 21, 2008, at 4:50 AM, Adrian Moisey wrote:\n\n> Hi\n>\n> # ps -ef | grep idle | wc -l\n> 87\n> # ps -ef | grep SELECT | wc -l\n> 5\n>\n>\n> I have 2 web servers which connect to PGPool which connects to our \n> postgres db. I have noticed that idle connections seem to take up \n> CPU and RAM (according to top). Could this in any way cause things \n> to slow down?\n\nDependant on how much memory you have in your system, yes. You can \nfix the constant use of memory by idle connections by adjusting the \nchild_life_time setting in your pgpool.conf file. The default if 5 \nminutes which a bit long. Try dropping that down to 20 or 30 seconds.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Mon, 21 Apr 2008 09:04:28 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: connections slowing everything down?" }, { "msg_contents": "Hi\n\n>> # ps -ef | grep idle | wc -l\n>> 87\n[...]\n\n>> I have 2 web servers which connect to PGPool which connects to our \n>> postgres db. I have noticed that idle connections seem to take up CPU \n>> and RAM (according to top). Could this in any way cause things to \n>> slow down?\n> \n> Dependant on how much memory you have in your system, yes. You can fix \n> the constant use of memory by idle connections by adjusting the \n> child_life_time setting in your pgpool.conf file. The default if 5 \n> minutes which a bit long. Try dropping that down to 20 or 30 seconds.\n\nWe have 32GBs. If I get it to close the connections faster, will that \nactually help? Is there a way i can figure it out?\n\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Mon, 21 Apr 2008 16:15:42 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: connections slowing everything down?" }, { "msg_contents": "\nOn Apr 21, 2008, at 9:15 AM, Adrian Moisey wrote:\n\n> Hi\n>\n>>> # ps -ef | grep idle | wc -l\n>>> 87\n> [...]\n>\n>>> I have 2 web servers which connect to PGPool which connects to our \n>>> postgres db. I have noticed that idle connections seem to take up \n>>> CPU and RAM (according to top). Could this in any way cause \n>>> things to slow down?\n>> Dependant on how much memory you have in your system, yes. You can \n>> fix the constant use of memory by idle connections by adjusting the \n>> child_life_time setting in your pgpool.conf file. The default if 5 \n>> minutes which a bit long. 
Try dropping that down to 20 or 30 \n>> seconds.\n>\n> We have 32GBs. If I get it to close the connections faster, will \n> that actually help? Is there a way i can figure it out?\n\nFirst, sorry, I gave you the wrong config setting, I meant \nconnection_life_time. child_life_time is the lifetime of an idle pool \nprocess on the client machine and the connection_life_time is the \nlifetime of an idle connection (i.e. no transaction running) on the \nserver. With the default connection_life_time of 5 minutes it's \neasily possible to keep an connection open indefinitely. Imagine a \nclient gets a connection and runs a single query, then nothing happens \non that connection for 4:30 minutes at which point another single \nquery is run. If that pattern continues that connection will never be \nrelinquished. While the point of a pool is to cut down on the number \nof connections that need to be established, you don't necessarily want \nto go the extreme and never tear down connections as that will cause a \ndegradation in available server resources. With a smaller, but not 0, \nconnection life time, connections will stay open and available during \nperiods of high work rates from the client, but will be relinquished \nwhen there isn't as much to do.\n\nWithout more details on what exactly is happening on your system I \ncan't say for sure that this is your fix. Are you tracking/monitoring \nyour server's free memory? If not I'd suggest getting either Cacti or \nMonit in place to monitor system stats such as free memory (using \nvmstat), system IO (using iostat), db transaction rates (using db \nqueries). Then you'll be able to draw correlations between \napplication behavior (slowness, etc) and actual system numbers. I \nknow that I had issues with connections being held open for long times \n(using the default 300s) causing our free memory to gradually decrease \nover the day and resetting our pools would clear it out so there was a \ndirect cause and effect relationship there. When I dropped the \nconnection_life_time to 30s the problem went away.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Mon, 21 Apr 2008 09:46:17 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: connections slowing everything down?" }, { "msg_contents": "On Mon, Apr 21, 2008 at 5:50 AM, Adrian Moisey\n<[email protected]> wrote:\n> Hi\n>\n> # ps -ef | grep idle | wc -l\n> 87\n> # ps -ef | grep SELECT | wc -l\n> 5\n>\n>\n> I have 2 web servers which connect to PGPool which connects to our postgres\n> db. I have noticed that idle connections seem to take up CPU and RAM\n> (according to top). Could this in any way cause things to slow down?\n\nSomething is not quite with your assumptions. On an unloaded server,\nopen a bunch of connections (like 500) from psql doing nothing, and\ncpu load will stay at zero. IOW, an 'idle' connection does not consume\nany measurable CPU resources once connected. It does consume some ram\nbut that would presumably at least partly swap out eventually. What's\nprobably going on here is your connections are not really idle. Top\nby default aggregates usage every three seconds and ps is more of a\nsnapshot. During the top a single connection might accept and dispose\n0, 1, 50, 100, or 1000 queries depending on various factors. 
Your\nsampling methods are simply not accurate enough.\n\nWith statement level logging on (with pid on the log line), you can\nbreak out and measure query activity by connection.\n\nmerlin\n", "msg_date": "Tue, 22 Apr 2008 20:46:59 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: connections slowing everything down?" } ]
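A sketch of the statement-level logging Merlin suggests, as a postgresql.conf fragment; the zero threshold is deliberately aggressive and meant only for a short diagnostic window:

log_min_duration_statement = 0    # log every statement with its duration
log_line_prefix = '%p %m '        # backend PID and millisecond timestamp
# Reload the configuration, then group the log lines by the leading PID to
# see which of the nominally idle connections are actually issuing queries.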
[ { "msg_contents": "Hi,\n\nI'm having trouble understanding the cost of the Materialize \noperator. Consider the following plan:\n\nNested Loop (cost=2783.91..33217.37 rows=78634 width=44) (actual \ntime=77.164..2478.973 rows=309 loops=1)\n Join Filter: ((rank2.pre <= rank5.pre) AND (rank5.pre <= \nrank2.post))\n -> Nested Loop (cost=0.00..12752.06 rows=1786 width=33) \n(actual time=0.392..249.255 rows=9250 loops=1)\n .....\n -> Materialize (cost=2783.91..2787.87 rows=396 width=22) \n(actual time=0.001..0.072 rows=587 loops=9250)\n -> Nested Loop (cost=730.78..2783.51 rows=396 \nwidth=22) (actual time=7.637..27.030 rows=587 loops=1)\n ....\n\nThe cost of the inner-most Nested Loop is 27 ms, but the total cost of \nthe Materialize operator is 666 ms (9250 loops * 0.072 ms per \niteration). So, Materialize introduces more than 10x overhead. Is \nthis the cost of writing the table to temporary storage or am I \nmisreading the query plan output?\n\nFurthermore, the outer table is almost 20x as big as the inner table. \nWouldn't the query be much faster by switching the inner with the \nouter table? I have switched off GEQO, so I Postgres should find the \noptimal query plan.\n\nCheers,\nViktor\n", "msg_date": "Mon, 21 Apr 2008 13:07:22 +0200", "msg_from": "Viktor Rosenfeld <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of the Materialize operator in a query plan" }, { "msg_contents": "Viktor Rosenfeld <[email protected]> writes:\n> I'm having trouble understanding the cost of the Materialize \n> operator. Consider the following plan:\n\n> Nested Loop (cost=2783.91..33217.37 rows=78634 width=44) (actual \n> time=77.164..2478.973 rows=309 loops=1)\n> Join Filter: ((rank2.pre <= rank5.pre) AND (rank5.pre <= \n> rank2.post))\n> -> Nested Loop (cost=0.00..12752.06 rows=1786 width=33) \n> (actual time=0.392..249.255 rows=9250 loops=1)\n> .....\n> -> Materialize (cost=2783.91..2787.87 rows=396 width=22) \n> (actual time=0.001..0.072 rows=587 loops=9250)\n> -> Nested Loop (cost=730.78..2783.51 rows=396 \n> width=22) (actual time=7.637..27.030 rows=587 loops=1)\n> ....\n\n> The cost of the inner-most Nested Loop is 27 ms, but the total cost of \n> the Materialize operator is 666 ms (9250 loops * 0.072 ms per \n> iteration). So, Materialize introduces more than 10x overhead.\n\nNot hardly. Had the Materialize not been there, we'd have executed\nthe inner nestloop 9250 times, for a total cost of 9250 * 27ms.\n(Actually it might have been less due to cache effects, but still\na whole lot more than 0.072 per iteration.)\n\nThese numbers say that it's taking the Materialize about 120 microsec\nper row returned, which seems a bit high to me considering that the\ndata is just sitting in a tuplestore. I surmise that you are using\na machine with slow gettimeofday() and that's causing the measurement\noverhead to be high.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Apr 2008 10:44:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of the Materialize operator in a query plan " }, { "msg_contents": "Hi Tom,\n\n>> The cost of the inner-most Nested Loop is 27 ms, but the total cost \n>> of\n>> the Materialize operator is 666 ms (9250 loops * 0.072 ms per\n>> iteration). So, Materialize introduces more than 10x overhead.\n>\n> Not hardly. 
Had the Materialize not been there, we'd have executed\n> the inner nestloop 9250 times, for a total cost of 9250 * 27ms.\n> (Actually it might have been less due to cache effects, but still\n> a whole lot more than 0.072 per iteration.)\n\nI realize that Materialize saves a big amount of time in the grand \nscheme, but I'm still wondering about the descrepancy between the \ntotal cost of Materialize and the contained Nested Loop.\n\n> These numbers say that it's taking the Materialize about 120 microsec\n> per row returned, which seems a bit high to me considering that the\n> data is just sitting in a tuplestore. I surmise that you are using\n> a machine with slow gettimeofday() and that's causing the measurement\n> overhead to be high.\n\nDo you mean, that the overhead is an artefact of timing the query? In \nthat case, the query should run faster than its evaluation with \nEXPLAIN ANALYZE, correct?\n\nIs there a way to test this assumption regarding the speed of \ngettimeofday? I'm on a Macbook and have no idea about the performance \nof its implementation.\n\nCheers,\nViktor\n", "msg_date": "Thu, 24 Apr 2008 16:31:58 +0200", "msg_from": "Viktor Rosenfeld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of the Materialize operator in a query plan " }, { "msg_contents": "> Do you mean, that the overhead is an artefact of timing the query? In \n> that case, the query should run faster than its evaluation with EXPLAIN \n> ANALYZE, correct?\n>\n> Is there a way to test this assumption regarding the speed of \n> gettimeofday? I'm on a Macbook and have no idea about the performance \n> of its implementation.\n\nRun EXPLAIN ANALYZE query\nType \\timing\nRun SELECT count(*) FROM (query) AS foo\n\n\\timing gives timings as seen by the client. If you're local, and the \nresult set is one single integer, client timings are not very different \n from server timings. If the client must retrieve lots of rows, this will \nbe different, hence the fake count(*) above to prevent this. You might \nwant to explain the count(*) also to be sure the same plan is used...\n\nAnd yes EXPLAIN ANALYZE has overhead, sometimes significant. Think \nHeisenberg... You will measure it easily with this dumb method ;)\n\n\nHere a very dumb query :\n\nSELECT count(*) FROM test;\n count\n-------\n 99999\n(1 ligne)\n\nTemps : 26,924 ms\n\n\ntest=> EXPLAIN ANALYZE SELECT count(*) FROM test;\n QUERY PLAN\n-------------------------------------------------------------------------------- \n--------------------------------\n Aggregate (cost=1692.99..1693.00 rows=1 width=0) (actual \ntime=66.314..66.314 \nr \nows=1 loops=1)\n -> Seq Scan on test (cost=0.00..1442.99 rows=99999 width=0) (actual \ntime=0. \n013..34.888 rows=99999 loops=1)\n Total runtime: 66.356 ms\n(3 lignes)\n\nTemps : 66,789 ms\n\nApparently measuring the time it takes to get a row from the table takes \n2x as long as actually getting the row from the table. Which is \nreassuring, in a way, since grabbing rows out of tables isn't such an \nunusual operation.\n\n", "msg_date": "Thu, 24 Apr 2008 19:05:14 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of the Materialize operator in a query plan" }, { "msg_contents": "Hi,\n\nusing this strategy to study the overhead of EXPLAIN ANALYZE was very \ninsightful. 
Apparently, measuring the performance of the query plan \nintroduced a overhead of more than 10 seconds in the query I was \nlooking at.\n\nThanks,\nViktor\n\nAm 24.04.2008 um 19:05 schrieb PFC:\n>> Do you mean, that the overhead is an artefact of timing the query? \n>> In that case, the query should run faster than its evaluation with \n>> EXPLAIN ANALYZE, correct?\n>>\n>> Is there a way to test this assumption regarding the speed of \n>> gettimeofday? I'm on a Macbook and have no idea about the \n>> performance of its implementation.\n>\n> Run EXPLAIN ANALYZE query\n> Type \\timing\n> Run SELECT count(*) FROM (query) AS foo\n>\n> \\timing gives timings as seen by the client. If you're local, and \n> the result set is one single integer, client timings are not very \n> different from server timings. If the client must retrieve lots of \n> rows, this will be different, hence the fake count(*) above to \n> prevent this. You might want to explain the count(*) also to be sure \n> the same plan is used...\n>\n> And yes EXPLAIN ANALYZE has overhead, sometimes significant. Think \n> Heisenberg... You will measure it easily with this dumb method ;)\n>\n>\n> Here a very dumb query :\n>\n> SELECT count(*) FROM test;\n> count\n> -------\n> 99999\n> (1 ligne)\n>\n> Temps : 26,924 ms\n>\n>\n> test=> EXPLAIN ANALYZE SELECT count(*) FROM test;\n> QUERY PLAN\n> -------------------------------------------------------------------------------- --------------------------------\n> Aggregate (cost=1692.99..1693.00 rows=1 width=0) (actual \n> time=66.314..66.314 \n> r \n> ows \n> =1 loops=1)\n> -> Seq Scan on test (cost=0.00..1442.99 rows=99999 width=0) \n> (actual \n> time \n> = \n> 0 \n> . 013 \n> ..34.888 rows=99999 loops=1)\n> Total runtime: 66.356 ms\n> (3 lignes)\n>\n> Temps : 66,789 ms\n>\n> Apparently measuring the time it takes to get a row from the table \n> takes 2x as long as actually getting the row from the table. Which \n> is reassuring, in a way, since grabbing rows out of tables isn't \n> such an unusual operation.\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nHi,using this strategy to study the overhead of EXPLAIN ANALYZE was very insightful.  Apparently, measuring the performance of the query plan introduced a overhead of more than 10 seconds in the query I was looking at.Thanks,ViktorAm 24.04.2008 um 19:05 schrieb PFC:Do you mean, that the overhead is an artefact of timing the query?  In that case, the query should run faster than its evaluation with EXPLAIN ANALYZE, correct?Is there a way to test this assumption regarding the speed of gettimeofday?  I'm on a Macbook and have no idea about the performance of its implementation.Run EXPLAIN ANALYZE queryType \\timingRun SELECT count(*) FROM (query) AS foo\\timing gives timings as seen by the client. If you're local, and the result set is one single integer, client timings are not very different from server timings. If the client must retrieve lots of rows, this will be different, hence the fake count(*) above to prevent this. You might want to explain the count(*) also to be sure the same plan is used...And yes EXPLAIN ANALYZE has overhead, sometimes significant. Think Heisenberg... 
You will measure it easily with this dumb method ;)Here a very dumb query :SELECT count(*) FROM test; count------- 99999(1 ligne)Temps : 26,924 mstest=> EXPLAIN ANALYZE SELECT count(*) FROM test;                                                   QUERY PLAN--------------------------------------------------------------------------------                                                                                  -------------------------------- Aggregate  (cost=1692.99..1693.00 rows=1 width=0) (actual time=66.314..66.314 r                                                                                  ows=1 loops=1)   ->  Seq Scan on test  (cost=0.00..1442.99 rows=99999 width=0) (actual time=0.                                                                                  013..34.888 rows=99999 loops=1) Total runtime: 66.356 ms(3 lignes)Temps : 66,789 msApparently measuring the time it takes to get a row from the table takes 2x as long as actually getting the row from the table. Which is reassuring, in a way, since grabbing rows out of tables isn't such an unusual operation.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 27 Apr 2008 21:02:19 +0200", "msg_from": "Viktor Rosenfeld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of the Materialize operator in a query plan" } ]
[ { "msg_contents": "Hi,\n\nI'm running into an performance problem where a Postgres db is running\nat 99% CPU (4 cores) with about 500 concurrent connection doing various\nqueries from a web application. This problem started about a week ago,\nand has been steadily going downhill. I have been tweaking the config a\nbit, mainly shared_memory but have seen no noticeable improvements.\n\nat any given time there is about 5-6 postgres in startup \n(ps auxwww | grep postgres | grep startup | wc -l)\n\nabout 2300 connections in idle \n(ps auxwww | grep postgres | idle)\n\nand loads of \"FATAL: sorry, too many clients already\" being logged.\n\nThe server that connects to the db is an apache server using persistent\nconnections. MaxClients is 2048 thus the high number of connections\nneeded. Application was written in PHP using the Pear DB class.\n\nHere are some typical queries taking place\n\n(table media has about 40,000 records and category about 40):\n\nLOG: duration: 66141.530 ms statement:\n SELECT COUNT(*) AS CNT\n FROM media m JOIN category ca USING(category_id)\n WHERE CATEGORY_ROOT(m.category_id) = '-1'\n AND m.deleted_on IS NULL\n\nLOG: duration: 57828.983 ms statement:\n SELECT COUNT(*) AS CNT\n FROM media m JOIN category ca USING(category_id)\n WHERE CATEGORY_ROOT(m.category_id) = '-1'\n AND m.deleted_on IS NULL AND m.POSTED_ON + interval '7 day'\n\nSystem\n======\ncpu Xeon(R) CPU 5160 @ 3.00GHz stepping 06 x 4\n L1, L2 = 32K, 4096K\nmem 8GB\ndbms postgresql-server 8.2.4\ndisks \t\tscsi0 : LSI Logic SAS based MegaRAID driver\n\t\tSCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)\n\t\tSCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)\n\nStats\n======\n\ntop - 00:28:40 up 12:43, 1 user, load average: 46.88, 36.55, 37.65\nTasks: 2184 total, 63 running, 2119 sleeping, 1 stopped, 1 zombie\nCpu0: 99.3% us, 0.5% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.2% si\nCpu1: 98.3% us, 1.4% sy, 0.0% ni, 0.2% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu2: 99.5% us, 0.5% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu3: 99.5% us, 0.5% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si\nMem: 8166004k total, 6400368k used, 1765636k free, 112080k buffers\nSwap: 1020088k total, 0k used, 1020088k free, 3558764k cached\n\n\n$ vmstat 3\nprocs ---------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 4 0 0 559428 109440 3558684 0 0 11 27 31 117 96 2 2 0\n 5 0 0 558996 109452 3558672 0 0 0 41 1171 835 93 1 7 0\n 4 0 0 558996 109452 3558740 0 0 0 38 1172 497 98 1 1 0\n11 0 0 554516 109452 3558740 0 0 0 19 1236 610 97 1 2 0\n25 0 0 549860 109452 3558740 0 0 0 32 1228 332 99 1 0 0\n12 0 0 555412 109452 3558740 0 0 0 4 1148 284 99 1 0 0\n15 0 0 555476 109452 3558740 0 0 0 23 1202 290 99 1 0 0\n15 0 0 555476 109452 3558740 0 0 0 1 1125 260 99 1 0 0\n16 0 0 555460 109452 3558740 0 0 0 12 1214 278 99 1 0 0\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n\n#data_directory = 'ConfigDir' # use data in another directory\n # (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n # (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file\n # (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = '(none)' # write an extra PID file\n # (change requires 
restart)\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = 'localhost' # what IP address(es) to listen on; \n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\n # (change requires restart)\nport = 5432 # (change requires restart)\nmax_connections = 2400 # (change requires restart)\n# Note: increasing max_connections costs ~400 bytes of shared memory per \n# connection slot, plus lock space (see max_locks_per_transaction). You\n# might also need to raise shared_buffers to support more connections.\nsuperuser_reserved_connections = 3 # (change requires restart)\n#unix_socket_directory = '' # (change requires restart)\n#unix_socket_group = '' # (change requires restart)\n#unix_socket_permissions = 0777 # octal\n # (change requires restart)\n#bonjour_name = '' # defaults to the computer name\n # (change requires restart)\n\n# - Security & Authentication -\n\n#authentication_timeout = 1min # 1s-600s\n#ssl = off # (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = '' # (change requires restart)\n#krb_srvname = 'postgres' # (change requires restart)\n#krb_server_hostname = '' # empty string matches any keytab entry\n # (change requires restart)\n#krb_caseins_users = off # (change requires restart)\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 600MB # min 128kB or max_connections*16kB\n # (change requires restart)\ntemp_buffers = 10MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 8MB # min 64kB\nmaintenance_work_mem = 512MB # min 1MB\nmax_stack_depth = 8MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 1536000 # min max_fsm_relations*16, 6 bytes each\n # (change requires restart)\nmax_fsm_relations = 10000 # min 100, ~70 bytes each\n # (change requires restart)\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 1024 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD 
LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or off\n#wal_sync_method = fsync # the default is the first option \n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 64kB # min 32kB\n # (change requires restart)\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_warning = 30s # 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile segment\n#archive_timeout = 0 # force a logfile segment switch after this\n # many seconds; 0 is off\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 1000MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit \n # JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr' # Valid values are combinations of \n # stderr, syslog and eventlog, \n # depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off # Enable capturing of stderr into log \n # files\n # (change requires restart)\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log' # Directory where log files are written\n # Can be absolute or relative to PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n # Can include strftime() escapes\n#log_truncate_on_rotation = off # If on, any existing log file of the same \n # name as the new log file will be\n # truncated rather than appended to. But\n # such truncation only occurs on\n # time-driven rotation, not on restarts\n # or size-driven rotation. Default is\n # off, meaning append to existing files\n # in all cases.\n#log_rotation_age = 1d # Automatic rotation of logfiles will \n # happen after that time. 0 to \n # disable.\n#log_rotation_size = 10MB # Automatic rotation of logfiles will \n # happen after that much log\n # output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\n#log_min_messages = notice # Values, in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = error # Values in order of increasing severity:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # fatal\n # panic (effectively off)\n\nlog_min_duration_statement = -1 # -1 is disabled, 0 logs all statements\n # and their durations.\n\n#silent_mode = off # DO NOT USE without syslog or \n # redirect_stderr\n # (change requires restart)\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\nlog_disconnections = on\nlog_duration = on\nlog_line_prefix = '%u@%d %h %m' # Special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = PID\n # %t = timestamp (no milliseconds)\n # %m = timestamp with milliseconds\n # %i = command tag\n # %c = session id\n # %l = session line number\n # %s = session start timestamp\n # %x = transaction id\n # %q = stop here in non-session \n # processes\n # %% = '%'\n # e.g. '<%u%%%d> '\n#log_statement = 'none' # none, ddl, mod, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#stats_command_string = on\n#update_process_title = on\n\n#stats_start_collector = on # needed for block or row stats\n # (change requires restart)\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off # (change requires restart)\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\n#autovacuum = off # enable autovacuum subprocess?\n # 'on' requires stats_start_collector\n # and stats_row_level to also be on\n#autovacuum_naptime = 1min # time between autovacuum runs\n#autovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\n#autovacuum_analyze_threshold = 250 # min # of tuple updates before \n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before \n # vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before \n # analyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n # (change requires restart)\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for \n # autovacuum, -1 means use \n # vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for \n # autovacuum, -1 means use\n # vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION 
DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public' # schema names\n#default_tablespace = '' # a tablespace name, '' uses\n # the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0 # 0 is disabled\n#vacuum_freeze_min_age = 100000000\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ \n # environment setting\n#timezone_abbreviations = 'Default' # select the set of available timezone\n # abbreviations. Currently, there are\n # Default\n # Australia\n # India\n # However you can also create your own\n # file in share/timezonesets/.\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database\n # encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'C' # locale for system error message \n # strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\n#max_locks_per_transaction = 64 # min 10\n # (change requires restart)\n# Note: each lock table slot uses ~270 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding # on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#standard_conforming_strings = off\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable class names\n\n-- \nBryan Buecking\n", "msg_date": "Wed, 23 Apr 2008 00:31:01 +0900", "msg_from": "Bryan Buecking <[email protected]>", "msg_from_op": true, "msg_subject": "CPU bound at 99%" }, { "msg_contents": "On Wed, 23 Apr 2008 00:31:01 +0900\nBryan Buecking <[email protected]> wrote:\n\n> at any given time there is about 5-6 postgres in startup \n> (ps auxwww | grep postgres | grep startup | wc -l)\n> \n> about 2300 connections in idle \n> (ps auxwww | grep postgres | idle)\n> \n> and loads of \"FATAL: sorry, too many clients already\" being logged.\n> \n> The server that connects to the db is an apache server using\n> persistent connections. MaxClients is 2048 thus the high number of\n> connections needed. Application was written in PHP using the Pear DB\n> class.\n\nSounds like your pooler isn't reusing connections properly.\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 22 Apr 2008 08:41:09 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "\nOn Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:\n\n> Hi,\n>\n> I'm running into an performance problem where a Postgres db is running\n> at 99% CPU (4 cores) with about 500 concurrent connection doing \n> various\n> queries from a web application. This problem started about a week ago,\n> and has been steadily going downhill. I have been tweaking the \n> config a\n> bit, mainly shared_memory but have seen no noticeable improvements.\n>\n> at any given time there is about 5-6 postgres in startup\n> (ps auxwww | grep postgres | grep startup | wc -l)\n>\n> about 2300 connections in idle\n> (ps auxwww | grep postgres | idle)\n>\n> and loads of \"FATAL: sorry, too many clients already\" being logged.\n>\n> The server that connects to the db is an apache server using \n> persistent\n> connections. MaxClients is 2048 thus the high number of connections\n> needed. Application was written in PHP using the Pear DB class.\n\nAre you referring to PHP's persistent connections? Do not use those. \nHere's a thread that details the issues with why not: http://archives.postgresql.org/pgsql-general/2007-08/msg00660.php \n. Basically, PHP's persistent connections are NOT pooling solution. \nUs pgpool or somesuch.\n\n<snip>\n\n>\n> max_connections = 2400\n\nThat is WAY too high. Get a real pooler, such as pgpool, and drop \nthat down to 1000 and test from there. I see you mentioned 500 \nconcurrent connections. Are each of those connections actually doing \nsomething? My guess that once you cut down on the number actual \nconnections you'll find that each connection can get it's work done \nfaster and you'll see that number drop significantly. For example, \nour application does anywhere from 200 - 600 transactions per second, \ndependent on the time of day/week, and we never need more that 150 to \n200 connections (although we do have the max_connections set to 500).\n\n<snip>\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Tue, 22 Apr 2008 10:55:19 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "On Tue, Apr 22, 2008 at 08:41:09AM -0700, Joshua D. Drake wrote:\n> On Wed, 23 Apr 2008 00:31:01 +0900\n> Bryan Buecking <[email protected]> wrote:\n> \n> > at any given time there is about 5-6 postgres in startup \n> > (ps auxwww | grep postgres | grep startup | wc -l)\n> > \n> > about 2300 connections in idle \n> > (ps auxwww | grep postgres | idle)\n> > \n> > and loads of \"FATAL: sorry, too many clients already\" being logged.\n> > \n> > The server that connects to the db is an apache server using\n> > persistent connections. MaxClients is 2048 thus the high number of\n> > connections needed. 
Application was written in PHP using the Pear DB\n> > class.\n> \n> Sounds like your pooler isn't reusing connections properly.\n\nThe persistent connections are working properly. The idle connections\nare expected given that the Apache child process are not closing them\n(A la non-persistent). The connections do go away after 1000 requests\n(MaxChildRequest).\n\nI decided to move towards persistent connections since prior to\npersistent connections the idle vs startup were reversed.\n\n-- \nBryan Buecking\t\t\t\thttp://www.starling-software.com\n", "msg_date": "Wed, 23 Apr 2008 00:56:46 +0900", "msg_from": "Bryan Buecking <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:\n> On Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:\n> \n> >max_connections = 2400\n> \n> That is WAY too high. Get a real pooler, such as pgpool, and drop \n> that down to 1000 and test from there.\n\nI agree, but the number of idle connections dont' seem to affect\nperformace only memory usage. I'm trying to lessen the load of\nconnection setup. But sounds like this tax is minimal?\n\nWhen these issues started happening, max_connections was set to 1000 and\nI was not using persistent connections.\n\n> I see you mentioned 500 concurrent connections. Are each of those\n> connections actually doing something?\n\nYes out of the 2400 odd connections, 500 are either in SELECT or RESET.\n\n> My guess that once you cut down on the number actual connections\n> you'll find that each connection can get it's work done faster\n> and you'll see that number drop significantly.\n\nI agree, but not in this case. I will look at using pooling. \n-- \nBryan Buecking\t\t\t\thttp://www.starling-software.com\n", "msg_date": "Wed, 23 Apr 2008 01:10:26 +0900", "msg_from": "Bryan Buecking <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "Bryan,\n\n > > about 2300 connections in idle\n> > > (ps auxwww | grep postgres | idle)\n\nthat is about 2300 processes being task scheduled by your kernel, each\nof them using > 1 MB of RAM and some other ressources, are you sure\nthat this is what you want?\n\nUsual recommended design for a web application:\n\nstart request, rent a connection from connection pool, do query, put\nconnection back, finish request, wait for next request\n\nso to get 500 connections in parallel, you would have the outside\nsituaion of 500 browsers submitting requests within the time needed to\nfullfill one request.\n\nHarald\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n", "msg_date": "Tue, 22 Apr 2008 18:15:38 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:\n> \n> Are you referring to PHP's persistent connections? Do not use those. \n> Here's a thread that details the issues with why not: \n> http://archives.postgresql.org/pgsql-general/2007-08/msg00660.php . 
\n\nThanks for that article, very informative and persuasive enough that\nI've turned off persistent connections.\n\n-- \nBryan Buecking\t\t\t\thttp://www.starling-software.com\n", "msg_date": "Wed, 23 Apr 2008 01:16:43 +0900", "msg_from": "Bryan Buecking <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "Are tables vacuumed often?\n\nBryan Buecking escribió:\n> On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:\n> \n>> On Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:\n>>\n>> \n>>> max_connections = 2400\n>>> \n>> That is WAY too high. Get a real pooler, such as pgpool, and drop \n>> that down to 1000 and test from there.\n>> \n>\n> I agree, but the number of idle connections dont' seem to affect\n> performace only memory usage. I'm trying to lessen the load of\n> connection setup. But sounds like this tax is minimal?\n>\n> When these issues started happening, max_connections was set to 1000 and\n> I was not using persistent connections.\n>\n> \n>> I see you mentioned 500 concurrent connections. Are each of those\n>> connections actually doing something?\n>> \n>\n> Yes out of the 2400 odd connections, 500 are either in SELECT or RESET.\n>\n> \n>> My guess that once you cut down on the number actual connections\n>> you'll find that each connection can get it's work done faster\n>> and you'll see that number drop significantly.\n>> \n>\n> I agree, but not in this case. I will look at using pooling. \n>", "msg_date": "Tue, 22 Apr 2008 13:21:03 -0300", "msg_from": "Rodrigo Gonzalez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "On Tue, Apr 22, 2008 at 01:21:03PM -0300, Rodrigo Gonzalez wrote:\n> Are tables vacuumed often?\n\nHow often is often. Right now db is vaccumed once a day.\n-- \nBryan Buecking\t\t\t\thttp://www.starling-software.com\n", "msg_date": "Wed, 23 Apr 2008 01:22:37 +0900", "msg_from": "Bryan Buecking <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "On Tue, Apr 22, 2008 at 10:10 AM, Bryan Buecking <[email protected]> wrote:\n>\n> I agree, but the number of idle connections dont' seem to affect\n> performace only memory usage. I'm trying to lessen the load of\n> connection setup. But sounds like this tax is minimal?\n\nNot entirely true. There are certain things that happen that require\none backend to notify ALL OTHER backends. when this happens a lot,\nthen the system will slow to a crawl.\n", "msg_date": "Tue, 22 Apr 2008 10:23:55 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "Bryan Buecking <[email protected]> writes:\n> On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:\n>> That is WAY too high. Get a real pooler, such as pgpool, and drop \n>> that down to 1000 and test from there.\n\n> I agree, but the number of idle connections dont' seem to affect\n> performace only memory usage.\n\nI doubt that's true (and your CPU load suggests the contrary as well).\nThere are common operations that have to scan the whole PGPROC array,\nwhich has one entry per open connection. What's worse, some of them\nrequire exclusive lock on the array.\n\n8.3 has some improvements in this area that will probably let it scale\nto more connections than previous releases, but in any case connection\npooling is a good thing.\n\n> I'm trying to lessen the load of\n> connection setup. 
But sounds like this tax is minimal?\n\nNot really. You're better off reusing a connection over a large number\nof queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Apr 2008 12:25:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99% " }, { "msg_contents": "Erik Jones wrote:\n\n>> max_connections = 2400\n> \n> That is WAY too high. Get a real pooler, such as pgpool, and drop that \n> down to 1000 and test from there. I see you mentioned 500 concurrent \n> connections. Are each of those connections actually doing something? \n> My guess that once you cut down on the number actual connections you'll \n> find that each connection can get it's work done faster and you'll see \n> that number drop significantly.\n\nIt's not an issue for me - I'm expecting *never* to top 100 concurrent \nconnections, and many of those will be idle, with the usual load being \ncloser to 30 connections. Big stuff ;-)\n\nHowever, I'm curious about what an idle backend really costs.\n\nOn my system each backend has an RSS of about 3.8MB, and a psql process \ntends to be about 3.0MB. However, much of that will be shared library \nbindings etc. The real cost per psql instance and associated backend \nappears to be 1.5MB (measured with 10 connections using system free RAM \nchange) . If I use a little Python program to generate 50 connections \nfree system RAM drops by ~45MB and rises by the same amount when the \nPython process exists and the backends die, so the backends presumably \nuse less than 1MB each of real unshared RAM.\n\nPresumably the backends will grow if they perform some significant \nqueries and are then left idle. I haven't checked that.\n\nAt 1MB of RAM per backend that's not a trivial cost, but it's far from \nearth shattering, especially allowing for the OS swapping out backends \nthat're idle for extended periods.\n\nSo ... what else does an idle backend cost? Is it reducing the amount of \nshared memory available for use on complex queries? Are there some lists \nPostgreSQL must scan for queries that get more expensive to examine as \nthe number of backends rise? Are there locking costs?\n\n--\nCraig Ringer\n", "msg_date": "Wed, 23 Apr 2008 00:36:35 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "\n> about 2300 connections in idle\n> (ps auxwww | grep postgres | idle)\n\n\t[...]\n\n> The server that connects to the db is an apache server using persistent\n> connections. MaxClients is 2048 thus the high number of connections\n> needed. Application was written in PHP using the Pear DB class.\n\n\tThis is pretty classical.\n\tWhen your number of threads gets out of control, everything gets slower, \nso more requests pile up, spawning more threads, this is positive \nfeedback, and in seconds all hell breaks loose. That's why I call it \nimploding, like if it collapses under its own weight. There is a threshold \neffect and it gets from working good to a crawl rather quickly once you \npass the threshold, as you experienced.\n\n\tNote that the same applies to Apache, PHP as well as Postgres : there is \na \"sweet spot\" in the number of threads, for optimum efficiency, depending \non how many cores you have. Too few threads, and it will be waiting for IO \nor waiting for the database. 
Too many threads, and CPU cache utilization \nbecomes suboptimal and context switches eat your performance.\n\n\tThis sweet spot is certainly not at 500 connections per core, either for \nPostgres or for PHP. It is much lower, about 5-20 depending on your load.\n\n\tI will copypaste here an email I wrote to another person with the exact \nsame problem, and the exact same solution.\n\tPlease read this carefully :\n\n*********************************************************************\n\nBasically there are three classes of websites in my book.\n1- Low traffic (ie a few hits/s on dynamic pages), when performance \ndoesn't matter\n2- High traffic (ie 10-100 hits/s on dynamic pages), when you must read \nthe rest of this email\n3- Monster traffic (lots more than that) when you need to give some of \nyour cash to Akamai, get some load balancers, replicate your databases, \nuse lots of caching, etc. This is yahoo, flickr, meetic, etc.\n\nUsually people whose web sites are imploding under load think they are in \nclass 3 but really most of them are in class 2 but using inadequate \ntechnical solutions like MySQL, etc. I had a website with 200K members \nthat ran on a Celeron 1200 with 512 MB RAM, perfectly fine, and lighttpd \nwasn't even visible in the top.\n\nGood news for you is that the solution to your problem is pretty easy. You \nshould be able to solve that in about 4 hours.\n\nSuppose you have some web servers for static content ; obviously you are \nusing lighttpd on that since it can service an \"unlimited\" (up to the OS \nlimit, something like 64K sockets) number of concurrent connections. You \ncould also use nginx or Zeus. I think Akamai uses Zeus. But Lighttpd is \nperfectly fine (and free). For your static content servers you will want \nto use lots of RAM for caching, if you serve images, put the small files \nlike thumbnails, css, javascript, html pages on a separate server so that \nthey are all served from RAM, use a cheap CPU since a Pentium-M with \nlighttpd will happily push 10K http hits/s if you don't wait for IO. Large \nfiles should be on the second static server to avoid cache trashing on the \nserver which has all the frequently accessed small files.\n\nThen you have some web servers for generating your dynamic content. Let's \nsuppose you have N CPU cores total.\nWith your N cores, the ideal number of threads would be N. However those \nwill also wait for IO and database operations, so you want to fill those \nwait times with useful work, so maybe you will use something like 2...10 \nthreads per core. This can only be determined by experimentation, it \ndepends on the type and length of your SQL queries so there is no \"one \nsize fits all\" answer.\n\nExample. You have pages that take 20 ms to generate, and you have 100 \nrequests for those coming up. Let's suppose you have one CPU core.\n\n(Note : if your pages take longer than 10 ms, you have a problem. On the \npreviously mentioned website, now running on the cheapest Core 2 we could \nfind since the torrent tracker eats lots of CPU, pages take about 2-5 ms \nto generate, even the forum pages with 30 posts on them. We use PHP with \ncompiled code caching and SQL is properly optimized). And, yes, it uses \nMySQL. 
Once I wrote (as an experiment) an extremely simple forum which did \n1400 pages/second (which is huge) with a desktop Core2 as the Postgres 8.2 \nserver.\n\n- You could use Apache in the old fasion way, have 100 threads, so all \nyour pages will take 20 ms x 100 = 2 seconds,\nBut the CPU cache utilisation will suck because of all those context \nswitches, you'll have 100 processes eating your RAM (count 8MB for a PHP \nprocess), 100 database connections, 100 postgres processes, the locks will \nstay on longer, transactions will last longer, you'll get more dead rows \nto vacuum, etc.\nAnd actually, since Apache will not buffer the output of your scripts, the \nPHP or Perl interpreter will stay in memory (and hog a database \nconnection) until the client at the other end of the internets had loaded \nall the data. If the guy has DSL, this can take 0.5 seconds, if he has \n56K, much longer. So, you are likely to get much more than 100 processes \nin your Apache, perhaps 150 or perhaps even 1000 if you are out of luck. \nIn this case the site usually implodes.\n\n- You could have a lighttpd or squid proxy handling the client \nconnections, then funnelling that to a few threads generating the \nwebpages. Then, you don't care anymore about the slowness of the clients \nbecause they are not hogging threads anymore. If you have 4 threads, your \nrequests will be processed in order, first come first served, 20 ms x 4 = \n80 ms each average, the CPU cache will work better since you'll get much \nless context switching, RAM will not be filled, postgres will be happy.\n\n> So, the front-end proxy would have a number of max connections, say 200,\n\nNumber of connections to clients => don't set any values, sockets are free \nin lighttpd.\nNumber of connections to PHP/fastcgi or apache/mod_perl backends => number \nof cores x 2 to 5, adjust to taste\n\n> and it would connect to another httpd/mod_perl server behind with a \n> lower number of connections, say 20. If the backend httpd server was \n> busy, the proxy connection to it would just wait in a queue until it was \n> available.\n\n Yes, it waits in a queue.\n\n> Is that the kind of design you had in mind?\n\n Yes.\n The two key points are that :\n * Perl/PHP processes and their heavy resources (database connections, \nRAM) are used only when they have work to do and not waiting for the \nclient.\n * The proxy must work this way :\n 1- get and buffer request data from client (slow, up to 500 ms, up \nto 2000 ms if user has emule or other crap hogging his upload)\n 2- send request to backend (fast, on your LAN, < 1 ms)\n 3- backend generates HTML and sends it to proxy (fast, LAN), proxy \nbuffers data\n 4- backend is now free to process another request\n 5- proxy sends buffered data to client (slow, up to 100-3000 ms)\n The slow parts (points 1 and 5) do not hog a perl/PHP backend.\n\n Do not use a transparent proxy ! The proxy must buffer requests and \ndata for this to work. Backends must never wait for the client. Lighttpd \nwill buffer everything, I believe Apache can be configured to do so. But I \nprefer to use lighttpd for proxying, it is faster and the queuing works \nbetter.\n\n Also, if you can use FastCGI, use it. I have never used mod_perl, but \nwith mod_php, you have a fixed startup cost every time a PHP interpreter \nstarts. With fastcgi, a number of PHP interpreter threads are spawned at \nstartup, so they are always ready, the startup cost is much smaller. 
You \ncan serve a small AJAX request with 1-2 database queries in less than 1 ms \nif you are careful with your code (like, no heavyweight session \ninitialization on each page, using mmcache to avoid reparsing the PHP \neverytime, etc).\n\n If you have several backend servers generating webpages, use sticky \nsessions and put the session storage on the backends themselves, if you \nuse files use ReiserFS not ext3 which sucks when you have a large number \nof session files in the same directory. Or use memcached, whatever, but \ndon't put sessions in the database, this gives you a nice tight bottleneck \nwhen adding servers. If each and every one of your pages has an UPDATE \nquery to the sessions table you have a problem.\n\n As for why I like lighttpd, I am fond of the asynchronous select/poll \nmodel for a webserver which needs to handle lots of concurrent \nconnections. When you have 50 open sockets threads are perfectly fine, \nwhen you have 1000 a threaded server will implode. I wrote a bittorrent \ntracker in Python using an asynchronous select/poll model ; it has been \nhandling about 150-400 HTTP hits per second for two years now, it has \nabout 100-200 concurrent opened sockets 24 hours a day, and the average \nlifetime of a socket connection is 600 ms. There are 3 threads (webserver, \nbackend, deferred database operations) with some queues in between for the \nplumbing. Serving an /announce HTTP request takes 350 microseconds of CPU \ntime. All using a purely interpreted language, lol. It uses half a core on \nthe Core 2 and about 40 MB of RAM.\n\n\tWhen lighttpd is overloaded (well, it's impossible to kill it with static \nfiles unless it waits for disk IO, but if you overload the fastcgi \nprocesses), requests are kicked out of the queue, so for instance it will \nonly serve 50% of the requests. But an overloaded apache will serve 0% \nsince you'll get 1000 threads, it'll swap, and everything will timeout and \ncrash.\n\n********************************************************\n\nEnd of copypaste.\n\n\tSo :\n\n\t- You need to get less Postgres connections to let Postgres breathe and \nuse your CPU power to perform queries and not context switches and cache \nmanagement.\n\t- You need to get less PHP threads which will have the same effect on \nyour webserver.\n\n\tThe way to do this is is actually pretty simple.\n\n\t- Frontend proxy (lighttpd), load balancer, whatever, sending static \nrequests to static servers, and dynamic requests to dynamic servers. If \nthe total size of your static files fits in the RAM of this server, make \nthe static server and the proxy the same lighttpd instance.\n\n\t- Backends for PHP : a number of servers running PHP/fastcgi, no web \nservers at all, the lighttpd frontend can hit several PHP/fastcgi backends.\n\n\t- Use PHP persistent connections (which now appear to work in the latest \nversion, in fastcgi mode, I don't know about mod_php's persistent \nconnections though).\n\t- Or use pgpool or pgbouncer or another connection pooler, but only if \nPHP's persistent connections do not work for you.\n\n> 1: Each apache / php process maintains its own connections, not\n> sharing with others. So it's NOT connection pooling, but people tend\n> to think it is.\n\n\tTrue with mod_php (and sad). 
With fastcgi, you don't really care, since \nthe PHP processes are few and are active most of the time, no connection \nhogging takes place unless you use many different users to connect to \npostgres, in which case you should switch to pgpool.\n\n> 2: Each unique connection creates another persistent connection for\n> an apache/php child process. If you routinely connect to multiple\n> servers / databases or as > 1 user, then each one of those\n> combinations that is unique makes another persistent connection.\n\n\tTrue also for fastcgi, but if you don't do that, no problem.\n\n> 3: There's no facility in PHP to clean an old connection out and make\n> sure it's in some kind of consistent state when you get it. It's in\n> exactly the same state it was when the previous php script finished\n> with it. Half completed transactions, partial sql statements,\n> sequence functions like currval() may have values that don't apply to\n> you.\n\n\tApparently now fixed.\n\n> 4: pg_close can't close a persistent connection. Once it's open, it\n> stays open until the child process is harvested.\n\n\tDon't know about that.\n\n> 5: Apache, by default, is configured for 150 child processes.\n> Postgresql, and many other databases for that matter, are configured\n> for 100 or less.\n\n\t(and for good reason)\n\n> Even if apache only opens one connection to one\n> database with one user account, it will eventually try to open the\n> 101st connection to postgresql and fail. So, the default\n> configuration of apache / postgresql for number of connections is\n> unsafe for pconnect.\n\n\tfastcgi makes this problem disappear by separating the concept of \"client \nconnection\" from the concept of \"web server thread\". Not only will it make \nPostgres happier, your PHP processing will be much faster too.\n\n> 6: The reason for connection pooling is primarily to twofold. One is\n> to allow very fast connections to your database when doing lots of\n> small things where connection time will cost too much. The other is\n> to prevent your database from having lots of stale / idle connections\n> that cause it to waste memory and to be slower since each backend\n> needs to communicate with every other backend some amount of data some\n> times. pconnect takes care of the first problem, but exacerbates the\n> second.\n\n\tMoot point with fastcgi.\n\tUnused PHP processes are removed in times of low traffic, along with \ntheir connections.\n\n> P.s. dont' think I'm dogging PHP, cause I'm not. I use it all the\n> time, and it's really great for simple small scripts that need to be\n> done NOW and need to be lightweight. I even use pconnect a bit. But\n> my machine is set for 50 or fewer apache children and 150 postgresql\n> connects, and I only use pconnect on small, lightweight things that\n> need to zoom. 
Everything else gets regular old connect.\n\n\tVery true for mod_php, wrong for fastcgi : you can get extreme \nperformance with pconnect and a PHP code cache like turck/mm or \neaccelerator, down to 1 ms per page.\n\n\tEspecially if you use PEAR which is very bloated, you nead a code cache \nto avoid parsing it on every page.\n\tOn previously mentioned website it cut the page time from 50 ms to 2 ms \non some pages because there was a lot of includes.\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 22 Apr 2008 18:45:24 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" }, { "msg_contents": "Bryan Buecking wrote:\n> On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:\n>> Are you referring to PHP's persistent connections? Do not use those. \n>> Here's a thread that details the issues with why not: \n>> http://archives.postgresql.org/pgsql-general/2007-08/msg00660.php . \n> \n> Thanks for that article, very informative and persuasive enough that\n> I've turned off persistent connections.\n\nNote that it's not always true - current recommended practice for PHP is \nto run it in FastCGI, in which case even though there are hundreds of \nApache processes, there are only few PHP processes with their persistent \ndatabase connections (and unused PHP FastCGI servers get killed off \nroutinely) so you get almost \"proper\" pooling without the overhead.\n\n", "msg_date": "Wed, 23 Apr 2008 01:58:47 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound at 99%" } ]
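A quick way to confirm the idle-versus-active split discussed in the thread above from inside the database, rather than by grepping ps output, is to ask pg_stat_activity directly. This is only a sketch: the column names are the 8.1/8.2 ones, current_query reports '<IDLE>' only when stats_command_string is enabled, and later releases expose a separate state column instead.

    SELECT datname,
           usename,
           count(*) AS backends,
           sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle_backends
      FROM pg_stat_activity
     GROUP BY datname, usename
     ORDER BY backends DESC;

If most backends sit idle most of the time, that is usually the sign that a pooler (pgpool, pgbouncer) in front of a much smaller max_connections will carry the same load with far less overhead.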
[ { "msg_contents": "Hello, i have a postgresql server running and from time to time it gets\npainfully slow. When this happens i usually connect to the server and\nrun a \"top\" command, the output i get is filled with lines like the\nfollowing\n\n71872 pgsql 1 4 0 48552K 42836K sbwait 1:41 4.79%\npostgres\n\nAre those connections that were not closed or something like that?\n\nshould i worry?\n\nThanks in advance, as always\n\nyours trully\n\nRafael\n\n", "msg_date": "Tue, 22 Apr 2008 15:34:38 -0300", "msg_from": "Rafael Barrera Oro <[email protected]>", "msg_from_op": true, "msg_subject": "Suspicious top output" }, { "msg_contents": "Rafael Barrera Oro wrote:\n> Hello, i have a postgresql server running and from time to time it gets\n> painfully slow. When this happens i usually connect to the server and\n> run a \"top\" command, the output i get is filled with lines like the\n> following\n> \n> 71872 pgsql 1 4 0 48552K 42836K sbwait 1:41 4.79%\n> postgres\n> \n> Are those connections that were not closed or something like that?\n\nThis looks like FreeBSD; \"sbwait\" state is socket buffer wait, and \nguessing from the CPU usage the process seems to be talking to another \nprocess.\n\n> should i worry?\n\nDon't know. Are you sure all client processes disconnect properly from \nthe database?\n\n", "msg_date": "Wed, 23 Apr 2008 01:50:33 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspicious top output" }, { "msg_contents": "On Tue, 22 Apr 2008, Rafael Barrera Oro wrote:\n\n> Hello, i have a postgresql server running and from time to time it gets\n> painfully slow.\n\nThe usual information you should always include when posting messages here \nis PostgreSQL and operating system versions.\n\n> When this happens i usually connect to the server and run a \"top\" \n> command\n\nThe other thing you should fire up in another window is \"vmstat 1\" to \nfigure out just what's going on in general. The great thing about those \nis you can save them when you're done and easily analyze the results later \neasily, which is trickier to do with top.\n\n> 71872 pgsql 1 4 0 48552K 42836K sbwait 1:41 4.79%\n> postgres\n\nSome searching found this interesting suggestion from Darcy about things \nstuck in sbwait:\n\nhttp://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2004-03/0015.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 23 Apr 2008 01:43:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspicious top output" } ]
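Along the same lines as the vmstat suggestion above, it can help to look at the problem from the database side and check whether client sessions really go away. A rough sketch, again using the 8.x pg_stat_activity column names (the backend process id is called procpid there, and query_start is only populated when command-string statistics are on):

    SELECT procpid,
           client_addr,
           client_port,
           now() - backend_start AS session_age,
           now() - query_start   AS time_since_last_query,
           current_query
      FROM pg_stat_activity
     ORDER BY backend_start;

Sessions that stay open for hours without issuing anything are a hint that the application is leaking connections rather than any single backend being genuinely slow.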
[ { "msg_contents": "Hi all,\n\nI´m running a copy for a 37G CSV and receiving the following error:\n\n\"invalid string enlargement request size 65536\"\n\nThe file has about 70 million lines with 101 columns, all them varchar.\n\nWhen I run the command with the whole file i receive the error after loading\nabout 29million lines. So i´ve spllited the file in 10 million lines with\nsplit:\n\nsplit --lines=10000000\n\nAnd running the copy i receive the error on the 5th file:\n\npsql:/srv/www/htdocs/import/script_q2.sql:122: ERROR: invalid string\nenlargement request size 65536\nCONTEXT: COPY temp_q2, line 3509639: \"\"000000009367276\";\"4\";\"DANIEL DO\nCARMO BARROS\";\"31-Jan-1986\";\"M\";\"1\";\"10\";\"3162906\";\"GILSON TEIXEIRA...\"\n\nAny clues?\n\nMy postgresql version is 8.2.4 the server is running suse linux with 1.5GB\nSensitive changes in postgresql.conf are:\n\nshared_buffers = 512MB\ntemp_buffers = 256MB\ncheckpoint_segments = 60\n\nI´d also like to know if there´s any way to optimize huge data load in\noperations like these.\n\nRegards\n\nAdonias Malosso\n\nHi all,I´m running a copy for a 37G CSV and receiving the following error:\"invalid string enlargement request size 65536\"The file has about 70 million lines with 101 columns, all them varchar.\nWhen I run the command with the whole file i receive the error after loadingabout 29million lines. So i´ve spllited the file in 10 million lines with split:split --lines=10000000And running the copy i receive the error on the 5th file:\npsql:/srv/www/htdocs/import/script_q2.sql:122: ERROR:  invalid string enlargement request size 65536CONTEXT:  COPY temp_q2, line 3509639: \"\"000000009367276\";\"4\";\"DANIEL DO CARMO BARROS\";\"31-Jan-1986\";\"M\";\"1\";\"10\";\"3162906\";\"GILSON TEIXEIRA...\"\nAny clues?My postgresql version is 8.2.4 the server is running suse linux with 1.5GB Sensitive changes in postgresql.conf are:shared_buffers = 512MBtemp_buffers = 256MBcheckpoint_segments = 60\nI´d also like to know if there´s any way to optimize huge data load in operations like these.RegardsAdonias Malosso", "msg_date": "Tue, 22 Apr 2008 18:05:35 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] Error loading 37G CSV file \"invalid string enlargement\n\trequest size 65536\"" }, { "msg_contents": "\"Adonias Malosso\" <[email protected]> writes:\n> I�m running a copy for a 37G CSV and receiving the following error:\n> \"invalid string enlargement request size 65536\"\n\nAFAICS this would only happen if you've got an individual line of COPY\ndata exceeding 1GB. 
(PG versions later than 8.2 give a slightly more\nhelpful \"out of memory\" error in such a case.)\n\nMost likely, that was not your intention, and the real problem is\nincorrect quoting/escaping in the CSV file, causing COPY to think\nthat a large number of physical lines should be read as one logical line.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Apr 2008 17:55:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Error loading 37G CSV file \"invalid string\n\tenlargement request size 65536\"" }, { "msg_contents": "Adonias Malosso wrote:\n> Hi all,\n> \n\n> split --lines=10000000\n> \n> And running the copy i receive the error on the 5th file:\n> \n> psql:/srv/www/htdocs/import/script_q2.sql:122: ERROR: invalid string\n> enlargement request size 65536\n> CONTEXT: COPY temp_q2, line 3509639: \"\"000000009367276\";\"4\";\"DANIEL DO\n> CARMO BARROS\";\"31-Jan-1986\";\"M\";\"1\";\"10\";\"3162906\";\"GILSON TEIXEIRA...\"\n> \n> Any clues?\n\nquote problems from earlier than that?\none missing?\n\\ at end of field negating the closing quote\n\nI'd keep splitting to help isolate - what control do you have over the \ngeneration of the data?\n\nIs this one off import or ongoing?\n\n> My postgresql version is 8.2.4 the server is running suse linux with 1.5GB\n> Sensitive changes in postgresql.conf are:\n> \n> shared_buffers = 512MB\n> temp_buffers = 256MB\n> checkpoint_segments = 60\n> \n> I�d also like to know if there�s any way to optimize huge data load in\n> operations like these.\n\nSounds like you are already using copy. Where from? Is the data file on \nthe server or a seperate client? (as in reading from the same disk that \nyou are writing the data to?)\n\nSee if http://pgfoundry.org/projects/pgbulkload/ can help\n\nIt depends a lot on what you are doing and what table you are importing \ninto. Indexes will most likely be the biggest slow down, it is faster to \ncreate them after the table is filled. Also fk restraints can slow down \nas well.\n\nIs this a live server that will still be working as you load data?\n\nIf the db is not in use try dropping all indexes (on the relevant table \nanyway), loading then create indexes.\n\nYou can copy into a temp table without indexes then select into the \ntarget table.\n\nWhat fk restraints does this table have? Can they be safely deferred \nduring the import?\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Wed, 23 Apr 2008 13:01:12 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Error loading 37G CSV file \"invalid string\n\tenlargement request size 65536\"" } ]
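To make the staging-table approach suggested above concrete, here is a minimal sketch. The temp_q2 table and the ';' delimiter come from the error message earlier in the thread; the load_q2 staging table and the chunk file name are made up for illustration, and the option spelling is the pre-9.0 form that 8.2 understands:

    CREATE TEMP TABLE load_q2 (LIKE temp_q2);   -- same columns, but no indexes or constraints
    COPY load_q2 FROM '/srv/www/htdocs/import/xaa'
         WITH CSV DELIMITER ';' QUOTE '"';
    INSERT INTO temp_q2 SELECT * FROM load_q2;

Loading one split chunk at a time this way also narrows the search for the unbalanced quoting Tom suspects, since a bad chunk fails on its own instead of aborting the whole 37 GB load.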
[ { "msg_contents": "Hi All,\n\nThis query is being executed nearly a million times....\n SELECT 'DBD::Pg ping test'\n\nWhy this is being executed ? What is the use ?\n\nAm sure that this query is not executed explicitly.\n\nam using postgres 8.1\n\nAny idea ?\n\nHi All,This query is being executed nearly a million times....           SELECT 'DBD::Pg ping test'Why this is being executed ? What is the use ?Am sure that this query is not executed explicitly.\nam using postgres 8.1Any idea ?", "msg_date": "Wed, 23 Apr 2008 12:49:37 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT 'DBD::Pg ping test'" }, { "msg_contents": "sathiya psql wrote:\n> Hi All,\n> \n> This query is being executed nearly a million times....\n> SELECT 'DBD::Pg ping test'\n> \n> Why this is being executed ? What is the use ?\n\nA client is sending a query to the server solely to see if the server\nresponds.\n\nDBD::Pg is the Perl database driver for PostgreSQL. Presumably the\napplication using that driver has some sort of keepalive or database\nconnectivity check enabled, so it's periodically issuing these queries.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 23 Apr 2008 16:28:05 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT 'DBD::Pg ping test'" }, { "msg_contents": "On Wed, Apr 23, 2008 at 12:19 AM, sathiya psql <[email protected]> wrote:\n> Hi All,\n>\n> This query is being executed nearly a million times....\n> SELECT 'DBD::Pg ping test'\n\nSomething in your Perl application is use $dbh->ping(). See perldoc\nDBI. It's possible that this is happening under the hood, because\nyour application is using connect_cached() instead of connect().\n\n-jwb\n", "msg_date": "Wed, 23 Apr 2008 09:40:23 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT 'DBD::Pg ping test'" } ]
[ { "msg_contents": "I cannot understand why the following two queries differ so much in execution time (almost ten times)\n\nQuery A (two queries)\n\nselect distinct moment.mid from moment,timecard where parent = 45 and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\nselect distinct moment.mid from moment,timecard where parent = 45 and (pbar = 0) order by moment.mid;\n\nQuery B (combining the two with OR)\n\nselect distinct moment.mid from moment,timecard where parent = 45 and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by moment.mid;\n\n$ time psql -o /dev/null -f query-a.sql fektest\n\nreal 0m2.016s\nuser 0m1.532s\nsys 0m0.140s\n\n$ time psql -o /dev/null -f query-b.sql fektest\n\nreal 0m28.534s\nuser 0m1.516s\nsys 0m0.156s\n\nI have tested this in two different computers with different amount of\nRAM, fast or slow CPU, and the difference is persistent, almost ten\ntimes.\n\nI should say that this is on postgresql 7.4.16 (debian stable).\n\nCan query B be rewritten so that it would execute faster?\n\nTIA\n\n-- \nHans Ekbrand (http://sociologi.cjb.net) <[email protected]>\nGPG Fingerprint: 1408 C8D5 1E7D 4C9C C27E 014F 7C2C 872A 7050 614E", "msg_date": "Wed, 23 Apr 2008 09:23:07 +0200", "msg_from": "Hans Ekbrand <[email protected]>", "msg_from_op": true, "msg_subject": "mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "am Wed, dem 23.04.2008, um 9:23:07 +0200 mailte Hans Ekbrand folgendes:\n> I cannot understand why the following two queries differ so much in execution time (almost ten times)\n\nwild guess: different execution plans.\n\n\nCan you show us the plans? (EXPLAIN ANALYSE SELECT ...)\n\n> \n> Query A (two queries)\n> \n> select distinct moment.mid from moment,timecard where parent = 45 and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\n> select distinct moment.mid from moment,timecard where parent = 45 and (pbar = 0) order by moment.mid;\n> \n> Query B (combining the two with OR)\n> \n> select distinct moment.mid from moment,timecard where parent = 45 and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by moment.mid;\n>\n> [ snip ]\n> \n> I should say that this is on postgresql 7.4.16 (debian stable).\n\nUhh. Why not a recent version? We have 8.3.0...\n\n\n> \n> Can query B be rewritten so that it would execute faster?\n\nQuick and dirty: use both selects (query A) combined with UNION.\n\nI guess, with a recent version the planner can use a bitmap index scan\nto perform Query B faster.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 23 Apr 2008 09:58:10 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "am Wed, dem 23.04.2008, um 9:58:10 +0200 mailte A. 
Kretschmer folgendes:\n> > Query A (two queries)\n> > \n> > select distinct moment.mid from moment,timecard where parent = 45 and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\n> > select distinct moment.mid from moment,timecard where parent = 45 and (pbar = 0) order by moment.mid;\n> > \n> > Query B (combining the two with OR)\n> > \n> > select distinct moment.mid from moment,timecard where parent = 45 and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by moment.mid;\n\nThanks to depesz on #postgresql (irc-channel):\n\nQuery A, the second query: there are no join between the 2 tables.\nMistake?\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 23 Apr 2008 10:57:04 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "On Wed, Apr 23, 2008 at 09:58:10AM +0200, A. Kretschmer wrote:\n> am Wed, dem 23.04.2008, um 9:23:07 +0200 mailte Hans Ekbrand folgendes:\n> > I cannot understand why the following two queries differ so much in execution time (almost ten times)\n> \n> wild guess: different execution plans.\n> \n> \n> Can you show us the plans? (EXPLAIN ANALYSE SELECT ...)\n\nQuery A (first part)\n\nfektest=> explain analyse select distinct moment.mid from moment,timecard where parent = 45 and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Unique (cost=287.66..288.67 rows=203 width=4) (actual time=11.412..11.622 rows=41 loops=1)\n -> Sort (cost=287.66..288.16 rows=203 width=4) (actual time=11.409..11.484 rows=57 loops=1)\n Sort Key: moment.mid\n -> Hash Join (cost=60.98..279.88 rows=203 width=4) (actual time=2.346..11.182 rows=57 loops=1)\n Hash Cond: (\"outer\".mid = \"inner\".mid)\n -> Seq Scan on timecard (cost=0.00..211.78 rows=1017 width=4) (actual time=0.031..7.427 rows=995 loops=1)\n Filter: (pid = 17)\n -> Hash (cost=59.88..59.88 rows=444 width=4) (actual time=2.127..2.127 rows=0 loops=1)\n -> Seq Scan on moment (cost=0.00..59.88 rows=444 width=4) (actual time=0.027..1.825 rows=199 loops=1)\n Filter: (parent = 45)\n Total runtime: 11.852 ms\n(11 rows)\n\nQuery A (second part)\n\nfektest=> explain analyse select distinct moment.mid from moment,timecard where parent = 45 and (pbar = 0) order by moment.mid;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=192.62..3800.67 rows=20 width=4) (actual time=0.578..109.274 rows=2 loops=1)\n -> Nested Loop (cost=192.62..3417.57 rows=153240 width=4) (actual time=0.575..89.546 rows=15324 loops=1)\n -> Index Scan using moment_mid_idx on moment (cost=0.00..160.15 rows=20 width=4) (actual time=0.544..3.490 rows=2 loops=1)\n Filter: ((parent = 45) AND (pbar = 0))\n -> Materialize (cost=192.62..269.24 rows=7662 width=0) (actual time=0.009..21.998 rows=7662 loops=2)\n -> Seq Scan on timecard (cost=0.00..192.62 rows=7662 width=0) (actual time=0.007..14.554 rows=7662 loops=1)\n Total runtime: 109.870 ms\n(7 rows)\n\nQuery B\n\nfektest=> EXPLAIN ANALYSE SELECT distinct moment.mid from moment,timecard where parent = 45 and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by 
moment.mid;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=192.62..102469.31 rows=444 width=4) (actual time=143.444..4838.067 rows=42 loops=1)\n -> Nested Loop (cost=192.62..102405.04 rows=25710 width=4) (actual time=143.439..4818.215 rows=15379 loops=1)\n Join Filter: (((\"inner\".pid = 17) OR (\"outer\".pbar = 0)) AND ((\"inner\".mid = \"outer\".mid) OR (\"outer\".pbar = 0)))\n -> Index Scan using moment_mid_idx on moment (cost=0.00..154.58 rows=444 width=8) (actual time=0.390..5.954 rows=199 loops=1)\n Filter: (parent = 45)\n -> Materialize (cost=192.62..269.24 rows=7662 width=8) (actual time=0.001..9.728 rows=7662 loops=199)\n -> Seq Scan on timecard (cost=0.00..192.62 rows=7662 width=8) (actual time=0.007..17.007 rows=7662 loops=1)\n Total runtime: 4838.786 ms\n(8 rows)\n\n> > I should say that this is on postgresql 7.4.16 (debian stable).\n> \n> Uhh. Why not a recent version? We have 8.3.0...\n\nNo particularly good reason, just that I have taken over a production\nsystem and I didn't want to mess up with before I am confident with\nit. But I on a test-site I have migrated to 8.1 without problems, so\nmigration will happen, we just haven't a reason for doing it yet,\nsince 7.4 has served us well.\n\n> > Can query B be rewritten so that it would execute faster?\n> \n> Quick and dirty: use both selects (query A) combined with UNION.\n\nI will look into that.\n\n> I guess, with a recent version the planner can use a bitmap index scan\n> to perform Query B faster.\n\nThat might be a good reason to upgrade :-)\n\nThanks for your answer.\n\n-- \nEvery non-free program has a lord, a master --\nand if you use the program, he is your master.\nLearn to master free software: www.ubuntulinux.com", "msg_date": "Wed, 23 Apr 2008 11:06:08 +0200", "msg_from": "hans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "On Wed, Apr 23, 2008 at 10:57:04AM +0200, A. Kretschmer wrote:\n> am Wed, dem 23.04.2008, um 9:58:10 +0200 mailte A. Kretschmer folgendes:\n> > > Query A (two queries)\n> > > \n> > > select distinct moment.mid from moment,timecard where parent = 45 and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\n> > > select distinct moment.mid from moment,timecard where parent = 45 and (pbar = 0) order by moment.mid;\n> > > \n> > > Query B (combining the two with OR)\n> > > \n> > > select distinct moment.mid from moment,timecard where parent = 45 and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by moment.mid;\n> \n> Thanks to depesz on #postgresql (irc-channel):\n> \n> Query A, the second query: there are no join between the 2 tables.\n> Mistake?\n\nNo, I just wanted to show the time differences, I haven't used join\nbefore. Now that you have adviced me to, I have tried your suggestion\nto rewrite B as a union and it works good! 
Just as fast as the A Query!\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=4100.27..4101.39 rows=223 width=4) (actual time=120.963..121.124 rows=42 loops=1)\n -> Sort (cost=4100.27..4100.83 rows=223 width=4) (actual time=120.959..121.008 rows=43 loops=1)\n Sort Key: mid\n -> Append (cost=287.66..4091.57 rows=223 width=4) (actual time=11.274..120.795 rows=43 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=287.66..290.70 rows=203 width=4) (actual time=11.270..11.604 rows=41 loops=1)\n -> Unique (cost=287.66..288.67 rows=203 width=4) (actual time=11.264..11.469 rows=41 loops=1)\n -> Sort (cost=287.66..288.16 rows=203 width=4) (actual time=11.260..11.331 rows=57 loops=1)\n Sort Key: moment.mid\n -> Hash Join (cost=60.98..279.88 rows=203 width=4) (actual time=2.563..11.136 rows=57 loops=1)\n Hash Cond: (\"outer\".mid = \"inner\".mid)\n -> Seq Scan on timecard (cost=0.00..211.78 rows=1017 width=4) (actual time=0.032..7.156 rows=995 loops=1)\n Filter: (pid = 17)\n -> Hash (cost=59.88..59.88 rows=444 width=4) (actual time=2.329..2.329 rows=0 loops=1)\n -> Seq Scan on moment (cost=0.00..59.88 rows=444 width=4) (actual time=0.035..1.980 rows=199 loops=1)\n Filter: (parent = 45)\n -> Subquery Scan \"*SELECT* 2\" (cost=192.62..3800.87 rows=20 width=4) (actual time=0.583..109.073 rows=2 loops=1)\n -> Unique (cost=192.62..3800.67 rows=20 width=4) (actual time=0.578..109.061 rows=2 loops=1)\n -> Nested Loop (cost=192.62..3417.57 rows=153240 width=4) (actual time=0.576..89.437 rows=15324 loops=1)\n -> Index Scan using moment_mid_idx on moment (cost=0.00..160.15 rows=20 width=4) (actual time=0.544..3.527 rows=2 loops=1)\n Filter: ((parent = 45) AND (pbar = 0))\n -> Materialize (cost=192.62..269.24 rows=7662 width=0) (actual time=0.014..21.930 rows=7662 loops=2)\n -> Seq Scan on timecard (cost=0.00..192.62 rows=7662 width=0) (actual time=0.005..14.560 rows=7662 loops=1)\n Total runtime: 122.076 ms\n(23 rows)\n\n-- \nHans Ekbrand (http://sociologi.cjb.net) <[email protected]>\nA. Because it breaks the logical sequence of discussion\nQ. 
Why is top posting bad?", "msg_date": "Wed, 23 Apr 2008 11:22:09 +0200", "msg_from": "Hans Ekbrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "\nOn 23 Apr 2008, at 9:23AM, Hans Ekbrand wrote:\n\n> I cannot understand why the following two queries differ so much in \n> execution time (almost ten times)\n>\n> Query A (two queries)\n>\n> select distinct moment.mid from moment,timecard where parent = 45 \n> and (pid=17 and timecard.mid = moment.mid) order by moment.mid;\n> select distinct moment.mid from moment,timecard where parent = 45 \n> and (pbar = 0) order by moment.mid;\n>\n> Query B (combining the two with OR)\n>\n> select distinct moment.mid from moment,timecard where parent = 45 \n> and ((pid=17 and timecard.mid = moment.mid) or (pbar = 0)) order by \n> moment.mid;\n>\n> $ time psql -o /dev/null -f query-a.sql fektest\n>\n> real 0m2.016s\n> user 0m1.532s\n> sys 0m0.140s\n>\n> $ time psql -o /dev/null -f query-b.sql fektest\n>\n> real 0m28.534s\n> user 0m1.516s\n> sys 0m0.156s\n>\n> I have tested this in two different computers with different amount of\n> RAM, fast or slow CPU, and the difference is persistent, almost ten\n> times.\n>\n> I should say that this is on postgresql 7.4.16 (debian stable).\n>\n> Can query B be rewritten so that it would execute faster?\n\nTry\nselect distinct moment.mid from moment,timecard where parent = 45 and \n(pid=17 and timecard.mid = moment.mid) order by moment.mid\nunion all\nselect distinct moment.mid from moment,timecard where parent = 45 and \n(pbar = 0) order by moment.mid;\n-- \nRegards\nTheo\n\n", "msg_date": "Wed, 23 Apr 2008 13:00:07 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "> I should say that this is on postgresql 7.4.16 (debian stable).\n\n\tWhoa.\n\n> I cannot understand why the following two queries differ so much in \n> execution time (almost ten times)\n\n\tPost EXPLAIN ANALYZE for both, and also post table definitions (with \nindexes), use \\d table. This will allow people to help you.\n\n> $ time psql -o /dev/null -f query-a.sql fektest\n>\n> real 0m2.016s\n> user 0m1.532s\n> sys 0m0.140s\n\n\tYou are measuring the time it takes the server to perform the query, plus \nthis :\n\t- time for the client (psql) to launch itself,\n\t- to read the configuration file,\n\t- to connect to the server, send the query\n\t- to transfer the results back to the client (is this on network or local \n? 
what is the amount of data transferred ?)\n\t- to process the results, format them as text, display them,\n\t- to close the connection,\n\t- to exit cleanly\n\n\tAs you can see from the above numbers,\n\t- 2.016 seconds elapsed on your wall clock, of which :\n\t- 76% was used as CPU time in the client (therefore of absolutely no \nrelevance to postgresql server performance)\n\t- and the rest (24%) distributed in unknown proportion between server CPU \nspent to process your query, network roundtrips, data transfer, server \niowait, etcetera.\n\n\tIn order to properly benchmark your query, you should :\n\n\t1- Ensure the server is not loaded and processing any other query (unless \nyou explicitly intend to test behaviour under load)\n\tIf you don't do that, your timings will be random, depending on how much \nload you have, if someone holds a lock you have to wait on, etc.\n\n\t2- ssh to your server and use a psql session local to the server, to \navoid network roundtrips.\n\n\t3- enable statement timing with \\t\n\n\t2- EXPLAIN your query.\n\n\tCheck the plan.\n\tCheck the time it took to EXPLAIN, this will tell you how much time it \ntakes to parse and plan your query.\n\n\t2- EXPLAIN ANALYZE your query.\n\n\tDo it several times, note the different timings and understand the query \nplans.\n\tIf the data was not cached, the first timing will be much longer than the \nsubsequent other timings. This will give you useful information about the \nbehaviour of this query : if lasts for 1 second (cached) and 5 minutes \n(not cached), you might not want to execute it at the same time as that \nhuge scheduled backup job. Those timings will also provide hints on wether \nyou should CLUSTER the table, etc.\n\n\t3- EXPLAIN SELECT count(*) FROM (your query) AS foo\n\tCheck that the plan is the same.\n\n\t4- SELECT count(*) FROM (your query) AS foo\n\tThe count(*) means very little data is exchanged between client and \nserver, so this doesn't mess with the timing.\n\n\tNow, compare :\n\n\tThe timings displayed by psql (\\t) include query planning, roundtrip to \nserver, and result processing (hence the count(*) to reduce this overhead).\n\tThe timings displayed by EXPLAIN ANALYZE include only query execution \ntime, but EXPLAIN ANALYZE is slower than just executing the query, because \nit takes time to instrument the query and measure its performance. For \ninstance, on a very simple query that computes an aggregate on lots of \nrows, more time will be spent measuring than actually executing the query. \nHence steps 3 and 4 above.\n\n\tKnowing this, you deduce the time it takes to parse & plan your query \n(should you then use PREPAREd statements ? up to you) and the time it \ntakes to execute it.\n\n\t5- EXPLAIN ANALYZE, while changing the parameters (trying some very \nselective or less selective ones) to check for plan change, mess with \nenable_**** parameters to check for different plans, rewrite the query \ndifferently (DISTINCT/GROUP BY, OR/UNION, JOIN or IN(subquery), etc).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 23 Apr 2008 14:56:56 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" }, { "msg_contents": "\"Hans Ekbrand\" <[email protected]> writes:\n\n> No, I just wanted to show the time differences, I haven't used join\n> before. Now that you have adviced me to, I have tried your suggestion\n> to rewrite B as a union and it works good! 
Just as fast as the A Query!\n\nYou can even do better. If you know the two sets of mid are disjoint you can\nuse UNION ALL. If not you could remove the two DISTINCTs as the UNION will\ntake care of removing duplicates.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Wed, 23 Apr 2008 09:31:39 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mysterious difference in speed when combining two queries with OR" } ]
[ { "msg_contents": "On Wed, 23 Apr 2008, Justin wrote:\n\n> http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/\n\nI can see that, the engines from both companies make similar speed versus \nreliability trade-offs: http://www.f1technical.net/news/8507\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 23 Apr 2008 14:58:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "http://www.eweek.com/c/a/Database/Sun-Asserts-MySQL-to-Remain-Open-Source/?sp=0&kc=EWKNLINF042308STR1 \n\n\ni like the statement\n\" Sun will not withhold or close-source any features that would make the \nMySQL community server less functional for users. \"\n\n\nhttp://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n\n", "msg_date": "Wed, 23 Apr 2008 14:21:17 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Sun Talks about MySQL" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Wed, 23 Apr 2008, Justin wrote:\n>\n>> http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n>>\n>\n> I can see that, the engines from both companies make similar speed \n> versus reliability trade-offs: http://www.f1technical.net/news/8507\n>\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\nLOL :-D\n", "msg_date": "Wed, 23 Apr 2008 15:11:46 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": ">> http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n>>\n>\n> I can see that, the engines from both companies make similar speed \n> versus reliability trade-offs: http://www.f1technical.net/news/8507\nWhen they fail, they fail fast\n\n  -- Korry", "msg_date": "Wed, 23 Apr 2008 16:59:43 -0400", "msg_from": "\"korry\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": ">> http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n>>\n>\n> I can see that, the engines from both companies make similar speed \n> versus reliability trade-offs: http://www.f1technical.net/news/8507\nGuys, let's not joke about Ferrari. :)\n\nBy the way, that article was immediately after the first Grand Prix of\nthe season - a disaster for the racing team. Apparently Ferrari fixed\nvery well the problems, as they have then won the following two grand prix.\n\nBut ... Ferrari is not MySQL, guys.\n\nActually, we already have our Ferrari in the PostgreSQL community.\n\nPhD Luca Ferrari is indeed the vice president of ITPUG. 
:)\n\nCiao,\nGabriele\n-- \nGabriele Bartolini: Open source programmer and data architect\nCurrent Location: Prato, Tuscany, Italy\[email protected] | www.gabrielebartolini.it\n\"If I had been born ugly, you would never have heard of Pel�\", George Best\nhttp://www.linkedin.com/in/gbartolini\n", "msg_date": "Wed, 23 Apr 2008 23:14:31 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Wed, 23 Apr 2008 23:14:31 +0200\nGabriele Bartolini <[email protected]> wrote:\n\n> > I can see that, the engines from both companies make similar speed\n> > versus reliability trade-offs: http://www.f1technical.net/news/8507\n> \n> Guys, let's not joke about Ferrari. :)\n\nMore appropriately, don't give Ferrari a bad name by associating it\nwith MySQL.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 23 Apr 2008 14:17:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Wed, 2008-04-23 at 16:59 -0400, korry wrote:\n> \n> > > http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n> > \n> > I can see that, the engines from both companies make similar speed\n> > versus reliability trade-offs: http://www.f1technical.net/news/8507 \n> When they fail, they fail fast\n\nI think they have interpreted the flaws of MySQL as something they can\nsell more services with - so, dangerous, difficult to use etc all mean\nadditional $$$. It's certainly what Oracle did for years.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 24 Apr 2008 14:28:41 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Thu, Apr 24, 2008 at 02:28:41PM +0100, Simon Riggs wrote:\n\n> I think they have interpreted the flaws of MySQL as something they can\n> sell more services with - so, dangerous, difficult to use etc all mean\n> additional $$$. It's certainly what Oracle did for years.\n\n_The UNIX Hater's Handbook_ can be interpreted as arguing that the above is\nalso what Sun did for another well-known technology in the past ;-)\n\nA\n", "msg_date": "Thu, 24 Apr 2008 09:52:19 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Apr 24, 2008 at 02:28:41PM +0100, Simon Riggs wrote:\n> \n> > I think they have interpreted the flaws of MySQL as something they can\n> > sell more services with - so, dangerous, difficult to use etc all mean\n> > additional $$$. It's certainly what Oracle did for years.\n> \n> _The UNIX Hater's Handbook_ can be interpreted as arguing that the above is\n> also what Sun did for another well-known technology in the past ;-)\n\nTrue, and that strategy works only when there are few alternatives.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 24 Apr 2008 11:46:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "Justin wrote:\n> http://www.eweek.com/c/a/Database/Sun-Asserts-MySQL-to-Remain-Open-Source/?sp=0&kc=EWKNLINF042308STR1 \n>\n>\n> i like the statement\n> \" Sun will not withhold or close-source any features that would make \n> the MySQL community server less functional for users. \"\n>\n>\n> http://www.eweek.com/c/a/Database/CEO-Calls-MySQLs-the-Ferrari-of-Databases/ \n>\n>\nThe statement I like is:\n\n\"This version now has zero bugs,\" Urlocker told eWEEK.\n\nThough that probably contributes to the immense number of new \"features\" \n. Either that, or their bugzilla installation was running on MyISAM and \nthere was a crash...\n\n-- \nChander Ganesan\nOpen Technology Group, Inc.\nOne Copley Parkway, Suite 210\nMorrisville, NC 27560\n919-463-0999/877-258-8987\nhttp://www.otg-nc.com\n\n", "msg_date": "Fri, 25 Apr 2008 08:43:09 -0400", "msg_from": "Chander Ganesan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Fri, Apr 25, 2008 at 08:43:09AM -0400, Chander Ganesan wrote:\n> \n> \"This version now has zero bugs,\" Urlocker told eWEEK.\n\nThat is pretty amusing, but I also liked this: \"Mickos said innovation will\nmove even faster now that about 100 experienced database engineers from Sun\nwill be working on MySQL development.\"\n\nI guess they don't read _The Mythical Man-Month_ at Sun. \n\nA\n\n", "msg_date": "Fri, 25 Apr 2008 09:01:05 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Fri, Apr 25, 2008 at 9:01 AM, Andrew Sullivan <[email protected]> wrote:\n> On Fri, Apr 25, 2008 at 08:43:09AM -0400, Chander Ganesan wrote:\n> >\n> > \"This version now has zero bugs,\" Urlocker told eWEEK.\n>\n> That is pretty amusing, but I also liked this: \"Mickos said innovation will\n> move even faster now that about 100 experienced database engineers from Sun\n> will be working on MySQL development.\"\n>\n> I guess they don't read _The Mythical Man-Month_ at Sun.\n\nNot to rain on my MySQL/Sun bash parade, but your response is based on\nthe extremely poor assumption that Sun would put 100 developers on a\nsingle project. While Sun has certainly had its share of\nsoftware-related problems in the past, I would hope that they\nunderstand basic software project management. It's much more likely\nthat they will have teams of 6-8 people working on individual\nprojects.\n\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 25 Apr 2008 10:41:24 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Fri, 25 Apr 2008 10:41:24 -0400\n\"Jonah H. Harris\" <[email protected]> wrote:\n\n> > I guess they don't read _The Mythical Man-Month_ at Sun.\n> \n> Not to rain on my MySQL/Sun bash parade, but your response is based on\n> the extremely poor assumption that Sun would put 100 developers on a\n> single project. While Sun has certainly had its share of\n> software-related problems in the past, I would hope that they\n> understand basic software project management. 
It's much more likely\n> that they will have teams of 6-8 people working on individual\n> projects.\n\nI can only assume this response is a joke. :P\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Fri, 25 Apr 2008 07:53:11 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Fri, Apr 25, 2008 at 10:41:24AM -0400, Jonah H. Harris wrote:\n\n> Not to rain on my MySQL/Sun bash parade, but your response is based on\n> the extremely poor assumption that Sun would put 100 developers on a\n> single project. \n\nI was certainly being a little glib, but I do think the original statement\nis fatuous. Adding 100 developers to a single product like MySQL -- or, for\nthat matter, to 20 small projects all working on some feature for MySQL or\neven to 20 small projects all aimed at some sort of application that is\nsupposed to work eith MySQL -- presents a pretty serious management hurdle,\nand one that I don't actually think can be leapt in a single bound. \nTherefore the claim that the sudden addition of 100 developers who know\nabout databases to the MySQL \"stable\" will result in sudden increases in\ninnovation is nonsense. It'll play to the kind of managers who believe in\nthrowing people at a problem, of course -- and I've worked for plenty of\nthem in my life.\n\nThe claim, however, was really part of the bigger claim that the transition\nfrom small-distributed-company to organic-part-of-Sun is over: that was the\nreal message in those comments. This, too, I find incredible. Either it's\na happy face for the public, or else the honeymoon isn't over. Integrating\nsudden infusions of people into a strongly developed (some pronounce that\n\"ossified\") culture like Sun's takes rather more time than has passed so\nfar, no matter how compatible the cultures seem on the surface. Power\nrelationships do not change that fast in human societies, ever. By way of\ncomparison, the most obviously successful absorbtion of this type Sun ever\ndid was StarOffice, and it would be pretty hard to argue that that project\nwas either an unmitigated victory or a fast integration.\n\nA\n\n", "msg_date": "Fri, 25 Apr 2008 11:20:53 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "On Fri, 25 Apr 2008, Andrew Sullivan wrote:\n\n> Therefore the claim that the sudden addition of 100 developers who know\n> about databases to the MySQL \"stable\" will result in sudden increases in\n> innovation is nonsense.\n\nAdding a bunch of new developers to a project involving a code base they \nare unfamiliar with will inevitably reduce short-term reliability. But \nyou can't pay attention to anything said by someone who claims any piece \nof software has zero bugs anyway; they've clearly lost all touch with \nreality and are saying nonsense. 
Zack at MySQL might as well have said \nthat they have magical code fairies who have fixed 5.1 and will continue \nto be on board for future releases.\n\nI think it shows how much MySQL has been feeling the pain from the truly \nshoddy 5.0 release and its associated quality control issues that they are \npublicizing the incredible number of bug fixes they had to do, and flat \nout admitting they weren't happy with it.\n\nAn interesting data point is they advertise a 10-15% gain in DBT2 results. \nI've started thinking about a 2008 update to my \"PostgreSQL vs. MySQL\" \npaper that covers PG 8.3 and MySQL 5.1. I'd love to have a similar \ncomparision using that same benchmark of PG8.2->PG8.3. I recall Heikki \nwas doing lots of tests with DBT2 for EnterpriseDB comparing those two \nreleases like that; anybody know of a good summary I could utilize there? \nI thought that was one of the benchmarks that got a 20-30% speedup on \ngoing to 8.3 because it really took advantage of HOT in particular.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 25 Apr 2008 20:09:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "Greg Smith wrote:\n\n> An interesting data point is they advertise a 10-15% gain in DBT2 \n> results. I've started thinking about a 2008 update to my \"PostgreSQL vs. \n> MySQL\" paper that covers PG 8.3 and MySQL 5.1. I'd love to have a \n> similar comparision using that same benchmark of PG8.2->PG8.3. I recall \n> Heikki was doing lots of tests with DBT2 for EnterpriseDB comparing \n> those two releases like that; anybody know of a good summary I could \n> utilize there? I thought that was one of the benchmarks that got a \n> 20-30% speedup on going to 8.3 because it really took advantage of HOT \n> in particular.\n> \nI did some benchmarks for 8.2 and 8.3. I'll show them at PGCon in a few \nweeks. But I didn't try MySQL. It would be good if someone should do \nsuch a comparison.\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Fri, 25 Apr 2008 21:35:51 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "\"Euler Taveira de Oliveira\" <[email protected]> writes:\n\n>> I recall Heikki was doing lots of tests with DBT2 for EnterpriseDB\n>> comparing those two releases like that; anybody know of a good summary I\n>> could utilize there? I thought that was one of the benchmarks that got a\n>> 20-30% speedup on going to 8.3 because it really took advantage of HOT in\n>> particular.\n\nIf you're doing large TPC-C runs then the phantom-command-id and packed\nvarlena changes give you about 9% space savings which translates surprisingly\nnicely into about 9% TPM increase. (This is for a specific schema, if you\ndumbify your schema with char(1)s and numerics you could see a bigger\ndifference)\n\nHOT has a *huge* effect on how long your can run the benchmark before\nperformance starts to droop. 
However the minimum run time for TPCC is only 2\nhours and for large runs that's not enough for vacuum-related issues to kick\nin.\n\nAlso, smoothed checkpoints have a *huge* effect but TPC-C is based on 95th\npercentile response times and the checkpoints only affect about 1% of the\ntransaction response times.\n\nI think TPC-E will make both of these major improvements much more important.\nI suspect it would be hard to get 8.2 to even pass TPC-E due to the checkpoint\ndropouts.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Sat, 26 Apr 2008 02:21:21 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun Talks about MySQL" }, { "msg_contents": "(thread crossed over to pgsql-performance, where it belongs, from \npgsql-advocacy)\n\nGreg,\n\n> I think TPC-E will make both of these major improvements much more important.\n> I suspect it would be hard to get 8.2 to even pass TPC-E due to the checkpoint\n> dropouts.\n> \n\nYou'd be surprised, then. We're still horribly, horribly lock-bound on \nTPC-E; on anything over 4 cores lock resolution chokes us to death. See \nJignesh's and Paul's various posts about attempts to fix this.\n\nWithout resolving the locking issues, HOT and checkpoint doesn't have \nmuch effect on TPCE.\n\n--Josh\n", "msg_date": "Sun, 27 Apr 2008 10:54:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "\n\"Josh Berkus\" <[email protected]> writes:\n\n>> I think TPC-E will make both of these major improvements much more important.\n>> I suspect it would be hard to get 8.2 to even pass TPC-E due to the checkpoint\n>> dropouts.\n>\n> You'd be surprised, then. We're still horribly, horribly lock-bound on TPC-E;\n> on anything over 4 cores lock resolution chokes us to death. See Jignesh's and\n> Paul's various posts about attempts to fix this.\n\nMost of those posts have been about scalability issues with extremely large\nnumbers of sessions. Those are interesting too and they may be limiting our\nresults in benchmarks which depend on such a configuration (which I don't\nthink includes TPC-E, but the benchmark Jignesh has been writing about is some\nJava application benchmark which may be such a beast) but they don't directly\nrelate to whether we're \"passing\" TPC-E.\n\nWhat I was referring to by \"passing\" TPC-E was the criteria for a conformant\nbenchmark run. TPC-C has iirc, only two relevant criteria: \"95th percentile\nresponse time < 5s\" and \"average response time < 95th percentile response\ntime\". You can pass those even if 1 transaction in 20 takes 10-20s which is\nmore than enough to cover checkpoints and other random sources of inconsistent\nperformance.\n\nTPC-E has more stringent requirements which explicitly require very consistent\nresponse times and I doubt 8.2 would have been able to pass them. So the\nperformance limiting factors whether they be i/o, cpu, lock contention, or\nwhatever don't even come into play. We wouldn't have any conformant results\nwhatsoever, not even low values limited by contention. 
8.3 however should be\nin a better position to pass.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Mon, 28 Apr 2008 05:17:08 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "Gregory Stark wrote:\n> TPC-E has more stringent requirements which explicitly require very consistent\n> response times and I doubt 8.2 would have been able to pass them.\n\nSure it would. Just not for a very large scale factor ;-).\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 28 Apr 2008 11:59:48 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "Greg,\n\n> What I was referring to by \"passing\" TPC-E was the criteria for a conformant\n> benchmark run. TPC-C has iirc, only two relevant criteria: \"95th percentile\n> response time < 5s\" and \"average response time < 95th percentile response\n> time\". You can pass those even if 1 transaction in 20 takes 10-20s which is\n> more than enough to cover checkpoints and other random sources of inconsistent\n> performance.\n\nWe can do this now. I'm unhappy because we're at about 1/4 of Oracle \nperformance, but we certainly pass -- even with 8.2.\n\n--Josh\n", "msg_date": "Mon, 28 Apr 2008 09:07:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n\n> Greg,\n>\n>> What I was referring to by \"passing\" TPC-E was the criteria for a conformant\n>> benchmark run. TPC-C has iirc, only two relevant criteria: \"95th percentile\n>> response time < 5s\" and \"average response time < 95th percentile response\n>> time\". You can pass those even if 1 transaction in 20 takes 10-20s which is\n>> more than enough to cover checkpoints and other random sources of inconsistent\n>> performance.\n>\n> We can do this now. I'm unhappy because we're at about 1/4 of Oracle\n> performance, but we certainly pass -- even with 8.2.\n\nWe certainly can pass TPC-C. I'm curious what you mean by 1/4 though? On\nsimilar hardware? Or the maximum we can scale to is 1/4 as large as Oracle?\nCan you point me to the actual benchmark runs you're referring to?\n\nBut I just made an off-hand comment that I doubt 8.2 could pass TPC-E which\nhas much more stringent requirements. It has requirements like: \n\n the throughput computed over any period of one hour, sliding over the Steady\n State by increments of ten minutes, varies from the Reported Throughput by no\n more than 2%\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Mon, 28 Apr 2008 14:40:25 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "On Mon, 28 Apr 2008 14:40:25 -0400\nGregory Stark <[email protected]> wrote:\n\n\n> We certainly can pass TPC-C. I'm curious what you mean by 1/4 though?\n> On similar hardware? Or the maximum we can scale to is 1/4 as large\n> as Oracle? 
Can you point me to the actual benchmark runs you're\n> referring to?\n\nI would be curious as well considering there has been zero evidence\nprovided to make such a statement. I am not saying it isn't true, it\nwouldn't be surprising to me if Oracle outperformed PostgreSQL in TPC-C\nbut I would sure like to see in general how wel we do (or don't).\n\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Mon, 28 Apr 2008 13:55:27 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarks WAS: Sun Talks about MySQL" }, { "msg_contents": "\nJoshua D. Drake wrote:\n> On Mon, 28 Apr 2008 14:40:25 -0400\n> Gregory Stark <[email protected]> wrote:\n>\n>\n> \n>> We certainly can pass TPC-C. I'm curious what you mean by 1/4 though?\n>> On similar hardware? Or the maximum we can scale to is 1/4 as large\n>> as Oracle? Can you point me to the actual benchmark runs you're\n>> referring to?\n>> \n>\n> I would be curious as well considering there has been zero evidence\n> provided to make such a statement. I am not saying it isn't true, it\n> wouldn't be surprising to me if Oracle outperformed PostgreSQL in TPC-C\n> but I would sure like to see in general how wel we do (or don't).\n>\n>\n> Sincerely,\n>\n> Joshua D. Drake\n> \n\nI am sorry but I am far from catching my emails:\n\nBest thing is to work with TPC-E benchmarks involving the community. \n(TPC-C requirements is way too high on storage and everybody seems to be \ngetting on the TPC-E bandwagon slowly.)\n\nWhere can I get the latest DBT5 (TPC-E) kit ? Using the kit should allow \nme to recreate setups which can then be made available for various \nPostgreSQL Performance engineers to look at it.\n\n\n\nRegards,\nJignesh\n\n\n\n\n", "msg_date": "Thu, 01 May 2008 10:21:10 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmarks WAS: Sun Talks about MySQL" } ]
[ { "msg_contents": "I found strange issue in very simple query. Statistics for all columns\nis on the level 1000 but I also tried other levels.\n\ncreate table g (\n id bigint primary key,\n isgroup boolean not null);\n\ncreate table a (\n groupid bigint references g(id),\n id bigint,\n unique(id, groupid));\n\nanalyze g;\nanalyze a;\n\nselect count(*) from a\n294\n\nselect count(*) from g\n320\n\nexplain analyze\nselect *\nfrom g\n join a on a.groupid = g.id\nwhere g.isgroup\n\nHash Join (cost=5.35..11.50 rows=11 width=25) (actual time=0.261..1.755\nrows=294 loops=1)\n Hash Cond: (a.groupid = g.id)\n -> Seq Scan on a (cost=0.00..4.94 rows=294 width=16) (actual\ntime=0.047..0.482 rows=294 loops=1)\n -> Hash (cost=5.20..5.20 rows=12 width=9) (actual time=0.164..0.164\nrows=12 loops=1)\n -> Seq Scan on g (cost=0.00..5.20 rows=12 width=9) (actual\ntime=0.042..0.136 rows=12 loops=1)\n Filter: isgroup\nTotal runtime: 2.225 ms\n\nAnd this is more interesting:\nexplain analyze\nselect *\nfrom g\n join a on a.groupid = g.id\nwhere not g.isgroup\n\nHash Join (cost=9.05..17.92 rows=283 width=25) (actual\ntime=2.038..2.038 rows=0 loops=1)\n Hash Cond: (a.groupid = g.id)\n -> Seq Scan on a (cost=0.00..4.94 rows=294 width=16) (actual\ntime=0.046..0.478 rows=294 loops=1)\n -> Hash (cost=5.20..5.20 rows=308 width=9) (actual time=1.090..1.090\nrows=308 loops=1)\n -> Seq Scan on g (cost=0.00..5.20 rows=308 width=9) (actual\ntime=0.038..0.557 rows=308 loops=1)\n Filter: (NOT isgroup)\nTotal runtime: 2.126 ms\n\nPostgreSQL 8.3\nThese queries are part of big query and optimizer put them on the leaf\nof query tree, so rows miscount causes a real problem.\n\nStatistics for table a:\nid\n--\nhistogram_bounds: {1,40,73,111,143,174,204,484,683,715,753}\ncorrelation: 0.796828\n\ngroupid\n-------\nn_distinct: 12\nmost_common_vals: {96,98,21,82,114,131,48,44,173,682,752}\nmost_common_freqs:\n{0.265306,0.166667,0.163265,0.136054,0.0884354,0.0782313,0.0714286,0.00680272,0.00680272,0.00680272,0.00680272}\ncorrelation: 0.366704\n\nfor table g:\nid\n--\nhistogram_bounds: {1,32,64,101,134,166,199,451,677,714,753}\ncorrelation: 1\n\nisgroup\n-------\nn_distinct: 2\nmost_common_freqs: {0.9625,0.0375}\ncorrelation: 0.904198\n\n\n", "msg_date": "Thu, 24 Apr 2008 10:14:54 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer's issue" }, { "msg_contents": "Vlad Arkhipov wrote:\r\n> I found strange issue in very simple query.\r\n\r\nYou forgot to mention what your problem is.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Thu, 24 Apr 2008 10:47:32 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer's issue" }, { "msg_contents": "Albe Laurenz пишет:\n> Vlad Arkhipov wrote:\n> \n>> I found strange issue in very simple query.\n>> \n>\n> You forgot to mention what your problem is.\n>\n> Yours,\n> Laurenz Albe\n> \nIt was written below in my first post:\n\"These queries are part of big query and optimizer put them on the leaf\nof query tree, so rows miscount causes a real problem. \"\nactual rows count for the first query is 294, estimate - 11; for the\nsecond -- 283 and 0. 
Postgres tries to use nested loops when it should\nuse merge/hash join and vice versa.\n\n", "msg_date": "Thu, 24 Apr 2008 18:47:37 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer's issue" }, { "msg_contents": "On Thu, 24 Apr 2008, Vlad Arkhipov wrote:\n> It was written below in my first post:\n> \"These queries are part of big query and optimizer put them on the leaf\n> of query tree, so rows miscount causes a real problem. \"\n> actual rows count for the first query is 294, estimate - 11; for the\n> second -- 283 and 0. Postgres tries to use nested loops when it should\n> use merge/hash join and vice versa.\n\nThe queries are taking less than three milliseconds. You only have three \nhundred rows in each table. The precise algorithm Postgres uses probably \nwon't make much of a difference. Have you tried turning off the relevant \njoin plans to see how long Postgres takes with your desired plan?\n\nWhen you have millions of rows, the algorithm matters a lot more.\n\nMatthew\n\n-- \nRichards' Laws of Data Security:\n 1. Don't buy a computer.\n 2. If you must buy a computer, don't turn it on.\n", "msg_date": "Thu, 24 Apr 2008 11:50:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer's issue" }, { "msg_contents": "On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[email protected]> \nwrote:\n\n> I found strange issue in very simple query. Statistics for all columns\n> is on the level 1000 but I also tried other levels.\n>\n> create table g (\n> id bigint primary key,\n> isgroup boolean not null);\n>\n> create table a (\n> groupid bigint references g(id),\n> id bigint,\n> unique(id, groupid));\n>\n> analyze g;\n> analyze a;\n>\n> select count(*) from a\n> 294\n>\n> select count(*) from g\n> 320\n>\n> explain analyze\n> select *\n> from g\n> join a on a.groupid = g.id\n> where g.isgroup\n>\n> Hash Join (cost=5.35..11.50 rows=11 width=25) (actual time=0.261..1.755\n> rows=294 loops=1)\n> Hash Cond: (a.groupid = g.id)\n> -> Seq Scan on a (cost=0.00..4.94 rows=294 width=16) (actual\n> time=0.047..0.482 rows=294 loops=1)\n> -> Hash (cost=5.20..5.20 rows=12 width=9) (actual time=0.164..0.164\n> rows=12 loops=1)\n> -> Seq Scan on g (cost=0.00..5.20 rows=12 width=9) (actual\n> time=0.042..0.136 rows=12 loops=1)\n> Filter: isgroup\n> Total runtime: 2.225 ms\n\n\tYou should really put an EXPLAIN ANALYZE of your big query.\n\n\tThis little query plan seems OK to me.\n\tTwo very small tables, ok, hash'em, it's the best.\n\tNow, of course if it is repeated for every row in your JOIN, you have a \nproblem.\n\tThe question is, why is it repeated for every row ?\n\tThis cannot be answered without seeing the whole query.\n\n\tAnother question would be, is there a way to structure the tables \ndifferently ?\n\tAgain, this cannot be answered without seeing the whole query, and some \nexplanation about what the data & fields mean.\n\n\tPlease provide more information...\n\n\n", "msg_date": "Thu, 24 Apr 2008 18:24:50 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer's issue" }, { "msg_contents": "PFC пишет:\n> On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov \n> <[email protected]> wrote:\n>\n>> I found strange issue in very simple query. 
Statistics for all columns\n>> is on the level 1000 but I also tried other levels.\n>>\n>> create table g (\n>> id bigint primary key,\n>> isgroup boolean not null);\n>>\n>> create table a (\n>> groupid bigint references g(id),\n>> id bigint,\n>> unique(id, groupid));\n>>\n>> analyze g;\n>> analyze a;\n>>\n>> select count(*) from a\n>> 294\n>>\n>> select count(*) from g\n>> 320\n>>\n>> explain analyze\n>> select *\n>> from g\n>> join a on a.groupid = g.id\n>> where g.isgroup\n>>\n>> Hash Join (cost=5.35..11.50 rows=11 width=25) (actual time=0.261..1.755\n>> rows=294 loops=1)\n>> Hash Cond: (a.groupid = g.id)\n>> -> Seq Scan on a (cost=0.00..4.94 rows=294 width=16) (actual\n>> time=0.047..0.482 rows=294 loops=1)\n>> -> Hash (cost=5.20..5.20 rows=12 width=9) (actual time=0.164..0.164\n>> rows=12 loops=1)\n>> -> Seq Scan on g (cost=0.00..5.20 rows=12 width=9) (actual\n>> time=0.042..0.136 rows=12 loops=1)\n>> Filter: isgroup\n>> Total runtime: 2.225 ms\n>\n> You should really put an EXPLAIN ANALYZE of your big query.\n>\n> This little query plan seems OK to me.\n> Two very small tables, ok, hash'em, it's the best.\n> Now, of course if it is repeated for every row in your JOIN, you \n> have a problem.\n> The question is, why is it repeated for every row ?\n> This cannot be answered without seeing the whole query.\n>\n> Another question would be, is there a way to structure the tables \n> differently ?\n> Again, this cannot be answered without seeing the whole query, and \n> some explanation about what the data & fields mean.\n>\n> Please provide more information...\n>\n>\n>\nI redesigned tables structure and the query seems to be become faster. \nYou was right, the problem was not in this query.\n\n", "msg_date": "Mon, 28 Apr 2008 10:51:38 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer's issue" } ]
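A short sketch of the follow-up checks the thread above converges on, using only the table and column names already shown there (g, a, g.isgroup, a.groupid). The point is that g.isgroup and a.groupid are not statistically independent, which is what the 11-row and 283-row estimates implicitly assume; the statistics target simply repeats the level the first post says it already uses, so it is shown for completeness rather than as a proposed fix:

  -- actual split of referenced rows across the filter, to compare with the planner's estimates
  SELECT g.isgroup, count(*)
  FROM g
    JOIN a ON a.groupid = g.id
  GROUP BY g.isgroup;

  -- per-column form of the statistics level mentioned in the first post, then re-check the plan
  ALTER TABLE g ALTER COLUMN isgroup SET STATISTICS 1000;
  ALTER TABLE a ALTER COLUMN groupid SET STATISTICS 1000;
  ANALYZE g;
  ANALYZE a;

  EXPLAIN ANALYZE
  SELECT *
  FROM g
    JOIN a ON a.groupid = g.id
  WHERE g.isgroup;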
[ { "msg_contents": "I've tried to communicate to the list admin address about this, so far\r\nwithout any reaction.\r\nSorry to waste bandwith here, but I don't know where else to turn:\r\n\r\nWhenever I post to the -performance list, I get spammed by a\r\nchallenge-response bot from [[email protected]]:\r\n\r\n> The email message sent to [email protected] , [email protected] requires a confirmation to be delivered. Please, answer this email informing the characters that you see in the image below. \r\n\r\nCould somebody remove the latter address from the list, please?\r\n\r\nThanks,\r\nLaurenz Albe\r\n", "msg_date": "Thu, 24 Apr 2008 10:56:07 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": true, "msg_subject": "off-topic: SPAM" }, { "msg_contents": "\"Albe Laurenz\" <[email protected]> writes:\n\n>> The email message sent to [email protected] ,\n>> [email protected] requires a confirmation to be delivered.\n>> Please, answer this email informing the characters that you see in the\n>> image below.\n>\n> Could somebody remove the latter address from the list, please?\n\nUnfortunately that's not the address causing the problem. This is a\nparticularly stupid spam filter which is just grabbing that address from your\nTo line.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Thu, 24 Apr 2008 07:31:42 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\nGeez, I thought you were talking about recent stuff ... that is 4 month \nold stuff ...\n\nBut, I thought we had already dealt with the .br domains in the past, but \nactually blocking them ... let me look at it when I get home and report \nback, as I was certain we did something in the past because of the high \nnumber of 'confirmation' messages we were getting from a domain ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n\nOn Thu, 24 Apr 2008, Alvaro Herrera wrote:\n\n> Marc G. Fournier escribió:\n>>\n>> where are the reports going to? am I missing off a list somewhere?\n>>\n>> of course, your response really didn't enlighten me any ... what is the\n>> problem that is being seen / reported ?\n>\n> The problem is that any time some posts to pgsql-performance he gets a\n> bounce from a postmaster in infotecnica.com.br domain.\n>\n> It has been reported in several lists:\n>\n> http://archives.postgresql.org/pgsql-www/2007-12/msg00203.php\n> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00697.php\n> http://archives.postgresql.org/pgsql-performance/2007-12/msg00126.php\n>\n>\n>\n> -- \n> Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n> We take risks not to escape from life, but to prevent life escaping from us.\n>", "msg_date": "Thu, 24 Apr 2008 15:16:35 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\n\nOn Thu, 24 Apr 2008, Gregory Stark wrote:\n\n> \"Alvaro Herrera\" <[email protected]> writes:\n>\n>> Marc G. Fournier escribió:\n>>>\n>>> where are the reports going to? am I missing off a list somewhere?\n>>>\n>>> of course, your response really didn't enlighten me any ... 
what is the\n>>> problem that is being seen / reported ?\n>>\n>> The problem is that any time some posts to pgsql-performance he gets a\n>> bounce from a postmaster in infotecnica.com.br domain.\n>>\n>> It has been reported in several lists:\n>>\n>> http://archives.postgresql.org/pgsql-www/2007-12/msg00203.php\n>> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00697.php\n>> http://archives.postgresql.org/pgsql-performance/2007-12/msg00126.php\n>\n> The only way we're going to solve this is to send a personalized email to\n> every subscriber of pgsql-performance. We might be able to get away with just\n> doing brazilian subscribers first and hope it's one of those.\n\nThat would only be 42 email addresses, so not *too* painful, but if their \nemail requires a 'challenge-response' to send to them, how did they get \nsubscribed in the first place, as our subscribe method is \n'challenge-response' also, which means they have to receive the email we \nsend to them automaticalloy in the first place ...\n\nSo, are these a result of forwarding, vs direct subscription?", "msg_date": "Thu, 24 Apr 2008 15:19:57 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "Marc G. Fournier wrote:\n>\n> Geez, I thought you were talking about recent stuff ... that is 4 month \n> old stuff ...\n\nThat's just because we got bored complaining 4 months ago. The problem\nhas persisted.\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 24 Apr 2008 14:22:36 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "Marc G. Fournier wrote:\n\n> So, are these a result of forwarding, vs direct subscription?\n\nWe don't know. Normally that would be up to the owner to figure out,\nbut Greg here has offered to take matters on his own hands.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 24 Apr 2008 14:24:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\n'k, just checked and yes, we have blocked one Brazil domain in the past \nbecause of this issue:\n\nsubscribe\ndeny, reason=\"uol.com.br requires non-spam confirmation .. \"\n/@uol.com.br/i || /[email protected]/i\n\nSo we could add similar ... but, the question comes back to if they have a \nspam confirmation required, how is email getting to them in order to \nsubscribe them in the first place? The only way I can think of is that \nthis is a feature that has been added *since* they registered ...\n\nIf someone wants to write up a nice form letter, I can send out an \nindividual message to each user in the *.br domain, and for each that \nbounce back with a 'confirmation required', remove them ... but I'm not \nsure if it makes much sense to go through the confirm and see if they can \nwork around the issue ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n\nOn Thu, 24 Apr 2008, Marc G. Fournier wrote:\n\n>\n>\n> On Thu, 24 Apr 2008, Gregory Stark wrote:\n>\n>> \"Alvaro Herrera\" <[email protected]> writes:\n>> \n>>> Marc G. Fournier escribi�:\n>>>> \n>>>> where are the reports going to? 
am I missing off a list somewhere?\n>>>> \n>>>> of course, your response really didn't enlighten me any ... what is the\n>>>> problem that is being seen / reported ?\n>>> \n>>> The problem is that any time some posts to pgsql-performance he gets a\n>>> bounce from a postmaster in infotecnica.com.br domain.\n>>> \n>>> It has been reported in several lists:\n>>> \n>>> http://archives.postgresql.org/pgsql-www/2007-12/msg00203.php\n>>> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00697.php\n>>> http://archives.postgresql.org/pgsql-performance/2007-12/msg00126.php\n>> \n>> The only way we're going to solve this is to send a personalized email to\n>> every subscriber of pgsql-performance. We might be able to get away with \n>> just\n>> doing brazilian subscribers first and hope it's one of those.\n>\n> That would only be 42 email addresses, so not *too* painful, but if their \n> email requires a 'challenge-response' to send to them, how did they get \n> subscribed in the first place, as our subscribe method is \n> 'challenge-response' also, which means they have to receive the email we send \n> to them automaticalloy in the first place ...\n>\n> So, are these a result of forwarding, vs direct subscription?", "msg_date": "Thu, 24 Apr 2008 19:00:11 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n\n> 'k, just checked and yes, we have blocked one Brazil domain in the past because\n> of this issue:\n>\n> subscribe\n> deny, reason=\"uol.com.br requires non-spam confirmation .. \"\n> /@uol.com.br/i || /[email protected]/i\n>\n> So we could add similar ... but, the question comes back to if they have a spam\n> confirmation required, how is email getting to them in order to subscribe them\n> in the first place? The only way I can think of is that this is a feature that\n> has been added *since* they registered ...\n\nWhat does uol.com.br have to do with infotecnica.com.br? Are you sure this\nwasn't a similar but unrelated case?\n\nAnother possible answer to the question is that someone was frustrated by the\nban and subscribed at a different address then forwarded the mail.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Thu, 24 Apr 2008 18:41:09 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n- --On Thursday, April 24, 2008 18:41:09 -0400 Gregory Stark \n<[email protected]> wrote:\n\n> \"Marc G. Fournier\" <[email protected]> writes:\n>\n>> 'k, just checked and yes, we have blocked one Brazil domain in the past\n>> because of this issue:\n>>\n>> subscribe\n>> deny, reason=\"uol.com.br requires non-spam confirmation .. \"\n>> /@uol.com.br/i || /[email protected]/i\n>>\n>> So we could add similar ... but, the question comes back to if they have a\n>> spam confirmation required, how is email getting to them in order to\n>> subscribe them in the first place? The only way I can think of is that this\n>> is a feature that has been added *since* they registered ...\n>\n> What does uol.com.br have to do with infotecnica.com.br? 
Are you sure this\n> wasn't a similar but unrelated case?\n\nRelated only in so far as we have, in the past, banned a domain due to the \n'spam confirmation emails' issue ...\n\n> Another possible answer to the question is that someone was frustrated by the\n> ban and subscribed at a different address then forwarded the mail.\n\nThat's kind of what I'm wondering, which makes it even harder to track down ...\n\nBut, as I've said, if someone wants to write a good, well worded email, I have \nno problem with doing a quick for loop to send out one message per user, maybe \nwith a counter in the subject line, and see what bounces back for those .br \naddresses *shrug*\n\n- -- \nMarc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.8 (FreeBSD)\n\niEYEARECAAYFAkgRL10ACgkQ4QvfyHIvDvP2tACg7eeJf0vsVyxVCnI2tPCAzIZq\nSJAAnitMKKnC2rb7MGU1aAyBThL17rS5\n=2qip\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 24 Apr 2008 22:09:49 -0300", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n- --On Thursday, April 24, 2008 14:24:04 -0400 Alvaro Herrera \n<[email protected]> wrote:\n\n> Marc G. Fournier wrote:\n>\n>> So, are these a result of forwarding, vs direct subscription?\n>\n> We don't know. Normally that would be up to the owner to figure out,\n\nAh, that would be Josh Berkus ...\n\n- -- \nMarc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.8 (FreeBSD)\n\niEYEARECAAYFAkgRMogACgkQ4QvfyHIvDvMwAgCfcwA40Fn5Am1NRHcnkw0STEQJ\nfqMAoIck2wI6/ZMMDMh8D7gqrk49Wi0I\n=L+zb\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 24 Apr 2008 22:23:20 -0300", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "Marc G. Fournier wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> \n> - --On Thursday, April 24, 2008 14:24:04 -0400 Alvaro Herrera \n> <[email protected]> wrote:\n> \n>> Marc G. Fournier wrote:\n>>\n>>> So, are these a result of forwarding, vs direct subscription?\n>> We don't know. Normally that would be up to the owner to figure out,\n> \n> Ah, that would be Josh Berkus ...\n\nWhich lists are we talking about?\n\nJoshua D. Drake\n\n", "msg_date": "Thu, 24 Apr 2008 18:29:26 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\n\npgsql-performance in this case ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n\nOn Thu, 24 Apr 2008, Joshua D. Drake wrote:\n\n> Marc G. Fournier wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>> \n>> \n>> \n>> - --On Thursday, April 24, 2008 14:24:04 -0400 Alvaro Herrera \n>> <[email protected]> wrote:\n>> \n>>> Marc G. Fournier wrote:\n>>> \n>>>> So, are these a result of forwarding, vs direct subscription?\n>>> We don't know. Normally that would be up to the owner to figure out,\n>> \n>> Ah, that would be Josh Berkus ...\n>\n> Which lists are we talking about?\n>\n> Joshua D. 
Drake\n>\n>\n", "msg_date": "Thu, 24 Apr 2008 22:33:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "Marc G. Fournier wrote:\n\n> - --On Thursday, April 24, 2008 14:24:04 -0400 Alvaro Herrera \n> <[email protected]> wrote:\n> \n> > Marc G. Fournier wrote:\n> >\n> >> So, are these a result of forwarding, vs direct subscription?\n> >\n> > We don't know. Normally that would be up to the owner to figure out,\n> \n> Ah, that would be Josh Berkus ...\n\nMay I suggest you just give the subscriber list to Greg Stark, like he\nasked at the start of the thread? He might be able to find the guilty.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 24 Apr 2008 21:52:13 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "Alvaro Herrera wrote:\n\n>>>> So, are these a result of forwarding, vs direct subscription?\n>>> We don't know. Normally that would be up to the owner to figure out,\n>> Ah, that would be Josh Berkus ...\n> \n> May I suggest you just give the subscriber list to Greg Stark, like he\n> asked at the start of the thread? He might be able to find the guilty.\n> \n\nMay I also suggest that we consider giving ownership to someone with \nmore cycles?\n\nJoshua D. Drake\n", "msg_date": "Thu, 24 Apr 2008 18:53:45 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n- --On Thursday, April 24, 2008 21:52:13 -0400 Alvaro Herrera \n<[email protected]> wrote:\n\n> Marc G. Fournier wrote:\n>\n>> - --On Thursday, April 24, 2008 14:24:04 -0400 Alvaro Herrera\n>> <[email protected]> wrote:\n>>\n>> > Marc G. Fournier wrote:\n>> >\n>> >> So, are these a result of forwarding, vs direct subscription?\n>> >\n>> > We don't know. Normally that would be up to the owner to figure out,\n>>\n>> Ah, that would be Josh Berkus ...\n>\n> May I suggest you just give the subscriber list to Greg Stark, like he\n> asked at the start of the thread? He might be able to find the guilty.\n\nTalk to Josh Berkus, he's the owner of that list ...\n\n- -- \nMarc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.8 (FreeBSD)\n\niEYEARECAAYFAkgRQ7AACgkQ4QvfyHIvDvN0rQCfXCTLfxW6U9Nf+CEdxDZYBPYT\neKkAnjbNw0nK3uIZG5FlWlyy/6cBzAyy\n=KpN7\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 24 Apr 2008 23:36:32 -0300", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n- --On Thursday, April 24, 2008 18:53:45 -0700 \"Joshua D. Drake\" \n<[email protected]> wrote:\n\n> Alvaro Herrera wrote:\n>\n>>>>> So, are these a result of forwarding, vs direct subscription?\n>>>> We don't know. Normally that would be up to the owner to figure out,\n>>> Ah, that would be Josh Berkus ...\n>>\n>> May I suggest you just give the subscriber list to Greg Stark, like he\n>> asked at the start of the thread? 
He might be able to find the guilty.\n>>\n>\n> May I also suggest that we consider giving ownership to someone with more\n> cycles?\n\nThat works too ... talk to Josh Berkus, he's the owner of the list :)\n\n- -- \nMarc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.8 (FreeBSD)\n\niEYEARECAAYFAkgRQ8oACgkQ4QvfyHIvDvPMnQCeNF1pF+/hn9qq7fKR2/A5LnhV\n31QAni+Li7FOySqDyZ/Poq10pOxjbjhx\n=qBZp\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 24 Apr 2008 23:36:58 -0300", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n\n> But, as I've said, if someone wants to write a good, well worded email, I have \n> no problem with doing a quick for loop to send out one message per user, maybe \n> with a counter in the subject line, and see what bounces back for those .br \n> addresses *shrug*\n\nAs I said I was just going to send \"Test message, please ignore\" or something\nlike that.\n\nNote that you need to put the individual subscriber in the To header. The\nworst problem with this amazingly brain-dead spam filter is that it doesn't\ngive you back any other info about the original email aside from the From\nheader and the To header.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Fri, 25 Apr 2008 01:09:26 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: off-topic: SPAM" } ]
[ { "msg_contents": "I have a question about index us and IO and am seeking advice.\n\nWe are running postgres 8.2. We have two big big tables. Our ~600,000,000\nrow table is changed very infrequently and is on a 12 disk software raid-6\nfor historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X\nFusion-MPT SAS Our ~50,000,000 row staging table is on a 12 disk hardware\nraid-10 using a Dell PowerEdge Expandable RAID controller 5. All of the\nrows in the staging table are changed at least once and then deleted and\nrecreated in the bigger table. All of the staging table's indexes are on\nthe raid-10. The postgres data directory itself is on the raid-6. I think\nall the disks are SATA 10Ks. The setup is kind of a beast.\n\nSo my disk IO and index question. When I issue a query on the big table\nlike this:\nSELECT column, count(*)\nFROM bigtable\nGROUP BY column\nORDER BY count DESC\nWhen I run dstat to see my disk IO I see the software raid-6 consistently\nholding over 70M/sec. This is fine with me, but I generally don't like to\ndo queries that table scan 600,000,000 rows. So I do:\nSELECT column, count(*)\nFROM bigtable\nWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESC\nWhen I run dstat I see only around 2M/sec and it is not consistent at all.\n\nSo my question is, why do I see such low IO load on the index scan version?\nIf I could tweak some setting to make more aggressive use of IO, would it\nactually make the query faster? The field I'm scanning has a .960858\ncorrelation, but I haven't vacuumed since importing any of the data that I'm\nscanning, though the correlation should remain very high. When I do a\nsimilar set of queries on the hardware raid I see similar performance\nexcept the numbers are both more than doubled.\n\nHere is the explain output for the queries:\nSELECT column, count(*)\nFROM bigtable\nGROUP BY column\nORDER BY count DESC\n\"Sort (cost=74404440.58..74404444.53 rows=1581 width=10)\"\n\" Sort Key: count(*)\"\n\" -> HashAggregate (cost=74404336.81..74404356.58 rows=1581 width=10)\"\n\" -> Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921\nwidth=10)\"\n---------------\nSELECT column, count(*)\nFROM bigtable\nWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESC\n\"Sort (cost=16948.80..16948.81 rows=1 width=10)\"\n\" Sort Key: count(*)\"\n\" -> HashAggregate (cost=16948.78..16948.79 rows=1 width=10)\"\n\" -> Index Scan using date_idx on bigtable (cost=0.00..16652.77\nrows=59201 width=10)\"\n\" Index Cond: (date > '2008-04-21 00:00:00'::timestamp without\ntime zone)\"\n\nSo now the asking for advice part. I have two questions:\nWhat is the fastest way to copy data from the smaller table to the larger\ntable?\n\nWe plan to rearrange the setup when we move to Postgres 8.3. We'll probably\nmove all the storage over to a SAN and slice the larger table into monthly\nor weekly tables. Can someone point me to a good page on partitioning? My\ngut tells me it should be better, but I'd like to learn more about why.\nDoes anyone have experience migrating large databases to a SAN? I\nunderstand that it'll give me better fail over capabilities so long as the\nSAN itself doesn't go out, but are we going to be sacrificing performance\nfor this? That doesn't even mention the cost....\n\nThanks so much for reading through all this,\n\n--Nik\n\nI have a question about index us and IO and am seeking advice.We are running postgres 8.2.  We have two big big tables.  
Our ~600,000,000 row table is changed very infrequently and is on a 12 disk software raid-6 for historical reasons using an  LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS  Our ~50,000,000 row staging table is on a 12 disk hardware raid-10 using a Dell PowerEdge Expandable RAID controller 5.  All of the rows in the staging table are changed at least once and then deleted and recreated in the bigger table.  All of the staging table's indexes are on the raid-10.  The postgres data directory itself is on the raid-6.  I think all the disks are SATA 10Ks.  The setup is kind of a beast.\nSo my disk IO and index question.  When I issue a query on the big table like this:SELECT    column, count(*)FROM    bigtableGROUP BY columnORDER BY count DESCWhen I run dstat to see my disk IO I see the software raid-6 consistently holding over 70M/sec.  This is fine with me, but I generally don't like to do queries that table scan 600,000,000 rows.  So I do:\nSELECT    column, count(*)\nFROM    bigtableWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESCWhen I run dstat I see only around 2M/sec and it is not consistent at all.So my question is, why do I see such low IO load on the index scan version?  If I could tweak some setting to make more aggressive use of IO, would it actually make the query faster?  The field I'm scanning has a .960858 correlation, but I haven't vacuumed since importing any of the data that I'm scanning, though the correlation should remain very high.  When I do a similar set of queries on the hardware raid I see similar performance except  the numbers are both more than doubled.\nHere is the explain output for the queries:SELECT    column, count(*)FROM    bigtableGROUP BY columnORDER BY count DESC\n\"Sort  (cost=74404440.58..74404444.53 rows=1581 width=10)\"\"  Sort Key: count(*)\"\"  ->  HashAggregate  (cost=74404336.81..74404356.58 rows=1581 width=10)\"\"        ->  Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921 width=10)\"\n---------------SELECT    column, count(*)\nFROM    bigtableWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESC\"Sort  (cost=16948.80..16948.81 rows=1 width=10)\"\"  Sort Key: count(*)\"\"  ->  HashAggregate  (cost=16948.78..16948.79 rows=1 width=10)\"\"        ->  Index Scan using date_idx on bigtable (cost=0.00..16652.77 rows=59201 width=10)\"\n\"              Index Cond: (date > '2008-04-21 00:00:00'::timestamp without time zone)\"So now the asking for advice part.  I have two questions:What is the fastest way to copy data from the smaller table to the larger table?\nWe plan to rearrange the setup when we move to Postgres 8.3.  We'll probably move all the storage over to a SAN and slice the larger table into monthly or weekly tables.  Can someone point me to a good page on partitioning?  My gut tells me it should be better, but I'd like to learn more about why.\nDoes anyone have experience migrating large databases to a SAN?  I understand that it'll give me better fail over capabilities so long as the SAN itself doesn't go out, but are we going to be sacrificing performance for this?  
That doesn't even mention the cost....\nThanks so much for reading through all this,--Nik", "msg_date": "Thu, 24 Apr 2008 09:59:20 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about disk IO an index use and seeking advice" }, { "msg_contents": "On Thu, 24 Apr 2008, Nikolas Everett wrote:\n> The setup is kind of a beast.\n\nNo kidding.\n\n> When I run dstat I see only around 2M/sec and it is not consistent at all.\n\nWell, it is having to seek over the disc a little. Firstly, your table may \nnot be wonderfully ordered for index scans, but goodness knows how long a \nCLUSTER operation might take with that much data. Secondly, when doing an \nindex scan, Postgres unfortunately can only use the performance equivalent \nof a single disc, because it accesses the pages one by one in a \nsingle-threaded manner. A large RAID array will give you a performance \nboost if you are doing lots of index scans in parallel, but not if you are \nonly doing one. Greg Stark has a patch in the pipeline to improve this \nthough.\n\n> When I do a similar set of queries on the hardware raid I see similar \n> performance except the numbers are both more than doubled.\n\nHardware RAID is often better than software RAID. 'Nuff said.\n\n> Here is the explain output for the queries:\n\nEXPLAIN ANALYSE is even better.\n\n> Sort (cost=16948.80..16948.81 rows=1 width=10)\"\n> Sort Key: count(*)\"\n> -> HashAggregate (cost=16948.78..16948.79 rows=1 width=10)\"\n> -> Index Scan using date_idx on bigtable (cost=0.00..16652.77 rows=59201 width=10)\"\n> Index Cond: (date > '2008-04-21 00:00:00'::timestamp without time zone)\"\n\nThat doesn't look like it should take too long. How long does it take? \n(EXPLAIN ANALYSE, in other words). It's a good plan, anyway.\n\n> So now the asking for advice part. I have two questions:\n> What is the fastest way to copy data from the smaller table to the larger\n> table?\n\nINSERT INTO bigtable (field1, field2) SELECT whatever FROM staging_table\n ORDER BY staging_table.date\n\nThat will do it all in Postgres. The ORDER BY clause may slow down the \ninsert, but it will certainly speed up your subsequent index scans.\n\nIf the bigtable isn't getting any DELETE or UPDATE traffic, you don't need \nto vacuum it. However, make sure you do vacuum the staging table, \npreferably directly after moving all that data to the bigtable and \ndeleting it from the staging table.\n\n> Can someone point me to a good page on partitioning? My\n> gut tells me it should be better, but I'd like to learn more about why.\n\nYou could possibly not bother with a staging table, and replace the mass \ncopy with making a new partition. Not sure of the details myself though.\n\nMatthew\n\n-- \nMe... a skeptic? I trust you have proof?\n", "msg_date": "Thu, 24 Apr 2008 16:27:39 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about disk IO an index use and seeking advice" }, { "msg_contents": "\n> Our ~600,000,000\n> row table is changed very infrequently and is on a 12 disk software \n> raid-6\n> for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X\n> Fusion-MPT SAS Our ~50,000,000 row staging table is on a 12 disk \n> hardware\n> raid-10 using a Dell PowerEdge Expandable RAID controller 5.\n\n> So my disk IO and index question. 
When I issue a query on the big table\n> like this:\n> SELECT column, count(*)\n> FROM bigtable\n> GROUP BY column\n> ORDER BY count DESC\n> When I run dstat to see my disk IO I see the software raid-6 consistently\n> holding over 70M/sec. This is fine with me, but I generally don't like \n> to\n> do queries that table scan 600,000,000 rows. So I do:\n\n\tNote that RAID5 or 6 is fine when reading, it's the small random writes \nthat kill it.\n\tIs the table being inserted to while you run this query, which will \ngenerate small random writes for the index updates ?\n\tOr is the table only inserted to during the nightly cron job ?\n\n\t70 MB/s seems to me quite close to what a single SATA disk could do these \ndays.\n\tMy software RAID 5 saturates the PCI bus in the machine and pushes more \nthan 120 MB/s.\n\tYou have PCI-X and 12 disks so you should get huuuuge disk throughput, \nreally mindboggling figures, not 70 MB/s.\n\tSince this seems a high-budget system perhaps a good fast hardware RAID ?\n\tOr perhaps this test was performed under heavy load and it is actually a \ngood result.\n\n\n> All of the\n> rows in the staging table are changed at least once and then deleted and\n> recreated in the bigger table. All of the staging table's indexes are on\n> the raid-10. The postgres data directory itself is on the raid-6. I \n> think\n> all the disks are SATA 10Ks. The setup is kind of a beast.\n>\n> SELECT column, count(*)\n> FROM bigtable\n> WHERE date > '4-24-08'\n> GROUP BY column\n> ORDER BY count DESC\n> When I run dstat I see only around 2M/sec and it is not consistent at \n> all.\n>\n> So my question is, why do I see such low IO load on the index scan \n> version?\n\n\tFirst, it is probably choosing a bitmap index scan, which means it needs \nto grab lots of pages from the index. If your index is fragmented, just \nscanning the index could take a long time.\n\tThen, i is probably taking lots of random bites in the table data.\n\tIf this is an archive table, the dates should be increasing sequentially. \nIf this is not the case you will get random IO which is rather bad on huge \ndata sets.\n\n\tSo.\n\n\tIf you need the rows to be grouped on-disk by date (or perhaps another \nfield if you more frequently run other types of query, like grouping by \ncategory, or perhaps something else, you decide) :\n\n\tThe painful thing will be to reorder the table, either\n\t- use CLUSTER\n\t- or recreate a table and INSERT INTO it ORDER BY the field you chose. \nThis is going to take a while, set sort_mem to a large value. Then create \nthe indexes.\n\n\tThen every time you insert data in the archive, be sure to insert it in \nbig batches, ORDER BY the field you chose. That way new inserts will be \nstill in the order you want.\t\n\n\tWhile you're at it you might think about partitioning the monster on a \nuseful criterion (this depends on your querying).\n\n> If I could tweak some setting to make more aggressive use of IO, would it\n> actually make the query faster? The field I'm scanning has a .960858\n> correlation, but I haven't vacuumed since importing any of the data that\n\n\tYou have ANALYZEd at least ?\n\tCause if you didn't and an index scan (not bitmap) comes up on this kind \nof query and it does a million index hits you have a problem.\n\n> I'm\n> scanning, though the correlation should remain very high. 
When I do a\n> similar set of queries on the hardware raid I see similar performance\n> except the numbers are both more than doubled.\n>\n> Here is the explain output for the queries:\n> SELECT column, count(*)\n> FROM bigtable\n> GROUP BY column\n> ORDER BY count DESC\n> \"Sort (cost=74404440.58..74404444.53 rows=1581 width=10)\"\n> \" Sort Key: count(*)\"\n> \" -> HashAggregate (cost=74404336.81..74404356.58 rows=1581 width=10)\"\n> \" -> Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921\n> width=10)\"\n\n\tPlan is OK (nothing else to do really)\n\n> ---------------\n> SELECT column, count(*)\n> FROM bigtable\n> WHERE date > '4-24-08'\n> GROUP BY column\n> ORDER BY count DESC\n> \"Sort (cost=16948.80..16948.81 rows=1 width=10)\"\n> \" Sort Key: count(*)\"\n> \" -> HashAggregate (cost=16948.78..16948.79 rows=1 width=10)\"\n> \" -> Index Scan using date_idx on bigtable (cost=0.00..16652.77\n> rows=59201 width=10)\"\n> \" Index Cond: (date > '2008-04-21 00:00:00'::timestamp \n> without\n> time zone)\"\n\n\tArgh.\n\tSo you got an index scan after all.\n\tIs the 59201 rows estimate right ? If it is 10 times that you really have \na problem.\n\tIs it ANALYZEd ?\n\n> So now the asking for advice part. I have two questions:\n> What is the fastest way to copy data from the smaller table to the larger\n> table?\n\n\tINSERT INTO SELECT FROM (add ORDER BY to taste)\n\n> We plan to rearrange the setup when we move to Postgres 8.3. We'll \n> probably\n> move all the storage over to a SAN and slice the larger table into \n> monthly\n> or weekly tables. Can someone point me to a good page on partitioning? \n> My\n> gut tells me it should be better, but I'd like to learn more about why.\n\n\tBecause in your case, records having the dates you want will be in 1 \npartition (or 2), so you get a kind of automatic CLUSTER. For instance if \nyou do your query on last week's data, it will seq scan last week's \npartition (which will be a much more manageable size) and not even look at \nthe others.\n\nMatthew said :\n> You could possibly not bother with a staging table, and replacethe mass \n> copy with making a new partition. Not sure of the details myself though.\n\n\tYes you could do that.\n\tWhen a partition ceases to become actively updated, though, you should \nCLUSTER it so it is really tight and fast.\n\tCLUSTER on a partition which has a week's worth of data will obviously be \nmuch faster than CLUSTERing your monster archive.\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 24 Apr 2008 18:56:34 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about disk IO an index use and seeking advice" }, { "msg_contents": "On Thu, Apr 24, 2008 at 12:56 PM, PFC <[email protected]> wrote:\n\n>\n> Our ~600,000,000\n>> row table is changed very infrequently and is on a 12 disk software raid-6\n>> for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X\n>> Fusion-MPT SAS Our ~50,000,000 row staging table is on a 12 disk hardware\n>> raid-10 using a Dell PowerEdge Expandable RAID controller 5.\n>>\n>\n> So my disk IO and index question. When I issue a query on the big table\n>> like this:\n>> SELECT column, count(*)\n>> FROM bigtable\n>> GROUP BY column\n>> ORDER BY count DESC\n>> When I run dstat to see my disk IO I see the software raid-6 consistently\n>> holding over 70M/sec. This is fine with me, but I generally don't like to\n>> do queries that table scan 600,000,000 rows. 
So I do:\n>>\n>\n> Note that RAID5 or 6 is fine when reading, it's the small random\n> writes that kill it.\n> Is the table being inserted to while you run this query, which will\n> generate small random writes for the index updates ?\n> Or is the table only inserted to during the nightly cron job ?\n>\n> 70 MB/s seems to me quite close to what a single SATA disk could do\n> these days.\n> My software RAID 5 saturates the PCI bus in the machine and pushes\n> more than 120 MB/s.\n> You have PCI-X and 12 disks so you should get huuuuge disk\n> throughput, really mindboggling figures, not 70 MB/s.\n> Since this seems a high-budget system perhaps a good fast hardware\n> RAID ?\n> Or perhaps this test was performed under heavy load and it is\n> actually a good result.\n>\n>\n> All of the\n>> rows in the staging table are changed at least once and then deleted and\n>> recreated in the bigger table. All of the staging table's indexes are on\n>> the raid-10. The postgres data directory itself is on the raid-6. I\n>> think\n>> all the disks are SATA 10Ks. The setup is kind of a beast.\n>>\n>> SELECT column, count(*)\n>> FROM bigtable\n>> WHERE date > '4-24-08'\n>> GROUP BY column\n>> ORDER BY count DESC\n>> When I run dstat I see only around 2M/sec and it is not consistent at all.\n>>\n>> So my question is, why do I see such low IO load on the index scan\n>> version?\n>>\n>\n> First, it is probably choosing a bitmap index scan, which means it\n> needs to grab lots of pages from the index. If your index is fragmented,\n> just scanning the index could take a long time.\n> Then, i is probably taking lots of random bites in the table data.\n> If this is an archive table, the dates should be increasing\n> sequentially. If this is not the case you will get random IO which is rather\n> bad on huge data sets.\n>\n> So.\n>\n> If you need the rows to be grouped on-disk by date (or perhaps\n> another field if you more frequently run other types of query, like grouping\n> by category, or perhaps something else, you decide) :\n>\n> The painful thing will be to reorder the table, either\n> - use CLUSTER\n> - or recreate a table and INSERT INTO it ORDER BY the field you\n> chose. This is going to take a while, set sort_mem to a large value. Then\n> create the indexes.\n>\n> Then every time you insert data in the archive, be sure to insert it\n> in big batches, ORDER BY the field you chose. That way new inserts will be\n> still in the order you want.\n>\n> While you're at it you might think about partitioning the monster on\n> a useful criterion (this depends on your querying).\n>\n> If I could tweak some setting to make more aggressive use of IO, would it\n>> actually make the query faster? The field I'm scanning has a .960858\n>> correlation, but I haven't vacuumed since importing any of the data that\n>>\n>\n> You have ANALYZEd at least ?\n> Cause if you didn't and an index scan (not bitmap) comes up on this\n> kind of query and it does a million index hits you have a problem.\n>\n> I'm\n>> scanning, though the correlation should remain very high. 
When I do a\n>> similar set of queries on the hardware raid I see similar performance\n>> except the numbers are both more than doubled.\n>>\n>> Here is the explain output for the queries:\n>> SELECT column, count(*)\n>> FROM bigtable\n>> GROUP BY column\n>> ORDER BY count DESC\n>> \"Sort (cost=74404440.58..74404444.53 rows=1581 width=10)\"\n>> \" Sort Key: count(*)\"\n>> \" -> HashAggregate (cost=74404336.81..74404356.58 rows=1581 width=10)\"\n>> \" -> Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921\n>> width=10)\"\n>>\n>\n> Plan is OK (nothing else to do really)\n>\n> ---------------\n>> SELECT column, count(*)\n>> FROM bigtable\n>> WHERE date > '4-24-08'\n>> GROUP BY column\n>> ORDER BY count DESC\n>> \"Sort (cost=16948.80..16948.81 rows=1 width=10)\"\n>> \" Sort Key: count(*)\"\n>> \" -> HashAggregate (cost=16948.78..16948.79 rows=1 width=10)\"\n>> \" -> Index Scan using date_idx on bigtable (cost=0.00..16652.77\n>> rows=59201 width=10)\"\n>> \" Index Cond: (date > '2008-04-21 00:00:00'::timestamp\n>> without\n>> time zone)\"\n>>\n>\n> Argh.\n> So you got an index scan after all.\n> Is the 59201 rows estimate right ? If it is 10 times that you really\n> have a problem.\n> Is it ANALYZEd ?\n>\n> So now the asking for advice part. I have two questions:\n>> What is the fastest way to copy data from the smaller table to the larger\n>> table?\n>>\n>\n> INSERT INTO SELECT FROM (add ORDER BY to taste)\n>\n> We plan to rearrange the setup when we move to Postgres 8.3. We'll\n>> probably\n>> move all the storage over to a SAN and slice the larger table into monthly\n>> or weekly tables. Can someone point me to a good page on partitioning?\n>> My\n>> gut tells me it should be better, but I'd like to learn more about why.\n>>\n>\n> Because in your case, records having the dates you want will be in 1\n> partition (or 2), so you get a kind of automatic CLUSTER. For instance if\n> you do your query on last week's data, it will seq scan last week's\n> partition (which will be a much more manageable size) and not even look at\n> the others.\n>\n> Matthew said :\n>\n>> You could possibly not bother with a staging table, and replacethe mass\n>> copy with making a new partition. Not sure of the details myself though.\n>>\n>\n> Yes you could do that.\n> When a partition ceases to become actively updated, though, you\n> should CLUSTER it so it is really tight and fast.\n> CLUSTER on a partition which has a week's worth of data will\n> obviously be much faster than CLUSTERing your monster archive.\n>\n\nBoth Matthew and PFC, thanks for the response.\n\nIt turns out that the DB really loves to do index scans when I check new\ndata because I haven't had a chance to analyze it yet. It should be doing a\nbitmap index scan and a bitmap heap scan. I think. Doing a quick \"set\nenable_indexscan = false\" and doing a different date range really helped\nthings. Here is my understanding of the situation:\n\nAn index scan looks through the index and pulls in each pages as it sees it.\nA bitmap index scan looks through the index and makes a sorted list of all\nthe pages it needs and then the bitmap heap scan reads all the pages.\nIf your data is scattered then you may as well do the index scan, but if\nyour data is sequential-ish then you should do the bitmap index scan.\n\nIs that right? Where can I learn more? I've read\nhttp://www.postgresql.org/docs/8.2/interactive/using-explain.html but it\ndidn't really dive deeply enough. 
I'd like a list of all the options the\nquery planner has and what they mean.\n\n\nAbout clustering: I know that CLUSTER takes an exclusive lock on the\ntable. At present, users can query the table at any time, so I'm not\nallowed to take an exclusive lock for more than a few seconds. Could I\nachieve the same thing by creating a second copy of the table and then\nswapping the first copy out for the second? I think something like that\nwould fit in my time frames.\n\n\nAbout partitioning: I can definitely see how having the data in more\nmanageable chunks would allow me to do things like clustering. It will\ndefinitely make vacuuming easier.\n\nAbout IO speeds: The db is always under some kind of load. I actually get\nscared if the load average isn't at least 2. Could I try to run something\nlike bonnie++ to get some real load numbers? I'm sure that would cripple\nthe system while it is running, but if it only takes a few seconds that\nwould be ok.\n\nThere were updates running while I was running the test. The WAL log is on\nthe hardware raid 10. Moving it from the software raid 5 almost doubled our\ninsert performance.\n\nThanks again,\n\n--Nik\n\nOn Thu, Apr 24, 2008 at 12:56 PM, PFC <[email protected]> wrote:\n\n\nOur ~600,000,000\nrow table is changed very infrequently and is on a 12 disk software raid-6\nfor historical reasons using an  LSI Logic / Symbios Logic SAS1068 PCI-X\nFusion-MPT SAS  Our ~50,000,000 row staging table is on a 12 disk hardware\nraid-10 using a Dell PowerEdge Expandable RAID controller 5.\n\n\n\nSo my disk IO and index question.  When I issue a query on the big table\nlike this:\nSELECT    column, count(*)\nFROM    bigtable\nGROUP BY column\nORDER BY count DESC\nWhen I run dstat to see my disk IO I see the software raid-6 consistently\nholding over 70M/sec.  This is fine with me, but I generally don't like to\ndo queries that table scan 600,000,000 rows.  So I do:\n\n\n        Note that RAID5 or 6 is fine when reading, it's the small random writes that kill it.\n        Is the table being inserted to while you run this query, which will generate small random writes for the index updates ?\n        Or is the table only inserted to during the nightly cron job ?\n\n        70 MB/s seems to me quite close to what a single SATA disk could do these days.\n        My software RAID 5 saturates the PCI bus in the machine and pushes more than 120 MB/s.\n        You have PCI-X and 12 disks so you should get huuuuge disk throughput, really mindboggling figures, not 70 MB/s.\n        Since this seems a high-budget system perhaps a good fast hardware RAID ?\n        Or perhaps this test was performed under heavy load and it is actually a good result.\n\n\n\nAll of the\nrows in the staging table are changed at least once and then deleted and\nrecreated in the bigger table.  All of the staging table's indexes are on\nthe raid-10.  The postgres data directory itself is on the raid-6.  I think\nall the disks are SATA 10Ks. The setup is kind of a beast.\n\nSELECT    column, count(*)\nFROM    bigtable\nWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESC\nWhen I run dstat I see only around 2M/sec and it is not consistent at all.\n\nSo my question is, why do I see such low IO load on the index scan version?\n\n\n        First, it is probably choosing a bitmap index scan, which means it needs to grab lots of pages from the index. 
If your index is fragmented, just scanning the index could take a long time.\n        Then, i is probably taking lots of random bites in the table data.\n        If this is an archive table, the dates should be increasing sequentially. If this is not the case you will get random IO which is rather bad on huge data sets.\n\n        So.\n\n        If you need the rows to be grouped on-disk by date (or perhaps another field if you more frequently run other types of query, like grouping by category, or perhaps something else, you decide) :\n\n        The painful thing will be to reorder the table, either\n        - use CLUSTER\n        - or recreate a table and INSERT INTO it ORDER BY the field you chose. This is going to take a while, set sort_mem to a large value. Then create the indexes.\n\n        Then every time you insert data in the archive, be sure to insert it in big batches, ORDER BY the field you chose. That way new inserts will be still in the order you want.    \n\n        While you're at it you might think about partitioning the monster on a useful criterion (this depends on your querying).\n\n\nIf I could tweak some setting to make more aggressive use of IO, would it\nactually make the query faster?  The field I'm scanning has a .960858\ncorrelation, but I haven't vacuumed since importing any of the data that\n\n\n        You have ANALYZEd at least ?\n        Cause if you didn't and an index scan (not bitmap) comes up on this kind of query and it does a million index hits you have a problem.\n\n\nI'm\nscanning, though the correlation should remain very high.  When I do a\nsimilar set of queries on the hardware raid I see similar performance\nexcept  the numbers are both more than doubled.\n\nHere is the explain output for the queries:\nSELECT    column, count(*)\nFROM    bigtable\nGROUP BY column\nORDER BY count DESC\n\"Sort  (cost=74404440.58..74404444.53 rows=1581 width=10)\"\n\"  Sort Key: count(*)\"\n\"  ->  HashAggregate  (cost=74404336.81..74404356.58 rows=1581 width=10)\"\n\"        ->  Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921\nwidth=10)\"\n\n\n        Plan is OK (nothing else to do really)\n\n\n---------------\nSELECT    column, count(*)\nFROM    bigtable\nWHERE date > '4-24-08'\nGROUP BY column\nORDER BY count DESC\n\"Sort  (cost=16948.80..16948.81 rows=1 width=10)\"\n\"  Sort Key: count(*)\"\n\"  ->  HashAggregate  (cost=16948.78..16948.79 rows=1 width=10)\"\n\"        ->  Index Scan using date_idx on bigtable (cost=0.00..16652.77\nrows=59201 width=10)\"\n\"              Index Cond: (date > '2008-04-21 00:00:00'::timestamp without\ntime zone)\"\n\n\n        Argh.\n        So you got an index scan after all.\n        Is the 59201 rows estimate right ? If it is 10 times that you really have a problem.\n        Is it ANALYZEd ?\n\n\nSo now the asking for advice part.  I have two questions:\nWhat is the fastest way to copy data from the smaller table to the larger\ntable?\n\n\n        INSERT INTO SELECT FROM (add ORDER BY to taste)\n\n\nWe plan to rearrange the setup when we move to Postgres 8.3.  We'll probably\nmove all the storage over to a SAN and slice the larger table into monthly\nor weekly tables.  Can someone point me to a good page on partitioning?  My\ngut tells me it should be better, but I'd like to learn more about why.\n\n\n        Because in your case, records having the dates you want will be in 1 partition (or 2), so you get a kind of automatic CLUSTER. 
For instance if you do your query on last week's data, it will seq scan last week's partition (which will be a much more manageable size) and not even look at the others.\n\nMatthew said :\n\nYou could possibly not bother with a staging table, and replacethe mass copy with making a new partition. Not sure of the details myself though.\n\n\n        Yes you could do that.\n        When a partition ceases to become actively updated, though, you should CLUSTER it so it is really tight and fast.\n        CLUSTER on a partition which has a week's worth of data will obviously be much faster than CLUSTERing your monster archive.\nBoth Matthew and PFC, thanks for the response.It turns out that the DB really loves to do index scans when I check new data because I haven't had a chance to analyze it yet.  It should be doing a bitmap index scan and a bitmap heap scan.  I think.  Doing a quick \"set enable_indexscan = false\" and doing a different date range really helped things.  Here is my understanding of the situation:\nAn index scan looks through the index and pulls in each pages as it sees it.A bitmap index scan looks through the index and makes a sorted list of all the pages it needs and then the bitmap heap scan reads all the pages.\nIf your data is scattered then you may as well do the index scan, but if your data is sequential-ish then you should do the bitmap index scan.Is that right?  Where can I learn more?  I've read http://www.postgresql.org/docs/8.2/interactive/using-explain.html but it didn't really dive deeply enough.  I'd like a list of all the options the query planner has and what they mean.\nAbout clustering:  I know that CLUSTER takes an exclusive lock on the table.  At present, users can query the table at any time, so I'm not allowed to take an exclusive lock for more than a few seconds.  Could I achieve the same thing by creating a second copy of the table and then swapping the first copy out for the second?  I think something like that would fit in my time frames.\nAbout partitioning:  I can definitely see how having the data in more manageable chunks would allow me to do things like clustering.  It will definitely make vacuuming easier.About IO speeds:  The db is always under some kind of load.  I actually get scared if the load average isn't at least 2.  Could I try to run something like bonnie++ to get some real load numbers?  I'm sure that would cripple the system while it is running, but if it only takes a few seconds that would be ok.\nThere were updates running while I was running the test.  The WAL log is on the hardware raid 10.  Moving it from the software raid 5 almost doubled our insert performance.Thanks again,--Nik", "msg_date": "Thu, 24 Apr 2008 14:19:08 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about disk IO an index use and seeking advice" }, { "msg_contents": "\n> An index scan looks through the index and pulls in each pages as it sees \n> it.\n> A bitmap index scan looks through the index and makes a sorted list of \n> all\n> the pages it needs and then the bitmap heap scan reads all the pages.\n> If your data is scattered then you may as well do the index scan, but if\n> your data is sequential-ish then you should do the bitmap index scan.\n>\n> Is that right? Where can I learn more? I've read\n\n\tThat's about it, yes.\n\tIf your bitmap has large holes, it will seek, but if it has little holes, \nreadahead will work. 
Hence, fast, and good.\n\tOn indexscan, readahead doesn't help since the hits are pretty random. If \nyou have N rows in the index with the same date, in which order will they \nget scanned ? There is no way to know that, and no way to be sure this \norder corresponds to physical order on disk.\n\n> About clustering: I know that CLUSTER takes an exclusive lock on the\n> table. At present, users can query the table at any time, so I'm not\n> allowed to take an exclusive lock for more than a few seconds.\n\n\tThen, CLUSTER is out.\n\n> Could I\n> achieve the same thing by creating a second copy of the table and then\n> swapping the first copy out for the second? I think something like that\n> would fit in my time frames\n\n\tIf the archive table is read-only, then yes, you can do this.\n.\n> About partitioning: I can definitely see how having the data in more\n> manageable chunks would allow me to do things like clustering. It will\n> definitely make vacuuming easier.\n>\n> About IO speeds: The db is always under some kind of load. I actually \n> get\n> scared if the load average isn't at least 2. Could I try to run \n> something\n> like bonnie++ to get some real load numbers? I'm sure that would cripple\n> the system while it is running, but if it only takes a few seconds that\n> would be ok.\n>\n> There were updates running while I was running the test. The WAL log is \n> on\n> the hardware raid 10. Moving it from the software raid 5 almost doubled \n> our\n> insert performance.\n\n\tNormal ; fsync on a RAID5-6 is bad, bad.\n\tYou have battery backed up cache ?\n\n> Thanks again,\n>\n> --Nik\n\n\n", "msg_date": "Thu, 24 Apr 2008 22:02:40 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about disk IO an index use and seeking advice" } ]
[ { "msg_contents": "Any idea why it wouldn't choose the right index ?\n\nThis is 8.3\n\n> # \\d battles\n> Table \"public.battles\"\n> Column | Type |\n> Modifiers\n> ---------------------+----------------------------- \n> +------------------------------------------------------\n> id | integer | not null default\n> nextval('battles_id_seq'::regclass)\n> user_id | integer | not null\n> contest_id | integer | not null\n> entry_1_id | integer | not null\n> entry_2_id | integer | not null\n> new_entry_1_score | integer |\n> new_entry_2_score | integer |\n> score | integer |\n> scored_at | timestamp without time zone |\n> created_at | timestamp without time zone | not null\n> function_profile_id | integer |\n> battle_type | integer | default 0\n> Indexes:\n> \"battles_pkey\" PRIMARY KEY, btree (id)\n> \"unique_with_type\" UNIQUE, btree (user_id, entry_1_id, entry_2_id,\n> battle_type)\n> \"battles_by_contest_and_type\" btree (contest_id, battle_type)\n> \"battles_by_time\" btree (scored_at)\n> Foreign-key constraints:\n> \"fk_battles_contests\" FOREIGN KEY (contest_id) REFERENCES \n> contests(id)\n> \"fk_battles_lefty\" FOREIGN KEY (entry_1_id) REFERENCES entries(id)\n> \"fk_battles_righty\" FOREIGN KEY (entry_2_id) REFERENCES entries(id)\n> \"fk_battles_users\" FOREIGN KEY (user_id) REFERENCES users(id)\n>\n>\n> Here is the analyze of the query we want but it takes forever because\n> its using the index for the sort instead of restricting the number of\n> battles by user_id:\n>\n> ourstage_production=# explain analyze SELECT * FROM battles WHERE\n> user_id = 196698 and scored_at is not null and score in (-3,3) ORDER \n> BY\n> id DESC LIMIT 5;\n>\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..8381.61 rows=5 width=56) (actual\n> time=124421.499..183659.404 rows=2 loops=1)\n> -> Index Scan Backward using battles_pkey on battles\n> (cost=0.00..670528.67 rows=400 width=56) (actual\n> time=124421.495..183659.394 rows=2 loops=1)\n> Filter: ((scored_at IS NOT NULL) AND (score = ANY\n> ('{-3,3}'::integer[])) AND (user_id = 196698))\n> Total runtime: 183659.446 ms\n> (4 rows)\n>\n>\n> If you remove the ORDER BY then it runs in 4 ms:\n>\n> ourstage_production=# explain analyze SELECT * FROM battles WHERE\n> user_id = 196698 and scored_at is not null and score in (-3,3) LIMIT \n> 5;\n> QUERY \n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..126.65 rows=5 width=56) (actual time=4.607..4.621\n> rows=2 loops=1)\n> -> Index Scan using unique_with_type on battles\n> (cost=0.00..10131.66 rows=400 width=56) (actual time=4.603..4.611\n> rows=2 loops=1)\n> Index Cond: (user_id = 196698)\n> Filter: ((scored_at IS NOT NULL) AND (score = ANY\n> ('{-3,3}'::integer[])))\n> Total runtime: 4.660 ms\n> (5 rows)\n>\n>\n> Here we tried to limit the table scan by time so that it would scan \n> far\n> fewer records. But what ended up happening is that it flipped it over\n> to using the right index. 
The one that is based on user_id is much\n> preferred:\n>\n>\n> ourstage_production=# explain analyze SELECT * FROM battles WHERE\n> user_id = 196698 and scored_at is not null and score in (-3,3) and\n> scored_at > now() - INTERVAL '6 month' ORDER BY id DESC LIMIT 5;\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=10158.16..10158.18 rows=5 width=56) (actual\n> time=0.097..0.106 rows=2 loops=1)\n> -> Sort (cost=10158.16..10158.92 rows=302 width=56) (actual\n> time=0.094..0.096 rows=2 loops=1)\n> Sort Key: id\n> Sort Method: quicksort Memory: 25kB\n> -> Index Scan using unique_with_type on battles\n> (cost=0.00..10153.15 rows=302 width=56) (actual time=0.069..0.078\n> rows=2 loops=1)\n> Index Cond: (user_id = 196698)\n> Filter: ((scored_at IS NOT NULL) AND (score = ANY\n> ('{-3,3}'::integer[])) AND (scored_at > (now() - '6 mons'::interval)))\n> Total runtime: 0.152 ms\n> (8 rows)\n>\n>\n> Notice that we added time restriction and it now chooses to not use \n> the\n> time index and goes after the index based on user_id. Why? We \n> don't know.\n\n", "msg_date": "Thu, 24 Apr 2008 20:06:01 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Planner won't use composite index if there is an order by ????" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> Any idea why it wouldn't choose the right index ?\n\nIt thinks that way is cheaper. It's estimating its cost at 8381,\nwhereas selecting all the rows then sorting must take 10131 plus\nsome time to sort.\n\nThe reason why those estimates are so far off from reality is directly\ntied to the 400-estimated-vs-2-actual rowcount estimation error.\nI'm not sure how much of that could be fixed by raising the stats target\nfor the table, but that's certainly something to try.\n\nAnother thing you should look at is eliminating dependences between\ncolumns. I'll bet that the \"scored_at is not null\" condition is either\nredundant with the \"score in ...\" condition, or could be made so (ie\nforce score to null when scored_at is null). If you could get rid of\nthe separate test on scored_at, it'd help to avoid the estimation weak\nspot of correlated restrictions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Apr 2008 20:23:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner won't use composite index if there is an order by ???? " } ]
[ { "msg_contents": "Hi All,\n\nI´d like to know what´s the best practice to LOAD a 70 milion rows, 101\ncolumns table\nfrom ORACLE to PGSQL.\n\nThe current approach is to dump the data in CSV and than COPY it to\nPostgresql.\n\nAnyone has a better idea.\n\n\nRegards\nAdonias Malosso\n\nHi All,I´d like to know what´s the best practice to LOAD a 70 milion rows, 101 columns table from ORACLE to PGSQL. The current approach is to dump the data in CSV and than COPY it to Postgresql.\nAnyone has a better idea.RegardsAdonias Malosso", "msg_date": "Sat, 26 Apr 2008 10:25:22 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Adonias Malosso wrote:\n> Hi All,\n> \n> I�d like to know what�s the best practice to LOAD a 70 milion rows, 101 \n> columns table\n> from ORACLE to PGSQL.\n> \n> The current approach is to dump the data in CSV and than COPY it to \n> Postgresql.\n> \n> Anyone has a better idea.\n\nWrite a java trigger in Oracle that notes when a row has been \nadded/delete/updated and does the exact same thing in postgresql.\n\nJoshua D. Drake\n\n\n> \n> \n> Regards\n> Adonias Malosso\n\n", "msg_date": "Sat, 26 Apr 2008 08:10:24 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Yep ­ just do something like this within sqlplus (from\nhttp://www.dbforums.com/showthread.php?t=350614):\n\nset termout off\nset hea off\nset pagesize 0\n\nspool c:\\whatever.csv\n\nselect a.a||','||a.b||','||a.c\nfrom a\nwhere a.a=\"whatever\";\n\nspool off\n\nCOPY is the fastest approach to get it into PG.\n\n- Luke\n\nOn 4/26/08 6:25 AM, \"Adonias Malosso\" <[email protected]> wrote:\n\n> Hi All,\n> \n> I´d like to know what´s the best practice to LOAD a 70 milion rows, 101\n> columns table \n> from ORACLE to PGSQL.\n> \n> The current approach is to dump the data in CSV and than COPY it to\n> Postgresql.\n> \n> Anyone has a better idea.\n> \n> \n> Regards\n> Adonias Malosso\n> \n\n\n\n\nRe: [PERFORM] Best practice to load a huge table from ORACLE to PG\n\n\nYep – just do something like this within sqlplus (from http://www.dbforums.com/showthread.php?t=350614):\n\nset termout off\nset hea off\nset pagesize 0\n\nspool c:\\whatever.csv\n\nselect a.a||','||a.b||','||a.c\nfrom a\nwhere a.a=\"whatever\";\n\nspool off\n\nCOPY is the fastest approach to get it into PG.\n\n- Luke\n\nOn 4/26/08 6:25 AM, \"Adonias Malosso\" <[email protected]> wrote:\n\nHi All,\n\nI´d like to know what´s the best practice to LOAD a 70 milion rows, 101 columns table \nfrom ORACLE to PGSQL. \n\nThe current approach is to dump the data in CSV and than COPY it to Postgresql.\n\nAnyone has a better idea.\n\n\nRegards\nAdonias Malosso", "msg_date": "Sat, 26 Apr 2008 09:13:34 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <[email protected]> wrote:\n> I´d like to know what´s the best practice to LOAD a 70 milion rows, 101\n> columns table\n> from ORACLE to PGSQL.\n\nThe fastest and easiest method would be to dump the data from Oracle\ninto CSV/delimited format using something like ociuldr\n(http://www.anysql.net/en/ociuldr.html) and load it back into PG using\npg_bulkload (which is a helluva lot faster than COPY). 
Of course, you\ncould try other things as well... such as setting up generic\nconnectivity to PG and inserting the data to a PG table over the\ndatabase link.\n\nSimilarly, while I hate to see shameless self-plugs in the community,\nthe *fastest* method you could use is dblink_ora_copy, contained in\nEnterpriseDB's PG+ Advanced Server; it uses an optimized OCI\nconnection to COPY the data directly from Oracle into Postgres, which\nalso saves you the intermediate step of dumping the data.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sat, 26 Apr 2008 21:14:53 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "On Sat, 26 Apr 2008, Adonias Malosso wrote:\n\n> The current approach is to dump the data in CSV and than COPY it to \n> Postgresql.\n\nYou would have to comment on what you don't like about what you're doing \nnow, what parts need to be improved for your priorities, to get a properly \ntargeted answer here.\n\n> I�d like to know what�s the best practice to LOAD a 70 milion rows, 101 \n> columns table from ORACLE to PGSQL.\n\nThere is no one best practice. There's a wide variety of techniques on \nboth the Oracle and PostgreSQL side in this area that might be used \ndepending on what trade-offs are important to you.\n\nFor example, if the goal was to accelerate a dump of a single table to run \nas fast as possible because you need , you'd want to look into techniques \nthat dumped that table with multiple sessions going at once, each handling \na section of that table. Typically you'd use one session per CPU on the \nserver, and you'd use something with a much more direct path into the data \nthan SQL*PLUS. Then on the PostgreSQL side, you could run multiple COPY \nsessions importing at once to read this data all back in, because COPY \nwill bottleneck at the CPU level before the disks will if you've got \nreasonable storage hardware.\n\nThere's a list of utilities in this are at \nhttp://www.orafaq.com/wiki/SQL*Loader_FAQ#Is_there_a_SQL.2AUnloader_to_download_data_to_a_flat_file.3F \nyou might look for inspiration in that area, I know the WisdomForce \nFastReader handles simultaneous multi-section dumps via a very direct path \nto the data.\n\n...but that's just one example based on one set of priorities, and it will \nbe expensive in terms of dollars and complexity.\n\nAs another example of something that changes things considerably, if \nthere's any data with errors that will cause COPY to abort you might \nconsider a different approach on the PG side.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Sun, 27 Apr 2008 09:01:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Hi,\n\nLe dimanche 27 avril 2008, Greg Smith a écrit :\n> than SQL*PLUS. 
Then on the PostgreSQL side, you could run multiple COPY\n> sessions importing at once to read this data all back in, because COPY\n> will bottleneck at the CPU level before the disks will if you've got\n> reasonable storage hardware.\n\nLatest pgloader version has been made to handle this exact case, so if you \nwant to take this route, please consider pgloader 2.3.0:\n http://pgloader.projects.postgresql.org/#_parallel_loading\n http://pgfoundry.org/projects/pgloader/\n\nAnother good reason to consider using pgloader is when the datafile contains \nerroneous input lines and you don't want the COPY transaction to abort. Those \nerror lines will get rejected out by pgloader while the correct ones will get \nCOPYied in.\n\nRegards,\n-- \ndim", "msg_date": "Mon, 28 Apr 2008 09:49:37 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Adonias Malosso wrote:\n> Hi All,\n> \n> I�d like to know what�s the best practice to LOAD a 70 milion rows, 101 \n> columns table\n> from ORACLE to PGSQL.\n> \n> The current approach is to dump the data in CSV and than COPY it to \n> Postgresql.\n> \nUhm. 101 columns you say? Sounds interesting. There are dataloaders\nlike: http://pgfoundry.org/projects/pgloader/ which could speed\nup loading the data over just copy csv. I wonder how much normalizing\ncould help.\n\nTino\n", "msg_date": "Mon, 28 Apr 2008 21:59:02 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Jonah,\n\nThank you for the answer. Good to know about this enterprise DB feature.\n\nI´ll follow using pgloader.\n\nRegards.\n\nAdonias Malosso\n\nOn Sat, Apr 26, 2008 at 10:14 PM, Jonah H. Harris <[email protected]>\nwrote:\n\n> On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <[email protected]>\n> wrote:\n> > I´d like to know what´s the best practice to LOAD a 70 milion rows, 101\n> > columns table\n> > from ORACLE to PGSQL.\n>\n> The fastest and easiest method would be to dump the data from Oracle\n> into CSV/delimited format using something like ociuldr\n> (http://www.anysql.net/en/ociuldr.html) and load it back into PG using\n> pg_bulkload (which is a helluva lot faster than COPY). Of course, you\n> could try other things as well... such as setting up generic\n> connectivity to PG and inserting the data to a PG table over the\n> database link.\n>\n> Similarly, while I hate to see shameless self-plugs in the community,\n> the *fastest* method you could use is dblink_ora_copy, contained in\n> EnterpriseDB's PG+ Advanced Server; it uses an optimized OCI\n> connection to COPY the data directly from Oracle into Postgres, which\n> also saves you the intermediate step of dumping the data.\n>\n> --\n> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n>\n\nJonah,\n \nThank you for the answer. Good to know about this enterprise DB feature.\n \nI´ll follow using pgloader.\n \nRegards.\n \nAdonias Malosso\nOn Sat, Apr 26, 2008 at 10:14 PM, Jonah H. 
Harris <[email protected]> wrote:\n\nOn Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <[email protected]> wrote:> I´d like to know what´s the best practice to LOAD a 70 milion rows, 101\n> columns table> from ORACLE to PGSQL.The fastest and easiest method would be to dump the data from Oracleinto CSV/delimited format using something like ociuldr(http://www.anysql.net/en/ociuldr.html) and load it back into PG using\npg_bulkload (which is a helluva lot faster than COPY).  Of course, youcould try other things as well... such as setting up genericconnectivity to PG and inserting the data to a PG table over thedatabase link.\nSimilarly, while I hate to see shameless self-plugs in the community,the *fastest* method you could use is dblink_ora_copy, contained inEnterpriseDB's PG+ Advanced Server; it uses an optimized OCIconnection to COPY the data directly from Oracle into Postgres, which\nalso saves you the intermediate step of dumping the data.--Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324EnterpriseDB Corporation | fax: 732.331.1301499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/", "msg_date": "Mon, 28 Apr 2008 18:37:46 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "On Mon, Apr 28, 2008 at 5:37 PM, Adonias Malosso <[email protected]> wrote:\n> Thank you for the answer. Good to know about this enterprise DB feature.\n\nNo problem.\n\n> I´ll follow using pgloader.\n\nThat's fine. Though, I'd really suggest pg_bulkload, it's quite a bit faster.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 28 Apr 2008 23:31:11 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" } ]
[ { "msg_contents": "But do we link oracle trigger to postgres trigger ?\n i mean :\n oracle trigger will take a note of what has been changed .\n but then how do we pass those changes to postgres trigger ?\n can u suggest any logic or algorithm ?\n Regards, \n Srikanth k Potluri \n +63 9177444783(philippines) \n On Sat 26/04/08 8:40 PM , \"Joshua D. Drake\" [email protected]\nsent:\n Adonias Malosso wrote: \n > Hi All, \n > \n > I�d like to know what�s the best practice to LOAD a 70 milion\nrows, 101 \n > columns table \n > from ORACLE to PGSQL. \n > \n > The current approach is to dump the data in CSV and than COPY it\nto \n > Postgresql. \n > \n > Anyone has a better idea. \n Write a java trigger in Oracle that notes when a row has been \n added/delete/updated and does the exact same thing in postgresql. \n Joshua D. Drake \n > \n > \n > Regards \n > Adonias Malosso \n -- \n Sent via pgsql-performance mailing list\n([email protected] [1]) \n To make changes to your subscription: \n http://www.postgresql.org/mailpref/pgsql-performance \n\n\nLinks:\n------\n[1]\nhttp://sitemail7.hostway.com/javascript:top.opencompose(\\'[email protected]\\',\\'\\',\\'\\',\\'\\')\n\n\nBut do we link oracle trigger to postgres trigger ?\n\ni mean :\n\noracle trigger will take a note of what has been changed .\nbut then how do we pass those changes to postgres trigger ?\n\ncan u suggest any logic or algorithm ?\n\n\n\nRegards,\n\nSrikanth k Potluri\n\n+63 9177444783(philippines) \n\nOn Sat 26/04/08 8:40 PM , \"Joshua D. Drake\" [email protected] sent:\nAdonias Malosso wrote:\n\n> Hi All,\n\n> \n\n> I�d like to know what�s the best practice to LOAD a 70 milion rows, 101 \n\n> columns table\n\n> from ORACLE to PGSQL.\n\n> \n\n> The current approach is to dump the data in CSV and than COPY it to \n\n> Postgresql.\n\n> \n\n> Anyone has a better idea.\n\n\n\nWrite a java trigger in Oracle that notes when a row has been \n\n\nadded/delete/updated and does the exact same thing in postgresql.\n\n\n\nJoshua D. Drake\n\n\n\n> \n\n> \n\n> Regards\n\n> Adonias Malosso\n\n\n\n\n-- \n\n\nSent via pgsql-performance mailing list ([email protected])\n\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 26 Apr 2008 10:17:28 -0500", "msg_from": "Potluri Srikanth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Potluri Srikanth wrote:\n> But do we link oracle trigger to postgres trigger ?\n> \n> i mean :\n> \n> oracle trigger will take a note of what has been changed .\n> but then how do we pass those changes to postgres trigger ?\n\nI am assuming you can use the java trigger from oracle to load the \npostgresql jdbc driver, make a connection to postgresql and perform \nwhatever statement needed to be done.\n\nSincerely,\n\nJoshua D. Drake\n\nP.S. It is possible that Oracle can't do this (I don't know)\n", "msg_date": "Sat, 26 Apr 2008 08:33:12 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" }, { "msg_contents": "Joshua D. 
Drake wrote:\n> Potluri Srikanth wrote:\n>> But do we link oracle trigger to postgres trigger ?\n>>\n>> i mean :\n>>\n>> oracle trigger will take a note of what has been changed .\n>> but then how do we pass those changes to postgres trigger ?\n> \n> I am assuming you can use the java trigger from oracle to load the \n> postgresql jdbc driver, make a connection to postgresql and perform \n> whatever statement needed to be done.\n\nNote that this will be rather inefficient if you're obtaining a new \nconnection every time. It looks like Oracle's Java stored procedures and \ntriggers run in an appserver-like environment, though, so you should be \nable to use a connection pool, JNDI, or similar.\n\nSome Java stored procedure examples:\n\nhttp://www.oracle.com/technology/sample_code/tech/java/jsp/oracle9ijsp.html\n\nYou could also use a Java trigger to send simpler change message, with a \nserialized row if required, to an external app that's responsible for \nupdating the PostgreSQL database. That might cause less load on the DB \nserver.\n\nThe trouble with this approach, though, is that it might be hard to get \nright when transactions roll back. An alternative is to use an Oracle \ntrigger that inserts records in a change tracking / audit table. You can \nthen periodically read and clear the audit table, using that change \nhistory data to update the PostgreSQL database. This method has the \nadvantage of being transaction safe, as data will never become visible \nin the audit table until the transaction making the changes has committed.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 27 Apr 2008 02:51:55 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best practice to load a huge table from ORACLE to PG" } ]
[ { "msg_contents": "I run on PostgreSQL 8.3, default settings (also tried to change \nrandom_page_cost close to 1).\nWhat need I change to make the second query run as fast as the first? \nSet enable_hashjoin to off solves this problem, but it's not the way I \ncan use.\nStatistics for all columns is on the level 1000.\n\nexplain analyze\nselect *\nfrom c\n join i on i.c_id = c.id\nwhere c.d between '2007-02-01' and '2007-02-06'\n\nNested Loop (cost=0.00..25066.24 rows=4771 width=28) (actual \ntime=0.129..52.499 rows=5215 loops=1)\n -> Index Scan using c_d_idx on c (cost=0.00..86.77 rows=2368 \nwidth=12) (actual time=0.091..4.623 rows=2455 loops=1)\n Index Cond: ((d >= '2007-02-01'::date) AND (d <= \n'2007-02-06'::date))\n -> Index Scan using i_c_id_idx on i (cost=0.00..10.51 rows=3 \nwidth=16) (actual time=0.006..0.010 rows=2 loops=2455)\n Index Cond: (i.c_id = c.id)\nTotal runtime: 59.501 ms\n\nexplain analyze\nselect *\nfrom c\n join i on i.c_id = c.id\nwhere c.d between '2007-02-01' and '2007-02-07'\n\nHash Join (cost=143.53..27980.95 rows=6021 width=28) (actual \ntime=612.282..4162.321 rows=6497 loops=1)\n Hash Cond: (i.c_id = c.id)\n -> Seq Scan on i (cost=0.00..19760.59 rows=1282659 width=16) (actual \ntime=0.073..2043.658 rows=1282659 loops=1)\n -> Hash (cost=106.18..106.18 rows=2988 width=12) (actual \ntime=11.635..11.635 rows=3064 loops=1)\n -> Index Scan using c_d_idx on c (cost=0.00..106.18 rows=2988 \nwidth=12) (actual time=0.100..6.055 rows=3064 loops=1)\n Index Cond: ((d >= '2007-02-01'::date) AND (d <= \n'2007-02-07'::date))\nTotal runtime: 4171.049 ms\n\nCREATE TABLE c\n(\n id bigint NOT NULL,\n d date,\n CONSTRAINT c_id_pk PRIMARY KEY (id)\n);\n\nCREATE INDEX c_d_idx\n ON c\n USING btree\n (d);\n\nCREATE TABLE i\n(\n val bigint,\n c_id bigint,\n CONSTRAINT i_c_id_fk FOREIGN KEY (c_id)\n REFERENCES c (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n);\n\nCREATE INDEX i_c_id_idx\n ON i\n USING btree\n (c_id);\n\n", "msg_date": "Mon, 28 Apr 2008 11:13:32 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Simple JOIN problem" } ]
[ { "msg_contents": "All,\n\nWe have a table \"table1\" which get insert and updates daily in high numbers,\nbcoz of which its size is increasing and we have to vacuum it every\nalternate day. Vacuuming \"table1\" take almost 30min and during that time the\nsite is down.\n\nWe need to cut down on this downtime.So thought of having a replication\nsystem, for which the replicated DB will be up during the master is getting\nvacuumed.\n\nCan anybody guide which will be the best suited replication solution for\nthis.\n\nThanx for any help\n~ Gauri\n\nAll,We have a table \"table1\" which get insert and updates daily in high numbers, bcoz of which its size is increasing and we have to vacuum it every alternate day. Vacuuming \"table1\" take almost 30min and during that time the site is down.\nWe need to cut down on this downtime.So thought of having a replication system, for which the replicated DB will be up during the master is getting vacuumed.Can anybody guide which will be the best suited replication solution for this.\nThanx for any help~ Gauri", "msg_date": "Mon, 28 Apr 2008 19:08:56 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Replication Syatem" }, { "msg_contents": "2008/4/28 Gauri Kanekar <[email protected]>:\n\n> All,\n>\n> We have a table \"table1\" which get insert and updates daily in high\n> numbers, bcoz of which its size is increasing and we have to vacuum it every\n> alternate day. Vacuuming \"table1\" take almost 30min and during that time the\n> site is down.\n>\n> We need to cut down on this downtime.So thought of having a replication\n> system, for which the replicated DB will be up during the master is getting\n> vacuumed.\n>\n> Can anybody guide which will be the best suited replication solution for\n> this.\n>\n> Thanx for any help\n> ~ Gauri\n>\n\nI home your not using Vacuum Full....... (Standard Reply for this type of\nquestion)\n\nWhat version of Postgresql are you using?\n\nHave you tried autovacuum?\n\nRun plain vacuum even more often on this even more often (like ever half\nhour) and it should not take as long and save space.\n\nIf still have trouble run \"vacuum analyse verbose table1;\" and see what it\nsays.\n\nIf your doing it right you should be able to vacuum with the database up.\n\nSounds like you might be happier a fix for the problem rather than a complex\nwork around which will actually solve a completely different problem.\n\nRegards\n\nPeter.\n\n2008/4/28 Gauri Kanekar <[email protected]>:\nAll,We have a table \"table1\" which get insert and updates daily in high numbers, bcoz of which its size is increasing and we have to vacuum it every alternate day. Vacuuming \"table1\" take almost 30min and during that time the site is down.\nWe need to cut down on this downtime.So thought of having a replication system, for which the replicated DB will be up during the master is getting vacuumed.Can anybody guide which will be the best suited replication solution for this.\nThanx for any help~ Gauri\nI home your not using Vacuum Full....... 
(Standard Reply for this type of question)What version of Postgresql are you using?Have you tried autovacuum?Run plain vacuum even more often on this even more often (like ever half hour) and it should not take as long and save space.\nIf still have trouble run \"vacuum analyse verbose table1;\" and see what it says.If your doing it right you should be able to vacuum with the database up.Sounds like you might be happier a fix for the problem rather than a complex work around which will actually solve a completely different problem.\nRegardsPeter.", "msg_date": "Mon, 28 Apr 2008 14:58:14 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Peter,\n\nWe are doing vacuum full every alternate day. We also do vacuum analyze very\noften.\nWe are currently using 8.1.3 version.\nAuto vacuum is already on. But the table1 is so busy that auto vacuum don't\nget sufficient chance to vacuum it :(.\n\nHave already tried all the option listed by you, thats y we reached to the\ndecision of having a replication sytsem. So any suggestion on that :).\n\nThanx\n~ Gauri\n\n\n\nOn Mon, Apr 28, 2008 at 7:28 PM, Peter Childs <[email protected]>\nwrote:\n\n>\n>\n> 2008/4/28 Gauri Kanekar <[email protected]>:\n>\n> All,\n> >\n> > We have a table \"table1\" which get insert and updates daily in high\n> > numbers, bcoz of which its size is increasing and we have to vacuum it every\n> > alternate day. Vacuuming \"table1\" take almost 30min and during that time the\n> > site is down.\n> >\n> > We need to cut down on this downtime.So thought of having a replication\n> > system, for which the replicated DB will be up during the master is getting\n> > vacuumed.\n> >\n> > Can anybody guide which will be the best suited replication solution for\n> > this.\n> >\n> > Thanx for any help\n> > ~ Gauri\n> >\n>\n> I home your not using Vacuum Full....... (Standard Reply for this type of\n> question)\n>\n> What version of Postgresql are you using?\n>\n> Have you tried autovacuum?\n>\n> Run plain vacuum even more often on this even more often (like ever half\n> hour) and it should not take as long and save space.\n>\n> If still have trouble run \"vacuum analyse verbose table1;\" and see what it\n> says.\n>\n> If your doing it right you should be able to vacuum with the database up.\n>\n> Sounds like you might be happier a fix for the problem rather than a\n> complex work around which will actually solve a completely different\n> problem.\n>\n> Regards\n>\n> Peter.\n>\n\n\n\n-- \nRegards\nGauri\n\nPeter,We are doing vacuum full every alternate day. We also do vacuum analyze very often.We are currently using 8.1.3 version.Auto vacuum is already on. But the table1 is so busy that auto vacuum don't get sufficient chance to vacuum it :(.\nHave already tried all the option listed by you, thats y we reached to the decision of having a replication sytsem. So any suggestion on that :).Thanx~ GauriOn Mon, Apr 28, 2008 at 7:28 PM, Peter Childs <[email protected]> wrote:\n2008/4/28 Gauri Kanekar <[email protected]>:\n\nAll,We have a table \"table1\" which get insert and updates daily in high numbers, bcoz of which its size is increasing and we have to vacuum it every alternate day. 
Vacuuming \"table1\" take almost 30min and during that time the site is down.\nWe need to cut down on this downtime.So thought of having a replication system, for which the replicated DB will be up during the master is getting vacuumed.Can anybody guide which will be the best suited replication solution for this.\nThanx for any help~ Gauri\nI home your not using Vacuum Full....... (Standard Reply for this type of question)What version of Postgresql are you using?Have you tried autovacuum?Run plain vacuum even more often on this even more often (like ever half hour) and it should not take as long and save space.\nIf still have trouble run \"vacuum analyse verbose table1;\" and see what it says.If your doing it right you should be able to vacuum with the database up.Sounds like you might be happier a fix for the problem rather than a complex work around which will actually solve a completely different problem.\nRegardsPeter.\n-- RegardsGauri", "msg_date": "Mon, 28 Apr 2008 19:35:37 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\nOn Mon, 2008-04-28 at 19:35 +0530, Gauri Kanekar wrote:\n> Peter,\n> \n> We are doing vacuum full every alternate day. We also do vacuum\n> analyze very often.\n> We are currently using 8.1.3 version.\n> Auto vacuum is already on. But the table1 is so busy that auto vacuum\n> don't get sufficient chance to vacuum it :(.\n\nYou should seriously consider upgrading to PG 8.3. There have been\nsubstantial improvements to VACUUM since 8.1\n\nBrad.\n\n", "msg_date": "Mon, 28 Apr 2008 10:13:20 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Gauri Kanekar wrote:\n> Peter,\n> \n> We are doing vacuum full every alternate day. We also do vacuum analyze very\n> often.\n> We are currently using 8.1.3 version.\n> Auto vacuum is already on. But the table1 is so busy that auto vacuum don't\n> get sufficient chance to vacuum it :(.\n> \n> Have already tried all the option listed by you, thats y we reached to the\n> decision of having a replication sytsem. So any suggestion on that :).\n> \n> Thanx\n> ~ Gauri\n> \n\nWe use slony for exactly this type of a situation. It's not the most \nuser-friendly piece of software, but it works well enough that I can \nschedule maintenance windows (we're a 24/7 shop) and do clustering and \nother tasks on our DB to reclaim space, etc.\n\n-salman\n", "msg_date": "Mon, 28 Apr 2008 10:16:44 -0400", "msg_from": "salman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Thats one of the thingsto be done in near future.\nBut it need some changes from application point of view. :( ... so just got\nescalated for that reason.\n\nBut for now, which one will be a well suited replication system ?\n\n~ Gauri\n\nOn Mon, Apr 28, 2008 at 7:43 PM, Brad Nicholson <[email protected]>\nwrote:\n\n>\n> On Mon, 2008-04-28 at 19:35 +0530, Gauri Kanekar wrote:\n> > Peter,\n> >\n> > We are doing vacuum full every alternate day. We also do vacuum\n> > analyze very often.\n> > We are currently using 8.1.3 version.\n> > Auto vacuum is already on. But the table1 is so busy that auto vacuum\n> > don't get sufficient chance to vacuum it :(.\n>\n> You should seriously consider upgrading to PG 8.3. 
There have been\n> substantial improvements to VACUUM since 8.1\n>\n> Brad.\n>\n>\n\n\n-- \nRegards\nGauri\n\nThats one of the thingsto be done in near future.But it need some changes from application point of view. :( ... so just got escalated for that reason.But for now, which one will be a well suited replication system ?\n~ GauriOn Mon, Apr 28, 2008 at 7:43 PM, Brad Nicholson <[email protected]> wrote:\n\nOn Mon, 2008-04-28 at 19:35 +0530, Gauri Kanekar wrote:\n> Peter,\n>\n> We are doing vacuum full every alternate day. We also do vacuum\n> analyze very often.\n> We are currently using 8.1.3 version.\n> Auto vacuum is already on. But the table1 is so busy that auto vacuum\n> don't get sufficient chance to vacuum it :(.\n\nYou should seriously consider upgrading to PG 8.3.  There have been\nsubstantial improvements to VACUUM since 8.1\n\nBrad.\n\n-- RegardsGauri", "msg_date": "Mon, 28 Apr 2008 19:47:22 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Salman,\n\nSlony don't do automatic failover. And we would appreciate a system with\nautomatic failover :(\n\n~ Gauri\n\n\nOn Mon, Apr 28, 2008 at 7:46 PM, salman <[email protected]>\nwrote:\n\n> Gauri Kanekar wrote:\n>\n> > Peter,\n> >\n> > We are doing vacuum full every alternate day. We also do vacuum analyze\n> > very\n> > often.\n> > We are currently using 8.1.3 version.\n> > Auto vacuum is already on. But the table1 is so busy that auto vacuum\n> > don't\n> > get sufficient chance to vacuum it :(.\n> >\n> > Have already tried all the option listed by you, thats y we reached to\n> > the\n> > decision of having a replication sytsem. So any suggestion on that :).\n> >\n> > Thanx\n> > ~ Gauri\n> >\n> >\n> We use slony for exactly this type of a situation. It's not the most\n> user-friendly piece of software, but it works well enough that I can\n> schedule maintenance windows (we're a 24/7 shop) and do clustering and other\n> tasks on our DB to reclaim space, etc.\n>\n> -salman\n>\n\n\n\n-- \nRegards\nGauri\n\nSalman,Slony don't do automatic failover. And we would appreciate a system with automatic failover :(~ GauriOn Mon, Apr 28, 2008 at 7:46 PM, salman <[email protected]> wrote:\nGauri Kanekar wrote:\n\nPeter,\n\nWe are doing vacuum full every alternate day. We also do vacuum analyze very\noften.\nWe are currently using 8.1.3 version.\nAuto vacuum is already on. But the table1 is so busy that auto vacuum don't\nget sufficient chance to vacuum it :(.\n\nHave already tried all the option listed by you, thats y we reached to the\ndecision of having a replication sytsem. So any suggestion on that :).\n\nThanx\n~ Gauri\n\n\n\nWe use slony for exactly this type of a situation. It's not the most user-friendly piece of software, but it works well enough that I can schedule maintenance windows (we're a 24/7 shop) and do clustering and other tasks on our DB to reclaim space, etc.\n\n\n-salman\n-- RegardsGauri", "msg_date": "Mon, 28 Apr 2008 19:48:48 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Mon, Apr 28, 2008 at 07:35:37PM +0530, Gauri Kanekar wrote:\n> Peter,\n> \n> We are doing vacuum full every alternate day. We also do vacuum analyze very\n> often.\n\nVACUUM FULL is making your problem worse, not better. 
Don't do that.\n\n> We are currently using 8.1.3 version.\n\nYou need immediately to upgrade to the latest 8.1 stability and\nsecurity release, which is 8.1.11. This is a drop-in replacement.\nIt's an urgent fix for your case.\n\n> Auto vacuum is already on. But the table1 is so busy that auto vacuum don't\n> get sufficient chance to vacuum it :(.\n\nYou probably need to tune autovacuum not to do that table, and just\nvacuum that table in a constant loop or something. VACUUM should\n_never_ \"take the site down\". If it does, you're doing it wrong.\n \n> Have already tried all the option listed by you, thats y we reached to the\n> decision of having a replication sytsem. So any suggestion on that :).\n\nI think you will find that no replication system will solve your\nunderlying problems. That said, I happen to work for a company that\nwill sell you a replication system to work with 8.1 if you really want\nit.\n\nA\n\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Mon, 28 Apr 2008 12:22:43 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Mon, Apr 28, 2008 at 07:48:48PM +0530, Gauri Kanekar wrote:\n\n> Slony don't do automatic failover. And we would appreciate a system with\n> automatic failover :(\n\nNo responsible asynchronous system will give you automatic failover.\nYou can lose data that way.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Mon, 28 Apr 2008 12:23:45 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Mon, 28 Apr 2008, Gauri Kanekar wrote:\n\n> We are doing vacuum full every alternate day. We also do vacuum analyze \n> very often. We are currently using 8.1.3 version...Have already tried \n> all the option listed by you, thats y we reached to the decision of \n> having a replication sytsem.\n\nAndrew Sullivan has already given a response here I agree with, I wanted \nto expland on that. You have a VACUUM problem. The fact that you need \n(or feel you need) to VACUUM FULL every other day says there's something \nvery wrong here. The way to solve most VACUUM problems is to VACUUM more \noften, so that the work in each individual one never gets so big that your \nsystem takes an unnaceptable hit, and you shouldn't ever need VACUUM FULL. \nSince your problem is being aggrevated because you're running a \ndangerously obsolete version, that's one of the first things you should \nfix--to at least the latest 8.1 if you can't deal with a larger version \nmigration. The fact that you're happily running 8.1.3 says you most \ncertainly haven't tried all the other options here.\n\nEvery minute you spend looking into a replication system is wasted time \nyou could be spending on the right fix here. You've fallen into the \ncommon trap where you're fixated on a particular technical solution so \nmuch that you're now ignoring suggestions on how to resolve the root \nproblem. 
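As a concrete sketch of the advice above (vacuum the one hot table in a frequent loop instead of doing VACUUM FULL), something along these lines can be run from cron or any scheduler every few minutes; the table name is from the thread and the cost-delay value is only illustrative:

  -- optional: throttle the I/O impact on the busy site
  SET vacuum_cost_delay = 10;
  -- plain, non-FULL vacuum of just the busy table; it does not block normal reads or writes
  VACUUM ANALYZE table1;

On 8.1 the per-table autovacuum thresholds can also be tightened through the pg_autovacuum system catalog, so that autovacuum itself picks this table up more aggressively.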
Replication is hard to get going even on a system that works \nperfectly, and replicating a known buggy system just to work around a \nproblem really sounds like a bad choice.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 28 Apr 2008 13:39:39 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Mon, Apr 28, 2008 at 9:38 AM, Gauri Kanekar\n<[email protected]> wrote:\n> All,\n>\n> We have a table \"table1\" which get insert and updates daily in high numbers,\n> bcoz of which its size is increasing and we have to vacuum it every\n> alternate day. Vacuuming \"table1\" take almost 30min and during that time the\n> site is down.\n\nSlony is an open source replication system built for Postgres.\nBut the real problem is that you are doing a vaccum full every day.\nThis is highly invasive.\nTake a look at the postgres docs on Vacuuming the db. Analyze is best\non a daily basis. If you have a lot of deletes, then try vacuum\ntruncate.\n\nThe postgres documentation describes the various vaccuum options and\nexplains the merits of each.\n\nHope that helps.\nRadhika\n\n\n-- \nIt is all a matter of perspective. You choose your view by choosing\nwhere to stand. --Larry Wall\n", "msg_date": "Mon, 28 Apr 2008 16:58:59 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "[email protected] (\"Gauri Kanekar\") writes:\n> We have a table \"table1\" which get insert and updates daily in high\n> numbers, bcoz of which its size is increasing and we have to vacuum\n> it every alternate day. Vacuuming \"table1\" take almost 30min and\n> during that time the site is down. We need to cut down on this\n> downtime.So thought of having a replication system, for which the\n> replicated DB will be up during the master is getting vacuumed. Can\n> anybody guide which will be the best suited replication solution for\n> this.\n\nThe only reason that it would be necessary for VACUUM to \"take the\nsite down\" would be if you are running version 7.1, which was\nobsoleted in 2002, which, it should be noted, was SIX YEARS AGO.\n\nAs has been noted, you seem to be presupposing a remarkably complex\nsolution to resolve a problem which is likely to be better handled via\nrunning VACUUM rather more frequently.\n-- \noutput = reverse(\"ofni.sesabatadxunil\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nRules of the Evil Overlord #181. \"I will decree that all hay be\nshipped in tightly-packed bales. Any wagonload of loose hay attempting\nto pass through a checkpoint will be set on fire.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Mon, 28 Apr 2008 17:43:57 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Basically we have some background process which updates \"table1\" and we\ndon't want the application to make any changes to \"table1\" while vacuum.\n\nVacuum requires exclusive lock on \"table1\" and if any of the background or\napplication is ON vacuum don't kick off. 
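(As the replies below point out, this premise is the root of the trouble: only VACUUM FULL takes an ACCESS EXCLUSIVE lock, while a plain VACUUM takes SHARE UPDATE EXCLUSIVE, which does not conflict with ordinary SELECT, INSERT, UPDATE or DELETE. A quick way to verify this is to run a plain VACUUM of the table in one session and, in another, check:

  SELECT mode, granted FROM pg_locks WHERE relation = 'table1'::regclass;

which should show the vacuum holding only SHARE UPDATE EXCLUSIVE, not ACCESS EXCLUSIVE.)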
Thats the reason we need to get the\nsite down.\n\n~ Gauri\n\nOn Tue, Apr 29, 2008 at 3:13 AM, Chris Browne <[email protected]> wrote:\n\n> [email protected] (\"Gauri Kanekar\") writes:\n> > We have a table \"table1\" which get insert and updates daily in high\n> > numbers, bcoz of which its size is increasing and we have to vacuum\n> > it every alternate day. Vacuuming \"table1\" take almost 30min and\n> > during that time the site is down. We need to cut down on this\n> > downtime.So thought of having a replication system, for which the\n> > replicated DB will be up during the master is getting vacuumed. Can\n> > anybody guide which will be the best suited replication solution for\n> > this.\n>\n> The only reason that it would be necessary for VACUUM to \"take the\n> site down\" would be if you are running version 7.1, which was\n> obsoleted in 2002, which, it should be noted, was SIX YEARS AGO.\n>\n> As has been noted, you seem to be presupposing a remarkably complex\n> solution to resolve a problem which is likely to be better handled via\n> running VACUUM rather more frequently.\n> --\n> output = reverse(\"ofni.sesabatadxunil\" \"@\" \"enworbbc\")\n> http://www3.sympatico.ca/cbbrowne/postgresql.html\n> Rules of the Evil Overlord #181. \"I will decree that all hay be\n> shipped in tightly-packed bales. Any wagonload of loose hay attempting\n> to pass through a checkpoint will be set on fire.\"\n> <http://www.eviloverlord.com/>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards\nGauri\n\nBasically we have some background process which updates \"table1\" and we don't want the application to make any changes to \"table1\" while vacuum.Vacuum requires exclusive lock on \"table1\" and if any of the background or application is ON vacuum don't kick off. Thats the reason we need to get the site down.\n~ GauriOn Tue, Apr 29, 2008 at 3:13 AM, Chris Browne <[email protected]> wrote:\[email protected] (\"Gauri Kanekar\") writes:\n> We have a table \"table1\" which get insert and updates daily in high\n> numbers, bcoz of which its size is increasing and we have to vacuum\n> it every alternate day. Vacuuming \"table1\" take almost 30min and\n> during that time the site is down.  We need to cut down on this\n> downtime.So thought of having a replication system, for which the\n> replicated DB will be up during the master is getting vacuumed.  Can\n> anybody guide which will be the best suited replication solution for\n> this.\n\nThe only reason that it would be necessary for VACUUM to \"take the\nsite down\" would be if you are running version 7.1, which was\nobsoleted in 2002, which, it should be noted, was SIX YEARS AGO.\n\nAs has been noted, you seem to be presupposing a remarkably complex\nsolution to resolve a problem which is likely to be better handled via\nrunning VACUUM rather more frequently.\n--\noutput = reverse(\"ofni.sesabatadxunil\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nRules  of the  Evil Overlord  #181.  \"I  will decree  that all  hay be\nshipped in tightly-packed bales. 
Any wagonload of loose hay attempting\nto pass through a checkpoint will be set on fire.\"\n<http://www.eviloverlord.com/>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 10:25:10 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, 29 Apr 2008, Gauri Kanekar wrote:\n\n> Basically we have some background process which updates \"table1\" and we\n> don't want the application to make any changes to \"table1\" while vacuum.\n> Vacuum requires exclusive lock on \"table1\" and if any of the background or\n> application is ON vacuum don't kick off.\n\nVACUUM FULL needs an exclusive lock, the regular one does not in 8.1. \nIt's one of the reasons FULL should be avoided. If you do regular VACUUM \nfrequently enough, you shouldn't ever need to do a FULL one anyway.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 29 Apr 2008 01:08:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "But unless we do full vacuum the space is not recovered. Thats y we prefer\nfull vacuum.\n\n~ Gauri\n\nOn Tue, Apr 29, 2008 at 10:38 AM, Greg Smith <[email protected]> wrote:\n\n> On Tue, 29 Apr 2008, Gauri Kanekar wrote:\n>\n> Basically we have some background process which updates \"table1\" and we\n> > don't want the application to make any changes to \"table1\" while vacuum.\n> > Vacuum requires exclusive lock on \"table1\" and if any of the background\n> > or\n> > application is ON vacuum don't kick off.\n> >\n>\n> VACUUM FULL needs an exclusive lock, the regular one does not in 8.1. It's\n> one of the reasons FULL should be avoided. If you do regular VACUUM\n> frequently enough, you shouldn't ever need to do a FULL one anyway.\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\n\n\n-- \nRegards\nGauri\n\nBut unless we do full vacuum the space is not recovered. Thats y we prefer full vacuum.~ GauriOn Tue, Apr 29, 2008 at 10:38 AM, Greg Smith <[email protected]> wrote:\nOn Tue, 29 Apr 2008, Gauri Kanekar wrote:\n\n\nBasically we have some background process which updates \"table1\" and we\ndon't want the application to make any changes to \"table1\" while vacuum.\nVacuum requires exclusive lock on \"table1\" and if any of the background or\napplication is ON vacuum don't kick off.\n\n\nVACUUM FULL needs an exclusive lock, the regular one does not in 8.1. It's one of the reasons FULL should be avoided.  If you do regular VACUUM frequently enough, you shouldn't ever need to do a FULL one anyway.\n\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 10:41:33 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\"Gauri Kanekar\" <[email protected]> writes:\n> Vacuum requires exclusive lock on \"table1\" and if any of the background or\n> application is ON vacuum don't kick off. Thats the reason we need to get the\n> site down.\n\nAs has been pointed out to you repeatedly, \"vacuum\" hasn't required\nexclusive lock since the stone age. 
If you are actually running a PG\nversion in which plain \"vacuum\" takes exclusive lock, then no amount\nof replication will save you --- in particular, because no currently\nsupported replication solution even works with PG servers that old.\nOtherwise, the answer is not so much \"replicate\" as \"stop using\nvacuum full, and instead adopt a modern vacuuming strategy\".\n\nI am not sure how much more clear we can make this to you.\nReplication isn't going to solve your vacuum mismanagement problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Apr 2008 01:20:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem " }, { "msg_contents": "Andrew,\n\nCan you explain me in detail why u said vacuum full is making the things\nworst.\nWe do vacuum full, as vacuum verbose analyse dont regain space for us.\n\n~ Gauri\n\nOn Mon, Apr 28, 2008 at 9:52 PM, Andrew Sullivan <[email protected]>\nwrote:\n\n> On Mon, Apr 28, 2008 at 07:35:37PM +0530, Gauri Kanekar wrote:\n> > Peter,\n> >\n> > We are doing vacuum full every alternate day. We also do vacuum analyze\n> very\n> > often.\n>\n> VACUUM FULL is making your problem worse, not better. Don't do that.\n>\n> > We are currently using 8.1.3 version.\n>\n> You need immediately to upgrade to the latest 8.1 stability and\n> security release, which is 8.1.11. This is a drop-in replacement.\n> It's an urgent fix for your case.\n>\n> > Auto vacuum is already on. But the table1 is so busy that auto vacuum\n> don't\n> > get sufficient chance to vacuum it :(.\n>\n> You probably need to tune autovacuum not to do that table, and just\n> vacuum that table in a constant loop or something. VACUUM should\n> _never_ \"take the site down\". If it does, you're doing it wrong.\n>\n> > Have already tried all the option listed by you, thats y we reached to\n> the\n> > decision of having a replication sytsem. So any suggestion on that :).\n>\n> I think you will find that no replication system will solve your\n> underlying problems. That said, I happen to work for a company that\n> will sell you a replication system to work with 8.1 if you really want\n> it.\n>\n> A\n>\n>\n> --\n> Andrew Sullivan\n> [email protected]\n> +1 503 667 4564 x104\n> http://www.commandprompt.com/\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards\nGauri\n\nAndrew,Can you explain me in detail why u said vacuum full is making the things worst.We do vacuum full, as vacuum verbose analyse dont regain space for us.~ GauriOn Mon, Apr 28, 2008 at 9:52 PM, Andrew Sullivan <[email protected]> wrote:\nOn Mon, Apr 28, 2008 at 07:35:37PM +0530, Gauri Kanekar wrote:\n> Peter,\n>\n> We are doing vacuum full every alternate day. We also do vacuum analyze very\n> often.\n\nVACUUM FULL is making your problem worse, not better.  Don't do that.\n\n> We are currently using 8.1.3 version.\n\nYou need immediately to upgrade to the latest 8.1 stability and\nsecurity release, which is 8.1.11.  This is a drop-in replacement.\nIt's an urgent fix for your case.\n\n> Auto vacuum is already on. But the table1 is so busy that auto vacuum don't\n> get sufficient chance to vacuum it :(.\n\nYou probably need to tune autovacuum not to do that table, and just\nvacuum that table in a constant loop or something.  VACUUM should\n_never_ \"take the site down\".  
If it does, you're doing it wrong.\n\n> Have already tried all the option listed by you, thats y we reached to the\n> decision of having a replication sytsem. So any suggestion on that :).\n\nI think you will find that no replication system will solve your\nunderlying problems.  That said, I happen to work for a company that\nwill sell you a replication system to work with 8.1 if you really want\nit.\n\nA\n\n\n--\nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 11:16:57 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, Apr 29, 2008 at 10:41 AM, Gauri Kanekar\n<[email protected]> wrote:\n> But unless we do full vacuum the space is not recovered. Thats y we prefer\n> full vacuum.\n\nThere is no point in recovering the space by moving tuples and\ntruncating the relation (that's what VACUUM FULL does) because you are\ndoing frequent updates on the table and that would again extend the\nrelation. If you run plain VACUUM, that would recover dead space and\nupdate the free space maps. It may not be able to reduce the table\nsize, but you should not be bothered much about it because the\nfollowing updates/inserts will fill in the fragmented free space.\n\nYou may want to check your FSM settings as well to make sure that you\nare tracking free space properly.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 29 Apr 2008 11:17:05 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, Apr 29, 2008 at 11:16 AM, Gauri Kanekar\n<[email protected]> wrote:\n> Andrew,\n>\n> Can you explain me in detail why u said vacuum full is making the things\n> worst.\n\n1. VACUUM FULL takes exclusive lock on the table. That makes table\nunavailable for read/writes.\n\n2. VACUUM FULL moves live tuples around. When a tuple is moved, the\nold index entry is deleted and a new index entry is inserted. This\ncauses index bloats which are hard to recover.\n\n\n> We do vacuum full, as vacuum verbose analyse dont regain space for us.\n>\n\nAs I mentioned in the other reply, you are not gaining much by\nregaining space. The subsequent UPDATEs/INSERTs will quickly extend\nthe relation and you loose all the work done by VACUUM FULL. Plain\nVACUUM will update FSM to track the free space scattered across the\nrelation which is later reused by updates/inserts.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 29 Apr 2008 11:25:27 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, 29 Apr 2008, Gauri Kanekar wrote:\n\n> We do vacuum full, as vacuum verbose analyse dont regain space for us.\n\nAh, now we're getting to the root of your problem here. You expect that \nVACUUM should reclaim space.\n\nWhenever you UPDATE a row, it writes a new one out, then switches to use \nthat version. This leaves behind the original. 
Those now unused rows are \nwhat VACUUM gathers, but it doesn't give that space back to the operating \nsystem.\n\nThe model here assumes that you'll need that space again for the next time \nyou UPDATE or INSERT a row. So instead VACUUM just keeps those available \nfor database reuse rather than returning it to the operating system.\n\nNow, if you don't VACUUM frequently enough, this model breaks down, and \nthe table can get bigger with space that may never get reused. The idea \nis that you should be VACUUMing up now unneeded rows at about the same \nrate they're being re-used. When you don't keep up, the database can \nexpand in space that you don't get back again. The right answer to this \nproblem is not to use VACUUM FULL; it's to use regular VACUUM more often.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 29 Apr 2008 04:37:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "From most of the reply found that upgrade to higher version of postgres may\nbe to 8.3.1 may be one of the solution to tackle this problem\n\nChecked about HOT feature in 8.3.1.\n\nDo we need to do any special config changes or any other setting for HOT to\nwork??\n\nAny special guideline to follow to make HOT working??\n\n~ Gauri\n\nOn Tue, Apr 29, 2008 at 2:07 PM, Greg Smith <[email protected]> wrote:\n\n> On Tue, 29 Apr 2008, Gauri Kanekar wrote:\n>\n> We do vacuum full, as vacuum verbose analyse dont regain space for us.\n> >\n>\n> Ah, now we're getting to the root of your problem here. You expect that\n> VACUUM should reclaim space.\n>\n> Whenever you UPDATE a row, it writes a new one out, then switches to use\n> that version. This leaves behind the original. Those now unused rows are\n> what VACUUM gathers, but it doesn't give that space back to the operating\n> system.\n>\n> The model here assumes that you'll need that space again for the next time\n> you UPDATE or INSERT a row. So instead VACUUM just keeps those available\n> for database reuse rather than returning it to the operating system.\n>\n> Now, if you don't VACUUM frequently enough, this model breaks down, and\n> the table can get bigger with space that may never get reused. The idea is\n> that you should be VACUUMing up now unneeded rows at about the same rate\n> they're being re-used. When you don't keep up, the database can expand in\n> space that you don't get back again. The right answer to this problem is\n> not to use VACUUM FULL; it's to use regular VACUUM more often.\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\n\n\n-- \nRegards\nGauri\n\nFrom most of the reply found that upgrade to higher version of postgres  may be to 8.3.1 may be one of the solution to tackle this problemChecked about HOT feature in 8.3.1.Do we need to do any special config changes or any other setting for HOT to work??\nAny special guideline to follow to make HOT working??~ GauriOn Tue, Apr 29, 2008 at 2:07 PM, Greg Smith <[email protected]> wrote:\nOn Tue, 29 Apr 2008, Gauri Kanekar wrote:\n\n\nWe do vacuum full, as vacuum verbose analyse dont regain space for us.\n\n\nAh, now we're getting to the root of your problem here.  You expect that VACUUM should reclaim space.\n\nWhenever you UPDATE a row, it writes a new one out, then switches to use that version.  This leaves behind the original.  
Those now unused rows are what VACUUM gathers, but it doesn't give that space back to the operating system.\n\nThe model here assumes that you'll need that space again for the next time you UPDATE or INSERT a row.  So instead VACUUM just keeps those available for database reuse rather than returning it to the operating system.\n\nNow, if you don't VACUUM frequently enough, this model breaks down, and the table can get bigger with space that may never get reused.  The idea is that you should be VACUUMing up now unneeded rows at about the same rate they're being re-used.  When you don't keep up, the database can expand in space that you don't get back again.  The right answer to this problem is not to use VACUUM FULL; it's to use regular VACUUM more often.\n\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 16:35:40 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Gauri Kanekar wrote:\n> Andrew,\n> \n> Can you explain me in detail why u said vacuum full is making the things\n> worst.\n> We do vacuum full, as vacuum verbose analyse dont regain space for us.\n> \n\nvacuum full stops all access so that the data files can be re-writen \nwithout the unused space.\n\nnormal vacuum will update the records of what space is no longer used so \nthat it can then be reused with the next update/insert. Your db size \nwill not shrink straight away but it will stop growing until you use all \nthe free space left from previous update/delete\n\nThe more frequently you do a normal vacuum the less time it will take \nand things will run a lot smoother with your file size growing slowly to \naccommodate new data.\n\nExpanding on what others have mentioned as a drawback of vacuum full - \nyou should look at REINDEX'ing as well (maybe one index or table at a \ntime). You will most likely find this will reclaim some disk space for \nyou as well.\n\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Tue, 29 Apr 2008 20:49:38 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, Apr 29, 2008 at 4:35 PM, Gauri Kanekar\n<[email protected]> wrote:\n\n>\n> Do we need to do any special config changes or any other setting for HOT to\n> work??\n\nNo. HOT is enabled by default, on all tables. There is no way and need\nto disable it.\n\n>\n> Any special guideline to follow to make HOT working??\n>\n\nYou can do couple of things to benefit from HOT.\n\n1. HOT addresses a special, but common case where UPDATE operation\ndoes not change any of the index keys. So check if your UPDATE changes\nany of the index keys. If so, see if you can avoid having index\ninvolving that column. Of course, I won't advocate dropping an index\nif it would drastically impact your frequently run queries.\n\n2. You may leave some free space in the heap (fillfactor less than\n100). My recommendation would be to leave space worth of one row or\nslightly more than that to let first UPDATE be an HOT update.\nSubsequent UPDATEs in the page may reuse the dead row created by\nearlier UPDATEs.\n\n3. 
Avoid any long running transactions.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 29 Apr 2008 16:55:38 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Thanx for the help.\n\nNeed some more help.\n\n\"table1\" has two indices\nunique indx1 = \"pkfld\"\nunique indx2 = \"fkfld1,fkfld2\"\n\ndid following steps in the listed order -\n\n1. vacuumed the whole DB\n2. \"table1\"\n RecCnt ==> 11970789\n Size ==> 2702.41 MB\n3.update \"table1\" set fld7 = 1000 where fld1/1000000 = 999 ;\n this UPDATED 1230307 records\n4. checked \"table1\" size again\n Reccnt => 11970789\n Size ==> 2996.57MB\n5. Again did the update, update \"table1\" set fld7 = 1000 where fld1/1000000\n= 999 ;\n this UPDATED 1230307 records\n6. Got \"table1\" size as\n RecCnt ==> 11970789\n Size ==> 3290.64\n7. Updated again, update \"table1\" set fld7 = 1000 where fld1/1000000 = 999 ;\n this UPDATED 1230307 records\n6. \"table1\" size as\n RecCnt ==> 11970789\n Size ==> 3584.66\n\nFound that the size increased gradually. Is HOT working over here ??\nGuide me if im doing something wrong.\n\n~ Gauri\n\nOn Tue, Apr 29, 2008 at 4:55 PM, Pavan Deolasee <[email protected]>\nwrote:\n\n> On Tue, Apr 29, 2008 at 4:35 PM, Gauri Kanekar\n> <[email protected]> wrote:\n>\n> >\n> > Do we need to do any special config changes or any other setting for HOT\n> to\n> > work??\n>\n> No. HOT is enabled by default, on all tables. There is no way and need\n> to disable it.\n>\n> >\n> > Any special guideline to follow to make HOT working??\n> >\n>\n> You can do couple of things to benefit from HOT.\n>\n> 1. HOT addresses a special, but common case where UPDATE operation\n> does not change any of the index keys. So check if your UPDATE changes\n> any of the index keys. If so, see if you can avoid having index\n> involving that column. Of course, I won't advocate dropping an index\n> if it would drastically impact your frequently run queries.\n>\n> 2. You may leave some free space in the heap (fillfactor less than\n> 100). My recommendation would be to leave space worth of one row or\n> slightly more than that to let first UPDATE be an HOT update.\n> Subsequent UPDATEs in the page may reuse the dead row created by\n> earlier UPDATEs.\n>\n> 3. Avoid any long running transactions.\n>\n> Thanks,\n> Pavan\n>\n> --\n> Pavan Deolasee\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nRegards\nGauri\n\nThanx for the help. Need some more help.\"table1\" has two indicesunique indx1 = \"pkfld\"unique indx2 = \"fkfld1,fkfld2\"did following steps in the listed order -\n1. vacuumed the whole DB2. \"table1\"        RecCnt ==> 11970789       Size ==> 2702.41 MB3.update \"table1\" set fld7 = 1000 where fld1/1000000 = 999 ;    this UPDATED 1230307 records\n4. checked \"table1\" size again     Reccnt =>   11970789     Size ==> 2996.57MB5. Again did the update, update \"table1\" set fld7 = 1000 where fld1/1000000 = 999 ;    this UPDATED 1230307 records\n6. Got \"table1\" size as    RecCnt ==> 11970789    Size ==> 3290.647. Updated again, update \"table1\" set fld7 = 1000 where fld1/1000000 = 999 ;\n    this UPDATED 1230307 records\n6. \"table1\" size as\n    RecCnt ==> 11970789\n    Size ==> 3584.66Found that the size increased gradually. 
Is HOT working over here ??Guide me if im doing something wrong.~ GauriOn Tue, Apr 29, 2008 at 4:55 PM, Pavan Deolasee <[email protected]> wrote:\nOn Tue, Apr 29, 2008 at 4:35 PM, Gauri Kanekar\n<[email protected]> wrote:\n\n>\n> Do we need to do any special config changes or any other setting for HOT to\n> work??\n\nNo. HOT is enabled by default, on all tables. There is no way and need\nto disable it.\n\n>\n> Any special guideline to follow to make HOT working??\n>\n\nYou can do couple of things to benefit from HOT.\n\n1. HOT addresses a special, but common case where UPDATE operation\ndoes not change any of the index keys. So check if your UPDATE changes\nany of the index keys. If so, see if you can avoid having index\ninvolving that column. Of course, I won't advocate dropping an index\nif it would drastically impact your frequently run queries.\n\n2. You may leave some free space in the heap (fillfactor less than\n100). My recommendation would be to leave space worth of one row or\nslightly more than that to let first UPDATE be an HOT update.\nSubsequent UPDATEs in the page may reuse the dead row created by\nearlier UPDATEs.\n\n3. Avoid any long running transactions.\n\nThanks,\nPavan\n\n--\nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 18:29:43 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Gauri Kanekar escribi�:\n\n> Do we need to do any special config changes or any other setting for HOT to\n> work??\n\nNo. HOT is always working, if it can. You don't need to configure it.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 29 Apr 2008 09:02:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Gauri Kanekar escribi�:\n\n> Found that the size increased gradually. Is HOT working over here ??\n> Guide me if im doing something wrong.\n\nProbably not. Try vacuuming between the updates.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 29 Apr 2008 09:03:14 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, Apr 29, 2008 at 6:29 PM, Gauri Kanekar\n<[email protected]> wrote:\n>\n>\n> Found that the size increased gradually. Is HOT working over here ??\n> Guide me if im doing something wrong.\n>\n\nYou have chosen a bad case for HOT. Since you are repeatedly updating\nthe same set of rows, the dead space created in the first step is the\nblocks which are not touched in the subsequent updates. Is this a real\nscenario or are you just testing ? 
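Alvaro's suggestion of vacuuming between the updates can be folded directly into the test above; a minimal sketch, using the column names from the test and plain (non-FULL) vacuum, looks like:

  UPDATE table1 SET fld7 = 1000 WHERE fld1/1000000 = 999;
  VACUUM table1;   -- marks the old row versions as reusable free space
  UPDATE table1 SET fld7 = 1000 WHERE fld1/1000000 = 999;
  VACUUM table1;

With the dead space recycled in between, the second and later updates can reuse pages left behind by the earlier ones instead of extending the table.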
If its just for testing, I would\nsuggest updating different sets of rows in each step and then check.\n\nThanks,\nPavan\n\n\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 29 Apr 2008 18:39:39 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Thats how our updates works.\nWe usually tend to touch the same row many times a day.\n\n~ Gauri\n\nOn Tue, Apr 29, 2008 at 6:39 PM, Pavan Deolasee <[email protected]>\nwrote:\n\n> On Tue, Apr 29, 2008 at 6:29 PM, Gauri Kanekar\n> <[email protected]> wrote:\n> >\n> >\n> > Found that the size increased gradually. Is HOT working over here ??\n> > Guide me if im doing something wrong.\n> >\n>\n> You have chosen a bad case for HOT. Since you are repeatedly updating\n> the same set of rows, the dead space created in the first step is the\n> blocks which are not touched in the subsequent updates. Is this a real\n> scenario or are you just testing ? If its just for testing, I would\n> suggest updating different sets of rows in each step and then check.\n>\n> Thanks,\n> Pavan\n>\n>\n>\n> --\n> Pavan Deolasee\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nRegards\nGauri\n\nThats how our updates works.We usually tend to touch the same row many times a day. ~ GauriOn Tue, Apr 29, 2008 at 6:39 PM, Pavan Deolasee <[email protected]> wrote:\nOn Tue, Apr 29, 2008 at 6:29 PM, Gauri Kanekar\n<[email protected]> wrote:\n>\n>\n> Found that the size increased gradually. Is HOT working over here ??\n> Guide me if im doing something wrong.\n>\n\nYou have chosen a bad case for HOT. Since you are repeatedly updating\nthe same set of rows, the dead space created in the first step is the\nblocks which are not touched in the subsequent updates. Is this a real\nscenario or are you just testing ? If its just for testing, I would\nsuggest updating different sets of rows in each step and then check.\n\nThanks,\nPavan\n\n\n\n--\nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n-- RegardsGauri", "msg_date": "Tue, 29 Apr 2008 18:42:40 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Tue, Apr 29, 2008 at 6:42 PM, Gauri Kanekar\n<[email protected]> wrote:\n> Thats how our updates works.\n> We usually tend to touch the same row many times a day.\n>\n\nThen start with a non-100 fillfactor. I would suggest something like\n80 and then adjust based on the testing. Since you are anyways have a\nupdate intensive setup, leaving free space in the heap won't harm you\nmuch in the long term.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 29 Apr 2008 18:46:00 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\"Pavan Deolasee\" <[email protected]> writes:\n\n>> Any special guideline to follow to make HOT working??\n>>\n>\n> You can do couple of things to benefit from HOT.\n>\n> 1. HOT addresses a special, but common case where UPDATE operation\n> does not change any of the index keys. So check if your UPDATE changes\n> any of the index keys. If so, see if you can avoid having index\n> involving that column. Of course, I won't advocate dropping an index\n> if it would drastically impact your frequently run queries.\n>\n> 2. You may leave some free space in the heap (fillfactor less than\n> 100). 
My recommendation would be to leave space worth of one row or\n> slightly more than that to let first UPDATE be an HOT update.\n> Subsequent UPDATEs in the page may reuse the dead row created by\n> earlier UPDATEs.\n>\n> 3. Avoid any long running transactions.\n\nPerhaps we should put this list in the FAQ.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Tue, 29 Apr 2008 09:48:31 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> The model here assumes that you'll need that space again for the next time \n> you UPDATE or INSERT a row. So instead VACUUM just keeps those available \n> for database reuse rather than returning it to the operating system.\n\n> Now, if you don't VACUUM frequently enough, this model breaks down, and \n> the table can get bigger with space that may never get reused. The idea \n> is that you should be VACUUMing up now unneeded rows at about the same \n> rate they're being re-used. When you don't keep up, the database can \n> expand in space that you don't get back again. The right answer to this \n> problem is not to use VACUUM FULL; it's to use regular VACUUM more often.\n\nAlso, you need to make sure you have the FSM parameters set high enough\nso that all the free space found by a VACUUM run can be remembered.\n\nThe less often you run VACUUM, the more FSM space you need, because\nthere'll be more free space reclaimed per run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Apr 2008 10:16:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem " }, { "msg_contents": "[email protected] (\"Gauri Kanekar\") writes:\n> Basically we have some background process which updates \"table1\" and\n> we don't want the application to make any changes to \"table1\" while\n> vacuum. Vacuum requires exclusive lock on \"table1\" and if any of\n> the background or application is ON vacuum don't kick off. Thats the\n> reason we need to get the site down.\n\nVACUUM has not required an exclusive lock on tables since version 7.1.\n\nWhat version of PostgreSQL are you running?\n-- \noutput = (\"cbbrowne\" \"@\" \"acm.org\")\nhttp://linuxdatabases.info/info/sap.html\nRules of the Evil Overlord #192. \"If I appoint someone as my consort,\nI will not subsequently inform her that she is being replaced by a\nyounger, more attractive woman. <http://www.eviloverlord.com/>\n", "msg_date": "Tue, 29 Apr 2008 10:48:28 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\nOn Apr 29, 2008, at 10:16 AM, Tom Lane wrote:\n\n> Greg Smith <[email protected]> writes:\n>> The model here assumes that you'll need that space again for the \n>> next time\n>> you UPDATE or INSERT a row. So instead VACUUM just keeps those \n>> available\n>> for database reuse rather than returning it to the operating system.\n[ ... ]\n> Also, you need to make sure you have the FSM parameters set high \n> enough\n> so that all the free space found by a VACUUM run can be remembered.\n>\n> The less often you run VACUUM, the more FSM space you need, because\n> there'll be more free space reclaimed per run.\n\nI can actually watch one of our applications slow down once the free \nspace in the table is used up. 
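To act on the FSM advice above, the current limits can be checked with SHOW:

  SHOW max_fsm_pages;
  SHOW max_fsm_relations;

and raised in postgresql.conf, for example (values are only illustrative, and changing them needs a server restart):

  max_fsm_pages = 2000000
  max_fsm_relations = 2000

The last few lines of a database-wide VACUUM VERBOSE report how many page slots are actually needed, which is the number to compare against max_fsm_pages.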
Extending the data file seems to be \nmuch more expensive than using the free space found in existing pages \nof the file.\n\n", "msg_date": "Tue, 29 Apr 2008 11:00:57 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem " }, { "msg_contents": "Alvaro Herrera wrote:\n> Gauri Kanekar escribi�:\n> \n>> Do we need to do any special config changes or any other setting for HOT to\n>> work??\n> \n> No. HOT is always working, if it can. You don't need to configure it.\n> \n\nUnless you have upgraded since you started this thread you are still \nrunning 8.1.3.\n\nHOT is only available in 8.3 and 8.3.1\n\nYou DO need to upgrade to get the benefits of HOT\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Wed, 30 Apr 2008 02:18:10 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "HOT doesn't seems to be working in our case.\n\nThis is \"table1\" structure :\n id integer not null\n code integer not null\n crid integer not null\n status character varying(1) default 'A'::character varying\n delta1 bigint default 0\n delta2 bigint default 0\n delta3 bigint default 0\n delta4 bigint default 0\n tz_id integer default 0\nIndexes:\n \"idx1\" PRIMARY KEY, btree (id)\n \"idx2\" UNIQUE, btree (code, crid)\n \"idx3\" btree (tz_id)\n \"idx4\" btree (status)\n\ncode as crid are foreign key.\n\nHere delta* fields get updated through out the day. and most of the time it\nmay update the same row again n again.\n\ntable1 contains around 12843694 records.\n\nNow not understanding y HOT don't work in our case.\n\nChanged fillfactor to 80, 75,70.... but nothing seems to work.\n\n~Gauri\nOn Tue, Apr 29, 2008 at 10:18 PM, Shane Ambler <[email protected]> wrote:\n\n> Alvaro Herrera wrote:\n>\n> > Gauri Kanekar escribió:\n> >\n> > Do we need to do any special config changes or any other setting for\n> > > HOT to\n> > > work??\n> > >\n> >\n> > No. HOT is always working, if it can. You don't need to configure it.\n> >\n> >\n> Unless you have upgraded since you started this thread you are still\n> running 8.1.3.\n>\n> HOT is only available in 8.3 and 8.3.1\n>\n> You DO need to upgrade to get the benefits of HOT\n>\n>\n>\n>\n> --\n>\n> Shane Ambler\n> pgSQL (at) Sheeky (dot) Biz\n>\n> Get Sheeky @ http://Sheeky.Biz\n>\n\n\n\n-- \nRegards\nGauri\n\nHOT doesn't seems to be working in our case. This is \"table1\" structure : id        integer    not null code        integer    not null crid        integer    not null status        character varying(1)    default 'A'::character varying\n delta1        bigint    default 0 delta2        bigint    default 0 delta3        bigint    default 0 delta4        bigint    default 0 tz_id        integer    default 0Indexes:    \"idx1\" PRIMARY KEY, btree (id)\n    \"idx2\" UNIQUE, btree (code, crid)    \"idx3\" btree (tz_id)    \"idx4\" btree (status)code as crid are foreign key.Here delta* fields get updated through out the day. and most of the time it may update the same row again n again.\ntable1 contains around 12843694 records.Now not understanding y HOT don't work in our case.Changed fillfactor to 80, 75,70.... but nothing seems to work.~GauriOn Tue, Apr 29, 2008 at 10:18 PM, Shane Ambler <[email protected]> wrote:\nAlvaro Herrera wrote:\n\nGauri Kanekar escribió:\n\n\nDo we need to do any special config changes or any other setting for HOT to\nwork??\n\n\nNo.  
HOT is always working, if it can.  You don't need to configure it.\n\n\n\nUnless you have upgraded since you started this thread you are still running 8.1.3.\n\nHOT is only available in 8.3 and 8.3.1\n\nYou DO need to upgrade to get the benefits of HOT\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n-- RegardsGauri", "msg_date": "Wed, 30 Apr 2008 10:59:53 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Wed, Apr 30, 2008 at 10:59 AM, Gauri Kanekar\n<[email protected]> wrote:\n> HOT doesn't seems to be working in our case.\n>\n\nCan you please post output of the following query ?\n\nSELECT relid, relname, n_tup_ins, n_tup_upd, n_tup_hot_upd, n_dead_tup\nfrom pg_stat_user_tables WHERE relname = 'table1';\n\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 11:07:35 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "relid | relname | n_tup_ins | n_tup_upd | n_tup_hot_upd | n_dead_tup\n-------+----------------+-----------+-----------+---------------+------------\n 16461 | table1 | 0 | 8352496 | 5389 | 8351242\n\n\nOn Wed, Apr 30, 2008 at 11:07 AM, Pavan Deolasee <[email protected]>\nwrote:\n\n> On Wed, Apr 30, 2008 at 10:59 AM, Gauri Kanekar\n> <[email protected]> wrote:\n> > HOT doesn't seems to be working in our case.\n> >\n>\n> Can you please post output of the following query ?\n>\n> SELECT relid, relname, n_tup_ins, n_tup_upd, n_tup_hot_upd, n_dead_tup\n> from pg_stat_user_tables WHERE relname = 'table1';\n>\n>\n> Thanks,\n> Pavan\n>\n> --\n> Pavan Deolasee\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nRegards\nGauri\n\n relid |    relname     | n_tup_ins | n_tup_upd | n_tup_hot_upd | n_dead_tup-------+----------------+-----------+-----------+---------------+------------ 16461 | table1 |         0 |   8352496 |          5389 |    8351242\nOn Wed, Apr 30, 2008 at 11:07 AM, Pavan Deolasee <[email protected]> wrote:\nOn Wed, Apr 30, 2008 at 10:59 AM, Gauri Kanekar\n<[email protected]> wrote:\n> HOT doesn't seems to be working in our case.\n>\n\nCan you please post output of the following query ?\n\nSELECT relid, relname, n_tup_ins, n_tup_upd, n_tup_hot_upd, n_dead_tup\nfrom pg_stat_user_tables WHERE relname = 'table1';\n\n\nThanks,\nPavan\n\n--\nPavan Deolasee\nEnterpriseDB     http://www.enterprisedb.com\n-- RegardsGauri", "msg_date": "Wed, 30 Apr 2008 11:09:56 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Wed, Apr 30, 2008 at 11:09 AM, Gauri Kanekar\n<[email protected]> wrote:\n> relid | relname | n_tup_ins | n_tup_upd | n_tup_hot_upd | n_dead_tup\n> -------+----------------+-----------+-----------+---------------+------------\n> 16461 | table1 | 0 | 8352496 | 5389 | 8351242\n>\n\nHmm.. So indeed there are very few HOT updates. What is the fillfactor\nyou are using for these tests ? If its much less than 100, the very\nlow percentage of HOT updates would make me guess that you are\nupdating one of the index columns. 
Otherwise at least the initial\nupdates until you fill up the free space should be HOT.\n\nThanks,\nPavan\n\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 12:13:11 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\"table1\" structure :\n id integer not null\n code integer not null\n crid integer not null\n status character varying(1) default 'A'::character varying\n delta1 bigint default 0\n delta2 bigint default 0\n delta3 bigint default 0\n delta4 bigint default 0\n tz_id integer default 0\nIndexes:\n \"idx1\" PRIMARY KEY, btree (id)\n \"idx2\" UNIQUE, btree (code, crid)\n \"idx3\" btree (tz_id)\n \"idx4\" btree (status)\n\ncode as crid are foreign key.\n\nupdate table1 set delta1 = 100 where code/1000000 =999;\n\n\nOn Wed, Apr 30, 2008 at 12:16 PM, Gauri Kanekar <[email protected]>\nwrote:\n\n> fillfactor is set to 80 as you suggested.\n> delta* fields r updated and these fields are no where related to any of\n> the index fields.\n>\n>\n>\n> On Wed, Apr 30, 2008 at 12:13 PM, Pavan Deolasee <[email protected]>\n> wrote:\n>\n> > On Wed, Apr 30, 2008 at 11:09 AM, Gauri Kanekar\n> > <[email protected]> wrote:\n> > > relid | relname | n_tup_ins | n_tup_upd | n_tup_hot_upd |\n> > n_dead_tup\n> > >\n> > -------+----------------+-----------+-----------+---------------+------------\n> > > 16461 | table1 | 0 | 8352496 | 5389 | 8351242\n> > >\n> >\n> > Hmm.. So indeed there are very few HOT updates. What is the fillfactor\n> > you are using for these tests ? If its much less than 100, the very\n> > low percentage of HOT updates would make me guess that you are\n> > updating one of the index columns. Otherwise at least the initial\n> > updates until you fill up the free space should be HOT.\n> >\n> > Thanks,\n> > Pavan\n> >\n> >\n> > --\n> > Pavan Deolasee\n> > EnterpriseDB http://www.enterprisedb.com\n> >\n>\n>\n>\n> --\n> Regards\n> Gauri\n\n\n\n\n-- \nRegards\nGauri\n\n\"table1\" structure : id        integer    not null code        integer    not null crid        integer    not null status        character varying(1)    default 'A'::character varying\n delta1        bigint    default 0 delta2        bigint    default 0 delta3        bigint    default 0 delta4        bigint    default 0 tz_id        integer    default 0Indexes:    \"idx1\" PRIMARY KEY, btree (id)\n\n    \"idx2\" UNIQUE, btree (code, crid)    \"idx3\" btree (tz_id)    \"idx4\" btree (status)code as crid are foreign key.update table1 set delta1 = 100 where code/1000000 =999;\nOn Wed, Apr 30, 2008 at 12:16 PM, Gauri Kanekar <[email protected]> wrote:\nfillfactor is set to 80 as you suggested.delta* fields r updated and these fields are no where related to any of the index fields. On Wed, Apr 30, 2008 at 12:13 PM, Pavan Deolasee <[email protected]> wrote:\nOn Wed, Apr 30, 2008 at 11:09 AM, Gauri Kanekar\n<[email protected]> wrote:\n>  relid |    relname     | n_tup_ins | n_tup_upd | n_tup_hot_upd | n_dead_tup\n> -------+----------------+-----------+-----------+---------------+------------\n>  16461 | table1 |         0 |   8352496 |          5389 |    8351242\n>\n\nHmm.. So indeed there are very few HOT updates. What is the fillfactor\nyou are using for these tests ? If its much less than 100, the very\nlow percentage of HOT updates would make me guess that you are\nupdating one of the index columns. 
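One quick way to rule out Pavan's guess, i.e. to confirm that none of the updated delta* columns appear in any index, is to pull the index definitions straight from the catalog (table name from the thread):

  SELECT indexrelid::regclass AS index_name, pg_get_indexdef(indexrelid) AS definition
  FROM pg_index
  WHERE indrelid = 'table1'::regclass;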
Otherwise at least the initial\nupdates until you fill up the free space should be HOT.\n\nThanks,\nPavan\n\n\n--\nPavan Deolasee\nEnterpriseDB     http://www.enterprisedb.com\n-- RegardsGauri\n-- RegardsGauri", "msg_date": "Wed, 30 Apr 2008 12:19:04 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "On Wed, Apr 30, 2008 at 12:16 PM, Gauri Kanekar\n<[email protected]> wrote:\n> fillfactor is set to 80 as you suggested.\n> delta* fields r updated and these fields are no where related to any of the\n> index fields.\n>\n\nThat's weird. With that fillfactor, you should have a very high\npercentage of HOT update ratio. It could be a very special case that\nwe might be looking at. I think a self contained test case or a very\ndetail explanation of the exact usage is what we need to explain this\nbehavior. You may also try dropping non-critical indexes and test\nagain.\n\nBtw, I haven't been able to reproduce this at my end. With the given\nindexes and kind of updates, I get very high percentage of HOT\nupdates.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 12:55:30 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Gauri Kanekar wrote:\n> HOT doesn't seems to be working in our case.\n> \n> This is \"table1\" structure :\n> id integer not null\n> code integer not null\n> crid integer not null\n> status character varying(1) default 'A'::character varying\n> delta1 bigint default 0\n> delta2 bigint default 0\n> delta3 bigint default 0\n> delta4 bigint default 0\n> tz_id integer default 0\n> Indexes:\n> \"idx1\" PRIMARY KEY, btree (id)\n> \"idx2\" UNIQUE, btree (code, crid)\n> \"idx3\" btree (tz_id)\n> \"idx4\" btree (status)\n> \n> code as crid are foreign key.\n> \n> Here delta* fields get updated through out the day. and most of the time it\n> may update the same row again n again.\n> \n> table1 contains around 12843694 records.\n> \n> Now not understanding y HOT don't work in our case.\n> \n> Changed fillfactor to 80, 75,70.... but nothing seems to work.\n\nDid you dump and reload the table after setting the fill factor? It only \naffects newly inserted data.\n\nAnother possibility is that there's a long running transaction in the \nbackground, preventing HOT/vacuum from reclaiming the dead tuples.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 11:26:18 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Heikki Linnakangas wrote:\n\n> Did you dump and reload the table after setting the fill factor? It only \n> affects newly inserted data.\n\nVACUUM FULL or CLUSTER should do the job too, right? After all, they \nrecreate the table so they must take the fillfactor into account.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 30 Apr 2008 20:16:21 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Craig Ringer wrote:\n> Heikki Linnakangas wrote:\n> \n>> Did you dump and reload the table after setting the fill factor? It \n>> only affects newly inserted data.\n> \n> VACUUM FULL or CLUSTER should do the job too, right? 
After all, they \n> recreate the table so they must take the fillfactor into account.\n\nCLUSTER, yes. VACUUM FULL won't move tuples around just to make room for \nthe fillfactor.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 14:35:29 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "Please keep list in the loop.\n\nOn Wed, Apr 30, 2008 at 6:45 PM, Gauri Kanekar\n<[email protected]> wrote:\n> Hi,\n> We have recreated the indices with fillfactor set to 80, which has improved HOT\n> a little,\n\n\nWait. Did you say, you recreated the indexes with fill factor ? That's\nno help for HOT. You need to recreate the TABLEs with a fill factor.\nAnd as Heikki pointed out, you need to dump and reload, just altering\nthe table won't affect the current data.\n\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 19:06:12 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "\"Pavan Deolasee\" <[email protected]> writes:\n> That's weird. With that fillfactor, you should have a very high\n> percentage of HOT update ratio. It could be a very special case that\n> we might be looking at.\n\nHe's testing\n\n>> update table1 set delta1 = 100 where code/1000000 =999;\n\nso all the rows being updated fall into a contiguous range of \"code\"\nvalues. If the table was loaded in such a way that those rows were\nalso physically contiguous, then the updates would be localized and\nwould very soon run out of freespace on those pages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Apr 2008 10:46:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem " }, { "msg_contents": "[email protected] (Frank Ch. Eigler) writes:\n> Tom Lane <[email protected]> writes:\n>> Also, you need to make sure you have the FSM parameters set high enough\n>> so that all the free space found by a VACUUM run can be remembered.\n\n> Would it be difficult to arrange FSM parameters to be automatically\n> set from the VACUUM reclaim results?\n\nYeah, because the problem is that FSM is kept in shared memory which\ncannot be resized on-the-fly.\n\nIn retrospect, trying to keep FSM in shared memory was a spectacularly\nbad idea (one for which I take full blame). There is work afoot to\npush it out to disk so that the whole problem goes away; so I don't see\nmuch point in worrying about band-aid solutions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Apr 2008 11:02:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem " }, { "msg_contents": "On Wed, Apr 30, 2008 at 8:16 PM, Tom Lane <[email protected]> wrote:\n> \"Pavan Deolasee\" <[email protected]> writes:\n> > That's weird. With that fillfactor, you should have a very high\n> > percentage of HOT update ratio. It could be a very special case that\n> > we might be looking at.\n>\n> He's testing\n>\n\nIt's \"She\" :-)\n\nOh yes. Apologies if I sounded harsh; did not mean that. I was just\ncompletely confused why she is not seeing the HOT updates.\n\n> >> update table1 set delta1 = 100 where code/1000000 =999;\n>\n> so all the rows being updated fall into a contiguous range of \"code\"\n> values. 
If the table was loaded in such a way that those rows were\n> also physically contiguous, then the updates would be localized and\n> would very soon run out of freespace on those pages.\n>\n\nYeah, that seems like the pattern. I tested with the similar layout\nand a fill factor 80. The initial few bulk updates had comparatively\nless HOT updates (somewhere 20-25%), But within 4-5 iterations of\nupdating the same set of rows, HOT updates were 90-95%. That's because\nafter few iterations (and because of non-HOT updates) the tuples get\nscattered in various blocks, thus improving chances of HOT updates.\n\nI guess the reason probably is that she is using fill factor for\nindexes and not heap, but she hasn't yet confirmed.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 30 Apr 2008 21:45:05 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Syatem" }, { "msg_contents": "We have tried fillfactor for indices and it seems to work.\nNeed to try fillfactor for table. May for that reason the bulk update\nqueries don't get the advantage of HOT\n:)\n\n\nOn Wed, Apr 30, 2008 at 9:45 PM, Pavan Deolasee <[email protected]>\nwrote:\n\n> On Wed, Apr 30, 2008 at 8:16 PM, Tom Lane <[email protected]> wrote:\n> > \"Pavan Deolasee\" <[email protected]> writes:\n> > > That's weird. With that fillfactor, you should have a very high\n> > > percentage of HOT update ratio. It could be a very special case that\n> > > we might be looking at.\n> >\n> > He's testing\n> >\n>\n> It's \"She\" :-)\n>\n> Oh yes. Apologies if I sounded harsh; did not mean that. I was just\n> completely confused why she is not seeing the HOT updates.\n>\n> > >> update table1 set delta1 = 100 where code/1000000 =999;\n> >\n> > so all the rows being updated fall into a contiguous range of \"code\"\n> > values. If the table was loaded in such a way that those rows were\n> > also physically contiguous, then the updates would be localized and\n> > would very soon run out of freespace on those pages.\n> >\n>\n> Yeah, that seems like the pattern. I tested with the similar layout\n> and a fill factor 80. The initial few bulk updates had comparatively\n> less HOT updates (somewhere 20-25%), But within 4-5 iterations of\n> updating the same set of rows, HOT updates were 90-95%. That's because\n> after few iterations (and because of non-HOT updates) the tuples get\n> scattered in various blocks, thus improving chances of HOT updates.\n>\n> I guess the reason probably is that she is using fill factor for\n> indexes and not heap, but she hasn't yet confirmed.\n>\n> Thanks,\n> Pavan\n>\n> --\n> Pavan Deolasee\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nRegards\nGauri\n\nWe have tried fillfactor for indices and it seems to work.Need to try fillfactor for table. May for that reason the bulk update queries don't get the advantage of HOT:)On Wed, Apr 30, 2008 at 9:45 PM, Pavan Deolasee <[email protected]> wrote:\nOn Wed, Apr 30, 2008 at 8:16 PM, Tom Lane <[email protected]> wrote:\n\n> \"Pavan Deolasee\" <[email protected]> writes:\n>  > That's weird. With that fillfactor, you should have a very high\n>  > percentage of HOT update ratio. It could be a very special case that\n>  > we might be looking at.\n>\n>  He's testing\n>\n\nIt's \"She\" :-)\n\nOh yes. Apologies if I sounded harsh; did not mean that. 
I was just\ncompletely confused why she is not seeing the HOT updates.\n\n>  >> update table1 set delta1 = 100 where code/1000000 =999;\n>\n>  so all the rows being updated fall into a contiguous range of \"code\"\n>  values.  If the table was loaded in such a way that those rows were\n>  also physically contiguous, then the updates would be localized and\n>  would very soon run out of freespace on those pages.\n>\n\nYeah, that seems like the pattern. I tested with the similar layout\nand a fill factor 80. The initial few bulk updates had comparatively\nless HOT updates (somewhere 20-25%), But within 4-5 iterations of\nupdating the same set of rows, HOT updates were 90-95%. That's because\nafter few iterations (and because of non-HOT updates) the tuples get\nscattered in various blocks, thus improving chances of HOT updates.\n\nI guess the reason probably is that she is using fill factor for\nindexes and not heap, but she hasn't yet confirmed.\n\nThanks,\nPavan\n\n--\nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n-- RegardsGauri", "msg_date": "Wed, 30 Apr 2008 22:17:46 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Syatem" } ]
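A minimal sketch of the fix discussed in this thread -- reserve heap free space so updates can stay on the same page, rewrite the table so the new setting takes effect, then watch the HOT counters. It assumes the table1/idx1 names quoted above and PostgreSQL 8.3 syntax:

    -- Reserve roughly 20% free space per heap page; note this is a setting on
    -- the table itself, not on its indexes.
    ALTER TABLE table1 SET (fillfactor = 80);

    -- The fillfactor only applies to newly written pages, so rewrite the table.
    -- CLUSTER honours the heap fillfactor; VACUUM FULL (as noted above) does not.
    CLUSTER table1 USING idx1;
    ANALYZE table1;

    -- After running the update workload, confirm the updates are HOT.
    SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'table1';

As Tom Lane notes, a bulk update over a contiguous range of rows exhausts the free space on those pages quickly, so the HOT ratio typically climbs only after a few update cycles have scattered the rows across blocks.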
[ { "msg_contents": "So, it is time to improve performance, it is running to slow.\nAFAIK (as a novice) there are a few general areas:\n\n1) hardware\n2) rewriting my queries and table structures\n3) using more predefined queries\n4) tweek parameters in the db conf files\n\nOf these points:\n1) is nothing I can do about right now, but in the future perhaps.\n2) will be quite hard right now since there is more code than time.\n3) almost like 2 but perhaps more do-able with the current constraints.\n4) This seems to be the easiest one to start with...\n\nSo what should I do/read concerning point 4?\nIf you have other good suggestions I'd be very interested in that.\n\nThank you :-)\n", "msg_date": "Mon, 28 Apr 2008 17:56:27 +0200", "msg_from": "\"A B\" <[email protected]>", "msg_from_op": true, "msg_subject": "Where do a novice do to make it run faster?" }, { "msg_contents": "A B wrote:\n> So, it is time to improve performance, it is running to slow.\n> AFAIK (as a novice) there are a few general areas:\n> \n> 1) hardware\n> 2) rewriting my queries and table structures\n> 3) using more predefined queries\n> 4) tweek parameters in the db conf files\n> \n> Of these points:\n> 1) is nothing I can do about right now, but in the future perhaps.\n> 2) will be quite hard right now since there is more code than time.\n> 3) almost like 2 but perhaps more do-able with the current constraints.\n> 4) This seems to be the easiest one to start with...\n> \n> So what should I do/read concerning point 4?\n> If you have other good suggestions I'd be very interested in that.\n> \n> Thank you :-)\n> \n\n1st, change your log settings log_min_duration_statement to something \nlike 1000 (one second). This will allow you to see which statements \ntake the longest.\n\n2nd. Use EXPLAIN ANALYZE on those statements to determine what is \ntaking a long time and focus on optimizing those statements that take \nthe longest to execute.\n\nThat ought to get you a long way down the road.\n\n-Dennis\n", "msg_date": "Mon, 28 Apr 2008 10:10:35 -0600", "msg_from": "Dennis Muhlestein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster?" }, { "msg_contents": "> 1) hardware\n> 2) rewriting my queries and table structures\n> 3) using more predefined queries\n> 4) tweek parameters in the db conf files\n>\n> Of these points:\n> 1) is nothing I can do about right now, but in the future perhaps.\n> 2) will be quite hard right now since there is more code than time.\n> 3) almost like 2 but perhaps more do-able with the current constraints.\n> 4) This seems to be the easiest one to start with...\n>\n> So what should I do/read concerning point 4?\n> If you have other good suggestions I'd be very interested in that.\n>\n> Thank you :-)\n\nYou can provide information postgresql-version, what type of queries\nyou're running, some explain analyze of those, and what type of\nhardware you're running and what OS is installed.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 28 Apr 2008 18:19:30 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster?" 
}, { "msg_contents": "A B wrote:\n> So, it is time to improve performance, it is running to slow.\n> AFAIK (as a novice) there are a few general areas:\n>\n> 1) hardware\n> 2) rewriting my queries and table structures\n> 3) using more predefined queries\n> 4) tweek parameters in the db conf files\n>\n> Of these points:\n> 1) is nothing I can do about right now, but in the future perhaps.\n> 2) will be quite hard right now since there is more code than time.\n> 3) almost like 2 but perhaps more do-able with the current constraints.\n> 4) This seems to be the easiest one to start with...\n>\n> So what should I do/read concerning point 4?\n> If you have other good suggestions I'd be very interested in that.\n> \nGo back to step zero - gather information that would be helpful in \ngiving advice. For starters:\n- What hardware do you currently have?\n- What OS and version of PG?\n- How big is the database?\n- What is the nature of the workload (small queries or data-mining, how \nmany simultaneous clients, transaction rate, etc.)?\n- Is PG sharing the machine with other workloads?\n\nThen edit your postgresql.conf file to gather data (see \nhttp://www.postgresql.org/docs/8.3/interactive/monitoring-stats.html). \nWith stat collection enabled, you can often find some low-hanging fruit \nlike indexes that aren't used (look in pg_stat_user_indexes) - sometime \nbecause the query didn't case something in the where-clause correctly.\n\nAlso look at \nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-logging.html \n- especially the log_min_duration_statement setting to find long-running \nqueries. You will probably need to try different settings and watch the \nlog. Logging impacts performance so don't just set to log everything and \nforget. You need to play with it.\n\nDon't discount step 2 - you may find you can rewrite one inefficient but \nfrequent query. Or add a useful index on the server.\n\nCheers,\nSteve\n\n\n\n", "msg_date": "Mon, 28 Apr 2008 09:34:14 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster?" }, { "msg_contents": "[email protected] (\"A B\") writes:\n> So, it is time to improve performance, it is running to slow.\n> AFAIK (as a novice) there are a few general areas:\n>\n> 1) hardware\n> 2) rewriting my queries and table structures\n> 3) using more predefined queries\n> 4) tweek parameters in the db conf files\n>\n> Of these points:\n> 1) is nothing I can do about right now, but in the future perhaps.\n> 2) will be quite hard right now since there is more code than time.\n> 3) almost like 2 but perhaps more do-able with the current constraints.\n> 4) This seems to be the easiest one to start with...\n>\n> So what should I do/read concerning point 4?\n> If you have other good suggestions I'd be very interested in that.\n>\n> Thank you :-)\n\nIn the order of ease of implementation, it tends to be...\n\n1. Tweak postgresql.conf\n2. Make sure you ran VACUUM + ANALYZE\n3. Find some expensive queries and try to improve them, which might\n involve changing the queries and/or adding relevant indices\n4. Add RAM to your server\n5. Add disk to your server\n6. 
Redesign your application's DB schema so that it is more performant\n by design\n\nURL below may have some material of value...\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://linuxfinances.info/info/postgresqlperformance.html\nIt is usually a good idea to put a capacitor of a few microfarads\nacross the output, as shown.\n", "msg_date": "Mon, 28 Apr 2008 12:35:18 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster?" }, { "msg_contents": "\n\tYou got the order slightly wrong I guess.\n\n> 1) hardware\n\n\tWould only come first if your RAM is really too small, or you use RAID5 \non write-heavy tables, or what limits you is transaction fsync (hint : \n8.3).\n\tAdding RAM is cheap.\n\n> 2) rewriting my queries and table structures\n\n\tThis should really come first.\n\tLog expensive queries. Note that an expensive query can be a slow query, \nor be a rather fast query that you execute lots of times, or a very simple \nand fast query that you execute really really too often.\n\n\tNow ask yourself :\n* What is this query supposed to do ?\n\n* Do I need this query ?\n\n\tExample :\n\tYou put your sessions in a database ?\n\t=> Perhaps put them in the good old filesystem ?\n\n\tYour PHP is loading lots of configuration from the database for every \npage.\n\t=> Cache it, generate some PHP code once and include it, put it in the \nsession if it depends on the user, but don't reload the thing on each page \n!\n\n\tThis feature is useless\n\t=> Do you really need to display a birthday cake on your forum for those \nusers who have their birthday today ?\n\n\tUPDATEs...\n\t=> Do you really need to update the last time a user was online every \ntime ? What about updating it every 5 minutes instead ?\n\n* Is this query inside a loop ?\n\t=> Use JOIN.\n\n* Do I need all the rows from this query ?\n\n\tExample :\nYou use pagination and perform the same query changing LIMIT/OFFSET ?\n=> Perform the query once, retrieve the first N pages of result, cache it \nin the session or in a table.\n\n* You have a website ?\n=> Use lighttpd and fastcgi\n\n* Do I need all the columns from this query ?\n\n* Do I suffer from locking ?\n\n\tetc.\n\n\nNow you should see some easy targets.\nFor the queries that are slow, use EXPLAIN ANALYZE.\nQuestion your schema.\netc.\n", "msg_date": "Mon, 28 Apr 2008 20:23:12 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster?" }, { "msg_contents": "Here is some more information.\n\nSize of database:\n\ndu -sh /var/lib/pgsql/data/base/*\n4,1M /var/lib/pgsql/data/base/1\n4,1M /var/lib/pgsql/data/base/10792\n4,1M /var/lib/pgsql/data/base/10793\n9,1M /var/lib/pgsql/data/base/16388\n11M /var/lib/pgsql/data/base/19233\n1,6G /var/lib/pgsql/data/base/20970\n\nI'm not sure what the size acctually is... But I can't imagine that it\nis 1,6 GB!!! I'd say I have 11MB of data in it...\n\nCpu is Intel CoreDuo E6750, 4 GB RAM\nHarddiscs are two Segate 320 GB SATA discs. running software raid\n(!!), raid-1.Yes, this might be a big performance hit, but that is\nwhat I have right now, in the future I can throw more money on\nhardware.\n\nWill I see a general improvement in performance in 8.3.X over 8.1.11?\n\n\n2008/4/29 A B <[email protected]>:\n> Right now, version 8.1.11 on centos.x86-64, intel dual core cpu with 2\n> sata discs (mirror raid)\n>\n> The queries are most select/inserts.. I guess... 
I'm not sure exactly\n> what to answer on that.\n> \"explain analyze\" is something I have not read about yet.\n>\n>\n> 2008/4/28 Claus Guttesen <[email protected]>:\n>\n>\n> > > 1) hardware\n> > > 2) rewriting my queries and table structures\n> > > 3) using more predefined queries\n> > > 4) tweek parameters in the db conf files\n> > >\n> > > Of these points:\n> > > 1) is nothing I can do about right now, but in the future perhaps.\n> > > 2) will be quite hard right now since there is more code than time.\n> > > 3) almost like 2 but perhaps more do-able with the current constraints.\n> > > 4) This seems to be the easiest one to start with...\n> > >\n> > > So what should I do/read concerning point 4?\n> > > If you have other good suggestions I'd be very interested in that.\n> > >\n> > > Thank you :-)\n> >\n> > You can provide information postgresql-version, what type of queries\n> > you're running, some explain analyze of those, and what type of\n> > hardware you're running and what OS is installed.\n> >\n> > --\n> > regards\n> > Claus\n> >\n> > When lenity and cruelty play for a kingdom,\n> > the gentlest gamester is the soonest winner.\n> >\n> > Shakespeare\n> >\n>\n", "msg_date": "Tue, 29 Apr 2008 11:09:48 +0200", "msg_from": "\"A B\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Where do a novice do to make it run faster?" }, { "msg_contents": "\"A B\" <[email protected]> writes:\n> I'm not sure what the size acctually is... But I can't imagine that it\n> is 1,6 GB!!! I'd say I have 11MB of data in it...\n\nSounds like you've got a rather severe case of table and/or index bloat.\nThis is typically caused by not vacuuming often enough.\n\nThe easiest way to get the size back down is probably to dump and reload\nthe database. After that you need to look at your vacuuming practices.\n\n> Will I see a general improvement in performance in 8.3.X over 8.1.11?\n\nProbably so, if only because it has autovacuum turned on by default.\nThat's not really a substitute for careful administration practices,\nbut it helps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Apr 2008 10:20:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where do a novice do to make it run faster? " } ]
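To make the suggested first steps concrete, a short sketch follows; the one-second threshold and the LIMIT are arbitrary illustrations, not tuned recommendations:

    -- postgresql.conf: log any statement that runs longer than one second
    --   log_min_duration_statement = 1000

    -- Reclaim dead space and refresh planner statistics (or let autovacuum do it).
    VACUUM ANALYZE;

    -- Compare on-disk size against the expected data volume to spot bloat.
    SELECT pg_size_pretty(pg_database_size(current_database()));
    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY pg_relation_size(oid) DESC
     LIMIT 10;

Any statement that shows up in the slow-query log can then be run under EXPLAIN ANALYZE, as suggested above, to see where the time actually goes.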
[ { "msg_contents": "I have completed the benchmarks I made with Postgres, which I talked about\nsome weeks ago on the postgres performance mailing list (see the post shared_buffers).\n\nAt the following link you can find a doc that contains the generated graphs.\n\nhttp://www.mediafire.com/?lk4woomsxlc\n\n\n\nRegards\nGaetano Mendola\n", "msg_date": "Mon, 28 Apr 2008 18:19:48 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "shared_buffer/DRBD performances" } ]
[ { "msg_contents": "Hi all:\n\nWe are loading in a number (100+) of sql files that are about 100M in\nsize. It takes about three hours to load the file. There is very\nlittle load on the database other than the copy from operations.\n\nWe are running postgresql-8.1.3 under Centos 4 on a RAID 1/0 array\nwith 4 disks (so we have only one spindle). The partitions are set up\nin an LVM and iostat 5 shows (for one report):\n\n avg-cpu: %user %nice %sys %iowait %idle\n\t 1.70 0.00 0.80 51.40 46.10\n\n Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n sda 179.20 1472.00 2561.60 7360 12808\n sda1 0.00 0.00 0.00 0 0\n sda2 385.20 1462.40 2561.60 7312 12808\n dm-0 0.80 0.00 6.40 0 32\n dm-1 0.00 0.00 0.00 0 0\n dm-2 0.00 0.00 0.00 0 0\n dm-3 0.00 0.00 0.00 0 0\n dm-4 4.40 0.00 35.20 0 176\n dm-5 0.00 0.00 0.00 0 0\n dm-6 380.00 1462.40 2520.00 7312 12600\n\ndm-6 is where the data files reside and dm-4 is where the WAL archives\nare kept. Note all the DM's are on the same RAID 0 device /dev/sda2.\n\nA sample psql command file to load the data is:\n\n BEGIN;\n COPY peers (observe_start, observe_end, geo_scope, geo_value,\n peer_a, peer_b) FROM stdin WITH NULL AS '';\n (data here)\n 3 more copy commands to different tables w/ data\n COMMIT;\n\nThe primary keys for the tables being loaded are composite keys using\n4-7 columns, so that may be part of the issue.\n\n>From postgres.conf\n\n shared_buffers = 3000\n #temp_buffers = 1000 # min 100, 8KB each\n #max_prepared_transactions = 5 # can be 0 or more\n max_locks_per_transaction).\n work_mem = 2048 # min 64, size in KB\n maintenance_work_mem = 65536 # min 1024, size in KB\n #max_stack_depth = 2048 # min 100, size in KB\n\nThe prior settings for work_mem/maintenance_work_mem were the\ndefaults:\n\n #work_mem = 1024 # min 64, size in KB\n #maintenance_work_mem = 16384 # min 1024, size in KB\n\nI also took a look at disk-io hit rates:\n\n# select * from pg_statio_user_tables;\t\n relid | schema | relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit |\n-------+--------+--------------+----------------+---------------+---------------+--------------+\n 17282 | public | providers | 179485097 | 78832253 | 835008 | 196903582 |\n 17264 | public | events | 0 | 0 | | |\n 17262 | public | days | 495 | 219 | 478 | 16 |\n 17276 | public | peers | 147435004 | 114304828 | 1188908 | 295569499 |\n 17288 | public | rankings | 564638938 | 345456664 | 275607291 | 1341727605 |\n 17270 | public | market_share | 131932 | 90048 | 5408 | 182100 |\n\nmarket_share did have one tidx_blks_read reported, but all the other\nfields (toast_blks_read, toast_blks_hit, tidx_blks_read,\ntidx_blks_hit) were empty for all rows.\n\nThis looks like we have whole indexes in memory except for the days\ntable, which has a low update rate, so I am not worried about that.\n\nHowever for the heap_blks_read and heap_blks_hit we get a different\nstory:\n\n relname | hit_percent\n --------------+-----------\n providers | 43.92\n days | 44.24\n peers | 77.52\n rankings | 61.18\n market_share | 68.25\n\nso we see a 43 % hit ratio for providers to 77% hit ratio for\npeers. Not horrible hit rates given that we are more data warehousing\nthan OLTP, but I am not sure what effect increasing these (by\nincreasing shared_buffers I think) will have on the COPY operation. I\nwould suspect none.\n\nTo try to solve this speed issue:\n\n I checked the logs and was seeing a few\n\n 2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... 
(27 seconds apart)\n\n of these, so I changed:\n\n checkpoint_segments = 30 \n checkpoint_warning = 150\n\n in postgres.conf and reloaded postgres. I have only seen one of these\n log messages in the past week.\n \n I have turned of autovacuum.\n\n I have increased the maintenance_work_mem as mentioned\n above. (Although I didn't expect it to do anything unless we\n drop/recreate indexes).\n\n I have increased work_mem as mentioned above.\n\nThe only things I can think of is increasing shared memory, or\ndropping indexes.\n\nI don't see any indication in the docs that increasing shared memory\nwould help speed up a copy operation.\n\nThe only indexes we have to drop are the ones on the primary keys\n(there is one non-primary key index in the database as well).\n\nCan you drop an index on the primary key for a table and add it back\nlater? Am I correct in saying: the primary key index is what enforces\nthe unique constraint in the table? If the index is dropped and\nnon-unique primary key data has been added, what happens when you\nre-add the index?\n\nDoes anybody have any things to check/ideas on why loading a 100Mb sql\nfile using psql would take 3 hours?\n\nThanks in advance for any ideas.\n\n--\n\t\t\t\t-- rouilj\n\nJohn Rouillard\nSystem Administrator\nRenesys Corporation\n603-244-9084 (cell)\n603-643-9300 x 111\n", "msg_date": "Mon, 28 Apr 2008 17:24:31 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "Very poor performance loading 100M of sql data using copy" }, { "msg_contents": "John Rouillard wrote:\n> We are running postgresql-8.1.3 under Centos 4\n\nYou should upgrade, at least to the latest minor release of the 8.1 \nseries (8.1.11), as there has been a bunch of important bug and security \nfixes. Or even better, upgrade to 8.3, which has reduced the storage \nsize of especially variable length datatypes like text/char/varchar in \nparticular. As your COPY is I/O bound, reducing storage size will \ntranslate directly to improved performance.\n\n> dm-6 is where the data files reside and dm-4 is where the WAL archives\n> are kept. Note all the DM's are on the same RAID 0 device /dev/sda2.\n\nAnother reason to upgrade to 8.3: if you CREATE or TRUNCATE the table in \nthe same transaction as you COPY into it, you can avoid WAL logging of \nthe loaded data, which will in the best case double your performance as \nyour WAL is on the same physical drives as the data files.\n\n> The only indexes we have to drop are the ones on the primary keys\n> (there is one non-primary key index in the database as well).\n> \n> Can you drop an index on the primary key for a table and add it back\n> later? Am I correct in saying: the primary key index is what enforces\n> the unique constraint in the table? If the index is dropped and\n> non-unique primary key data has been added, what happens when you\n> re-add the index?\n\nYes, the index is what enforces the uniqueness. You can drop the primary \nkey constraint, and add it back after the load with ALTER TABLE. 
If the \nload introduces any non-unique primary keys, adding the primary key \nconstraint will give you an error and fail.\n\nDropping and recreating the indexes is certainly worth trying.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 28 Apr 2008 18:53:09 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance loading 100M of sql data using copy" }, { "msg_contents": "On Mon, Apr 28, 2008 at 06:53:09PM +0100, Heikki Linnakangas wrote:\n> John Rouillard wrote:\n> >We are running postgresql-8.1.3 under Centos 4\n> You should upgrade, at least to the latest minor release of the 8.1 \n> series (8.1.11), as there has been a bunch of important bug and security \n> fixes. Or even better, upgrade to 8.3, which has reduced the storage \n> size of especially variable length datatypes like text/char/varchar in \n> particular. As your COPY is I/O bound, reducing storage size will \n> translate directly to improved performance.\n\nYup. Just saw that suggestion in an unrelated email.\n \n> >dm-6 is where the data files reside and dm-4 is where the WAL archives\n> >are kept. Note all the DM's are on the same RAID 0 device /dev/sda2.\n> \n> Another reason to upgrade to 8.3: if you CREATE or TRUNCATE the table in \n> the same transaction as you COPY into it, you can avoid WAL logging of \n> the loaded data, which will in the best case double your performance as \n> your WAL is on the same physical drives as the data files.\n\nWe can't do this as we are backfilling a couple of months of data into\ntables with existing data.\n \n> >The only indexes we have to drop are the ones on the primary keys\n> >(there is one non-primary key index in the database as well).\n> >\n> >Can you drop an index on the primary key for a table and add it back\n> >later? Am I correct in saying: the primary key index is what enforces\n> >the unique constraint in the table? If the index is dropped and\n> >non-unique primary key data has been added, what happens when you\n> >re-add the index?\n> \n> Yes, the index is what enforces the uniqueness. You can drop the primary \n> key constraint, and add it back after the load with ALTER TABLE. If the \n> load introduces any non-unique primary keys, adding the primary key \n> constraint will give you an error and fail.\n\nThat's the part I am worried about. I guess using psql to delete the\nproblem row then re-adding the index will work.\n \n> Dropping and recreating the indexes is certainly worth trying.\n\nThanks for the info.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard\nSystem Administrator\nRenesys Corporation\n603-244-9084 (cell)\n603-643-9300 x 111\n", "msg_date": "Mon, 28 Apr 2008 18:00:53 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance loading 100M of sql data using copy" }, { "msg_contents": "On Mon, 28 Apr 2008, John Rouillard wrote:\n\n> 2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)\n> so I changed:\n> checkpoint_segments = 30\n> checkpoint_warning = 150\n\nThat's good, but you might go higher than 30 for a bulk loading operation \nlike this, particularly on 8.1 where checkpoints are no fun. 
Using 100 is \nnot unreasonable.\n\n> shared_buffers = 3000\n> I don't see any indication in the docs that increasing shared memory\n> would help speed up a copy operation.\n\nThe index blocks use buffer space, and what ends up happening if there's \nnot enough memory is they are written out more than they need to be (and \nwith your I/O hardware you need to avoid writes unless absolutely \nnecessary). Theoretically the OS is caching around that situation but \nbetter to avoid it. You didn't say how much RAM you have, but you should \nstart by a factor of 10 increase to 30,000 and see if that helps; if so, \ntry making it large enough to use 1/4 of total server memory. 3000 is \nonly giving the server 24MB of RAM to work with, and it's unfair to expect \nit to work well in that situation.\n\nWhile not relevant to this exercise you'll need to set \neffective_cache_size to a useful value one day as well.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 28 Apr 2008 14:16:02 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance loading 100M of sql data using\n copy" }, { "msg_contents": "John Rouillard wrote:\n\n> We can't do this as we are backfilling a couple of months of data \n> into tables with existing data.\n\nIs this a one off data loading of historic data or an ongoing thing?\n\n\n>>> The only indexes we have to drop are the ones on the primary keys\n>>> (there is one non-primary key index in the database as well).\n\nIf this amount of data importing is ongoing then one thought I would try\nis partitioning (this could be worthwhile anyway with the amount of data\nyou appear to have).\nCreate an inherited table for the month being imported, load the data \ninto it, then add the check constraints, indexes, and modify the \nrules/triggers to handle the inserts to the parent table.\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Tue, 29 Apr 2008 05:19:59 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance loading 100M of sql data using\n copy" }, { "msg_contents": "On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:\n> John Rouillard wrote:\n> \n> >We can't do this as we are backfilling a couple of months of data \n> >into tables with existing data.\n> \n> Is this a one off data loading of historic data or an ongoing thing?\n\nYes it's a one off bulk data load of many days of data. The daily\nloads will also take 3 hour's but that is ok since we only do those\nonce a day so we have 21 hours of slack in the schedule 8-).\n\n> >>>The only indexes we have to drop are the ones on the primary keys\n> >>> (there is one non-primary key index in the database as well).\n> \n> If this amount of data importing is ongoing then one thought I would try\n> is partitioning (this could be worthwhile anyway with the amount of data\n> you appear to have).\n> Create an inherited table for the month being imported, load the data \n> into it, then add the check constraints, indexes, and modify the \n> rules/triggers to handle the inserts to the parent table.\n\nHmm, interesting idea, worth considering if we have to do this again\n(I hope not). 
\n\nThaks for the reply.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard\nSystem Administrator\nRenesys Corporation\n603-244-9084 (cell)\n603-643-9300 x 111\n", "msg_date": "Tue, 29 Apr 2008 15:04:32 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance loading 100M of sql data using copy" }, { "msg_contents": "On Mon, Apr 28, 2008 at 02:16:02PM -0400, Greg Smith wrote:\n> On Mon, 28 Apr 2008, John Rouillard wrote:\n> \n> > 2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds \n> > apart)\n> > so I changed:\n> > checkpoint_segments = 30\n> > checkpoint_warning = 150\n> \n> That's good, but you might go higher than 30 for a bulk loading operation \n> like this, particularly on 8.1 where checkpoints are no fun. Using 100 is \n> not unreasonable.\n\nOk. I can do that. I chose 30 to make the WAL logs span the 5 minute\n\n checkpoint_timeout = 300\n\nso that the 30 segments wouldn't wrap over before the 5 minute\ncheckpoint that usually occurs. Maybe I should increase both the\ntimeout and the segments?\n \n> >shared_buffers = 3000\n> >I don't see any indication in the docs that increasing shared memory\n> >would help speed up a copy operation.\n> \n> The index blocks use buffer space, and what ends up happening if there's \n> not enough memory is they are written out more than they need to be (and \n> with your I/O hardware you need to avoid writes unless absolutely \n> necessary).\n\nI forgot to mention the raid 1/0 is on a 3ware 9550SX-4LP raid card\nsetup as raid 1/0. The write cache is on and autoverify is turned off.\n\n> Theoretically the OS is caching around that situation but \n> better to avoid it. \n\nThe system is using 6-8MB of memory for cache.\n\n> You didn't say how much RAM you have,\n\n16GB total, but 8GB or so is taken up with other processes.\n\n> but you should \n> start by a factor of 10 increase to 30,000 and see if that helps; if so, \n> try making it large enough to use 1/4 of total server memory. 3000 is \n> only giving the server 24MB of RAM to work with, and it's unfair to expect \n> it to work well in that situation.\n\nSo swap the memory usage from the OS cache to the postgresql process.\nUsing 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a\nbetter setting. So I'll try 300000 to start (1/8 of memory) and see\nwhat it does to the other processes on the box.\n \n> While not relevant to this exercise you'll need to set \n> effective_cache_size to a useful value one day as well.\n\nThis is a very lightly loaded database, a few queries/hour usually\nscattered across the data set, so hopefully that won't be much of an\nissue.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard\nSystem Administrator\nRenesys Corporation\n603-244-9084 (cell)\n603-643-9300 x 111\n", "msg_date": "Tue, 29 Apr 2008 15:16:22 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance loading 100M of sql data using copy" }, { "msg_contents": "On Tue, 29 Apr 2008, John Rouillard wrote:\n\n> So swap the memory usage from the OS cache to the postgresql process.\n> Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a\n> better setting. So I'll try 300000 to start (1/8 of memory) and see\n> what it does to the other processes on the box.\n\nThat is potentially a good setting. 
Just be warned that when you do hit a \ncheckpoint with a high setting here, you can end up with a lot of data in \nmemory that needs to be written out, and under 8.2 that can cause an ugly \nspike in disk writes. The reason I usually threw out 30,000 as a \nsuggested starting figure is that most caching disk controllers can buffer \nat least 256MB of writes to keep that situation from getting too bad. \nTry it out and see what happens, just be warned that's the possible \ndownside of setting shared_buffers too high and therefore you might want \nto ease into that more gradually (particularly if this system is shared \nwith other apps).\nx\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 29 Apr 2008 11:58:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance loading 100M of sql data using\n copy" } ]
[ { "msg_contents": "Hello,\n\n\nExecutive summary :\n\nLast year I wrote a database benchmark which simulates a forum.\nIt works on Postgres and MySQL.\nIt could be useful.\nI have no time to touch this, so it is rotting on my harddrive.\n\nWho wants to adopt it ? I will put it on pgfoundry.\nI can spend a few hours documenting the source and writing some\ndocumentation and pass the package to someone who might be interested and\nmore available.\n\nDetails :\n\nThe benchmark is a forum type load (actually it came from me arguing with\nthe phpBB team, lol) but, unlike all forums I know, \"correctly\" optimized.\nA bunch of forums are created, and there is a website (in PHP), very\nbasic, which allows you to browse the forums, view topics, and insert\nposts. It displays the usual forum info like last post, number of topics\nor posts in forum, number of posts in topic, etc.\n\nThen there is a benchmarking client, written in Python. It spawns a number\nof \"users\" who perform real-life actions, like viewing pages, adding\nposts, and there a few simulated moderators who will, once in a while,\ndestroy topics and even forums.\n\nThis client can hit the PHP website via HTTP.\n\nHowever postgres is so fast that you would need several PHP servers to\nkill it. So, I added a multi-backend capability to the client : it can hit\nthe database directly, performing the queries the PHP script would have\nperformed.\n\nHowever, postgres is still so fast that you won't be able to benchmark\nanything more powerful than a Core 2, the client would need to be\nrewritten in a compiled language like Java. Also, retrieving the posts'\ntext easily blasted the 100 Mbps connection between server and client, so\nyou would need Gigabit ethernet.\n\nSo, the load is very realistic (it would mimic a real forum pretty well) ;\nbut in order to benchmark it you must simulate humongous traffic levels.\n\nThe only difference is that my benchmark does a lot more writing (post\ninsertions) than a normal forum ; I wanted the database to grow big in a\nfew hours.\n\nIt also works on MySQL so you can get a good laugh. Actually I was able to\nextract some good performance out of MySQL, after lots of headaches,\nexcept that I was never able to make it use more than 1 core.\n\nContrary to the usual benchmarks, the code is optimized for MySQL and for\nPostgres, and the stored procedures also. Thus, what is compared is not a\nleast-common-denominator implementation that happens to work on both\ndatabases, but two implementations specifically targeted and optimized at\neach database.\n\nThe benchmark is also pretty simple (unlike the TPC) but it is useful,\nfirst it is CPU-bound then IO-bound and clustering the tables does a lot\nfor performance (you can test auto-cluster), checkpoints are very visible,\netc. So it can provide useful information that is easier to understand\nthat a very complex benchmark.\n\nOriginally the purpose of the benchmark was to test postgres' full search\n; the code is still there.\n\nRegards,\nPierre\n", "msg_date": "Mon, 28 Apr 2008 23:38:18 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Benchmark looking for maintainer" } ]
[ { "msg_contents": "I recall reading posts in the past where one could query the stat tables and \nsee how well autovacuum was performing. Not finding the posts.\n\n\nI found this query:\nSELECT relname, relkind, reltuples, relpages FROM pg_class where relkind = \n'r';\n\n From the output how can I tell the number of dead tuples? Or how effective \nautovacuum is in the particular table..\n\nRecently inheritted several large Postgresql DBs (tables in the hundreds of \nmillions and some tables over a billion rows) and I am just starting to go \nover them and see how autovacuum has been performing.\n\n", "msg_date": "Tue, 29 Apr 2008 08:14:54 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum statistics" }, { "msg_contents": "What version of Postgres you are running ?\n\nIf you are using 8.3, you can use pg_stat_all_tables.If Not you can use\nhttp://www.postgresql.org/docs/current/static/pgstattuple.html\n\nChirag\n\nOn Tue, Apr 29, 2008 at 8:14 AM, Francisco Reyes <[email protected]>\nwrote:\n\n> I recall reading posts in the past where one could query the stat tables\n> and see how well autovacuum was performing. Not finding the posts.\n>\n>\n> I found this query:\n> SELECT relname, relkind, reltuples, relpages FROM pg_class where relkind =\n> 'r';\n>\n> From the output how can I tell the number of dead tuples? Or how effective\n> autovacuum is in the particular table..\n>\n> Recently inheritted several large Postgresql DBs (tables in the hundreds\n> of millions and some tables over a billion rows) and I am just starting to\n> go over them and see how autovacuum has been performing.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWhat version of Postgres you are running ?If you are using 8.3, you can use pg_stat_all_tables.If Not you can use http://www.postgresql.org/docs/current/static/pgstattuple.html\nChiragOn Tue, Apr 29, 2008 at 8:14 AM, Francisco Reyes <[email protected]> wrote:\nI recall reading posts in the past where one could query the stat tables and see how well autovacuum was performing. Not finding the posts.\n\n\nI found this query:\nSELECT relname, relkind, reltuples, relpages FROM pg_class where relkind = 'r';\n\n>From the output how can I tell the number of dead tuples? Or how effective autovacuum is in the particular table..\n\nRecently inheritted several large Postgresql DBs (tables in the hundreds of millions and some tables over a billion rows) and I am just starting to go over them and see how autovacuum has been performing.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 2 May 2008 10:09:49 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "[email protected] writes:\n\n> What version of Postgres you are running ?\n\n8.2\n \n> If you are using 8.3, you can use pg_stat_all_tables.If Not you can use \n> <URL:http://www.postgresql.org/docs/current/static/pgstattuple.html>http:// \n> www.postgresql.org/docs/current/static/pgstattuple.html\n\npgstattuple is also a 8.3 function. 
Anything similar to it in 8.2?\n", "msg_date": "Sat, 03 May 2008 22:15:11 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum statistics" }, { "msg_contents": "Francisco Reyes wrote:\n> [email protected] writes:\n>\n>> What version of Postgres you are running ?\n>\n> 8.2\n>\n>> If you are using 8.3, you can use pg_stat_all_tables.If Not you can \n>> use \n>> <URL:http://www.postgresql.org/docs/current/static/pgstattuple.html>http:// \n>> www.postgresql.org/docs/current/static/pgstattuple.html\n>\n> pgstattuple is also a 8.3 function. Anything similar to it in 8.2?\n>\n\nIt is available as a contrib module in 8.2, but needs to be installed \n(see contrib/pgstattuple).\n\nregards\n\nMark\n", "msg_date": "Sun, 04 May 2008 16:43:58 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum statistics" } ]
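A minimal sketch of the two options mentioned here; the table name is illustrative, and the location of the contrib install script depends on the packaging:

    -- 8.2: install the contrib module once per database, then sample a table.
    --   psql -d mydb -f .../contrib/pgstattuple.sql
    SELECT * FROM pgstattuple('public.some_big_table');

    -- 8.3: dead-tuple counts are kept up to date by the stats collector.
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
      FROM pg_stat_all_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;

Note that pgstattuple scans the whole table, so sampling tables with hundreds of millions of rows is far more expensive than reading the 8.3 statistics view.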