[
{
"msg_contents": "Hi,\n\ni am using libpq library and postgresql 8.4 for my linux application running\non ARM with 256 MB. I am just doing:\n\nPQconnectdb();\nPQexec(INSERT INTO table1 ....); (0.009661 sec.)\nPQexec(INSERT INTO table1 ....); (0.004208 sec.)\n\nPQexec(INSERT INTO table2 ....); (0.007352 sec.)\nPQexec(INSERT INTO table2 ....); (0.002533 sec.)\nPQexec(INSERT INTO table2 ....); (0.002281 sec.)\nPQexec(INSERT INTO table2 ....); (0.002244 sec.)\n\nPQexec(INSERT INTO table3 ....); (0.006903 sec.)\nPQexec(INSERT INTO table3 ....); (0.002903 sec.)\nPQfinnish();\n\nI check the time for each PQexec with gettimeofday function and I always see\nthat the first INSERT for each table needs longer than the next ones.\n\nthis must be something with the parser stage and since i am doing every time\nthe same queries, I would like to know if there is a way to cache this\nqueries in order to speed up the first INSERTs.\n\nThanks in advance,\n\nSergio\n\nHi,i am using libpq library and postgresql 8.4 for my linux application running on ARM with 256 MB. I am just doing:PQconnectdb();PQexec(INSERT INTO table1 ....); (0.009661 sec.)\nPQexec(INSERT INTO table1 ....); (0.004208 sec.)PQexec(INSERT INTO table2 ....); (0.007352 sec.)PQexec(INSERT INTO table2 ....); (0.002533 sec.)PQexec(INSERT INTO table2 ....); (0.002281 sec.)\nPQexec(INSERT INTO table2 ....); (0.002244 sec.)PQexec(INSERT INTO table3 ....); (0.006903 sec.)PQexec(INSERT INTO table3 ....); (0.002903 sec.)PQfinnish();\nI check the time for each PQexec with gettimeofday function and I always see that the first INSERT for each table needs longer than the next ones.this must be something with the parser stage and since i am doing every time the same queries, I would like to know if there is a way to cache this queries in order to speed up the first INSERTs.\nThanks in advance,Sergio",
"msg_date": "Thu, 7 Jul 2011 17:35:05 +0200",
"msg_from": "sergio mayoral <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT query times"
},
{
"msg_contents": "Hello\n\na) look on COPY statement and COPY API protocol - it can be 100x\nfaster than INSERTS\nhttp://www.postgresql.org/docs/8.3/static/libpq-copy.html\n\nb) if you can't to use COPY use:\n\n* outer transaction - BEGIN, INSERT, INSERT ... COMMIT if this is possible\n* use a prepared statement\nhttp://www.postgresql.org/docs/8.3/static/sql-prepare.html\n\nif you cannot to use a outer transaction, and you can to replay a\nprocess, if there are some problems, use a asynchronnous commit\nhttp://www.postgresql.org/docs/8.3/static/wal-async-commit.html\n\nRegards\n\nPavel Stehule\n\n\n2011/7/7 sergio mayoral <[email protected]>:\n> Hi,\n> i am using libpq library and postgresql 8.4 for my linux application running\n> on ARM with 256 MB. I am just doing:\n> PQconnectdb();\n> PQexec(INSERT INTO table1 ....); (0.009661 sec.)\n> PQexec(INSERT INTO table1 ....); (0.004208 sec.)\n> PQexec(INSERT INTO table2 ....); (0.007352 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002533 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002281 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002244 sec.)\n> PQexec(INSERT INTO table3 ....); (0.006903 sec.)\n> PQexec(INSERT INTO table3 ....); (0.002903 sec.)\n> PQfinnish();\n> I check the time for each PQexec with gettimeofday function and I always see\n> that the first INSERT for each table needs longer than the next ones.\n> this must be something with the parser stage and since i am doing every time\n> the same queries, I would like to know if there is a way to cache this\n> queries in order to speed up the first INSERTs.\n> Thanks in advance,\n> Sergio\n",
"msg_date": "Sun, 10 Jul 2011 10:12:04 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSERT query times"
},
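
A minimal libpq sketch of the outer-transaction plus prepared-statement approach Pavel describes above. The table name `table1`, its single text column, and the connection string are assumptions for illustration, not taken from the thread:

```c
/*
 * Sketch: batch the INSERTs in one transaction and reuse a prepared
 * statement so the query is parsed/planned only once per connection.
 * Table "table1(value text)" and the conninfo string are assumptions.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Parse and plan the INSERT once. */
    PGresult *res = PQprepare(conn, "ins1",
                              "INSERT INTO table1 (value) VALUES ($1)",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* One outer transaction instead of an implicit commit per INSERT. */
    PQclear(PQexec(conn, "BEGIN"));
    for (int i = 0; i < 4; i++) {
        const char *params[1] = { "some value" };
        res = PQexecPrepared(conn, "ins1", 1, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}
```

Note that, as Tom points out in the next message, the very first statements after PQconnectdb() still pay the cost of warming the backend's catalog caches; the prepared statement only removes repeated parsing within an existing connection.
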
{
"msg_contents": "sergio mayoral <[email protected]> writes:\n> i am using libpq library and postgresql 8.4 for my linux application running\n> on ARM with 256 MB. I am just doing:\n\n> PQconnectdb();\n> PQexec(INSERT INTO table1 ....); (0.009661 sec.)\n> PQexec(INSERT INTO table1 ....); (0.004208 sec.)\n\n> PQexec(INSERT INTO table2 ....); (0.007352 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002533 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002281 sec.)\n> PQexec(INSERT INTO table2 ....); (0.002244 sec.)\n\n> PQexec(INSERT INTO table3 ....); (0.006903 sec.)\n> PQexec(INSERT INTO table3 ....); (0.002903 sec.)\n> PQfinnish();\n\n> I check the time for each PQexec with gettimeofday function and I always see\n> that the first INSERT for each table needs longer than the next ones.\n\nThe first few commands of *any* type on a new connection are going to\ntake longer than repeat versions of those commands, because the backend\nneeds to load up its internal caches. Once it's cached information\nabout the tables, operators, etc that you are working with, it's a bit\nfaster. This isn't specific to INSERT.\n\n> this must be something with the parser stage and since i am doing every time\n> the same queries, I would like to know if there is a way to cache this\n> queries in order to speed up the first INSERTs.\n\nThe only \"fix\" is to hang onto your connections longer. Consider a\nconnection pooler.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Jul 2011 12:42:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSERT query times "
}
]
[
{
"msg_contents": "Hello,\n\n(Apologies if this is an obvious question. I have gone through the archives\nwithout seeing something that directly ties to this.)\n\nWe are running Postgresql on a 64b RHEL5.2 64b server. \"Uname -a\":\n--------------Linux xxxxxxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT\n2008 x86_64 x86_64 x86_64 GNU/Linux\n\nWe have autovacuum enabled with the following settings:\n\nautovacuum_naptime = 30s\nautovacuum_vacuum_threshold = 200\nautovacuum_vacuum_scale_factor = 0.5\nautovacuum_vacuum_cost_delay = 10\n\nIn addition to autovacuuming, each day, early, in the morning, we run a full\nvacuum, like this: \"vacuumdb --all --full --analyze\". We do not have any\nspecial variable set for vacuum in postgresql.conf.\n\nThe problem is that once or twice a week, the \"vacuum full analyze\" seems to\ncancel out the autovacuum that has already started at the same time. E.g.,\n\n-------------2011-05-07 03:51:04.959 EDT--[unknown]-[unknown] [3348]LOG:\nconnection received: host=##.##.##.## port=60470\n-------------2011-05-07 03:51:04.959 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nconnection authorized: user=xxxx database=XXXX\n-------------2011-05-07 03:51:04.961 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nstatement: VACUUM FULL ANALYZE;\n-------------...\n-------------2011-05-07 03:51:10.733 EDT--- [19879]ERROR: canceling\nautovacuum task\n-------------2011-05-07 03:51:10.733 EDT--- [19879]CONTEXT: automatic vacuum\nof table \"xxxx.xxx.xxxx\"\n-------------...\n-------------2011-05-07 03:52:48.918 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nduration: 103957.270 ms\n-------------2011-05-07 03:52:48.920 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\ndisconnection: session time: 0:01:43.961 user=xxxx database=xxxx\nhost=##.##.##.## port=60470\n\nWe would like to eliminate this error. A bigger problem is that sometimes\nit seems like autovacuum wins out over \"vacuum full analyze\". This tends to\nresult in a hung job on our client, with other ensuing complications.\n\n* Our basic question is what method we might be able to use to prevent\neither of these jobs from canceling. What we would like is, instead of\nautovacuum canceling, it rather always defers to \"vacuum full analyze\" job,\nwaiting for it to complete.\n\nI am guessing that we can do the above by setting the\n\"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it not\nbeing set at all, as it is right now, and thus inheriting the \"200\" default\nvalue from vacuum_cost_limit). Does that sound right? (If, what might be a\ngood value to set?) Or perhaps there is a more foolproof way of doing this\nthat does not rely upon guesswork?\n\nAny suggestions at all would be most welcome!\n\nDaniel C.\n\nHello,(Apologies if this is an obvious question. I have gone through the archives without seeing something that directly ties to this.)We are running Postgresql on a 64b RHEL5.2 64b server. \"Uname -a\":\n--------------Linux xxxxxxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/LinuxWe have autovacuum enabled with the following settings:autovacuum_naptime = 30sautovacuum_vacuum_threshold = 200\nautovacuum_vacuum_scale_factor = 0.5autovacuum_vacuum_cost_delay = 10In addition to autovacuuming, each day, early, in the morning, we run a full vacuum, like this: \"vacuumdb --all --full --analyze\". We do not have any special variable set for vacuum in postgresql.conf.\nThe problem is that once or twice a week, the \"vacuum full analyze\" seems to cancel out the autovacuum that has already started at the same time. 
E.g.,-------------2011-05-07 03:51:04.959 EDT--[unknown]-[unknown] [3348]LOG: connection received: host=##.##.##.## port=60470\n-------------2011-05-07 03:51:04.959 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: connection authorized: user=xxxx database=XXXX-------------2011-05-07 03:51:04.961 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: statement: VACUUM FULL ANALYZE;\n-------------... -------------2011-05-07 03:51:10.733 EDT--- [19879]ERROR: canceling autovacuum task-------------2011-05-07 03:51:10.733 EDT--- [19879]CONTEXT: automatic vacuum of table \"xxxx.xxx.xxxx\"\n-------------...-------------2011-05-07 03:52:48.918 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: duration: 103957.270 ms-------------2011-05-07 03:52:48.920 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: disconnection: session time: 0:01:43.961 user=xxxx database=xxxx host=##.##.##.## port=60470\nWe would like to eliminate this error. A bigger problem is that sometimes it seems like autovacuum wins out over \"vacuum full analyze\". This tends to result in a hung job on our client, with other ensuing complications.\n* Our basic question is what method we might be able to use to prevent either of these jobs from canceling. What we would like is, instead of autovacuum canceling, it rather always defers to \"vacuum full analyze\" job, waiting for it to complete.\nI am guessing that we can do the above by setting the \"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it not being set at all, as it is right now, and thus inheriting the \"200\" default value from vacuum_cost_limit). Does that sound right? (If, what might be a good value to set?) Or perhaps there is a more foolproof way of doing this that does not rely upon guesswork?\nAny suggestions at all would be most welcome!Daniel C.",
"msg_date": "Thu, 7 Jul 2011 16:23:06 -0400",
"msg_from": "D C <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
}
]
[
{
"msg_contents": "Hello,\n\n\n(Apologies for any possible duplication of this email.)\n\n\n(Also, apologies if this is an obvious question. I have gone through the\narchives without seeing something that directly ties to this.)\n\nWe are running Postgresql on a 64b RHEL5.2 64b server. \"Uname -a\":\n--------------Linux xxxxxxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT\n2008 x86_64 x86_64 x86_64 GNU/Linux\n\nWe have autovacuum enabled with the following settings:\n\nautovacuum_naptime = 30s\nautovacuum_vacuum_threshold = 200\nautovacuum_vacuum_scale_factor = 0.5\nautovacuum_vacuum_cost_delay = 10\n\nIn addition to autovacuuming, each day, early, in the morning, we run a full\nvacuum, like this: \"vacuumdb --all --full --analyze\". We do not have any\nspecial variable set for vacuum in postgresql.conf.\n\nThe problem is that once or twice a week, the \"vacuum full analyze\" seems to\ncancel out the autovacuum that has already started at the same time. E.g.,\n\n-------------2011-05-07 03:51:04.959 EDT--[unknown]-[unknown] [3348]LOG:\nconnection received: host=##.##.##.## port=60470\n-------------2011-05-07 03:51:04.959 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nconnection authorized: user=xxxx database=XXXX\n-------------2011-05-07 03:51:04.961 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nstatement: VACUUM FULL ANALYZE;\n-------------...\n-------------2011-05-07 03:51:10.733 EDT--- [19879]ERROR: canceling\nautovacuum task\n-------------2011-05-07 03:51:10.733 EDT--- [19879]CONTEXT: automatic vacuum\nof table \"xxxx.xxx.xxxx\"\n-------------...\n-------------2011-05-07 03:52:48.918 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\nduration: 103957.270 ms\n-------------2011-05-07 03:52:48.920 EDT-##.##.##.##-xxxx-xxxx [3348]LOG:\ndisconnection: session time: 0:01:43.961 user=xxxx database=xxxx\nhost=##.##.##.## port=60470\n\nWe would like to eliminate this error. A bigger problem is that sometimes\nit seems like autovacuum wins out over \"vacuum full analyze\". This tends to\nresult in a hung job on our client, with other ensuing complications.\n\n* Our basic question is what method we might be able to use to prevent\neither of these jobs from canceling. What we would like is, instead of\nautovacuum canceling, it rather always defers to \"vacuum full analyze\" job,\nwaiting for it to complete.\n\nI am guessing that we can do the above by setting the\n\"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it not\nbeing set at all, as it is right now, and thus inheriting the \"200\" default\nvalue from vacuum_cost_limit). Does that sound right? (If, what might be a\ngood value to set?) Or perhaps there is a more foolproof way of doing this\nthat does not rely upon guesswork?\n\nAny suggestions at all would be most welcome!\n\nDaniel C.\n\nHello,\n(Apologies for any possible duplication of this email.)\n(Also, apologies if this is an obvious question. I have gone through the\narchives without seeing something that directly ties to this.)\n\nWe are running Postgresql on a 64b RHEL5.2 64b server. \"Uname\n-a\":\n--------------Linux xxxxxxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008\nx86_64 x86_64 x86_64 GNU/Linux\n\nWe have autovacuum enabled with the following settings:\n\nautovacuum_naptime = 30s\nautovacuum_vacuum_threshold = 200\nautovacuum_vacuum_scale_factor = 0.5\nautovacuum_vacuum_cost_delay = 10\n\nIn addition to autovacuuming, each day, early, in the morning, we run a full\nvacuum, like this: \"vacuumdb --all --full --analyze\". 
We do not\nhave any special variable set for vacuum in postgresql.conf.\n\nThe problem is that once or twice a week, the \"vacuum full analyze\"\nseems to cancel out the autovacuum that has already started at the same\ntime. E.g.,\n\n-------------2011-05-07 03:51:04.959 EDT--[unknown]-[unknown] [3348]LOG: \nconnection received: host=##.##.##.## port=60470\n-------------2011-05-07 03:51:04.959 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: \nconnection authorized: user=xxxx database=XXXX\n-------------2011-05-07 03:51:04.961 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: \nstatement: VACUUM FULL ANALYZE;\n-------------... \n-------------2011-05-07 03:51:10.733 EDT--- [19879]ERROR: canceling\nautovacuum task\n-------------2011-05-07 03:51:10.733 EDT--- [19879]CONTEXT: automatic vacuum of\ntable \"xxxx.xxx.xxxx\"\n-------------...\n-------------2011-05-07 03:52:48.918 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: \nduration: 103957.270 ms\n-------------2011-05-07 03:52:48.920 EDT-##.##.##.##-xxxx-xxxx [3348]LOG: \ndisconnection: session time: 0:01:43.961 user=xxxx database=xxxx\nhost=##.##.##.## port=60470\n\nWe would like to eliminate this error. A bigger problem is that sometimes\nit seems like autovacuum wins out over \"vacuum full analyze\". \nThis tends to result in a hung job on our client, with other ensuing\ncomplications.\n\n* Our basic question is what method we might be able to use to prevent either\nof these jobs from canceling. What we would like is, instead of\nautovacuum canceling, it rather always defers to \"vacuum full\nanalyze\" job, waiting for it to complete.\n\nI am guessing that we can do the above by setting the\n\"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it\nnot being set at all, as it is right now, and thus inheriting the\n\"200\" default value from vacuum_cost_limit). Does that sound\nright? (If, what might be a good value to set?) Or perhaps there is\na more foolproof way of doing this that does not rely upon guesswork?\n\nAny suggestions at all would be most welcome!\n\nDaniel C.",
"msg_date": "Thu, 7 Jul 2011 16:30:39 -0400",
"msg_from": "D C <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
},
{
"msg_contents": "On Thu, Jul 7, 2011 at 2:30 PM, D C <[email protected]> wrote:\n> Hello,\n>\n> (Apologies for any possible duplication of this email.)\n>\n> (Also, apologies if this is an obvious question. I have gone through the\n> archives without seeing something that directly ties to this.)\n>\n> We are running Postgresql on a 64b RHEL5.2 64b server. \"Uname -a\":\n> --------------Linux xxxxxxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT\n> 2008 x86_64 x86_64 x86_64 GNU/Linux\n>\n> We have autovacuum enabled with the following settings:\n>\n> autovacuum_naptime = 30s\n> autovacuum_vacuum_threshold = 200\n> autovacuum_vacuum_scale_factor = 0.5\n> autovacuum_vacuum_cost_delay = 10\n>\n> In addition to autovacuuming, each day, early, in the morning, we run a full\n> vacuum, like this: \"vacuumdb --all --full --analyze\".\n\nWhy?\n",
"msg_date": "Thu, 7 Jul 2011 15:21:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
},
{
"msg_contents": "On 07/07/2011 04:30 PM, D C wrote:\n>\n> autovacuum_naptime = 30s\n> autovacuum_vacuum_threshold = 200\n> autovacuum_vacuum_scale_factor = 0.5\n> autovacuum_vacuum_cost_delay = 10\n>\n\nThese are slightly strange settings. How did you come up with them? \nThe autovacuum_vacuum_scale_factor being so high is particularly \ndangerous. If anything, you should be reducing that from its default of \n0.2, not increasing it further.\n\n> In addition to autovacuuming, each day, early, in the morning, we run \n> a full vacuum, like this: \"vacuumdb --all --full --analyze\". We do \n> not have any special variable set for vacuum in postgresql.conf.\n>\n\nVACUUM FULL takes an exclusive lock on the table while it runs, and it \nextremely problematic for several other reasons too. See \nhttp://wiki.postgresql.org/wiki/VACUUM_FULL for more information.\n\nYou didn't mention your PostgreSQL version so I can't be sure exactly \nhow bad of a problem you're causing with this, but you should almost \ncertainly stop doing it.\n\n\n> The problem is that once or twice a week, the \"vacuum full analyze\" \n> seems to cancel out the autovacuum that has already started at the \n> same time. E.g.,\n>\n\nYes. VACUUM FULL needs to take a large lock on the table, and it will \nkick out autovacuum in that case, and cause countless other trouble \ntoo. And if the VACUUM FULL is already running, other things will end \nup getting stuck waiting for it, and all sorts of locking issues can \ncome out of that.\n\nYou should remove the \"--full\" from your daily routine, reduce \nautovacuum_vacuum_scale_factor back to a reasonable number again, and \nsee how things go after that. You're trying to use PostgreSQL in a way \nit's known not to work well right now.\n\n> I am guessing that we can do the above by setting the \n> \"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it \n> not being set at all, as it is right now, and thus inheriting the \n> \"200\" default value from vacuum_cost_limit).\n>\n\nThe cost limit has nothing to do with the issue you're seeing. It \nadjust how much work autovacuum does at any moment in time, it isn't \ninvolved in any prioritization.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Thu, 07 Jul 2011 17:30:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
},
{
"msg_contents": "That's a great point about autovacuum_vacuum_scale_factor; I will lower the\nvalue there to 0.2 and see if autovacuum starts doing a better job. (We use\nPostgresql 8.3.5 currently, by the way.)\n\nThanks for the notes and the useful page link on \"vacuum full\". We are\nrunning \"vacuum full\" primarily because a number of tables in our database\nhave a very large amount of data added to them during each day, all of which\nis deleted in one large series of \"delete from\" statements early in the\nmorning before we perform the vacuum. Comments like the one here (\nhttp://www.postgresql.org/docs/9.0/static/routine-vacuuming.html) led us to\nthink that with this type of situation (very large deletes daily) autovacuum\nwould not in the end be sufficient over the long run.\n\nThat said, it sounds like if we switched to daily \"trucates\" of each table\n(they can be purged entirely each day) rather than \"delete froms\", then\nthere truly would not be any reason to use \"vacuum full\". Does that sound\nplausible?\n\nThanks again,\n\nDaniel\n\nOn Thu, Jul 7, 2011 at 5:30 PM, Greg Smith <[email protected]> wrote:\n\n> On 07/07/2011 04:30 PM, D C wrote:\n>\n>>\n>> autovacuum_naptime = 30s\n>> autovacuum_vacuum_threshold = 200\n>> autovacuum_vacuum_scale_factor = 0.5\n>> autovacuum_vacuum_cost_delay = 10\n>>\n>>\n> These are slightly strange settings. How did you come up with them? The\n> autovacuum_vacuum_scale_factor being so high is particularly dangerous. If\n> anything, you should be reducing that from its default of 0.2, not\n> increasing it further.\n>\n>\n> In addition to autovacuuming, each day, early, in the morning, we run a\n>> full vacuum, like this: \"vacuumdb --all --full --analyze\". We do not have\n>> any special variable set for vacuum in postgresql.conf.\n>>\n>>\n> VACUUM FULL takes an exclusive lock on the table while it runs, and it\n> extremely problematic for several other reasons too. See\n> http://wiki.postgresql.org/**wiki/VACUUM_FULL<http://wiki.postgresql.org/wiki/VACUUM_FULL>for more information.\n>\n> You didn't mention your PostgreSQL version so I can't be sure exactly how\n> bad of a problem you're causing with this, but you should almost certainly\n> stop doing it.\n>\n>\n>\n> The problem is that once or twice a week, the \"vacuum full analyze\" seems\n>> to cancel out the autovacuum that has already started at the same time.\n>> E.g.,\n>>\n>>\n> Yes. VACUUM FULL needs to take a large lock on the table, and it will kick\n> out autovacuum in that case, and cause countless other trouble too. And if\n> the VACUUM FULL is already running, other things will end up getting stuck\n> waiting for it, and all sorts of locking issues can come out of that.\n>\n> You should remove the \"--full\" from your daily routine, reduce\n> autovacuum_vacuum_scale_factor back to a reasonable number again, and see\n> how things go after that. You're trying to use PostgreSQL in a way it's\n> known not to work well right now.\n>\n>\n> I am guessing that we can do the above by setting the\n>> \"autovacuum_vacuum_cost_limit\" to a fairly high value (rather than it not\n>> being set at all, as it is right now, and thus inheriting the \"200\" default\n>> value from vacuum_cost_limit).\n>>\n>>\n> The cost limit has nothing to do with the issue you're seeing. 
It adjust\n> how much work autovacuum does at any moment in time, it isn't involved in\n> any prioritization.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> Comprehensive and Customized PostgreSQL Training Classes:\n> http://www.2ndquadrant.us/**postgresql-training/<http://www.2ndquadrant.us/postgresql-training/>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n",
"msg_date": "Fri, 8 Jul 2011 12:46:59 -0400",
"msg_from": "D C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
},
{
"msg_contents": "On 07/08/2011 12:46 PM, D C wrote:\n> That said, it sounds like if we switched to daily \"trucates\" of each \n> table (they can be purged entirely each day) rather than \"delete \n> froms\", then there truly would not be any reason to use \"vacuum \n> full\". Does that sound plausible?\n\n\nThat's exactly right. If you can re-arrange this data to be truncated \ninstead of deleted, this entire problem should go away. There is also a \nnice optimization you should know about; if you do this:\n\nBEGIN;\nTRUNCATE t;\nCOPY t FROM ...\nCOMMIT;\n\nIn single-node systems (no standby slave), this can work much faster \nthan a normal load. It's able to skip the pg_xlog WAL writes in this \nsituation.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Fri, 08 Jul 2011 13:12:46 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"VACUUM FULL ANALYZE\" vs. Autovacuum Contention"
}
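
A rough libpq sketch of the BEGIN / TRUNCATE / COPY / COMMIT pattern Greg describes, driven through the COPY API mentioned earlier in these threads. The table name `t`, its `(id integer, val text)` layout, the sample rows, and the connection string are assumptions for illustration:

```c
/*
 * Sketch: reload a table atomically with TRUNCATE + COPY in one transaction.
 * Table "t(id integer, val text)" and the sample rows are assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

static void die(PGconn *conn, const char *what)
{
    fprintf(stderr, "%s: %s", what, PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK)
        die(conn, "connect");

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "TRUNCATE t"));

    PGresult *res = PQexec(conn, "COPY t (id, val) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
        die(conn, "COPY start");
    PQclear(res);

    /* Stream rows in text COPY format: tab-separated, newline-terminated. */
    const char *rows[] = { "1\tfirst\n", "2\tsecond\n" };
    for (int i = 0; i < 2; i++)
        if (PQputCopyData(conn, rows[i], (int) strlen(rows[i])) != 1)
            die(conn, "PQputCopyData");

    if (PQputCopyEnd(conn, NULL) != 1)
        die(conn, "PQputCopyEnd");
    res = PQgetResult(conn);             /* final status of the COPY command */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "COPY finish");
    PQclear(res);

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```

As Greg notes, the WAL-skipping optimization applies only when the TRUNCATE (or CREATE TABLE) happens in the same transaction as the COPY and the server is not archiving or streaming WAL.
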
]
[
{
"msg_contents": "I am doing some research that will hopefully lead to replacing a big \nOracle installation with a set PostgreSQL servers.\n\nThe current Oracle installations consists of multiple of RAC clusters \nwith 8 RAC nodes each. Each RAC node has 256gb of\nmemory (to be doubled soon).\nThe nature of our service is such that over a reasonable time (a day or \nso) the database *is* the working set.\n\nSo I am looking at Postgres in a context where (almost) all of the data \nis cached and disk IO is only required for persistence.\n\nSetup:\nPostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux \ninstance (kernel 2.6.35) with the database and\nWAL residing on the same EBS volume with EXT4 (data=ordered, barriers=1) \n- yes that is not an ideal setup\n(WAL should be on separate drive, EBS is slow to begin, etc), but I am \nmostly interested in read performance for a fully cached database.\n\nshared_buffers: varied between 1gb and 20gb\ncheckpoint_segments/timeout: varied accordingly between 16-256 and \n5-10m, resp.\nbgwriter tweaked to get a good distribution of checkpoints, bg-writes, \nand backend writes.\nwal_sync_method: tried fdatasync and open_datasync.\n\nI read \"PostgreSQL 9.0 high performance\", and have spent some \nsignificant amount of time on this already.\n\nPostgreSQL holds up extremely well, once things like \"storing \nhint-bits\", checkpoints vs bgwriter vs backend_writes, etc\nare understood. I installed pg_buffercache and pgfincore to monitor how \nand where the database is stored.\n\nThere is one observation that I wasn't able to explain:\nA SELECT only client is severely slowed down by a concurrent client \nperforming UPDATES on the same table the other\nclient selects from, even when the database resides 100% in the cache (I \ntried with shared_buffers large enough to hold\nthe database, and also with a smaller setting relying on the OS cache, \nthe behavior is the same).\n\nAs long as only the reader is running I get great performance (20-30ms, \nquery reading a random set of about 10000 rows\nout of 100m row table in a single SELECT). The backend is close to 100% \ncpu, which is what want in a cached database.\n\nOnce the writer starts the read performance drops almost immediately to \n >200ms.\nThe reading backend's cpu drop drop to <10%, and is mostly waiting (D \nstate in top).\nThe UPDATE touches a random set of also about 10000 rows (in one update \nstatement, one of the columns touched is\nindexed - and that is the same index used for the SELECTs).\n\nWhat I would have expected is that the SELECTs would just continue to \nread from the cached buffers (whether dirtied\nor not) and not be affected by concurrent updates. I could not find \nanything explaining this.\n\nThe most interesting part:\nthat this does not happen with an exact clone of that relation but \nUNLOGGED. The same amount of buffers get dirty,\nthe same amount checkpointing, bgwriting, vacuuming. The only difference \nis WAL maintenance as far as I can tell.\n\nIs there some (intentional or not) synchronization between backend when \nthe WAL is maintained? Are there times when\nread only query needs to compete disk IO when everything is cached? Or \nare there any other explanations?\n\nI am happy to provide more information. Although I am mainly looking for \na qualitative answer, which could explain this behavior.\n\nThanks.\n\n-- Lars\n\n",
"msg_date": "Thu, 07 Jul 2011 16:56:13 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPDATEDs slowing SELECTs in a fully cached database"
},
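
For concreteness, the read side of the workload described above (a single SELECT touching roughly 10,000 consecutive keys chosen at random, timed from the client) might look something like the sketch below. The table `big(id int primary key, val text)`, the 100 million row key space, the aggregate used, and the conninfo string are assumptions for illustration; this is not Lars's actual benchmark code.

```c
/*
 * Sketch of a timed random-range read, loosely modelled on the workload
 * described above. Table "big(id int primary key, val text)", the 100M-row
 * key space and the 10000-row window are assumptions for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Pick a random 10000-key window inside the 100M-row key space. */
    long lo = (long) (rand() / (double) RAND_MAX * (100000000L - 10000));
    char sql[256];
    snprintf(sql, sizeof(sql),
             "SELECT count(*), sum(length(val)) FROM big "
             "WHERE id BETWEEN %ld AND %ld", lo, lo + 9999);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    PGresult *res = PQexec(conn, sql);
    gettimeofday(&t1, NULL);

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%s rows aggregated in %.3f ms\n", PQgetvalue(res, 0, 0),
               (t1.tv_sec - t0.tv_sec) * 1000.0 +
               (t1.tv_usec - t0.tv_usec) / 1000.0);
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```
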
{
"msg_contents": "I have since moved the WAL to its own EBS volume (ext4, data=writeback) \nto make it easier to monitor IO.\nThe times where the SELECTs slow down coincide with heavy write traffic \nto the WAL volume.\n\nMaybe this has to do with WALInsertLock or WALWriteLock (or some other \nlock).\nSince the slowdown was less severe with WAL on its own volume it seems \nsome exclusive lock on the pages in\nshared_buffers is held while WAL IO is in progres(?) - that would be \n\"frustrating\". (wal_buffers default to 16mb in my setup)\n\nNext I am going to have a look at the code. I would be thankful for any \nfurther insights, though :)\n\nThanks.\n\n-- Lars\n\nOn 07/07/2011 04:56 PM, lars wrote:\n> I am doing some research that will hopefully lead to replacing a big \n> Oracle installation with a set PostgreSQL servers.\n>\n> The current Oracle installations consists of multiple of RAC clusters \n> with 8 RAC nodes each. Each RAC node has 256gb of\n> memory (to be doubled soon).\n> The nature of our service is such that over a reasonable time (a day \n> or so) the database *is* the working set.\n>\n> So I am looking at Postgres in a context where (almost) all of the \n> data is cached and disk IO is only required for persistence.\n>\n> Setup:\n> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux \n> instance (kernel 2.6.35) with the database and\n> WAL residing on the same EBS volume with EXT4 (data=ordered, \n> barriers=1) - yes that is not an ideal setup\n> (WAL should be on separate drive, EBS is slow to begin, etc), but I am \n> mostly interested in read performance for a fully cached database.\n>\n> shared_buffers: varied between 1gb and 20gb\n> checkpoint_segments/timeout: varied accordingly between 16-256 and \n> 5-10m, resp.\n> bgwriter tweaked to get a good distribution of checkpoints, bg-writes, \n> and backend writes.\n> wal_sync_method: tried fdatasync and open_datasync.\n>\n> I read \"PostgreSQL 9.0 high performance\", and have spent some \n> significant amount of time on this already.\n>\n> PostgreSQL holds up extremely well, once things like \"storing \n> hint-bits\", checkpoints vs bgwriter vs backend_writes, etc\n> are understood. I installed pg_buffercache and pgfincore to monitor \n> how and where the database is stored.\n>\n> There is one observation that I wasn't able to explain:\n> A SELECT only client is severely slowed down by a concurrent client \n> performing UPDATES on the same table the other\n> client selects from, even when the database resides 100% in the cache \n> (I tried with shared_buffers large enough to hold\n> the database, and also with a smaller setting relying on the OS cache, \n> the behavior is the same).\n>\n> As long as only the reader is running I get great performance \n> (20-30ms, query reading a random set of about 10000 rows\n> out of 100m row table in a single SELECT). The backend is close to \n> 100% cpu, which is what want in a cached database.\n>\n> Once the writer starts the read performance drops almost immediately \n> to >200ms.\n> The reading backend's cpu drop drop to <10%, and is mostly waiting (D \n> state in top).\n> The UPDATE touches a random set of also about 10000 rows (in one \n> update statement, one of the columns touched is\n> indexed - and that is the same index used for the SELECTs).\n>\n> What I would have expected is that the SELECTs would just continue to \n> read from the cached buffers (whether dirtied\n> or not) and not be affected by concurrent updates. 
I could not find \n> anything explaining this.\n>\n> The most interesting part:\n> that this does not happen with an exact clone of that relation but \n> UNLOGGED. The same amount of buffers get dirty,\n> the same amount checkpointing, bgwriting, vacuuming. The only \n> difference is WAL maintenance as far as I can tell.\n>\n> Is there some (intentional or not) synchronization between backend \n> when the WAL is maintained? Are there times when\n> read only query needs to compete disk IO when everything is cached? Or \n> are there any other explanations?\n>\n> I am happy to provide more information. Although I am mainly looking \n> for a qualitative answer, which could explain this behavior.\n>\n> Thanks.\n>\n> -- Lars\n>\n\n",
"msg_date": "Sun, 10 Jul 2011 13:34:04 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 11/07/2011 4:34 AM, lars wrote:\n> I have since moved the WAL to its own EBS volume (ext4, data=writeback)\n> to make it easier to monitor IO.\n> The times where the SELECTs slow down coincide with heavy write traffic\n> to the WAL volume.\n\nIn theory, UPDATEs shouldn't be blocking or slowing SELECTs. Whether \nthat holds up to the light of reality, real-world hardware, and software \nimplementation detail, I really don't know. I avoided responding to your \nfirst mail because I generally work with smaller and less performance \ncritical databases so I haven't accumulated much experience with \nfine-tuning.\n\nIf your SELECTs were slower *after* your UPDATEs I'd be wondering if \nyour SELECTs are setting hint bits on the pages touched by the UPDATEs. \nSee: http://wiki.postgresql.org/wiki/Hint_Bits . It doesn't sound like \nthat's the case if the SELECTs are slowed down *during* a big UPDATE \nthat hasn't yet committed, though.\n\nCould it just be cache pressure - either on shm, or operating system \ndisk cache? All the dirty buffers that have to be flushed to WAL and to \nthe heap may be evicting cached data your SELECTs were benefitting from. \nUnfortunately, diagnostics in this area are ... limited ... though some \nof the pg_catalog views \n(http://www.postgresql.org/docs/9.0/static/catalogs.html) may offer some \ninformation.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Mon, 11 Jul 2011 07:11:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "Thanks Craig.\n\nYep, I am not seeing the SELECTs slow down (measurably) during checkpoints \n(i.e. when dirty pages are flushed to disk), but only during writing of the WAL \nfiles. The buffers shared and OS are big enough to hold the entire database, so \nevicting cached data should not be necessary. (The database including indexes \ncan fit into 16 or so GB, and I have 68GB on that machine).\nInterestingly I initially thought there might be a correlation between \ncheckpointing and slower SELECTs, but it turns out that checkpointing just \nslowed down IO to the WAL - until I move it to its own drive, and then increased \nthe effect I was seeing.\n\nI'll do more research and try to provide more useful details.\n\nThanks for the pg_catalog link, I'll have a look at it.\n\n-- Lars\n\n\n\n----- Original Message ----\nFrom: Craig Ringer <[email protected]>\nTo: [email protected]\nSent: Sun, July 10, 2011 4:11:39 PM\nSubject: Re: [PERFORM] UPDATEDs slowing SELECTs in a fully cached database\n\nOn 11/07/2011 4:34 AM, lars wrote:\n> I have since moved the WAL to its own EBS volume (ext4, data=writeback)\n> to make it easier to monitor IO.\n> The times where the SELECTs slow down coincide with heavy write traffic\n> to the WAL volume.\n\nIn theory, UPDATEs shouldn't be blocking or slowing SELECTs. Whether that holds \nup to the light of reality, real-world hardware, and software implementation \ndetail, I really don't know. I avoided responding to your first mail because I \ngenerally work with smaller and less performance critical databases so I haven't \naccumulated much experience with fine-tuning.\n\nIf your SELECTs were slower *after* your UPDATEs I'd be wondering if your \nSELECTs are setting hint bits on the pages touched by the UPDATEs. See: \nhttp://wiki.postgresql.org/wiki/Hint_Bits . It doesn't sound like that's the \ncase if the SELECTs are slowed down *during* a big UPDATE that hasn't yet \ncommitted, though.\n\nCould it just be cache pressure - either on shm, or operating system disk cache? \nAll the dirty buffers that have to be flushed to WAL and to the heap may be \nevicting cached data your SELECTs were benefitting from. Unfortunately, \ndiagnostics in this area are ... limited ... though some of the pg_catalog views \n(http://www.postgresql.org/docs/9.0/static/catalogs.html) may offer some \ninformation.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sun, 10 Jul 2011 21:47:33 -0700 (PDT)",
"msg_from": "lars hofhansl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "Hi Lars,\n\nI do not know if this makes sense in PostgreSQL and that readers\ndo not block writers and writes do not block readers. Are your\nUPDATEs to individual rows, each in a separate transaction, or\ndo you UPDATE multiple rows in the same transaction? If you\nperform multiple updates in a single transaction, you are\nsynchronizing the changes to that set of rows and that constraint\nis causing other readers that need to get the correct values post-\ntransaction to wait until the COMMIT completes. This means that\nthe WAL write must be completed.\n\nHave you tried disabling synchronous_commit? If this scenario\nholds, you should be able to reduce the slowdown by un-batching\nyour UPDATEs, as counter-intuitive as that is. This seems to\nbe similar to a problem that I have been looking at with using\nPostgreSQL as the backend to a Bayesian engine. I am following\nthis thread with interest.\n\nRegards,\nKen\n\nOn Thu, Jul 07, 2011 at 04:56:13PM -0700, lars wrote:\n> I am doing some research that will hopefully lead to replacing a big\n> Oracle installation with a set PostgreSQL servers.\n> \n> The current Oracle installations consists of multiple of RAC\n> clusters with 8 RAC nodes each. Each RAC node has 256gb of\n> memory (to be doubled soon).\n> The nature of our service is such that over a reasonable time (a day\n> or so) the database *is* the working set.\n> \n> So I am looking at Postgres in a context where (almost) all of the\n> data is cached and disk IO is only required for persistence.\n> \n> Setup:\n> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux\n> instance (kernel 2.6.35) with the database and\n> WAL residing on the same EBS volume with EXT4 (data=ordered,\n> barriers=1) - yes that is not an ideal setup\n> (WAL should be on separate drive, EBS is slow to begin, etc), but I\n> am mostly interested in read performance for a fully cached\n> database.\n> \n> shared_buffers: varied between 1gb and 20gb\n> checkpoint_segments/timeout: varied accordingly between 16-256 and\n> 5-10m, resp.\n> bgwriter tweaked to get a good distribution of checkpoints,\n> bg-writes, and backend writes.\n> wal_sync_method: tried fdatasync and open_datasync.\n> \n> I read \"PostgreSQL 9.0 high performance\", and have spent some\n> significant amount of time on this already.\n> \n> PostgreSQL holds up extremely well, once things like \"storing\n> hint-bits\", checkpoints vs bgwriter vs backend_writes, etc\n> are understood. I installed pg_buffercache and pgfincore to monitor\n> how and where the database is stored.\n> \n> There is one observation that I wasn't able to explain:\n> A SELECT only client is severely slowed down by a concurrent client\n> performing UPDATES on the same table the other\n> client selects from, even when the database resides 100% in the\n> cache (I tried with shared_buffers large enough to hold\n> the database, and also with a smaller setting relying on the OS\n> cache, the behavior is the same).\n> \n> As long as only the reader is running I get great performance\n> (20-30ms, query reading a random set of about 10000 rows\n> out of 100m row table in a single SELECT). 
The backend is close to\n> 100% cpu, which is what want in a cached database.\n> \n> Once the writer starts the read performance drops almost immediately\n> to >200ms.\n> The reading backend's cpu drop drop to <10%, and is mostly waiting\n> (D state in top).\n> The UPDATE touches a random set of also about 10000 rows (in one\n> update statement, one of the columns touched is\n> indexed - and that is the same index used for the SELECTs).\n> \n> What I would have expected is that the SELECTs would just continue\n> to read from the cached buffers (whether dirtied\n> or not) and not be affected by concurrent updates. I could not find\n> anything explaining this.\n> \n> The most interesting part:\n> that this does not happen with an exact clone of that relation but\n> UNLOGGED. The same amount of buffers get dirty,\n> the same amount checkpointing, bgwriting, vacuuming. The only\n> difference is WAL maintenance as far as I can tell.\n> \n> Is there some (intentional or not) synchronization between backend\n> when the WAL is maintained? Are there times when\n> read only query needs to compete disk IO when everything is cached?\n> Or are there any other explanations?\n> \n> I am happy to provide more information. Although I am mainly looking\n> for a qualitative answer, which could explain this behavior.\n> \n> Thanks.\n> \n> -- Lars\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n",
"msg_date": "Mon, 11 Jul 2011 08:13:48 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Mon, Jul 11, 2011 at 3:13 PM, [email protected] <[email protected]> wrote:\n> I do not know if this makes sense in PostgreSQL and that readers\n> do not block writers and writes do not block readers. Are your\n> UPDATEs to individual rows, each in a separate transaction, or\n> do you UPDATE multiple rows in the same transaction? If you\n> perform multiple updates in a single transaction, you are\n> synchronizing the changes to that set of rows and that constraint\n> is causing other readers that need to get the correct values post-\n> transaction to wait until the COMMIT completes. This means that\n> the WAL write must be completed.\n\nWhat readers should that be? Docs explicitly state that readers are\nnever blocked by writers:\nhttp://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html\nhttp://www.postgresql.org/docs/9.0/interactive/mvcc.html\n\n From what I understand about this issue the observed effect must be\ncaused by the implementation and not by a conceptual issue with\ntransactions.\n\n> Have you tried disabling synchronous_commit? If this scenario\n> holds, you should be able to reduce the slowdown by un-batching\n> your UPDATEs, as counter-intuitive as that is. This seems to\n> be similar to a problem that I have been looking at with using\n> PostgreSQL as the backend to a Bayesian engine. I am following\n> this thread with interest.\n\nI don't think this will help (see above). Also, I would be very\ncautious to do this because although the client might get a faster\nacknowledge the DB still has to do the same work as without\nsynchronous_commit (i.e. WAL, checkpointing etc.) but it still has to\ndo significantly more transactions than in the batched version.\n\nTypically there is an optimum batch size: if batch size is too small\n(say, one row) the ratio of TX overhead to \"work\" is too bad. If\nbatch size is too large (say, millions of rows) you hit resource\nlimitations (memory) which inevitable force the RDBMS to do additional\ndisk IO.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 11 Jul 2011 17:26:49 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Mon, Jul 11, 2011 at 05:26:49PM +0200, Robert Klemme wrote:\n> On Mon, Jul 11, 2011 at 3:13 PM, [email protected] <[email protected]> wrote:\n> > I do not know if this makes sense in PostgreSQL and that readers\n> > do not block writers and writes do not block readers. Are your\n> > UPDATEs to individual rows, each in a separate transaction, or\n> > do you UPDATE multiple rows in the same transaction? If you\n> > perform multiple updates in a single transaction, you are\n> > synchronizing the changes to that set of rows and that constraint\n> > is causing other readers that need to get the correct values post-\n> > transaction to wait until the COMMIT completes. This means that\n> > the WAL write must be completed.\n> \n> What readers should that be? Docs explicitly state that readers are\n> never blocked by writers:\n> http://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html\n> http://www.postgresql.org/docs/9.0/interactive/mvcc.html\n> \n> From what I understand about this issue the observed effect must be\n> caused by the implementation and not by a conceptual issue with\n> transactions.\n> \n> > Have you tried disabling synchronous_commit? If this scenario\n> > holds, you should be able to reduce the slowdown by un-batching\n> > your UPDATEs, as counter-intuitive as that is. This seems to\n> > be similar to a problem that I have been looking at with using\n> > PostgreSQL as the backend to a Bayesian engine. I am following\n> > this thread with interest.\n> \n> I don't think this will help (see above). Also, I would be very\n> cautious to do this because although the client might get a faster\n> acknowledge the DB still has to do the same work as without\n> synchronous_commit (i.e. WAL, checkpointing etc.) but it still has to\n> do significantly more transactions than in the batched version.\n> \n> Typically there is an optimum batch size: if batch size is too small\n> (say, one row) the ratio of TX overhead to \"work\" is too bad. If\n> batch size is too large (say, millions of rows) you hit resource\n> limitations (memory) which inevitable force the RDBMS to do additional\n> disk IO.\n> \n> Kind regards\n> \n> robert\n> \nOkay,\n\nIf we assume that the current implementation of MVCC is preventing\nreaders from blocking writers and writers from blocking readers, then\nthe application may have some statements that are implicitly locking\nthe database and that is conflicting with the UPDATEs. Maybe the\nslowdown is caused by index updates caused by the write activity.\nJust throwing out some ideas.\n\nRegards,\nKen\nregarding index updates with the read-only queries. \n> -- \n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n> \n",
"msg_date": "Mon, 11 Jul 2011 11:14:58 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "lars hofhansl <[email protected]> wrote:\n \n> Yep, I am not seeing the SELECTs slow down (measurably) during \n> checkpoints (i.e. when dirty pages are flushed to disk), but only\n> during writing of the WAL files.\n \nHow about if you do a whole slew of the UPDATEs and then stop those\nand run a bunch of SELECTs? (I don't think I've seen anything\nmentioned so far which rules out hint bit rewrites as an issue.)\n \nI see you have tweaked things to balance the writes -- you might\nwant to try further adjustments to reduce backend writes and see\nwhat happens.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jul 2011 12:33:47 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
{
"msg_contents": "On 07/11/2011 10:33 AM, Kevin Grittner wrote:\n> lars hofhansl<[email protected]> wrote:\n>\n>> Yep, I am not seeing the SELECTs slow down (measurably) during\n>> checkpoints (i.e. when dirty pages are flushed to disk), but only\n>> during writing of the WAL files.\n>\n> How about if you do a whole slew of the UPDATEs and then stop those\n> and run a bunch of SELECTs? (I don't think I've seen anything\n> mentioned so far which rules out hint bit rewrites as an issue.)\n>\n> I see you have tweaked things to balance the writes -- you might\n> want to try further adjustments to reduce backend writes and see\n> what happens.\n>\n> -Kevin\n\nHmm... You are right. Stopping the UPDATEs, waiting for any CHECKPOINTs \nto finish,\nand then running the SELECTs indeed shows a similar slowdown.\n\nInterestingly I see very heavy WAL traffic while executing the SELECTs.\n(So I was confused as to what caused the WAL traffic).\n\nWhy do changes to the hint bits need to be logged to the WAL? If we \nloose them we can always get that information back from the commit log.\nMaybe the backend does not know why the page is dirty and will write it \nto the WAL anyway(?)\nIf that is the case there seems to be room to optimize that.\n\n-- Lars\n\n",
"msg_date": "Mon, 11 Jul 2011 11:33:14 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t database"
},
{
"msg_contents": "lars <[email protected]> wrote:\n \n> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n> and then running the SELECTs indeed shows a similar slowdown.\n> \n> Interestingly I see very heavy WAL traffic while executing the\n> SELECTs. (So I was confused as to what caused the WAL traffic).\n \nHint bit changes aren't logged, so if it was that you would be\nseeing writes to the heap, but not to the WAL. Clean-up of dead\ntuples is logged -- this is probably the result of pruning dead\ntuples. You could probably reduce the impact on your SELECT\nstatements at least a little by making autovacuum more aggressive.\n \nAt some point you could see the overhead of autovacuum having some\nimpact on your SELECT statements, so you may need to experiment to\nfind the \"sweet spot\" for your load.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jul 2011 14:16:44 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\n\t database"
},
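
One concrete way to act on Kevin's suggestion of more aggressive autovacuum, on the 9.x release used in this thread, is a per-table storage parameter. The table name `big` and the specific values in this sketch are placeholders to show the knobs involved, not recommendations:

```c
/*
 * Sketch: make autovacuum more aggressive for one heavily-updated table via
 * per-table storage parameters (available on the 9.x release discussed here).
 * The table name "big" and the values are placeholders, not recommendations.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect: %s", PQerrorMessage(conn));
        return 1;
    }

    PGresult *res = PQexec(conn,
        "ALTER TABLE big SET ("
        "  autovacuum_vacuum_scale_factor = 0.01,"
        "  autovacuum_vacuum_cost_delay = 0)");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "ALTER TABLE failed: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```

Equivalent settings can also be made globally in postgresql.conf; whether more aggressive autovacuum actually helps is exactly the experiment Kevin proposes.
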
{
"msg_contents": "On Mon, Jul 11, 2011 at 2:16 PM, Kevin Grittner\n<[email protected]> wrote:\n> lars <[email protected]> wrote:\n>\n>> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n>> and then running the SELECTs indeed shows a similar slowdown.\n>>\n>> Interestingly I see very heavy WAL traffic while executing the\n>> SELECTs. (So I was confused as to what caused the WAL traffic).\n>\n> Hint bit changes aren't logged, so if it was that you would be\n> seeing writes to the heap, but not to the WAL. Clean-up of dead\n> tuples is logged -- this is probably the result of pruning dead\n> tuples. You could probably reduce the impact on your SELECT\n> statements at least a little by making autovacuum more aggressive.\n\nyeah. In fact, I'd like to disable autovacuum completely just to\nconfirm this. In particular I'd like to know if that removes wal\ntraffic when only selects are going on. Another way to check is to\nthrow some queries to pg_stat_activity during your select period and\nsee if any non-select activity (like autovacum vacuum). Basically I'm\nsuspicious there is more to this story.\n\nhint bit flusing causing i/o during SELECT is a typical complaint\n(especially on systems with underperformant i/o), but I'm suspicious\nif that's really the problem here. Since you are on a virtualized\nplatform, I can't help but wonder if you are running into some\nbottleneck that you wouldn't see on native hardware.\n\nWhat's iowait during the slow period?\n\nmerlin\n",
"msg_date": "Mon, 11 Jul 2011 16:43:26 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
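A hedged sketch of the pg_stat_activity check suggested above; the column names are the 9.1-era ones that appear elsewhere in this thread (9.2 and later rename procpid/current_query to pid/query):

-- Run repeatedly during the SELECT period; autovacuum workers show up
-- with a current_query of the form 'autovacuum: VACUUM ...'.
SELECT procpid, usename, waiting, xact_start, current_query
FROM pg_stat_activity
WHERE current_query NOT ILIKE 'select%'
  AND current_query <> '<IDLE>';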
{
"msg_contents": "Merlin Moncure <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n>> lars <[email protected]> wrote:\n>>\n>>> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n>>> and then running the SELECTs indeed shows a similar slowdown.\n>>>\n>>> Interestingly I see very heavy WAL traffic while executing the\n>>> SELECTs. (So I was confused as to what caused the WAL traffic).\n>>\n>> Hint bit changes aren't logged, so if it was that you would be\n>> seeing writes to the heap, but not to the WAL. Clean-up of dead\n>> tuples is logged -- this is probably the result of pruning dead\n>> tuples. You could probably reduce the impact on your SELECT\n>> statements at least a little by making autovacuum more\n>> aggressive.\n> \n> yeah. In fact, I'd like to disable autovacuum completely just to\n> confirm this.\n \nIf I'm right, disabling autovacuum would tend to make this *worse*.\n \n> In particular I'd like to know if that removes wal traffic when\n> only selects are going on.\n \nMy guess: no.\n \n> Another way to check is to throw some queries to pg_stat_activity\n> during your select period and see if any non-select activity (like\n> autovacum vacuum).\n \nThat's not a bad thing to check, but be careful what causality you\nassume based on a correlation here -- blaming autovacuum might be a\nbit like like blaming firefighters for fires, because you keep\nseeing them at the same time. You might actually want them to\nrespond faster and more aggressively, rather than keeping them away.\n \nYou do realize, that just reading a page with dead tuples can cause\ndead tuple pruning, right? No autovacuum involved. Your SELECT\nstatement waits for things to be tidied up and the page is marked\ndirty. I'm thinking that more aggressive autovacuum runs would\nclean more of this up in background processes and let the SELECT\nstatement avoid some of this work -- speeding them up.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jul 2011 16:55:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
{
"msg_contents": "On Mon, Jul 11, 2011 at 4:55 PM, Kevin Grittner\n<[email protected]> wrote:\n> Merlin Moncure <[email protected]> wrote:\n>> Kevin Grittner <[email protected]> wrote:\n>>> lars <[email protected]> wrote:\n>>>\n>>>> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n>>>> and then running the SELECTs indeed shows a similar slowdown.\n>>>>\n>>>> Interestingly I see very heavy WAL traffic while executing the\n>>>> SELECTs. (So I was confused as to what caused the WAL traffic).\n>>>\n>>> Hint bit changes aren't logged, so if it was that you would be\n>>> seeing writes to the heap, but not to the WAL. Clean-up of dead\n>>> tuples is logged -- this is probably the result of pruning dead\n>>> tuples. You could probably reduce the impact on your SELECT\n>>> statements at least a little by making autovacuum more\n>>> aggressive.\n>>\n>> yeah. In fact, I'd like to disable autovacuum completely just to\n>> confirm this.\n>\n> If I'm right, disabling autovacuum would tend to make this *worse*.\n>\n>> In particular I'd like to know if that removes wal traffic when\n>> only selects are going on.\n>\n> My guess: no.\n>\n>> Another way to check is to throw some queries to pg_stat_activity\n>> during your select period and see if any non-select activity (like\n>> autovacum vacuum).\n>\n> That's not a bad thing to check, but be careful what causality you\n> assume based on a correlation here -- blaming autovacuum might be a\n> bit like like blaming firefighters for fires, because you keep\n> seeing them at the same time. You might actually want them to\n> respond faster and more aggressively, rather than keeping them away.\n>\n> You do realize, that just reading a page with dead tuples can cause\n> dead tuple pruning, right? No autovacuum involved. Your SELECT\n> statement waits for things to be tidied up and the page is marked\n> dirty. I'm thinking that more aggressive autovacuum runs would\n> clean more of this up in background processes and let the SELECT\n> statement avoid some of this work -- speeding them up.\n\nYeah, but that doesn't jive with the facts as I understand them -- 10k\nrecords are being written at random place and 10k records are being\nread at random place on some large table. I'm assuming (!) that most\nof the time the 'selects' are not hitting tuples that are recently\nupdated unless the table is relatively small against the 10k window\nsize. If the select is reading a bunch of un-recently-updated tuples,\nand this is not a sequential scan, and autovauum is not running and\ncausing sideband i/o, wal activity should be small. What percentage\nof the table is updated at the end of the test? If it's high, or\ngreater than 1.0 write ration, then disabling AV would be a huge\nnegative.\n\nOr maybe the random select/update index is synchronized -- but then\nautovacuum wouldn't really be a player either way.\n\nmerlin\n",
"msg_date": "Mon, 11 Jul 2011 17:11:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/11/2011 02:43 PM, Merlin Moncure wrote:\n> On Mon, Jul 11, 2011 at 2:16 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> lars<[email protected]> wrote:\n>>\n>>> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n>>> and then running the SELECTs indeed shows a similar slowdown.\n>>>\n>>> Interestingly I see very heavy WAL traffic while executing the\n>>> SELECTs. (So I was confused as to what caused the WAL traffic).\n>> Hint bit changes aren't logged, so if it was that you would be\n>> seeing writes to the heap, but not to the WAL. Clean-up of dead\n>> tuples is logged -- this is probably the result of pruning dead\n>> tuples. You could probably reduce the impact on your SELECT\n>> statements at least a little by making autovacuum more aggressive.\n> yeah. In fact, I'd like to disable autovacuum completely just to\n> confirm this. In particular I'd like to know if that removes wal\n> traffic when only selects are going on. Another way to check is to\n> throw some queries to pg_stat_activity during your select period and\n> see if any non-select activity (like autovacum vacuum). Basically I'm\n> suspicious there is more to this story.\n>\n> hint bit flusing causing i/o during SELECT is a typical complaint\n> (especially on systems with underperformant i/o), but I'm suspicious\n> if that's really the problem here. Since you are on a virtualized\n> platform, I can't help but wonder if you are running into some\n> bottleneck that you wouldn't see on native hardware.\n>\n> What's iowait during the slow period?\n>\n> merlin\nThanks Kevin and Merlin this is extremely helpful...\n\nOk, that makes much more sense (WALing hint bits did not make sense).\n\nI disabled auto-vacuum and did four tests:\n1. Run a bunch of updates, stop that process, wait until checkpointing \nis finished, and run the selects (as before).\n2. run VACUUM manually, then run the SELECTs\n3. Have the UPDATEs and SELECTs touch a mutually exclusive, random sets \nof row (still in sets of 10000).\nSo the SELECTs are guaranteed not to select rows that were updated.\n4. Lastly, change the UPDATEs to update a non-indexed column. To rule \nout Index maintenance. Still distinct set of rows.\n\nIn the first case I see the same slowdown (from ~14ms to ~400-500ms). \npg_stat_activity shows no other load during that\ntime. I also see write activity only on the WAL volume.\n\nIn the 2nd case after VACUUM is finished the time is back to 14ms. As an \naside: If I run the SELECTs while VACUUM is\nrunning the slowdown is about the same as in the first case until \n(apparently) VACUUM has cleaned up most of the table,\nat which point the SELECTs become faster again (~50ms).\n\nIn the 3rd case I see exactly the same behavior, which is interesting. \nBoth before VACUUM is run and after.\nThere's no guarantee obviously that distinct rows do not share the same \npage of course especially since the index is\nupdated as part of this (but see the 4th case).\n\nIn case 4 I still see the same issue. Again both before and after VACUUM.\n\nIn all cases I see from pg_stat_bgwriter that no backend writes buffers \ndirectly (but I think that only pertains to dirty buffers, and not the WAL).\n\nSo I think I have a partial answer to my initial question.\n\nHowever, that brings me to some other questions:\nWhy do SELECTs cause dead tuples to be pruned (as Kevin suggests)?\nThat even happens when the updates did not touch the selected rows(?)\nAnd why does that slow down the SELECTs? 
(checkpointing activity on the \nEBS volume holding the database for example\ndoes not slow down SELECTs at all, only WAL activity does). Does the \nselecting backend do that work itself?\n\nLastly, is this documented somewhere? (I apologize if it is and I missed \nit). If not I'd be happy to write a wiki entry for this.\n\n",
"msg_date": "Mon, 11 Jul 2011 16:02:37 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
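One way to confirm from SQL alone that the SELECTs themselves are generating WAL is to compare the WAL insert position before and after a query on an otherwise idle system. A minimal sketch, using the 9.1 function name (PostgreSQL 10 and later call this pg_current_wal_insert_lsn()) and the table and literals from the test case that appears later in the thread:

SELECT pg_current_xlog_insert_location();   -- note the position

SELECT count(*) FROM test
WHERE tenant = '000000000000001' AND created_date = '2011-6-30';

SELECT pg_current_xlog_insert_location();   -- if this advanced by a large amount,
                                            -- the "read-only" query wrote WAL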
{
"msg_contents": "On 07/11/2011 04:02 PM, lars wrote:\n> On 07/11/2011 02:43 PM, Merlin Moncure wrote:\n>> On Mon, Jul 11, 2011 at 2:16 PM, Kevin Grittner\n>> <[email protected]> wrote:\n>>> lars<[email protected]> wrote:\n>>>\n>>>> Stopping the UPDATEs, waiting for any CHECKPOINTs to finish,\n>>>> and then running the SELECTs indeed shows a similar slowdown.\n>>>>\n>>>> Interestingly I see very heavy WAL traffic while executing the\n>>>> SELECTs. (So I was confused as to what caused the WAL traffic).\n>>> Hint bit changes aren't logged, so if it was that you would be\n>>> seeing writes to the heap, but not to the WAL. Clean-up of dead\n>>> tuples is logged -- this is probably the result of pruning dead\n>>> tuples. You could probably reduce the impact on your SELECT\n>>> statements at least a little by making autovacuum more aggressive.\n>> yeah. In fact, I'd like to disable autovacuum completely just to\n>> confirm this. In particular I'd like to know if that removes wal\n>> traffic when only selects are going on. Another way to check is to\n>> throw some queries to pg_stat_activity during your select period and\n>> see if any non-select activity (like autovacum vacuum). Basically I'm\n>> suspicious there is more to this story.\n>>\n>> hint bit flusing causing i/o during SELECT is a typical complaint\n>> (especially on systems with underperformant i/o), but I'm suspicious\n>> if that's really the problem here. Since you are on a virtualized\n>> platform, I can't help but wonder if you are running into some\n>> bottleneck that you wouldn't see on native hardware.\n>>\n>> What's iowait during the slow period?\n>>\n>> merlin\n> Thanks Kevin and Merlin this is extremely helpful...\n>\n> Ok, that makes much more sense (WALing hint bits did not make sense).\n>\n> I disabled auto-vacuum and did four tests:\n> 1. Run a bunch of updates, stop that process, wait until checkpointing \n> is finished, and run the selects (as before).\n> 2. run VACUUM manually, then run the SELECTs\n> 3. Have the UPDATEs and SELECTs touch a mutually exclusive, random \n> sets of row (still in sets of 10000).\n> So the SELECTs are guaranteed not to select rows that were updated.\n> 4. Lastly, change the UPDATEs to update a non-indexed column. To rule \n> out Index maintenance. Still distinct set of rows.\n>\n> In the first case I see the same slowdown (from ~14ms to ~400-500ms). \n> pg_stat_activity shows no other load during that\n> time. I also see write activity only on the WAL volume.\n>\n> In the 2nd case after VACUUM is finished the time is back to 14ms. As \n> an aside: If I run the SELECTs while VACUUM is\n> running the slowdown is about the same as in the first case until \n> (apparently) VACUUM has cleaned up most of the table,\n> at which point the SELECTs become faster again (~50ms).\n>\n> In the 3rd case I see exactly the same behavior, which is interesting. \n> Both before VACUUM is run and after.\n> There's no guarantee obviously that distinct rows do not share the \n> same page of course especially since the index is\n> updated as part of this (but see the 4th case).\n>\n> In case 4 I still see the same issue. 
Again both before and after VACUUM.\n>\n> In all cases I see from pg_stat_bgwriter that no backend writes \n> buffers directly (but I think that only pertains to dirty buffers, and \n> not the WAL).\n>\n> So I think I have a partial answer to my initial question.\n>\n> However, that brings me to some other questions:\n> Why do SELECTs cause dead tuples to be pruned (as Kevin suggests)?\n> That even happens when the updates did not touch the selected rows(?)\n> And why does that slow down the SELECTs? (checkpointing activity on \n> the EBS volume holding the database for example\n> does not slow down SELECTs at all, only WAL activity does). Does the \n> selecting backend do that work itself?\n>\n> Lastly, is this documented somewhere? (I apologize if it is and I \n> missed it). If not I'd be happy to write a wiki entry for this.\n>\n>\nOh, and iowait hovers around 20% when SELECTs are slow:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 1.54 0.00 0.98 18.49 0.07 78.92\n\nWhen SELECTs are fast it looks like this:\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.72 0.00 0.26 0.00 0.00 91.01\n\nNote that this is a 12 core VM. So one core at 100% would show as 8.33% CPU.\n\n",
"msg_date": "Mon, 11 Jul 2011 17:09:19 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/11/2011 08:26 AM, Robert Klemme wrote:\n> On Mon, Jul 11, 2011 at 3:13 PM, [email protected]<[email protected]> wrote:\n>> I do not know if this makes sense in PostgreSQL and that readers\n>> do not block writers and writes do not block readers. Are your\n>> UPDATEs to individual rows, each in a separate transaction, or\n>> do you UPDATE multiple rows in the same transaction? If you\n>> perform multiple updates in a single transaction, you are\n>> synchronizing the changes to that set of rows and that constraint\n>> is causing other readers that need to get the correct values post-\n>> transaction to wait until the COMMIT completes. This means that\n>> the WAL write must be completed.\n> What readers should that be? Docs explicitly state that readers are\n> never blocked by writers:\n> http://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html\n> http://www.postgresql.org/docs/9.0/interactive/mvcc.html\n>\n> From what I understand about this issue the observed effect must be\n> caused by the implementation and not by a conceptual issue with\n> transactions.\n>\n>> Have you tried disabling synchronous_commit? If this scenario\n>> holds, you should be able to reduce the slowdown by un-batching\n>> your UPDATEs, as counter-intuitive as that is. This seems to\n>> be similar to a problem that I have been looking at with using\n>> PostgreSQL as the backend to a Bayesian engine. I am following\n>> this thread with interest.\n> I don't think this will help (see above). Also, I would be very\n> cautious to do this because although the client might get a faster\n> acknowledge the DB still has to do the same work as without\n> synchronous_commit (i.e. WAL, checkpointing etc.) but it still has to\n> do significantly more transactions than in the batched version.\n>\n> Typically there is an optimum batch size: if batch size is too small\n> (say, one row) the ratio of TX overhead to \"work\" is too bad. If\n> batch size is too large (say, millions of rows) you hit resource\n> limitations (memory) which inevitable force the RDBMS to do additional\n> disk IO.\n>\n> Kind regards\n>\n> robert\n>\nThanks Ken and Robert,\n\nWhat I am observing is definitely not readers blocked by writers by \nmeans of row-level locking.\n\nThis seems to be some implementation detail in Postgres about how dirty \npages (or dead versions of tuples) are\nflushed to the disk (see the other parts of this thread) when they \naccessed by a SELECT query.\n\nThe batch size in this case is one SELECT statement accessing 10000 rows \nvia an aggregate (such as COUNT) and\nan UPDATE updating 10000 rows in a single statement.\n\nI am not trying to optimize this particular use case, but rather to \nunderstand what Postgres is doing, and why SELECT\nqueries are affected negatively (sometimes severely) by concurrent (or \neven preceding) UPDATEs at all when the\ndatabase resides in the cache completely.\n\n-- Lars\n\n",
"msg_date": "Mon, 11 Jul 2011 21:42:45 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 08/07/2011 01:56, lars wrote:\n\n> Setup:\n> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux\n> instance (kernel 2.6.35) with the database and\n> WAL residing on the same EBS volume with EXT4 (data=ordered, barriers=1)\n> - yes that is not an ideal setup\n> (WAL should be on separate drive, EBS is slow to begin, etc), but I am\n> mostly interested in read performance for a fully cached database.\n\nI know you said you know these things - but do you really know the \n(huge) extent to which all your IO is slowed? Even context switches in a \nvirtualized environment are slowed down by a huge margin - which would \nmake practically all in-memory lock operations very slow - much slower \nthan they would be on \"real\" hardware, and EBS by definition is even \nslower then regular private virtual storage environments. I regrettably \ndidn't bookmark the page which did exact measurements of EBS, but \nhttp://www.google.com/search?q=how+slow+is+ec2 will illustrate my point. \n(of course, you may already know all this :) ).\n\n\n",
"msg_date": "Tue, 12 Jul 2011 15:22:46 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Tue, Jul 12, 2011 at 8:22 AM, Ivan Voras <[email protected]> wrote:\n> On 08/07/2011 01:56, lars wrote:\n>\n>> Setup:\n>> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux\n>> instance (kernel 2.6.35) with the database and\n>> WAL residing on the same EBS volume with EXT4 (data=ordered, barriers=1)\n>> - yes that is not an ideal setup\n>> (WAL should be on separate drive, EBS is slow to begin, etc), but I am\n>> mostly interested in read performance for a fully cached database.\n>\n> I know you said you know these things - but do you really know the (huge)\n> extent to which all your IO is slowed? Even context switches in a\n> virtualized environment are slowed down by a huge margin - which would make\n> practically all in-memory lock operations very slow - much slower than they\n> would be on \"real\" hardware, and EBS by definition is even slower then\n> regular private virtual storage environments. I regrettably didn't bookmark\n> the page which did exact measurements of EBS, but\n> http://www.google.com/search?q=how+slow+is+ec2 will illustrate my point. (of\n> course, you may already know all this :) ).\n\nsure, but the OP's question is valid: in postgres, readers don't block\nwriters, so why is the reader waiting? I'd like to know definitively:\n*) is the reader bottlenecked on disk i/o (it seems yes)\n*) is that disk i/o heap or wal (it seems wal)\n*) is that disk i/o reading/writing (it seems writing)\n*) given the above, why is this happening (presumably disk page tidying)?\n\nWe need some more information here. I'd like to see the table\ninformation -- at least the average width of the record both pre/post\nupdate, if it is or is not toasted, and the number of size and indexes\npre/post update. I'm really suspicious of the virtualization tech as\nwell -- is it possible to run this test on at least semi decent native\nhardware?\n\nmerlin\n",
"msg_date": "Tue, 12 Jul 2011 09:18:08 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
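The table information asked for above can be pulled from the catalogs; a rough sketch (the table name matches the test script posted later in the thread, and ANALYZE must have run for pg_stats to be populated):

-- Average column widths as seen by the planner:
SELECT attname, avg_width
FROM pg_stats
WHERE tablename = 'test';

-- Heap (including TOAST) and index footprint:
SELECT pg_size_pretty(pg_table_size('test'))          AS heap_and_toast,
       pg_size_pretty(pg_indexes_size('test'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('test')) AS total;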
{
"msg_contents": "lars <[email protected]> wrote:\n \n> I am not trying to optimize this particular use case, but rather\n> to understand what Postgres is doing, and why SELECT queries are\n> affected negatively (sometimes severely) by concurrent (or even\n> preceding) UPDATEs at all when the database resides in the cache\n> completely.\n \nI think we're stuck in terms of trying to guess what is causing the\nbehavior you are seeing. Things which would help get it \"unstuck\":\n \n(1) More information about your load process. Looking at the code,\nI could sort of see a possible path to this behavior if the load\nprocess involves any adjustments beyond straight INSERTs or COPYs\nin.\n \n(2) You could poke around with a profiler, a debugger, and/or\nthe contrib/pageinspect module to sort things out.\n \n(3) You could post a reproducible test case -- where you start with\nCREATE TABLE and populate with something like the generate_series()\nfunction and go through a clearly described set of steps to see the\nbehavior. With the someone else could do the profiling, debugging,\nand/or page inspection.\n \n>From what you've said, it seems like either you're omitting some\ndetail as irrelevant which is actually significant, or you've found\na bug we should hunt down and fix. I really don't know which it is.\n \n-Kevin\n",
"msg_date": "Tue, 12 Jul 2011 09:36:14 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
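For option (2) above, contrib/pageinspect makes the dead line pointers left behind by pruning directly visible. A minimal sketch, assuming the test table from later in the thread and looking only at block 0:

CREATE EXTENSION pageinspect;  -- 9.1 syntax; older releases install the contrib SQL script

-- lp_flags: 0 = unused, 1 = normal, 2 = HOT redirect, 3 = dead line pointer
SELECT lp, lp_flags, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('test', 0));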
{
"msg_contents": "On 12/07/2011 16:18, Merlin Moncure wrote:\n> On Tue, Jul 12, 2011 at 8:22 AM, Ivan Voras<[email protected]> wrote:\n>> On 08/07/2011 01:56, lars wrote:\n>>\n>>> Setup:\n>>> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux\n>>> instance (kernel 2.6.35) with the database and\n>>> WAL residing on the same EBS volume with EXT4 (data=ordered, barriers=1)\n>>> - yes that is not an ideal setup\n>>> (WAL should be on separate drive, EBS is slow to begin, etc), but I am\n>>> mostly interested in read performance for a fully cached database.\n>>\n>> I know you said you know these things - but do you really know the (huge)\n>> extent to which all your IO is slowed? Even context switches in a\n>> virtualized environment are slowed down by a huge margin - which would make\n>> practically all in-memory lock operations very slow - much slower than they\n>> would be on \"real\" hardware, and EBS by definition is even slower then\n>> regular private virtual storage environments. I regrettably didn't bookmark\n>> the page which did exact measurements of EBS, but\n>> http://www.google.com/search?q=how+slow+is+ec2 will illustrate my point. (of\n>> course, you may already know all this :) ).\n>\n> sure, but the OP's question is valid: in postgres, readers don't block\n> writers, so why is the reader waiting?\n\nYes, but I'm suggesting a different question: are we sure we are not \nseeing the influences of the environment (EC2+EBS) instead of the \nsoftware system?\n\n> We need some more information here.\n\nDefinitely.\n\n\n",
"msg_date": "Tue, 12 Jul 2011 16:43:03 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 12/07/2011 02:09, lars wrote:\n\n> Oh, and iowait hovers around 20% when SELECTs are slow:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 1.54 0.00 0.98 18.49 0.07 78.92\n>\n> When SELECTs are fast it looks like this:\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.72 0.00 0.26 0.00 0.00 91.01\n>\n> Note that this is a 12 core VM. So one core at 100% would show as 8.33%\n> CPU.\n\nNow only if you could do an \"iostat -x\" and show the output in both cases...\n\n",
"msg_date": "Tue, 12 Jul 2011 17:13:03 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Tue, Jul 12, 2011 at 9:36 AM, Kevin Grittner\n<[email protected]> wrote:\n> lars <[email protected]> wrote:\n>\n>> I am not trying to optimize this particular use case, but rather\n>> to understand what Postgres is doing, and why SELECT queries are\n>> affected negatively (sometimes severely) by concurrent (or even\n>> preceding) UPDATEs at all when the database resides in the cache\n>> completely.\n>\n> I think we're stuck in terms of trying to guess what is causing the\n> behavior you are seeing. Things which would help get it \"unstuck\":\n>\n> (1) More information about your load process. Looking at the code,\n> I could sort of see a possible path to this behavior if the load\n> process involves any adjustments beyond straight INSERTs or COPYs\n> in.\n>\n> (2) You could poke around with a profiler, a debugger, and/or\n> the contrib/pageinspect module to sort things out.\n\n\nhm, also strace on the 'select' process might give some clues.\n\nmerlin\n",
"msg_date": "Tue, 12 Jul 2011 11:23:36 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/12/2011 08:13 AM, Ivan Voras wrote:\n> On 12/07/2011 02:09, lars wrote:\n>\n>> Oh, and iowait hovers around 20% when SELECTs are slow:\n>>\n>> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 1.54 0.00 0.98 18.49 0.07 78.92\n>>\n>> When SELECTs are fast it looks like this:\n>> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 8.72 0.00 0.26 0.00 0.00 91.01\n>>\n>> Note that this is a 12 core VM. So one core at 100% would show as 8.33%\n>> CPU.\n>\n> Now only if you could do an \"iostat -x\" and show the output in both \n> cases...\n>\n>\nSure (sorry for missing details):\n\niostat -x during selects when all's fine:\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.25 0.00 0.00 0.00 0.00 91.75\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nxvdap1 0.00 1.00 0.00 2.00 0.00 24.00 \n12.00 0.00 0.00 0.00 0.00\nxvdf 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdg 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\n\nxvdap1 is OS volumn.\nxvdf holds the database files\nxvdg holds the WAL\n\nNo IO on database/WAL volumes, one core is pegged close to 100% CPU.\n\n------------------------------------\n\niostat -x during update:\navg-cpu: %user %nice %system %iowait %steal %idle\n 1.05 0.00 0.58 4.00 0.00 94.37\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nxvdap1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdf 0.00 0.00 7.00 0.00 128.00 0.00 \n18.29 0.00 0.00 0.00 0.00\nxvdg 0.00 7352.00 0.00 804.00 0.00 62368.00 \n77.57 66.07 68.83 0.86 69.20\n\nJust updating the WAL.\n\n-----------------------------------\n\nand while it's checkpointing:\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.64 0.00 0.32 8.88 0.00 90.16\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nxvdap1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdf 0.00 2548.00 2.00 1658.00 32.00 33408.00 \n20.14 144.18 86.69 0.60 100.00\nxvdg 0.00 5428.00 0.00 778.00 0.00 58480.00 \n75.17 77.44 100.22 1.21 94.00\n\nUpdating the WAL, and database volume due to checkpointing.\n\n----------------------------------\n\niostat -x after I stopped the update process and checkpointing is done:\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.00 0.00 0.00 0.00 0.00 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nxvdap1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdf 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdg 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\n\nNo activity at all.\n\n---------------------------------\n\niostat -x after I started the select queries after the updates:\navg-cpu: %user %nice %system %iowait %steal %idle\n 2.09 0.00 1.49 12.15 0.00 84.26\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nxvdap1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nxvdf 0.00 8.00 0.00 2.00 0.00 80.00 \n40.00 0.00 2.00 2.00 0.40\nxvdg 0.00 7844.00 1.00 1098.00 8.00 82336.00 \n74.93 58.27 59.39 0.70 77.20\n\nHeavy writes to the WAL volume.\n\nselect * from pg_stat_activity;\n datid | datname | procpid | usesysid | usename | application_name | \nclient_addr | client_hostname | client_port | \nbackend_start | xact_start | qu\nery_start | waiting | 
current_query\n-------+---------+---------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+------------\n-------------------+---------+---------------------------------------------------------------\n 16385 | lars | 2654 | 16384 | lars | | \n127.0.0.1 | | 44972 | 2011-07-12 \n18:44:09.479581+00 | 2011-07-12 18:50:32.629412+00 | 2011-07-12\n18:50:32.629473+00 | f | select count(*) from test where tenant = \n$1 and created_date = $2\n 16385 | lars | 2658 | 10 | postgres | psql \n| | | -1 | 2011-07-12 \n18:49:02.675436+00 | 2011-07-12 18:50:32.631013+00 | 2011-07-12\n18:50:32.631013+00 | f | select * from pg_stat_activity;\n 16385 | lars | 2660 | 16384 | lars | psql \n| | | -1 | 2011-07-12 \n18:49:45.711643+00 | |\n | f | <IDLE>\n(3 rows)\n\n",
"msg_date": "Tue, 12 Jul 2011 12:01:25 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "lars <[email protected]> wrote:\n \n> select count(*) from test where tenant = $1 and created_date = $2\n \nAh, that might be a major clue -- prepared statements.\n \nWhat sort of a plan do you get for that as a prepared statement? \n(Note, it is very likely *not* to be the same plan as you get if you\nrun with literal values!) It is not at all unlikely that it could\nresort to a table scan if you have one tenant which is five or ten\npercent of the table, which would likely trigger the pruning as it\npassed over the modified pages.\n \n-Kevin\n",
"msg_date": "Tue, 12 Jul 2011 14:08:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
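The plan comparison being asked about here can be done directly in psql; a sketch using the statement from the thread. On 9.1, EXPLAIN EXECUTE shows the generic parameterized plan, which can differ from the plan chosen when literal values are visible to the planner:

PREPARE x AS
  SELECT count(*) FROM test WHERE tenant = $1 AND created_date = $2;

EXPLAIN EXECUTE x('000000000000001', '2011-6-30');    -- generic plan for $1/$2

EXPLAIN SELECT count(*) FROM test
WHERE tenant = '000000000000001'
  AND created_date = '2011-6-30';                     -- plan with literals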
{
"msg_contents": "lars <[email protected]> wrote:\n \n> select count(*) from test where tenant = $1 and created_date = $2\n \nThinking about this some more, it would be interesting to know your\nPostgreSQL configuration. I seem to remember you mentioning some\nsettings up-thread, but I'm not sure whether it was comprehensive. \nCould you paste the result of running the query on this Wiki page?:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nIt might be that if you generate your queries with literals rather\nthan using server-side prepared statements, and tweak a couple\nconfiguration settings, this problem will evaporate.\n \n-Kevin\n",
"msg_date": "Tue, 12 Jul 2011 14:26:29 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
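For reference, the gist of the query on that wiki page -- listing every setting changed from its default -- is along these lines (paraphrased from memory rather than copied from the page):

SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');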
{
"msg_contents": "On 07/12/2011 12:08 PM, Kevin Grittner wrote:\n> lars<[email protected]> wrote:\n>\n>> select count(*) from test where tenant = $1 and created_date = $2\n>\n> Ah, that might be a major clue -- prepared statements.\n>\n> What sort of a plan do you get for that as a prepared statement?\n> (Note, it is very likely *not* to be the same plan as you get if you\n> run with literal values!) It is not at all unlikely that it could\n> resort to a table scan if you have one tenant which is five or ten\n> percent of the table, which would likely trigger the pruning as it\n> passed over the modified pages.\n>\n> -Kevin\nSo a read of a row *will* trigger dead tuple pruning, and that requires \nWAL logging, and this is known/expected?\nThis is actually the only answer I am looking for. :) I have not seen \nthis documented anywhere.\n\nI know that Postgres will generate general plans for prepared statements \n(how could it do otherwise?),\nI also know that it sometimes chooses a sequential scan.\nThis can always be tweaked to touch fewer rows and/or use a different \nplan. That's not my objective, though!\n\nThe fact that a select (maybe a big analytical query we'll run) touching \nmany rows will update the WAL and wait\n(apparently) for that IO to complete is making a fully cached database \nfar less useful.\nI just artificially created this scenario.\n\n... Just dropped the table to test something so I can't get the plan \nright now. Will send an update as soon as I get\nit setup again.\n\nThanks again.\n\n-- Lars\n\n",
"msg_date": "Tue, 12 Jul 2011 13:04:57 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t database"
},
{
"msg_contents": "lars <[email protected]> wrote:\n \n> So a read of a row *will* trigger dead tuple pruning, and that\n> requires WAL logging, and this is known/expected?\n \nYes, because pruning dead line pointers will make subsequent reads\nfaster. It's intended to be an optimization.\n \n> This is actually the only answer I am looking for. :) I have not\n> seen this documented anywhere.\n \nYou would currently need to look at the README-HOT file or source\ncode, I think. There probably should be some mention in the user\ndocs, but I haven't noticed any, and it is more technical than most\nof the documentation gets. Perhaps a \"note\" block somewhere...\n \n> The fact that a select (maybe a big analytical query we'll run)\n> touching many rows will update the WAL and wait (apparently) for\n> that IO to complete is making a fully cached database far less\n> useful.\n \nWell, I've never run into this because I have directly attached\nstorage through a RAID controller with a battery-backed cache\nconfigured for write-back. The pruning is pretty light on CPU\nusage, and with a BBU controller, the WAL writes just move from one\ncache to another.\n \nIf that's not an option for you, you could contrive to have the\nupdate code reread the modified rows after COMMIT, or configure your\nautovacuum to be very aggressive so that a background process\nusually takes care of this before a SELECT gets to it. And there's\na good chance that tuning your query and/or running with literal\nvalues available to the planner would be a big net win even without\nthis issue; if this issue is a problem for you, it's just another\nreason to do that tuning.\n \n> Just dropped the table to test something so I can't get the plan \n> right now. Will send an update as soon as I get\n> it setup again.\n \nI'll be surprised if you don't see a seqscan. The most interesting\nbit at this point (at least to me) is working on tuning the cost\nfactors for the planner.\n \n-Kevin\n",
"msg_date": "Tue, 12 Jul 2011 15:51:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\n\t database"
},
{
"msg_contents": "On 07/12/2011 01:04 PM, lars wrote:\n> On 07/12/2011 12:08 PM, Kevin Grittner wrote:\n>> lars<[email protected]> wrote:\n>>\n>>> select count(*) from test where tenant = $1 and created_date = $2\n>>\n>> Ah, that might be a major clue -- prepared statements.\n>>\n>> What sort of a plan do you get for that as a prepared statement?\n>> (Note, it is very likely *not* to be the same plan as you get if you\n>> run with literal values!) It is not at all unlikely that it could\n>> resort to a table scan if you have one tenant which is five or ten\n>> percent of the table, which would likely trigger the pruning as it\n>> passed over the modified pages.\n>>\n>> -Kevin\n> So a read of a row *will* trigger dead tuple pruning, and that \n> requires WAL logging, and this is known/expected?\n> This is actually the only answer I am looking for. :) I have not seen \n> this documented anywhere.\n>\n> I know that Postgres will generate general plans for prepared \n> statements (how could it do otherwise?),\n> I also know that it sometimes chooses a sequential scan.\n> This can always be tweaked to touch fewer rows and/or use a different \n> plan. That's not my objective, though!\n>\n> The fact that a select (maybe a big analytical query we'll run) \n> touching many rows will update the WAL and wait\n> (apparently) for that IO to complete is making a fully cached database \n> far less useful.\n> I just artificially created this scenario.\n>\n> ... Just dropped the table to test something so I can't get the plan \n> right now. Will send an update as soon as I get\n> it setup again.\n>\n> Thanks again.\n>\n> -- Lars\n>\n>\nOk... Slightly changes the indexes:\n\\d test\n Table \"lars.test\"\n Column | Type | Modifiers\n--------------+---------------+-----------\n tenant | character(15) |\n created_by | character(15) |\n created_date | date |\nIndexes:\n \"i1\" btree (tenant)\n\nSo just just a simple index on tenant.\n\nprepare x as select count(*) from test where tenant = $1 and \ncreated_date = $2;\nPREPARE\nexplain execute x('000000000000001','2011-6-30');\n QUERY PLAN\n-------------------------------------------------------------------------------\n Aggregate (cost=263301.40..263301.41 rows=1 width=0)\n -> Bitmap Heap Scan on test (cost=3895.99..263299.28 rows=847 width=0)\n Recheck Cond: (tenant = $1)\n Filter: (created_date = $2)\n -> Bitmap Index Scan on i1 (cost=0.00..3895.77 rows=169372 \nwidth=0)\n Index Cond: (tenant = $1)\n(6 rows)\n\n-- this is when the WAL rows are written:\nexplain (analyze on, buffers on) execute x('000000000000001','2011-6-30');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=263301.40..263301.41 rows=1 width=0) (actual \ntime=191.150..191.151 rows=1 loops=1)\n Buffers: shared hit=3716\n -> Bitmap Heap Scan on test (cost=3895.99..263299.28 rows=847 \nwidth=0) (actual time=1.966..188.221 rows=3712 loops=1)\n Recheck Cond: (tenant = $1)\n Filter: (created_date = $2)\n Buffers: shared hit=3716\n -> Bitmap Index Scan on i1 (cost=0.00..3895.77 rows=169372 \nwidth=0) (actual time=1.265..1.265 rows=3712 loops=1)\n Index Cond: (tenant = $1)\n Buffers: shared hit=20\n Total runtime: 191.243 ms\n(10 rows)\n\n-- this is when no WAL is written:\nexplain (analyze on, buffers on) execute x('000000000000001','2011-6-30');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Aggregate 
(cost=263301.40..263301.41 rows=1 width=0) (actual \ntime=11.529..11.530 rows=1 loops=1)\n Buffers: shared hit=3715\n -> Bitmap Heap Scan on test (cost=3895.99..263299.28 rows=847 \nwidth=0) (actual time=1.341..9.187 rows=3712 loops=1)\n Recheck Cond: (tenant = $1)\n Filter: (created_date = $2)\n Buffers: shared hit=3715\n -> Bitmap Index Scan on i1 (cost=0.00..3895.77 rows=169372 \nwidth=0) (actual time=0.756..0.756 rows=3712 loops=1)\n Index Cond: (tenant = $1)\n Buffers: shared hit=19\n Total runtime: 11.580 ms\n(10 rows)\n\nIf you wanted to recreate this scenario I created a simple script to \ncreate the table:\n\ncreate table test(tenant char(15), created_by char(15), created_date date);\ninsert into test values('x', 'y','2011-6-30');\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test; -- 256k rows\nupdate test set tenant = lpad((random()*10000)::int::text,15,'0'), \ncreated_by = lpad((random()*10000)::int::text,15,'0');\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test; -- 32m rows\ncreate index i1 on test(tenant);\nvacuum analyze;\n\nI use JDBC to perform the updates and selects, but this will do too:\n\nprepare x as select count(*) from test where tenant = $1 and \ncreated_date = $2;\nprepare y as update test set created_by = $1 where tenant = $2 and \ncreated_date = $3;\n\nexecute y('000000000000001', '000000000000001','2011-6-30');\nexecute x('000000000000001','2011-6-30');\n\nI'll probably compile Postgres with WAL debug on next, and try the page \ninspect module.\nOn the face of it, though, this looks like Postgres would not be that \nuseful as database that resides (mostly) in the cache.\n\nHere what the query mentioned here returns (note that shared_buffers is \nvery large, but I observed the same with smaller settings):\n\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n\n\n name \n| current_setting\n------------------------------+------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.1beta2 on \nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.4 20100726 (Red Hat \n4.4.4-13), 64-bit\n autovacuum | off\n bgwriter_delay | 10ms\n bgwriter_lru_maxpages | 1000\n checkpoint_completion_target | 0.9\n checkpoint_segments | 128\n client_encoding | UTF8\n effective_cache_size | 64GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n log_checkpoints | on\n log_line_prefix | %m\n maintenance_work_mem | 2GB\n max_connections | 100\n max_stack_depth | 2MB\n server_encoding | UTF8\n shared_buffers | 20GB\n TimeZone | UTC\n wal_buffers | 16MB\n work_mem | 1GB\n(20 rows)\n\nThanks.\n\n-- Lars\n\n",
"msg_date": "Tue, 12 Jul 2011 13:58:22 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t database"
},
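On the WAL-debug idea at the end of the message above: the wal_debug parameter exists only in a server built with the WAL_DEBUG macro defined, but with such a build the WAL records inserted by a backend can be watched from the session itself. A rough sketch, reusing the test query from the script above:

-- Requires a build compiled with -DWAL_DEBUG; the GUC is absent otherwise.
SET wal_debug = on;
SET client_min_messages = log;   -- surface the LOG-level output in this session

SELECT count(*) FROM test
WHERE tenant = '000000000000001' AND created_date = '2011-6-30';
-- Any pruning records generated on behalf of the SELECT are then logged.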
{
"msg_contents": "On Tue, Jul 12, 2011 at 2:08 PM, Kevin Grittner\n<[email protected]> wrote:\n> lars <[email protected]> wrote:\n>\n>> select count(*) from test where tenant = $1 and created_date = $2\n>\n> Ah, that might be a major clue -- prepared statements.\n\nI'm really skeptical that this is the case -- the table is 100m and\nthere is no way you are banging through 100m in 500ms records\nparticularly if doing i/o.\n\nAlso regarding the page cleanup, IIRC the optimization only takes\nplace if the dead tuples are HOT -- when the OP opened the thread he\nstated it was happening with update on indexed field (and later tested\nit on unindexed field).\n\nSomething is not adding up here. Perhaps there is an alternate route\nto WAL logged activity from selects I'm not thinking of. Right now\nI'm thinking to run the selects on table 'a' and the inserts\nconcurrently on table 'b' and seeing how that behaves. Another way to\nget to the bottom is to oprofile the selecting-during-load backend to\nsee where the time is getting spent. Alternative way to do this is\nto strace attach to the selecting-during-load backend to see if it's\nreally writing to the WAL (I'm really curious about this).\n\nAnother interesting test would be to try and reproduce the results on\nnative machine. It should be possible to do this on your workstation\nwith a more modestly sized scaling factor.\n\nmerlin\n",
"msg_date": "Tue, 12 Jul 2011 16:38:05 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "lars <[email protected]> wrote:\n\n> vacuum analyze;\n\nI tried this out on a 16 core, 64 GB machine. It was a replication\ntarget for a few dozen source databases into a couple 2 TB reporting\ndatabases, and had some light testing going on, but it was only at\nabout 50% capacity, so that shouldn't throw this off by *too* much,\nI hope. Since our data is long-lived enough to worry about\ntransaction ID freezing issues, I always follow a bulk load with\nVACUUM FREEZE ANALYZE; so I did that here. I also just threw this\ninto the 2 TB database without changing our configuration. Among\nother things, that means that autovacuum was on.\n\n> prepare x as select count(*) from test where tenant = $1 and\n> created_date = $2;\n> prepare y as update test set created_by = $1 where tenant = $2 and\n> created_date = $3;\n>\n> execute y('000000000000001', '000000000000001','2011-6-30');\n> execute x('000000000000001','2011-6-30');\n\nI ran x a bunch of times to get a baseline, then y once, then x a\nbunch more times. The results were a bit surprising:\n\ncir=> \\timing\nTiming is on.\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 9.823 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 8.481 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 14.054 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 10.169 ms\ncir=> execute y('000000000000001', '000000000000001','2011-6-30');\nUPDATE 3456\nTime: 404.244 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 128.643 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 2.657 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 5.883 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 2.645 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 2.753 ms\ncir=> execute x('000000000000001','2011-6-30');\n count\n-------\n 3456\n(1 row)\n\nTime: 2.253 ms\n\nRunning the update made the next SELECT slow, then it was much\n*faster*. My best guess is that the data landed in a more\nconcentrated set of pages after the update, and once autovacuum\nkicked in and cleaned things up it was able to get to that set of\ndata faster.\n\n> On the face of it, though, this looks like Postgres would not be\n> that useful as database that resides (mostly) in the cache.\n\n> autovacuum | off\n\nWell, certainly not while under modification without running\nautovacuum. That's disabling an integral part of what keeps\nperformance up. There are very few, if any, situations where\nrunning PostgreSQL in production without autovacuum makes any sense,\nand benchmarks which disable it don't give a very accurate picture\nof typical performance. Now, if you're looking to artificially\ncreate a worst-case scenario, then it makes sense, but I'm not clear\non the point of it.\n\nI do understand the impulse, though. When we first started using\nPostgreSQL there were certain very small tables which were updated\nvery frequently which got slow when autovacuum kicked in. We made\nautovacuum less aggressive, and found that things go worse! Se we\nwent the other way and made autovacuum much more aggressive than the\ndefaults, and everything was fine.\n\n-Kevin\n",
"msg_date": "Tue, 12 Jul 2011 16:51:04 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\n\t database"
},
{
"msg_contents": "On 07/12/2011 02:51 PM, Kevin Grittner wrote:\n> I ran x a bunch of times to get a baseline, then y once, then x a\n> bunch more times. The results were a bit surprising:\n>\n> cir=> \\timing\n> Timing is on.\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 9.823 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 8.481 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 14.054 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 10.169 ms\n> cir=> execute y('000000000000001', '000000000000001','2011-6-30');\n> UPDATE 3456\n> Time: 404.244 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 128.643 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 2.657 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 5.883 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 2.645 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 2.753 ms\n> cir=> execute x('000000000000001','2011-6-30');\n> count\n> -------\n> 3456\n> (1 row)\n>\n> Time: 2.253 ms\n>\nInteresting. When you did you test, did you also find WAL write activity \nwhen running x the first time after y?\n(It's very hard to catch in only a single query, though).\n\n> Running the update made the next SELECT slow, then it was much\n> *faster*. My best guess is that the data landed in a more\n> concentrated set of pages after the update, and once autovacuum\n> kicked in and cleaned things up it was able to get to that set of\n> data faster.\n>\n>> autovacuum | off\n> Well, certainly not while under modification without running\n> autovacuum. That's disabling an integral part of what keeps\n> performance up.\nOh, it's just switched off for testing, so that I can control when \nvacuum runs and make sure that it's not\nskewing the results while I am measuring something.\nIn a real database I would probably err on vacuuming more than less.\n\nFor a fully cached database I would probably want to switch off HOT \npruning and compaction (which from what we see\nis done synchronously with the select) and leave it up to the \nasynchronous auto vacuum to do that. But maybe I am\nstill not quite understanding the performance implications.\n\n",
"msg_date": "Tue, 12 Jul 2011 15:36:20 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\t database"
},
{
"msg_contents": "On 07/12/2011 02:38 PM, Merlin Moncure wrote:\n>\n> Something is not adding up here. Perhaps there is an alternate route\n> to WAL logged activity from selects I'm not thinking of. Right now\n> I'm thinking to run the selects on table 'a' and the inserts\n> concurrently on table 'b' and seeing how that behaves. Another way to\n> get to the bottom is to oprofile the selecting-during-load backend to\n> see where the time is getting spent. Alternative way to do this is\n> to strace attach to the selecting-during-load backend to see if it's\n> really writing to the WAL (I'm really curious about this).\n>\n> Another interesting test would be to try and reproduce the results on\n> native machine. It should be possible to do this on your workstation\n> with a more modestly sized scaling factor.\n>\n> merlin\n>\nJust tried with two of my test tables.\nUpdates on 'a' have no (measurable) effect on select from 'b'.\n\nBack to the first case, here's an strace from the backend doing the \nselect right after the updates.\n\n\"Q\\0\\0\\0`select count(*) from test1 \"..., 8192, 0, NULL, NULL) = 97\ngettimeofday({1310512219, 723762}, NULL) = 0\nopen(\"base/16385/33032\", O_RDWR) = 8\nlseek(8, 0, SEEK_END) = 1073741824\nopen(\"base/16385/33032.1\", O_RDWR|O_CREAT, 0600) = 9\nlseek(9, 0, SEEK_END) = 1073741824\nopen(\"base/16385/33032.2\", O_RDWR|O_CREAT, 0600) = 10\nlseek(10, 0, SEEK_END) = 191348736\nopen(\"base/16385/33035\", O_RDWR) = 11\nlseek(11, 0, SEEK_END) = 1073741824\nopen(\"base/16385/33035.1\", O_RDWR|O_CREAT, 0600) = 12\nlseek(12, 0, SEEK_END) = 3571712\nlseek(10, 0, SEEK_END) = 191348736\nbrk(0x28ad000) = 0x28ad000\nmmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x7f5f28ca0000\nmmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x7f5f28c5f000\nmunmap(0x7f5f28c5f000, 266240) = 0\nmunmap(0x7f5f28ca0000, 135168) = 0\nopen(\"pg_xlog/00000001000000BB00000012\", O_RDWR) = 13\nlseek(13, 1564672, SEEK_SET) = 1564672\nwrite(13, \n\"f\\320\\1\\0\\1\\0\\0\\0\\273\\0\\0\\0\\0\\340\\27\\22`\\32\\0\\00000002833!000\"..., \n2400256) = 2400256\nfdatasync(13) = 0\nsemop(229383, {{9, 1, 0}}, 1) = 0\ngettimeofday({1310512219, 885287}, NULL) = 0\nsendto(5, \n\"\\2\\0\\0\\0\\300\\3\\0\\0\\1@\\0\\0\\t\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\353\\4\\0\\0@\\0\\2\\0\"..., \n960, 0, NULL, 0) = 960\nsendto(5, \n\"\\2\\0\\0\\0\\300\\3\\0\\0\\1@\\0\\0\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0009\\n\\0\\0@\\0\\2\\0\"..., \n960, 0, NULL, 0) = 960\nsendto(5, \n\"\\2\\0\\0\\0\\300\\3\\0\\0\\1@\\0\\0\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0v\\n\\0\\0@\\0\\2\\0\"..., \n960, 0, NULL, 0) = 960\nsendto(5, \n\"\\2\\0\\0\\0\\270\\1\\0\\0\\0\\0\\0\\0\\4\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\275\\4\\0\\0\\377\\177\\0\\0\"..., \n440, 0, NULL, 0) = 440\nsendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66\n\nSo the backend definitely writing to the WAL, directly and synchronously.\n\nSelecting the same set of rows again:\n\"Q\\0\\0\\0`select count(*) from test1 \"..., 8192, 0, NULL, NULL) = 97\ngettimeofday({1310512344, 823728}, NULL) = 0\nlseek(10, 0, SEEK_END) = 191348736\nlseek(12, 0, SEEK_END) = 3571712\nlseek(10, 0, SEEK_END) = 191348736\nbrk(0x28d5000) = 0x28d5000\nbrk(0x2915000) = 0x2915000\nbrk(0x2897000) = 0x2897000\ngettimeofday({1310512344, 831043}, NULL) = 0\nsendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\10\\201\\0\\0?\\0\\2\\0\"..., 232, \n0, 
NULL, 0) = 232\nsendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66\n\nNo writing to the WAL.\n\n-- Lars\n\n",
"msg_date": "Tue, 12 Jul 2011 16:15:12 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 7/12/11, lars <[email protected]> wrote:\n>\n>\n> The fact that a select (maybe a big analytical query we'll run) touching\n> many rows will update the WAL and wait\n> (apparently) for that IO to complete is making a fully cached database\n> far less useful.\n> I just artificially created this scenario.\n\nI can't think of any reason that that WAL would have to be flushed\nsynchronously.\n\nThere is already code that makes transactions that only have certain\nkinds of \"maintenance\" WAL to skip the flush. Could this pruning WAL\nbe added to that group?\n",
"msg_date": "Tue, 12 Jul 2011 20:35:54 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On 7/12/11, lars <[email protected]> wrote:\n>> The fact that a select (maybe a big analytical query we'll run) touching\n>> many rows will update the WAL and wait\n>> (apparently) for that IO to complete is making a fully cached database\n>> far less useful.\n>> I just artificially created this scenario.\n\n> I can't think of any reason that that WAL would have to be flushed\n> synchronously.\n\nMaybe he's running low on shared_buffers? We would have to flush WAL\nbefore writing a dirty buffer out, so maybe excessive pressure on\navailable buffers is part of the issue here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jul 2011 21:16:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database "
},
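The buffer-pressure theory above can be checked with contrib/pg_buffercache; a minimal sketch (a persistently large count of dirty buffers would support the idea that backends are evicting dirty pages, and therefore flushing WAL, themselves):

CREATE EXTENSION pg_buffercache;

SELECT isdirty, count(*) AS buffers
FROM pg_buffercache
GROUP BY isdirty;

The buffers_backend counter in pg_stat_bgwriter, which is already being watched in this thread, measures the same pressure from another angle.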
{
"msg_contents": "[combining responses to two posts on this thread by lars]\n \nlars <[email protected]> wrote:\n \n> On the face of it, though, this looks like Postgres would not be\n> that useful as database that resides (mostly) in the cache.\n \nI've mentioned this in a hand-wavy general sense, but I should have\nmentioned specifics ages ago: for a database where the active\nportion of the database is fully cached, it is best to set\nseq_page_cost and random_page_cost to the same value, somewhere in\nthe 0.1 to 0.05 range. (In your case I would use 0.05.) In highly\ncached databases I have sometimes also found it necessary to\nincrease cpu_tuple_cost. (In your case I might try 0.02.)\n \nThis won't directly address the specific issue you've been\ninvestigating in this thread, but it will tend to give you faster\nplans for your actual environment without needing to fuss with\nthings on a query-by-query basis. It may indirectly mitigate the\nproblem at hand through heavier use of indexes which would reduce\npruning and hint-bit setting by readers.\n \n> Interesting. When you did you test, did you also find WAL write\n> activity when running x the first time after y?\n \nI wasn't able to check that in this quick, ad hoc run.\n \n> Oh, it's just switched off for testing, so that I can control\n> when vacuum runs and make sure that it's not skewing the results\n> while I am measuring something.\n \nAgain, that's an impulse I can certainly understand, but the problem\nis that turning autovacuum off skews results in other ways, such as\nforcing readers to do maintenance work which might otherwise be done\nin a cost-limited background process. Or if that didn't happen you\nwould be constantly chasing through lots of dead line pointers which\nwould hurt performance in another way. It's probably best to\nconsider autovacuum an integral part of normal database operations\nand run benchmarks like this with it operational. This will also\ngive you an opportunity to tune thresholds and costing factors to\nevaluate the impact that such adjustments have on potential\nworkloads.\n \n> For a fully cached database I would probably want to switch off\n> HOT pruning and compaction (which from what we see is done\n> synchronously with the select) and leave it up to the asynchronous\n> auto vacuum to do that. But maybe I am still not quite\n> understanding the performance implications.\n \nCode comments indicate that they expect the pruning to be a pretty\nclear win on multiple reads, although I don't know how much that was\nbenchmarked. Jeff does raise a good point, though -- it seems odd\nthat WAL-logging of this pruning would need to be synchronous. We\nsupport asynchronous commits -- why not use that feature\nautomatically for transactions where the only writes are this sort\nof thing. Which raises an interesting question -- what happens to\nthe timings if your SELECTs are done with synchronous_commit = off?\n \nI wonder if it would make any sense to implicitly use async commit\nfor a transaction which is declared READ ONLY or which never\nacquires and XID?\n \n-Kevin\n",
"msg_date": "Wed, 13 Jul 2011 09:46:36 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\n\t database"
},
{
"msg_contents": "On Tue, Jul 12, 2011 at 6:15 PM, lars <[email protected]> wrote:\n> Back to the first case, here's an strace from the backend doing the select\n> right after the updates.\n> write(13,\n> \"f\\320\\1\\0\\1\\0\\0\\0\\273\\0\\0\\0\\0\\340\\27\\22`\\32\\0\\00000002833!000\"..., 2400256)\n> = 2400256\n\nOn Wed, Jul 13, 2011 at 9:46 AM, Kevin Grittner\n<[email protected]> wrote:\n> Code comments indicate that they expect the pruning to be a pretty\n> clear win on multiple reads, although I don't know how much that was\n> benchmarked. Jeff does raise a good point, though -- it seems odd\n> that WAL-logging of this pruning would need to be synchronous. We\n> support asynchronous commits -- why not use that feature\n\nRight -- here are my thoughts. notice the above is writing out 293\npages. this is suggesting to me that Kevin is right and you've\nidentified a pattern where you are aggravating the page cleanup\nfacilities of HOT. What threw me off here (and perhaps bears some\nadditional investigation) is that early on in the report you were\nclaiming an update to an indexed field which effectively disables HOT.\n The fairly lousy I/O performance of EBS is further hurting you here:\nyou have a very fast computer with lots of memory with a late 90's\ndisk system in terms of performance. This means that AWS is not all\nthat great for simulating load profiles unless you are also highly\nunderweight I/O in your servers. Postgres btw demands (as does\nOracle) a decent i/o system for many workloads that might be\nsurprising.\n\nA note about HOT: there is no way to disable it (other than updating\nan indexed field to bypass it) -- HOT was a performance revolution for\nPostgres and numerous benchmarks as well as anecdotal reports have\nconfirmed this. HOT mitigates the impact of dead tuples by 1. highly\nreducing index bloat under certain conditions and 2. allowing dead\ntuples to be more aggressively cleaned up -- a 'page level vacuum' if\nit were. HOT is an especially huge win when updates are frequent and\ntransactions are small and short....but maybe in your simulated case\nit's not helping. hm.\n\nmerlin\n",
"msg_date": "Wed, 13 Jul 2011 09:52:53 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> ... Jeff does raise a good point, though -- it seems odd\n> that WAL-logging of this pruning would need to be synchronous.\n\nYeah, we need to get to the bottom of that. If there's enough\nshared_buffer space then it shouldn't be.\n\n> We\n> support asynchronous commits -- why not use that feature\n> automatically for transactions where the only writes are this sort\n> of thing. Which raises an interesting question -- what happens to\n> the timings if your SELECTs are done with synchronous_commit = off?\n> I wonder if it would make any sense to implicitly use async commit\n> for a transaction which is declared READ ONLY or which never\n> acquires and XID?\n\nHuh? If there was never an XID, there's no commit WAL record, hence\nnothing to make asynchronous.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jul 2011 11:17:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database "
},
{
"msg_contents": "On 07/13/2011 08:17 AM, Tom Lane wrote:\n> \"Kevin Grittner\"<[email protected]> writes:\n>> ... Jeff does raise a good point, though -- it seems odd\n>> that WAL-logging of this pruning would need to be synchronous.\n> Yeah, we need to get to the bottom of that. If there's enough\n> shared_buffer space then it shouldn't be.\nThis thread has gotten long, let me try to compile all the relevant \ninformation in one email.\n\n\\d test\n Table \"lars.test\"\n Column | Type | Modifiers\n--------------+---------------+-----------\n tenant | character(15) |\n created_by | character(15) |\n created_date | date |\nIndexes:\n \"i1\" btree (tenant)\n \"i11\" btree (created_by)\n\n-- Table is populated like this:\n------------------------------------\ncreate table test(tenant char(15), created_by char(15), created_date date);\ninsert into test values('x', 'y','2011-6-30');\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test; -- 256k rows\nupdate test set tenant = lpad((random()*10000)::int::text,15,'0'), \ncreated_by = lpad((random()*10000)::int::text,15,'0');\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test;\ninsert into test select * from test; -- 32m rows\ncreate index i1 on test(tenant);\ncreate index i11 on test(created_by);\nvacuum analyze;\n\n-- I doubt it needs that many rows.\n\n=> SELECT\n 'version'::text AS \"name\",\n version() AS \"current_setting\"\n UNION ALL\n SELECT\n name,current_setting(name)\n FROM pg_settings\n WHERE NOT source='default' AND NOT name IN\n ('config_file','data_directory','hba_file','ident_file',\n 'log_timezone','DateStyle','lc_messages','lc_monetary',\n 'lc_numeric','lc_time','timezone_abbreviations',\n 'default_text_search_config','application_name',\n 'transaction_deferrable','transaction_isolation',\n 'transaction_read_only');\n\n name \n| current_setting\n------------------------------+------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.1beta2 on \nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.4 20100726 (Red Hat \n4.4.4-13), 64-bit\n bgwriter_delay | 10ms\n bgwriter_lru_maxpages | 1000\n checkpoint_completion_target | 0.9\n checkpoint_segments | 128\n client_encoding | UTF8\n effective_cache_size | 64GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n log_checkpoints | on\n log_line_prefix | %m\n maintenance_work_mem | 2GB\n max_connections | 100\n max_stack_depth | 2MB\n server_encoding | UTF8\n shared_buffers | 20GB\n TimeZone | UTC\n wal_buffers | 16MB\n work_mem | 1GB\n(19 rows)\n\n\n-- Now:\n----------\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\n=> SELECT 
c.relname, isdirty, count(*) * 8192 / 1024/1024 AS buffers\nFROM pg_buffercache b, pg_class c\nWHERE b.relfilenode = pg_relation_filenode(c.oid)\nAND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = \ncurrent_database()))\nGROUP BY c.relname,isdirty\nORDER BY 3 DESC\nLIMIT 6;\n\n relname | isdirty | buffers\n-------------------------------+---------+---------\n test | t | 14\n pg_opclass_oid_index | f | 0\n pg_rewrite | f | 0\n i11 | t | 0\n pg_rewrite_rel_rulename_index | f | 0\n pg_constraint | f | 0\n\n-- Just started the server, no nothing else is cached, yet\n\n-- it doesn't matter if that update is executed by the same or another \nbackend.\n=> update test set created_by = '000000000000001' where tenant = \n'000000000000001';\nUPDATE 3712\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nstrace now shows:\n-------------------------\nQ\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96\ngettimeofday({1310579341, 854669}, NULL) = 0\nlseek(38, 0, SEEK_END) = 187465728\nlseek(41, 0, SEEK_END) = 115564544\nlseek(43, 0, SEEK_END) = 101040128\nlseek(38, 0, SEEK_END) = 187465728\nbrk(0x1a85000) = 0x1a85000\nbrk(0x1a5d000) = 0x1a5d000\nlseek(52, 12443648, SEEK_SET) = 12443648\nwrite(52, \n\"f\\320\\1\\0\\1\\0\\0\\0\\276\\0\\0\\0\\0\\340\\275\\0016\\0\\0\\0S\\0\\0\\0\\4\\0cY\\3\\0\\0\\0\"..., \n122880) = 122880\nfdatasync(52) = 0\nsemop(688135, {{9, 1, 0}}, 1) = 0\ngettimeofday({1310579341, 889163}, NULL) = 0\nsendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\376\\200\\0\\0<\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232\nsendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66\n\n-- fd 52 was opened earlier to the WAL file in pg_xlog.\n\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nnow strace shows:\n-------------------------\nQ\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96\ngettimeofday({1310579380, 862637}, NULL) = 0\nlseek(38, 0, SEEK_END) = 187465728\nlseek(41, 0, SEEK_END) = 115564544\nlseek(43, 0, SEEK_END) = 101040128\nlseek(38, 0, SEEK_END) = 187465728\ngettimeofday({1310579380, 868149}, NULL) = 0\nsendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\376\\200\\0\\0@\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232\nsendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66\n\n=> update test set created_by = '000000000000001' where tenant = \n'000000000000001';\nUPDATE 3712\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nstrace again indicates that a WAL log is written by the backend by the \nselect:\n---------------------------------------------------------------------------------------\nQ\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96\ngettimeofday({1310579663, 890641}, NULL) = 0\nlseek(38, 0, SEEK_END) = 187596800\nlseek(41, 0, SEEK_END) = 115638272\nlseek(43, 0, SEEK_END) = 101122048\nlseek(38, 0, SEEK_END) = 187596800\nbrk(0x1a85000) = 0x1a85000\nbrk(0x1a5d000) = 0x1a5d000\nsemop(688135, {{3, -1, 0}}, 1) = 0\nlseek(52, 10223616, SEEK_SET) = 10223616\nwrite(52, 
\n\"f\\320\\1\\0\\1\\0\\0\\0\\276\\0\\0\\0\\0\\0\\234\\2\\16\\0\\0\\0C(\\0\\0\\1\\0\\0\\0~\\0\\177\\0\"..., \n16384) = 16384\nfdatasync(52) = 0\ngettimeofday({1310579663, 921932}, NULL) = 0\nsendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\376\\200\\0\\0?\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232\nsendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66\n\n=> explain (analyze on, buffers on) select count(*) from test where \ntenant = '000000000000001' and created_date = '2011-6-30';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n-----------------------------------------\n Aggregate (cost=12284.65..12284.66 rows=1 width=0) (actual \ntime=9.150..9.150 r\nows=1 loops=1)\n Buffers: shared hit=1976\n -> Bitmap Heap Scan on test (cost=91.78..12276.35 rows=3319 \nwidth=0) (actua\nl time=2.338..6.866 rows=3712 loops=1)\n Recheck Cond: (tenant = '000000000000001'::bpchar)\n Filter: (created_date = '2011-06-30'::date)\n Buffers: shared hit=1976\n -> Bitmap Index Scan on i1 (cost=0.00..90.95 rows=3319 \nwidth=0) (actu\nal time=2.063..2.063 rows=15179 loops=1)\n Index Cond: (tenant = '000000000000001'::bpchar)\n Buffers: shared hit=98\n Total runtime: 9.206 ms\n(10 rows)\n\n=> update test set created_by = '000000000000001' where tenant = \n'000000000000001';\nUPDATE 3712\n=> explain (analyze on, buffers on) select count(*) from test where \ntenant = '000000000000001' and created_date = '2011-6-30';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n-----------------------------------------\n Aggregate (cost=12284.68..12284.69 rows=1 width=0) (actual \ntime=30.738..30.739\n rows=1 loops=1)\n Buffers: shared hit=2001\n -> Bitmap Heap Scan on test (cost=91.78..12276.38 rows=3319 \nwidth=0) (actua\nl time=2.589..28.361 rows=3712 loops=1)\n Recheck Cond: (tenant = '000000000000001'::bpchar)\n Filter: (created_date = '2011-06-30'::date)\n Buffers: shared hit=2001\n -> Bitmap Index Scan on i1 (cost=0.00..90.95 rows=3319 \nwidth=0) (actu\nal time=2.301..2.301 rows=17123 loops=1)\n Index Cond: (tenant = '000000000000001'::bpchar)\n Buffers: shared hit=107\n Total runtime: 30.785 ms\n(10 rows)\n\n----\n\nThere seems to be definitely something funky going on. Since created_by \nis indexed it shouldn't do any HOT logic.\n\nIs there any other information that I can provide? I'm happy to \nrecompile with a patch applied, etc.\n\nThanks.\n\n-- Lars\n\n",
"msg_date": "Wed, 13 Jul 2011 11:10:47 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/13/2011 07:46 AM, Kevin Grittner wrote:\n>\n> I've mentioned this in a hand-wavy general sense, but I should have\n> mentioned specifics ages ago: for a database where the active\n> portion of the database is fully cached, it is best to set\n> seq_page_cost and random_page_cost to the same value, somewhere in\n> the 0.1 to 0.05 range. (In your case I would use 0.05.) In highly\n> cached databases I have sometimes also found it necessary to\n> increase cpu_tuple_cost. (In your case I might try 0.02.)\n> \nI've been doing that for other tests already (I didn't want to add too \nmany variations here).\nThe Bitmap Heap scans through the table are only useful for spinning \nmedia and not the cache\n(just to state the obvious).\n\nAs an aside: I found that queries in a cold database take almost twice \nas long when I make that change,\nso for spinning media this is very important.\n\n> Which raises an interesting question -- what happens to\n> the timings if your SELECTs are done with synchronous_commit = off?\n\nJust tried that...\nIn that case the WAL is still written (as seen via iostat), but not \nsynchronously by the transaction (as seen by strace).\n\n-- Lars\n\n",
"msg_date": "Wed, 13 Jul 2011 11:23:09 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\t database"
},
{
"msg_contents": "lars <[email protected]> wrote:\n> On 07/13/2011 07:46 AM, Kevin Grittner wrote:\n>>\n>> I've mentioned this in a hand-wavy general sense, but I should\n>> have mentioned specifics ages ago: for a database where the\n>> active portion of the database is fully cached, it is best to set\n>> seq_page_cost and random_page_cost to the same value, somewhere\n>> in the 0.1 to 0.05 range. (In your case I would use 0.05.) In\n>> highly cached databases I have sometimes also found it necessary\n>> to increase cpu_tuple_cost. (In your case I might try 0.02.)\n>> \n> I've been doing that for other tests already (I didn't want to add\n> too many variations here).\n> The Bitmap Heap scans through the table are only useful for\n> spinning media and not the cache (just to state the obvious).\n> \n> As an aside: I found that queries in a cold database take almost\n> twice as long when I make that change,\n> so for spinning media this is very important.\n \nNo doubt. We normally run months to years between reboots, with\nmost of our cache at the OS level. We don't have much reason to\never restart PostgreSQL except to install new versions. So we don't\nworry overly much about the cold cache scenario.\n \n>> Which raises an interesting question -- what happens to the\n>> timings if your SELECTs are done with synchronous_commit = off?\n> \n> Just tried that...\n> In that case the WAL is still written (as seen via iostat), but\n> not synchronously by the transaction (as seen by strace).\n \nSo transactions without an XID *are* sensitive to\nsynchronous_commit. That's likely a useful clue.\n \nHow much did it help the run time of the SELECT which followed the\nUPDATE?\n \n-Kevin\n",
"msg_date": "Wed, 13 Jul 2011 13:42:26 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\t\n\t database"
},
{
"msg_contents": "On Wed, Jul 13, 2011 at 1:10 PM, lars <[email protected]> wrote:\n> On 07/13/2011 08:17 AM, Tom Lane wrote:\n>>\n>> \"Kevin Grittner\"<[email protected]> writes:\n>>>\n>>> ... Jeff does raise a good point, though -- it seems odd\n>>> that WAL-logging of this pruning would need to be synchronous.\n>>\n>> Yeah, we need to get to the bottom of that. If there's enough\n>> shared_buffer space then it shouldn't be.\n>\n> This thread has gotten long, let me try to compile all the relevant\n> information in one email.\n>\n> \\d test\n> Table \"lars.test\"\n> Column | Type | Modifiers\n> --------------+---------------+-----------\n> tenant | character(15) |\n> created_by | character(15) |\n> created_date | date |\n\nsmall aside here: try to avoid use of character(n) type -- varchar(n)\nis superior in every way, including performance (although that has\nnothing to do with your WAL issues on this thread).\n\nmerlin\n",
"msg_date": "Wed, 13 Jul 2011 14:01:30 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Huh? If there was never an XID, there's no commit WAL record,\n> hence nothing to make asynchronous.\n \nIf you look at the RecordTransactionCommit() function in xact.c\nyou'll see that's not correct. Currently the commit record has\nnothing to do with whether it synchronizes on WAL writes. In\nparticular, this section around line 1096 is where the choice is\nmade:\n \n if ((wrote_xlog && synchronous_commit > SYNCHRONOUS_COMMIT_OFF)\n || forceSyncCommit || nrels > 0)\n {\n /*\n * Synchronous commit case:\n \nIn the OP's test case, wrote_xlog is true, while forceSyncCommit is\nfalse and nrels == 0.\n \nIt doesn't seem like commit of a read-only transaction should be a\nmagical time for pruning WAL entries to hit the disk, so it probably\nwould work to modify that \"if\" to not drop into the synchronous\ncommit code if the transaction is explicitly declared READ ONLY or\nif it never acquired an XID; although there's likely some better way\nto deal with it.\n \n-Kevin\n",
"msg_date": "Wed, 13 Jul 2011 16:57:37 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Huh? If there was never an XID, there's no commit WAL record,\n>> hence nothing to make asynchronous.\n \n> If you look at the RecordTransactionCommit() function in xact.c\n> you'll see that's not correct.\n\nOh, hmmm ... that code was written with the idea that things like\nsequence XLOG_SEQ_LOG records ought to be flushed to disk before\nreporting commit; otherwise you don't have a guarantee that the same\nsequence value couldn't be handed out again after crash/restart,\nin a transaction that just does something like\n\tSELECT nextval('myseq');\nwithout any updates of regular tables.\n\nIt seems like we ought to distinguish heap cleanup activities from\nuser-visible semantics (IOW, users shouldn't care if a HOT cleanup has\nto be done over after restart, so if the transaction only wrote such\nrecords there's no need to flush). This'd require more process-global\nstate than we keep now, I'm afraid.\n\nAnother approach we could take (also nontrivial) is to prevent\nselect-only queries from doing HOT cleanups. You said upthread that\nthere were alleged performance benefits from aggressive cleanup, but\nIMO that can charitably be described as unproven. The real reason it\nhappens is that we didn't see a simple way for page fetches to know soon\nenough whether a tuple update would be likely to happen later, so they\njust do cleanups unconditionally.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jul 2011 18:21:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database "
},
{
"msg_contents": "On 07/13/2011 11:42 AM, Kevin Grittner wrote:\n> So transactions without an XID *are* sensitive to\n> synchronous_commit. That's likely a useful clue.\n>\n> How much did it help the run time of the SELECT which followed the\n> UPDATE?\n\nIt has surprisingly little impact on the SELECT side:\n\n=> set synchronous_commit = on;\n=> update test set created_by = '000000000000001' where tenant = \n'000000000000001';\nUPDATE 3712\nTime: 384.702 ms\nlars=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nTime: 36.571 ms\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nTime: 5.702 ms\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nTime: 5.822 ms\n=> set synchronous_commit = off;\nSET\nTime: 0.145 ms\n=> update test set created_by = '000000000000001' where tenant = \n'000000000000001';\nUPDATE 3712\nTime: 96.227 ms\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nTime: 32.422 ms\n=> select count(*) from test where tenant = '000000000000001' and \ncreated_date = '2011-6-30';\n count\n-------\n 3712\n(1 row)\n\nTime: 6.080 ms\n\nI tried it multiple times, and while the numbers change by 5-10ms the \nrelationship is the same.\n\nThe same results show when I use my JDBC code to run updates/selects as \nfast as possible. When synchronous_commit is\noff for the SELECTing process it seems to be slightly faster.\n\n-- Lars\n\n",
"msg_date": "Wed, 13 Jul 2011 15:41:16 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\t\t\t database"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> It seems like we ought to distinguish heap cleanup activities from\n> user-visible semantics (IOW, users shouldn't care if a HOT cleanup\n> has to be done over after restart, so if the transaction only\n> wrote such records there's no need to flush). This'd require more\n> process-global state than we keep now, I'm afraid.\n \nThat makes sense, and seems like the right long-term fix. It seems\nlike a boolean might do it; the trick would be setting it (or not)\nin all the right places.\n \n> Another approach we could take (also nontrivial) is to prevent\n> select-only queries from doing HOT cleanups. You said upthread\n> that there were alleged performance benefits from aggressive\n> cleanup, but IMO that can charitably be described as unproven. \n> The real reason it happens is that we didn't see a simple way for\n> page fetches to know soon enough whether a tuple update would be\n> likely to happen later, so they just do cleanups unconditionally.\n \nHmm. One trivial change could be to skip it when the top level\ntransaction is declared to be READ ONLY. At least that would give\npeople a way to work around it for now. Of course, that can't be\nback-patched before 9.1 because subtransactions could override READ\nONLY before that.\n \n-Kevin\n",
"msg_date": "Thu, 14 Jul 2011 09:05:41 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached\n\t database"
},
{
"msg_contents": "On Thu, Jul 14, 2011 at 4:05 PM, Kevin Grittner\n<[email protected]> wrote:\n> Tom Lane <[email protected]> wrote:\n>\n>> It seems like we ought to distinguish heap cleanup activities from\n>> user-visible semantics (IOW, users shouldn't care if a HOT cleanup\n>> has to be done over after restart, so if the transaction only\n>> wrote such records there's no need to flush). This'd require more\n>> process-global state than we keep now, I'm afraid.\n>\n> That makes sense, and seems like the right long-term fix. It seems\n> like a boolean might do it; the trick would be setting it (or not)\n> in all the right places.\n\nI also believe this is the right way to go. I think the crucial thing\nis in \"distinguish heap cleanup activities from user-visible\nsemantics\" - basically this is what happens with auto vacuum: it does\nwork concurrently that you do not want to burden on user transactions.\n\n>> Another approach we could take (also nontrivial) is to prevent\n>> select-only queries from doing HOT cleanups. You said upthread\n>> that there were alleged performance benefits from aggressive\n>> cleanup, but IMO that can charitably be described as unproven.\n>> The real reason it happens is that we didn't see a simple way for\n>> page fetches to know soon enough whether a tuple update would be\n>> likely to happen later, so they just do cleanups unconditionally.\n>\n> Hmm. One trivial change could be to skip it when the top level\n> transaction is declared to be READ ONLY. At least that would give\n> people a way to work around it for now. Of course, that can't be\n> back-patched before 9.1 because subtransactions could override READ\n> ONLY before that.\n\nWhat I don't like about this approach is that it a) increases\ncomplexity for the user, b) might not be for everyone (i.e. tools like\nOR mappers which do not allow such setting of the TX or cases where\nyou do not know what type of TX this is when you start it) and c) it\nstill keeps the performance penalty to suddenly come to haunt a\ndifferent TX.\n\nI can only speculate whether the latter might actually cause other\npeople to run into issues because their usage patterns currently force\nthe cleanout activities into an unimportant TX while the workaround\nwould suddenly have the cleanout delay show up in an important TX\nwhich used to be fast. This is also hard to debug since you would\nnormally only look at the slow TX before you realize you need to look\nelsewhere (the length of this thread is kind of proof of this already\n:-)).\n\nMy 0.01 EUR...\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 14 Jul 2011 17:03:36 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> It seems like we ought to distinguish heap cleanup activities from\n>> user-visible semantics (IOW, users shouldn't care if a HOT cleanup\n>> has to be done over after restart, so if the transaction only\n>> wrote such records there's no need to flush). This'd require more\n>> process-global state than we keep now, I'm afraid.\n \n> That makes sense, and seems like the right long-term fix. It seems\n> like a boolean might do it; the trick would be setting it (or not)\n> in all the right places.\n\nThe implementation I was imagining was to define another bit in the info\nparameter for XLogInsert, say XLOG_NON_TRANSACTIONAL. This could be a\nhigh-order bit that would not go to disk. Anytime it was *not* set,\nXLogInsert would set a global boolean that would remember that the\ncurrent transaction wrote a transactional WAL record. This is the\nright default since the vast majority of call sites are writing records\nthat we would want to have flushed at commit. There are just a couple\nof places that would need to be changed to add this flag to their calls.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Jul 2011 11:47:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database "
},
{
"msg_contents": "On Wed, Jul 13, 2011 at 11:10 AM, lars <[email protected]> wrote:\n\n...\n\n> => update test set created_by = '000000000000001' where tenant =\n> '000000000000001';\n> UPDATE 3712\n...\n>\n> There seems to be definitely something funky going on. Since created_by is\n> indexed it shouldn't do any HOT logic.\n\nOnce the update has been run once, further executions are degenerate\n(they replace the updated indexed column with the same value it\nalready holds). The HOT code detects this and uses a HOT update in\nthis case despite the apparent update of an indexed column.\n\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 14 Jul 2011 14:45:43 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Wed, Jul 13, 2011 at 3:41 PM, lars <[email protected]> wrote:\n> On 07/13/2011 11:42 AM, Kevin Grittner wrote:\n>>\n>> So transactions without an XID *are* sensitive to\n>> synchronous_commit. That's likely a useful clue.\n>>\n>> How much did it help the run time of the SELECT which followed the\n>> UPDATE?\n>\n> It has surprisingly little impact on the SELECT side:\n\nIf your fsync is truly fsyncing, it seems like it should have\nconsiderable effect.\n\nCould you strace with both -ttt and -T, with and without synchronous commit?\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 14 Jul 2011 16:03:43 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/14/2011 04:03 PM, Jeff Janes wrote:\n> On Wed, Jul 13, 2011 at 3:41 PM, lars<[email protected]> wrote:\n>> On 07/13/2011 11:42 AM, Kevin Grittner wrote:\n>>> So transactions without an XID *are* sensitive to\n>>> synchronous_commit. That's likely a useful clue.\n>>>\n>>> How much did it help the run time of the SELECT which followed the\n>>> UPDATE?\n>> It has surprisingly little impact on the SELECT side:\n> If your fsync is truly fsyncing, it seems like it should have\n> considerable effect.\n>\n> Could you strace with both -ttt and -T, with and without synchronous commit?\n>\n> Cheers,\n>\n> Jeff\nOk, here we go:\n\n\"Q\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96 \n<5.357152>\n1310774187.750791 gettimeofday({1310774187, 750809}, NULL) = 0 <0.000022>\n1310774187.751023 lseek(12, 0, SEEK_END) = 329908224 <0.000023>\n1310774187.751109 lseek(15, 0, SEEK_END) = 396607488 <0.000022>\n1310774187.751186 lseek(18, 0, SEEK_END) = 534175744 <0.000022>\n1310774187.751360 lseek(12, 0, SEEK_END) = 329908224 <0.000023>\n1310774187.753389 brk(0x248e000) = 0x248e000 <0.000026>\n1310774187.753953 brk(0x24ce000) = 0x24ce000 <0.000023>\n1310774187.755158 brk(0x254e000) = 0x254e000 <0.000024>\n1310774187.766605 brk(0x2450000) = 0x2450000 <0.000170>\n1310774187.766852 lseek(23, 4513792, SEEK_SET) = 4513792 <0.000023>\n1310774187.766927 write(23, \n\"f\\320\\1\\0\\1\\0\\0\\0\\320\\0\\0\\0\\0\\340D-\\22\\0\\0\\0\\30@!000000000\"..., 32768) \n= 32768 <0.000075>\n1310774187.767071 fdatasync(23) = 0 <0.002618>\n1310774187.769760 gettimeofday({1310774187, 769778}, NULL) = 0 <0.000022>\n1310774187.769848 sendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\r\\201\\0\\0>\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232 <0.000064>\n1310774187.769993 sendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66 <0.000199>\n\n(23 is the WAL fd) vs.\n\n\"Q\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96 \n<7.343720>\n1310774306.978767 gettimeofday({1310774306, 978785}, NULL) = 0 <0.000021>\n1310774306.978994 lseek(12, 0, SEEK_END) = 330883072 <0.000024>\n1310774306.979080 lseek(15, 0, SEEK_END) = 397131776 <0.000021>\n1310774306.979157 lseek(18, 0, SEEK_END) = 534732800 <0.000022>\n1310774306.979332 lseek(12, 0, SEEK_END) = 330883072 <0.000022>\n1310774306.983096 brk(0x248e000) = 0x248e000 <0.000026>\n1310774306.983653 brk(0x24ce000) = 0x24ce000 <0.000023>\n1310774306.984667 brk(0x254e000) = 0x254e000 <0.000023>\n1310774306.996040 brk(0x2450000) = 0x2450000 <0.000168>\n1310774306.996298 gettimeofday({1310774306, 996317}, NULL) = 0 <0.000021>\n1310774306.996388 sendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\r\\201\\0\\0>\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232 <0.000078>\n1310774306.996550 sendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66 <0.000202>\n\nSo the difference is only 2ms. 
The size of the WAL buffers written is on \n32k,\n\nHere's an example with more dirty rows (I basically let the updater run \nfor a while dirtying very many rows).\n\n\"Q\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96 \n<23.690018>\n1310775141.398780 gettimeofday({1310775141, 398801}, NULL) = 0 <0.000028>\n1310775141.399018 lseek(12, 0, SEEK_END) = 372514816 <0.000023>\n1310775141.399105 lseek(15, 0, SEEK_END) = 436232192 <0.000022>\n1310775141.399185 lseek(18, 0, SEEK_END) = 573620224 <0.000023>\n1310775141.399362 lseek(12, 0, SEEK_END) = 372514816 <0.000024>\n1310775141.414017 brk(0x2490000) = 0x2490000 <0.000028>\n1310775141.414575 brk(0x24d0000) = 0x24d0000 <0.000025>\n1310775141.415600 brk(0x2550000) = 0x2550000 <0.000024>\n1310775141.417757 semop(229383, {{0, -1, 0}}, 1) = 0 <0.000024>\n...\n1310775141.448998 semop(229383, {{0, -1, 0}}, 1) = 0 <0.000025>\n1310775141.453134 brk(0x2452000) = 0x2452000 <0.000167>\n1310775141.453377 fadvise64(22, 0, 0, POSIX_FADV_DONTNEED) = 0 <0.000025>\n1310775141.453451 close(22) = 0 <0.000032>\n1310775141.453537 open(\"pg_xlog/00000001000000D1000000C2\", O_RDWR) = 22 \n<0.000059>\n1310775141.453696 write(22, \n\"f\\320\\3\\0\\1\\0\\0\\0\\321\\0\\0\\0\\0\\0\\0\\3023\\356\\17N\\23l\\vN\\0\\0\\0\\1\\0 \n\\0\\0\"..., 5365760) = 5365760 <0.005991>\n1310775141.459798 write(22, \n\"f\\320\\1\\0\\1\\0\\0\\0\\321\\0\\0\\0\\0\\340Q\\302`\\5\\0\\00000000915!000\"..., \n9019392) = 9019392 <0.010062>\n1310775141.469965 fdatasync(22) = 0 <0.231385>\n1310775141.701424 semop(229383, {{2, 1, 0}}, 1) = 0 <0.000031>\n1310775141.702657 gettimeofday({1310775141, 702682}, NULL) = 0 <0.000028>\n1310775141.702765 sendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\r\\201\\0\\0>\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232 <0.000071>\n1310775141.702942 sendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66 <0.000220>\n\nvs\n\n\"Q\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96 \n<55.595425>\n1310775406.842823 gettimeofday({1310775406, 842842}, NULL) = 0 <0.000026>\n1310775406.843092 lseek(12, 0, SEEK_END) = 382787584 <0.000023>\n1310775406.843179 lseek(15, 0, SEEK_END) = 457596928 <0.000042>\n1310775406.843280 lseek(18, 0, SEEK_END) = 594968576 <0.000023>\n1310775406.843459 lseek(12, 0, SEEK_END) = 382787584 <0.000022>\n1310775406.860266 brk(0x2490000) = 0x2490000 <0.000046>\n1310775406.860968 brk(0x24d0000) = 0x24d0000 <0.000095>\n1310775406.862449 brk(0x2550000) = 0x2550000 <0.000112>\n1310775406.865095 semop(229383, {{2, -1, 0}}, 1) = 0 <0.111698>\n...\n1310775407.027235 semop(229383, {{2, -1, 0}}, 1) = 0 <0.000039>\n1310775407.027503 semop(229383, {{2, -1, 0}}, 1) = 0 <2.215137>\n1310775409.243291 semop(229383, {{1, 1, 0}}, 1) = 0 <0.000029>\n...\n1310775409.246963 semop(229383, {{2, -1, 0}}, 1) = 0 <0.000024>\n1310775409.252029 brk(0x2452000) = 0x2452000 <0.000168>\n1310775409.252288 gettimeofday({1310775409, 252307}, NULL) = 0 <0.000021>\n1310775409.252393 sendto(5, \n\"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\2\\0\\0\\0\\0\\0\\0\\0\\r\\201\\0\\0>\\0\\2\\0\"..., \n232, 0, NULL, 0) = 232 <0.000078>\n1310775409.252557 sendto(6, \n\"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"..., \n66, 0, NULL, 0) = 66 <0.000201>\n\nNo WAL, but checkout that one expensive semop! 2s!!\n\n-- Lars\n\n",
"msg_date": "Fri, 15 Jul 2011 17:21:28 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Fri, Jul 15, 2011 at 5:21 PM, lars <[email protected]> wrote:\n> On 07/14/2011 04:03 PM, Jeff Janes wrote:\n>>\n>> On Wed, Jul 13, 2011 at 3:41 PM, lars<[email protected]> wrote:\n>>>\n>>> On 07/13/2011 11:42 AM, Kevin Grittner wrote:\n>>>>\n>>>> So transactions without an XID *are* sensitive to\n>>>> synchronous_commit. That's likely a useful clue.\n>>>>\n>>>> How much did it help the run time of the SELECT which followed the\n>>>> UPDATE?\n>>>\n>>> It has surprisingly little impact on the SELECT side:\n>>\n>> If your fsync is truly fsyncing, it seems like it should have\n>> considerable effect.\n>>\n>> Could you strace with both -ttt and -T, with and without synchronous\n>> commit?\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n> Ok, here we go:\n>\n> \"Q\\0\\0\\0_select count(*) from test w\"..., 8192, 0, NULL, NULL) = 96\n> <5.357152>\n> 1310774187.750791 gettimeofday({1310774187, 750809}, NULL) = 0 <0.000022>\n> 1310774187.751023 lseek(12, 0, SEEK_END) = 329908224 <0.000023>\n> 1310774187.751109 lseek(15, 0, SEEK_END) = 396607488 <0.000022>\n> 1310774187.751186 lseek(18, 0, SEEK_END) = 534175744 <0.000022>\n> 1310774187.751360 lseek(12, 0, SEEK_END) = 329908224 <0.000023>\n> 1310774187.753389 brk(0x248e000) = 0x248e000 <0.000026>\n> 1310774187.753953 brk(0x24ce000) = 0x24ce000 <0.000023>\n> 1310774187.755158 brk(0x254e000) = 0x254e000 <0.000024>\n> 1310774187.766605 brk(0x2450000) = 0x2450000 <0.000170>\n> 1310774187.766852 lseek(23, 4513792, SEEK_SET) = 4513792 <0.000023>\n> 1310774187.766927 write(23,\n> \"f\\320\\1\\0\\1\\0\\0\\0\\320\\0\\0\\0\\0\\340D-\\22\\0\\0\\0\\30@!000000000\"..., 32768) =\n> 32768 <0.000075>\n> 1310774187.767071 fdatasync(23) = 0 <0.002618>\n> 1310774187.769760 gettimeofday({1310774187, 769778}, NULL) = 0 <0.000022>\n> 1310774187.769848 sendto(5,\n> \"\\2\\0\\0\\0\\350\\0\\0\\0\\1@\\0\\0\\2\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\r\\201\\0\\0>\\0\\2\\0\"...,\n> 232, 0, NULL, 0) = 232 <0.000064>\n> 1310774187.769993 sendto(6,\n> \"T\\0\\0\\0\\36\\0\\1count\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\24\\0\\10\\377\\377\\377\\377\\0\\0D\"...,\n> 66, 0, NULL, 0) = 66 <0.000199>\n\nThe total time for this is about 19 ms, but your previous example was\naround 35 ms. Is this reproducible? A change of set up between then\nand now?\n\n2.6 ms for an fsync seems awfully quick. I wonder if EBS uses\nnonvolatile/battery-backed write cache, or if it just lies about fsync\nactually hitting disk.\n\nBut anyway it looks like you aren't blocking much in system calls, and\nI don't think there are non-system-call ways to block, so the time is\nprobably being spent in something CPU intensive.\n\nOn my (physical) computer, synchronous_commit=off does eliminate the\ntiming differences between the select immediately after the update and\nsubsequent selects. So while I could reproduce timing differences\nthat were superficially similar to yours, they seem to have some\nfundamentally different cause.\n\nMaybe the best way to figure out what is going on is to loop the\nupdate and the select in different processes, and use perf or oprof to\nprofile just the select process (with and without the update running).\n It would also be good to know the timings without profiling turned on\nas well, to know how much the profiling is disturbing the timing.\n\n...\n\n> Here's an example with more dirty rows (I basically let the updater run for\n> a while dirtying very many rows).\n\nI'm surprised that make that much of a difference. 
The select should\nonly clean up blocks it actually visits, and updating more rows\nshouldn't change that very much.\n\n...\n> 1310775407.027503 semop(229383, {{2, -1, 0}}, 1) = 0 <2.215137>\n...\n> No WAL, but checkout that one expensive semop! 2s!!\n\nIs that reproducible, or is it just a one time anomaly?\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 16 Jul 2011 15:33:03 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/14/2011 08:47 AM, Tom Lane wrote:\n>\n> The implementation I was imagining was to define another bit in the info\n> parameter for XLogInsert, say XLOG_NON_TRANSACTIONAL. This could be a\n> high-order bit that would not go to disk. Anytime it was *not* set,\n> XLogInsert would set a global boolean that would remember that the\n> current transaction wrote a transactional WAL record. This is the\n> right default since the vast majority of call sites are writing records\n> that we would want to have flushed at commit. There are just a couple\n> of places that would need to be changed to add this flag to their calls.\n>\n> \t\t\tregards, tom lane\n>\n\nIf you have a patch in mind I'm happy to test it on my setup and report \nback.\n\n-- Lars\n\n",
"msg_date": "Sat, 16 Jul 2011 19:12:47 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 07/16/2011 06:33 PM, Jeff Janes wrote:\n> 2.6 ms for an fsync seems awfully quick. I wonder if EBS uses\n> nonvolatile/battery-backed write cache, or if it just lies about fsync\n> actually hitting disk.\n> \n\nThey have the right type of cache in there to make fsync quick, when you \nhappen to be the lucky one to find it free of a write backlog. So the \nbest case is much better than a typical spinning drive with no such \ncache. The worst case is in the 100ms+ range though on EBS.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Sun, 17 Jul 2011 20:44:03 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On Wed, Jul 13, 2011 at 10:52 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Jul 12, 2011 at 6:15 PM, lars <[email protected]> wrote:\n>> Back to the first case, here's an strace from the backend doing the select\n>> right after the updates.\n>> write(13,\n>> \"f\\320\\1\\0\\1\\0\\0\\0\\273\\0\\0\\0\\0\\340\\27\\22`\\32\\0\\00000002833!000\"..., 2400256)\n>> = 2400256\n>\n> On Wed, Jul 13, 2011 at 9:46 AM, Kevin Grittner\n> <[email protected]> wrote:\n>> Code comments indicate that they expect the pruning to be a pretty\n>> clear win on multiple reads, although I don't know how much that was\n>> benchmarked. Jeff does raise a good point, though -- it seems odd\n>> that WAL-logging of this pruning would need to be synchronous. We\n>> support asynchronous commits -- why not use that feature\n>\n> Right -- here are my thoughts. notice the above is writing out 293\n> pages. this is suggesting to me that Kevin is right and you've\n> identified a pattern where you are aggravating the page cleanup\n> facilities of HOT. What threw me off here (and perhaps bears some\n> additional investigation) is that early on in the report you were\n> claiming an update to an indexed field which effectively disables HOT.\n\nThere are couple of other (very important) things that HOT does, but\nprobably its not advertised a lot. Even for non-HOT updates (which\nmeans either indexed columns were changed or page ran out of free\nspace) or deletes, HOT prunes those tuples and instead mark the line\npointer as DEAD. The page is defragmented and dead space is recovered.\nEach such dead tuple now only consumes two bytes in the page until\nvacuum removes the dead line pointers. Thats the reason why OP is\nseeing the behavior even when index columns are being updated.\n\nWe made a few adjustments to ensure that a page is not pruned too\nearly. So we track the oldest XID that did any updates/deletes to the\npage and attempt pruning only when the RecentXmin is past the XID. We\nalso mark the page as \"full\" if some previous update did not find\nenough free space to do in-block update and use that hint to decide if\nwe should attempt to prune the page. Finally, we prune only if we get\nthe cleanup lock without blocking.\n\nWhat might be worth looking at this condition in pruneheap.c:\n\n/*\n * We prune when a previous UPDATE failed to find enough space on the page\n * for a new tuple version, or when free space falls below the relation's\n * fill-factor target (but not less than 10%).\n *\n * Checking free space here is questionable since we aren't holding any\n * lock on the buffer; in the worst case we could get a bogus answer. It's\n * unlikely to be *seriously* wrong, though, since reading either pd_lower\n * or pd_upper is probably atomic. Avoiding taking a lock seems more\n * important than sometimes getting a wrong answer in what is after all\n * just a heuristic estimate.\n */\n minfree = RelationGetTargetPageFreeSpace(relation,\n HEAP_DEFAULT_FILLFACTOR);\n minfree = Max(minfree, BLCKSZ / 10);\n\n if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)\n {\n\n\nSo if the free space in a page falls below the fill-factor or 10% of\nthe block size, we would try to prune the page. We probably need to\nrevisit this area and see if we need to tune HOT ever better. 
One\noption could be to see how much space we are going to free and carry\nout the operation only if its significant enough to justify the cost.\n\nI know we had done several benchmarking tests while HOT development,\nbut the tuning mechanism still may not be perfect for all kinds of\nwork loads and it would probably never be.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n",
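The fill-factor target mentioned in that comment is already adjustable per table from SQL, and lowering it is the usual way to leave free space on each page so updates can stay in-page (HOT) in the first place (a sketch against the test table from this thread; 90 is only an illustrative value, not a recommendation):

    ALTER TABLE test SET (fillfactor = 90);
    -- only newly written pages honor the new setting; a table rewrite applies it everywhere
    VACUUM FULL test;
    ANALYZE test;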
"msg_date": "Wed, 27 Jul 2011 19:45:52 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "Thanks Pavan!\n\n\nI think the most important points are still that:\n1. The WAL write should be happening asynchronously (if that is possible)\n2. There should be an option do not perform these compactions if the page is only touched by reads.\n\n(Assuming that when most of the databaseresides in the cache these optimizations are less important.)\n\n\n-- Lars\n\n\n----- Original Message -----\nFrom: Pavan Deolasee <[email protected]>\nTo: Merlin Moncure <[email protected]>\nCc: lars <[email protected]>; Kevin Grittner <[email protected]>; Ivan Voras <[email protected]>; [email protected]\nSent: Wednesday, July 27, 2011 7:15 AM\nSubject: Re: [PERFORM] UPDATEDs slowing SELECTs in a fully cached database\n\nOn Wed, Jul 13, 2011 at 10:52 AM, Merlin Moncure <[email protected]> wrote:\n...\n\nThere are couple of other (very important) things that HOT does, but\nprobably its not advertised a lot. Even for non-HOT updates (which\nmeans either indexed columns were changed or page ran out of free\nspace) or deletes, HOT prunes those tuples and instead mark the line\npointer as DEAD. The page is defragmented and dead space is recovered.\nEach such dead tuple now only consumes two bytes in the page until\nvacuum removes the dead line pointers. Thats the reason why OP is\nseeing the behavior even when index columns are being updated.\n\nWe made a few adjustments to ensure that a page is not pruned too\nearly. So we track the oldest XID that did any updates/deletes to the\npage and attempt pruning only when the RecentXmin is past the XID. We\nalso mark the page as \"full\" if some previous update did not find\nenough free space to do in-block update and use that hint to decide if\nwe should attempt to prune the page. Finally, we prune only if we get\nthe cleanup lock without blocking.\n\nWhat might be worth looking at this condition in pruneheap.c:\n\n/*\n * We prune when a previous UPDATE failed to find enough space on the page\n * for a new tuple version, or when free space falls below the relation's\n * fill-factor target (but not less than 10%).\n *\n * Checking free space here is questionable since we aren't holding any\n * lock on the buffer; in the worst case we could get a bogus answer. It's\n * unlikely to be *seriously* wrong, though, since reading either pd_lower\n * or pd_upper is probably atomic. Avoiding taking a lock seems more\n * important than sometimes getting a wrong answer in what is after all\n * just a heuristic estimate.\n */\n minfree = RelationGetTargetPageFreeSpace(relation,\n HEAP_DEFAULT_FILLFACTOR);\n minfree = Max(minfree, BLCKSZ / 10);\n\n if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)\n {\n\n\nSo if the free space in a page falls below the fill-factor or 10% of\nthe block size, we would try to prune the page. We probably need to\nrevisit this area and see if we need to tune HOT ever better. One\noption could be to see how much space we are going to free and carry\nout the operation only if its significant enough to justify the cost.\n\nI know we had done several benchmarking tests while HOT development,\nbut the tuning mechanism still may not be perfect for all kinds of\nwork loads and it would probably never be.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 29 Jul 2011 08:57:37 -0700 (PDT)",
"msg_from": "lars hofhansl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
},
{
"msg_contents": "On 7/29/11, lars hofhansl <[email protected]> wrote:\n> Thanks Pavan!\n>\n>\n> I think the most important points are still that:\n> 1. The WAL write should be happening asynchronously (if that is possible)\n\nI think it is agreed that this is a \"todo\"; but since you reported\nthat turning off synchronous commit did not improve performance, it is\nnot directly relevant for you.\n\n\n> 2. There should be an option do not perform these compactions if the page is\n> only touched by reads.\n\nIf the page is only touched by reads, there would be nothing to compact.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 5 Aug 2011 02:58:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
}
] |
[
{
"msg_contents": "Is there any guidelines to sizing work_mem, shared_bufferes and other\nconfiguration parameters etc., with regards to very large records? I\nhave a table that has a bytea column and I am told that some of these\ncolumns contain over 400MB of data. I am having a problem on several\nservers reading and more specifically dumping these records (table)\nusing pg_dump\n\nThanks\n",
"msg_date": "Thu, 07 Jul 2011 20:02:26 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Very large record sizes and resource usage"
}
] |
[
{
"msg_contents": "Hi,\n\ni am using libpq library and postgresql 8.4 for my linux application running on ARM with 256 MB. I am just doing:\n\nPQconnectdb();\nPQexec(INSERT INTO table1 ....); (0.009661 sec.)\nPQexec(INSERT INTO table1 ....); (0.004208 sec.)\n\nPQexec(INSERT INTO table2 ....); (0.007352 sec.)\nPQexec(INSERT INTO table2 ....); (0.002533 sec.)\nPQexec(INSERT INTO table2 ....); (0.002281 sec.)\nPQexec(INSERT INTO table2 ....); (0.002244 sec.)\n\nPQexec(INSERT INTO table3 ....); (0.006903 sec.)\nPQexec(INSERT INTO table3 ....); (0.002903 sec.)\nPQfinnish();\n\nI check the time for each PQexec with gettimeofday function and I always see that the first INSERT for each table needs longer than the next ones.\n\nthis must be something with the parser stage and since i am doing every time the same queries, I would like to know if there is a way to cache these queries in order to speed up the first INSERT.\n\nThanks in advance,\n\nsma\n",
"msg_date": "Fri, 8 Jul 2011 04:23:09 -0700 (PDT)",
"msg_from": "Sergio Mayoral <[email protected]>",
"msg_from_op": true,
"msg_subject": "execution time for first INSERT"
},
{
"msg_contents": "On Fri, 2011-07-08 at 04:23 -0700, Sergio Mayoral wrote:\n> this must be something with the parser stage and since i am doing\n> every time the same queries, I would like to know if there is a way to\n> cache these queries in order to speed up the first INSERT.\n\nI doubt it's the parser.\n\nSeeing as it's around a couple ms at minimum, it's probably some kind of\nIO latency. You could see that by wrapping the statements in a big\ntransaction (BEGIN/END block) -- I bet the inserts go very quickly and\nthe final commit takes longer.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 08 Jul 2011 17:40:23 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execution time for first INSERT"
}
] |
[
{
"msg_contents": "Hello folks,\n\nThis isn't really a problem, so much as an observation of just how much \nthe internals have changed over the years. We've got an older version \nwe're hoping to upgrade soon, and a developer asked me to optimize this \ntoday:\n\nSELECT order_id\n FROM order\n WHERE order_id = root_order_id\n AND order_id IN (\n SELECT DISTINCT m.root_order_id\n FROM wacky_orders c\n JOIN order m USING (root_order_id)\n WHERE m.order_type = 'regular'\n GROUP BY m.root_order_id, m.route_id\n HAVING COUNT(1) > 1\n );\n\n From what I could tell, the query was fine. But this part of the \nexplain was confusing the hell out of me:\n\n-> Seq Scan on order (cost=0.00..218943.98 rows=24092 width=20)\n Filter: (order_id = root_order_id)\n\nThe thing is, that subquery there only produced 150 rows. So I shrugged \nand simplified further by making a temp table, and got this:\n\nSELECT order_id\n FROM order m\n JOIN zany_orders z ON (m.order_id = z.root_order_id)\n WHERE m.order_id = m.root_order_id;\n\nWhich produced this:\n\nMerge Join (cost=220705.29..220826.19 rows=1 width=10)\n Merge Cond: (m.order_id = z.root_order_id)\n -> Sort (cost=220697.42..220757.65 rows=24092 width=20)\n Sort Key: m.order_id\n -> Seq Scan on order m (cost=0.00..218943.98 rows=24092 width=20)\n Filter: (order_id = root_order_id)\n -> Sort (cost=7.87..8.24 rows=149 width=11)\n Sort Key: z.root_order_id\n -> Seq Scan on zany_orders z (cost=0.00..2.49 rows=149 width=11)\n\nOk, now it's just screwing with me. The order table has about 5M rows, \nand this is clearly not a good idea, here. But then I took a closer \nlook. Why did it decide to filter based on a condition 90% of the table \nfits, and then *merge* those results in with the 150-row temp table?\n\nSo, for giggles, I cast a column type to tell the planner it shouldn't \nconsider the columns equivalent:\n\nSELECT master_order_id\n FROM order m\n JOIN zany_orders z ON (m.order_id = z.root_order_id)\n WHERE m.order_id::VARCHAR = m.root_order_id;\n\nAnd voila:\n\nNested Loop (cost=0.00..839.82 rows=1 width=8)\n -> Seq Scan on zany_orders z (cost=0.00..2.49 rows=149 width=11)\n -> Index Scan using order_pkey on order m (cost=0.00..5.60 rows=1 \nwidth=8)\n Index Cond: (m.order_id = z.root_order_id)\n Filter: ((order_id)::varchar = root_order_id)\n\nI tried this with a mere 9.0 install and it wasn't having any of it. It \ncompletely ignored the red-herring WHERE clause except as a post-filter. \nI'm pretty sure that if it were possible to manifest itself to slap me \nfor even trying, it would have done so.\n\nI've noticed lots of little things like this recently, and I have to \nsay, the planner has made huge improvements regardless of what \nperception may reflect sometimes. It still has some holes and room for \nimprovement, but I just wanted to thank the devs for all their hard work.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 8 Jul 2011 15:20:31 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Just a note about column equivalence disarming the planner"
}
] |
[
{
"msg_contents": "I have run into issue where the query optimizer is choosing the wrong\nexecution plan when I'm trying to join two large tables that have been\npartitioned. I would really appreciate it if someone could help me out\nthis. I don't know whether I've found a bug in the optimizer, or whether\nthere is some parameter/option I need to set in postgres. Below, I've\nincluded my execution plans. I'm using postgres 9.0.3, and I'm running this\non a pretty beefy Linux server.\n\nMy two tables:\n-widget: has 4041866 records, and is broken up into 4 partitions (no records\nare in the parent table).\n-icecream: I'm starting with zero records, but since this there could be\nbillions of ice-cream records, I will partition and will not have any\nrecords in the parent table.\n\nSo, then I then create my first partition in icecream table, and load\n4041866 records into it.\n\nHere is the query I'm using to join the two tables:\nexplain analyze\nSELECT\n r.widget_id,\n r.widget_type_id,\n avg(rc.cost)::double precision cost_avg\nFROM\n widget r,\n icecream rc\nWHERE\nr.widget_type_id = 4\nand r.widgetset_id = 5\nAND r.widget_id = rc.widget_id\nand rc.dataset_id = 281\ngroup by r.widget_id,r.chromosome, r.start_pos, r.end_pos,r.widget_type_id\n;\n\nHere is the corresponding execution plan:\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=147262.20..147299.12 rows=1136 width=41) (actual\ntime=31876.290..31904.880 rows=11028 loops=1)\n -> Merge Join (cost=95574.83..112841.79 rows=1147347 width=41) (actual\ntime=31130.870..31832.922 rows=11028 loops=1)\n Merge Cond: (r.widget_id = rc.widget_id)\n -> Sort (cost=1913.89..1942.27 rows=11352 width=21) (actual\ntime=56.818..68.701 rows=11028 loops=1)\n Sort Key: r.widget_id\n Sort Method: quicksort Memory: 1246kB\n -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual\ntime=0.139..40.513 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75\nrows=1 width=48) (actual time=0.030..0.030 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=0.023..0.023 rows=0 loops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using\nwidget_part_5_widget_widget_type_id_idx on widget_part_5 r\n (cost=0.00..1136.55 rows=11351 width=21) (actual time=0.106..18.489\nrows=11028 loops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n -> Sort (cost=93660.94..93711.47 rows=20214 width=24) (actual\ntime=29730.522..30766.354 rows=946140 loops=1)\n Sort Key: rc.widget_id\n Sort Method: external sort Disk: 165952kB\n -> Append (cost=0.00..92215.33 rows=20214 width=24) (actual\ntime=0.057..13731.204 rows=4041866 loops=1)\n -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5\nwidth=24) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (dataset_id = 281)\n -> Seq Scan on icecream_part_281 rc\n (cost=0.00..92192.33 rows=20209 width=24) (actual time=0.051..5427.730\nrows=4041866 loops=1)\n Filter: (dataset_id = 281)\n Total runtime: 33182.945 ms\n(24 rows)\n\n\nThe query is doing a merge join, is taking 33 seconds, but should take less\nthan a second. 
So, then I do: select * from icecream;\n\nNow, when I run the same query again, I get a different and correct\nexecution plan (nested loop), and the query takes less than 1 second as I\nwould expect.\n\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=7223611.41..7223648.33 rows=1136 width=41) (actual\ntime=392.822..420.166 rows=11028 loops=1)\n -> Nested Loop (cost=4.28..341195.22 rows=229413873 width=41) (actual\ntime=0.231..331.800 rows=11028 loops=1)\n Join Filter: (r.widget_id = rc.widget_id)\n -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual\ntime=0.051..50.181 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1\nwidth=48) (actual time=0.013..0.013 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx\non widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual\ntime=0.033..21.254 rows=11028 loops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n -> Append (cost=0.00..29.88 rows=6 width=24) (actual\ntime=0.014..0.018 rows=1 loops=11028)\n -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5\nwidth=24) (actual time=0.001..0.001 rows=0 loops=11028)\n Filter: (rc.dataset_id = 281)\n -> Index Scan using icecream_part_281_widget_id_idx on\nicecream_part_281 rc (cost=0.00..6.88 rows=1 width=24) (actual\ntime=0.009..0.010 rows=1 loops=11028)\n Index Cond: (rc.widget_id = r.widget_id)\n Filter: (rc.dataset_id = 281)\n Total runtime: 431.935 ms\n(19 rows)\n\n\nMy guess as to what happened:\n-because the icecream parent table has zero records, the query optimizer\nchooses the incorrect execution plan\n-when I do select * from icecream, the optimizer now knows how many records\nare really in the icecream table, by knowing that the icecream table has\npartitions.\n\nNext, if I run vacuum analyze on the parent table, I again get a wrong/slow\nexecution plan (this time it uses the hash join). 
Again, I think this is\nbecause the parent table itself has zero records.\n\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=191926.03..191962.95 rows=1136 width=41) (actual\ntime=28967.567..28994.395 rows=11028 loops=1)\n -> Hash Join (cost=166424.79..191585.47 rows=11352 width=41) (actual\ntime=28539.196..28917.830 rows=11028 loops=1)\n Hash Cond: (r.widget_id = rc.widget_id)\n -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual\ntime=0.054..54.068 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1\nwidth=48) (actual time=0.013..0.013 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx\non widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual\ntime=0.035..22.419 rows=11028 loops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n -> Hash (cost=92214.73..92214.73 rows=4041823 width=24) (actual\ntime=28438.419..28438.419 rows=4041866 loops=1)\n Buckets: 524288 Batches: 2 Memory Usage: 118449kB\n -> Append (cost=0.00..92214.73 rows=4041823 width=24)\n(actual time=0.020..14896.908 rows=4041866 loops=1)\n -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5\nwidth=24) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (dataset_id = 281)\n -> Seq Scan on icecream_part_281 rc\n (cost=0.00..92191.73 rows=4041818 width=24) (actual time=0.012..5718.592\nrows=4041866 loops=1)\n Filter: (dataset_id = 281)\n Total runtime: 29007.937 ms\n(20 rows)\n\n\nselect * from icecream does not fix this issue.\n\n\nI could of course disable hash join and merge join to force postgres to use\na nested loop, but my system is often joining these two tables, and I'd\nrather not have to set this in every single place.\nset enable_mergejoin=off;\nset enable_hashjoin=off;\nset enable_nestloop = on;\n\n\nthanks in advance!!!\n\nAnish\n\nI have run into issue where the query optimizer is choosing the wrong execution plan when I'm trying to join two large tables that have been partitioned. I would really appreciate it if someone could help me out this. I don't know whether I've found a bug in the optimizer, or whether there is some parameter/option I need to set in postgres. Below, I've included my execution plans. 
I'm using postgres 9.0.3, and I'm running this on a pretty beefy Linux server.\nMy two tables:-widget: has 4041866 records, and is broken up into 4 partitions (no records are in the parent table).-icecream: I'm starting with zero records, but since this there could be billions of ice-cream records, I will partition and will not have any records in the parent table.\nSo, then I then create my first partition in icecream table, and load 4041866 records into it.Here is the query I'm using to join the two tables:explain analyze\nSELECT r.widget_id, r.widget_type_id, avg(rc.cost)::double precision cost_avgFROM widget r, icecream rcWHERE\nr.widget_type_id = 4and r.widgetset_id = 5AND r.widget_id = rc.widget_idand rc.dataset_id = 281group by r.widget_id,r.chromosome, r.start_pos, r.end_pos,r.widget_type_id\n;Here is the corresponding execution plan: QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=147262.20..147299.12 rows=1136 width=41) (actual time=31876.290..31904.880 rows=11028 loops=1)\n -> Merge Join (cost=95574.83..112841.79 rows=1147347 width=41) (actual time=31130.870..31832.922 rows=11028 loops=1) Merge Cond: (r.widget_id = rc.widget_id) -> Sort (cost=1913.89..1942.27 rows=11352 width=21) (actual time=56.818..68.701 rows=11028 loops=1)\n Sort Key: r.widget_id Sort Method: quicksort Memory: 1246kB -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual time=0.139..40.513 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=0.030..0.030 rows=0 loops=1) Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=0.023..0.023 rows=0 loops=1) Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual time=0.106..18.489 rows=11028 loops=1) Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5) -> Sort (cost=93660.94..93711.47 rows=20214 width=24) (actual time=29730.522..30766.354 rows=946140 loops=1) Sort Key: rc.widget_id\n Sort Method: external sort Disk: 165952kB -> Append (cost=0.00..92215.33 rows=20214 width=24) (actual time=0.057..13731.204 rows=4041866 loops=1) -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (dataset_id = 281) -> Seq Scan on icecream_part_281 rc (cost=0.00..92192.33 rows=20209 width=24) (actual time=0.051..5427.730 rows=4041866 loops=1)\n Filter: (dataset_id = 281) Total runtime: 33182.945 ms(24 rows)The query is doing a merge join, is taking 33 seconds, but should take less than a second. So, then I do: select * from icecream;\nNow, when I run the same query again, I get a different and correct execution plan (nested loop), and the query takes less than 1 second as I would expect. 
QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=7223611.41..7223648.33 rows=1136 width=41) (actual time=392.822..420.166 rows=11028 loops=1)\n -> Nested Loop (cost=4.28..341195.22 rows=229413873 width=41) (actual time=0.231..331.800 rows=11028 loops=1) Join Filter: (r.widget_id = rc.widget_id) -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual time=0.051..50.181 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=0.013..0.013 rows=0 loops=1) Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=0.007..0.007 rows=0 loops=1) Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual time=0.033..21.254 rows=11028 loops=1) Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5) -> Append (cost=0.00..29.88 rows=6 width=24) (actual time=0.014..0.018 rows=1 loops=11028) -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.001..0.001 rows=0 loops=11028)\n Filter: (rc.dataset_id = 281) -> Index Scan using icecream_part_281_widget_id_idx on icecream_part_281 rc (cost=0.00..6.88 rows=1 width=24) (actual time=0.009..0.010 rows=1 loops=11028)\n Index Cond: (rc.widget_id = r.widget_id) Filter: (rc.dataset_id = 281) Total runtime: 431.935 ms(19 rows)\nMy guess as to what happened:-because the icecream parent table has zero records, the query optimizer chooses the incorrect execution plan-when I do select * from icecream, the optimizer now knows how many records are really in the icecream table, by knowing that the icecream table has partitions.\nNext, if I run vacuum analyze on the parent table, I again get a wrong/slow execution plan (this time it uses the hash join). 
Again, I think this is because the parent table itself has zero records.\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=191926.03..191962.95 rows=1136 width=41) (actual time=28967.567..28994.395 rows=11028 loops=1)\n -> Hash Join (cost=166424.79..191585.47 rows=11352 width=41) (actual time=28539.196..28917.830 rows=11028 loops=1) Hash Cond: (r.widget_id = rc.widget_id) -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual time=0.054..54.068 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=0.013..0.013 rows=0 loops=1) Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=0.007..0.007 rows=0 loops=1) Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual time=0.035..22.419 rows=11028 loops=1) Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5) -> Hash (cost=92214.73..92214.73 rows=4041823 width=24) (actual time=28438.419..28438.419 rows=4041866 loops=1) Buckets: 524288 Batches: 2 Memory Usage: 118449kB\n -> Append (cost=0.00..92214.73 rows=4041823 width=24) (actual time=0.020..14896.908 rows=4041866 loops=1) -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (dataset_id = 281) -> Seq Scan on icecream_part_281 rc (cost=0.00..92191.73 rows=4041818 width=24) (actual time=0.012..5718.592 rows=4041866 loops=1)\n Filter: (dataset_id = 281) Total runtime: 29007.937 ms(20 rows)select * from icecream does not fix this issue.\nI could of course disable hash join and merge join to force postgres to use a nested loop, but my system is often joining these two tables, and I'd rather not have to set this in every single place.\nset enable_mergejoin=off;set enable_hashjoin=off;set enable_nestloop = on;thanks in advance!!!Anish",
"msg_date": "Fri, 8 Jul 2011 14:36:48 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "issue with query optimizer when joining two partitioned tables"
},
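A note on the enable_* workaround mentioned at the end of the post above: the override does not have to be set "in every single place" if it is scoped to a single transaction. The sketch below is only a suggested pattern (the SET LOCAL scoping is an assumption, not something the poster reports using); the query and identifiers are copied from the post.

-- Hedged sketch: confine the planner overrides to one transaction so the
-- defaults stay in effect for every other session and statement.
BEGIN;
SET LOCAL enable_mergejoin = off;
SET LOCAL enable_hashjoin  = off;

EXPLAIN ANALYZE
SELECT r.widget_id,
       r.widget_type_id,
       avg(rc.cost)::double precision AS cost_avg
FROM   widget   r,
       icecream rc
WHERE  r.widget_type_id = 4
  AND  r.widgetset_id   = 5
  AND  r.widget_id      = rc.widget_id
  AND  rc.dataset_id    = 281
GROUP  BY r.widget_id, r.chromosome, r.start_pos, r.end_pos, r.widget_type_id;

COMMIT;  -- the SET LOCAL values revert automatically here

SET LOCAL reverts at COMMIT or ROLLBACK, so concurrent queries keep the default join methods.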
{
"msg_contents": "On 09.07.2011 00:36, Anish Kejariwal wrote:\n> My guess as to what happened:\n> -because the icecream parent table has zero records, the query optimizer\n> chooses the incorrect execution plan\n> -when I do select * from icecream, the optimizer now knows how many records\n> are really in the icecream table, by knowing that the icecream table has\n> partitions.\n\n\"select * from icecream\" won't have any direct effect on the \noptimization of subsequent queries. What probably happened is that \nautoanalyze ran in the background while you ran that select, and \nanalyzed some of the partitions. Simply waiting a while would've had the \nsame effect.\n\n> Next, if I run vacuum analyze on the parent table, I again get a wrong/slow\n> execution plan (this time it uses the hash join). Again, I think this is\n> because the parent table itself has zero records.\n>\n>\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=191926.03..191962.95 rows=1136 width=41) (actual\n> time=28967.567..28994.395 rows=11028 loops=1)\n> -> Hash Join (cost=166424.79..191585.47 rows=11352 width=41) (actual\n> time=28539.196..28917.830 rows=11028 loops=1)\n> Hash Cond: (r.widget_id = rc.widget_id)\n> -> Append (cost=4.28..1149.30 rows=11352 width=21) (actual\n> time=0.054..54.068 rows=11028 loops=1)\n> -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1\n> width=48) (actual time=0.013..0.013 rows=0 loops=1)\n> Recheck Cond: (widgetset_id = 5)\n> Filter: (widget_type_id = 4)\n> -> Bitmap Index Scan on widget_widgetset_id_idx\n> (cost=0.00..4.28 rows=4 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n> Index Cond: (widgetset_id = 5)\n> -> Index Scan using widget_part_5_widget_widget_type_id_idx\n> on widget_part_5 r (cost=0.00..1136.55 rows=11351 width=21) (actual\n> time=0.035..22.419 rows=11028 loops=1)\n> Index Cond: (widget_type_id = 4)\n> Filter: (widgetset_id = 5)\n> -> Hash (cost=92214.73..92214.73 rows=4041823 width=24) (actual\n> time=28438.419..28438.419 rows=4041866 loops=1)\n> Buckets: 524288 Batches: 2 Memory Usage: 118449kB\n> -> Append (cost=0.00..92214.73 rows=4041823 width=24)\n> (actual time=0.020..14896.908 rows=4041866 loops=1)\n> -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5\n> width=24) (actual time=0.002..0.002 rows=0 loops=1)\n> Filter: (dataset_id = 281)\n> -> Seq Scan on icecream_part_281 rc\n> (cost=0.00..92191.73 rows=4041818 width=24) (actual time=0.012..5718.592\n> rows=4041866 loops=1)\n> Filter: (dataset_id = 281)\n> Total runtime: 29007.937 ms\n> (20 rows)\n\nThe cost estimates in the above slow plan are pretty accurate, so I \nsuspect the cost estimates for the fast plan are not, or the planner \nwould choose that.\n\n> I could of course disable hash join and merge join to force postgres to use\n> a nested loop, but my system is often joining these two tables, and I'd\n> rather not have to set this in every single place.\n> set enable_mergejoin=off;\n> set enable_hashjoin=off;\n> set enable_nestloop = on;\n\nCan you do explain analyze with these settings? 
That might give us a \nclue on where it's going wrong.\n\nAlso, I suspect that when you load more data into icecream, the planner \nmight start to pick the faster plan, because the seqscan on icecream \nwill start to look more expensive compared to the index scan and nested \nloop join in the faster plan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 09 Jul 2011 10:54:08 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with query optimizer when joining two partitioned\n tables"
},
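One way to check the autoanalyze explanation above is to look at the per-table statistics view; a sketch only, with the relname pattern assumed from the partition names that appear in the plans (icecream, icecream_part_281, ...):

-- When did (auto)analyze last touch the parent and its partitions?
SELECT relname, last_analyze, last_autoanalyze, n_live_tup
FROM   pg_stat_user_tables
WHERE  relname LIKE 'icecream%'
ORDER  BY relname;

If last_autoanalyze on the partition is close to the time the plan changed, that supports the idea that background statistics, not the SELECT itself, made the difference.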
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 09.07.2011 00:36, Anish Kejariwal wrote:\n>> My guess as to what happened:\n>> -because the icecream parent table has zero records, the query optimizer\n>> chooses the incorrect execution plan\n>> -when I do select * from icecream, the optimizer now knows how many records\n>> are really in the icecream table, by knowing that the icecream table has\n>> partitions.\n\n> \"select * from icecream\" won't have any direct effect on the \n> optimization of subsequent queries. What probably happened is that \n> autoanalyze ran in the background while you ran that select, and \n> analyzed some of the partitions. Simply waiting a while would've had the \n> same effect.\n\nYeah. Also, the reason that a manual vacuum on icecream changes things\nyet again is that in 9.0 and up, we have a notion of summary stats\nacross the whole inheritance tree, but autoanalyze hasn't been taught to\ngather those. The manual command on the parent table does gather them,\nthough.\n\nSo what's happening here is that we suddenly have an accurate idea of\nthe size of the join product as a result of having inheritance summary\nstats to estimate with, and that drives the estimated cost of the merge\nor hash join down out of the stratosphere. The estimated cost of the\nnestloop goes down a lot too, but not as much.\n\nI experimented with a similar case here, and it seems like a lot of the\nremaining error in the nestloop estimate comes from this:\n\n>> -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.002..0.002 rows=0 loops=1)\n>> Filter: (dataset_id = 281)\n\nThe indexscan on the nonempty child partition is estimated at less than\n10 cost units, so this is a *large* fraction of what the planner sees as\nthe per-outer-row cost of a nestloop. And with more than 11000 rows on\nthe other side of the join, that discourages it from using the nestloop.\nIn reality of course this takes negligible time compared to examining\nthe child partition.\n\nNow why is the seqscan cost estimate so large, when actually the parent\nicecream table is totally empty? It's because the planner has been\ntaught to never believe that an empty table is empty. If memory serves,\nit's really estimating on an assumption that the table contains 10 pages\nand some corresponding number of rows. This is a reasonable defensive\nposture when dealing with ordinary tables, I think, since most likely\nif the catalogs say the table is empty that's just a leftover from when\nit was created. But maybe we should reconsider the heuristic for tables\nthat are members of inheritance trees --- particularly parents of\ninheritance trees.\n\nI was able to defeat the empty-table heuristic here by doing\n\nupdate pg_class set relpages = 1 where relname = 'icecream';\n\nand then I started getting much more realistic estimates in my test\ncase. (It still wanted to use a merge join initially, but after\nknocking down random_page_cost it went to the nestloop.) It would\nbe interesting to see what sorts of results Anish gets with that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Jul 2011 13:43:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with query optimizer when joining two partitioned tables "
},
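Tom's workaround above targets the icecream parent directly. As a hedged generalisation (mine, not part of the original suggestion), the same one-page trick could be applied to every inheritance parent that the catalogs still record as empty. Editing pg_class by hand requires superuser rights and should be done with care:

-- Mark every inheritance parent that still shows zero pages as having one
-- page, to defeat the empty-table heuristic described above.
UPDATE pg_class c
SET    relpages = 1
WHERE  c.relpages = 0
  AND  EXISTS (SELECT 1 FROM pg_inherits i WHERE i.inhparent = c.oid);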
{
"msg_contents": "Thanks Tom and Heikki! I really appreciate your help.\n\nI went ahead and loaded all the data. In the icream table, I now have ~175\npartitions, each with 4041866 records.\n\nThe data finished loading 12 hours ago, and I then ran the same query I gave\nyou guys, and it took 25 seconds since it used the wrong execution plan as\nexpected.\n\n HashAggregate (cost=27680.90..28045.81 rows=11228 width=41) (actual\ntime=24769.190..24817.618 rows=11028 loops=1)\n -> Hash Join (cost=1304.04..18901.88 rows=292634 width=41) (actual\ntime=3938.965..24688.718 rows=11028 loops=1)\n Hash Cond: (rc.widget_id = r.widget_id)\n -> Append (cost=0.00..12110.95 rows=292634 width=24) (actual\ntime=2854.925..22887.638 rows=309579 loops=1)\n -> Seq Scan on icecream rc (cost=0.00..25.60 rows=1\nwidth=24) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((widgetset_id = 5) AND (dataset_id = 283))\n -> Index Scan using icecream_part_283_widgetset_id_idx on\nicecream_part_283 rc (cost=0.00..12085.35 rows=292633 width=24) (actual\ntime=2854.915..2\n1784.769 rows=309579 loops=1)\n Index Cond: (widgetset_id = 5)\n Filter: (dataset_id = 283)\n -> Hash (cost=1163.69..1163.69 rows=11228 width=21) (actual\ntime=1083.704..1083.704 rows=11028 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 604kB\n -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual\ntime=528.216..1066.659 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75\nrows=1 width=48) (actual time=528.017..528.017 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=527.995..527.995 rows=0\nloops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using\nwidget_part_5_widget_widget_type_id_idx on widget_part_5 r\n (cost=0.00..1150.94 rows=11227 width=21) (actual time=0.191..512.847 rows=1\n1028 loops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n Total runtime: 24844.016 ms\n(21 rows)\n\n\nI then changed my enable* options to force it to use a nested loop:\nset enable_mergejoin=off;\nset enable_hashjoin=off;\nset enable_nestloop = on;\n\n\n HashAggregate (cost=460004.79..460369.70 rows=11228 width=41) (actual\ntime=298.014..341.822 rows=11028 loops=1)\n -> Nested Loop (cost=4.28..338747.85 rows=4041898 width=41) (actual\ntime=0.175..248.529 rows=11028 loops=1)\n Join Filter: (r.widget_id = rc.widget_id)\n -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual\ntime=0.053..42.532 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1\nwidth=48) (actual time=0.014..0.014 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx\non widget_part_5 r (cost=0.00..1150.94 rows=11227 width=21) (actual\ntime=0.032..18.410 rows=11028 lo\nops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n -> Append (cost=0.00..29.99 rows=6 width=24) (actual\ntime=0.009..0.012 rows=1 loops=11028)\n -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5\nwidth=24) (actual time=0.001..0.001 rows=0 loops=11028)\n Filter: (rc.dataset_id = 283)\n -> Index Scan using icecream_part_283_widget_id_idx on\nicecream_part_283 rc (cost=0.00..6.99 rows=1 width=24) (actual\ntime=0.004..0.006 rows=1 loo\nps=11028)\n Index Cond: 
(rc.widget_id = r.widget_id)\n Filter: (rc.dataset_id = 283)\n Total runtime: 361.180 ms\n(19 rows)\n\nThe query was nice and fast as expected.\n\nI then restored the enable* options back to default. The query was slow\nagain and taking around ~19 seconds. So, this gives us some information\nabout whether autoanalyze is running in the background:\n-I don't think waiting is making a difference. I loaded the data 12 hours\nago, and then ran the query and the query was very slow. I then ran the\nquery again but for a different dataset and it was also slow. It was only\nonce I changed the enable* parameters could I get my expected performance.\n-I would have tried select * from ice-cream again, but with 176 partitions\nthis query is no longer feasible. But, I do agree that select * from\nicecream is causing the statistics to be updated.\n\nSo, then I followed Tom's suggestion to defeat the empty-table heuristic.\n\nselect relpages from pg_class where relname = 'icecream';\n\n relpages\n----------\n 0\n(1 row)\n\nOk, so the planner thinks that the parent table is empty. I then ran:\nupdate pg_class set relpages = 1 where relname = 'icecream';\n\n\n HashAggregate (cost=201199.27..201564.18 rows=11228 width=41) (actual\ntime=277.195..304.620 rows=11028 loops=1)\n -> Nested Loop (cost=4.28..79942.45 rows=4041894 width=41) (actual\ntime=0.227..231.181 rows=11028 loops=1)\n Join Filter: (r.widget_id = rc.widget_id)\n -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual\ntime=0.125..40.834 rows=11028 loops=1)\n -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1\nwidth=48) (actual time=0.022..0.022 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5)\n Filter: (widget_type_id = 4)\n -> Bitmap Index Scan on widget_widgetset_id_idx\n (cost=0.00..4.28 rows=4 width=0) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (widgetset_id = 5)\n -> Index Scan using widget_part_5_widget_widget_type_id_idx\non widget_part_5 r (cost=0.00..1150.94 rows=11227 width=21) (actual\ntime=0.100..18.964 rows=11028 lo\nops=1)\n Index Cond: (widget_type_id = 4)\n Filter: (widgetset_id = 5)\n -> Append (cost=0.00..6.99 rows=2 width=24) (actual\ntime=0.008..0.012 rows=1 loops=11028)\n -> Seq Scan on icecream rc (cost=0.00..0.00 rows=1\nwidth=24) (actual time=0.001..0.001 rows=0 loops=11028)\n Filter: (rc.dataset_id = 283)\n -> Index Scan using icecream_part_283_widget_id_idx on\nicecream_part_283 rc (cost=0.00..6.99 rows=1 width=24) (actual\ntime=0.004..0.006 rows=1 loo\nps=11028)\n Index Cond: (rc.widget_id = r.widget_id)\n Filter: (rc.dataset_id = 283)\n Total runtime: 318.634 ms\n(19 rows)\n\n\nWow! that fixes it. Thanks you so much!!!! I've been struggling with this\nissue for 2-3 days. (Also, in the past, I've seen inconsistent performance\nwith this query, which may be the result of the planner sometimes choosing\nthe wrong plan, but I'll chase that down later).\n\nTom said: But maybe we should reconsider the heuristic for tables that are\nmembers of inheritance trees --- particularly parents of inheritance trees.\n\nI agree. I think postgres should get updated to take this into account. I\nshouldn't have to set the relpages to 1 for all the empty parent tables that\nI have partitioned. 
Should I file this as a bug/enhancement?\n\nAlso, do I need to worry about about autoanalyze/autovacuum setting back\nrelpages to zero for the parent icecream table?\n\nthanks!!!\nAnish\n\n\n\nOn Sat, Jul 9, 2011 at 10:43 AM, Tom Lane <[email protected]> wrote:\n\n> Heikki Linnakangas <[email protected]> writes:\n> > On 09.07.2011 00:36, Anish Kejariwal wrote:\n> >> My guess as to what happened:\n> >> -because the icecream parent table has zero records, the query optimizer\n> >> chooses the incorrect execution plan\n> >> -when I do select * from icecream, the optimizer now knows how many\n> records\n> >> are really in the icecream table, by knowing that the icecream table has\n> >> partitions.\n>\n> > \"select * from icecream\" won't have any direct effect on the\n> > optimization of subsequent queries. What probably happened is that\n> > autoanalyze ran in the background while you ran that select, and\n> > analyzed some of the partitions. Simply waiting a while would've had the\n> > same effect.\n>\n> Yeah. Also, the reason that a manual vacuum on icecream changes things\n> yet again is that in 9.0 and up, we have a notion of summary stats\n> across the whole inheritance tree, but autoanalyze hasn't been taught to\n> gather those. The manual command on the parent table does gather them,\n> though.\n>\n> So what's happening here is that we suddenly have an accurate idea of\n> the size of the join product as a result of having inheritance summary\n> stats to estimate with, and that drives the estimated cost of the merge\n> or hash join down out of the stratosphere. The estimated cost of the\n> nestloop goes down a lot too, but not as much.\n>\n> I experimented with a similar case here, and it seems like a lot of the\n> remaining error in the nestloop estimate comes from this:\n>\n> >> -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual\n> time=0.002..0.002 rows=0 loops=1)\n> >> Filter: (dataset_id = 281)\n>\n> The indexscan on the nonempty child partition is estimated at less than\n> 10 cost units, so this is a *large* fraction of what the planner sees as\n> the per-outer-row cost of a nestloop. And with more than 11000 rows on\n> the other side of the join, that discourages it from using the nestloop.\n> In reality of course this takes negligible time compared to examining\n> the child partition.\n>\n> Now why is the seqscan cost estimate so large, when actually the parent\n> icecream table is totally empty? It's because the planner has been\n> taught to never believe that an empty table is empty. If memory serves,\n> it's really estimating on an assumption that the table contains 10 pages\n> and some corresponding number of rows. This is a reasonable defensive\n> posture when dealing with ordinary tables, I think, since most likely\n> if the catalogs say the table is empty that's just a leftover from when\n> it was created. But maybe we should reconsider the heuristic for tables\n> that are members of inheritance trees --- particularly parents of\n> inheritance trees.\n>\n> I was able to defeat the empty-table heuristic here by doing\n>\n> update pg_class set relpages = 1 where relname = 'icecream';\n>\n> and then I started getting much more realistic estimates in my test\n> case. (It still wanted to use a merge join initially, but after\n> knocking down random_page_cost it went to the nestloop.) It would\n> be interesting to see what sorts of results Anish gets with that.\n>\n> regards, tom lane\n>\n\nThanks Tom and Heikki! 
I really appreciate your help.I went ahead and loaded all the data. In the icream table, I now have ~175 partitions, each with 4041866 records.\nThe data finished loading 12 hours ago, and I then ran the same query I gave you guys, and it took 25 seconds since it used the wrong execution plan as expected. HashAggregate (cost=27680.90..28045.81 rows=11228 width=41) (actual time=24769.190..24817.618 rows=11028 loops=1)\n -> Hash Join (cost=1304.04..18901.88 rows=292634 width=41) (actual time=3938.965..24688.718 rows=11028 loops=1) Hash Cond: (rc.widget_id = r.widget_id) -> Append (cost=0.00..12110.95 rows=292634 width=24) (actual time=2854.925..22887.638 rows=309579 loops=1)\n -> Seq Scan on icecream rc (cost=0.00..25.60 rows=1 width=24) (actual time=0.003..0.003 rows=0 loops=1) Filter: ((widgetset_id = 5) AND (dataset_id = 283))\n -> Index Scan using icecream_part_283_widgetset_id_idx on icecream_part_283 rc (cost=0.00..12085.35 rows=292633 width=24) (actual time=2854.915..21784.769 rows=309579 loops=1) Index Cond: (widgetset_id = 5)\n Filter: (dataset_id = 283) -> Hash (cost=1163.69..1163.69 rows=11228 width=21) (actual time=1083.704..1083.704 rows=11028 loops=1) Buckets: 2048 Batches: 1 Memory Usage: 604kB\n -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual time=528.216..1066.659 rows=11028 loops=1) -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=528.017..528.017 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4) -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=527.995..527.995 rows=0 loops=1)\n Index Cond: (widgetset_id = 5) -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1150.94 rows=11227 width=21) (actual time=0.191..512.847 rows=1\n1028 loops=1) Index Cond: (widget_type_id = 4) Filter: (widgetset_id = 5) Total runtime: 24844.016 ms(21 rows)\nI then changed my enable* options to force it to use a nested loop:set enable_mergejoin=off;set enable_hashjoin=off;set enable_nestloop = on;\n HashAggregate (cost=460004.79..460369.70 rows=11228 width=41) (actual time=298.014..341.822 rows=11028 loops=1) -> Nested Loop (cost=4.28..338747.85 rows=4041898 width=41) (actual time=0.175..248.529 rows=11028 loops=1)\n Join Filter: (r.widget_id = rc.widget_id) -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual time=0.053..42.532 rows=11028 loops=1) -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=0.014..0.014 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4) -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (widgetset_id = 5) -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1150.94 rows=11227 width=21) (actual time=0.032..18.410 rows=11028 lo\nops=1) Index Cond: (widget_type_id = 4) Filter: (widgetset_id = 5) -> Append (cost=0.00..29.99 rows=6 width=24) (actual time=0.009..0.012 rows=1 loops=11028)\n -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.001..0.001 rows=0 loops=11028) Filter: (rc.dataset_id = 283) -> Index Scan using icecream_part_283_widget_id_idx on icecream_part_283 rc (cost=0.00..6.99 rows=1 width=24) (actual time=0.004..0.006 rows=1 loo\nps=11028) Index Cond: (rc.widget_id = r.widget_id) Filter: (rc.dataset_id = 283) Total runtime: 361.180 ms(19 rows)\nThe query was nice and fast as expected.I then restored 
the enable* options back to default. The query was slow again and taking around ~19 seconds. So, this gives us some information about whether autoanalyze is running in the background:\n-I don't think waiting is making a difference. I loaded the data 12 hours ago, and then ran the query and the query was very slow. I then ran the query again but for a different dataset and it was also slow. It was only once I changed the enable* parameters could I get my expected performance. \n-I would have tried select * from ice-cream again, but with 176 partitions this query is no longer feasible. But, I do agree that select * from icecream is causing the statistics to be updated.\nSo, then I followed Tom's suggestion to defeat the empty-table heuristic.select relpages from pg_class where relname = 'icecream'; relpages ----------\n 0(1 row)Ok, so the planner thinks that the parent table is empty. I then ran:update pg_class set relpages = 1 where relname = 'icecream';\n HashAggregate (cost=201199.27..201564.18 rows=11228 width=41) (actual time=277.195..304.620 rows=11028 loops=1) -> Nested Loop (cost=4.28..79942.45 rows=4041894 width=41) (actual time=0.227..231.181 rows=11028 loops=1)\n Join Filter: (r.widget_id = rc.widget_id) -> Append (cost=4.28..1163.69 rows=11228 width=21) (actual time=0.125..40.834 rows=11028 loops=1) -> Bitmap Heap Scan on widget r (cost=4.28..12.75 rows=1 width=48) (actual time=0.022..0.022 rows=0 loops=1)\n Recheck Cond: (widgetset_id = 5) Filter: (widget_type_id = 4) -> Bitmap Index Scan on widget_widgetset_id_idx (cost=0.00..4.28 rows=4 width=0) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (widgetset_id = 5) -> Index Scan using widget_part_5_widget_widget_type_id_idx on widget_part_5 r (cost=0.00..1150.94 rows=11227 width=21) (actual time=0.100..18.964 rows=11028 lo\nops=1) Index Cond: (widget_type_id = 4) Filter: (widgetset_id = 5) -> Append (cost=0.00..6.99 rows=2 width=24) (actual time=0.008..0.012 rows=1 loops=11028)\n -> Seq Scan on icecream rc (cost=0.00..0.00 rows=1 width=24) (actual time=0.001..0.001 rows=0 loops=11028) Filter: (rc.dataset_id = 283) -> Index Scan using icecream_part_283_widget_id_idx on icecream_part_283 rc (cost=0.00..6.99 rows=1 width=24) (actual time=0.004..0.006 rows=1 loo\nps=11028) Index Cond: (rc.widget_id = r.widget_id) Filter: (rc.dataset_id = 283) Total runtime: 318.634 ms(19 rows)\nWow! that fixes it. Thanks you so much!!!! I've been struggling with this issue for 2-3 days. (Also, in the past, I've seen inconsistent performance with this query, which may be the result of the planner sometimes choosing the wrong plan, but I'll chase that down later).\nTom said: But maybe we should reconsider the heuristic for tables that are members of inheritance trees --- particularly parents of inheritance trees.I agree. I think postgres should get updated to take this into account. I shouldn't have to set the relpages to 1 for all the empty parent tables that I have partitioned. 
Should I file this as a bug/enhancement?\nAlso, do I need to worry about about autoanalyze/autovacuum setting back relpages to zero for the parent icecream table?thanks!!!Anish\nOn Sat, Jul 9, 2011 at 10:43 AM, Tom Lane <[email protected]> wrote:\nHeikki Linnakangas <[email protected]> writes:\n> On 09.07.2011 00:36, Anish Kejariwal wrote:\n>> My guess as to what happened:\n>> -because the icecream parent table has zero records, the query optimizer\n>> chooses the incorrect execution plan\n>> -when I do select * from icecream, the optimizer now knows how many records\n>> are really in the icecream table, by knowing that the icecream table has\n>> partitions.\n\n> \"select * from icecream\" won't have any direct effect on the\n> optimization of subsequent queries. What probably happened is that\n> autoanalyze ran in the background while you ran that select, and\n> analyzed some of the partitions. Simply waiting a while would've had the\n> same effect.\n\nYeah. Also, the reason that a manual vacuum on icecream changes things\nyet again is that in 9.0 and up, we have a notion of summary stats\nacross the whole inheritance tree, but autoanalyze hasn't been taught to\ngather those. The manual command on the parent table does gather them,\nthough.\n\nSo what's happening here is that we suddenly have an accurate idea of\nthe size of the join product as a result of having inheritance summary\nstats to estimate with, and that drives the estimated cost of the merge\nor hash join down out of the stratosphere. The estimated cost of the\nnestloop goes down a lot too, but not as much.\n\nI experimented with a similar case here, and it seems like a lot of the\nremaining error in the nestloop estimate comes from this:\n\n>> -> Seq Scan on icecream rc (cost=0.00..23.00 rows=5 width=24) (actual time=0.002..0.002 rows=0 loops=1)\n>> Filter: (dataset_id = 281)\n\nThe indexscan on the nonempty child partition is estimated at less than\n10 cost units, so this is a *large* fraction of what the planner sees as\nthe per-outer-row cost of a nestloop. And with more than 11000 rows on\nthe other side of the join, that discourages it from using the nestloop.\nIn reality of course this takes negligible time compared to examining\nthe child partition.\n\nNow why is the seqscan cost estimate so large, when actually the parent\nicecream table is totally empty? It's because the planner has been\ntaught to never believe that an empty table is empty. If memory serves,\nit's really estimating on an assumption that the table contains 10 pages\nand some corresponding number of rows. This is a reasonable defensive\nposture when dealing with ordinary tables, I think, since most likely\nif the catalogs say the table is empty that's just a leftover from when\nit was created. But maybe we should reconsider the heuristic for tables\nthat are members of inheritance trees --- particularly parents of\ninheritance trees.\n\nI was able to defeat the empty-table heuristic here by doing\n\nupdate pg_class set relpages = 1 where relname = 'icecream';\n\nand then I started getting much more realistic estimates in my test\ncase. (It still wanted to use a merge join initially, but after\nknocking down random_page_cost it went to the nestloop.) It would\nbe interesting to see what sorts of results Anish gets with that.\n\n regards, tom lane",
"msg_date": "Sun, 10 Jul 2011 08:33:59 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: issue with query optimizer when joining two partitioned tables"
},
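On the question of keeping the estimates good over time: Tom noted that in 9.0 autoanalyze does not gather the inheritance-wide summary statistics, but a manual command on the parent does. A periodic job along these lines is one way to keep them current; running it from cron and the choice of tables are assumptions, not advice given in the thread:

-- Re-gather inheritance-wide statistics for the partitioned parents
-- (autoanalyze does not collect these in 9.0, per the discussion above).
ANALYZE icecream;
ANALYZE widget;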
{
"msg_contents": "On 2011-07-09 18:43, Tom Lane wrote:\n> Heikki Linnakangas<[email protected]> writes:\n>> On 09.07.2011 00:36, Anish Kejariwal wrote:\n>>> My guess as to what happened:\n>>> -because the icecream parent table has zero records, the query optimizer\n>>> chooses the incorrect execution plan\n>>> -when I do select * from icecream, the optimizer now knows how many records\n>>> are really in the icecream table, by knowing that the icecream table has\n>>> partitions.\n>\n>> \"select * from icecream\" won't have any direct effect on the\n>> optimization of subsequent queries. What probably happened is that\n>> autoanalyze ran in the background while you ran that select, and\n>> analyzed some of the partitions. Simply waiting a while would've had the\n>> same effect.\n>\n> Yeah. Also, the reason that a manual vacuum on icecream changes things\n> yet again is that in 9.0 and up, we have a notion of summary stats\n> across the whole inheritance tree, but autoanalyze hasn't been taught to\n> gather those. The manual command on the parent table does gather them,\n> though.\n\nIs stats-gathering significantly more expensive than an FTS? Could an FTS\nupdate stats as a matter of course (or perhaps only if enough changes in table)?\n-- \nJeremy\n",
"msg_date": "Sun, 10 Jul 2011 16:46:25 +0100",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with query optimizer when joining two partitioned\n tables"
}
] |
[
{
"msg_contents": "\nHello,\n\nWe are running a PostgreSQL 8.4 database, with two tables containing a\nlot (> 1 million) moderatly small rows. It contains some btree indexes,\nand one of the two tables contains a gin full-text index.\n\nWe noticed that the autovacuum process tend to use a lot of memory,\nbumping the postgres process near 1Gb while it's running.\n\nI looked in the documentations, but I didn't find the information : do\nyou know how to estimate the memory required for the autovacuum if we\nincrease the number of rows ? Is it linear ? Logarithmic ? \n\nAlso, is there a way to reduce that memory usage ? Would running the\nautovacuum more frequently lower its memory usage ?\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sat, 09 Jul 2011 09:25:32 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory usage of auto-vacuum"
},
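The first thing the replies below ask about is the memory configuration in effect, which can be read directly from the server; a trivial check, nothing in it is specific to this installation:

-- Memory-related settings relevant to vacuum and analyze work.
SELECT name, setting, unit
FROM   pg_settings
WHERE  name IN ('maintenance_work_mem', 'shared_buffers', 'work_mem');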
{
"msg_contents": "On 9/07/2011 3:25 PM, Gael Le Mignot wrote:\n>\n> Hello,\n>\n> We are running a PostgreSQL 8.4 database, with two tables containing a\n> lot (> 1 million) moderatly small rows. It contains some btree indexes,\n> and one of the two tables contains a gin full-text index.\n>\n> We noticed that the autovacuum process tend to use a lot of memory,\n> bumping the postgres process near 1Gb while it's running.\n\nWhat is maintenance_work_mem set to in postgresql.conf?\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Sat, 09 Jul 2011 16:31:47 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Hi,\n\nOn Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:\n> [...]\n> We are running a PostgreSQL 8.4 database, with two tables containing a\n> lot (> 1 million) moderatly small rows. It contains some btree indexes,\n> and one of the two tables contains a gin full-text index.\n> \n> We noticed that the autovacuum process tend to use a lot of memory,\n> bumping the postgres process near 1Gb while it's running.\n> \n\nWell, it could be its own memory (see maintenance_work_mem), or shared\nmemory. So, it's hard to say if it's really an issue or not.\n\nBTW, how much memory do you have on this server? what values are used\nfor shared_buffers and maintenance_work_mem?\n\n> I looked in the documentations, but I didn't find the information : do\n> you know how to estimate the memory required for the autovacuum if we\n> increase the number of rows ? Is it linear ? Logarithmic ? \n> \n\nIt should use up to maintenance_work_mem. Depends on how much memory you\nset on this parameter.\n\n> Also, is there a way to reduce that memory usage ?\n\nReduce maintenance_work_mem. Of course, if you do that, VACUUM could\ntake a lot longer to execute.\n\n> Would running the\n> autovacuum more frequently lower its memory usage ?\n> \n\nYes.\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Sat, 09 Jul 2011 10:33:03 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
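A sketch of what "running the autovacuum more frequently" can look like in 8.4, using per-table storage parameters; the table name and the thresholds are placeholders, not values taken from the thread:

-- Let autovacuum/autoanalyze trigger after roughly 2% of the table has
-- changed, instead of the 20%/10% defaults.
ALTER TABLE big_table
  SET (autovacuum_vacuum_scale_factor  = 0.02,
       autovacuum_analyze_scale_factor = 0.02);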
{
"msg_contents": "Hello Craig!\n\nSat, 09 Jul 2011 16:31:47 +0800, you wrote: \n\n > On 9/07/2011 3:25 PM, Gael Le Mignot wrote:\n >> \n >> Hello,\n >> \n >> We are running a PostgreSQL 8.4 database, with two tables containing a\n >> lot (> 1 million) moderatly small rows. It contains some btree indexes,\n >> and one of the two tables contains a gin full-text index.\n >> \n >> We noticed that the autovacuum process tend to use a lot of memory,\n >> bumping the postgres process near 1Gb while it's running.\n\n > What is maintenance_work_mem set to in postgresql.conf?\n\nIt's the debian default, which is 16Mb. Do you think we should reduce it ?\n\nI also forgot to add something which may be important : there are a lot\nof INSERT (and SELECT) in those tables, but very few UPDATE/DELETE.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sat, 09 Jul 2011 10:39:30 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Hello Guillaume!\n\nSat, 09 Jul 2011 10:33:03 +0200, you wrote: \n\n > Hi,\n > On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:\n >> [...]\n >> We are running a PostgreSQL 8.4 database, with two tables containing a\n >> lot (> 1 million) moderatly small rows. It contains some btree indexes,\n >> and one of the two tables contains a gin full-text index.\n >> \n >> We noticed that the autovacuum process tend to use a lot of memory,\n >> bumping the postgres process near 1Gb while it's running.\n >> \n\n > Well, it could be its own memory (see maintenance_work_mem), or shared\n > memory. So, it's hard to say if it's really an issue or not.\n\n > BTW, how much memory do you have on this server? what values are used\n > for shared_buffers and maintenance_work_mem?\n\nmaintenance_work_mem is at 16Mb, shared_buffers at 24Mb.\n\nThe server currently has 2Gb, we'll add more to it (it's a VM), but we\nwould like to be able to make an estimate on how much memory it'll need\nfor a given rate of INSERT into the table, so we can estimate future\ncosts.\n\n >> I looked in the documentations, but I didn't find the information : do\n >> you know how to estimate the memory required for the autovacuum if we\n >> increase the number of rows ? Is it linear ? Logarithmic ? \n >> \n\n > It should use up to maintenance_work_mem. Depends on how much memory you\n > set on this parameter.\n\nSo, it shouldn't depend on data size ? Is there a fixed multiplicative\nfactor between maintenance_work_mem and the memory actually used ?\n\n >> Also, is there a way to reduce that memory usage ?\n\n > Reduce maintenance_work_mem. Of course, if you do that, VACUUM could\n > take a lot longer to execute.\n\n >> Would running the autovacuum more frequently lower its memory usage ?\n >> \n\n > Yes.\n\nThanks, we'll try that.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sat, 09 Jul 2011 10:43:23 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "On Sat, 2011-07-09 at 10:43 +0200, Gael Le Mignot wrote:\n> Hello Guillaume!\n> \n> Sat, 09 Jul 2011 10:33:03 +0200, you wrote: \n> \n> > Hi,\n> > On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:\n> >> [...]\n> >> We are running a PostgreSQL 8.4 database, with two tables containing a\n> >> lot (> 1 million) moderatly small rows. It contains some btree indexes,\n> >> and one of the two tables contains a gin full-text index.\n> >> \n> >> We noticed that the autovacuum process tend to use a lot of memory,\n> >> bumping the postgres process near 1Gb while it's running.\n> >> \n> \n> > Well, it could be its own memory (see maintenance_work_mem), or shared\n> > memory. So, it's hard to say if it's really an issue or not.\n> \n> > BTW, how much memory do you have on this server? what values are used\n> > for shared_buffers and maintenance_work_mem?\n> \n> maintenance_work_mem is at 16Mb, shared_buffers at 24Mb.\n> \n\nIOW, default values.\n\n> The server currently has 2Gb, we'll add more to it (it's a VM), but we\n> would like to be able to make an estimate on how much memory it'll need\n> for a given rate of INSERT into the table, so we can estimate future\n> costs.\n> \n> >> I looked in the documentations, but I didn't find the information : do\n> >> you know how to estimate the memory required for the autovacuum if we\n> >> increase the number of rows ? Is it linear ? Logarithmic ? \n> >> \n> \n> > It should use up to maintenance_work_mem. Depends on how much memory you\n> > set on this parameter.\n> \n> So, it shouldn't depend on data size ?\n\nNope, it shouldn't.\n\n> Is there a fixed multiplicative\n> factor between maintenance_work_mem and the memory actually used ?\n> \n\n1 :)\n\n> >> Also, is there a way to reduce that memory usage ?\n> \n> > Reduce maintenance_work_mem. Of course, if you do that, VACUUM could\n> > take a lot longer to execute.\n> \n> >> Would running the autovacuum more frequently lower its memory usage ?\n> >> \n> \n> > Yes.\n> \n> Thanks, we'll try that.\n> \n\nI don't quite understand how you can get up to 1GB used by your process.\nAccording to your configuration, and unless I'm wrong, it shouldn't take\nmore than 40MB. Perhaps a bit more, but not 1GB. So, how did you find\nthis number?\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Sat, 09 Jul 2011 10:53:14 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Hello Guillaume!\n\nSat, 09 Jul 2011 10:53:14 +0200, you wrote: \n\n > I don't quite understand how you can get up to 1GB used by your process.\n > According to your configuration, and unless I'm wrong, it shouldn't take\n > more than 40MB. Perhaps a bit more, but not 1GB. So, how did you find\n > this number?\n\nLooking at \"top\" we saw the postgres process growing and growing and\nthen shrinking back, and doing a \"select * from pg_stat_activity;\" in\nparallel of the growing we found only the \"vacuum analyze\" query running. \n\nBut maybe we drawn the conclusion too quickly, I'll try disabling the\nauto vacuum to see if we really get rid of the problem doing it.\n\nThanks for your answers.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sat, 09 Jul 2011 11:00:44 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
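To tie the process seen in top back to a specific backend, autovacuum workers do show up in pg_stat_activity; a sketch for 8.4, where the columns are still called procpid and current_query (later releases renamed them to pid and query):

-- Autovacuum workers report themselves as 'autovacuum: VACUUM ...' here.
SELECT procpid, datname, current_query, query_start
FROM   pg_stat_activity
WHERE  current_query LIKE 'autovacuum:%';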
{
"msg_contents": "On Sat, 2011-07-09 at 11:00 +0200, Gael Le Mignot wrote:\n> Hello Guillaume!\n> \n> Sat, 09 Jul 2011 10:53:14 +0200, you wrote: \n> \n> > I don't quite understand how you can get up to 1GB used by your process.\n> > According to your configuration, and unless I'm wrong, it shouldn't take\n> > more than 40MB. Perhaps a bit more, but not 1GB. So, how did you find\n> > this number?\n> \n> Looking at \"top\" we saw the postgres process growing and growing and\n> then shrinking back, and doing a \"select * from pg_stat_activity;\" in\n> parallel of the growing we found only the \"vacuum analyze\" query running. \n> \n\nThere is not only one postgres process. So you first need to be sure\nthat it's the one that executes the autovacuum.\n\n> But maybe we drawn the conclusion too quickly, I'll try disabling the\n> auto vacuum to see if we really get rid of the problem doing it.\n> \n\nDisabling the autovacuum is usually a bad idea. You'll have to execute\nVACUUM/ANALYZE via cron, which could get hard to configure.\n\nBTW, what's your PostgreSQL release? I assume at least 8.3 since you're\nusing FTS?\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Sat, 09 Jul 2011 11:06:16 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Hello Guillaume!\n\nSat, 09 Jul 2011 11:06:16 +0200, you wrote: \n\n > On Sat, 2011-07-09 at 11:00 +0200, Gael Le Mignot wrote:\n >> Hello Guillaume!\n >> \n >> Sat, 09 Jul 2011 10:53:14 +0200, you wrote: \n >> \n >> > I don't quite understand how you can get up to 1GB used by your process.\n >> > According to your configuration, and unless I'm wrong, it shouldn't take\n >> > more than 40MB. Perhaps a bit more, but not 1GB. So, how did you find\n >> > this number?\n >> \n >> Looking at \"top\" we saw the postgres process growing and growing and\n >> then shrinking back, and doing a \"select * from pg_stat_activity;\" in\n >> parallel of the growing we found only the \"vacuum analyze\" query running. \n >> \n\n > There is not only one postgres process. So you first need to be sure\n > that it's the one that executes the autovacuum.\n\nShouldn't \"pg_stat_activity\" contain the current jobs of all the processes ?\n\n >> But maybe we drawn the conclusion too quickly, I'll try disabling the\n >> auto vacuum to see if we really get rid of the problem doing it.\n >> \n\n > Disabling the autovacuum is usually a bad idea. You'll have to execute\n > VACUUM/ANALYZE via cron, which could get hard to configure.\n\nOh, yes, sure, I meant as a test to know if it's the vacuum or not, not\nto definitely disable it.\n\n > BTW, what's your PostgreSQL release? I assume at least 8.3 since you're\n > using FTS?\n\nIt's 8.4 from Debian Squeeze.\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sat, 09 Jul 2011 11:27:00 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "On 9/07/2011 4:43 PM, Gael Le Mignot wrote:\n\n> maintenance_work_mem is at 16Mb, shared_buffers at 24Mb.\n\nWoah, what? And you're hitting a gigabyte for autovacuum? Yikes. That \njust doesn't sound right.\n\nAre you using any contrib modules? If so, which ones?\n\nAre you able to post your DDL?\n\nHow big is the database? (Not that it should matter).\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Sat, 09 Jul 2011 20:15:11 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Gael Le Mignot <[email protected]> writes:\n> Sat, 09 Jul 2011 11:06:16 +0200, you wrote: \n>>> BTW, what's your PostgreSQL release? I assume at least 8.3 since you're\n>>> using FTS?\n\n> It's 8.4 from Debian Squeeze.\n\n8.4.what?\n\nIn particular I'm wondering if you need this 8.4.6 fix:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=f0e4331d04fa007830666c5baa2c3e37cce9c3ff\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Jul 2011 12:23:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage of auto-vacuum "
},
{
"msg_contents": "Hello Tom!\n\nSat, 09 Jul 2011 12:23:18 -0400, you wrote: \n\n > Gael Le Mignot <[email protected]> writes:\n >> Sat, 09 Jul 2011 11:06:16 +0200, you wrote: \n >>>> BTW, what's your PostgreSQL release? I assume at least 8.3 since you're\n >>>> using FTS?\n\n >> It's 8.4 from Debian Squeeze.\n\n > 8.4.what?\n\nIt's 8.4.8-0squeeze1\n\n > In particular I'm wondering if you need this 8.4.6 fix:\n > http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=f0e4331d04fa007830666c5baa2c3e37cce9c3ff\n\nThanks for the tip, it very well could have been that, but it's 8.4.8, I\nchecked the concerned source file and the patch is there, and I didn't\nfind any Debian-specific patch that could collide with it.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Sun, 10 Jul 2011 12:06:38 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
},
{
"msg_contents": "Hello,\n\nHere is an update on my problem :\n\n- the problem was caused by \"VACUUM ANALYZE\", but by a plain \"VACUUM\" ;\n\n- it was exactly the same with manual and automatic \"VACUUM ANALYZE\" ;\n\n- it was caused by a GIN index on a tsvector, using a very high (10000)\n statistics target.\n\nSetting back the statistics to 1000 reduced the amount of RAM used to a\nvery reasonable amount.\n\nThe value of 10000 is indeed not very realistic, but I think that would\ndeserve some mention on the documentation, if possible with an estimate\nof the maximal memory usage for a given statistics target and table\nsize.\n\nDo you think it's a good idea, and if so, if that estimate can be\nreasonably made ?\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Tue, 12 Jul 2011 17:44:08 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage of auto-vacuum"
}
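For readers who hit the same problem: the per-column statistics target is what drives ANALYZE's memory use here, and it can be checked and lowered per column. A minimal sketch, assuming a hypothetical table docs with a tsvector column body_tsv (names are illustrative, not from the thread):

-- Hypothetical names: docs / body_tsv. attstattarget = -1 means the column
-- falls back to default_statistics_target.
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'docs'::regclass
  AND attname = 'body_tsv';

-- Lower the per-column target so ANALYZE samples less, then re-analyze.
ALTER TABLE docs ALTER COLUMN body_tsv SET STATISTICS 1000;
ANALYZE docs;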
] |
[
{
"msg_contents": "Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retentive the query total time \nin millisecond \n\nif i connect with postgresql from java using JDBC \ni need the query total time necessaryto use it in my project \ni don't want run explian just query \nthank's\nDear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retentive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessaryto use it in my project \ni don't want run explian just query \nthank's",
"msg_date": "Sun, 10 Jul 2011 04:41:45 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "query total time im milliseconds"
},
{
"msg_contents": "On Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal <[email protected]> wrote:\n\n> Dear all ,\n> could any one help me?\n> when i use pgadmin to exceute a query it shows the total time for query ..\n> such as\n> (select * form table_name.........)query total time is for example 100 ms\n> i want to know the command that can retentive the query total time\n> in millisecond\n> if i connect with postgresql from java using JDBC\n> i need the query total time necessaryto use it in my project\n> i don't want run explian just query\n> thank's\n>\n\n\nlong startTime = System.currentTimeMillis();\n//execute query\nlong executionTime = System.currentTimeMillis() - startTime;\n\nOn Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal <[email protected]> wrote:\nDear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retentive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessaryto use it in my project \ni don't want run explian just query \nthank'slong startTime = System.currentTimeMillis();//execute querylong executionTime = System.currentTimeMillis() - startTime;",
"msg_date": "Sun, 10 Jul 2011 10:51:52 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query total time im milliseconds"
},
{
"msg_contents": "----- Forwarded Message ----\nFrom: Radhya sahal <[email protected]>\nTo: Samuel Gendler <[email protected]>\nSent: Sun, July 10, 2011 11:25:46 AM\nSubject: Re: [PERFORM] query total time im milliseconds\n\n\nThank's\n\nlong startTime = System.currentTimeMillis();\n//execute query\nlong executionTime = System.currentTimeMillis() - startTime; \n\nthis executionTime is not an actual time for query ,\nit includes time for access to postgresql server\n using JDBC\n\n\n\n________________________________\nFrom: Samuel Gendler <[email protected]>\nTo: Radhya sahal <[email protected]>\nCc: pgsql-performance group <[email protected]>\nSent: Sun, July 10, 2011 10:51:52 AM\nSubject: Re: [PERFORM] query total time im milliseconds\n\n\n\n\nOn Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal <[email protected]> wrote:\n\nDear all ,\n>could any one help me?\n>when i use pgadmin to exceute a query it shows the total time for query ..\n>such as \n>(select * form table_name.........)query total time is for example 100 ms\n>i want to know the command that can retentive the query total time \n>in millisecond \n>\n>if i connect with postgresql from java using JDBC \n>i need the query total time necessaryto use it in my project \n>i don't want run explian just query \n>thank's\n\n\nlong startTime = System.currentTimeMillis();\n//execute query\nlong executionTime = System.currentTimeMillis() - startTime; \n\n\n\n----- Forwarded Message ----From: Radhya sahal <[email protected]>To: Samuel Gendler <[email protected]>Sent: Sun, July 10, 2011 11:25:46 AMSubject: Re: [PERFORM] query total time im milliseconds\n\nThank's\n\nlong startTime = System.currentTimeMillis();\n//execute query\nlong executionTime = System.currentTimeMillis() - startTime; \nthis executionTime is not an actual time for query ,\nit includes time for access to postgresql server\n using JDBC\n\n\n\nFrom: Samuel Gendler <[email protected]>To: Radhya sahal <[email protected]>Cc: pgsql-performance group <[email protected]>Sent: Sun, July 10, 2011 10:51:52 AMSubject: Re: [PERFORM] query total time im milliseconds\nOn Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal <[email protected]> wrote:\n\n\n\nDear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retentive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessaryto use it in my project \ni don't want run explian just query \nthank's\n\n\nlong startTime = System.currentTimeMillis();\n//execute query\nlong executionTime = System.currentTimeMillis() - startTime;",
"msg_date": "Sun, 10 Jul 2011 11:26:11 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: query total time im milliseconds"
},
{
"msg_contents": "On 11/07/2011 2:26 AM, Radhya sahal wrote:\n\n> long startTime = System.currentTimeMillis();\n> //execute query\n> long executionTime = System.currentTimeMillis() - startTime;\n>\n> this executionTime is not an actual time for query ,\n> it includes time for access to postgresql server\n> using JDBC\n\n\nThe pg_stat_statements contrib module in PostgreSQL 8.4 and above might \nbe able to do what you want. See the documentation here:\n\n http://www.postgresql.org/docs/9.0/static/pgstatstatements.html\n\nI don't think the core PostgreSQL server currently retains information \nabout how long the last query executed ran for. I thought the PL/PgSQL \n\"GET DIAGNOSTICS\" statement might be able to find out how long the last \nquery run within that PL/PgSQL function took, but it can't. So I think \nyou're out of luck for now.\n\nPostgreSQL *CAN* log query durations to the server log, it just doesn't \n(AFAIK) offer any way to find out how long the last query took from SQL \nand doesn't keep that information after the statement finishes.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Mon, 11 Jul 2011 06:54:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: query total time im milliseconds"
}
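A minimal sketch of the pg_stat_statements route mentioned above, assuming the contrib module is installed and preloaded; the column names follow the 8.4/9.0 version of the module, and the unit of total_time depends on the server release, so treat this as a starting point rather than the exact API:

-- postgresql.conf: shared_preload_libraries = 'pg_stat_statements'
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

-- Reset the counters when starting a new measurement window.
SELECT pg_stat_statements_reset();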
] |
[
{
"msg_contents": "Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \n\nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real thim for the query \n\nthank's\n\n\n\n\n Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real thim for the query \nthank's",
"msg_date": "Sun, 10 Jul 2011 06:07:13 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "query total time im milliseconds"
}
] |
[
{
"msg_contents": "Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \n\nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real total time for the \nquery \n\nthank's\n \n\n\n\n\n\n\n Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real total time for the query \nthank's",
"msg_date": "Sun, 10 Jul 2011 06:08:01 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "query total time im milliseconds"
},
{
"msg_contents": "On 10/07/2011 9:08 PM, Radhya sahal wrote:\n>\n> Dear all ,\n> could any one help me?\n> when i use pgadmin to exceute a query it shows the total time for query ..\n> such as\n> (select * form table_name.........)query total time is for example 100 ms\n> i want to know the command that can retetive the query total time in\n> millisecond\n> if i connect with postgresql from java using JDBC\n> i need the query total time necessary to use it in my project\n> i don't want run explian,explain gives estimated i want real total time\n> for the query\n\nRecord the value of System.currentTimeMillis() or System.nanoTime(). Run \nyour query. Subtract the recorded value from the current value of \nSystem.currentTimeMillis() or System.nanoTime() to get the execution \ntime including how long the query took to transfer data from the server \nto your client.\n\nI'm not aware of a way to find out how long the query took to execute on \nthe server - excluding data transfer to the client - without using \nEXPLAIN ANALYZE.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Mon, 11 Jul 2011 06:20:45 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query total time im milliseconds"
}
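A small sketch of the server-side duration logging mentioned above; it records each statement's real runtime in the server log instead of returning it to the client. Changing the setting per session normally requires superuser rights, so the postgresql.conf route is the usual one:

-- In postgresql.conf (then reload):
--   log_min_duration_statement = 0   -- log every statement with its duration
--   log_duration = on                -- log durations only, without statement text
SET log_min_duration_statement = 0;  -- per-session variant, superuser only
SHOW log_min_duration_statement;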
] |
[
{
"msg_contents": "Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \n\nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real total time for the \nquery \n\nthank's\n \n\n\n\n\n\n\n\n\n\n Dear all ,\ncould any one help me?\nwhen i use pgadmin to exceute a query it shows the total time for query ..\nsuch as \n(select * form table_name.........)query total time is for example 100 ms\ni want to know the command that can retetive the query total time in millisecond \nif i connect with postgresql from java using JDBC \ni need the query total time necessary to use it in my project \ni don't want run explian,explain gives estimated i want real total time for the query \nthank's",
"msg_date": "Sun, 10 Jul 2011 09:44:30 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "query total time im milliseconds"
}
] |
[
{
"msg_contents": "I know this has been discussed various times...\n\nWe are maintaining a large multi tenant database where *all* tables have \na tenant-id and all indexes and PKs lead with the tenant-id.\nStatistics and counts for the all other columns are only really \nmeaningful within the context of the tenant they belong to.\n\nThere appear to be five options for me:\n1. Using single column indexes on all interesting columns and rely on \nPostgreSQLs bitmap indexes to combine them (which are pretty cool).\n2. Use multi column indexes and accept that sometimes Postgres pick the \nwrong index (because a non-tenant-id\ncolumn might seem highly selective over the table, but it is not for a \nparticular tenant - or vice versa).\n3. Use a functional index that combines multiple columns and only query \nvia these, that causes statistics\ngathering for the expression.\nI.e. create index i on t((tenantid||column1)) and SELECT ... FROM t \nWHERE tenantid||column1 = '...'\n4. Play with n_distinct and/or set the statistics for the inner columns \nto some fixed values that lead to the plans that we want.\n5. Have a completely different schema and maybe a database per tenant.\n\nCurrently we use Oracle and query hinting, but I do not like that \npractice at all (and Postgres does not have hints anyway).\nAre there any other options?\n\n#1 would be the simplest, but I am concerned about the overhead, both \nmaintaining two indexes and building the bitmap during queries - for \nevery query.\n\nI don't think #2 is actually an option. We have some tenants with many \n(sometimes 100s) millions of rows per table,\nand picking the wrong index would be disastrous.\n\nCould something like #3 be generally added to Postgres? I.e. if there is \na multi column index keep combined statistics for\nthe involved columns. Of course in that case is it no longer possible to \nquery the index by prefix.\n#3 also seems expensive as the expression needs to be evaluated for each \nchanged row.\n\nStill trying #4. I guess it involves setting the stat target for the \ninner columns to 0 and then inserting my own records into\npg_statistic. Probably only setting n_distinct, i.e. set it \"low\" if the \ninner column is not selective within the context of a tenant and \"high\" \notherwise.\n\nFor various reasons #5 is also not an option.\n\nAnd of course the same set of questions comes up with joins.\n\nThanks.\n\n-- Lars\n\n",
"msg_date": "Sun, 10 Jul 2011 14:16:06 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Statistics and Multi-Column indexes"
},
{
"msg_contents": "On Sun, Jul 10, 2011 at 2:16 PM, lars <[email protected]> wrote:\n\n> I know this has been discussed various times...\n>\n> We are maintaining a large multi tenant database where *all* tables have a\n> tenant-id and all indexes and PKs lead with the tenant-id.\n> Statistics and counts for the all other columns are only really meaningful\n> within the context of the tenant they belong to.\n>\n> There appear to be five options for me:\n> 1. Using single column indexes on all interesting columns and rely on\n> PostgreSQLs bitmap indexes to combine them (which are pretty cool).\n> 2. Use multi column indexes and accept that sometimes Postgres pick the\n> wrong index (because a non-tenant-id\n> column might seem highly selective over the table, but it is not for a\n> particular tenant - or vice versa).\n> 3. Use a functional index that combines multiple columns and only query via\n> these, that causes statistics\n> gathering for the expression.\n> I.e. create index i on t((tenantid||column1)) and SELECT ... FROM t WHERE\n> tenantid||column1 = '...'\n> 4. Play with n_distinct and/or set the statistics for the inner columns to\n> some fixed values that lead to the plans that we want.\n> 5. Have a completely different schema and maybe a database per tenant.\n>\n>\nWhat about partitioning tables by tenant id and then maintaining indexes on\neach partition independent of tenant id, since constraint exclusion should\nhandle filtering by tenant id for you. That seems like a potentially more\ntolerable variant of #5 How many tenants are we talking about? I gather\npartitioning starts to become problematic when the number of partitions gets\nlarge.\n\nOn Sun, Jul 10, 2011 at 2:16 PM, lars <[email protected]> wrote:\nI know this has been discussed various times...\n\nWe are maintaining a large multi tenant database where *all* tables have a tenant-id and all indexes and PKs lead with the tenant-id.\nStatistics and counts for the all other columns are only really meaningful within the context of the tenant they belong to.\n\nThere appear to be five options for me:\n1. Using single column indexes on all interesting columns and rely on PostgreSQLs bitmap indexes to combine them (which are pretty cool).\n2. Use multi column indexes and accept that sometimes Postgres pick the wrong index (because a non-tenant-id\ncolumn might seem highly selective over the table, but it is not for a particular tenant - or vice versa).\n3. Use a functional index that combines multiple columns and only query via these, that causes statistics\ngathering for the expression.\nI.e. create index i on t((tenantid||column1)) and SELECT ... FROM t WHERE tenantid||column1 = '...'\n4. Play with n_distinct and/or set the statistics for the inner columns to some fixed values that lead to the plans that we want.\n5. Have a completely different schema and maybe a database per tenant.\n What about partitioning tables by tenant id and then maintaining indexes on each partition independent of tenant id, since constraint exclusion should handle filtering by tenant id for you. That seems like a potentially more tolerable variant of #5 How many tenants are we talking about? I gather partitioning starts to become problematic when the number of partitions gets large.",
"msg_date": "Sun, 10 Jul 2011 14:31:25 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics and Multi-Column indexes"
},
{
"msg_contents": "lars <[email protected]> wrote:\n \n> We are maintaining a large multi tenant database where *all*\n> tables have a tenant-id and all indexes and PKs lead with the\n> tenant-id. Statistics and counts for the all other columns are\n> only really meaningful within the context of the tenant they\n> belong to.\n> \n> There appear to be five options for me:\n> 1. Using single column indexes on all interesting columns and rely\n> on PostgreSQLs bitmap indexes to combine them (which are pretty\n> cool).\n \nThose are cool -- when programmers are skeptical that they should\njust say *what* they want and let the database figure out how to get\nit, I like to point out that this option is available to the\nplanner, but not to application programming. Of course, there are a\ngreat many other reason *I* find more compelling, but this one tends\nto impress application programmers.\n \nAssuming you keep the primary key as a multi-column index, this\nseems like a good place to start.\n \n> 2. Use multi column indexes and accept that sometimes Postgres\n> pick the wrong index (because a non-tenant-id column might seem\n> highly selective over the table, but it is not for a particular\n> tenant - or vice versa).\n \nIf you have a lot of queries which access data based on matching\nsome set of columns, an occasional multicolumn index in addition to\nthe individual column index may be worth it.\n \nYou might want to avoid prepared statements, since these are planned\nfor the general case and can fall down badly for the extremes.\n \n> 3. Use a functional index that combines multiple columns and only\n> query via these, that causes statistics gathering for the\n> expression. I.e. create index i on t((tenantid||column1)) and\n> SELECT ... FROM t WHERE tenantid||column1 = '...'\n \nI would hold off on that until I saw evidence of a specific need.\n \n> 4. Play with n_distinct and/or set the statistics for the inner\n> columns to some fixed values that lead to the plans that we want.\n \nTry not to think in terms of \"plans we want\", but in terms of\nmodeling your costs so that, given your tables and indexes, the\nPostgreSQL planner will do a good job of picking a fast plan. You\nnormally need to tweak a few of the costing factors to match the\nreality of your server and load.\n \n> 5. Have a completely different schema and maybe a database per\n> tenant.\n \n> Are there any other options?\n \nIf most queries operate within a single tenant and you have less\nthan 100 tenants, you might think about partitioned tables. Beyond\n100, or if most queries need to look at many partitions, it becomes\nproblematic.\n \n> I don't think #2 is actually an option. We have some tenants with\n> many (sometimes 100s) millions of rows per table, and picking the\n> wrong index would be disastrous.\n \nYou can always drop counter-productive or unused indexes. I think\nit's best to look at indexes, as much as possible, as database\ntuning options, rather than something with any semantic meaning. If\nyou're doing things right, an index will never change the result of\nany query, so you are free to try different tunings and see what's\nfastest with your schema and your production load.\n \nWe have tables with 100s of millions of rows where everything is\nindexed by county number. The biggest county has about 20% of the\nrows; the smallest about 0.05%. We generally do pretty well with\n(1) and (2). We do occasionally find it useful to create an index\nwith a WHERE clause, though. 
You might want to consider those.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jul 2011 15:32:05 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics and Multi-Column indexes"
},
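To make the indexing options above concrete, here is a minimal sketch using a hypothetical table t(tenant_id, col1) rather than the poster's real schema; the partial index is the "index with a WHERE clause" idea from the previous message, and tenant 42 is made up:

-- Option 1: single-column indexes the planner can combine via bitmap AND.
CREATE INDEX t_tenant_idx ON t (tenant_id);
CREATE INDEX t_col1_idx ON t (col1);

-- Option 2: a multicolumn index led by tenant_id.
CREATE INDEX t_tenant_col1_idx ON t (tenant_id, col1);

-- Partial index tuned for one unusually large tenant.
CREATE INDEX t_col1_tenant42_idx ON t (col1) WHERE tenant_id = 42;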
{
"msg_contents": "On 07/10/2011 02:31 PM, Samuel Gendler wrote:\n> What about partitioning tables by tenant id and then maintaining \n> indexes on each partition independent of tenant id, since constraint \n> exclusion should handle filtering by tenant id for you. That seems \n> like a potentially more tolerable variant of #5 How many tenants are \n> we talking about? I gather partitioning starts to become problematic \n> when the number of partitions gets large.\n>\nI thought I had replied... Apparently I didn't.\n\nThe database can grow in two dimensions: The number of tenants and the \nnumber of rows per tenant.\nWe have many tenants with relatively little data and a few with a lot of \ndata. So the number of tenants\nis known ahead of time and might be 1000's.\n\n-- Lars\n\n",
"msg_date": "Fri, 15 Jul 2011 14:40:53 -0700",
"msg_from": "lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics and Multi-Column indexes"
}
] |
[
{
"msg_contents": "Hello,\nI'm a postgres newbie and am wondering what's the best way to do this.\n\nI am gathering some data and will be inserting to a table once daily.\nThe table is quite simple but I want the updates to be as efficient as\npossible since\nthis db is part of a big data project.\n\nSay I have a table with these columns:\n| Date | Hostname | DayVal | WeekAvg | MonthAvg |\n\nWhen I insert a new row I have the values for Date, Hostname, DayVal.\nIs it possible to define the table is such a way that the WeekAvg and\nMonthAvg\nare automatically updated as follows?\n WeekAvg = current rows DayVal plus the sum of DayVal for the\nprevious 6 rows.\n MonthAvg = current row's DayVal plus the sum of DayVal for the\nprevious 29 rows.\n\nShould I place the logic in a Trigger or in a Function?\nDoes someone have an example or a link showing how I could set this\nup?\n\nRegards,\nAlan\n",
"msg_date": "Tue, 12 Jul 2011 00:41:03 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger or Function"
},
{
"msg_contents": "On Tue, Jul 12, 2011 at 9:41 AM, alan <[email protected]> wrote:\n> Hello,\n> I'm a postgres newbie and am wondering what's the best way to do this.\n>\n> I am gathering some data and will be inserting to a table once daily.\n> The table is quite simple but I want the updates to be as efficient as\n> possible since\n> this db is part of a big data project.\n>\n> Say I have a table with these columns:\n> | Date | Hostname | DayVal | WeekAvg | MonthAvg |\n>\n> When I insert a new row I have the values for Date, Hostname, DayVal.\n> Is it possible to define the table is such a way that the WeekAvg and\n> MonthAvg\n> are automatically updated as follows?\n> WeekAvg = current rows DayVal plus the sum of DayVal for the\n> previous 6 rows.\n> MonthAvg = current row's DayVal plus the sum of DayVal for the\n> previous 29 rows.\n>\n> Should I place the logic in a Trigger or in a Function?\n> Does someone have an example or a link showing how I could set this\n> up?\n\nIMHO that design does not fit the relational model well because you\nare trying to store multirow aggregate values in individual rows. For\nexample, your values will be wrong if you insert rows in the wrong\norder (i.e. today's data before yesterday's data).\n\nMy first approach would be to remove WeekAvg and MonthAvg from the\ntable and create a view which calculates appropriate values.\n\nIf that proves too inefficient (e.g. because the data set is too huge\nand too much data is queried for individual queries) we can start\noptimizing. One approach to optimizing would be to have secondary\ntables\n\n| Week | Hostname | WeekAvg |\n| Month | Hostname | MonthAvg |\n\nand update them with an insert trigger and probably also with an\nupdate and delete trigger.\n\nIf you actually need increasing values (i.e. running totals) you can\nuse windowing functions (analytic SQL in Oracle). View definitions\nthen of course need to change.\nhttp://www.postgresql.org/docs/9.0/interactive/queries-table-expressions.html#QUERIES-WINDOW\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 14 Jul 2011 13:06:56 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger or Function"
},
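For completeness, a rough sketch of the trigger-maintained secondary table suggested above (the thread ultimately went with a window-function view instead). The daily_vals column names follow the later examples in the thread; the weekly_sums table and the function name are hypothetical, and this sketch is not safe against two concurrent first inserts for the same (week, host):

CREATE TABLE weekly_sums (
    week  date    NOT NULL,
    host  text    NOT NULL,
    total numeric NOT NULL DEFAULT 0,
    PRIMARY KEY (week, host)
);

CREATE OR REPLACE FUNCTION daily_vals_roll_up() RETURNS trigger AS $$
BEGIN
    -- Add the new day's value to the running weekly total.
    UPDATE weekly_sums
       SET total = total + NEW.value
     WHERE week = date_trunc('week', NEW.date)::date
       AND host = NEW.host;
    IF NOT FOUND THEN
        INSERT INTO weekly_sums (week, host, total)
        VALUES (date_trunc('week', NEW.date)::date, NEW.host, NEW.value);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER daily_vals_roll_up
AFTER INSERT ON daily_vals
FOR EACH ROW EXECUTE PROCEDURE daily_vals_roll_up();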
{
"msg_contents": "> My first approach would be to remove WeekAvg and MonthAvg from the\n> table and create a view which calculates appropriate values.\n\nThanks Robert, I had to upgrade to 9.0.4 to use the extended windowing\nfeatures.\nHere is how I set it up. If anyone sees an issue, please let me know.\nI'm new to postgres.\n\nBasically, my \"daily_vals\" table contains HOST, DATE, & VALUE columns.\nWhat I wanted was a way to automatically populate a 4th column\ncalled \"rolling_average\", which would be the sum of <n> preceding\ncolumns.\n\ntestdb=# select * from daily_vals;\n rid | date | host | value\n-----+------------+--------+-------------\n 1 | 2011-07-01 | hosta | 100.0000\n 2 | 2011-07-02 | hosta | 200.0000\n 3 | 2011-07-03 | hosta | 400.0000\n 4 | 2011-07-04 | hosta | 500.0000\n 5 | 2011-07-05 | hosta | 100.0000\n 6 | 2011-07-06 | hosta | 700.0000\n 7 | 2011-07-07 | hosta | 200.0000\n 8 | 2011-07-08 | hosta | 100.0000\n 9 | 2011-07-09 | hosta | 100.0000\n 10 | 2011-07-10 | hosta | 100.0000\n 11 | 2011-07-01 | hostb | 5.7143\n 12 | 2011-07-02 | hostb | 8.5714\n 13 | 2011-07-03 | hostb | 11.4286\n 14 | 2011-07-04 | hostb | 8.5714\n 15 | 2011-07-05 | hostb | 2.8571\n 16 | 2011-07-06 | hostb | 1.4286\n 17 | 2011-07-07 | hostb | 1.4286\n\n\nI created a view called weekly_average using this VIEW statement.\n\nCREATE OR REPLACE\n VIEW weekly_average\n AS SELECT *, sum(value) OVER (PARTITION BY host\n ORDER BY rid\n ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n ) as rolling_average FROM daily_vals;\n\n\nThe I query the view just like a regular table.\nthe rolling average is calulated from the previuous 6 rows (for each\nhost).\n\ntestdb=# select * from weekly_average;\n rid | date | host | value | rolling_average\n-----+------------+--------+----------+------------------\n 1 | 2011-07-01 | hosta | 100.0000 | 100.0000\n 2 | 2011-07-02 | hosta | 200.0000 | 300.0000\n 3 | 2011-07-03 | hosta | 400.0000 | 700.0000\n 4 | 2011-07-04 | hosta | 500.0000 | 1200.0000\n 5 | 2011-07-05 | hosta | 100.0000 | 1300.0000\n 6 | 2011-07-06 | hosta | 700.0000 | 2000.0000\n 7 | 2011-07-07 | hosta | 200.0000 | 1400.0000\n 8 | 2011-07-08 | hosta | 100.0000 | 1400.0000\n 9 | 2011-07-09 | hosta | 100.0000 | 1200.0000\n 10 | 2011-07-10 | hosta | 100.0000 | 600.0000\n 11 | 2011-07-01 | hostb | 5.7143 | 5.7143\n 12 | 2011-07-02 | hostb | 8.5714 | 14.2857\n 13 | 2011-07-03 | hostb | 11.4286 | 25.7143\n 14 | 2011-07-04 | hostb | 8.5714 | 34.2857\n 15 | 2011-07-05 | hostb | 2.8571 | 37.1428\n 16 | 2011-07-06 | hostb | 1.4286 | 38.5714\n 17 | 2011-07-07 | hostb | 1.4286 | 40.0000\n\nAlan\n\n\n",
"msg_date": "Sat, 23 Jul 2011 08:58:30 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trigger or Function"
},
{
"msg_contents": "On 24/07/11 03:58, alan wrote:\n>> My first approach would be to remove WeekAvg and MonthAvg from the\n>> table and create a view which calculates appropriate values.\n> Thanks Robert, I had to upgrade to 9.0.4 to use the extended windowing\n> features.\n> Here is how I set it up. If anyone sees an issue, please let me know.\n> I'm new to postgres.\n>\n> Basically, my \"daily_vals\" table contains HOST, DATE,& VALUE columns.\n> What I wanted was a way to automatically populate a 4th column\n> called \"rolling_average\", which would be the sum of<n> preceding\n> columns.\n>\n> testdb=# select * from daily_vals;\n> rid | date | host | value\n> -----+------------+--------+-------------\n> 1 | 2011-07-01 | hosta | 100.0000\n> 2 | 2011-07-02 | hosta | 200.0000\n> 3 | 2011-07-03 | hosta | 400.0000\n> 4 | 2011-07-04 | hosta | 500.0000\n> 5 | 2011-07-05 | hosta | 100.0000\n> 6 | 2011-07-06 | hosta | 700.0000\n> 7 | 2011-07-07 | hosta | 200.0000\n> 8 | 2011-07-08 | hosta | 100.0000\n> 9 | 2011-07-09 | hosta | 100.0000\n> 10 | 2011-07-10 | hosta | 100.0000\n> 11 | 2011-07-01 | hostb | 5.7143\n> 12 | 2011-07-02 | hostb | 8.5714\n> 13 | 2011-07-03 | hostb | 11.4286\n> 14 | 2011-07-04 | hostb | 8.5714\n> 15 | 2011-07-05 | hostb | 2.8571\n> 16 | 2011-07-06 | hostb | 1.4286\n> 17 | 2011-07-07 | hostb | 1.4286\n>\n>\n> I created a view called weekly_average using this VIEW statement.\n>\n> CREATE OR REPLACE\n> VIEW weekly_average\n> AS SELECT *, sum(value) OVER (PARTITION BY host\n> ORDER BY rid\n> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n> ) as rolling_average FROM daily_vals;\n>\n>\n> The I query the view just like a regular table.\n> the rolling average is calulated from the previuous 6 rows (for each\n> host).\n>\n> testdb=# select * from weekly_average;\n> rid | date | host | value | rolling_average\n> -----+------------+--------+----------+------------------\n> 1 | 2011-07-01 | hosta | 100.0000 | 100.0000\n> 2 | 2011-07-02 | hosta | 200.0000 | 300.0000\n> 3 | 2011-07-03 | hosta | 400.0000 | 700.0000\n> 4 | 2011-07-04 | hosta | 500.0000 | 1200.0000\n> 5 | 2011-07-05 | hosta | 100.0000 | 1300.0000\n> 6 | 2011-07-06 | hosta | 700.0000 | 2000.0000\n> 7 | 2011-07-07 | hosta | 200.0000 | 1400.0000\n> 8 | 2011-07-08 | hosta | 100.0000 | 1400.0000\n> 9 | 2011-07-09 | hosta | 100.0000 | 1200.0000\n> 10 | 2011-07-10 | hosta | 100.0000 | 600.0000\n> 11 | 2011-07-01 | hostb | 5.7143 | 5.7143\n> 12 | 2011-07-02 | hostb | 8.5714 | 14.2857\n> 13 | 2011-07-03 | hostb | 11.4286 | 25.7143\n> 14 | 2011-07-04 | hostb | 8.5714 | 34.2857\n> 15 | 2011-07-05 | hostb | 2.8571 | 37.1428\n> 16 | 2011-07-06 | hostb | 1.4286 | 38.5714\n> 17 | 2011-07-07 | hostb | 1.4286 | 40.0000\n>\n> Alan\n>\n>\n>\nThe above gives just the rolling sum, you need to divide by the number \nof rows in the sum to get the average (I assume you want the arithmetic \nmean, as the are many types of average!).\n\nCREATE OR REPLACE\n VIEW weekly_average\n AS SELECT\n *,\n round((sum(value) OVER mywindow / LEAST(6, (row_number() OVER \nmywindow))), 4) AS rolling_average\n FROM daily_vals\n WINDOW mywindow AS\n (\n PARTITION BY host\n ORDER BY rid\n ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n );\n\nCheers,\nGavin\n",
"msg_date": "Sat, 30 Jul 2011 13:01:25 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger or Function"
},
{
"msg_contents": "On Sat, Jul 30, 2011 at 3:01 AM, Gavin Flower\n<[email protected]> wrote:\n> On 24/07/11 03:58, alan wrote:\n>>>\n>>> My first approach would be to remove WeekAvg and MonthAvg from the\n>>> table and create a view which calculates appropriate values.\n>>\n>> Thanks Robert, I had to upgrade to 9.0.4 to use the extended windowing\n>> features.\n>> Here is how I set it up. If anyone sees an issue, please let me know.\n>> I'm new to postgres.\n>>\n>> Basically, my \"daily_vals\" table contains HOST, DATE,& VALUE columns.\n>> What I wanted was a way to automatically populate a 4th column\n>> called \"rolling_average\", which would be the sum of<n> preceding\n>> columns.\n\nThere seems to be contradiction in the naming here. Did you mean \"avg\nof<n> preceding columns.\"?\n\n>> I created a view called weekly_average using this VIEW statement.\n>>\n>> CREATE OR REPLACE\n>> VIEW weekly_average\n>> AS SELECT *, sum(value) OVER (PARTITION BY host\n>> ORDER BY rid\n>> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n>> ) as rolling_average FROM daily_vals;\n\n> The above gives just the rolling sum, you need to divide by the number of\n> rows in the sum to get the average (I assume you want the arithmetic mean,\n> as the are many types of average!).\n>\n> CREATE OR REPLACE\n> VIEW weekly_average\n> AS SELECT\n> *,\n> round((sum(value) OVER mywindow / LEAST(6, (row_number() OVER\n> mywindow))), 4) AS rolling_average\n> FROM daily_vals\n> WINDOW mywindow AS\n> (\n> PARTITION BY host\n> ORDER BY rid\n> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n> );\n\nWhy not\n\nCREATE OR REPLACE\n VIEW weekly_average\n AS SELECT *, avg(value) OVER (PARTITION BY host\n ORDER BY rid\n ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n ) as rolling_average FROM daily_vals;\n\nWhat did I miss?\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 1 Aug 2011 09:18:49 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger or Function"
},
{
"msg_contents": "On 01/08/11 19:18, Robert Klemme wrote:\n> On Sat, Jul 30, 2011 at 3:01 AM, Gavin Flower\n> <[email protected]> wrote:\n>> On 24/07/11 03:58, alan wrote:\n>>>> My first approach would be to remove WeekAvg and MonthAvg from the\n>>>> table and create a view which calculates appropriate values.\n>>> Thanks Robert, I had to upgrade to 9.0.4 to use the extended windowing\n>>> features.\n>>> Here is how I set it up. If anyone sees an issue, please let me know.\n>>> I'm new to postgres.\n>>>\n>>> Basically, my \"daily_vals\" table contains HOST, DATE,& VALUE columns.\n>>> What I wanted was a way to automatically populate a 4th column\n>>> called \"rolling_average\", which would be the sum of<n> preceding\n>>> columns.\n> There seems to be contradiction in the naming here. Did you mean \"avg\n> of<n> preceding columns.\"?\n>\n>>> I created a view called weekly_average using this VIEW statement.\n>>>\n>>> CREATE OR REPLACE\n>>> VIEW weekly_average\n>>> AS SELECT *, sum(value) OVER (PARTITION BY host\n>>> ORDER BY rid\n>>> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n>>> ) as rolling_average FROM daily_vals;\n>> The above gives just the rolling sum, you need to divide by the number of\n>> rows in the sum to get the average (I assume you want the arithmetic mean,\n>> as the are many types of average!).\n>>\n>> CREATE OR REPLACE\n>> VIEW weekly_average\n>> AS SELECT\n>> *,\n>> round((sum(value) OVER mywindow / LEAST(6, (row_number() OVER\n>> mywindow))), 4) AS rolling_average\n>> FROM daily_vals\n>> WINDOW mywindow AS\n>> (\n>> PARTITION BY host\n>> ORDER BY rid\n>> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n>> );\n> Why not\n>\n> CREATE OR REPLACE\n> VIEW weekly_average\n> AS SELECT *, avg(value) OVER (PARTITION BY host\n> ORDER BY rid\n> ROWS BETWEEN 6 PRECEDING AND CURRENT ROW\n> ) as rolling_average FROM daily_vals;\n>\n> What did I miss?\n>\n> Kind regards\n>\n> robert\n>\n<Chuckle> Your fix is much more elegant and efficient, though both \napproaches work!\n",
"msg_date": "Mon, 01 Aug 2011 21:26:01 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger or Function"
}
] |
[
{
"msg_contents": "Hi, all.\n\nI have a query, looking like this:\nSELECT\n\tpub_date\nFROM\n\ttubesite_object\n\tINNER JOIN tubesite_image\n\t\tON tubesite_image.object_ptr_id = tubesite_object.id\nWHERE\n\ttubesite_object.site_id = 8\n\tAND tubesite_object.pub_date < E'2011-07-12 13:25:00'\nORDER BY\n\ttubesite_object.pub_date ASC\nLIMIT 21;\n\n\nThat query takes cca 10-15 seconds to run. Here is query plan:\n\n Limit (cost=0.00..415.91 rows=21 width=8) (actual \ntime=11263.089..11263.089 rows=0 loops=1)\n -> Nested Loop (cost=0.00..186249.55 rows=9404 width=8) (actual \ntime=11263.087..11263.087 rows=0 loops=1)\n -> Index Scan using tubesite_object_pub_date_idx on \ntubesite_object (cost=0.00..183007.09 rows=9404 width=12) (actual \ntime=0.024..11059.487 rows=9374 loops=1)\n Index Cond: (pub_date < '2011-07-12 \n13:25:00-05'::timestamp with time zone)\n Filter: (site_id = 8)\n -> Index Scan using tubesite_image_pkey on tubesite_image \n(cost=0.00..0.33 rows=1 width=4) (actual time=0.021..0.021 rows=0 \nloops=9374)\n Index Cond: (tubesite_image.object_ptr_id = \ntubesite_object.id)\n Total runtime: 11263.141 ms\n\n\nThis query runs quickly (around second or two) when there is only few \nconnections to the database. Once I have 50-80 connections (200 is the \nlimit, although I never have more than 120-150 connections), that query \ntakes around 10-15 seconds.\n\nBut, if I disable nestedloops, here is the query plan:\n\n Limit (cost=22683.45..22683.51 rows=21 width=8) (actual \ntime=136.009..136.009 rows=0 loops=1)\n -> Sort (cost=22683.45..22706.96 rows=9404 width=8) (actual \ntime=136.007..136.007 rows=0 loops=1)\n Sort Key: tubesite_object.pub_date\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=946.51..22429.91 rows=9404 width=8) \n(actual time=135.934..135.934 rows=0 loops=1)\n Hash Cond: (tubesite_object.id = \ntubesite_image.object_ptr_id)\n -> Bitmap Heap Scan on tubesite_object \n(cost=545.40..21828.97 rows=9404 width=12) (actual time=20.874..104.075 \nrows=9374 loops=1)\n Recheck Cond: (site_id = 8)\n Filter: (pub_date < '2011-07-12 \n13:25:00-05'::timestamp with time zone)\n -> Bitmap Index Scan on tubesite_object_site_id \n(cost=0.00..543.05 rows=9404 width=0) (actual time=18.789..18.789 \nrows=9374 loops=1)\n Index Cond: (site_id = 8)\n -> Hash (cost=215.49..215.49 rows=14849 width=4) \n(actual time=21.068..21.068 rows=14849 loops=1)\n -> Seq Scan on tubesite_image (cost=0.00..215.49 \nrows=14849 width=4) (actual time=0.029..9.073 rows=14849 loops=1)\n Total runtime: 136.287 ms\n\n\nNow, if I disable nested loops in postgres.conf, then my load average on \nthe server goes skyhigh (i presume because a lot of other queries are \nnow being planned incorrectly).\n\nI have set up default_statistics_target to 2000, and have vacumed and \nanalyzed the database.\n\nHere are the other options I have set up in postgresql.conf (that differ \nfrom the default settings):\n\n version | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, \ncompiled by GCC gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2, 64-bit\n checkpoint_segments | 64\n default_statistics_target | 2000\n effective_cache_size | 20GB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_autovacuum_min_duration | 0\n log_checkpoints | on\n log_line_prefix | %t [%p]: [%l-1]\n log_min_duration_statement | 1s\n maintenance_work_mem | 256MB\n max_connections | 200\n max_stack_depth | 3MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 2GB\n 
statement_timeout | 30min\n temp_buffers | 4096\n TimeZone | localtime\n track_activity_query_size | 2048\n unix_socket_directory | /var/run/postgresql\n wal_buffers | 128MB\n work_mem | 64MB\n\n\n\nWhy is planner using NestedLoops, that is, what can I do to make him NOT \nto use NestedLoops (other than issuing SET enable_nestloop TO false; \nbefore each query) ?\n\n\tMario\n",
"msg_date": "Tue, 12 Jul 2011 20:11:46 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner choosing NestedLoop, although it is slower..."
},
{
"msg_contents": "On 07/12/2011 11:11 AM, Mario Splivalo wrote:\n> Hi, all.\n>\n> I have a query, looking like this:\n> SELECT\n> pub_date\n> FROM\n> tubesite_object\n> INNER JOIN tubesite_image\n> ON tubesite_image.object_ptr_id = tubesite_object.id\n> WHERE\n> tubesite_object.site_id = 8\n> AND tubesite_object.pub_date < E'2011-07-12 13:25:00'\n> ORDER BY\n> tubesite_object.pub_date ASC\n> LIMIT 21;\n>\n\n> Why is planner using NestedLoops, that is, what can I do to make him NOT\n> to use NestedLoops (other than issuing SET enable_nestloop TO false;\n> before each query) ?\n\nThe planner is using a nested loops because the startup overhead is \nless, and it think that it will only have run a small 0.2% (21/9404) of \nthe loops before reaching your limit of 21 results. In fact it has to \nrun all the loops, because there are 0 results. (Is that what you expected?)\n\nTry a using CTE to make the planner think you are going to use all the \nrows of the joined table. That may cause the planner to use a merge \njoin, which has higher startup cost (sort) but less overall cost if it \nthe join will not finish early.\n\nWITH t AS (\n SELECT tubesite_object.site_id AS site_id,\n tubesite_object.pub_date as pub_date\n FROM tubesite_object\n INNER JOIN tubesite_image\n ON tubesite_image.object_ptr_id = tubesite_object.id\n)\nSELECT pub_date\nFROM t\nWHERE t.site_id = 8 AND t.pub_date < E'2011-07-12 13:25:00'\nORDER BY t.pub_date ASC LIMIT 21;\n\n",
"msg_date": "Tue, 12 Jul 2011 12:55:14 -0700",
"msg_from": "Clem Dickey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower..."
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n> Limit (cost=0.00..415.91 rows=21 width=8) (actual \n> time=11263.089..11263.089 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..186249.55 rows=9404 width=8) (actual \n> time=11263.087..11263.087 rows=0 loops=1)\n\n> Why is planner using NestedLoops,\n\nBecause it thinks the LIMIT will kick in and end the query when the join\nis only 21/9404ths (ie, a fraction of a percent) complete. A NestLoop\nresults in saving a lot of work in that situation, whereas hash-and-sort\nhas to do the whole join despite the LIMIT.\n\nWhat you need to look into is why the estimated join size is 9400 rows\nwhen the actual join size is zero. Are both tables ANALYZEd? Are you\nintentionally selecting rows that have no join partners?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jul 2011 16:04:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower... "
},
{
"msg_contents": "On 07/12/2011 10:04 PM, Tom Lane wrote:\n> Mario Splivalo<[email protected]> writes:\n>> Limit (cost=0.00..415.91 rows=21 width=8) (actual\n>> time=11263.089..11263.089 rows=0 loops=1)\n>> -> Nested Loop (cost=0.00..186249.55 rows=9404 width=8) (actual\n>> time=11263.087..11263.087 rows=0 loops=1)\n>\n>> Why is planner using NestedLoops,\n>\n> Because it thinks the LIMIT will kick in and end the query when the join\n> is only 21/9404ths (ie, a fraction of a percent) complete. A NestLoop\n> results in saving a lot of work in that situation, whereas hash-and-sort\n> has to do the whole join despite the LIMIT.\n>\n> What you need to look into is why the estimated join size is 9400 rows\n> when the actual join size is zero. Are both tables ANALYZEd? Are you\n> intentionally selecting rows that have no join partners?\n\nHi, Tom.\n\nYes, both tables have been ANALYZEd. What do you mean, intentilnaly \nselecting rows taht have no join partners?\n\n\tMario\n",
"msg_date": "Wed, 13 Jul 2011 00:06:46 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower..."
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n> On 07/12/2011 10:04 PM, Tom Lane wrote:\n>> What you need to look into is why the estimated join size is 9400 rows\n>> when the actual join size is zero. Are both tables ANALYZEd? Are you\n>> intentionally selecting rows that have no join partners?\n\n> Yes, both tables have been ANALYZEd. What do you mean, intentilnaly \n> selecting rows taht have no join partners?\n\nI'm wondering why the actual join size is zero. That seems like a\nrather unexpected case for a query like this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jul 2011 18:39:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower... "
},
{
"msg_contents": "On 07/13/2011 12:39 AM, Tom Lane wrote:\n> Mario Splivalo<[email protected]> writes:\n>> On 07/12/2011 10:04 PM, Tom Lane wrote:\n>>> What you need to look into is why the estimated join size is 9400 rows\n>>> when the actual join size is zero. Are both tables ANALYZEd? Are you\n>>> intentionally selecting rows that have no join partners?\n>\n>> Yes, both tables have been ANALYZEd. What do you mean, intentilnaly\n>> selecting rows taht have no join partners?\n>\n> I'm wondering why the actual join size is zero. That seems like a\n> rather unexpected case for a query like this.\n\nIt is true that this particular query returns 0 rows. But it's created \nby django, and I can't do much to alter it.\n\n\tMario\n",
"msg_date": "Wed, 13 Jul 2011 01:57:06 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower..."
},
{
"msg_contents": "On 07/13/2011 12:39 AM, Tom Lane wrote:\n> Mario Splivalo<[email protected]> writes:\n>> On 07/12/2011 10:04 PM, Tom Lane wrote:\n>>> What you need to look into is why the estimated join size is 9400 rows\n>>> when the actual join size is zero. Are both tables ANALYZEd? Are you\n>>> intentionally selecting rows that have no join partners?\n>\n>> Yes, both tables have been ANALYZEd. What do you mean, intentilnaly\n>> selecting rows taht have no join partners?\n>\n> I'm wondering why the actual join size is zero. That seems like a\n> rather unexpected case for a query like this.\n\nYes, seems that planer gets confused by LIMIT. This query:\n\nselect * from tubesite_object join tubesite_image on id=object_ptr_id \nwhere site_id = 8 and pub_date < '2011-07-12 13:25:00' order by pub_date \ndesc ;\n\nDoes not choose Nested Loop, and is done instantly (20 ms), and returns \nno rows. However, if I add LIMIT at the end, it chooses NestedLoop and \nit takes 500ms if I'm alone on the server, and 10+ seconds if there 50+ \nconnections on the server.\n\n\tMario\n",
"msg_date": "Wed, 13 Jul 2011 02:53:32 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower..."
},
{
"msg_contents": "On 07/13/2011 02:53 AM, Mario Splivalo wrote:\n> On 07/13/2011 12:39 AM, Tom Lane wrote:\n>> Mario Splivalo<[email protected]> writes:\n>>> On 07/12/2011 10:04 PM, Tom Lane wrote:\n>>>> What you need to look into is why the estimated join size is 9400 rows\n>>>> when the actual join size is zero. Are both tables ANALYZEd? Are you\n>>>> intentionally selecting rows that have no join partners?\n>>\n>>> Yes, both tables have been ANALYZEd. What do you mean, intentilnaly\n>>> selecting rows taht have no join partners?\n>>\n>> I'm wondering why the actual join size is zero. That seems like a\n>> rather unexpected case for a query like this.\n>\n> Yes, seems that planer gets confused by LIMIT. This query:\n>\n> select * from tubesite_object join tubesite_image on id=object_ptr_id\n> where site_id = 8 and pub_date < '2011-07-12 13:25:00' order by pub_date\n> desc ;\n>\n> Does not choose Nested Loop, and is done instantly (20 ms), and returns\n> no rows. However, if I add LIMIT at the end, it chooses NestedLoop and\n> it takes 500ms if I'm alone on the server, and 10+ seconds if there 50+\n> connections on the server.\n\nAs explained/suggested by RhodiumToad on IRC, adding composite index on \n(site_id, pub_date) made nestedloop query finish in around 100 seconds!\n\nThank you!\n\n\tMario\n",
"msg_date": "Wed, 13 Jul 2011 04:04:35 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner choosing NestedLoop, although it is slower..."
}
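A sketch of the fix described in the last message, for readers who want the exact statement; the index definition is inferred from the queries in the thread, not copied from it:

-- Lets the planner scan one site's rows already ordered by pub_date,
-- so the LIMIT can stop early instead of walking the whole pub_date index.
CREATE INDEX tubesite_object_site_id_pub_date_idx
    ON tubesite_object (site_id, pub_date);
ANALYZE tubesite_object;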
] |
[
{
"msg_contents": "shared_buffers is big enough to hold the entire database, and there is plenty of extra space. (verified with PG_buffercache) \nSo i don't think that is the reason. \n\n\nTom Lane <[email protected]> schrieb:\n\n>Jeff Janes <[email protected]> writes:\n>> On 7/12/11, lars <[email protected]> wrote:\n>>> The fact that a select (maybe a big analytical query we'll run) touching\n>>> many rows will update the WAL and wait\n>>> (apparently) for that IO to complete is making a fully cached database\n>>> far less useful.\n>>> I just artificially created this scenario.\n>\n>> I can't think of any reason that that WAL would have to be flushed\n>> synchronously.\n>\n>Maybe he's running low on shared_buffers? We would have to flush WAL\n>before writing a dirty buffer out, so maybe excessive pressure on\n>available buffers is part of the issue here.\n>\n>\t\t\tregards, tom lane\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Jul 2011 21:01:34 -0700",
"msg_from": "Lars <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATEDs slowing SELECTs in a fully cached database"
}
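A minimal sketch of the kind of pg_buffercache check mentioned above, assuming the contrib module is installed in the current database and the default 8 kB block size; relation and column names come from the standard catalogs, not from the thread:

-- Approximate shared_buffers usage per relation, in MB.
SELECT c.relname,
       count(*) * 8 / 1024 AS cached_mb
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY cached_mb DESC
LIMIT 10;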
] |
[
{
"msg_contents": "Hi list,\n\nMy employer will be donated a NetApp FAS 3040 SAN [1] and we want to run\nour warehouse DB on it. The pg9.0 DB currently comprises ~1.5TB of\ntables, 200GB of indexes, and grows ~5%/month. The DB is not update\ncritical, but undergoes larger read and insert operations frequently.\n\nMy employer is a university with little funds and we have to find a\ncheap way to scale for the next 3 years, so the SAN seems a good chance\nto us. We are now looking for the remaining server parts to maximize DB\nperformance with costs <= $4000. I digged out the following\nconfiguration with the discount we receive from Dell:\n \t\n 1 x Intel Xeon X5670, 6C, 2.93GHz, 12M Cache\n 16 GB (4x4GB) Low Volt DDR3 1066Mhz\n PERC H700 SAS RAID controller\n 4 x 300 GB 10k SAS 6Gbps 2.5\" in RAID 10\n\nI was thinking to put the WAL and the indexes on the local disks, and\nthe rest on the SAN. If funds allow, we might downgrade the disks to\nSATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n\nAny comments on the configuration? Any experiences with iSCSI vs. Fibre\nChannel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\ncheap alternative how to connect as many as 16 x 2TB disks as DAS?\n\nThanks so much!\n\nBest,\nChris\n\n[1]: http://www.b2net.co.uk/netapp/fas3000.pdf\n\n",
"msg_date": "Thu, 14 Jul 2011 23:34:24 -0700",
"msg_from": "chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware advice for scalable warehouse db"
},
{
"msg_contents": "chris wrote:\n> My employer is a university with little funds and we have to find a\n> cheap way to scale for the next 3 years, so the SAN seems a good chance\n> to us.\n\nA SAN is rarely ever the cheapest way to scale anything; you're paying \nextra for reliability instead.\n\n\n> I was thinking to put the WAL and the indexes on the local disks, and\n> the rest on the SAN. If funds allow, we might downgrade the disks to\n> SATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n> \n\nIf you want to keep the bulk of the data on the SAN, this is a \nreasonable way to go, performance-wise. But be aware that losing the \nWAL means your database is likely corrupted. That means that much of \nthe reliability benefit of the SAN is lost in this configuration.\n\n\n> Any experiences with iSCSI vs. Fibre\n> Channel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\n> cheap alternative how to connect as many as 16 x 2TB disks as DAS?\n> \n\nI've never heard anyone recommend iSCSI if you care at all about \nperformance, while FC works fine for this sort of job. The physical \ndimensions of 3.5\" drives makes getting 16 of them in one reasonably \nsized enclosure normally just out of reach. But a Dell PowerVault \nMD1000 will give you 15 x 2TB as inexpensively as possible in a single \n3U space (well, as cheaply as you want to go--you might build your own \ngiant box cheaper but I wouldn't recommend ). I've tested MD1000, \nMD1200, and MD1220 arrays before, and always gotten seriously good \nperformance relative to the dollars spent with that series. Only one of \nthese Dell storage arrays I've heard two disappointing results from (but \nnot tested directly yet) is the MD3220.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Fri, 15 Jul 2011 08:10:37 +0100",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "> 1 x Intel Xeon X5670, 6C, 2.93GHz, 12M Cache\n> 16 GB (4x4GB) Low Volt DDR3 1066Mhz\n> PERC H700 SAS RAID controller\n> 4 x 300 GB 10k SAS 6Gbps 2.5\" in RAID 10\n\nApart from Gregs excellent recommendations. I would strongly suggest\nmore memory. 16GB in 2011 is really on the low side.\n\nPG is using memory (either shared_buffers og OS cache) for\nkeeping frequently accessed data in. Good recommendations are hard\nwithout knowledge of data and access-patterns, but 64, 128 and 256GB\nsystem are quite frequent when you have data that can't all be\nin memory at once.\n\nSAN's are nice, but I think you can buy a good DAS thing each year\nfor just the support cost of a Netapp, but you might have gotten a\nreally good deal there too. But you are getting a huge amount of\nadvanced configuration features and potential ways of sharing and..\nand .. just see the specs.\n\n.. and if you need those the SAN is a good way to go, but\nthey do come with a huge pricetag.\n\nJesper\n\n",
"msg_date": "Fri, 15 Jul 2011 10:22:59 +0200 (CEST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "\nOn 7/15/2011 2:10 AM, Greg Smith wrote:\n> chris wrote:\n>> My employer is a university with little funds and we have to find a\n>> cheap way to scale for the next 3 years, so the SAN seems a good chance\n>> to us.\n> A SAN is rarely ever the cheapest way to scale anything; you're paying\n> extra for reliability instead.\n>\n>\n>> I was thinking to put the WAL and the indexes on the local disks, and\n>> the rest on the SAN. If funds allow, we might downgrade the disks to\n>> SATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n>>\n> If you want to keep the bulk of the data on the SAN, this is a\n> reasonable way to go, performance-wise. But be aware that losing the\n> WAL means your database is likely corrupted. That means that much of\n> the reliability benefit of the SAN is lost in this configuration.\n>\n>\n>> Any experiences with iSCSI vs. Fibre\n>> Channel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\n>> cheap alternative how to connect as many as 16 x 2TB disks as DAS?\n>>\n> I've never heard anyone recommend iSCSI if you care at all about\n> performance, while FC works fine for this sort of job. The physical\n> dimensions of 3.5\" drives makes getting 16 of them in one reasonably\n> sized enclosure normally just out of reach. But a Dell PowerVault\n> MD1000 will give you 15 x 2TB as inexpensively as possible in a single\n> 3U space (well, as cheaply as you want to go--you might build your own\n> giant box cheaper but I wouldn't recommend ).\n\nI'm curious what people think of these:\nhttp://www.pc-pitstop.com/sas_cables_enclosures/scsase166g.asp\n\nI currently have my database on two of these and for my purpose they \nseem to be fine and are quite a bit less expensive than the Dell \nMD1000. I actually have three more of the 3G versions with expanders \nfor mass storage arrays (RAID0) and haven't had any issues with them in \nthe three years I've had them.\n\nBob\n\n\n\n",
"msg_date": "Fri, 15 Jul 2011 11:39:49 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "On Fri, Jul 15, 2011 at 12:34 AM, chris <[email protected]> wrote:\n> I was thinking to put the WAL and the indexes on the local disks, and\n> the rest on the SAN. If funds allow, we might downgrade the disks to\n> SATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n\nJust to add to the conversation, there's no real advantage to putting\nWAL on SSD. Indexes can benefit from them, but WAL is mosty\nseqwuential throughput and for that a pair of SATA 1TB drives at\n7200RPM work just fine for most folks. For example, in one big server\nwe're running we have 24 drives in a RAID-10 for the /data/base dir\nwith 4 drives in a RAID-10 for pg_xlog, and those 4 drives tend to\nhave the same io util % under iostat as the 24 drives under normal\nusage. It takes a special kind of load (lots of inserts happening in\nlarge transactions quickly) for the 4 drive RAID-10 to have more than\n50% util ever.\n",
"msg_date": "Fri, 15 Jul 2011 11:59:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
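The utilization comparison Scott mentions comes from iostat in the sysstat package; a simple way to watch it while a load is running (device names will be whatever the WAL and data arrays appear as on the host):

    # extended per-device statistics every 5 seconds; compare the %util column
    # for the pg_xlog array against the data array
    iostat -x 5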
{
"msg_contents": "On Fri, Jul 15, 2011 at 10:39 AM, Robert Schnabel\n<[email protected]> wrote:\n> I'm curious what people think of these:\n> http://www.pc-pitstop.com/sas_cables_enclosures/scsase166g.asp\n>\n> I currently have my database on two of these and for my purpose they seem to\n> be fine and are quite a bit less expensive than the Dell MD1000. I actually\n> have three more of the 3G versions with expanders for mass storage arrays\n> (RAID0) and haven't had any issues with them in the three years I've had\n> them.\n\nI have a co-worker who's familiar with them and they seem a lot like\nthe 16 drive units we use from Aberdeen, which fully outfitted with\n15k SAS drives run $5k to $8k depending on the drives etc.\n",
"msg_date": "Fri, 15 Jul 2011 12:01:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "\n> Just to add to the conversation, there's no real advantage to putting\n> WAL on SSD. Indexes can benefit from them, but WAL is mosty\n> seqwuential throughput and for that a pair of SATA 1TB drives at\n> 7200RPM work just fine for most folks. \n\nActually, there's a strong disadvantage to putting WAL on SSD. SSD is\nvery prone to fragmentation if you're doing a lot of deleting and\nreplacing files. I've implemented data warehouses where the database\nwas on SSD but WAL was still on HDD.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 15 Jul 2011 11:46:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "Hi list,\n\nThanks a lot for your very helpful feedback!\n\n> I've tested MD1000, MD1200, and MD1220 arrays before, and always gotten\n> seriously good performance relative to the dollars spent\nGreat hint, but I'm afraid that's too expensive for us. But it's a great\nway to scale over the years, I'll keep that in mind.\n\nI had a look at other server vendors who offer 4U servers with slots for\n16 disks for 4k in total (w/o disks), maybe that's an even\ncheaper/better solution for us. If you had the choice between 16 x 2TB\nSATA vs. a server with some SSDs for WAL/indexes and a SAN (with SATA\ndisk) for data, what would you choose performance-wise?\n\nAgain, thanks so much for your help.\n\nBest,\nChris\n",
"msg_date": "Fri, 15 Jul 2011 11:49:20 -0700",
"msg_from": "\"chris r.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "On Fri, Jul 15, 2011 at 11:49 AM, chris r. <[email protected]> wrote:\n> Hi list,\n>\n> Thanks a lot for your very helpful feedback!\n>\n>> I've tested MD1000, MD1200, and MD1220 arrays before, and always gotten\n>> seriously good performance relative to the dollars spent\n> Great hint, but I'm afraid that's too expensive for us. But it's a great\n> way to scale over the years, I'll keep that in mind.\n>\n> I had a look at other server vendors who offer 4U servers with slots for\n> 16 disks for 4k in total (w/o disks), maybe that's an even\n> cheaper/better solution for us. If you had the choice between 16 x 2TB\n> SATA vs. a server with some SSDs for WAL/indexes and a SAN (with SATA\n> disk) for data, what would you choose performance-wise?\n>\n> Again, thanks so much for your help.\n>\n> Best,\n> Chris\n\nSATA drives can easily flip bits and postgres does not checksum data,\nso it will not automatically detect corruption for you. I would steer\nwell clear of SATA unless you are going to be using a fs like ZFS\nwhich checksums data. I would hope that a SAN would detect this for\nyou, but I have no idea.\n\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Fri, 15 Jul 2011 12:25:31 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
{
"msg_contents": "On 7/14/11 11:34 PM, chris wrote:\n> Any comments on the configuration? Any experiences with iSCSI vs. Fibre\n> Channel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\n> cheap alternative how to connect as many as 16 x 2TB disks as DAS?\n\nHere's the problem with iSCSI: on gigabit ethernet, your maximum\npossible throughput is 100mb/s, which means that your likely maximum\ndatabase throughput (for a seq scan or vacuum, for example) is 30mb/s.\nThat's about a third of what you can get with good internal RAID.\n\nWhile multichannel iSCSI is possible, it's hard to configure, and\ndoesn't really allow you to spread a *single* request across multiple\nchannels. So: go with fiber channel if you're using a SAN.\n\niSCSI also has horrible lag times, but you don't care about that so much\nfor DW.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 15 Jul 2011 12:46:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
},
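A rough back-of-the-envelope version of those numbers, assuming a single gigabit link (approximate figures, not measurements):

    1 Gbit/s ethernet           ~ 125 MB/s raw
    minus TCP/iSCSI overhead    ~ 100 MB/s usable on the wire
    decent internal RAID-10     ~ 300 MB/s or more sequential read
    => a single-link iSCSI SAN delivers roughly a third of local storage
       for sequential work such as seq scans and vacuums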
{
"msg_contents": "Hi Chris,\n\nA couple comments on the NetApp SAN.\nWe use NetApp, primarily with Fiber connectivity and FC drives. All of the\nPostgres files are located on the SAN and this configuration works well.\nWe have tried iSCSI, but performance his horrible. Same with SATA drives.\nThe SAN will definitely be more costly then local drives. It really depends\non what your needs are.\nThe biggest benefit for me in using SAN is using the special features that\nit offers. We use snapshots and flex clones, which is a great way to backup\nand clone large databases.\n\nCheers,\nTerry\n\n\nOn Thu, Jul 14, 2011 at 11:34 PM, chris <[email protected]> wrote:\n\n> Hi list,\n>\n> My employer will be donated a NetApp FAS 3040 SAN [1] and we want to run\n> our warehouse DB on it. The pg9.0 DB currently comprises ~1.5TB of\n> tables, 200GB of indexes, and grows ~5%/month. The DB is not update\n> critical, but undergoes larger read and insert operations frequently.\n>\n> My employer is a university with little funds and we have to find a\n> cheap way to scale for the next 3 years, so the SAN seems a good chance\n> to us. We are now looking for the remaining server parts to maximize DB\n> performance with costs <= $4000. I digged out the following\n> configuration with the discount we receive from Dell:\n>\n> 1 x Intel Xeon X5670, 6C, 2.93GHz, 12M Cache\n> 16 GB (4x4GB) Low Volt DDR3 1066Mhz\n> PERC H700 SAS RAID controller\n> 4 x 300 GB 10k SAS 6Gbps 2.5\" in RAID 10\n>\n> I was thinking to put the WAL and the indexes on the local disks, and\n> the rest on the SAN. If funds allow, we might downgrade the disks to\n> SATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n>\n> Any comments on the configuration? Any experiences with iSCSI vs. Fibre\n> Channel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\n> cheap alternative how to connect as many as 16 x 2TB disks as DAS?\n>\n> Thanks so much!\n>\n> Best,\n> Chris\n>\n> [1]: http://www.b2net.co.uk/netapp/fas3000.pdf\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Chris,A couple comments on the NetApp SAN.We use NetApp, primarily with Fiber connectivity and FC drives. All of the Postgres files are located on the SAN and this configuration works well.We have tried iSCSI, but performance his horrible. Same with SATA drives.\n\nThe SAN will definitely be more costly then local drives. It really depends on what your needs are.The biggest benefit for me in using SAN is using the special features that it offers. We use snapshots and flex clones, which is a great way to backup and clone large databases.\nCheers,TerryOn Thu, Jul 14, 2011 at 11:34 PM, chris <[email protected]> wrote:\n\nHi list,\n\nMy employer will be donated a NetApp FAS 3040 SAN [1] and we want to run\nour warehouse DB on it. The pg9.0 DB currently comprises ~1.5TB of\ntables, 200GB of indexes, and grows ~5%/month. The DB is not update\ncritical, but undergoes larger read and insert operations frequently.\n\nMy employer is a university with little funds and we have to find a\ncheap way to scale for the next 3 years, so the SAN seems a good chance\nto us. We are now looking for the remaining server parts to maximize DB\nperformance with costs <= $4000. 
I digged out the following\nconfiguration with the discount we receive from Dell:\n\n 1 x Intel Xeon X5670, 6C, 2.93GHz, 12M Cache\n 16 GB (4x4GB) Low Volt DDR3 1066Mhz\n PERC H700 SAS RAID controller\n 4 x 300 GB 10k SAS 6Gbps 2.5\" in RAID 10\n\nI was thinking to put the WAL and the indexes on the local disks, and\nthe rest on the SAN. If funds allow, we might downgrade the disks to\nSATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).\n\nAny comments on the configuration? Any experiences with iSCSI vs. Fibre\nChannel for SANs and PostgreSQL? If the SAN setup sucks, do you see a\ncheap alternative how to connect as many as 16 x 2TB disks as DAS?\n\nThanks so much!\n\nBest,\nChris\n\n[1]: http://www.b2net.co.uk/netapp/fas3000.pdf\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 27 Jul 2011 09:02:53 -0700",
"msg_from": "Terry Schmitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice for scalable warehouse db"
}
] |
[
{
"msg_contents": "Hi all,\nhere is my postgresql configuration:\n\n\"version\";\"PostgreSQL 9.0.3 on amd64-portbld-freebsd8.0, compiled by GCC cc\n(GCC) 4.2.1 20070719 [FreeBSD], 64-bit\"\n\"bytea_output\";\"escape\"\n\"checkpoint_segments\";\"64\"\n\"client_encoding\";\"UNICODE\"\n\"effective_cache_size\";\"6GB\"\n\"fsync\";\"off\"\n\"lc_collate\";\"C\"\n\"lc_ctype\";\"C\"\n\"listen_addresses\";\"*\"\n\"log_destination\";\"syslog\"\n\"max_connections\";\"20\"\n\"max_stack_depth\";\"2MB\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"4GB\"\n\"silent_mode\";\"on\"\n\"synchronous_commit\";\"off\"\n\"TimeZone\";\"Europe/Jersey\"\n\"update_process_title\";\"off\"\n\"work_mem\";\"24MB\"\n\nI have a partitioned table tcpsessiondata:\n\nCREATE TABLE appqosdata.tcpsessiondata\n(\n detectorid smallint not null,\n createdtime bigint not null,\n sessionid bigint not null,\n...\n);\n\nThat table has many millions of rows in each partition and no data in the\nmain one. Here is an example:\n\nselect count(*) from appqosdata.tcpsessiondata_1;\n count\n----------\n 49377910\n(1 row)\n\nEvery partition has a \"Primary Key (detectorid, createdtime)\"\n\nI run the following query on that table:\nselect\n cast (SD.detectorid as numeric),\n CAST( (createdtime / 61000000000::bigint) AS numeric) as timegroup,\n sum(datafromsource)+sum(datafromdestination) as numbytes,\n CAST ( sum(packetsfromsource)+sum(packetsfromdestination) AS numeric) as\nnumpackets\nfrom\n appqosdata.tcpsessiondata SD\nwhere\n SD.detectorid >= 0 and SD.createdtime >= 1297266601368086000::bigint\n and SD.createdtime < 1297270202368086000::bigint\ngroup by SD.detectorid, timegroup\n\nThe table is partitioned by a \"sessionid\" which is not used in this\nparticular query so I already expect all partitions to be touched. 
However I\nhave a bad scan choice on at least a couple of partitions:\n\n\"HashAggregate (cost=5679026.42..5679028.76 rows=67 width=34) (actual\ntime=160113.366..160113.366 rows=0 loops=1)\"\n\" Output: (sd.detectorid)::numeric, (((sd.createdtime /\n61000000000::bigint))::numeric), (sum(sd.datafromsource) +\nsum(sd.datafromdestination)), ((sum(sd.packetsfromsource) +\nsum(sd.packetsfromdestination)))::numeric, sd.detectorid\"\n\" -> Result (cost=0.00..5679025.41 rows=67 width=34) (actual\ntime=160113.360..160113.360 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\n((sd.createdtime / 61000000000::bigint))::numeric\"\n\" -> Append (cost=0.00..5679025.08 rows=67 width=34) (actual\ntime=160113.356..160113.356 rows=0 loops=1)\"\n\" -> Seq Scan on appqosdata.tcpsessiondata sd\n(cost=0.00..23.65 rows=1 width=34) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >=\n1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Seq Scan on appqosdata.tcpsessiondata_1 sd\n(cost=0.00..1373197.46 rows=1 width=34) (actual time=46436.737..46436.737\nrows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >=\n1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Seq Scan on appqosdata.tcpsessiondata_2 sd\n(cost=0.00..2447484.00 rows=1 width=34) (actual time=108359.967..108359.967\nrows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >=\n1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Index Scan using tcpsessiondata_3_pkey on\nappqosdata.tcpsessiondata_3 sd (cost=0.00..11.51 rows=1 width=34) (actual\ntime=0.016..0.016 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\n... 
(many more partitions here)....\n\n\" -> Index Scan using tcpsessiondata_61_pkey on\nappqosdata.tcpsessiondata_61 sd (cost=0.00..8162.42 rows=1 width=34)\n(actual time=25.446..25.446 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Index Scan using tcpsessiondata_62_pkey on\nappqosdata.tcpsessiondata_62 sd (cost=0.00..11.51 rows=1 width=34) (actual\ntime=0.008..0.008 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Index Scan using tcpsessiondata_63_pkey on\nappqosdata.tcpsessiondata_63 sd (cost=0.00..11.51 rows=1 width=34) (actual\ntime=0.006..0.006 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Seq Scan on appqosdata.tcpsessiondata_64 sd\n(cost=0.00..13.00 rows=1 width=34) (actual time=0.102..0.102 rows=0\nloops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >=\n1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Seq Scan on appqosdata.tcpsessiondata_65 sd\n(cost=0.00..117.64 rows=1 width=34) (actual time=0.854..0.854 rows=0\nloops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >=\n1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Index Scan using tcpsessiondata_66_pkey on\nappqosdata.tcpsessiondata_66 sd (cost=0.00..11.51 rows=1 width=34) (actual\ntime=0.007..0.007 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\" -> Index Scan using tcpsessiondata_67_pkey on\nappqosdata.tcpsessiondata_67 sd (cost=0.00..11.51 rows=1 width=34) (actual\ntime=0.005..0.005 rows=0 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1297266601368086000::bigint) AND (sd.createdtime <\n1297270202368086000::bigint))\"\n\"Total runtime: 160114.339 ms\"\n\nThe question is: why do we get a seq scan on appqosdata.tcpsessiondata_1 and\nappqosdata.tcpsessiondata_2 even if the planner estimates correctly 1 row\nout of millions could potentially be selected? 
As you can see ~90% of the\ntime is spent on those 2 partitions even if they are not apparently\ndifferent from any of the others.\n\nI would appreciate any help with this issue.\n\nThank you,\nSvetlin Manavski\n",
"msg_date": "Fri, 15 Jul 2011 13:57:18 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unexpected seq scans when expected result is 1 row out of milions"
},
{
"msg_contents": "Svetlin Manavski <[email protected]> writes:\n> The question is: why do we get a seq scan on appqosdata.tcpsessiondata_1 and\n> appqosdata.tcpsessiondata_2 even if the planner estimates correctly 1 row\n> out of millions could potentially be selected? As you can see ~90% of the\n> time is spent on those 2 partitions even if they are not apparently\n> different from any of the others.\n\nWell, there must be *something* different about them. Are you sure\nthey've got the same indexes as the others? It would be useful to see\npsql's \\d report for those partitions, as well as for one of the\npartitions that's behaving as expected. You might also compare the\nEXPLAIN results for doing the query on just one child table between\nthe normal and misbehaving partitions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Jul 2011 13:58:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected seq scans when expected result is 1 row out of milions"
}
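A sketch of the checks Tom suggests, reusing the partition names and the range predicate from the plan above (comparing a misbehaving child such as tcpsessiondata_1 with one that uses its index, such as tcpsessiondata_3):

    \d appqosdata.tcpsessiondata_1
    \d appqosdata.tcpsessiondata_3

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM appqosdata.tcpsessiondata_1
    WHERE detectorid >= 0
      AND createdtime >= 1297266601368086000::bigint
      AND createdtime <  1297270202368086000::bigint;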
] |
[
{
"msg_contents": "Hi,\n\nIs BBU still needed with SSD? \n\nSSD has its own cache. And in certain models such as Intel 320 that cache is backed by capacitors. So in a sense that cache acts as a BBU that's backed by capacitors instead of batteries. \n\nIn this case is BBU still needed? If I put 2 SSD in software RAID 1, would that be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?\n\nThanks.\n\nAndy\n",
"msg_date": "Sun, 17 Jul 2011 18:43:19 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "BBU still needed with SSD?"
},
{
"msg_contents": "On 18/07/2011 9:43 AM, Andy wrote:\n> Hi,\n>\n> Is BBU still needed with SSD?\nYou *need* an SSD with a supercapacitor or on-board battery backup for \nits cache. Otherwise you *will* lose data.\n\nConsumer SSDs are like a hard disk attached to a RAID controller with \nwrite-back caching enabled and no BBU. In other words: designed to eat \nyour data.\n\n> In this case is BBU still needed? If I put 2 SSD in software RAID 1, would that be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?\n>\nYou don't need write-back caching for fsync() performance if your SSDs \nhave big enough caches. I don't know enough to say whether there are \nother benefits to having them on a BBU HW raid controller or whether SW \nRAID is fine.\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Mon, 18 Jul 2011 10:30:20 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "On 2011-07-18 03:43, Andy wrote:\n> Hi,\n>\n> Is BBU still needed with SSD?\n>\n> SSD has its own cache. And in certain models such as Intel 320 that cache is backed by capacitors. So in a sense that cache acts as a BBU that's backed by capacitors instead of batteries.\n>\n> In this case is BBU still needed? If I put 2 SSD\n+with supercap?\n> in software RAID 1, would that be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?\nThe biggest drawback of 2 SSD's with supercap in hardware raid 1, is \nthat if they are both new and of the same model/firmware, they'd \nprobably reach the end of their write cycles at the same time, thereby \nfailing simultaneously. You'd have to start with two SSD's with \ndifferent remaining life left in the software raid setup.\n\nregards,\nYeb\n\n\n",
"msg_date": "Mon, 18 Jul 2011 09:56:27 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "On Sun, Jul 17, 2011 at 7:30 PM, Craig Ringer\n<[email protected]> wrote:\n> On 18/07/2011 9:43 AM, Andy wrote:\n>> Is BBU still needed with SSD?\n>\n> You *need* an SSD with a supercapacitor or on-board battery backup for its\n> cache. Otherwise you *will* lose data.\n>\n> Consumer SSDs are like a hard disk attached to a RAID controller with\n> write-back caching enabled and no BBU. In other words: designed to eat your\n> data.\n\nNo you don't. Greg Smith pulled the power on a Intel 320 series drive\nwithout suffering any data loss thanks to the 6 regular old caps it\nhas. Look for his post in a long thread titled \"Intel SSDs that may\nnot suck\".\n\n>> In this case is BBU still needed? If I put 2 SSD in software RAID 1, would\n>> that be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and\n>> cons?\n\nWhat will perform better will vary greatly depending on the exact\nSSDs, rotating disks, RAID BBU controller and application. But\ncertainly a couple of Intel 320s in RAID1 seem to be an inexpensive\nway of getting very good performance while maintaining reliability.\n\n-Dave\n",
"msg_date": "Mon, 18 Jul 2011 16:37:13 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "Andy wrote:\n> SSD has its own cache. And in certain models such as Intel 320 that cache is backed by capacitors. So in a sense that cache acts as a BBU that's backed by capacitors instead of batteries. \n> \n\nTests I did on the 320 series says it works fine: \nhttp://archives.postgresql.org/message-id/[email protected]\n\nAnd there's a larger discussion of this topic at \nhttp://blog.2ndquadrant.com/en/2011/04/intel-ssd-now-off-the-sherr-sh.html \nthat answers this question in a bit more detail.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Mon, 18 Jul 2011 20:39:57 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "\n\n--- On Mon, 7/18/11, David Rees <[email protected]> wrote:\n\n> >> In this case is BBU still needed? If I put 2 SSD\n> in software RAID 1, would\n> >> that be any slower than 2 SSD in HW RAID 1 with\n> BBU? What are the pros and\n> >> cons?\n> \n> What will perform better will vary greatly depending on the\n> exact\n> SSDs, rotating disks, RAID BBU controller and\n> application. But\n> certainly a couple of Intel 320s in RAID1 seem to be an\n> inexpensive\n> way of getting very good performance while maintaining\n> reliability.\n\nI'm not comparing SSD in SW RAID with rotating disks in HW RAID with BBU though. I'm just comparing SSDs with or without BBU. I'm going to get a couple of Intel 320s, just want to know if BBU makes sense for them.\n",
"msg_date": "Mon, 18 Jul 2011 18:33:50 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "Andy wrote:\n> \n> \n> --- On Mon, 7/18/11, David Rees <[email protected]> wrote:\n> \n> > >> In this case is BBU still needed? If I put 2 SSD\n> > in software RAID 1, would\n> > >> that be any slower than 2 SSD in HW RAID 1 with\n> > BBU? What are the pros and\n> > >> cons?\n> >\n> > What will perform better will vary greatly depending on the\n> > exact\n> > SSDs, rotating disks, RAID BBU controller and\n> > application.? But\n> > certainly a couple of Intel 320s in RAID1 seem to be an\n> > inexpensive\n> > way of getting very good performance while maintaining\n> > reliability.\n> \n> I'm not comparing SSD in SW RAID with rotating disks in HW RAID with\n> BBU though. I'm just comparing SSDs with or without BBU. I'm going to\n> get a couple of Intel 320s, just want to know if BBU makes sense for\n> them.\n\nYes, it certainly does, even if you have a RAID BBU.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 18 Jul 2011 21:59:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "\n> > I'm not comparing SSD in SW RAID with rotating disks\n> in HW RAID with\n> > BBU though. I'm just comparing SSDs with or without\n> BBU. I'm going to\n> > get a couple of Intel 320s, just want to know if BBU\n> makes sense for\n> > them.\n> \n> Yes, it certainly does, even if you have a RAID BBU.\n\n\"even if you have a RAID BBU\"? Can you elaborate?\n\nI'm talking about after I get 2 Intel 320s, should I spend the extra money on a RAID BBU? Adding RAID BBU in this case wouldn't improve reliability, but does it improve performance? If so, how much improvement can it bring?\n",
"msg_date": "Mon, 18 Jul 2011 20:56:38 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "* Yeb Havinga:\n\n> The biggest drawback of 2 SSD's with supercap in hardware raid 1, is\n> that if they are both new and of the same model/firmware, they'd\n> probably reach the end of their write cycles at the same time, thereby\n> failing simultaneously.\n\nI thought so too, but I've got two Intel 320s (I suppose, the report\ndevice model is \"SSDSA2CT040G3\") in a RAID 1 configuration, and after\nabout a month of testing, one is down to 89 on the media wearout\nindicator, and the other is still at 96. Both devices are\ndeteriorating, but one at a significantly faster rate.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 19 Jul 2011 07:56:54 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "On 07/18/2011 11:56 PM, Andy wrote:\n> I'm talking about after I get 2 Intel 320s, should I spend the extra \n> money on a RAID BBU? Adding RAID BBU in this case wouldn't improve \n> reliability, but does it improve performance? If so, how much \n> improvement can it bring?\n\nIt won't improve performance enough that I would bother. The main \nbenefit of adding a RAID with BBU to traditional disks is that you can \ncommit much, much faster to the card RAM than the disks can spin. You \ncan go from 100 commits/second to 10,000 commits/second that way (in \ntheory--actually getting >2000 at the database level is harder).\n\nSince the Intel 320 drives can easily hit 2000 to 4000 commits/second on \ntheir own, using the cache that's built-in to the drive, the advantage \nof adding a RAID card on top of that is pretty minimal. Adding a RAID \ncache will help some, because that layer will be faster than the SSD at \nabsorbing writes, and putting another cache layer into a system always \nhelps with improving burst performance. But you'd probably be better \noff using the same money to add more RAM, or more/bigger SSD drives. \nThe fundamental thing that RAID BBU units do--speed up commits--you will \nonly see minimal benefit from with these SSDs.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Tue, 19 Jul 2011 06:19:37 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
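One rough way to see the commit ceiling Greg refers to is a short write-heavy pgbench run against a throwaway database (the name testdb, the scale and the client counts below are arbitrary):

    pgbench -i -s 10 testdb             # create a small test database
    pgbench -c 16 -j 4 -T 60 -N testdb  # 60 seconds of short write transactions
    # with fsync = on and synchronous_commit = on, the reported tps is close to
    # the commits/second the WAL device can sustain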
{
"msg_contents": "On 2011-07-19 09:56, Florian Weimer wrote:\n> * Yeb Havinga:\n>\n>> The biggest drawback of 2 SSD's with supercap in hardware raid 1, is\n>> that if they are both new and of the same model/firmware, they'd\n>> probably reach the end of their write cycles at the same time, thereby\n>> failing simultaneously.\n> I thought so too, but I've got two Intel 320s (I suppose, the report\n> device model is \"SSDSA2CT040G3\") in a RAID 1 configuration, and after\n> about a month of testing, one is down to 89 on the media wearout\n> indicator, and the other is still at 96. Both devices are\n> deteriorating, but one at a significantly faster rate.\nThat's great news if this turns out to be generally true. Is it on mdadm \nsoftware raid?\n\nI searched a bit in the mdadm manual for reasons this can be the case. \nIt isn't the occasional check (echo check > \n/sys/block/md0/md/sync_action) since that seems to do two reads and \ncompare. Another idea was that the layout of the mirror might not be \ndifferent, but the manual says that the --layout configuration directive \nis only for RAID 5,6 and 10, but not RAID 1. Then my eye caught \n--write-behind, the maximum number of outstanding writes and it has a \nnon-zero default value, but is only done if a drive is marked write-mostly.\n\nMaybe it is caused by the initial build of the array? But then a 7% \ndifference seems like an awful lot.\n\nIt would be interesting to see if the drives also show total xyz \nwritten, and if that differs a lot too.\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Tue, 19 Jul 2011 12:35:37 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
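For reference, a few commands that expose the md details discussed here (md0 is an assumed device name):

    cat /proc/mdstat                    # write-mostly members are marked with (W)
    mdadm --detail /dev/md0             # layout, state and per-member roles

    # the consistency check mentioned above, plus its result counter
    echo check > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt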
{
"msg_contents": "* Yeb Havinga:\n\n> On 2011-07-19 09:56, Florian Weimer wrote:\n>> * Yeb Havinga:\n>>\n>>> The biggest drawback of 2 SSD's with supercap in hardware raid 1, is\n>>> that if they are both new and of the same model/firmware, they'd\n>>> probably reach the end of their write cycles at the same time, thereby\n>>> failing simultaneously.\n>> I thought so too, but I've got two Intel 320s (I suppose, the report\n>> device model is \"SSDSA2CT040G3\") in a RAID 1 configuration, and after\n>> about a month of testing, one is down to 89 on the media wearout\n>> indicator, and the other is still at 96. Both devices are\n>> deteriorating, but one at a significantly faster rate.\n> That's great news if this turns out to be generally true. Is it on\n> mdadm software raid?\n\nYes, it is.\n\nIt's a mixed blessing because judging by the values, one of the drives\nwears down pretty quickly.\n\n> Maybe it is caused by the initial build of the array? But then a 7%\n> difference seems like an awful lot.\n\nBoth drives a supposedly fresh from the factory, and they started with\nthe wearout indicator at 100. The initial build should write just\nzeros, and I would expect the drive firmware to recognize that.\n\nI've got a second system against which I could run the same test. I\nwonder if it is reproducible.\n\n> It would be interesting to see if the drives also show total xyz\n> written, and if that differs a lot too.\n\nDo you know how to check that with smartctl?\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 19 Jul 2011 10:47:07 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "On 2011-07-19 12:47, Florian Weimer wrote:\n>\n>> It would be interesting to see if the drives also show total xyz\n>> written, and if that differs a lot too.\n> Do you know how to check that with smartctl?\nsmartctl -a /dev/<your disk> should show all values. If it shows \nsomething that looks like garbage, it means that the database of \nsmartmontools doesn't have the correct information yet for these new \ndrives. I know that for the recently new OCZ vertex 2 and 3 SSDs you \nneed at least 5.40 or 5.41 and that's pretty new stuff. (I just happened \nto install Fedora 15 today and that has smartmontools 5.41, whereas e.g. \nScientific Linux 6 has 5.39).\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 19 Jul 2011 13:25:36 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "* Yeb Havinga:\n\n> On 2011-07-19 12:47, Florian Weimer wrote:\n>>\n>>> It would be interesting to see if the drives also show total xyz\n>>> written, and if that differs a lot too.\n>> Do you know how to check that with smartctl?\n\n> smartctl -a /dev/<your disk> should show all values. If it shows\n> something that looks like garbage, it means that the database of\n> smartmontools doesn't have the correct information yet for these new\n> drives. I know that for the recently new OCZ vertex 2 and 3 SSDs you\n> need at least 5.40 or 5.41 and that's pretty new stuff. (I just\n> happened to install Fedora 15 today and that has smartmontools 5.41,\n> whereas e.g. Scientific Linux 6 has 5.39).\n\nIs this \"Total_LBAs_Written\"? The values appear to be far too low:\n\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age Always - 188276\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age Always - 116800\n\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age Always - 189677\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age Always - 92509\n\nThe second set of numbers are from the drive which wears more quickly.\n\nThe read asymmetry is not unusual for RAID-1 configurations (depending\non the implementation; few do \"read both and compare\", as originally\nenvisioned, but prefer the primary block device instead). Reduced read\ntraffic could translate to increased fragmentation and wear if the drive\ndefragments on read. I don't know if the Intel 320s do this.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 19 Jul 2011 11:37:27 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "On 2011-07-19 13:37, Florian Weimer wrote:\n> Is this \"Total_LBAs_Written\"?\nI got the same name \"Total_LBAs_Written\" on an 5.39 smartmontools, which \nwas renamed to 241 Lifetime_Writes_GiB after upgrade to 5.42. Note that \nthis is smartmontools new interpretation of the values, which happen to \nmatch with the OCZ tools interpretation (241: SSD Lifetime writes from \nhost Number of bytes written to SSD: 448 G). So for the Intels \nit's probably also lifetime writes in GB but you'd have to check with an \nIntel smart values reader to be absolutely sure.\n> The values appear to be far too low:\n>\n> 241 Total_LBAs_Written 0x0032 100 100 000 Old_age Always - 188276\n> 242 Total_LBAs_Read 0x0032 100 100 000 Old_age Always - 116800\n>\n> 241 Total_LBAs_Written 0x0032 100 100 000 Old_age Always - 189677\n> 242 Total_LBAs_Read 0x0032 100 100 000 Old_age Always - 92509\nHmm that would mean 188TB written. Does that value seem right to your \nuse case? If you'd write 100MB/s sustained, it would take 22 days to \nreach 188TB.\n> The second set of numbers are from the drive which wears more quickly.\nIt's strange that there's such a large difference in lifetime left, when \nlifetime writes are so similar. Maybe there are more small md metadata \nupdates on the second disk, but without digging into md's internals it's \nimpossible to say anything constructive about it.\n\nOff-topic: new cool tool in smartmontools-5.4x: \n/usr/sbin/update-smart-drivedb :-)\n\n-- \n\nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 19 Jul 2011 14:16:07 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "Yeb Havinga wrote:\n> So for the Intels it's probably also lifetime writes in GB but you'd \n> have to check with an Intel smart values reader to be absolutely sure.\n\nWith my 320 series drive, the LBA units are pretty clearly 32MB each. \nWatch this:\n\nroot@toy:/ssd/data# smartctl --version\nsmartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)\n...\n\nroot@toy:/ssd/data# du -skh pg_xlog/\n4.2G pg_xlog/\n\nroot@toy:/ssd/data# smartctl -a /dev/sdg1 | grep LBAs\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age \nAlways - 18128\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age \nAlways - 10375\n\nroot@toy:/ssd/data# cat pg_xlog/* > /dev/null\n\nroot@toy:/ssd/data# smartctl -a /dev/sdg1 | grep LBAs\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age \nAlways - 18128\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age \nAlways - 10508\n\nThat's an increase of 133 after reading 4.2GB of data, which means makes \neach LBA turn out to be 32MB in size. Let's try to confirm that by \ndoing a write:\n\nroot@toy:/ssd/gsmith# smartctl -a /dev/sdg1 | grep LBAs\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age \nAlways - 18159\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age \nAlways - 10508\nroot@toy:/ssd/gsmith# dd if=/dev/zero of=test_file.0 bs=32M count=25 && sync\n25+0 records in\n25+0 records out\n838860800 bytes (839 MB) copied, 5.95257 s, 141 MB/s\nroot@toy:/ssd/gsmith# smartctl -a /dev/sdg1 | grep LBAs\n241 Total_LBAs_Written 0x0032 100 100 000 Old_age \nAlways - 18184\n242 Total_LBAs_Read 0x0032 100 100 000 Old_age \nAlways - 10508\n\n18184 - 18159 = 25; exactly the count I used in 32MB blocks.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Tue, 19 Jul 2011 10:29:43 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
},
{
"msg_contents": "Have you also created your partitions with a reasonably new fdisk (or\nequivalent) with -c -u as options?\n\nYour partitions should be starting somewhere at 2048 i guess (let the\nsw figure that out). The fast degradation of the one disk might\nindicate bad partitioning? (maybe recheck with a grml.iso or something\nalike http://www.grml.org/ )\nAlso, ... did you know that any unused space in the disk is being used\nas bad block 'replacement'? so just leave out 1-2 GB space at the end\nof your disk to make use of this 'feature'\n\notherwise, mdadm supports raid1 with more than 2 drives. I havent seen\nthis configuration much but it makes absolute sense on drives where\nyou expect failure. (i am not speaking spare, but really raid1 with >\n2 drives).\n\nI like this setup, with ssd drives it might be the solution to decay.\n\nregs,\nklaus\n",
"msg_date": "Thu, 21 Jul 2011 09:19:45 +0200",
"msg_from": "Klaus Ita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU still needed with SSD?"
}
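A sketch of the checks and the setup Klaus describes, with made-up device names (sda, sdb, sdc); the partitions are sized so that 1-2 GB at the end of each SSD stays unpartitioned:

    # list partitions in sectors; on a reasonably new fdisk the first
    # partition should start at sector 2048 (1 MiB aligned)
    fdisk -c -u -l /dev/sda

    # three-way software RAID1 across the aligned partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1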
] |
[
{
"msg_contents": "I have 2 small servers, one a fairly new server with a x3450 (4-core \nwith HT) cpu running at 2.67GHz and an older E5335 (4-core) cpu running \nat 2GHz.\n\nI have been quite surprised how the E5335 compares very closely to the \nx3450, but maybe I have tested it wrongly.\n\nhere's the CPUINFO:\nprocessor : 3\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5335 @ 2.00GHz\nstepping : 7\ncpu MHz : 1995.036\ncache size : 4096 KB\nphysical id : 3\nsiblings : 1\ncore id : 0\ncpu cores : 1\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu tsc msr pae cx8 apic mtrr cmov pat clflush acpi \nmmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni vmx ssse3 cx16 \nlahf_lm\nbogomips : 4989.65\nclflush size : 64\ncache_alignment : 64\naddress sizes : 36 bits physical, 48 bits virtual\npower management:\nOS: CentOS 64bit\nPostgres: 9.0.4 compiled\n\nprocessor : 7\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 30\nmodel name : Intel(R) Xeon(R) CPU X3450 @ 2.67GHz\nstepping : 5\ncpu MHz : 2660.099\ncache size : 8192 KB\nphysical id : 0\nsiblings : 8\ncore id : 3\ncpu cores : 4\napicid : 7\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx \nrdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx smx est \ntm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm [8]\nbogomips : 5319.92\nOS: CentOS 32bit\nPostgres: 9.0.4 compiled\n\n\nIn my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS on \nthe E5335. Can this make such a bit difference or should the perform \nfairly close to the same speed? Both servers have 8GB of RAM, and the \ndatabase I tested with is only 3.7GB.\n\nI'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2, and \nof course because of the cycle speed difference alone I would think the \nX3450 should beat the E5335.\n\n",
"msg_date": "Mon, 18 Jul 2011 13:48:20 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "cpu comparison"
},
{
"msg_contents": "On Mon, Jul 18, 2011 at 01:48:20PM -0600, M. D. wrote:\n> I have 2 small servers, one a fairly new server with a x3450 (4-core\n> with HT) cpu running at 2.67GHz and an older E5335 (4-core) cpu\n> running at 2GHz.\n> \n> I have been quite surprised how the E5335 compares very closely to\n> the x3450, but maybe I have tested it wrongly.\n> \n> here's the CPUINFO:\n> processor : 3\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 15\n> model name : Intel(R) Xeon(R) CPU E5335 @ 2.00GHz\n> stepping : 7\n> cpu MHz : 1995.036\n> cache size : 4096 KB\n> physical id : 3\n> siblings : 1\n> core id : 0\n> cpu cores : 1\n> fpu : yes\n> fpu_exception : yes\n> cpuid level : 10\n> wp : yes\n> flags : fpu tsc msr pae cx8 apic mtrr cmov pat clflush\n> acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni vmx\n> ssse3 cx16 lahf_lm\n> bogomips : 4989.65\n> clflush size : 64\n> cache_alignment : 64\n> address sizes : 36 bits physical, 48 bits virtual\n> power management:\n> OS: CentOS 64bit\n> Postgres: 9.0.4 compiled\n> \n> processor : 7\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 30\n> model name : Intel(R) Xeon(R) CPU X3450 @ 2.67GHz\n> stepping : 5\n> cpu MHz : 2660.099\n> cache size : 8192 KB\n> physical id : 0\n> siblings : 8\n> core id : 3\n> cpu cores : 4\n> apicid : 7\n> fdiv_bug : no\n> hlt_bug : no\n> f00f_bug : no\n> coma_bug : no\n> fpu : yes\n> fpu_exception : yes\n> cpuid level : 11\n> wp : yes\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\n> pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\n> pbe nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx\n> smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm [8]\n> bogomips : 5319.92\n> OS: CentOS 32bit\n> Postgres: 9.0.4 compiled\n> \n> \n> In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS\n> on the E5335. Can this make such a bit difference or should the\n> perform fairly close to the same speed? Both servers have 8GB of\n> RAM, and the database I tested with is only 3.7GB.\n> \n> I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,\n> and of course because of the cycle speed difference alone I would\n> think the X3450 should beat the E5335.\n> \n\nYes, you have basically shown that running two different tests give\ndifferent results -- or that an apple is not an orange. You need to\nonly vary 1 variable at a time for it to mean anything.\n\nRegards,\nKen\n",
"msg_date": "Mon, 18 Jul 2011 15:11:32 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "Dne 18.7.2011 22:11, [email protected] napsal(a):\n>> > In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS\n>> > on the E5335. Can this make such a bit difference or should the\n>> > perform fairly close to the same speed? Both servers have 8GB of\n>> > RAM, and the database I tested with is only 3.7GB.\n>> > \n>> > I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,\n>> > and of course because of the cycle speed difference alone I would\n>> > think the X3450 should beat the E5335.\n>> > \n> Yes, you have basically shown that running two different tests give\n> different results -- or that an apple is not an orange. You need to\n> only vary 1 variable at a time for it to mean anything.\n\nHe just run the same test on two different machines - I'm not sure\nwhat's wrong with it? Sure, it would be nice to compare 32bit to 32bit,\nbut the OP probably can't do that and wonders if this is the cause. Why\nis that comparing apples and oranges?\n\nAccording to http://www.cpubenchmark.net, the X3450 is about 2x as fast\nas E5335 (5,298 vs. 2,575), although this is just a synthetic score.\n\nI'm a bit confused by the E5335 cpuinfo output, because it says \"cpu\ncores : 1\" as I'd expect \"4\" here.\n\nI do recall hyperthreading generally was not recommended for a DB, not\nsure if that changed recently. A quick search revealed this post\n\nhttp://serverfault.com/questions/219791/hyperthreading-vs-sql-server-postgresql\n\nstating that since Nehalem CPUs (and X3450 is Nehalem) this should not\nbe a problem anymore. Not sure if it's true, I guess it's worth testing\nas it might slow down the X3450 box.\n\nOP: We need more details about the test's has run, without them we're\njust guessing. Have you collected some system stats (vmstat, iostat)\nduring the test?\n\nTomas\n",
"msg_date": "Mon, 18 Jul 2011 23:56:40 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "On 7/18/11 12:48 PM, M. D. wrote:\n> I have 2 small servers, one a fairly new server with a x3450 (4-core\n> with HT) cpu running at 2.67GHz and an older E5335 (4-core) cpu running\n> at 2GHz.\n> \n> I have been quite surprised how the E5335 compares very closely to the\n> x3450, but maybe I have tested it wrongly.\n\nWhat test? What were the results?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 18 Jul 2011 15:03:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "On Mon, Jul 18, 2011 at 11:56:40PM +0200, Tomas Vondra wrote:\n> Dne 18.7.2011 22:11, [email protected] napsal(a):\n> >> > In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS\n> >> > on the E5335. Can this make such a bit difference or should the\n> >> > perform fairly close to the same speed? Both servers have 8GB of\n> >> > RAM, and the database I tested with is only 3.7GB.\n> >> > \n> >> > I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,\n> >> > and of course because of the cycle speed difference alone I would\n> >> > think the X3450 should beat the E5335.\n> >> > \n> > Yes, you have basically shown that running two different tests give\n> > different results -- or that an apple is not an orange. You need to\n> > only vary 1 variable at a time for it to mean anything.\n> \n> He just run the same test on two different machines - I'm not sure\n> what's wrong with it? Sure, it would be nice to compare 32bit to 32bit,\n> but the OP probably can't do that and wonders if this is the cause. Why\n> is that comparing apples and oranges?\n> \nIt is only that 32 vs. 64 bit, compiler and other things can easily make\na factor of 2 change in the results. So it is not telling you much about\nthe processor differences, neccessarily.\n\nRegards,\nKen\n",
"msg_date": "Mon, 18 Jul 2011 18:19:36 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "M. D. wrote:\n> I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2, \n> and of course because of the cycle speed difference alone I would \n> think the X3450 should beat the E5335.\n\nTry comparing them with stream-scaling to see what happens:\n\nhttps://github.com/gregs1104/stream-scaling\n\nYou can't really test CPU performance in a simple way anymore; it varies \ndepending on the number of processes running at once. This test is the \nbest way I've found to show how that works. On a single thread, the \nX3450 may not be significantly better than the E5535. But what should \nhappen is that total speed keeps going up as you add more threads on the \nnewer system, while the old DDR2 model stays as the same basic total.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Mon, 18 Jul 2011 20:47:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "On Mon, Jul 18, 2011 at 6:47 PM, Greg Smith <[email protected]> wrote:\n> M. D. wrote:\n>>\n>> I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2, and\n>> of course because of the cycle speed difference alone I would think the\n>> X3450 should beat the E5335.\n>\n> Try comparing them with stream-scaling to see what happens:\n>\n> https://github.com/gregs1104/stream-scaling\n>\n> You can't really test CPU performance in a simple way anymore; it varies\n> depending on the number of processes running at once. This test is the best\n> way I've found to show how that works. On a single thread, the X3450 may\n> not be significantly better than the E5535. But what should happen is that\n> total speed keeps going up as you add more threads on the newer system,\n> while the old DDR2 model stays as the same basic total.\n\nBy way of example we have a server with dual 6 core opterons that runs\non 667MHz memory and it maxes out the stream test with 8 threads,\ngetting no faster as you add threads. OTOH, our 4x12 core opteron\nmachines with 1333MHz memory and like 8 different channels to it,\nscales right up to 40 or more threads running the stream test.\n",
"msg_date": "Mon, 18 Jul 2011 19:38:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "On 07/18/2011 03:56 PM, Tomas Vondra wrote:\n> Dne 18.7.2011 22:11,[email protected] napsal(a):\n>>>> In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS\n>>>> on the E5335. Can this make such a bit difference or should the\n>>>> perform fairly close to the same speed? Both servers have 8GB of\n>>>> RAM, and the database I tested with is only 3.7GB.\n>>>>\n>>>> I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,\n>>>> and of course because of the cycle speed difference alone I would\n>>>> think the X3450 should beat the E5335.\n>>>>\n>> Yes, you have basically shown that running two different tests give\n>> different results -- or that an apple is not an orange. You need to\n>> only vary 1 variable at a time for it to mean anything.\n> He just run the same test on two different machines - I'm not sure\n> what's wrong with it? Sure, it would be nice to compare 32bit to 32bit,\n> but the OP probably can't do that and wonders if this is the cause. Why\n> is that comparing apples and oranges?\n>\n> According tohttp://www.cpubenchmark.net, the X3450 is about 2x as fast\n> as E5335 (5,298 vs. 2,575), although this is just a synthetic score.\n>\n> I'm a bit confused by the E5335 cpuinfo output, because it says \"cpu\n> cores : 1\" as I'd expect \"4\" here.\n>\n> I do recall hyperthreading generally was not recommended for a DB, not\n> sure if that changed recently. A quick search revealed this post\n>\n> http://serverfault.com/questions/219791/hyperthreading-vs-sql-server-postgresql\n>\n> stating that since Nehalem CPUs (and X3450 is Nehalem) this should not\n> be a problem anymore. Not sure if it's true, I guess it's worth testing\n> as it might slow down the X3450 box.\n>\n> OP: We need more details about the test's has run, without them we're\n> just guessing. Have you collected some system stats (vmstat, iostat)\n> during the test?\n>\n> Tomas\n>\nThank you. That was exactly my reason for posting.\n\nI did some more serious testing, and it seems like what I was testing \nwith did not give my proper results at all, or maybe because I had not \ntweaked the config file.\nAfter more testing, I'm seeing the x3450 more than 2x faster as the \nE5335. This is just a simple test, but it's something that is run on a \ncontinuous basis in this application so that's what I wanted to test \nwith. Table item_change has around 2M rows.\n\nIf someone would, please, can you tell me if it would help me to \npartition the item_change table (it has a date column)? 
As far as I've \nseen, an application needs to change if a table is partitioned, right?\n\nHere's the query I ran:\nexplain analyse select item.item_id,item_plu.number,item.description,\n(select dept.name from dept where dept.dept_id = item.dept_id),\n(select subdept.name from subdept where subdept.subdept_id = \nitem.subdept_id),\n(select sum(on_hand) from item_change where item_change.item_id = \nitem.item_id),\n(select sum(on_order) from item_change where item_change.item_id = \nitem.item_id),\n(select sum(total_cost) from item_change where item_change.item_id = \nitem.item_id),\n(select price from item_price where item_price.item_id = item.item_id\nand item_price.zone_id = 'OUe1zXgADRnWemS1grOerQ' and \nitem_price.price_type = 0\nand item_price.size_name = item.sell_size)\nfrom item\njoin item_plu on item.item_id = item_plu.item_id and item_plu.seq_num = 0\nwhere item.inactive_on is null;\n\n\nE5335\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.27..56795323.05 rows=79821 width=95) (actual \ntime=0.270..35769.722 rows=72273 loops=1)\n Merge Cond: (item.item_id = item_plu.item_id)\n -> Index Scan using item_pkey on item (cost=0.00..9599.57 \nrows=72249 width=86) (actual time=0.011..216.709 rows=72273 loops=1)\n Filter: (inactive_on IS NULL)\n -> Index Scan using item_plu_pkey on item_plu (cost=0.00..5551.89 \nrows=79821 width=32) (actual time=0.013..226.435 rows=80114 loops=1)\n Index Cond: (item_plu.seq_num = 0)\n SubPlan 1\n -> Seq Scan on dept (cost=0.00..5.16 rows=1 width=8) (actual \ntime=0.003..0.007 rows=1 loops=72273)\n Filter: (dept_id = $0)\n SubPlan 2\n -> Index Scan using subdept_pkey on subdept (cost=0.00..5.27 \nrows=1 width=8) (actual time=0.009..0.011 rows=1 loops=72273)\n Index Cond: (subdept_id = $1)\n SubPlan 3\n -> Aggregate (cost=231.86..231.87 rows=1 width=6) (actual \ntime=0.152..0.153 rows=1 loops=72273)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..231.63 rows=91 width=6) (actual time=0.021..0.094 rows=28 \nloops=72273)\n Index Cond: (item_id = $2)\n SubPlan 4\n -> Aggregate (cost=231.86..231.87 rows=1 width=5) (actual \ntime=0.132..0.133 rows=1 loops=72273)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..231.63 rows=91 width=5) (actual time=0.021..0.076 rows=28 \nloops=72273)\n Index Cond: (item_id = $2)\n SubPlan 5\n -> Aggregate (cost=231.86..231.87 rows=1 width=8) (actual \ntime=0.133..0.134 rows=1 loops=72273)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..231.63 rows=91 width=8) (actual time=0.021..0.075 rows=28 \nloops=72273)\n Index Cond: (item_id = $2)\n SubPlan 6\n -> Index Scan using item_price_i3 on item_price (cost=0.00..5.29 \nrows=1 width=7) (actual time=0.015..0.017 rows=1 loops=72273)\n Index Cond: (item_id = $2)\n Filter: ((zone_id = 'OUe1zXgADRnWemS1grOerQ'::bpchar) AND \n(price_type = 0) AND ((size_name)::text = ($3)::text))\n Total runtime: 35871.253 ms\n(29 rows)\n\n\nX3450\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.15..57610807.07 rows=80066 width=95) (actual \ntime=0.141..14680.486 rows=72247 loops=1)\n Merge Cond: (item.item_id = item_plu.item_id)\n -> Index Scan using item_pkey on item (cost=0.00..10446.59 \nrows=72181 width=86) (actual time=0.005..79.796 rows=72247 loops=1)\n Filter: (inactive_on IS NULL)\n 
-> Index Scan using item_plu_pkey on item_plu (cost=0.00..5456.43 \nrows=80066 width=32) (actual time=0.012..75.303 rows=80085 loops=1)\n Index Cond: (item_plu.seq_num = 0)\n SubPlan 1\n -> Seq Scan on dept (cost=0.00..5.16 rows=1 width=8) (actual \ntime=0.001..0.003 rows=1 loops=72247)\n Filter: (dept_id = $0)\n SubPlan 2\n -> Index Scan using subdept_pkey on subdept (cost=0.00..5.27 \nrows=1 width=8) (actual time=0.007..0.007 rows=1 loops=72247)\n Index Cond: (subdept_id = $1)\n SubPlan 3\n -> Aggregate (cost=234.53..234.54 rows=1 width=6) (actual \ntime=0.060..0.060 rows=1 loops=72247)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..234.29 rows=92 width=6) (actual time=0.018..0.041 rows=28 \nloops=72247)\n Index Cond: (item_id = $2)\n SubPlan 4\n -> Aggregate (cost=234.53..234.54 rows=1 width=5) (actual \ntime=0.053..0.053 rows=1 loops=72247)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..234.29 rows=92 width=5) (actual time=0.018..0.034 rows=28 \nloops=72247)\n Index Cond: (item_id = $2)\n SubPlan 5\n -> Aggregate (cost=234.53..234.54 rows=1 width=8) (actual \ntime=0.053..0.053 rows=1 loops=72247)\n -> Index Scan using item_change_i2 on item_change \n(cost=0.00..234.29 rows=92 width=8) (actual time=0.018..0.034 rows=28 \nloops=72247)\n Index Cond: (item_id = $2)\n SubPlan 6\n -> Index Scan using item_price_i3 on item_price (cost=0.00..5.29 \nrows=1 width=7) (actual time=0.012..0.013 rows=1 loops=72247)\n Index Cond: (item_id = $2)\n Filter: ((zone_id = 'OUe1zXgADRnWemS1grOerQ'::bpchar) AND \n(price_type = 0) AND ((size_name)::text = ($3)::text))\n Total runtime: 14695.559 ms\n(29 rows)\n",
"msg_date": "Mon, 18 Jul 2011 21:25:25 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cpu comparison"
},
{
"msg_contents": "I'm just top posting this because this whole thread needs a reset before it\ngoes any farther.\n\nStart with a real description of these hosts - Number and types of disks,\nfilesystem configs, processors, memory, OS, etc. If your db is small enough\nto fit into RAM, please show us the db config you are using which ensures\nthat you are making best use of available RAM, etc.\n\nThen we need to know what your test looks like - showing us a query and an\nexplain plan without any info about the table structure, indexes, number of\nrows, and table usage patterns doesn't provide anywhere near enough info to\ndiagnose inefficiency.\n\nThere are several documents linked right from the page for this mailing list\nthat describe exactly how to go about providing enough info to get help from\nthe list. Please read through them, then update us with the necessary\ninformation, and I'm sure we'll be able to offer you some insight into what\nis going on.\n\nAnd for the record, your app probably doesn't need to change to use table\npartitioning, at least when selecting data. Depending upon how data is\nloaded, you may need to change how you do inserts. But it is impossible to\ncomment on whether partitioning might help you without knowing table\nstructure, value distributions, query patterns, and number of rows in the\ntable. If you are always selecting over the whole range of data,\npartitioning isn't likely to buy you anything, for example.\n\nI'm just top posting this because this whole thread needs a reset before it goes any farther.Start with a real description of these hosts - Number and types of disks, filesystem configs, processors, memory, OS, etc. If your db is small enough to fit into RAM, please show us the db config you are using which ensures that you are making best use of available RAM, etc.\nThen we need to know what your test looks like - showing us a query and an explain plan without any info about the table structure, indexes, number of rows, and table usage patterns doesn't provide anywhere near enough info to diagnose inefficiency.\nThere are several documents linked right from the page for this mailing list that describe exactly how to go about providing enough info to get help from the list. Please read through them, then update us with the necessary information, and I'm sure we'll be able to offer you some insight into what is going on.\nAnd for the record, your app probably doesn't need to change to use table partitioning, at least when selecting data. Depending upon how data is loaded, you may need to change how you do inserts. But it is impossible to comment on whether partitioning might help you without knowing table structure, value distributions, query patterns, and number of rows in the table. If you are always selecting over the whole range of data, partitioning isn't likely to buy you anything, for example.",
"msg_date": "Mon, 18 Jul 2011 22:05:51 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cpu comparison"
}
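A minimal sketch of what the date-based partitioning discussed above could look like on 8.4 for a table like item_change. The child table name, the date column name (change_date) and the date range used here are assumptions for illustration, not taken from the actual schema:

CREATE TABLE item_change_2011_07 (
    CHECK (change_date >= DATE '2011-07-01' AND change_date < DATE '2011-08-01')
) INHERITS (item_change);
CREATE INDEX item_change_2011_07_i2 ON item_change_2011_07 (item_id);

CREATE OR REPLACE FUNCTION item_change_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.change_date >= DATE '2011-07-01'
       AND NEW.change_date < DATE '2011-08-01' THEN
        INSERT INTO item_change_2011_07 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.change_date;
    END IF;
    RETURN NULL;  -- do not also store the row in the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_change_insert_trg
    BEFORE INSERT ON item_change
    FOR EACH ROW EXECUTE PROCEDURE item_change_insert_router();

SELECTs against the parent table keep working unchanged (with constraint_exclusion = partition the planner skips children ruled out by their CHECK constraints); only INSERTs change behaviour, going through a trigger like the one above or directly into the right child. As noted in the thread, the queries shown aggregate item_change by item_id rather than by date, so date partitioning would not necessarily make them faster.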
] |
[
{
"msg_contents": "As one of the people recommending early investigation of Intel's recent \n320 series drives, I've been following the news around them too. It \nlooks like there's one serious firmware bug that shows up on these so \nfar, what's being called the \"8MB bug\". Basically, under some \nconditions, the drive comes back from a restart believing it's only 8MB \nin size. Very bad.\n\nIn the discussion forum where this been highlighted: \nhttp://communities.intel.com/message/131328 the largest data point I \nnoticed said that a large deployment has had 7 out of their 600 320 \ndrive deployments go bad in this way, so 1.2%. Another report says 13 \nout of their 64 systems are dead now. You did never even consider \ndeploying this drive unless it was with RAID-1 and good real-time \nbackups, right?\n\nHopefully everyone knows by now that the V1 of any hardware should sit \nin QA for a while to shake out issues like this before you move \nproduction onto it, and this one seems to be the big bug in this \ndesign. It seems like a firmware bug that an update will fix, not a \nmore serious design issue, so I'm not too concerned about it yet. \nhttp://communities.intel.com/thread/23217 is where the official \ncommentary on the resolution is being posted to.\n\nP.S. they have increased the warranty on these drives to 5 years, before \nthis all happened, so Intel has made a large bet on this model working \nas advertised: \nhttp://newsroom.intel.com/community/intel_newsroom/blog/2011/05/19/chip-shot-new-5-year-limited-warranty-on-intel-ssd-320 \nWe just need to figure out how/where they're drawing the \"enterprise \nusage levels\" line at.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n",
"msg_date": "Thu, 21 Jul 2011 07:16:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel 320 series drives firmware bug"
}
] |
[
{
"msg_contents": "next question.\n\nI have a product table with a 'category\" column that I want to\nmaintain in a separate table.\n\nCREATE TABLE products (\n product_id INTEGER DEFAULT\nnextval('product_id_seq'::regclass) NOT NULL,\n name VARCHAR(60) NOT NULL,\n category SMALLINT NOT NULL,\n CONSTRAINT product_id PRIMARY KEY (product_id)\n);\nCREATE TABLE products (\n category_id INTEGER DEFAULT\nnextval('category_id_seq'::regclass) NOT NULL,\n name VARCHAR(20) NOT NULL,\n CONSTRAINT category_id PRIMARY KEY (category_id)\n);\n\nEvery product must have a category,\nSince many (but not all) products have the same category I only want 1\ntable with unique categories.\n\nTo do the insert into the products table I need to retrieve or insert\nthe category_id in categories first.\nWhich means more code on my client app (if ($cat_id =\nget_cat_id($cat)) }else { $cat_id = insert_cat($cat)})\n\nCan I write a BEFORE ROW trigger for the products table to runs on\nINSERT or UPDATE to\n 1. insert a new category & return the new category_id OR\n 2. return the existing category_id for the (to be inserted row)\n\nAlan\nI donproducts.category to be a foreign key that points to the uniqie\ncategory_id id in the want to keep I need to do get the cate\n",
"msg_date": "Sat, 23 Jul 2011 09:23:48 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert"
},
{
"msg_contents": "I think I figured it out myself.\nIf anyone sees issues with this (simple) approach, please let me know.\n\nI changed my table definitions to this:\n\nCREATE SEQUENCE public.product_id_seq\nCREATE TABLE products (\n product_id INTEGER DEFAULT nextval('product_id_seq'::regclass) NOT\nNULL,\n name VARCHAR(60) NOT NULL,\n category SMALLINT NOT NULL,\n CONSTRAINT product_id PRIMARY KEY (product_id)\n);\nCREATE SEQUENCE public.category_id_seq\nCREATE TABLE category (\n category_id INTEGER DEFAULT nextval('category_id_seq'::regclass)\nNOT NULL,\n name VARCHAR(20) NOT NULL,\n CONSTRAINT category_id PRIMARY KEY (category_id)\n);\nALTER TABLE products ADD CONSTRAINT category_products_fk\n FOREIGN KEY (category)\n REFERENCES category (category_id)\n ON DELETE NO ACTION ON UPDATE CASCADE\n;\n\nThen created this function:\n\nCREATE OR REPLACE FUNCTION getid(_table text,_pk text,_name text)\nRETURNS integer AS $$\nDECLARE _id integer;\nBEGIN\n EXECUTE 'SELECT '\n || _pk\n || ' FROM '\n || _table::regclass\n || ' WHERE name'\n || ' = '\n || quote_literal(_name)\n INTO _id;\n\n IF _id > 0 THEN\n return _id;\n ELSE\n EXECUTE 'INSERT INTO '\n || _table\n || ' VALUES (DEFAULT,' || quote_literal(_name) || ')'\n || ' RETURNING ' || _pk\n INTO _id;\n return _id;\n END IF;\nEND;\n$$\n LANGUAGE 'plpgsql' VOLATILE;\n\nNow I can just insert into the products table via:\n\nINSERT INTO products VALUES(DEFAULT,'Postgresql for\nDummies',getid('category','category_id','books'));\n\nFor example:\n\ntestdb=# select * from products;\n product_id | name | category\n------------+------+----------\n(0 rows)\n\niims_test=# select * from category;\n category_id | name\n-------------+------\n(0 rows)\n\ntestdb=# insert into products values(DEFAULT,'Postgresql for\nDummies',getid('category','category_id','books'));\nINSERT 0 1\n\ntestdb=# select * from\ncategory;\n category_id | name\n-------------+-------\n 1 | books\n\ntestdb=# select * from products;\n product_id | name | category\n------------+------------------------+----------\n 1 | Postgresql for Dummies | 1\n\nUpdating the category_id in category table are also cascaded to the\nproduct table.\n\ntestdb=# UPDATE category SET category_id = 2 WHERE category_id = 1;\nUPDATE 1\n\ntestdb=# SELECT * FROM products;\n product_id | name | category\n------------+------------------------+----------\n 1 | Postgresql for Dummies | 2\n\n\nAlan\n",
"msg_date": "Mon, 25 Jul 2011 07:34:45 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert"
},
{
"msg_contents": "alan <[email protected]> wrote:\n \n> Can I write a BEFORE ROW trigger for the products table to runs\n> on INSERT or UPDATE to\n> 1. insert a new category & return the new category_id OR\n> 2. return the existing category_id for the (to be inserted row)\n \nWhat would you be using to match an existing category? If this\naccurately identifies a category, why not use it for the key to the\ncategory table, rather than generating a synthetic key value?\n \n-Kevin\n",
"msg_date": "Fri, 29 Jul 2011 15:14:37 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "On 30/07/11 08:14, Kevin Grittner wrote:\n> alan<[email protected]> wrote:\n>\n>> Can I write a BEFORE ROW trigger for the products table to runs\n>> on INSERT or UPDATE to\n>> 1. insert a new category& return the new category_id OR\n>> 2. return the existing category_id for the (to be inserted row)\n>\n> What would you be using to match an existing category? If this\n> accurately identifies a category, why not use it for the key to the\n> category table, rather than generating a synthetic key value?\n>\n> -Kevin\n>\nHi Alan,\n\nThis is the way I would define the tables, I think it conforms tom your \nrequirements, and the definitions look clearer.\n\nI have the convention that the id of the table itself is not prefixed \nwith the table name, but references to the id field of other tables are \n(e.g. category_id). This is not something you need to follow, but it \nhelps to clearly identify what is a foreign key, and what is the current \ntable's id! Likewise, I think it is simpler to make the table names \nsingular, but this again is a bit arbitrary.\n\nI guess, even if you prefer my conventions, it is more important to \nfollow the standards of the existing database!\n\n\nCREATE TABLE product\n(\n id SERIAL PRIMARY KEY,\n category_id int REFERENCES category(id),\n name VARCHAR(60) NOT NULL\n);\n\nCREATE TABLE category\n(\n id SERIAL PRIMARY KEY,\n name VARCHAR(20) UNIQUE NOT NULL\n);\n\nThough for the primary key of the category table, it might be better to \nexplicitly assign the key, then you have more control of the numbers used.\n\nI would be a bit wary of automatically inserting a new category, when \nthe given category is not already there, you could end up with several \nvariations of spelling for the same category! I once saw a system with \nabout 20 variations of spelling, and number of spaces between words, for \nthe name of the same company!\n\nPossibly your client GUI application could have a drop down list of \navailable categories, and provision to enter new ones, but then this \nmight be outside your control.\n\n\nCheers,\nGAvin\n",
"msg_date": "Sat, 30 Jul 2011 17:24:21 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "Hello.\n\nPlease note that in multitasking environment you may have problems with \nyour code. Two connections may check if \"a\" is available and if not (and \nboth got empty \"select\" result), try to insert. One will succeed, \nanother will fail if you have a unique constraint on category name (and \nyou'd better have one).\n\nPlease note that select for update won't help you much, since this is \nnew record you are looking for, and select don't return (and lock) it. I \nam using \"lock table <tableName> in SHARE ROW EXCLUSIVE mode\" in this case.\n\nBut then, if you have multiple lookup dictinaries, you need to ensure \nstrict order of locking or you will be getting deadlocks. As for me, I \ndid create a special application-side class to retrieve such values. If \nI can't find a value in main connection with simple select, I open new \nconnection, perform table lock, check if value is in there. If it is \nnot, add the value and commit. This may produce orphaned dictionary \nentries (if dictionary entry is committed an main transaction is rolled \nback), but this is usually OK for dictionaries. At the same time I don't \nintroduce hard locks into main transaction and don't have to worry about \ndeadlocks.\n\nBest regards, Vitalii Tymchyshyn\n\n",
"msg_date": "Mon, 01 Aug 2011 11:52:17 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
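A rough sketch of the pattern Vitalii describes, using the category table definition Alan posted earlier in the thread (the constraint name and the 'books' value are just examples). It runs in its own short transaction, separate from the main one:

ALTER TABLE category ADD CONSTRAINT category_name_key UNIQUE (name);

BEGIN;
LOCK TABLE category IN SHARE ROW EXCLUSIVE MODE;
-- no other session can insert into category until this transaction ends,
-- so a check followed by an insert cannot race
SELECT category_id FROM category WHERE name = 'books';
-- only if the SELECT returned no row:
INSERT INTO category (name) VALUES ('books') RETURNING category_id;
COMMIT;

Reads of the category table are not blocked by SHARE ROW EXCLUSIVE, only concurrent inserts and updates, which is what makes the check-then-insert safe here.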
{
"msg_contents": "Vitalii Tymchyshyn <[email protected]> wrote:\n \n> Please note that in multitasking environment you may have problems\n> with your code. Two connections may check if \"a\" is available and\n> if not (and both got empty \"select\" result), try to insert. One\n> will succeed, another will fail if you have a unique constraint on\n> category name (and you'd better have one).\n> \n> Please note that select for update won't help you much, since this\n> is new record you are looking for, and select don't return (and\n> lock) it. I am using \"lock table <tableName> in SHARE ROW\n> EXCLUSIVE mode\" in this case.\n> \n> But then, if you have multiple lookup dictinaries, you need to\n> ensure strict order of locking or you will be getting deadlocks.\n> As for me, I did create a special application-side class to\n> retrieve such values. If I can't find a value in main connection\n> with simple select, I open new connection, perform table lock,\n> check if value is in there. If it is not, add the value and\n> commit. This may produce orphaned dictionary entries (if\n> dictionary entry is committed an main transaction is rolled back),\n> but this is usually OK for dictionaries. At the same time I don't\n> introduce hard locks into main transaction and don't have to worry\n> about deadlocks.\n \nIt sounds like you might want to check out the new \"truly\nserializable\" transactions in version 9.1. If you can download the\nlatest beta version of it and test with\ndefault_transaction_isolation = 'serializable' I would be interested\nto hear your results. Note that you can't have deadlocks, but you\ncan have other types of serialization failures, so your software\nneeds to be prepared to start a transaction over from the beginning\nwhen the SQLSTATE of a failure is '40001'.\n \nThe Wiki page which was used to document and organize the work is:\n \nhttp://wiki.postgresql.org/wiki/Serializable\n \nThis is in a little bit of a funny state because not all of the\nwording that was appropriate while the feature was under development\n(e.g., future tense verbs) has been changed to something more\nappropriate for a finished feature, but it should cover the\ntheoretical ground pretty well. An overlapping document which was\ninitially based on parts of the Wiki page and has received more\nrecent attention is the README-SSI file here:\n \nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/storage/lmgr/README-SSI;hb=master\n \nSome examples geared toward programmers and DBAs is at this Wiki\npage:\n \nhttp://wiki.postgresql.org/wiki/SSI\n \nIt could use a couple more examples and a bit of language cleanup,\nbut what is there is fairly sound. The largest omission is that we\nneed to show more explicitly that serialization failures can occur\nat times other than COMMIT. (I got a little carried away trying to\nshow that there was no blocking and that the \"first committer\nwins\".)\n \n-Kevin\n",
"msg_date": "Mon, 01 Aug 2011 09:47:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
}
] |
[
{
"msg_contents": "I have a problem with poor query plan.\n\nMy PostgreSQL is \"PostgreSQL 8.4.8, compiled by Visual C++ build 1400,\n32-bit\" installed by EnterpriseDB installer on Windows 7 32 bit.\n\nSteps to reproduce:\n\nStart with fresh installation and execute the following:\n\ndrop table if exists small;\ndrop table if exists large;\n\nCREATE TABLE small\n(\n id bigint,\n primary key(id)\n);\n\nCREATE TABLE large\n(\n id bigint,\n primary key(id)\n);\n\n--Insert 100000 rows into large\nCREATE or replace FUNCTION populate_large() RETURNS bigint AS $$\nDECLARE\n id1 bigint := 0;\nBEGIN\n LOOP\n insert into large(id) values(id1);\n id1 := id1 +1;\n if id1>100000 then\n exit;\n end if;\n END LOOP;\n return id1;\nEND\n$$ LANGUAGE plpgsql;\n\n--Insert 1000 rows into small\nCREATE or replace FUNCTION populate_small() RETURNS bigint AS $$\nDECLARE\n id1 bigint := 0;\nBEGIN\n LOOP\n insert into small(id) values(id1);\n id1 := id1 +1;\n if id1>1000 then\n exit;\n end if;\n END LOOP;\n return id1;\nEND\n$$ LANGUAGE plpgsql;\n\nselect populate_large(),populate_small();\nanalyze;\n\nThen execute\n\nexplain analyze insert into large(id) select id from small where id\nnot in(select id from large);\n\nIt gives\n\n\"Seq Scan on small (cost=1934.01..823278.28 rows=500 width=8) (actual\ntime=6263.588..6263.588 rows=0 loops=1)\"\n\" Filter: (NOT (SubPlan 1))\"\n\" SubPlan 1\"\n\" -> Materialize (cost=1934.01..3325.02 rows=100001 width=8)\n(actual time=0.007..3.012 rows=501 loops=1001)\"\n\" -> Seq Scan on large (cost=0.00..1443.01 rows=100001\nwidth=8) (actual time=0.010..5.810 rows=1001 loops=1)\"\n\"Total runtime: 6263.703 ms\"\n\nBut\n\nexplain analyze insert into large(id) select id from small where not\nexists (select id from large l where small.id=l.id);\n\nexeutes much faster:\n\n\"Merge Anti Join (cost=0.00..85.58 rows=1 width=8) (actual\ntime=15.793..15.793 rows=0 loops=1)\"\n\" Merge Cond: (small.id = l.id)\"\n\" -> Index Scan using small_pkey on small (cost=0.00..43.27\nrows=1001 width=8) (actual time=0.025..3.515 rows=1001 loops=1)\"\n\" -> Index Scan using large_pkey on large l (cost=0.00..3050.28\nrows=100001 width=8) (actual time=0.017..2.932 rows=1001 loops=1)\"\n\"Total runtime: 15.863 ms\"\n\nBoth queries are semantically the same.\n",
"msg_date": "Sun, 24 Jul 2011 18:06:40 +0400",
"msg_from": "=?KOI8-R?B?5M3J1NLJyiD3wdPJzNjF1w==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan"
},
{
"msg_contents": "=?KOI8-R?B?5M3J1NLJyiD3wdPJzNjF1w==?= <[email protected]> writes:\n> explain analyze insert into large(id) select id from small where id\n> not in(select id from large);\n> [ crummy plan ]\n> explain analyze insert into large(id) select id from small where not\n> exists (select id from large l where small.id=l.id);\n> [ better plan ]\n> Both queries are semantically the same.\n\nNo, they are not. NOT IN is hard to optimize because it has strange\nbehaviors with nulls in the data. Use the NOT EXISTS formulation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Jul 2011 12:02:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan "
},
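The NULL behaviour Tom mentions is easy to see directly; the second query returns no row because 2 <> NULL evaluates to NULL rather than true:

SELECT 1 WHERE 2 NOT IN (1, 3);     -- one row
SELECT 1 WHERE 2 NOT IN (1, NULL);  -- no rows

Because a single NULL in the subquery result can flip the whole NOT IN test to NULL, the planner cannot simply turn NOT IN into an anti-join the way it can for NOT EXISTS, which is why the NOT EXISTS form gets the fast Merge Anti Join plan.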
{
"msg_contents": "On 25/07/11 02:06, Дмитрий Васильев wrote:\n> I have a problem with poor query plan.\n>\n> My PostgreSQL is \"PostgreSQL 8.4.8, compiled by Visual C++ build 1400,\n> 32-bit\" installed by EnterpriseDB installer on Windows 7 32 bit.\n>\n> Steps to reproduce:\n>\n> Start with fresh installation and execute the following:\n>\n> drop table if exists small;\n> drop table if exists large;\n>\n> CREATE TABLE small\n> (\n> id bigint,\n> primary key(id)\n> );\n>\n> CREATE TABLE large\n> (\n> id bigint,\n> primary key(id)\n> );\n>\n> --Insert 100000 rows into large\n> CREATE or replace FUNCTION populate_large() RETURNS bigint AS $$\n> DECLARE\n> id1 bigint := 0;\n> BEGIN\n> LOOP\n> insert into large(id) values(id1);\n> id1 := id1 +1;\n> if id1>100000 then\n> exit;\n> end if;\n> END LOOP;\n> return id1;\n> END\n> $$ LANGUAGE plpgsql;\n>\n> --Insert 1000 rows into small\n> CREATE or replace FUNCTION populate_small() RETURNS bigint AS $$\n> DECLARE\n> id1 bigint := 0;\n> BEGIN\n> LOOP\n> insert into small(id) values(id1);\n> id1 := id1 +1;\n> if id1>1000 then\n> exit;\n> end if;\n> END LOOP;\n> return id1;\n> END\n> $$ LANGUAGE plpgsql;\n>\n> select populate_large(),populate_small();\n> analyze;\n>\n> Then execute\n>\n> explain analyze insert into large(id) select id from small where id\n> not in(select id from large);\n>\n> It gives\n>\n> \"Seq Scan on small (cost=1934.01..823278.28 rows=500 width=8) (actual\n> time=6263.588..6263.588 rows=0 loops=1)\"\n> \" Filter: (NOT (SubPlan 1))\"\n> \" SubPlan 1\"\n> \" -> Materialize (cost=1934.01..3325.02 rows=100001 width=8)\n> (actual time=0.007..3.012 rows=501 loops=1001)\"\n> \" -> Seq Scan on large (cost=0.00..1443.01 rows=100001\n> width=8) (actual time=0.010..5.810 rows=1001 loops=1)\"\n> \"Total runtime: 6263.703 ms\"\n>\n> But\n>\n> explain analyze insert into large(id) select id from small where not\n> exists (select id from large l where small.id=l.id);\n>\n> exeutes much faster:\n>\n> \"Merge Anti Join (cost=0.00..85.58 rows=1 width=8) (actual\n> time=15.793..15.793 rows=0 loops=1)\"\n> \" Merge Cond: (small.id = l.id)\"\n> \" -> Index Scan using small_pkey on small (cost=0.00..43.27\n> rows=1001 width=8) (actual time=0.025..3.515 rows=1001 loops=1)\"\n> \" -> Index Scan using large_pkey on large l (cost=0.00..3050.28\n> rows=100001 width=8) (actual time=0.017..2.932 rows=1001 loops=1)\"\n> \"Total runtime: 15.863 ms\"\n>\n> Both queries are semantically the same.\n>\nOut of interest, I ran your code on my existing 9.1beta3 installation.\n\nNotes\n(1) the second SELECT ran a faster than the first.\n(2) both plans are different to the ones you got\n\n$ psql\npsql (9.1beta3)\n[...]\ngavin=> explain analyze insert into large(id) select id from small where id\ngavin-> not in(select id from large);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Insert on large (cost=1543.01..1559.02 rows=500 width=8) (actual \ntime=51.090..51.090 rows=0 loops=1)\n -> Seq Scan on small (cost=1543.01..1559.02 rows=500 width=8) \n(actual time=51.087..51.087 rows=0 loops=1)\n Filter: (NOT (hashed SubPlan 1))\n SubPlan 1\n -> Seq Scan on large (cost=0.00..1443.01 rows=100001 \nwidth=8) (actual time=0.008..13.867 rows=100001 loops=1)\n Total runtime: 51.582 ms\n(6 rows)\n\ngavin=> explain analyze insert into large(id) select id from small where not\ngavin-> exists (select id from large l where small.id=l.id);\n QUERY 
PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Insert on large (cost=0.00..80.94 rows=1 width=8) (actual \ntime=0.907..0.907 rows=0 loops=1)\n -> Merge Anti Join (cost=0.00..80.94 rows=1 width=8) (actual \ntime=0.906..0.906 rows=0 loops=1)\n Merge Cond: (small.id = l.id)\n -> Index Scan using small_pkey on small (cost=0.00..40.61 \nrows=1001 width=8) (actual time=0.010..0.225 rows=1001 loops=1)\n -> Index Scan using large_pkey on large l \n(cost=0.00..2800.12 rows=100001 width=8) (actual time=0.006..0.235 \nrows=1001 loops=1)\n Total runtime: 1.000 ms\n(6 rows)\n\npostgresql.conf parameters changed:\nshared_buffers = 2GB\ntemp_buffers = 64MB\nwork_mem = 16MB\nmaintenance_work_mem = 512MB\nmax_stack_depth = 6MB\ncheckpoint_segments = 8\ncpu_index_tuple_cost = 0.0025\ncpu_operator_cost = 0.001\neffective_cache_size = 2GB\n\n\n\n",
"msg_date": "Mon, 25 Jul 2011 08:14:43 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan"
}
] |
[
{
"msg_contents": "Dear all,\n\nI am using Postgres-8.4.2 on Windows system.\nI have 2 databases in my postgres database ( globedatabase (21GB), \nurldatabase).\n\nI restore globedatabase from a .sql file on yesterday morning.I insert \nsome new data in that database.\nIn the evening, by mistake I issued a *drop database globedatabase* command.\n\nToday morning, I restore again the same database from backup (.sql) file.\nMy .sql file have data till yesterday morning but I want newly insert \ndata now. Is it possible.\n\nIs it possible to get the data back till the state before drop database \ncommand.\n\nMy pglog files is in the E:/data directory & Binary log is also enabled.\n\nPlease let me know if it is possible. It's urgent.\n\n\nThanks & Regards\nAdarsh Sharma\n\n\n\n\n\nDear all,\n\nI am using Postgres-8.4.2 on Windows system.\nI have 2 databases in my postgres database ( globedatabase (21GB),\nurldatabase).\n\nI restore globedatabase from a .sql file on yesterday morning.I insert\nsome new data in that database.\nIn the evening, by mistake I issued a drop database globedatabase\ncommand.\n\nToday morning, I restore again the same database from backup (.sql)\nfile.\nMy .sql file have data till yesterday morning but I want newly insert\ndata now. Is it possible.\n\nIs it possible to get the data back till the state before drop database\ncommand.\n\nMy pglog files is in the E:/data directory & Binary log is also\nenabled.\n\nPlease let me know if it is possible. It's urgent.\n\n\nThanks & Regards\nAdarsh Sharma",
"msg_date": "Mon, 25 Jul 2011 12:08:28 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Restore database after drop command"
},
{
"msg_contents": "\nOn Jul 25, 2011, at 12:08 PM, Adarsh Sharma wrote:\n\n> I restore globedatabase from a .sql file on yesterday morning.I insert some new data in that database.\n> In the evening, by mistake I issued a drop database globedatabase command.\n> Today morning, I restore again the same database from backup (.sql) file.\n> My .sql file have data till yesterday morning but I want newly insert data now. Is it possible.\n> Is it possible to get the data back till the state before drop database command.\n\nNo you won't be able to recover. \n\nIf you have Online Backup, then PITR would help you.\n\nThanks & Regards,\nVibhor Kumar\nBlogs: http://vibhork.blogspot.com\nhttp://vibhorkumar.wordpress.com\n\n",
"msg_date": "Mon, 25 Jul 2011 12:17:17 +0530",
"msg_from": "Vibhor Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore database after drop command"
},
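For reference, a rough outline of the continuous-archiving setup that PITR relies on (the archive_command path below is only an example; archive_mode and archive_command are postgresql.conf settings and must be in place before the base backup is taken):

-- postgresql.conf (example values):
--   archive_mode    = on
--   archive_command = 'copy "%p" "E:\\wal_archive\\%f"'

SELECT pg_start_backup('base backup');
-- copy the entire data directory at the filesystem level while the backup is open
SELECT pg_stop_backup();

With a base backup plus the archived WAL segments, recovery_target_time in recovery.conf can stop the replay just before a mistaken DROP DATABASE.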
{
"msg_contents": "I go through the link, so it is impossible to get the data back.\nI have following files in my pg_xlog directory :\n\n000000010000000700000091\n000000010000000700000092\n000000010000000700000093\n000000010000000700000094\n000000010000000700000095\n000000010000000700000096\n000000010000000700000097\n000000010000000700000098\n\nI think I issued the drop database command 1 month ago.\n From the manual, I understand that my segment files are recycled to \nnewer ones :\n\n/The segment files are given numeric names that reflect their position \nin the abstract WAL sequence. When not using WAL archiving, the system \nnormally creates just a few segment files and then \"recycles\" them by \nrenaming no-longer-needed segment files to higher segment numbers. It's \nassumed that a segment file whose contents precede the \ncheckpoint-before-last is no longer of interest and can be recycled.\n\n/My archive_status folder is empty.\nHow would we know that which data these segment files corresponds too.\n\nI followed below steps 1 month ago :\n1. Load globdatabase through backup.sql (21 GB)file\n2. Insert some data near about 3-4 tables ( KB) data.\n3. Drop database globdatabase.\n4. Load globdatabase through backup.sql (21GB)file\n\nMay be there is chance because we work very rarely on that system.\nNow i have the backup file bt I want that 3-4 tables.\n\n\nThanks\n\nVibhor Kumar wrote:\n> On Jul 25, 2011, at 12:08 PM, Adarsh Sharma wrote:\n>\n> \n>> I restore globedatabase from a .sql file on yesterday morning.I insert some new data in that database.\n>> In the evening, by mistake I issued a drop database globedatabase command.\n>> Today morning, I restore again the same database from backup (.sql) file.\n>> My .sql file have data till yesterday morning but I want newly insert data now. Is it possible.\n>> Is it possible to get the data back till the state before drop database command.\n>> \n>\n> No you won't be able to recover. \n>\n> If you have Online Backup, then PITR would help you.\n>\n> Thanks & Regards,\n> Vibhor Kumar\n> Blogs: http://vibhork.blogspot.com\n> http://vibhorkumar.wordpress.com\n>\n> \n\n\n\n\n\n\n\nI go through the link, so it is impossible to get the data back.\nI have following files in my pg_xlog directory :\n\n000000010000000700000091\n\n000000010000000700000092\n\n000000010000000700000093\n\n000000010000000700000094\n\n000000010000000700000095\n\n000000010000000700000096\n\n000000010000000700000097\n\n000000010000000700000098\n\n\nI think I issued the drop database command 1 month ago. \n>From the manual, I understand that my segment files are recycled to\nnewer ones :\n\nThe segment files are given numeric names that reflect their\nposition\nin the abstract WAL sequence. When not using WAL archiving, the system\nnormally creates just a few segment files and then \"recycles\"\nthem by renaming no-longer-needed segment files to higher segment\nnumbers. It's assumed that a segment file whose contents precede the\ncheckpoint-before-last is no longer of interest and can be recycled.\n\nMy archive_status folder is empty.\nHow would we know that which data these segment files corresponds too.\n\nI followed below steps 1 month ago :\n1. Load globdatabase through backup.sql (21 GB)file\n2. Insert some data near about 3-4 tables ( KB) data.\n3. Drop database globdatabase.\n4. 
Load globdatabase through backup.sql (21GB)file\n\nMay be there is chance because we work very rarely on that system.\nNow i have the backup file bt I want that 3-4 tables.\n\n\nThanks\n\nVibhor Kumar wrote:\n\nOn Jul 25, 2011, at 12:08 PM, Adarsh Sharma wrote:\n\n \n\nI restore globedatabase from a .sql file on yesterday morning.I insert some new data in that database.\nIn the evening, by mistake I issued a drop database globedatabase command.\nToday morning, I restore again the same database from backup (.sql) file.\nMy .sql file have data till yesterday morning but I want newly insert data now. Is it possible.\nIs it possible to get the data back till the state before drop database command.\n \n\n\nNo you won't be able to recover. \n\nIf you have Online Backup, then PITR would help you.\n\nThanks & Regards,\nVibhor Kumar\nBlogs: http://vibhork.blogspot.com\nhttp://vibhorkumar.wordpress.com",
"msg_date": "Mon, 25 Jul 2011 12:41:05 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Restore database after drop command"
},
{
"msg_contents": "> [ADMIN] [PERFORM]\n\nFirst rule of mailing lists: DO NOT CROSS POST. Please stick to one\nmailing list. I've replied on pgsql-general where your post started out.\nPlease do not reply to the posts on -admin or -perform.\n\nMy reply follows below.\n\nOn 25/07/11 15:11, Adarsh Sharma wrote:\n> I go through the link, so it is impossible to get the data back.\n> I have following files in my pg_xlog directory :\n> \n> 000000010000000700000091\n> 000000010000000700000092\n> 000000010000000700000093\n> 000000010000000700000094\n> 000000010000000700000095\n> 000000010000000700000096\n> 000000010000000700000097\n> 000000010000000700000098\n> \n> I think I issued the drop database command 1 month ago.\n\n.... and you're asking NOW? Even though \"it's urgent\"?\n\nThe first rule of data recovery: As soon as you realize something is\nwrong, make a copy of everything immediately. Then stop using it.\n\n> \n> I think I issued the drop database command 1 month ago.\n> From the manual, I understand that my segment files are recycled to newer ones :\n\nCorrect. Any chance you ever had of recovering your data is almost\ncertainly gone because you restored into it - probably immediately\ndestroying your deleted data - then kept on using the database for\nanother month.\n\nIf you're willing to spend a lot of money you might be able to recover\nsome of it using hard-drive level overwritten data forensics, but I\nrather doubt it.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 25 Jul 2011 15:46:55 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [WAS:ADMIN] [WAS:PERFORM] Restore database after drop\n command"
},
{
"msg_contents": "* Adarsh Sharma:\n\n> I restore globedatabase from a .sql file on yesterday morning.I insert\n> some new data in that database.\n> In the evening, by mistake I issued a *drop database globedatabase* command.\n>\n> Today morning, I restore again the same database from backup (.sql) file.\n> My .sql file have data till yesterday morning but I want newly insert\n> data now. Is it possible.\n>\n> Is it possible to get the data back till the state before drop\n> database command.\n\nIt might have been possible if you had performed a hard shutdown\ndirectly after discovering the mistake, by undeleting the database files\nat the operating system level. This has been made more difficult\n(perhaps even impossible) by your subsequent write activity.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 25 Jul 2011 10:51:31 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore database after drop command"
},
{
"msg_contents": "Dne 25.7.2011 09:11, Adarsh Sharma napsal(a):\n> I go through the link, so it is impossible to get the data back.\n> I have following files in my pg_xlog directory :\n> \n> 000000010000000700000091\n> 000000010000000700000092\n> 000000010000000700000093\n> 000000010000000700000094\n> 000000010000000700000095\n> 000000010000000700000096\n> 000000010000000700000097\n> 000000010000000700000098\n> \n> How would we know that which data these segment files corresponds too.\n\nThe xlog segments are for the whole cluster, not for individual objects\n(tables etc.). It's very difficult to read data from those files if you\ndon't have a proper base backup (copy of the data files) and all\nsubsequent xlog files.\n\n> I followed below steps 1 month ago :\n> 1. Load globdatabase through backup.sql (21 GB)file\n> 2. Insert some data near about 3-4 tables ( KB) data.\n> 3. Drop database globdatabase.\n> 4. Load globdatabase through backup.sql (21GB)file\n> \n> May be there is chance because we work very rarely on that system.\n> Now i have the backup file bt I want that 3-4 tables.\n\nNo, there's almost no chance to do that. If your wal_level is archive or\nhot_standby, then those 21GB in step (4) were written to the xlog\ndirectory. And as you keep only 8 wal segments (128MB), the data are\nlong gone.\n\nIf you have wal_level=minimal, then there's a slight chance the data are\nactually still in the wal segments. That depends on how the .sql backup\nloads data (COPY does not write data into the wal segments). But even in\nthat case you don't have information that is necessary to parse the\nfiles as you've dropped the database.\n\nTomas\n",
"msg_date": "Mon, 25 Jul 2011 20:15:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Restore database after drop command"
}
] |
[
{
"msg_contents": "Dear all\n\nfirst of all congratulations on your greak work here since from time to time\ni 've found many answers to my problems. unfortunately for this specific\nproblem i didnt find much relevant information, so i would ask for your\nguidance dealing with the following situation:\n\nwe have a dedicated server (8.4.4, redhat) with 24 cpus and 36 GB or RAM. i\nwould say that the traffic in the server is huge and the cpu utilization is\npretty high too (avg ~ 75% except during the nights when is it much lower).\ni am trying to tune the server a little bit to handle this problem. the\nincoming data in the database are about 30-40 GB /day. \n\nat first the checkpoint_segments were set to 50, the checkpoint_timeout at\n15 min and the checkpoint_completion_target was 0.5 sec.\n\ni noticed that the utilization of the server was higher when it was close to\nmaking a checkpoint and since the parameter of full_page_writes is ON , i\nchanged the parameters mentioned above to (i did that after reading a lot of\nstuff online):\ncheckpoint_segments->250\ncheckpoint_timeout->40min\ncheckpoint_completion_target -> 0.8\n\nbut the cpu utilization is not significantly lower. another parameter i will\ncertainly change is the wal_buffers which is now set at 64KB and i plan to\nmake it 16MB. can this parameter cause a significant percentage of the\nproblem?\n\nare there any suggestions what i can do to tune better the server? i can\nprovide any information you find relevant for the configuration of the\nserver, the OS, the storage etc\n\nthank you in advance\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/heavy-load-high-cpu-itilization-tp4631760p4631760.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 25 Jul 2011 11:00:44 -0700 (PDT)",
"msg_from": "Filippos <[email protected]>",
"msg_from_op": true,
"msg_subject": "heavy load-high cpu itilization"
},
{
"msg_contents": "On Mon, Jul 25, 2011 at 12:00 PM, Filippos <[email protected]> wrote:\n> Dear all\n>\n> first of all congratulations on your greak work here since from time to time\n> i 've found many answers to my problems. unfortunately for this specific\n> problem i didnt find much relevant information, so i would ask for your\n> guidance dealing with the following situation:\n>\n> we have a dedicated server (8.4.4, redhat) with 24 cpus and 36 GB or RAM. i\n\nThere are known data eating bugs in 8.4.4 you should upgrade to\n8.4.latest as soon as possible.\n\n> would say that the traffic in the server is huge and the cpu utilization is\n> pretty high too (avg ~ 75% except during the nights when is it much lower).\n> i am trying to tune the server a little bit to handle this problem. the\n> incoming data in the database are about 30-40 GB /day.\n\nSo you're either CPU or IO bound. We need to see which.\n\nLook at these two pages:\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nto get started.\n\n> at first the checkpoint_segments were set to 50, the checkpoint_timeout at\n> 15 min and the checkpoint_completion_target was 0.5 sec.\n\ncheckpoint_completion_target is not in seconds, it's a percentage to\nhave completely by the time the next checkpoint arrives. a checkpoint\ncompletion target of 1.0 means that the bg writer should write out\ndata fast enough to flush everything out of WAL to the disks right as\nyou reach checkpoint timeout. the more aggressive this is the more of\nthe data will already be flushed to disk when the timeout occurs.\nHowever, this comes at the expense of more IO overall as multiple\nupdates to the same block result in multiple writes instead of just\none.\n\n> i noticed that the utilization of the server was higher when it was close to\n> making a checkpoint and since the parameter of full_page_writes is ON , i\n> changed the parameters mentioned above to (i did that after reading a lot of\n> stuff online):\n> checkpoint_segments->250\n> checkpoint_timeout->40min\n> checkpoint_completion_target -> 0.8\n>\n> but the cpu utilization is not significantly lower. another parameter i will\n> certainly change is the wal_buffers which is now set at 64KB and i plan to\n> make it 16MB. can this parameter cause a significant percentage of the\n> problem?\n\nMost of the work done by checkpointing / background writing is IO\nintensive, not CPU intensive.\n\n> are there any suggestions what i can do to tune better the server? i can\n> provide any information you find relevant for the configuration of the\n> server, the OS, the storage etc\n\nFirst you need to more accurately identify the problem. Tools like\niostat, vmstat, top, and so forth can help you figure out if the\nproblem is that you're IO bound or CPU bound. It's also possible\nyou've got a thundering herd issue where there's too many processes\nall trying to vie for the limited number of cores at the same time.\nIf you've got more than 30k to 50k context switches per second in\nvmstat it's likely you're getting too many things trying to run at\nonce.\n",
"msg_date": "Fri, 29 Jul 2011 13:54:13 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heavy load-high cpu itilization"
},
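Two quick checks from inside the database that complement the OS-level stats Scott asks for (both views exist in 8.4). The first shows whether checkpoints are being forced by filling checkpoint_segments rather than by the timeout; the second shows how many of the several hundred backends are actually running queries or waiting on locks at any moment:

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

SELECT current_query = '<IDLE>' AS idle, waiting, count(*)
FROM pg_stat_activity
GROUP BY 1, 2;

If most of the connections turn out to be idle, putting a connection pooler such as pgbouncer in front of the server usually cuts both the context-switch rate and the CPU load noticeably.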
{
"msg_contents": "thx a lot for your answer. \ni will provide some stats, so if you could help me figure out the source of\nthe problem that would be great \n\n-*top -c*\nTasks: 1220 total, 49 running, 1171 sleeping, 0 stopped, 0 zombie\nCpu(s): *84.1%us*, 2.8%sy, 0.0%ni, 12.3%id, 0.1%wa, 0.1%hi, 0.6%si, \n0.0%st\nMem: 98846996k total, 98632044k used, 214952k free, 134320k buffers\nSwap: 50331640k total, 116312k used, 50215328k free, 89445208k cached\n\n-SELECT count(procpid) FROM pg_stat_activity -> *422*\n-SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\nINTERVAL '1 MINUTES' AND current_query = '<IDLE>' -> *108*\n-SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\nINTERVAL '5 MINUTES' AND current_query = '<IDLE>' -> *45*\n\n-*vmstat -n 1 10*\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n41 1 116300 347008 134176 89608912 0 0 143 210 0 0 11 1 88 \n0 0\n20 0 116300 423556 134116 89581840 0 0 8336 3038 11118 21139 81 5\n13 0 0\n24 0 116300 412904 134108 89546840 0 0 8488 9025 10621 22921 81 4\n15 0 0\n23 0 116300 409388 134084 89513728 0 0 8320 548 11386 20226 82 4\n14 0 0\n34 0 116300 403688 134088 89509520 0 0 6336 0 9552 20994 83 3\n14 0 0\n22 1 116300 337972 134104 89518624 0 0 8792 28 8980 20455 83 4\n13 0 0\n37 0 116300 303956 134116 89528720 0 0 8440 536 9644 20492 84 3\n13 0 0\n17 1 116300 293212 134112 89532816 0 0 5864 8240 9527 19771 85 3\n12 0 0\n14 0 116300 282168 134116 89540720 0 0 7772 752 10141 21780 84 3\n13 0 0\n44 0 116300 278684 134100 89536080 0 0 7352 555 9856 21539 85 2\n13 0 0\n\n-*vmstat -s*\n 98846992 total memory\n 98685392 used memory\n 40342200 active memory\n 52644588 inactive memory\n 161604 free memory\n 129960 buffer memory\n 89421936 swap cache\n 50331640 total swap\n 116300 used swap\n 50215340 free swap\n 2258553017 non-nice user cpu ticks\n 1125281 nice user cpu ticks\n 146638389 system cpu ticks\n 17789847697 idle cpu ticks\n 83090716 IO-wait cpu ticks\n 5045742 IRQ cpu ticks\n 38895985 softirq cpu ticks\n 0 stolen cpu ticks\n 29142450583 pages paged in\n 42731005078 pages paged out\n 39784 pages swapped in\n 3395187 pages swapped out\n 1338370564 interrupts\n 1176640487 CPU context switches\n 1305704895 boot time\n 24471946 forks\n\n(after 30 sec)\n-*vmstat -s*\n 98846992 total memory\n 98367312 used memory\n 39959952 active memory\n 52957104 inactive memory\n 479684 free memory\n 129720 buffer memory\n 89410640 swap cache\n 50331640 total swap\n 116296 used swap\n 50215344 free swap\n 2258645091 non-nice user cpu ticks\n 1125282 nice user cpu ticks\n 146640181 system cpu ticks\n 17789863186 idle cpu ticks\n 83090856 IO-wait cpu ticks\n 5045855 IRQ cpu ticks\n 38896749 softirq cpu ticks\n 0 stolen cpu ticks\n 29142861271 pages paged in\n 42731249289 pages paged out\n 39784 pages swapped in\n 3395187 pages swapped out\n 1338808821 interrupts\n 1177463384 CPU context switches\n 1305704895 boot time\n 24472003 forks\n\nfrom the above -> context switches /s = (1177463384 - 1176640487)/30 =\n*27429*\n\nthx in advance for any advice\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/heavy-load-high-cpu-itilization-tp4647751p4650542.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 30 Jul 2011 13:02:11 -0700 (PDT)",
"msg_from": "Filippos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: heavy load-high cpu itilization"
},
{
"msg_contents": "thx a lot for your answer.\ni will provide some stats, so if you could help me figure out the source of\nthe problem that would be great\n\n-top -c\nTasks: 1220 total, 49 running, 1171 sleeping, 0 stopped, 0 zombie\nCpu(s): 84.1%us, 2.8%sy, 0.0%ni, 12.3%id, 0.1%wa, 0.1%hi, 0.6%si, \n0.0%st\nMem: 98846996k total, 98632044k used, 214952k free, 134320k buffers\nSwap: 50331640k total, 116312k used, 50215328k free, 89445208k cached\n\n-SELECT count(procpid) FROM pg_stat_activity -> 422\n-SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\nINTERVAL '1 MINUTES' AND current_query = '<IDLE>' -> 108\n-SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\nINTERVAL '5 MINUTES' AND current_query = '<IDLE>' -> 45\n\n-vmstat -n 1 10\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n41 1 116300 347008 134176 89608912 0 0 143 210 0 0 11 1 88 \n0 0\n20 0 116300 423556 134116 89581840 0 0 8336 3038 11118 21139 81 5\n13 0 0\n24 0 116300 412904 134108 89546840 0 0 8488 9025 10621 22921 81 4\n15 0 0\n23 0 116300 409388 134084 89513728 0 0 8320 548 11386 20226 82 4\n14 0 0\n34 0 116300 403688 134088 89509520 0 0 6336 0 9552 20994 83 3\n14 0 0\n22 1 116300 337972 134104 89518624 0 0 8792 28 8980 20455 83 4\n13 0 0\n37 0 116300 303956 134116 89528720 0 0 8440 536 9644 20492 84 3\n13 0 0\n17 1 116300 293212 134112 89532816 0 0 5864 8240 9527 19771 85 3\n12 0 0\n14 0 116300 282168 134116 89540720 0 0 7772 752 10141 21780 84 3\n13 0 0\n44 0 116300 278684 134100 89536080 0 0 7352 555 9856 21539 85 2\n13 0 0\n\n-vmstat -s\n 98846992 total memory\n 98685392 used memory\n 40342200 active memory\n 52644588 inactive memory\n 161604 free memory\n 129960 buffer memory\n 89421936 swap cache\n 50331640 total swap\n 116300 used swap\n 50215340 free swap\n 2258553017 non-nice user cpu ticks\n 1125281 nice user cpu ticks\n 146638389 system cpu ticks\n 17789847697 idle cpu ticks\n 83090716 IO-wait cpu ticks\n 5045742 IRQ cpu ticks\n 38895985 softirq cpu ticks\n 0 stolen cpu ticks\n 29142450583 pages paged in\n 42731005078 pages paged out\n 39784 pages swapped in\n 3395187 pages swapped out\n 1338370564 interrupts\n 1176640487 CPU context switches\n 1305704895 boot time\n 24471946 forks\n\n(after 30 sec)\n-vmstat -s\n 98846992 total memory\n 98367312 used memory\n 39959952 active memory\n 52957104 inactive memory\n 479684 free memory\n 129720 buffer memory\n 89410640 swap cache\n 50331640 total swap\n 116296 used swap\n 50215344 free swap\n 2258645091 non-nice user cpu ticks\n 1125282 nice user cpu ticks\n 146640181 system cpu ticks\n 17789863186 idle cpu ticks\n 83090856 IO-wait cpu ticks\n 5045855 IRQ cpu ticks\n 38896749 softirq cpu ticks\n 0 stolen cpu ticks\n 29142861271 pages paged in\n 42731249289 pages paged out\n 39784 pages swapped in\n 3395187 pages swapped out\n 1338808821 interrupts\n 1177463384 CPU context switches\n 1305704895 boot time\n 24472003 forks\n\nfrom the above -> context switches /s = (1177463384 - 1176640487)/30 = 27429\n\nthx in advance for any advice \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/heavy-load-high-cpu-itilization-tp4647751p4651856.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sun, 31 Jul 2011 05:17:36 -0700 (PDT)",
"msg_from": "Filippos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: heavy load-high cpu itilization"
},
{
"msg_contents": "On Jul 30, 2011, at 3:02 PM, Filippos wrote:\n> thx a lot for your answer. \n> i will provide some stats, so if you could help me figure out the source of\n> the problem that would be great \n> \n> -*top -c*\n> Tasks: 1220 total, 49 running, 1171 sleeping, 0 stopped, 0 zombie\n> Cpu(s): *84.1%us*, 2.8%sy, 0.0%ni, 12.3%id, 0.1%wa, 0.1%hi, 0.6%si, \n> 0.0%st\n> Mem: 98846996k total, 98632044k used, 214952k free, 134320k buffers\n> Swap: 50331640k total, 116312k used, 50215328k free, 89445208k cached\n\n84% CPU isn't horrible, and you do have idle CPU time available. So you don't look to be too CPU-bound, although you need to keep in mind that one process might be CPU intensive and taking a long time to run, thereby blocking other processes that depend on it's results.\n\n> -SELECT count(procpid) FROM pg_stat_activity -> *422*\n> -SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\n> INTERVAL '1 MINUTES' AND current_query = '<IDLE>' -> *108*\n> -SELECT count(procpid) FROM pg_stat_activity WHERE (NOW() - query_start) >\n> INTERVAL '5 MINUTES' AND current_query = '<IDLE>' -> *45*\n\nIt would be good to look at getting some connection pooling happening.\n\nYour vmstat output shows you generally have CPU available. Can you provide some output from iostat -xk 2?\n\n> -*vmstat -n 1 10*\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 41 1 116300 347008 134176 89608912 0 0 143 210 0 0 11 1 88 \n> 0 0\n> 20 0 116300 423556 134116 89581840 0 0 8336 3038 11118 21139 81 5\n> 13 0 0\n> 24 0 116300 412904 134108 89546840 0 0 8488 9025 10621 22921 81 4\n> 15 0 0\n> 23 0 116300 409388 134084 89513728 0 0 8320 548 11386 20226 82 4\n> 14 0 0\n> 34 0 116300 403688 134088 89509520 0 0 6336 0 9552 20994 83 3\n> 14 0 0\n> 22 1 116300 337972 134104 89518624 0 0 8792 28 8980 20455 83 4\n> 13 0 0\n> 37 0 116300 303956 134116 89528720 0 0 8440 536 9644 20492 84 3\n> 13 0 0\n> 17 1 116300 293212 134112 89532816 0 0 5864 8240 9527 19771 85 3\n> 12 0 0\n> 14 0 116300 282168 134116 89540720 0 0 7772 752 10141 21780 84 3\n> 13 0 0\n> 44 0 116300 278684 134100 89536080 0 0 7352 555 9856 21539 85 2\n> 13 0 0\n> \n> -*vmstat -s*\n> 98846992 total memory\n> 98685392 used memory\n> 40342200 active memory\n> 52644588 inactive memory\n> 161604 free memory\n> 129960 buffer memory\n> 89421936 swap cache\n> 50331640 total swap\n> 116300 used swap\n> 50215340 free swap\n> 2258553017 non-nice user cpu ticks\n> 1125281 nice user cpu ticks\n> 146638389 system cpu ticks\n> 17789847697 idle cpu ticks\n> 83090716 IO-wait cpu ticks\n> 5045742 IRQ cpu ticks\n> 38895985 softirq cpu ticks\n> 0 stolen cpu ticks\n> 29142450583 pages paged in\n> 42731005078 pages paged out\n> 39784 pages swapped in\n> 3395187 pages swapped out\n> 1338370564 interrupts\n> 1176640487 CPU context switches\n> 1305704895 boot time\n> 24471946 forks\n> \n> (after 30 sec)\n> -*vmstat -s*\n> 98846992 total memory\n> 98367312 used memory\n> 39959952 active memory\n> 52957104 inactive memory\n> 479684 free memory\n> 129720 buffer memory\n> 89410640 swap cache\n> 50331640 total swap\n> 116296 used swap\n> 50215344 free swap\n> 2258645091 non-nice user cpu ticks\n> 1125282 nice user cpu ticks\n> 146640181 system cpu ticks\n> 17789863186 idle cpu ticks\n> 83090856 IO-wait cpu ticks\n> 5045855 IRQ cpu ticks\n> 38896749 softirq cpu ticks\n> 0 stolen cpu ticks\n> 29142861271 pages paged in\n> 42731249289 pages paged out\n> 39784 pages 
swapped in\n> 3395187 pages swapped out\n> 1338808821 interrupts\n> 1177463384 CPU context switches\n> 1305704895 boot time\n> 24472003 forks\n> \n> from the above -> context switches /s = (1177463384 - 1176640487)/30 =\n> *27429*\n> \n> thx in advance for any advice\n> \n> \n> \n> \n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/heavy-load-high-cpu-itilization-tp4647751p4650542.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Wed, 17 Aug 2011 23:31:18 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heavy load-high cpu itilization"
}
] |
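Jim Nasby's pooling suggestion in the thread above can be checked straight from the catalog. A minimal sketch, assuming the 8.4-era pg_stat_activity columns already used in this thread (current_query, query_start); it groups backends by whether they are idle and shows how long the oldest has been sitting there, which is roughly the head count a pooler such as pgBouncer would reclaim:

SELECT current_query = '<IDLE>'  AS is_idle,
       count(*)                  AS backends,
       max(now() - query_start)  AS oldest_activity
FROM pg_stat_activity
GROUP BY 1
ORDER BY 1;

With several hundred backends and over a hundred of them idle for more than a minute, a transaction-level pooler in front of the database should cut both the backend count and the context-switch rate seen in vmstat.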
[
{
"msg_contents": "Hi,\nI want to configure Logging of postgres in such a way that messages of\ndifferent severity should be logged in different log file. eg: all ERROR\nmessage should be written in error-msg.log file while all NOTICE mesage\nshould be written in notice-msg.log file.\n\nIn order to do that what changes should i need to do in configuration file ?\nCould you pl give a solution.\n\n\n-- \n With Regards,\n Shailesh Singh\n\nHi,\nI want to configure Logging of postgres in such a way that messages of different severity should be logged in different log file. eg: all ERROR message should be written in error-msg.log file while all NOTICE mesage should be written in notice-msg.log file.\n \nIn order to do that what changes should i need to do in configuration file ? Could you pl give a solution.\n-- With Regards, Shailesh Singh",
"msg_date": "Wed, 27 Jul 2011 14:41:09 +0530",
"msg_from": "shailesh singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "issue related to logging facility of postgres"
},
{
"msg_contents": "On Wed, Jul 27, 2011 at 5:11 AM, shailesh singh <[email protected]> wrote:\n> I want to configure Logging of postgres in such a way that messages of\n> different severity should be logged in different log file. eg: all ERROR\n> message should be written in error-msg.log file while all NOTICE mesage\n> should be written in notice-msg.log file.\n>\n> In order to do that what changes should i need to do in configuration file ?\n> Could you pl give a solution.\n\nThere's no such facility built-in. You might want to do something\nlike \"log everything in CSV format, and then run a Perl script over\nit afterwards to split it up\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 31 Aug 2011 20:12:53 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue related to logging facility of postgres"
},
{
"msg_contents": "Syslog does that, I believe. Have a look at the man page for syslog.conf.\n\nOn Wed, Jul 27, 2011 at 5:11 AM, shailesh singh <[email protected]> wrote:\n> Hi,\n> I want to configure Logging of postgres in such a way that messages of\n> different severity should be logged in different log file. eg: all ERROR\n> message should be written in error-msg.log file while all NOTICE mesage\n> should be written in notice-msg.log file.\n>\n> In order to do that what changes should i need to do in configuration file ?\n> Could you pl give a solution.\n>\n> --\n> With Regards,\n> Shailesh Singh\n>\n>\n>\n",
"msg_date": "Thu, 1 Sep 2011 07:51:27 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue related to logging facility of postgres"
}
] |
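A hedged sketch of the CSV route Robert Haas describes above: with logging_collector = on and log_destination = 'csvlog', the log files can be loaded into a table and split by severity in SQL instead of with a Perl script. The table name postgres_log and the file path are placeholders; the column list is deliberately omitted because it must match the csvlog format documented for your server version.

-- postgres_log must first be created with the CREATE TABLE definition
-- given in the csvlog section of the documentation for your version.
COPY postgres_log FROM '/path/to/postgresql.csv' WITH CSV;

-- Pull out only the ERROR lines; repeat with 'NOTICE' and so on as needed.
SELECT log_time, user_name, database_name, message
FROM postgres_log
WHERE error_severity = 'ERROR'
ORDER BY log_time;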
[
{
"msg_contents": "Hi guys,\n\nI met with the problem that when I was using WITH clause to reuse a subquery, I got a huge performance penalty because of query planner.\n\nHere are the details, the original query is \n\nEXPLAIN ANALYZE WITH latest_identities AS\n(\n SELECT DISTINCT ON (memberid) memberid, username, changedate\n FROM t_username_history\n WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' ' || substring(lastname,1,1) = 'Eddie T')\n ORDER BY memberid, changedate DESC\n)\nSELECT t_member.email as email, t_member.username as username, t_member.location as location, t_member.locale as locale, t_member.status as status, t_member.creationdate as creationdate, t_forum_member.pos\\\nts as posts, t_forum_member.expertlocations as expertlocations, t_nexus_member.pages_created as pages_created, t_nexus_member.pages_edited as pages_edited, t_member_contributions.hotel_reviews as hotel_rev\\\niews, t_member_contributions.restaurant_reviews as restaurant_reviews, t_member_contributions.attraction_reviews as attraction_reviews, t_member_contributions.geo_reviews as geo_reviews, t_member_contribut\\\nions.photos as photos, t_member_contributions.videos as videos, t_recent_contribution.recent_contribution_date as recent_contribution_date, t_recent_contribution.recent_contribution_type as recent_contribu\\\ntion_type, t_owner_member.memberid as owner_memberid, t_member_interaction.flags as interaction_flags, t_media.path as ta_avatar_path, t_external_member.externalid as facebookid, latest_identities.username\\\n as latest_identity\nFROM t_member left join t_forum_member on (t_member.memberid = t_forum_member.memberid) left join t_nexus_member on (t_member.memberid = t_nexus_member.memberid) left join t_member_contributions on (t_memb\\\ner.memberid = t_member_contributions.memberid) left join t_recent_contribution on (t_member.memberid = t_recent_contribution.memberid) left join t_owner_member on (t_member.memberid = t_owner_member.member\\\nid) left join t_member_interaction on (t_member.memberid = t_member_interaction.memberid) left join t_media on (t_member.avatar = t_media.id) left join t_external_member on (t_member.memberid = t_external_\\\nmember.memberid AND t_external_member.idtype = 'FB') left join latest_identities on (t_member.memberid = latest_identities.memberid)\nWHERE t_member.firstname || ' ' || substring(t_member.lastname,1,1) = 'Eddie T';\n\nThe may seems scary, but what it really does is searching for members with certain name and joining with a bunch of other tables on memberid. 
The t_username_history table has multiple rows for a memberid therefore I just get the most recent record for each memberid that I am interested in before the join.\n\nHere is the link to explain:\nhttp://explain.depesz.com/s/ZKb\n\nSince the red part looks suboptimal to me, I changed it using WITH subquery:\n\nEXPLAIN WITH memberids AS\n(\n SELECT memberid FROM t_member WHERE firstname || ' ' || substring(lastname,1,1) = 'Eddie T'\n),\nlatest_identities AS\n(\n SELECT DISTINCT ON (memberid) memberid, username, changedate\n FROM t_username_history\n WHERE memberid IN (SELECT memberid FROM memberids)\n ORDER BY memberid, changedate DESC\n)\nSELECT t_member.email as email, t_member.username as username, t_member.location as location, t_member.locale as locale, t_member.status as status, t_member.creationdate as creationdate, t_forum_member.pos\\\nts as posts, t_forum_member.expertlocations as expertlocations, t_nexus_member.pages_created as pages_created, t_nexus_member.pages_edited as pages_edited, t_member_contributions.hotel_reviews as hotel_rev\\\niews, t_member_contributions.restaurant_reviews as restaurant_reviews, t_member_contributions.attraction_reviews as attraction_reviews, t_member_contributions.geo_reviews as geo_reviews, t_member_contribut\\\nions.photos as photos, t_member_contributions.videos as videos, t_recent_contribution.recent_contribution_date as recent_contribution_date, t_recent_contribution.recent_contribution_type as recent_contribu\\\ntion_type, t_owner_member.memberid as owner_memberid, t_member_interaction.flags as interaction_flags, t_media.path as ta_avatar_path, t_external_member.externalid as facebookid, latest_identities.username\\\n as latest_identity\nFROM t_member left join t_forum_member on (t_member.memberid = t_forum_member.memberid) left join t_nexus_member on (t_member.memberid = t_nexus_member.memberid) left join t_member_contributions on (t_memb\\\ner.memberid = t_member_contributions.memberid) left join t_recent_contribution on (t_member.memberid = t_recent_contribution.memberid) left join t_owner_member on (t_member.memberid = t_owner_member.member\\\nid) left join t_member_interaction on (t_member.memberid = t_member_interaction.memberid) left join t_media on (t_member.avatar = t_media.id) left join t_external_member on (t_member.memberid = t_external_\\\nmember.memberid AND t_external_member.idtype = 'FB') left join latest_identities on (t_member.memberid = latest_identities.memberid)\nWHERE t_member.memberid IN (SELECT memberid FROM memberids)\n\nHowever, this query runs forever because (I think) the planner join the tables before filter by where clause.\n\nHere is the explain link:\nhttp://explain.depesz.com/s/v2K\n\nAnyone knows why the planner is doing this?\n\nRegards,\nLi\nHi guys,I met with the problem that when I was using WITH clause to reuse a subquery, I got a huge performance penalty because of query planner.Here are the details, the original query is EXPLAIN ANALYZE WITH latest_identities AS( SELECT DISTINCT ON (memberid) memberid, username, changedate FROM t_username_history WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' ' || substring(lastname,1,1) = 'Eddie T') ORDER BY memberid, changedate DESC)SELECT t_member.email as email, t_member.username as username, t_member.location as location, t_member.locale as locale, t_member.status as status, t_member.creationdate as creationdate, t_forum_member.pos\\ts as posts, t_forum_member.expertlocations as expertlocations, t_nexus_member.pages_created as pages_created, 
t_nexus_member.pages_edited as pages_edited, t_member_contributions.hotel_reviews as hotel_rev\\iews, t_member_contributions.restaurant_reviews as restaurant_reviews, t_member_contributions.attraction_reviews as attraction_reviews, t_member_contributions.geo_reviews as geo_reviews, t_member_contribut\\ions.photos as photos, t_member_contributions.videos as videos, t_recent_contribution.recent_contribution_date as recent_contribution_date, t_recent_contribution.recent_contribution_type as recent_contribu\\tion_type, t_owner_member.memberid as owner_memberid, t_member_interaction.flags as interaction_flags, t_media.path as ta_avatar_path, t_external_member.externalid as facebookid, latest_identities.username\\ as latest_identityFROM t_member left join t_forum_member on (t_member.memberid = t_forum_member.memberid) left join t_nexus_member on (t_member.memberid = t_nexus_member.memberid) left join t_member_contributions on (t_memb\\er.memberid = t_member_contributions.memberid) left join t_recent_contribution on (t_member.memberid = t_recent_contribution.memberid) left join t_owner_member on (t_member.memberid = t_owner_member.member\\id) left join t_member_interaction on (t_member.memberid = t_member_interaction.memberid) left join t_media on (t_member.avatar = t_media.id) left join t_external_member on (t_member.memberid = t_external_\\member.memberid AND t_external_member.idtype = 'FB') left join latest_identities on (t_member.memberid = latest_identities.memberid)WHERE t_member.firstname || ' ' || substring(t_member.lastname,1,1) = 'Eddie T';The may seems scary, but what it really does is searching for members with certain name and joining with a bunch of other tables on memberid. The t_username_history table has multiple rows for a memberid therefore I just get the most recent record for each memberid that I am interested in before the join.Here is the link to explain:http://explain.depesz.com/s/ZKbSince the red part looks suboptimal to me, I changed it using WITH subquery:EXPLAIN WITH memberids AS( SELECT memberid FROM t_member WHERE firstname || ' ' || substring(lastname,1,1) = 'Eddie T'),latest_identities AS( SELECT DISTINCT ON (memberid) memberid, username, changedate FROM t_username_history WHERE memberid IN (SELECT memberid FROM memberids) ORDER BY memberid, changedate DESC)SELECT t_member.email as email, t_member.username as username, t_member.location as location, t_member.locale as locale, t_member.status as status, t_member.creationdate as creationdate, t_forum_member.pos\\ts as posts, t_forum_member.expertlocations as expertlocations, t_nexus_member.pages_created as pages_created, t_nexus_member.pages_edited as pages_edited, t_member_contributions.hotel_reviews as hotel_rev\\iews, t_member_contributions.restaurant_reviews as restaurant_reviews, t_member_contributions.attraction_reviews as attraction_reviews, t_member_contributions.geo_reviews as geo_reviews, t_member_contribut\\ions.photos as photos, t_member_contributions.videos as videos, t_recent_contribution.recent_contribution_date as recent_contribution_date, t_recent_contribution.recent_contribution_type as recent_contribu\\tion_type, t_owner_member.memberid as owner_memberid, t_member_interaction.flags as interaction_flags, t_media.path as ta_avatar_path, t_external_member.externalid as facebookid, latest_identities.username\\ as latest_identityFROM t_member left join t_forum_member on (t_member.memberid = t_forum_member.memberid) left join t_nexus_member on (t_member.memberid = t_nexus_member.memberid) left join 
t_member_contributions on (t_memb\\er.memberid = t_member_contributions.memberid) left join t_recent_contribution on (t_member.memberid = t_recent_contribution.memberid) left join t_owner_member on (t_member.memberid = t_owner_member.member\\id) left join t_member_interaction on (t_member.memberid = t_member_interaction.memberid) left join t_media on (t_member.avatar = t_media.id) left join t_external_member on (t_member.memberid = t_external_\\member.memberid AND t_external_member.idtype = 'FB') left join latest_identities on (t_member.memberid = latest_identities.memberid)WHERE t_member.memberid IN (SELECT memberid FROM memberids)However, this query runs forever because (I think) the planner join the tables before filter by where clause.Here is the explain link:http://explain.depesz.com/s/v2KAnyone knows why the planner is doing this?Regards,Li",
"msg_date": "Thu, 28 Jul 2011 17:00:06 -0400",
"msg_from": "Li Jin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance penalty when using WITH "
},
{
"msg_contents": "Li Jin <[email protected]> writes:\n> Anyone knows why the planner is doing this?\n\nWITH is an optimization fence. This is intentional and documented.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Jul 2011 13:40:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH "
},
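A minimal illustration of that fence, using an assumed table t with an indexed column id rather than the poster's schema: on the versions discussed in this thread the CTE is evaluated as written and only then filtered, while the equivalent subquery lets the planner push the condition down to the index.

-- CTE form: planned as a full scan of t followed by a filter, because the CTE is a fence.
EXPLAIN WITH c AS (SELECT * FROM t)
SELECT * FROM c WHERE id = 42;

-- Subquery form: the id = 42 condition is pushed into the scan and can use
-- an index on t(id).
EXPLAIN SELECT * FROM (SELECT * FROM t) AS s WHERE id = 42;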
{
"msg_contents": "On Thu, Jul 28, 2011 at 11:00 PM, Li Jin <[email protected]> wrote:\n> I met with the problem that when I was using WITH clause to reuse a\n> subquery, I got a huge performance penalty because of query planner.\n> Here are the details, the original query is\n> EXPLAIN ANALYZE WITH latest_identities AS\n> (\n> SELECT DISTINCT ON (memberid) memberid, username, changedate\n> FROM t_username_history\n> WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' '\n> || substring(lastname,1,1) = 'Eddie T')\n> ORDER BY memberid, changedate DESC\n> )\n\nAnother observation: That criterion looks suspicious to me. I would\nexpect any RDBMS to be better able to optimize this:\n\nWHERE firstname = 'Eddie' AND lastname like 'T%'\n\nI know it's semantically not the same but I would assume this is good\nenough for the common usecase. Plus, if there is an index on\n(firstname, lastname) then that could be used.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sat, 30 Jul 2011 15:10:25 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Sat, Jul 30, 2011 at 8:10 AM, Robert Klemme\n<[email protected]> wrote:\n> On Thu, Jul 28, 2011 at 11:00 PM, Li Jin <[email protected]> wrote:\n>> I met with the problem that when I was using WITH clause to reuse a\n>> subquery, I got a huge performance penalty because of query planner.\n>> Here are the details, the original query is\n>> EXPLAIN ANALYZE WITH latest_identities AS\n>> (\n>> SELECT DISTINCT ON (memberid) memberid, username, changedate\n>> FROM t_username_history\n>> WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' '\n>> || substring(lastname,1,1) = 'Eddie T')\n>> ORDER BY memberid, changedate DESC\n>> )\n>\n> Another observation: That criterion looks suspicious to me. I would\n> expect any RDBMS to be better able to optimize this:\n>\n> WHERE firstname = 'Eddie' AND lastname like 'T%'\n>\n> I know it's semantically not the same but I would assume this is good\n> enough for the common usecase. Plus, if there is an index on\n> (firstname, lastname) then that could be used.\n\ndisagree. just one of the ways that could be stymied would to change\nthe function behind the '||' operator.\n\nmerlin\n",
"msg_date": "Tue, 2 Aug 2011 16:48:41 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Tue, Aug 2, 2011 at 11:48 PM, Merlin Moncure <[email protected]> wrote:\n> On Sat, Jul 30, 2011 at 8:10 AM, Robert Klemme\n> <[email protected]> wrote:\n>> On Thu, Jul 28, 2011 at 11:00 PM, Li Jin <[email protected]> wrote:\n>>> I met with the problem that when I was using WITH clause to reuse a\n>>> subquery, I got a huge performance penalty because of query planner.\n>>> Here are the details, the original query is\n>>> EXPLAIN ANALYZE WITH latest_identities AS\n>>> (\n>>> SELECT DISTINCT ON (memberid) memberid, username, changedate\n>>> FROM t_username_history\n>>> WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' '\n>>> || substring(lastname,1,1) = 'Eddie T')\n>>> ORDER BY memberid, changedate DESC\n>>> )\n>>\n>> Another observation: That criterion looks suspicious to me. I would\n>> expect any RDBMS to be better able to optimize this:\n>>\n>> WHERE firstname = 'Eddie' AND lastname like 'T%'\n>>\n>> I know it's semantically not the same but I would assume this is good\n>> enough for the common usecase. Plus, if there is an index on\n>> (firstname, lastname) then that could be used.\n>\n> disagree. just one of the ways that could be stymied would to change\n> the function behind the '||' operator.\n\nI don't understand what you mean. Can you please elaborate?\n\nTo explain my point a bit: I meant that by querying individual fields\nseparately instead of applying a criterion on a function of the two\nthe RDBMS has a better chance to use indexes and come up with a better\nplan for this part of the query.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 3 Aug 2011 09:18:12 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "Robert,\n\nI've built an index on this expression firstname || ' ' || substring(lastname,1,1). I believe this is the best index for this particular query. Correct me if I am wrong.\n\nLi\n\nOn Aug 3, 2011, at 3:18 AM, Robert Klemme wrote:\n\n> On Tue, Aug 2, 2011 at 11:48 PM, Merlin Moncure <[email protected]> wrote:\n>> On Sat, Jul 30, 2011 at 8:10 AM, Robert Klemme\n>> <[email protected]> wrote:\n>>> On Thu, Jul 28, 2011 at 11:00 PM, Li Jin <[email protected]> wrote:\n>>>> I met with the problem that when I was using WITH clause to reuse a\n>>>> subquery, I got a huge performance penalty because of query planner.\n>>>> Here are the details, the original query is\n>>>> EXPLAIN ANALYZE WITH latest_identities AS\n>>>> (\n>>>> SELECT DISTINCT ON (memberid) memberid, username, changedate\n>>>> FROM t_username_history\n>>>> WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' '\n>>>> || substring(lastname,1,1) = 'Eddie T')\n>>>> ORDER BY memberid, changedate DESC\n>>>> )\n>>> \n>>> Another observation: That criterion looks suspicious to me. I would\n>>> expect any RDBMS to be better able to optimize this:\n>>> \n>>> WHERE firstname = 'Eddie' AND lastname like 'T%'\n>>> \n>>> I know it's semantically not the same but I would assume this is good\n>>> enough for the common usecase. Plus, if there is an index on\n>>> (firstname, lastname) then that could be used.\n>> \n>> disagree. just one of the ways that could be stymied would to change\n>> the function behind the '||' operator.\n> \n> I don't understand what you mean. Can you please elaborate?\n> \n> To explain my point a bit: I meant that by querying individual fields\n> separately instead of applying a criterion on a function of the two\n> the RDBMS has a better chance to use indexes and come up with a better\n> plan for this part of the query.\n> \n> Kind regards\n> \n> robert\n> \n> -- \n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n\n\nRobert,I've built an index on this expression firstname || ' ' || substring(lastname,1,1). I believe this is the best index for this particular query. Correct me if I am wrong.LiOn Aug 3, 2011, at 3:18 AM, Robert Klemme wrote:On Tue, Aug 2, 2011 at 11:48 PM, Merlin Moncure <[email protected]> wrote:On Sat, Jul 30, 2011 at 8:10 AM, Robert Klemme<[email protected]> wrote:On Thu, Jul 28, 2011 at 11:00 PM, Li Jin <[email protected]> wrote:I met with the problem that when I was using WITH clause to reuse asubquery, I got a huge performance penalty because of query planner.Here are the details, the original query isEXPLAIN ANALYZE WITH latest_identities AS( SELECT DISTINCT ON (memberid) memberid, username, changedate FROM t_username_history WHERE memberid IN (SELECT memberid FROM t_member WHERE firstname || ' '|| substring(lastname,1,1) = 'Eddie T') ORDER BY memberid, changedate DESC)Another observation: That criterion looks suspicious to me. I wouldexpect any RDBMS to be better able to optimize this:WHERE firstname = 'Eddie' AND lastname like 'T%'I know it's semantically not the same but I would assume this is goodenough for the common usecase. Plus, if there is an index on(firstname, lastname) then that could be used.disagree. just one of the ways that could be stymied would to changethe function behind the '||' operator.I don't understand what you mean. 
Can you please elaborate?To explain my point a bit: I meant that by querying individual fieldsseparately instead of applying a criterion on a function of the twothe RDBMS has a better chance to use indexes and come up with a betterplan for this part of the query.Kind regardsrobert-- remember.guy do |as, often| as.you_can - without endhttp://blog.rubybestpractices.com/",
"msg_date": "Wed, 3 Aug 2011 09:27:11 -0400",
"msg_from": "Li Jin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 3:27 PM, Li Jin <[email protected]> wrote:\n> Robert,\n> I've built an index on this expression firstname || ' ' ||\n> substring(lastname,1,1). I believe this is the best index for this\n> particular query. Correct me if I am wrong.\n\nMaybe, maybe not. Difficult to tell from a distance. I would have an\nindex on (firstname, lastname). You could try that and look at the\nplan for the other query. That's the only ultimate test which will\ngive you hard facts.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 3 Aug 2011 18:15:18 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 2:18 AM, Robert Klemme\n<[email protected]> wrote:\n>>> Another observation: That criterion looks suspicious to me. I would\n>>> expect any RDBMS to be better able to optimize this:\n>>>\n>>> WHERE firstname = 'Eddie' AND lastname like 'T%'\n>>>\n>>> I know it's semantically not the same but I would assume this is good\n>>> enough for the common usecase. Plus, if there is an index on\n>>> (firstname, lastname) then that could be used.\n>>\n>> disagree. just one of the ways that could be stymied would to change\n>> the function behind the '||' operator.\n>\n> I don't understand what you mean. Can you please elaborate?\n>\n> To explain my point a bit: I meant that by querying individual fields\n> separately instead of applying a criterion on a function of the two\n> the RDBMS has a better chance to use indexes and come up with a better\n> plan for this part of the query.\n\nYes, but your assuming that it is safe and generally advantageous to\ndo that. Both assumptions I think are false.\n\nThe || operator is trivially hacked:\ncreate or replace function funky_concat(l text, r text) returns text as\n$$\n select textcat(textcat($1, 'abc'), $2);\n$$ language sql immutable ;\n\nupdate pg_operator set oprcode = 'funky_concat' where oid = 654;\n\npostgres=# select 'a' || 'b';\n?column?\n----------\n aabcb\n(1 row)\n\nAlso even ignoring the above it's not free to have the database try\nand analyze every instance of the || operator to see if it can be\ndecomposed to boolean field operations.\n\nmerlin\n",
"msg_date": "Wed, 3 Aug 2011 11:24:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 3:27 PM, Li Jin <[email protected]> wrote:\n> Robert,\n> I've built an index on this expression firstname || ' ' ||\n> substring(lastname,1,1). I believe this is the best index for this\n> particular query. Correct me if I am wrong.\n\nMaybe, maybe not. Difficult to tell from a distance. I would have an\nindex on (firstname, lastname). You could try that and look at the\nplan for the other query. That's the only ultimate test which will\ngive you hard facts.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 3 Aug 2011 19:13:23 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 6:24 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Aug 3, 2011 at 2:18 AM, Robert Klemme\n> <[email protected]> wrote:\n>>>> Another observation: That criterion looks suspicious to me. I would\n>>>> expect any RDBMS to be better able to optimize this:\n>>>>\n>>>> WHERE firstname = 'Eddie' AND lastname like 'T%'\n>>>>\n>>>> I know it's semantically not the same but I would assume this is good\n>>>> enough for the common usecase. Plus, if there is an index on\n>>>> (firstname, lastname) then that could be used.\n>>>\n>>> disagree. just one of the ways that could be stymied would to change\n>>> the function behind the '||' operator.\n>>\n>> I don't understand what you mean. Can you please elaborate?\n>>\n>> To explain my point a bit: I meant that by querying individual fields\n>> separately instead of applying a criterion on a function of the two\n>> the RDBMS has a better chance to use indexes and come up with a better\n>> plan for this part of the query.\n>\n> Yes, but your assuming that it is safe and generally advantageous to\n> do that. Both assumptions I think are false.\n\nI am not sure why you say I assume this is _safe_. I said it is \"good\nenough for the common usecase\". And it is certainly good enough for\nthis particular query.\n\nAs for the \"generally advantageous\" I'd say that an index on \"raw\"\ncolumn values is usually useful for more queries than an index on a\nspecific function. That's why I'd say generally an index on column\nvalues is more versatile and I would prefer it. Of course you might\nachieve orders of magnitude of speedup for individual queries with an\nindex on a function tailored to that particular query but if you need\nto do that for multiple queries you pay a higher penalty for updates.\n\n> The || operator is trivially hacked:\n> create or replace function funky_concat(l text, r text) returns text as\n> $$\n> select textcat(textcat($1, 'abc'), $2);\n> $$ language sql immutable ;\n>\n> update pg_operator set oprcode = 'funky_concat' where oid = 654;\n>\n> postgres=# select 'a' || 'b';\n> ?column?\n> ----------\n> aabcb\n> (1 row)\n>\n> Also even ignoring the above it's not free to have the database try\n> and analyze every instance of the || operator to see if it can be\n> decomposed to boolean field operations.\n\nEven with your hacked operator you would need an index on the\nexpression to make it efficient. That could be done with the original\n|| as well. But my point was to query\n\nWHERE a = 'foo' and b like 'b%'\ninstead of WHERE a || ' ' || substring(b, 1, 1) = 'foo b'\n\nto use an index on (a,b). That index would also be useful for queries like\n\nWHERE a = 'foo'\nWHERE a like 'fo%'\nWHERE a = 'foo' and b = 'bar'\n\nand probably also\n\nWHERE a > 'foo'\nWHERE a > 'foo' and b like 'b%'\nWHERE a > 'foo' and b = 'bar'\n\nKind regards\n\nrobert\n\n\nPS: Sorry for the earlier duplicate. Gmail had a hickup.\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 3 Aug 2011 19:30:46 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty when using WITH"
}
] |
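To make the trade-off debated in the thread above concrete, here is a hedged sketch of the two indexing strategies. t_member, memberid, firstname and lastname come from Li Jin's query; the index names and everything else are illustrative only.

-- Expression index matching the original predicate exactly (the approach Li Jin took):
CREATE INDEX ix_member_name_expr
    ON t_member ((firstname || ' ' || substring(lastname, 1, 1)));

-- Plain composite index, reusable by other queries on the raw columns:
CREATE INDEX ix_member_name ON t_member (firstname, lastname);

-- Robert Klemme's looser rewrite of the filter. Note that the LIKE prefix
-- match can use a btree index only under the C locale or with text_pattern_ops.
SELECT memberid
FROM t_member
WHERE firstname = 'Eddie'
  AND lastname LIKE 'T%';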
[
{
"msg_contents": "Hi,\n\nI am a Noob with db tuning and trying to analyze pg_stats_brwriter data\n\ncheckpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\nmaxwritten_clean | buffers_backend | buffers_alloc\n\n\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------+------\n 35241 | 58 | 699136 | 581839 |\n 1597 | 1663650 | 2205969940\n\n\nAlmost all checkpoints (99.8%) that happened are because of\ncheckpoint_timeout passing. Is this good or should I increaase my\ncheckpoint_segments?\nDuring checkpoints, 699136 8K buffers were written out which is pretty low\n(less than 1MB).\n\nbuffers allocated (2205969940 8K), 1663650 times a database backend\n(probably the client itself) had to write a page in order to make space for\nthe new allocation. Buffer allocated seems to be too high than backend\nbuffers.\n\nHow to read more into the data?\n\nRegards\nRohan\n\nHi,I am a Noob with db tuning and trying to analyze pg_stats_brwriter datacheckpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc \n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------+------ 35241 | 58 | 699136 | 581839 | 1597 | 1663650 | 2205969940\nAlmost all checkpoints (99.8%) that happened are because of checkpoint_timeout passing. Is this good or should I increaase my checkpoint_segments?During checkpoints, 699136 8K buffers were written out which is pretty low (less than 1MB).\nbuffers allocated (2205969940 8K), 1663650 times a database backend (probably the client itself) had to write a page in order to make space for the new allocation. Buffer allocated seems to be too high than backend buffers.\n How to read more into the data?RegardsRohan",
"msg_date": "Fri, 29 Jul 2011 19:07:15 +0530",
"msg_from": "Rohan Malhotra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries related to checkpoints"
},
{
"msg_contents": "Rohan Malhotra <[email protected]> wrote:\n \nFirst off, for a one-row result with too many values to fit on one\nline, you might want to use this in psql:\n \n\\x on\n \nMore importantly, you seem to be misinterpreting the numbers.\n \nYou've allocated 2,205,969,940 buffers. Of those allocations, the\nallocating backend had to first write a dirty buffer to free up a\nbuffer to use 1,663,650 times. That's pretty small as a percentage\nof allocations, but since it's larger than the other causes of dirty\nbuffer writes (699,136 during checkpoints and 581,839 by the\nbackground writer), I would be tempted to make the background writer\na little more aggressive. Assuming you're currently at the defaults\nfor these, perhaps:\n \nbgwriter_lru_maxpages = 200\nbgwriter_lru_multiplier = 4\n \nThis may (or may not) increase the physical writes on your system,\nso you want to closely monitor the impact of the change in terms of\nwhatever metrics matter most to you. For example, in our shop, we\ntend to tune our big databases which back a website such that we get\nzero \"write storms\" which cause delays of 20 seconds or more on\nqueries which rormally run in less than a millisecond. Your\nconcerns may be different.\n \nFor more detailed treatment of the issue look for posts by Greg\nSmith; or better yet, buy his book:\n \nhttp://www.postgresql.org/docs/books/\n \n-Kevin\n",
"msg_date": "Fri, 29 Jul 2011 10:25:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Queries related to checkpoints"
}
] |
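The percentages Kevin works through above can be pulled straight from the statistics view. A rough sketch, assuming the pg_stat_bgwriter columns available in 8.4/9.0:

SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_timed
             / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_checkpoints_timed,
       round(100.0 * buffers_backend
             / nullif(buffers_alloc, 0), 2)                       AS pct_allocs_needing_backend_write,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend
FROM pg_stat_bgwriter;

If the backend-write percentage keeps climbing, nudging bgwriter_lru_maxpages and bgwriter_lru_multiplier upward, as suggested above, is the usual first step.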
[
{
"msg_contents": "Hello.\n\nI've found strange behavior of my pg installation (tested both 8.4 and\n9.0 - they behave same) on FreeBSD platform.\nIn short - when some table have PK on bigint field - COPY to that\ntable from file becomes slower and slower as table grows. When table\nreaches ~5GB - COPY of 100k records may take up to 20 mins. I've\nexperimented with all params in configs, moved indexes to separate hdd\netc - nothing made any improvement. However, once I'm dropping 64 bit\nPK - COPY of 100k records passes in seconds. Interesting thing - same\ntable has other indexes, including composite ones, but none of them\ninclude bigint fields, that's why I reached decision that bug\nconnected with indexes on bigint fields only.\n\nIn terms of IO picture is following: after copy started gstat shows\n100% load on index partition (as I mentioned above - I've tried\nseparate hdd to keep index tablespace), large queue (over 2k\nelements), and constant slow write on speed of ~2MB\\s. Hdd becomes\ncompletely unresponsive, even ls on empty folder hangs for minute or\nso.\n\nTo avoid thoughts like \"your hdd is slow, you haven't tuned\npostgresql.conf etc\" - all slowness dissapears with drop of bigint PK,\nsame time other indexes on same table remain alive. And yes - I've\ntried drop PK \\ recreate PK, vacuum full analyze and all other things\n- nothing helped, only drop helps.\n\nIs this known and expected behavior?\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Sun, 31 Jul 2011 16:51:47 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "On Sun, Jul 31, 2011 at 2:51 PM, Robert Ayrapetyan\n<[email protected]> wrote:\n\n> I've found strange behavior of my pg installation (tested both 8.4 and\n> 9.0 - they behave same) on FreeBSD platform.\n> In short - when some table have PK on bigint field - COPY to that\n> table from file becomes slower and slower as table grows. When table\n> reaches ~5GB - COPY of 100k records may take up to 20 mins. I've\n> experimented with all params in configs, moved indexes to separate hdd\n> etc - nothing made any improvement. However, once I'm dropping 64 bit\n> PK - COPY of 100k records passes in seconds. Interesting thing - same\n> table has other indexes, including composite ones, but none of them\n> include bigint fields, that's why I reached decision that bug\n> connected with indexes on bigint fields only.\n>\n> In terms of IO picture is following: after copy started gstat shows\n> 100% load on index partition (as I mentioned above - I've tried\n> separate hdd to keep index tablespace), large queue (over 2k\n> elements), and constant slow write on speed of ~2MB\\s. Hdd becomes\n> completely unresponsive, even ls on empty folder hangs for minute or\n> so.\n>\n> To avoid thoughts like \"your hdd is slow, you haven't tuned\n> postgresql.conf etc\" - all slowness dissapears with drop of bigint PK,\n> same time other indexes on same table remain alive. And yes - I've\n> tried drop PK \\ recreate PK, vacuum full analyze and all other things\n> - nothing helped, only drop helps.\n>\n> Is this known and expected behavior?\n\nThis is a duplicate post with one on BUGS, being discussed there.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Mon, 1 Aug 2011 09:54:45 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "31.07.11 16:51, Robert Ayrapetyan написав(ла):\n> Hello.\n>\n> I've found strange behavior of my pg installation (tested both 8.4 and\n> 9.0 - they behave same) on FreeBSD platform.\n> In short - when some table have PK on bigint field - COPY to that\n> table from file becomes slower and slower as table grows. When table\n> reaches ~5GB - COPY of 100k records may take up to 20 mins. I've\n> experimented with all params in configs, moved indexes to separate hdd\n> etc - nothing made any improvement. However, once I'm dropping 64 bit\n> PK - COPY of 100k records passes in seconds. Interesting thing - same\n> table has other indexes, including composite ones, but none of them\n> include bigint fields, that's why I reached decision that bug\n> connected with indexes on bigint fields only.\nI did see this behavior, but as for me it occurs for UNIQUE indexes only \n(including PK), not dependent on field type.\nYou can check this by dropping PK and creating it as a regular \nnon-unique index.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Mon, 01 Aug 2011 12:06:36 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
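A small sketch of the check Vitalii suggests; the table and column names here are placeholders, not the poster's schema. The idea is to keep an index on the bigint column but drop the uniqueness requirement, then re-run the COPY and compare timings.

ALTER TABLE t_data DROP CONSTRAINT t_data_pkey;
CREATE INDEX t_data_id_idx ON t_data (id);   -- same column, non-unique
-- re-run the COPY load here, then re-create the primary key if needed.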
{
"msg_contents": "Quite possible.\nBut anyway - I don't think performance degradation must be so huge in\ncase of using UNIQUE indexes.\n\nOn Mon, Aug 1, 2011 at 12:06 PM, Vitalii Tymchyshyn <[email protected]> wrote:\n> 31.07.11 16:51, Robert Ayrapetyan написав(ла):\n>>\n>> Hello.\n>>\n>> I've found strange behavior of my pg installation (tested both 8.4 and\n>> 9.0 - they behave same) on FreeBSD platform.\n>> In short - when some table have PK on bigint field - COPY to that\n>> table from file becomes slower and slower as table grows. When table\n>> reaches ~5GB - COPY of 100k records may take up to 20 mins. I've\n>> experimented with all params in configs, moved indexes to separate hdd\n>> etc - nothing made any improvement. However, once I'm dropping 64 bit\n>> PK - COPY of 100k records passes in seconds. Interesting thing - same\n>> table has other indexes, including composite ones, but none of them\n>> include bigint fields, that's why I reached decision that bug\n>> connected with indexes on bigint fields only.\n>\n> I did see this behavior, but as for me it occurs for UNIQUE indexes only\n> (including PK), not dependent on field type.\n> You can check this by dropping PK and creating it as a regular non-unique\n> index.\n>\n> Best regards, Vitalii Tymchyshyn\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Mon, 1 Aug 2011 12:15:52 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "Seems this assumption is not right. Just created simple index on\nbigint column - situation with huge performance\ndegradation repeated. Dropping this index solved COPY issues on the fly.\nSo I'm still convinced - this bug relates to FreeBSD 64-bit + UFS +\nbigint column index\n(some of these may be superfluous, but I have no resources to check on\ndifferent platforms with different filesystems).\n\nOn Mon, Aug 1, 2011 at 12:15 PM, Robert Ayrapetyan\n<[email protected]> wrote:\n> Quite possible.\n> But anyway - I don't think performance degradation must be so huge in\n> case of using UNIQUE indexes.\n>\n> On Mon, Aug 1, 2011 at 12:06 PM, Vitalii Tymchyshyn <[email protected]> wrote:\n>> 31.07.11 16:51, Robert Ayrapetyan написав(ла):\n>>>\n>>> Hello.\n>>>\n>>> I've found strange behavior of my pg installation (tested both 8.4 and\n>>> 9.0 - they behave same) on FreeBSD platform.\n>>> In short - when some table have PK on bigint field - COPY to that\n>>> table from file becomes slower and slower as table grows. When table\n>>> reaches ~5GB - COPY of 100k records may take up to 20 mins. I've\n>>> experimented with all params in configs, moved indexes to separate hdd\n>>> etc - nothing made any improvement. However, once I'm dropping 64 bit\n>>> PK - COPY of 100k records passes in seconds. Interesting thing - same\n>>> table has other indexes, including composite ones, but none of them\n>>> include bigint fields, that's why I reached decision that bug\n>>> connected with indexes on bigint fields only.\n>>\n>> I did see this behavior, but as for me it occurs for UNIQUE indexes only\n>> (including PK), not dependent on field type.\n>> You can check this by dropping PK and creating it as a regular non-unique\n>> index.\n>>\n>> Best regards, Vitalii Tymchyshyn\n>>\n>\n>\n>\n> --\n> Ayrapetyan Robert,\n> Comodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\n> http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Tue, 2 Aug 2011 11:26:42 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "02.08.11 11:26, Robert Ayrapetyan написав(ла):\n> Seems this assumption is not right. Just created simple index on\n> bigint column - situation with huge performance\n> degradation repeated. Dropping this index solved COPY issues on the fly.\n> So I'm still convinced - this bug relates to FreeBSD 64-bit + UFS +\n> bigint column index\n> (some of these may be superfluous, but I have no resources to check on\n> different platforms with different filesystems).\nInterrrresting. We also have FreeBSDx64 on UFS and are using bigint \n(bigserial) keys. It seems I will need to perform more tests here \nbecause I do see similar problems. I for sure can do a copy of data with \nint4 keys and test the performance.\nBTW: The thing we are going to try on next upgrade is to change UFS \nblock size from 16K to 8K. What problem I saw is that with default \nsetting, UFS needs to read additional 8K when postgresql writes it's \npage (and for index random writes can be vital). Unfortunately, such a \nchanges requires partition reformat and I can't afford it for now.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Tue, 02 Aug 2011 11:42:42 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "Robert Ayrapetyan <[email protected]> wrote:\n \n> So I'm still convinced - this bug relates to FreeBSD 64-bit + UFS\n> + bigint column index\n> (some of these may be superfluous, but I have no resources to\n> check on different platforms with different filesystems).\n \nLinux 64 bit XFS bigint column index only shows a slightly longer\nrun time for bigint versus int here. What timings do you get for\nthe insert statements if you run the following in your environment?\n \ncreate table bi (big bigint not null, medium int not null);\ninsert into bi with x(n) as (select generate_series(1, 1000000)\nselect n + 5000000000, n from x;\n\\timing on\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ncreate unique index bi_medium on bi (medium);\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ndrop index bi_medium;\ncreate unique index bi_big on bi (big);\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\ntruncate table bi; insert into bi with x(n) as (select\ngenerate_series(1, 1000000)) select n + 5000000000, n from x;\n\\timing off\ndrop table bi;\n \nHere's what I get:\n \nTime: 1629.141 ms\nTime: 1638.060 ms\nTime: 1711.833 ms\n \nTime: 4151.953 ms\nTime: 4602.679 ms\nTime: 5107.259 ms\n \nTime: 4654.060 ms\nTime: 5158.157 ms\nTime: 5101.110 ms\n \n-Kevin\n",
"msg_date": "Tue, 02 Aug 2011 12:41:33 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with\n\t bigint PK"
},
{
"msg_contents": "Hi.\n\nTimings for your test:\n\nfoo=# create table bi (big bigint not null, medium int not null);\nCREATE TABLE\nfoo=# insert into bi with x(n) as (select generate_series(1, 1000000))\nfoo-# select n + 5000000000, n from x;\nINSERT 0 1000000\nfoo=# \\timing on\nTiming is on.\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 211.205 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 2789.607 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 206.712 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 2959.679 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 594.584 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 3651.206 ms\nfoo=# create unique index bi_medium on bi (medium);\nCREATE INDEX\nTime: 781.407 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 42.177 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 5671.883 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 139.418 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 5668.894 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 204.479 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 6530.010 ms\nfoo=# drop index bi_medium;\nDROP INDEX\nTime: 212.038 ms\nfoo=# create unique index bi_big on bi (big);\nCREATE INDEX\nTime: 650.492 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 39.818 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 8093.276 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 282.165 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 5988.694 ms\nfoo=# truncate table bi; insert into bi with x(n) as (select\nTRUNCATE TABLE\nTime: 245.859 ms\nfoo(# generate_series(1, 1000000)) select n + 5000000000, n from x;\nINSERT 0 1000000\nTime: 5702.236 ms\nfoo=# \\timing off\nTiming is off.\n\n\nNow please perform mine:\n\nCREATE TABLESPACE tblsp_ix LOCATION '/foo';\nCREATE SCHEMA test;\nCREATE TABLE test.t\n(\n id_big bigint, --PRIMARY KEY USING INDEX TABLESPACE tblsp_ix,\n ts timestamp NOT NULL,\n ip inet,\n id_medium integer NOT NULL,\n id_small smallint NOT NULL,\n id_smalll smallint NOT NULL\n);\nCREATE INDEX ix_t ON test.t\n USING btree (ts, ip, id_medium, id_small) TABLESPACE tblsp_ix;\n\ngen_data.csh\n-------------cut here-----------------------------------------------------------\n#!/bin/tcsh\nset f = $1\nset lines_cnt = $2\nrm ${f}\nset id_big = -2147483648\nset time_t = 1000000000\nset ts = `date -r ${time_t}`\nset ip = \"127.0.0.1\"\nset id_medium = -2147483648\nset id_small = 0\nset echo_style = both\nwhile ( $lines_cnt > 0 )\n echo \"${id_big}\\t${ts}\\t${ip}\\t${id_medium}\\t${id_small}\\t${id_small}\"\n>> ${f}\n @ id_big = ${id_big} + 1\n @ time_t = ${time_t} + 1\n @ id_medium = ${id_medium} + 1\n @ lines_cnt = ${lines_cnt} - 1\nend\nexit 0\n-------------cut here-----------------------------------------------------------\n\ntime ./gen_data.csh app.data 100000\n9.564u 2.487s 0:12.05 99.9% 420+1113k 0+51io 
0pf+0w\n\ncopy_data.csh\n-------------cut here-----------------------------------------------------------\n#!/bin/tcsh\nset f = $1\nset cnt = $2\nwhile ( $cnt > 0 )\n time psql -d foo -c \"COPY test.t(id_big, ts, ip, id_medium,\nid_small, id_smalll) from '$f'\"\n @ cnt = ${cnt} - 1\nend\nexit 0\n-------------cut here-----------------------------------------------------------\n\ntime copy_data.csh /aaa/app.data 100\n...\n0.000u 0.027s 0:01.55 1.2% 474+1254k 0+0io 0pf+0w\nCOPY 100000\n...\n(~1-3 sec for every of 100 iterations with 3-4 spikes to 5 secs max)\n\nCREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;\n\ntime copy_data.csh /aaa/app.data 100\n(the show begins from iteration # ~20):\nCOPY 100000\n0.000u 0.005s 0:20.70 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:06.50 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:03.44 0.8% 704+514k 0+0io 0pf+0w\nCOPY 100000\n0.007u 0.029s 0:04.55 0.4% 808+1746k 0+0io 0pf+0w\nCOPY 100000\n0.005u 0.000s 0:03.60 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.005u 0.000s 0:02.55 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:03.03 0.9% 469+197k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:03.85 0.7% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.005u 0.000s 0:06.66 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:02.73 1.0% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:11.85 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.022s 0:02.56 0.7% 492+1238k 0+0io 0pf+0w\nCOPY 100000\n0.007u 0.022s 0:02.46 0.8% 650+1328k 0+0io 0pf+0w\nCOPY 100000\n0.006u 0.031s 0:04.71 0.6% 692+525k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.039s 0:29.10 0.1% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:36.29 0.0% 538+1164k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:43.77 0.0% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 1:01.94 0.0% 538+1164k 0+0io 0pf+0w\nCOPY 100000\n0.007u 0.029s 0:13.99 0.1% 808+2074k 0+0io 0pf+0w\nCOPY 100000\n0.003u 0.005s 0:46.02 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.031s 0:45.58 0.0% 316+836k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.038s 1:00.39 0.0% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:24.38 0.1% 538+1164k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:41.32 0.0% 538+1382k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:46.13 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.005u 0.000s 0:43.15 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:45.59 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 1:54.92 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 2:22.47 0.0% 538+1382k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 1:40.65 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.006u 0.020s 1:43.52 0.0% 650+1328k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 1:43.33 0.0% 538+1164k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 1:47.00 0.0% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 2:18.94 0.0% 538+1164k 0+0io 0pf+0w\n\nfrom that moment all iterations went for more then 1 min and I interrupted test.\n\nDROP INDEX test.ix_t_big;\n\ntime copy_data.csh /aaa/app.data 100\nCOPY 100000\n0.000u 0.005s 0:02.42 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.007u 0.029s 0:01.88 1.0% 808+2074k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:01.83 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:01.75 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:01.82 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.037s 0:01.81 1.6% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:01.84 1.6% 538+1164k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:01.86 1.6% 421+1114k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:01.77 1.6% 538+1164k 0+0io 
0pf+0w\n...\nEverything returned back to good perfomance state.\n\nWith number of rows > 50 mln all numbers in test with index on bigint column\nare multiplied on 20, while without index even on 200 mln rows speed\nremains constant (1-2 sec per 100k rows file).\n\nP.S. tried same with 2 columns (bigint and int) - it didn't produced such effect\nprobably because data volume has critical effect.\n\n\nOn Tue, Aug 2, 2011 at 8:41 PM, Kevin Grittner\n<[email protected]> wrote:\n> Robert Ayrapetyan <[email protected]> wrote:\n>\n>> So I'm still convinced - this bug relates to FreeBSD 64-bit + UFS\n>> + bigint column index\n>> (some of these may be superfluous, but I have no resources to\n>> check on different platforms with different filesystems).\n>\n> Linux 64 bit XFS bigint column index only shows a slightly longer\n> run time for bigint versus int here. What timings do you get for\n> the insert statements if you run the following in your environment?\n>\n> create table bi (big bigint not null, medium int not null);\n> insert into bi with x(n) as (select generate_series(1, 1000000)\n> select n + 5000000000, n from x;\n> \\timing on\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> create unique index bi_medium on bi (medium);\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> drop index bi_medium;\n> create unique index bi_big on bi (big);\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> truncate table bi; insert into bi with x(n) as (select\n> generate_series(1, 1000000)) select n + 5000000000, n from x;\n> \\timing off\n> drop table bi;\n>\n> Here's what I get:\n>\n> Time: 1629.141 ms\n> Time: 1638.060 ms\n> Time: 1711.833 ms\n>\n> Time: 4151.953 ms\n> Time: 4602.679 ms\n> Time: 5107.259 ms\n>\n> Time: 4654.060 ms\n> Time: 5158.157 ms\n> Time: 5101.110 ms\n>\n> -Kevin\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Wed, 3 Aug 2011 17:39:16 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
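A minimal sketch of the load-then-index pattern this thread keeps coming back to, reusing the table, column, file and tablespace names from the test above; it is an illustration of the workaround, not a fix for the underlying slowdown:

-- Drop the problematic index before the bulk load, then rebuild it once afterwards.
DROP INDEX IF EXISTS test.ix_t_big;
COPY test.t(id_big, ts, ip, id_medium, id_small, id_smalll) FROM '/aaa/app.data';
-- ...repeat COPY for the remaining data files...
CREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;

Rebuilding the index once after the load writes it sequentially instead of scattering random inserts across a structure that no longer fits in cache.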
{
"msg_contents": "Robert Ayrapetyan <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n \n>> What timings do you get for the insert statements if you run the\n>> following in your environment?\n \n>> Here's what I get:\n>>\n>> Time: 1629.141 ms\n>> Time: 1638.060 ms\n>> Time: 1711.833 ms\n>>\n>> Time: 4151.953 ms\n>> Time: 4602.679 ms\n>> Time: 5107.259 ms\n>>\n>> Time: 4654.060 ms\n>> Time: 5158.157 ms\n>> Time: 5101.110 ms\n \n> Timings for your test:\n \n> [no index]\n> Time: 2789.607 ms\n> Time: 2959.679 ms\n> Time: 3651.206 ms\n \n> [int index]\n> Time: 5671.883 ms\n> Time: 5668.894 ms\n> Time: 6530.010 ms\n \n> [bigint index]\n> Time: 8093.276 ms\n> Time: 5988.694 ms\n> Time: 5702.236 ms\n \n> [regarding tests which do show the problem]\n> tried same with 2 columns (bigint and int) - it didn't produced\n> such effect probably because data volume has critical effect.\n \nBased on what you're showing, this is almost certainly just a matter\nof pushing your volume of active data above the threshold of what\nyour cache holds, forcing it to do disk access rather than RAM\naccess for a significant portion of the reads.\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 10:59:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with\n\t bigint PK"
},
{
"msg_contents": "04.08.11 18:59, Kevin Grittner написав(ла):\n> Robert Ayrapetyan<[email protected]> wrote:\n>> Kevin Grittner<[email protected]> wrote:\n>\n>> [regarding tests which do show the problem]\n>> tried same with 2 columns (bigint and int) - it didn't produced\n>> such effect probably because data volume has critical effect.\n>\n> Based on what you're showing, this is almost certainly just a matter\n> of pushing your volume of active data above the threshold of what\n> your cache holds, forcing it to do disk access rather than RAM\n> access for a significant portion of the reads.\n>\n> -Kevin\nYep. Seems so. Plus famous \"you'd better insert data, then create indexes\".\nOn my database it takes twice the time for int8 then for int4 to insert \ndata.\nAlso it takes ~twice a time (2 hours) to add 200K of rows to 200M of \nrows than to make an index over 200M of rows (1 hour).\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Thu, 04 Aug 2011 19:11:58 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "If you look at the rest of my mail - you would notice 50 times\ndifference in performance.\nWhat you would say?\n\nOn Thu, Aug 4, 2011 at 7:11 PM, Vitalii Tymchyshyn <[email protected]> wrote:\n> 04.08.11 18:59, Kevin Grittner написав(ла):\n>>\n>> Robert Ayrapetyan<[email protected]> wrote:\n>>>\n>>> Kevin Grittner<[email protected]> wrote:\n>>\n>>> [regarding tests which do show the problem]\n>>> tried same with 2 columns (bigint and int) - it didn't produced\n>>> such effect probably because data volume has critical effect.\n>>\n>> Based on what you're showing, this is almost certainly just a matter\n>> of pushing your volume of active data above the threshold of what\n>> your cache holds, forcing it to do disk access rather than RAM\n>> access for a significant portion of the reads.\n>>\n>> -Kevin\n>\n> Yep. Seems so. Plus famous \"you'd better insert data, then create indexes\".\n> On my database it takes twice the time for int8 then for int4 to insert\n> data.\n> Also it takes ~twice a time (2 hours) to add 200K of rows to 200M of rows\n> than to make an index over 200M of rows (1 hour).\n>\n> Best regards, Vitalii Tymchyshyn\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Thu, 4 Aug 2011 20:19:03 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "Robert Ayrapetyan <[email protected]> wrote:\n \n> If you look at the rest of my mail - you would notice 50 times\n> difference in performance.\n> What you would say?\n \nThat accessing a page from RAM is more than 50 times as fast as a\nrandom access of that page from disk.\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 12:22:31 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with\n\t bigint PK"
},
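A rough way to check the working-set argument is to compare the relation and index sizes against the RAM available for caching. A sketch using the relation names from the test, to be run while ix_t_big exists:

SELECT pg_size_pretty(pg_relation_size('test.ix_t_big')) AS bigint_index_size,
       pg_size_pretty(pg_total_relation_size('test.t'))  AS table_plus_indexes;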
{
"msg_contents": "All you are saying disproves following:\n\nin experiment I replaces bigint index:\n\nCREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;\n\nwith 4 (!) other indexes:\n\nCREATE INDEX ix_t2 ON test.t USING btree (ip) TABLESPACE tblsp_ix;\nCREATE INDEX ix_t3 ON test.t USING btree (id_small) TABLESPACE tblsp_ix;\nCREATE INDEX ix_t4 ON test.t USING btree (id_smalll) TABLESPACE tblsp_ix;\nCREATE INDEX ix_t5 ON test.t USING btree (ts) TABLESPACE tblsp_ix;\n\nwhich are definitely larger then one bigint index.\n\n0.000u 0.005s 0:13.23 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.035s 0:05.08 0.5% 421+1114k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.036s 0:19.28 0.1% 526+1393k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:05.56 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.006u 0.012s 0:05.57 0.1% 984+1820k 0+0io 0pf+0w\nCOPY 100000\n0.007u 0.029s 0:05.20 0.3% 808+1746k 0+0io 0pf+0w\nCOPY 100000\n0.005u 0.000s 0:05.35 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.011s 0:05.92 0.1% 316+836k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:12.08 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.029s 0:05.46 0.3% 808+2074k 0+0io 0pf+0w\nCOPY 100000\n0.002u 0.002s 0:05.35 0.0% 0+0k 0+0io 0pf+0w\nCOPY 100000\n0.000u 0.005s 0:06.52 0.0% 0+0k 0+0io 0pf+0w\n\nInsertions became slower 4-5 times, which is ok.\n\nNothing is closer to even half of minute, while one bigint index constantly\ngives more then minute and even 2 for 100k records.\n\n\n\n\nOn Thu, Aug 4, 2011 at 8:22 PM, Kevin Grittner\n<[email protected]> wrote:\n> Robert Ayrapetyan <[email protected]> wrote:\n>\n>> If you look at the rest of my mail - you would notice 50 times\n>> difference in performance.\n>> What you would say?\n>\n> That accessing a page from RAM is more than 50 times as fast as a\n> random access of that page from disk.\n>\n> -Kevin\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Thu, 4 Aug 2011 21:33:31 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "In my tests it greatly depends on if index writes are random or sequential.\nMy test time goes down from few hours to seconds if I add to the end of\nindex.\nAs for me, best comparision would be to make two equal int4 columns with\nsame data as in int8, two indexes, then perform the test. My bet it will be\nslower than int8.\n\nЧетвер, 4 серпня 2011 р. користувач Robert Ayrapetyan <\[email protected]> написав:\n> All you are saying disproves following:\n>\n> in experiment I replaces bigint index:\n>\n> CREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;\n>\n> with 4 (!) other indexes:\n>\n>>> If you look at the rest of my mail - you would notice 50 times\n>>> difference in performance.\n>>> What you would say?\n>>\n>> That accessing a page from RAM is more than 50 times as fast as a\n>> random access of that page from disk.\n>>\n>> -Kevin\n>>\n>\n>\n>\n> --\n> Ayrapetyan Robert,\n> Comodo Anti-Malware Data Processing Analysis and Management System\n(CAMDPAMS)\n> http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n>\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nIn my tests it greatly depends on if index writes are random or sequential. My test time goes down from few hours to seconds if I add to the end of index.As for me, best comparision would be to make two equal int4 columns with same data as in int8, two indexes, then perform the test. My bet it will be slower than int8.\nЧетвер, 4 серпня 2011 р. користувач Robert Ayrapetyan <[email protected]> написав:> All you are saying disproves following:>> in experiment I replaces bigint index:\n>> CREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;>> with 4 (!) other indexes:>>>> If you look at the rest of my mail - you would notice 50 times>>> difference in performance.\n>>> What you would say?>>>> That accessing a page from RAM is more than 50 times as fast as a>> random access of that page from disk.>>>> -Kevin>>>\n>>> --> Ayrapetyan Robert,> Comodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)> http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n>-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 5 Aug 2011 10:14:50 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
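A sketch of the comparison proposed here - two int4 columns carrying the same information as the single int8 column, each with its own index; the table and index names are made up for illustration:

CREATE TABLE test.t_split (id_hi integer NOT NULL, id_lo integer NOT NULL);
CREATE INDEX ix_t_split_hi ON test.t_split USING btree (id_hi);
CREATE INDEX ix_t_split_lo ON test.t_split USING btree (id_lo);
-- load the same row volume as in the int8 test and compare COPY timings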
{
"msg_contents": "Yes, you are right. Performance become even more awful.\nCan some techniques from pg_bulkload be implemented in postgres core?\nCurrent performance is not suitable for any enterprise-wide production system.\n\n2011/8/5 Віталій Тимчишин <[email protected]>:\n>\n> In my tests it greatly depends on if index writes are random or sequential.\n> My test time goes down from few hours to seconds if I add to the end of\n> index.\n> As for me, best comparision would be to make two equal int4 columns with\n> same data as in int8, two indexes, then perform the test. My bet it will be\n> slower than int8.\n>\n> Четвер, 4 серпня 2011 р. користувач Robert Ayrapetyan\n> <[email protected]> написав:\n>> All you are saying disproves following:\n>>\n>> in experiment I replaces bigint index:\n>>\n>> CREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;\n>>\n>> with 4 (!) other indexes:\n>>\n>>>> If you look at the rest of my mail - you would notice 50 times\n>>>> difference in performance.\n>>>> What you would say?\n>>>\n>>> That accessing a page from RAM is more than 50 times as fast as a\n>>> random access of that page from disk.\n>>>\n>>> -Kevin\n>>>\n>>\n>>\n>>\n>> --\n>> Ayrapetyan Robert,\n>> Comodo Anti-Malware Data Processing Analysis and Management System\n>> (CAMDPAMS)\n>> http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n>>\n>\n> --\n> Best regards,\n> Vitalii Tymchyshyn\n>\n\n\n\n-- \nAyrapetyan Robert,\nComodo Anti-Malware Data Processing Analysis and Management System (CAMDPAMS)\nhttp://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php\n",
"msg_date": "Fri, 5 Aug 2011 11:44:04 +0300",
"msg_from": "Robert Ayrapetyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
},
{
"msg_contents": "05.08.11 11:44, Robert Ayrapetyan О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ(О©ҐО©Ґ):\n> Yes, you are right. Performance become even more awful.\n> Can some techniques from pg_bulkload be implemented in postgres core?\n> Current performance is not suitable for any enterprise-wide production system.\nBTW: I was thinking this morning about indexes.\nHow about next feature:\nImplement new index type, that will have two \"zones\" - old & new. New \nzone is of fixed configurable size, say 100 pages (800 K).\nAny search goes into both zones. So, as soon as index is larger then \n800K, the search must be done twice.\nAs soon as new zone hit's it's size limit, part (may be only one?) of \nit's pages are merged with old zone. The merge is \"rolling\" - if last \nmerge've stopped at \"X\" entry, next merge will start at entry right after X.\n\nAs for me, this should greatly resolve large index insert problem:\n1) Insert into new zone must be quick because it's small and hot in cache.\n2) During merge writes will be grouped because items with near keys (for \nB-tree) or hashes (for hash index) will go to small subset of \"old\" zone \npages. In future, merge can be also done by autovacuum in background.\nYes, we get dual index search, but new zone will be hot, so this won't \nmake it twice as costly.\n\nBest regards, Vitalii Tymchyshyn\n\n",
"msg_date": "Fri, 05 Aug 2011 13:36:38 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance die when COPYing to table with bigint PK"
}
] |
[
{
"msg_contents": "Dear all,\n\nI research a lot on Postgresql Performance Tuning and find some \nparameters to increase the select performance in postgresql.\nBy increasing shared_buffers,effective_cache_size ,work_mem, \nmaintainance etc , we can achieve performance in select queries.\n\nBut In my application about 200 connections are made to DB server and \ninsert into 2 tables occured.\nAnd it takes more than hours to complete.\n\nI understand the variable checkpoint_segments & want to know is there \nany more ways to increase the write performance.\n\n\nThanks\n",
"msg_date": "Mon, 01 Aug 2011 13:01:30 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to Speed up Insert from Multiple Connections"
},
{
"msg_contents": "Adarsh Sharma <[email protected]> wrote:\n \n> By increasing shared_buffers,effective_cache_size ,work_mem, \n> maintainance etc , we can achieve performance in select queries.\n> \n> But In my application about 200 connections are made to DB server\n> and insert into 2 tables occured.\n> And it takes more than hours to complete.\n \nUnless you have 100 cores, 200 connections is counter-productive. \nYou should probably be looking at a connection pooler which can\nroute the requests of that many clients through a connection pooled\nlimited to a number of database connections somewhere between two\nand three times the number of actual cores. Both throughput and\nresponse time will probably improve dramatically.\n \nThe other thing is that for good performance with writes, you should\nbe using a hardware RAID controller with battery-backed cache,\nconfigured fro write-back. You should also be trying to group many\nwrites into a single database transaction where that is feasible,\nparticularly when the writes are related in such a way that you\nwouldn't want to see some of them in the database without others.\n \n> I understand the variable checkpoint_segments & want to know is\n> there any more ways to increase the write performance.\n \nOne obvious omission from your list is wal_buffers, which should\nalmost always be set to 16MB. If you can afford to lose some\ntransactions after an apparently successful commit, you could look\nat turning off synchronous_commit.\n \nIf you don't mind losing the entire database on a crash, there are\nlots of other settings you could use, which is collectively often\nreferred to as \"DBAs running with scissors.\" Most people don't want\nto do that, but there are some cases where it makes sense: if there\nare redundant databases, the database is easily rebuilt from other\nsources, or the data is just not that important.\n \nI'm afraid that to get more detailed advice, you would need to\nprovide more details about the problem.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Mon, 01 Aug 2011 08:57:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Speed up Insert from Multiple Connections"
}
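A short sketch of two of the suggestions above - batching related writes into one transaction and, where losing the most recent commits in a crash is acceptable, relaxing synchronous_commit. The table and column names are placeholders, not taken from the original post:

SET synchronous_commit TO off;  -- optional, per session; data stays consistent after a crash
BEGIN;
INSERT INTO table1 (id, payload) VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO table2 (id, payload) VALUES (1, 'x'), (2, 'y');
COMMIT;  -- the whole batch needs at most one WAL flush
-- wal_buffers = 16MB is a postgresql.conf setting, not a SQL command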
] |
[
{
"msg_contents": "Hello all,\nWe are planning to test one of our products, which works with Oracle, on \nPostgreSQL. The database size is about 100 GB. It is a product with a \nnot-so-high load ( about 10 tps - mostly read). My doubts are about \nPostgreSQL settings. For Oracle, we give about 4 GB SGA (shared buffer) \nand 1.5 GB PGA (sum of session-specific memory). The machine configuration \nis \nOpteron 2CPU * 4cores @ 2.3GHz \n16GB RAM\nOS Solaris10 x64 \n\nSo far I have changed the following settings in postgresql.conf\n\nshared_buffers = 2GB \ntemp_buffers = 8MB \nwork_mem = 16MB \nmaintenance_work_mem = 32MB \nwal_level = archive \ncheckpoint_segments = 10 \ncheckpoint_completion_target = 0.7 \narchive_mode = on \neffective_cache_size = 6GB \nlog_destination = 'csvlog' \nlogging_collector = on \nlog_directory = '/backup/datapump/pgdata/log' \nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' \nlog_rotation_age = 1d \nclient_min_messages = notice \nlog_min_messages = warning \nlog_min_duration_statement = 3000 \n\nCould you please let me know the parameters I should pay attention to? Do \nthe settings mentioned above look OK?\nWe are suing weblogic. Should we let weblogic manage the connection pool \nor try something else?\n\nRegards,\nJayadevan\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello all,\nWe are planning to test one of our products,\nwhich works with Oracle, on PostgreSQL. The database size is about\n100 GB. It is a product with a not-so-high load ( about 10 tps - mostly\nread). My doubts are about PostgreSQL settings. For Oracle, we give about\n4 GB SGA (shared buffer) and 1.5 GB PGA (sum of session-specific memory).\nThe machine configuration is \nOpteron 2CPU * 4cores @ 2.3GHz \n16GB RAM\nOS Solaris10 x64 \n\nSo far I have changed the following\nsettings in postgresql.conf\n\nshared_buffers = 2GB \n \ntemp_buffers = 8MB \n \nwork_mem = 16MB \n \nmaintenance_work_mem = 32MB \n \nwal_level = archive \n \ncheckpoint_segments = 10 \n \ncheckpoint_completion_target = 0.7 \n \narchive_mode = on \n \neffective_cache_size = 6GB \n \nlog_destination = 'csvlog' \n \nlogging_collector = on \n \nlog_directory = '/backup/datapump/pgdata/log'\n \nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n\nlog_rotation_age = 1d \n \nclient_min_messages = notice \n \nlog_min_messages = warning \n \nlog_min_duration_statement = 3000 \n \n\nCould you please let me know the parameters\nI should pay attention to? Do the settings mentioned above look OK?\nWe are suing weblogic. Should we let\nweblogic manage the connection pool or try something else?\n\nRegards,\nJayadevan\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. 
If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Mon, 1 Aug 2011 17:39:34 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parameters for PostgreSQL"
},
{
"msg_contents": "On Mon, Aug 1, 2011 at 7:09 AM, Jayadevan M\n<[email protected]> wrote:\n> Hello all,\n> We are planning to test one of our products, which works with Oracle, on\n> PostgreSQL. The database size is about 100 GB. It is a product with a\n> not-so-high load ( about 10 tps - mostly read). My doubts are about\n> PostgreSQL settings. For Oracle, we give about 4 GB SGA (shared buffer) and\n> 1.5 GB PGA (sum of session-specific memory). The machine configuration is\n> Opteron 2CPU * 4cores @ 2.3GHz\n> 16GB RAM\n> OS Solaris10 x64\n>\n> So far I have changed the following settings in postgresql.conf\n>\n> shared_buffers = 2GB\n> temp_buffers = 8MB\n> work_mem = 16MB\n> maintenance_work_mem = 32MB\n> wal_level = archive\n> checkpoint_segments = 10\n> checkpoint_completion_target = 0.7\n> archive_mode = on\n> effective_cache_size = 6GB\n> log_destination = 'csvlog'\n> logging_collector = on\n> log_directory = '/backup/datapump/pgdata/log'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_rotation_age = 1d\n> client_min_messages = notice\n> log_min_messages = warning\n> log_min_duration_statement = 3000\n>\n> Could you please let me know the parameters I should pay attention to? Do\n> the settings mentioned above look OK?\n\nThe settings above look ok. I would consider raising\nmaintenance_work_mem much higher, say to 1gb. I personally don't like\nthe timestamp encoded into the log filename and do something much\nsimpler, like:\nlog_filename = 'postgresql-%d.log'\n\nand set the logs to truncate on rotation.\n\n> We are suing weblogic. Should we let weblogic manage the connection pool or\n> try something else?\n\nDon't have a experience with weblogic, but at 10 tps, it doesn't\nmatter a whole lot. I'd consider sticking with what you've got unless\nyou have a good reason to change it.\n\nmerlin\n",
"msg_date": "Mon, 1 Aug 2011 15:25:50 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parameters for PostgreSQL"
},
{
"msg_contents": "On 1/08/2011 8:09 PM, Jayadevan M wrote:\n\n> The machine configuration is\n> Opteron 2CPU * 4cores @ 2.3GHz\n> 16GB RAM\n> OS Solaris10 x64\n\nThe most important spec has been omitted. What's the storage subsystem? \nFor most database workloads that's *WAY* more important than the CPUs.\n\nIt certainly will be for yours, since your 100GB database won't fit into \n16GB of RAM, so you'll be doing a lot of disk I/O.\n\n> Could you please let me know the parameters I should pay attention to?\n> Do the settings mentioned above look OK?\n\nThere's nothing obviously dangerous like a giant work_mem setting.\n\nLike Oracle, it's very much a matter of tuning to your machine and \nworkload. Parameters right for one workload will be less than ideal for \nanother. If at all possible, do some load testing with a dev instance \nand tweak based on that.\n\nThe most recent book on Pg performance is Greg Smith's \"PostgreSQL High \nPerformance\" and it's had very positive comments on this list. It might \nbe worth a look.\n\n> We are suing weblogic.\n ^^^^^\nBest. Typo. Ever.\n\nI hear most people who use it want to, you're just brave enough to do it :-P\n\n> Should we let weblogic manage the connection pool\n> or try something else?\n\nIn Glassfish 3.1 and JBoss 7 I let the app server manage the connection \npool. Assuming Weblogic works as well - which I'd hope - you should be \nfine doing the same.\n\nPostgreSQL doesn't have any built-in pooling or admission control - each \n\"connection\" is also an executor backend and isn't especially cheap to \nhave around, so you don't want hundreds and hundreds of them.\nIf your app hangs on to connections from the pool for a long time, you \nmight land up wanting to use an external thin pooler like pgpool-II. I \nwouldn't worry about anything like this unless you start getting \nmax_connections exceeded exceptions via your pooler, though, as it \nshouldn't be an issue for most EE apps with a container-powered \nconnection pool.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Tue, 02 Aug 2011 07:57:42 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parameters for PostgreSQL"
},
{
"msg_contents": "Hello,\n\n>The most important spec has been omitted. What's the storage subsystem? \nWe have storage on SAN, RAID 5.\n \n> > We are suing weblogic.\n> ^^^^^\n> Best. Typo. Ever.\n> \n> I hear most people who use it want to, you're just brave enough to do it \n:-P\nI wish I could make a few millions that way.\n\n\nThank you for all the replies. The first step is, of course, to migrate \nthe data. I am working with ora2pg for that. I assume creating files with \n'COPY' to work as input for PostgreSQL is the right approach? We don't \nhave many stored procedures or packages. So that part should be OK.\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n\n>The most important spec has been omitted. What's the storage subsystem?\n\nWe have storage on SAN, RAID 5.\n \n> > We are suing weblogic.\n> ^^^^^\n> Best. Typo. Ever.\n> \n> I hear most people who use it want to, you're just brave enough to\ndo it :-P\nI wish I could make a few millions that way.\n\n\nThank you for all the replies. The first step is,\nof course, to migrate the data. I am working with ora2pg for that. I assume\ncreating files with 'COPY' to work as input for PostgreSQL is the right\napproach? We don't have many stored procedures or packages. So that part\nshould be OK.\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Thu, 4 Aug 2011 09:12:28 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parameters for PostgreSQL"
},
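For the data-migration step, a minimal example of loading a flat file with COPY; the schema, table and file names here are hypothetical, and the exact file format depends on how ora2pg is configured to export the data:

COPY app_schema.customers FROM '/backup/datapump/pgdata/customers.dat';
-- from a client machine without server-side file access, psql's \copy does the same:
-- \copy app_schema.customers FROM 'customers.dat'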
{
"msg_contents": "On 04/08/11 11:42, Jayadevan M wrote:\n> Hello,\n>\n> >The most important spec has been omitted. What's the storage subsystem?\n> We have storage on SAN, RAID 5.\n\nRAID 5? That's *really* not ideal for database workloads, either Pg or\nOracle, unless your RAID 5 storage backend has enough battery-backed\nwrite cache to keep huge amounts of writes in RAM and reorder them\nreally effectively.\n\nI hope each RAID 5 LUN is only across a few disks and is layered with\nRAID 1, though. RAID 5 becomes less reliable than using a single disk\nwhen used with too many HDDs, because the probability of a double-disk\nfailure becomes greater than that of a single standalone disk failing.\nAfter being bitten by that a few times, these days I'm using RAID 6 in\nmost cases where RAID 10 isn't practical.\n\nIn any case, \"SAN\" can be anything from a Linux box running an iSCSI\ntarget on top of a RAID 5 `md' software RAID volume on four 5400RPM\nHDDs, right up to a giant hundreds-of-fast-disks monster filer full of\ndedicated ASICs and great gobs of battery backed write cache DRAM. Are\nyou able to be any more specific about what you're dealing with?\n\n> \n> > > We are suing weblogic.\n> > ^^^^^\n> > Best. Typo. Ever.\n> >\n> > I hear most people who use it want to, you're just brave enough to\n> do it :-P\n> I wish I could make a few millions that way.\n>\n>\n> Thank you for all the replies. The first step is, of course, to\n> migrate the data. I am working with ora2pg for that. I assume creating\n> files with 'COPY' to work as input for PostgreSQL is the right\n> approach? We don't have many stored procedures or packages. So that\n> part should be OK.\n\n\n\n\n\n\n\n On 04/08/11 11:42, Jayadevan M wrote:\n \n\n Hello, \n\n \n >The most important spec has been omitted. What's the storage\n subsystem? \n\n We have storage on SAN, RAID 5.\n\n\n RAID 5? That's *really* not ideal for database workloads, either Pg\n or Oracle, unless your RAID 5 storage backend has enough\n battery-backed write cache to keep huge amounts of writes in RAM and\n reorder them really effectively.\n\n I hope each RAID 5 LUN is only across a few disks and is layered\n with RAID 1, though. RAID 5 becomes less reliable than using a\n single disk when used with too many HDDs, because the probability of\n a double-disk failure becomes greater than that of a single\n standalone disk failing. After being bitten by that a few times,\n these days I'm using RAID 6 in most cases where RAID 10 isn't\n practical.\n\n In any case, \"SAN\" can be anything from a Linux box running an iSCSI\n target on top of a RAID 5 `md' software RAID volume on four 5400RPM\n HDDs, right up to a giant hundreds-of-fast-disks monster filer full\n of dedicated ASICs and great gobs of battery backed write cache\n DRAM. Are you able to be any more specific about what you're dealing\n with?\n\n \n > > We are suing weblogic.\n > ^^^^^\n > Best. Typo. Ever.\n > \n > I hear most people who use it want to, you're just brave\n enough to\n do it :-P\n I wish I could make a few millions that way. \n\n\n\n Thank you for all the replies. The first step is,\n of course, to migrate the data. I am working with ora2pg for\n that. I assume\n creating files with 'COPY' to work as input for PostgreSQL is\n the right\n approach? We don't have many stored procedures or packages. So\n that part\n should be OK.",
"msg_date": "Thu, 04 Aug 2011 11:54:48 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parameters for PostgreSQL"
},
{
"msg_contents": "I think RAID 10 is best among all the RAID Levels.\n\n\nThanks\n\n\nCraig Ringer wrote:\n> On 04/08/11 11:42, Jayadevan M wrote:\n>> Hello,\n>>\n>> >The most important spec has been omitted. What's the storage subsystem?\n>> We have storage on SAN, RAID 5.\n>\n> RAID 5? That's *really* not ideal for database workloads, either Pg or \n> Oracle, unless your RAID 5 storage backend has enough battery-backed \n> write cache to keep huge amounts of writes in RAM and reorder them \n> really effectively.\n>\n> I hope each RAID 5 LUN is only across a few disks and is layered with \n> RAID 1, though. RAID 5 becomes less reliable than using a single disk \n> when used with too many HDDs, because the probability of a double-disk \n> failure becomes greater than that of a single standalone disk failing. \n> After being bitten by that a few times, these days I'm using RAID 6 in \n> most cases where RAID 10 isn't practical.\n>\n> In any case, \"SAN\" can be anything from a Linux box running an iSCSI \n> target on top of a RAID 5 `md' software RAID volume on four 5400RPM \n> HDDs, right up to a giant hundreds-of-fast-disks monster filer full of \n> dedicated ASICs and great gobs of battery backed write cache DRAM. Are \n> you able to be any more specific about what you're dealing with?\n>\n>> \n>> > > We are suing weblogic.\n>> > ^^^^^\n>> > Best. Typo. Ever.\n>> >\n>> > I hear most people who use it want to, you're just brave enough to \n>> do it :-P\n>> I wish I could make a few millions that way.\n>>\n>>\n>> Thank you for all the replies. The first step is, of course, to \n>> migrate the data. I am working with ora2pg for that. I assume \n>> creating files with 'COPY' to work as input for PostgreSQL is the \n>> right approach? We don't have many stored procedures or packages. So \n>> that part should be OK.\n>\n>\n\n\n\n\n\n\n\nI think RAID 10 is best among all the RAID Levels.\n\n\nThanks\n\n\nCraig Ringer wrote:\n\n\nOn 04/08/11 11:42, Jayadevan M wrote:\n \n\n Hello, \n \n>The most important spec has been omitted. What's the storage\nsubsystem? \n We have storage on SAN, RAID 5.\n\n\nRAID 5? That's *really* not ideal for database workloads, either Pg or\nOracle, unless your RAID 5 storage backend has enough battery-backed\nwrite cache to keep huge amounts of writes in RAM and reorder them\nreally effectively.\n\nI hope each RAID 5 LUN is only across a few disks and is layered with\nRAID 1, though. RAID 5 becomes less reliable than using a single disk\nwhen used with too many HDDs, because the probability of a double-disk\nfailure becomes greater than that of a single standalone disk failing.\nAfter being bitten by that a few times, these days I'm using RAID 6 in\nmost cases where RAID 10 isn't practical.\n\nIn any case, \"SAN\" can be anything from a Linux box running an iSCSI\ntarget on top of a RAID 5 `md' software RAID volume on four 5400RPM\nHDDs, right up to a giant hundreds-of-fast-disks monster filer full of\ndedicated ASICs and great gobs of battery backed write cache DRAM. Are\nyou able to be any more specific about what you're dealing with?\n\n \n> > We are suing weblogic.\n> ^^^^^\n> Best. Typo. Ever.\n> \n> I hear most people who use it want to, you're just brave enough to\ndo it :-P\nI wish I could make a few millions that way. \n\n\n Thank you for all the replies. The first step is, of course,\nto migrate the data. I am working with ora2pg for that. I assume\ncreating files with 'COPY' to work as input for PostgreSQL is the right\napproach? 
We don't have many stored procedures or packages. So that\npart should be OK.",
"msg_date": "Thu, 04 Aug 2011 10:45:41 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parameters for PostgreSQL"
},
{
"msg_contents": "\n\nOn 8/3/2011 11:03 PM, Craig Ringer wrote:\n> great gobs of battery backed write cache DRAM.\n\nNow I know what I'm asking Santa for Christmas this year!\n\n-Andy\n",
"msg_date": "Thu, 04 Aug 2011 08:57:59 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parameters for PostgreSQL"
}
] |
[
{
"msg_contents": "Can a transaction committed asynchronously report an error, duplicate key or\nsomething like that, causing a client with a OK transaction but server with\na FAILED transaction.\n\n \n\nThanks\n\n \n\n\nCan a transaction committed asynchronously report an error, duplicate key or something like that, causing a client with a OK transaction but server with a FAILED transaction. Thanks",
"msg_date": "Mon, 1 Aug 2011 09:29:37 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "synchronous_commit off"
},
{
"msg_contents": "On 08/01/2011 09:29 AM, Anibal David Acosta wrote:\n>\n> Can a transaction committed asynchronously report an error, duplicate \n> key or something like that, causing a client with a OK transaction but \n> server with a FAILED transaction.\n>\n>\n\nNo. You are turning off the wait for the transaction to hit disk before \nreturning to the client, but all the validation checks are done before \nthat. The sole risk with synchronous_commit off is that a client will \nget COMMIT, but the server will lose the transaction completely--when \nthere's a crash before it's written to disk.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 08/01/2011 09:29 AM, Anibal David Acosta wrote:\n\n\n\n\n\nCan a transaction committed asynchronously\nreport an error, duplicate key or something like that, causing a client\nwith a OK transaction but server with a FAILED transaction.\n\n\n\n\nNo. You are turning off the wait for the transaction to hit disk\nbefore returning to the client, but all the validation checks are done\nbefore that. The sole risk with synchronous_commit off is that a\nclient will get COMMIT, but the server will lose the transaction\ncompletely--when there's a crash before it's written to disk.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Mon, 01 Aug 2011 15:52:45 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: synchronous_commit off"
},
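Worth noting alongside this: synchronous_commit can also be relaxed per transaction instead of server-wide, so only the writes that can tolerate loss take the risk. A sketch, with a hypothetical table:

BEGIN;
SET LOCAL synchronous_commit TO off;  -- affects only this transaction
INSERT INTO audit_log (msg) VALUES ('non-critical event');
COMMIT;  -- returns without waiting for the WAL flush; constraint errors still surface as usual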
{
"msg_contents": "the application doesn't manage money or something really really critical, so\nI can live with the \"in case of crash\" that is not a normal behavior J\n\n \n\n \n\nThanks.\n\n \n\n \n\n \n\nDe: [email protected]\n[mailto:[email protected]] En nombre de Greg Smith\nEnviado el: lunes, 01 de agosto de 2011 03:53 p.m.\nPara: [email protected]\nAsunto: Re: [PERFORM] synchronous_commit off\n\n \n\nOn 08/01/2011 09:29 AM, Anibal David Acosta wrote: \n\nCan a transaction committed asynchronously report an error, duplicate key or\nsomething like that, causing a client with a OK transaction but server with\na FAILED transaction.\n\n \n\n\nNo. You are turning off the wait for the transaction to hit disk before\nreturning to the client, but all the validation checks are done before that.\nThe sole risk with synchronous_commit off is that a client will get COMMIT,\nbut the server will lose the transaction completely--when there's a crash\nbefore it's written to disk.\n\n\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\nthe application doesn't manage money or something really really critical, so I can live with the “in case of crash” that is not a normal behavior J Thanks. De: [email protected] [mailto:[email protected]] En nombre de Greg SmithEnviado el: lunes, 01 de agosto de 2011 03:53 p.m.Para: [email protected]: Re: [PERFORM] synchronous_commit off On 08/01/2011 09:29 AM, Anibal David Acosta wrote: Can a transaction committed asynchronously report an error, duplicate key or something like that, causing a client with a OK transaction but server with a FAILED transaction. No. You are turning off the wait for the transaction to hit disk before returning to the client, but all the validation checks are done before that. The sole risk with synchronous_commit off is that a client will get COMMIT, but the server will lose the transaction completely--when there's a crash before it's written to disk.-- Greg Smith 2ndQuadrant US [email protected] Baltimore, MDPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Mon, 1 Aug 2011 16:05:39 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: synchronous_commit off"
},
{
"msg_contents": "On 2/08/2011 3:52 AM, Greg Smith wrote:\n> On 08/01/2011 09:29 AM, Anibal David Acosta wrote:\n>>\n>> Can a transaction committed asynchronously report an error, duplicate \n>> key or something like that, causing a client with a OK transaction \n>> but server with a FAILED transaction.\n>>\n>>\n>\n> No. You are turning off the wait for the transaction to hit disk \n> before returning to the client, but all the validation checks are done \n> before that. The sole risk with synchronous_commit off is that a \n> client will get COMMIT, but the server will lose the transaction \n> completely--when there's a crash before it's written to disk.\n\nWhat about an I/O error during write, or other HW-level issues that \nmight cause a transaction to fail to commit while others proceed fine?\n\n--\nCraig Ringer\n\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n\n\n\n\n\n On 2/08/2011 3:52 AM, Greg Smith wrote:\n \n\n On 08/01/2011 09:29 AM, Anibal David Acosta wrote:\n \n\n\n\n\nCan a transaction committed\n asynchronously\n report an error, duplicate key or something like that,\n causing a client\n with a OK transaction but server with a FAILED transaction.\n\n\n\n\n No. You are turning off the wait for the transaction to hit disk\n before returning to the client, but all the validation checks are\n done\n before that. The sole risk with synchronous_commit off is that a\n client will get COMMIT, but the server will lose the transaction\n completely--when there's a crash before it's written to disk.\n\n\n What about an I/O error during write, or other HW-level issues that\n might cause a transaction to fail to commit while others proceed\n fine?\n\n --\n Craig Ringer\n\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/",
"msg_date": "Tue, 02 Aug 2011 07:49:11 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: synchronous_commit off"
},
{
"msg_contents": "No: The commit has the same guarantees as a synchronous commit w.r.t. data consistency. The commit can only fail (as a whole) due to hardware problems or postgres backend crashes. \n\n\nAnd yes: The client commit returns, but the server can fail later and not persist the transaction and it will be lost (again as a whole).\n\nYour application should be able to tolerate losing the latest committed transactions if you use this.\n\nThe difference to fsync=off is that a server crash will leave the database is a consistent state with just the latest transactions lost.\n\n\n\n________________________________\nFrom: Anibal David Acosta <[email protected]>\nTo: [email protected]\nSent: Monday, August 1, 2011 6:29 AM\nSubject: [PERFORM] synchronous_commit off\n\n\nCan a transaction committed asynchronously report an error, duplicate key or something like that, causing a client with a OK transaction but server with a FAILED transaction.\n \nThanks\nNo: The commit has the same guarantees as a synchronous commit w.r.t. data consistency. The commit can only fail (as a whole) due to hardware problems or postgres backend crashes. And yes: The client commit returns, but the server can fail later and not persist the transaction and it will be lost (again as a whole).Your application should be able to tolerate losing the latest committed transactions if you use this.The difference to fsync=off is that a server crash will leave the database is a consistent state with just the latest transactions lost.From: Anibal David Acosta <[email protected]>To: [email protected]: Monday, August 1, 2011 6:29 AMSubject: [PERFORM] synchronous_commit offCan a transaction committed asynchronously report an error, duplicate key or something like that, causing a client with a OK transaction but server with a FAILED transaction. Thanks",
"msg_date": "Tue, 2 Aug 2011 17:08:08 -0700 (PDT)",
"msg_from": "lars hofhansl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: synchronous_commit off"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nThis is my first post on this list, I tried to look after possible solutions in the archive, as well as in google, but I could not find an explanation for such a specific situation.\n\nI am facing a performance problem connected with Postgres Tsearch2 FTS mechanism.\n\nHere is my query:\n\nselect participant.participant_id from participant participant\njoin person person on person.person_participant_id = participant.participant_id\nleft join registration registration on registration.registration_registered_participant_id = participant.participant_id\nleft join enrollment enrollment on registration.registration_enrollment_id = enrollment.enrollment_id\njoin registration_configuration registration_configuration on enrollment.enrollment_configuration_id = registration_configuration.configuration_id\nleft join event_context context on context.context_id = registration_configuration.configuration_context_id \nwhere participant.participant_type = 'PERSON'\nand participant_status = 'ACTIVE'\nand context.context_code in ('GB2TST2010A')\t\t\t\t\nand registration_configuration.configuration_type in ('VISITOR')\nand registration_configuration.configuration_id is not null\nand participant.participant_tsv || person.person_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail'))\nlimit 100\n\nAs you see, I am using two vectors which I concatenate and check against a tsquery. \n\nBoth vectors are indexed with GIN and updated with respective triggers in the following way:\n\nALTER TABLE person ALTER COLUMN person_tsv SET STORAGE EXTENDED; \nCREATE INDEX person_ft_index ON person USING gin(person_tsv); \nCREATE OR REPLACE FUNCTION update_person_tsv() RETURNS trigger AS $$ BEGIN NEW.person_tsv := to_tsvector('simple',create_tsv( ARRAY[NEW.person_first_name, NEW.person_last_name, NEW.person_middle_name] )); RETURN NEW; END; $$ LANGUAGE 'plpgsql';\nCREATE TRIGGER person_tsv_update BEFORE INSERT or UPDATE ON person FOR EACH ROW EXECUTE PROCEDURE update_person_tsv();\n\nALTER TABLE participant ALTER COLUMN participant_tsv SET STORAGE EXTENDED; \nCREATE INDEX participant_ft_index ON participant USING gin(participant_tsv); \nCREATE OR REPLACE FUNCTION update_participant_tsv() RETURNS trigger AS $$ BEGIN NEW.participant_tsv := to_tsvector('simple',create_tsv( ARRAY[NEW.participant_login, NEW.participant_email] )); RETURN NEW; END; $$ LANGUAGE 'plpgsql';\nCREATE TRIGGER participant_tsv_update BEFORE INSERT or UPDATE ON participant FOR EACH ROW EXECUTE PROCEDURE update_participant_tsv();\n\nThe database is quite big - has almost one million of participant records. The above query has taken almost 67 seconds to execute and fetch 100 rows, which is unacceptable for us.\n\nAs I assume, the problem is, when the vectors are concatenated, the individual indexes for each vector are not used. 
The execution plan done after 1st execution of the query:\n\n\"Limit (cost=46063.13..93586.79 rows=100 width=4) (actual time=4963.620..39703.645 rows=100 loops=1)\"\n\" -> Nested Loop (cost=46063.13..493736.04 rows=942 width=4) (actual time=4963.617..39703.349 rows=100 loops=1)\"\n\" Join Filter: (registration_configuration.configuration_id = enrollment.enrollment_configuration_id)\"\n\" -> Nested Loop (cost=46063.13..493662.96 rows=3769 width=8) (actual time=4963.517..39701.557 rows=159 loops=1)\"\n\" -> Nested Loop (cost=46063.13..466987.33 rows=3769 width=8) (actual time=4963.498..39698.542 rows=159 loops=1)\"\n\" -> Hash Join (cost=46063.13..430280.76 rows=4984 width=8) (actual time=4963.464..39692.676 rows=216 loops=1)\"\n\" Hash Cond: (participant.participant_id = person.person_participant_id)\"\n\" Join Filter: ((participant.participant_tsv || person.person_tsv) @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\" -> Seq Scan on participant (cost=0.00..84680.85 rows=996741 width=42) (actual time=0.012..3132.944 rows=1007151 loops=1)\"\n\" Filter: (((participant_type)::text = 'PERSON'::text) AND ((participant_status)::text = 'ACTIVE'::text))\"\n\" -> Hash (cost=25495.39..25495.39 rows=1012539 width=38) (actual time=3145.628..3145.628 rows=1007151 loops=1)\"\n\" Buckets: 2048 Batches: 128 Memory Usage: 556kB\"\n\" -> Seq Scan on person (cost=0.00..25495.39 rows=1012539 width=38) (actual time=0.062..1582.990 rows=1007151 loops=1)\"\n\" -> Index Scan using idx_registration_registered_participant_id on registration (cost=0.00..7.35 rows=1 width=8) (actual time=0.018..0.019 rows=1 loops=216)\"\n\" Index Cond: (registration.registration_registered_participant_id = person.person_participant_id)\"\n\" -> Index Scan using enrollment_pkey on enrollment (cost=0.00..7.07 rows=1 width=8) (actual time=0.011..0.013 rows=1 loops=159)\"\n\" Index Cond: (enrollment.enrollment_id = registration.registration_enrollment_id)\"\n\" -> Materialize (cost=0.00..16.55 rows=1 width=4) (actual time=0.002..0.005 rows=2 loops=159)\"\n\" -> Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual time=0.056..0.077 rows=2 loops=1)\"\n\" Join Filter: (registration_configuration.configuration_context_id = context.context_id)\"\n\" -> Index Scan using idx_configuration_type on registration_configuration (cost=0.00..8.27 rows=1 width=8) (actual time=0.018..0.022 rows=3 loops=1)\"\n\" Index Cond: ((configuration_type)::text = 'VISITOR'::text)\"\n\" Filter: (configuration_id IS NOT NULL)\"\n\" -> Index Scan using idx_event_context_code on event_context context (cost=0.00..8.27 rows=1 width=4) (actual time=0.008..0.010 rows=1 loops=3)\"\n\" Index Cond: ((context.context_code)::text = 'GB2TST2010A'::text)\"\n\"Total runtime: 39775.578 ms\"\n\nThe assumption seems to be correct, no indexes on vectors are used - sequence scans are done instead:\n\nJoin Filter: ((participant.participant_tsv || person.person_tsv) @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\" -> Seq Scan on participant (cost=0.00..84680.85 rows=996741 width=42) (actual time=0.012..3132.944 rows=1007151 loops=1)\"\n\" Filter: (((participant_type)::text = 'PERSON'::text) AND ((participant_status)::text = 'ACTIVE'::text))\"\n\" -> Hash (cost=25495.39..25495.39 rows=1012539 width=38) (actual time=3145.628..3145.628 rows=1007151 loops=1)\"\n\" Buckets: 2048 Batches: 128 Memory Usage: 556kB\"\n\" -> Seq Scan on person (cost=0.00..25495.39 rows=1012539 width=38) (actual time=0.062..1582.990 rows=1007151 
loops=1)\"\n\n\nAfter I removed one of the vectors from the query and used only a single vector \n...\nand person.person_tsv @@ to_tsquery('simple', to_tsquery('simple',to_tsquerystring('Abigail'))\n...\nthen the execution was much faster - about 5 seconds\n\nPlan afterwards:\n\n\"Limit (cost=41.14..8145.82 rows=100 width=4) (actual time=3.776..13.454 rows=100 loops=1)\"\n\" -> Nested Loop (cost=41.14..21923.77 rows=270 width=4) (actual time=3.773..13.248 rows=100 loops=1)\"\n\" -> Nested Loop (cost=41.14..19730.17 rows=270 width=8) (actual time=3.760..11.971 rows=100 loops=1)\"\n\" Join Filter: (registration_configuration.configuration_id = enrollment.enrollment_configuration_id)\"\n\" -> Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual time=0.051..0.051 rows=1 loops=1)\"\n\" Join Filter: (registration_configuration.configuration_context_id = context.context_id)\"\n\" -> Index Scan using idx_configuration_type on registration_configuration (cost=0.00..8.27 rows=1 width=8) (actual time=0.020..0.022 rows=2 loops=1)\"\n\" Index Cond: ((configuration_type)::text = 'VISITOR'::text)\"\n\" Filter: (configuration_id IS NOT NULL)\"\n\" -> Index Scan using idx_event_context_code on event_context context (cost=0.00..8.27 rows=1 width=4) (actual time=0.008..0.009 rows=1 loops=2)\"\n\" Index Cond: ((context.context_code)::text = 'GB2TST2010A'::text)\"\n\" -> Nested Loop (cost=41.14..19700.12 rows=1080 width=12) (actual time=3.578..11.431 rows=269 loops=1)\"\n\" -> Nested Loop (cost=41.14..12056.27 rows=1080 width=12) (actual time=3.568..8.203 rows=269 loops=1)\"\n\" -> Bitmap Heap Scan on person (cost=41.14..3687.07 rows=1080 width=4) (actual time=3.553..4.401 rows=346 loops=1)\"\n\" Recheck Cond: (person_tsv @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\" -> Bitmap Index Scan on person_ft_index (cost=0.00..40.87 rows=1080 width=0) (actual time=3.353..3.353 rows=1060 loops=1)\"\n\" Index Cond: (person_tsv @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\" -> Index Scan using idx_registration_registered_participant_id on registration (cost=0.00..7.74 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=346)\"\n\" Index Cond: (registration.registration_registered_participant_id = person.person_participant_id)\"\n\" -> Index Scan using enrollment_pkey on enrollment (cost=0.00..7.07 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=269)\"\n\" Index Cond: (enrollment.enrollment_id = registration.registration_enrollment_id)\"\n\" -> Index Scan using participant_pkey on participant (cost=0.00..8.11 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=100)\"\n\" Index Cond: (participant.participant_id = person.person_participant_id)\"\n\" Filter: (((participant.participant_type)::text = 'PERSON'::text) AND ((participant.participant_status)::text = 'ACTIVE'::text))\"\n\"Total runtime: 13.858 ms\"\n\nNow the index on vector was used:\n\n\"Recheck Cond: (person_tsv @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\" -> Bitmap Index Scan on person_ft_index (cost=0.00..40.87 rows=1080 width=0) (actual time=3.353..3.353 rows=1060 loops=1)\"\n\" Index Cond: (person_tsv @@ to_tsquery('simple'::regconfig, to_tsquerystring('Abigail'::text)))\"\n\nSo, there is apparently a problem with vector concatenating - the indexes don't work then. 
I tried to use the vectors separately and to make an 'OR' comparison between single-vector @@ ts_query checks,\nbut it didn't help very much (performance was better, but still over 20 sec):\n...\n(participant.participant_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail'))) OR (person.person_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail'))) \n...\n\nIs there a way to make this work with better performance? Or is it necessary to create a single vector that contains data from multiple tables and then add an index on it? That would be rather problematic for us,\nbecause we are using multiple complex queries with a variable number of selected columns. I know that another solution might be a union of multiple queries, each of which uses a single vector,\nbut this solution is inconvenient too.\n\nGreetings\n\nJan\n",
"msg_date": "Tue, 02 Aug 2011 08:22:36 +0200",
"msg_from": "=?UTF-8?Q?Jan_Wielgus?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Tsearch2_-_bad_performance_with_concatenated?=\n\t=?UTF-8?Q?_ts-vectors?="
},
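A minimal sketch of the UNION rewrite Jan mentions, assuming the table and column names from his query and that GIN indexes exist on both participant.participant_tsv and person.person_tsv (the plan above shows person_ft_index; the participant-side index is an assumption). The registration/enrollment/context joins are omitted for brevity, and a plain to_tsquery literal stands in for the custom to_tsquerystring() helper:

-- Each branch searches a single indexed tsvector column, so both GIN
-- indexes stay usable; the concatenation participant_tsv || person_tsv
-- never appears in the predicate.
SELECT pa.participant_id
FROM participant pa
JOIN person pe ON pe.person_participant_id = pa.participant_id
WHERE pa.participant_type = 'PERSON'
  AND pa.participant_status = 'ACTIVE'
  AND pa.participant_id IN (
        SELECT participant_id
        FROM participant
        WHERE participant_tsv @@ to_tsquery('simple', 'Abigail')
        UNION
        SELECT person_participant_id
        FROM person
        WHERE person_tsv @@ to_tsquery('simple', 'Abigail')
      )
LIMIT 100;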
{
"msg_contents": "On 02/08/11 18:22, Jan Wielgus wrote:\n> select participant.participant_id from participant participant\n> join person person on person.person_participant_id = participant.participant_id\n> left join registration registration on registration.registration_registered_participant_id = participant.participant_id\n> left join enrollment enrollment on registration.registration_enrollment_id = enrollment.enrollment_id\n> join registration_configuration registration_configuration on enrollment.enrollment_configuration_id = registration_configuration.configuration_id\n> left join event_context context on context.context_id = registration_configuration.configuration_context_id\n> where participant.participant_type = 'PERSON'\n> and participant_status = 'ACTIVE'\n> and context.context_code in ('GB2TST2010A')\t\t\t\t\n> and registration_configuration.configuration_type in ('VISITOR')\n> and registration_configuration.configuration_id is not null\n> and participant.participant_tsv || person.person_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail'))\n> limit 100\n\nI am experimenting with formatting styles, especially relating to \njoins. Because I have poor eyesight: visual clues are important, so \nthat I can focus on key points. Hence the use of abbreviations, naming \nconventions, and careful indenting. (I found this especially \nimportant, when I had to write a stored procedure with some 3K lines of \nSybase TransactSQL!) I also use uppercase key words, but I have not \nbothered here.\n\nSo I would like people's opinions on how I have reformatted the above.\n\n\nselect\n participant.participant_id\nfrom\n participant pa\n join person pe\n on pe.person_participant_id = pa.participant_id\n left join registration re\n on re.registration_registered_participant_id = pa.participant_id\n left join enrollment en\n on re.registration_enrollment_id = en.enrollment_id\n join registration_configuration rc\n on en.enrollment_configuration_id = rc.configuration_id\n left join event_context ec\n on ec.context_id = rc.configuration_context_id\nwhere\n pa.participant_type = 'PERSON' and\n pa.participant_status = 'ACTIVE' and\n ec.context_code in ('GB2TST2010A') and\n rc.configuration_type in ('VISITOR') and\n rc.configuration_id is not null and\n pa.participant_tsv || pe.person_tsv @@ \nto_tsquery('simple',to_tsquerystring('Abigail'))\nlimit 100\n\n\nCheers,\nGavin\n",
"msg_date": "Wed, 03 Aug 2011 11:01:58 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tsearch2 - bad performance with concatenated ts-vectors"
},
{
"msg_contents": "Gavin Flower <[email protected]> wrote:\n \n> I am experimenting with formatting styles, especially relating to \n> joins. Because I have poor eyesight: visual clues are important,\n> so that I can focus on key points. Hence the use of\n> abbreviations, naming conventions, and careful indenting.\n \n> So I would like people's opinions on how I have reformatted [...]\n \nThis is a little off-topic for a performance list, but I have to\nadmit a lot of sympathy, since I have to do the same for similar\nreasons. I found that keeping operators near the left margin and\nalways having matching parentheses be on the same line or in the\nsame column helps me tremendously. (This style tends not to be so\npopular for younger coders with stronger eyes.)\n \nI tend to go more this way:\n \nselect\n participant.participant_id\n from participant pa\n join person pe\n on pe.person_participant_id = pa.participant_id\n left join registration re\n on re.registration_registered_participant_id\n = pa.participant_id\n left join enrollment en\n on re.registration_enrollment_id = en.enrollment_id\n join registration_configuration rc\n on en.enrollment_configuration_id = rc.configuration_id\n left join event_context ec\n on ec.context_id = rc.configuration_context_id\n where pa.participant_type = 'PERSON'\n and pa.participant_status = 'ACTIVE'\n and ec.context_code in ('GB2TST2010A')\n and rc.configuration_type in ('VISITOR')\n and rc.configuration_id is not null\n and pa.participant_tsv || pe.person_tsv\n @@ to_tsquery('simple',to_tsquerystring('Abigail'))\n limit 100\n;\n \nWith multiple ON conditions I like to do this:\n \n JOIN some_table st\n ON ( st.this_column = ot.this_column\n AND st.another_column = ot.another_column\n AND ( st.feathers > ot.feathers\n OR st.lead_ingots > ot.lead_ingots\n )\n )\n \nI often line up the operators and column-names, too, if it seems\neasier on the eyes.\n \nThe above won't look very sensible in a proportional font.\n \nI doubt this will become a dominant coding convention; it's just\nwhat works for me. ;-)\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 16:16:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tsearch2 - bad performance with concatenated\n\t ts-vectors"
},
{
"msg_contents": "On Tue, Aug 2, 2011 at 2:22 AM, Jan Wielgus <[email protected]> wrote:\n> So, there is apparently a problem with vector concatenating - the indexes don't work then. I tried to use the vectors separately and to make 'OR' comparison between single vector @@ ts_query checks,\n> but it didn't help very much (performance was better, but still over 20 sec):\n> ...\n> (participant.participant_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail'))) OR (person.person_tsv @@ to_tsquery('simple',to_tsquerystring('Abigail')))\n> ...\n>\n> Is there a way to make this work with better performance? Or is it necessary to create a single vector that contains data from multiple tables and then add an index on it? It would be so far problematic for us,\n> because we are using multiple complex queries with variable number of selected columns. I know that another solution might be an union among multiple queries, every of which uses a single vector,\n> but this solution is inconvenient too.\n\nOnly something of the form 'indexed-column indexable-operator value'\nis going to be indexable. So when you concatenate two columns from\ndifferent tables - as you say - not indexable.\n\nIn general, OR-based conditions that cross table boundaries tend to be\nexpensive, because they have to be applied only after performing the\njoin. You can't know for sure looking only at a row from one table\nwhether or not it will be needed, so you have to join them all and\nthen filter the results.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 24 Oct 2011 15:31:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tsearch2 - bad performance with concatenated ts-vectors"
}
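Building on Robert's point that only "indexed-column indexable-operator value" is indexable, a hedged sketch of the denormalised alternative discussed earlier in the thread: keep the combined search data in a single column so the predicate touches one indexed column. The column name, index name, and backfill below are invented for illustration, and a trigger would still be needed to keep the column current as either table changes.

-- Hypothetical combined search column kept on person
ALTER TABLE person ADD COLUMN combined_tsv tsvector;

-- One-off backfill from both source tables
UPDATE person pe
SET combined_tsv = pa.participant_tsv || pe.person_tsv
FROM participant pa
WHERE pa.participant_id = pe.person_participant_id;

CREATE INDEX person_combined_ft_index ON person USING gin (combined_tsv);

-- The search predicate then becomes single-column and indexable:
--   WHERE pe.combined_tsv @@ to_tsquery('simple', 'Abigail')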
] |
[
{
"msg_contents": "Dear all,\n\nJust want to know which join is better for querying data faster.\n\nI have 2 tables A ( 70 GB ) & B ( 7 MB )\n\nA has 10 columns & B has 3 columns.Indexes exist on both tables's ids.\n\nselect p.* from table A p, B q where p.id=q.id\n\nor\n\nselect p.* from table B q , A p where q.id=p.id\n\n\nThanks\n",
"msg_date": "Tue, 02 Aug 2011 12:12:11 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which Join is better"
},
{
"msg_contents": "El Martes 02 Agosto 2011, Adarsh Sharma escribió:\n> Dear all,\n> \n> Just want to know which join is better for querying data faster.\n> \n> I have 2 tables A ( 70 GB ) & B ( 7 MB )\n> \n> A has 10 columns & B has 3 columns.Indexes exist on both tables's ids.\n> \n> select p.* from table A p, B q where p.id=q.id\n> \n> or\n> \n> select p.* from table B q , A p where q.id=p.id\n> \n> \n> Thanks\n\n\nHi Adarsh,\n\nWhat does a \"EXPLAIN ANALYZE\" say after a VACCUM?\n\n-- \nMaría Arias de Reyna Domínguez\nÁrea de Operaciones\n\nEmergya Consultoría \nTfno: +34 954 51 75 77 / +34 607 43 74 27\nFax: +34 954 51 64 73 \nwww.emergya.es \n",
"msg_date": "Tue, 2 Aug 2011 08:47:02 +0200",
"msg_from": "Maria Arias de Reyna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which Join is better"
},
{
"msg_contents": "On 2 August 2011 08:42, Adarsh Sharma <[email protected]> wrote:\n\n> Dear all,\n>\n> Just want to know which join is better for querying data faster.\n>\n> I have 2 tables A ( 70 GB ) & B ( 7 MB )\n>\n> A has 10 columns & B has 3 columns.Indexes exist on both tables's ids.\n>\n> select p.* from table A p, B q where p.id=q.id\n>\n> or\n>\n> select p.* from table B q , A p where q.id=p.id\n>\n>\n>\nHi,\nit really doesn't matter. PostgreSQL can reorder the joins as it likes.\nAnd you can always check, but I think the plans will be the same.\n\nregards\nSzymon\n\nOn 2 August 2011 08:42, Adarsh Sharma <[email protected]> wrote:\nDear all,\n\nJust want to know which join is better for querying data faster.\n\nI have 2 tables A ( 70 GB ) & B ( 7 MB )\n\nA has 10 columns & B has 3 columns.Indexes exist on both tables's ids.\n\nselect p.* from table A p, B q where p.id=q.id\n\nor\n\nselect p.* from table B q , A p where q.id=p.id\nHi,it really doesn't matter. PostgreSQL can reorder the joins as it likes.And you can always check, but I think the plans will be the same.\nregardsSzymon",
"msg_date": "Tue, 2 Aug 2011 08:47:41 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which Join is better"
},
{
"msg_contents": "Unless you use the explicit join syntax:\n\n\nselect p.* from A p join B q on (p.id = q.id)\n\nand also set join_collapse_limit= 1\nThe order of the joins is determined by the planner.\n\n\nAlso explain is your friend :)\n\n________________________________\nFrom: Adarsh Sharma <[email protected]>\nTo: [email protected]\nSent: Monday, August 1, 2011 11:42 PM\nSubject: [PERFORM] Which Join is better\n\nDear all,\n\nJust want to know which join is better for querying data faster.\n\nI have 2 tables A ( 70 GB ) & B ( 7 MB )\n\nA has 10 columns & B has 3 columns.Indexes exist on both tables's ids.\n\nselect p.* from table A p, B q where p.id=q.id\n\nor\n\nselect p.* from table B q , A p where q.id=p.id\n\n\nThanks\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Aug 2011 17:16:26 -0700 (PDT)",
"msg_from": "lars hofhansl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which Join is better"
}
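Since lars suggests that EXPLAIN is your friend here, a small illustrative session; the table names a and b stand in for the 70 GB and 7 MB tables from the question, so this is a sketch rather than the actual schema:

-- With the default collapse limits, both spellings normally produce the same plan.
EXPLAIN SELECT p.* FROM a p, b q WHERE p.id = q.id;
EXPLAIN SELECT p.* FROM b q, a p WHERE q.id = p.id;

-- Forcing the written order with explicit JOIN syntax, as lars describes:
SET join_collapse_limit = 1;
EXPLAIN SELECT p.* FROM a p JOIN b q ON (p.id = q.id);
RESET join_collapse_limit;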
] |
[
{
"msg_contents": "Hi,\n\nI'm looking for a hint how array access performs in PostgreSQL in respect to performance. Normally I would expect access of a 1-dimensional Array at slot i (array[i]) to perform in constant time (random access).\n\nIs this also true for postgres' arrays?\n\nMay concrete example is a 1-dimensional array d of length <= 600 (which will grow at a rate of 1 entry/day) stored in a table's column. I need to access this array two times per tuple, i.e. d[a], d[b]. Therefore I hope access is not linear. Is this correct?\n\nAlso I'm having some performance issues building this array. I'm doing this with a used-defined aggregate function, starting with an empty array and using array_append and some calculation for each new entry. I assume this involves some copying/memory allocation on each call, but I could not find the implementation of array_append in postgres-source/git. \n\nIs there an efficient way to append to an array? I could also start with a pre-initialized array of the required length, but this involves some complexity.\n\nThank you\n\nRegards,\nAndreas\n",
"msg_date": "Tue, 2 Aug 2011 15:00:08 +0200 (CEST)",
"msg_from": "Andreas Brandl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Array access performance"
},
{
"msg_contents": "\n> Is this also true for postgres' arrays?\n\nSorry, I'm using latest postgres 9.0.4 on debian squeeze/amd64.\n\nGreetings\nAndreas\n",
"msg_date": "Tue, 2 Aug 2011 15:11:52 +0200 (CEST)",
"msg_from": "Andreas Brandl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Array access performance"
},
{
"msg_contents": "Andreas Brandl <[email protected]> writes:\n> I'm looking for a hint how array access performs in PostgreSQL in respect to performance. Normally I would expect access of a 1-dimensional Array at slot i (array[i]) to perform in constant time (random access).\n\n> Is this also true for postgres' arrays?\n\nOnly if the element type is fixed-length (no strings for instance) and\nthe array does not contain, and never has contained, any nulls.\nOtherwise a scan through all the previous elements is required to find\na particular element.\n\nBy and large, if you're thinking of using arrays large enough to make\nthis an interesting question, I would say stop right there and redesign\nyour database schema. You're not thinking relationally, and it's gonna\ncost ya.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Aug 2011 10:49:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array access performance "
},
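To make Tom's conditions concrete, a small self-contained sketch using a null-free bigint[] (bigint is fixed-length, so positional access does not have to walk the earlier elements). If the per-entry calculation can be done before aggregation, array_agg also sidesteps the repeated array_append calls from the original user-defined aggregate:

-- Build a 600-element bigint array in one pass
CREATE TEMP TABLE daily_values AS
SELECT array_agg(v::bigint ORDER BY v) AS d
FROM generate_series(1, 600) AS g(v);

-- Positional access into a fixed-length, null-free array
SELECT d[1], d[300], d[600] FROM daily_values;

-- A text[] column, or a single NULL element, would instead force a scan
-- through the preceding elements to locate position i.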
{
"msg_contents": "Hi Tom,\n\n> > I'm looking for a hint how array access performs in PostgreSQL in\n> > respect to performance. Normally I would expect access of a\n> > 1-dimensional Array at slot i (array[i]) to perform in constant time\n> > (random access).\n> \n> > Is this also true for postgres' arrays?\n> \n> Only if the element type is fixed-length (no strings for instance) and\n> the array does not contain, and never has contained, any nulls.\n> Otherwise a scan through all the previous elements is required to find\n> a particular element.\n\nWe're using bigint elements here and don't have nulls, so this should be fine.\n\n> By and large, if you're thinking of using arrays large enough to make\n> this an interesting question, I would say stop right there and\n> redesign\n> your database schema. You're not thinking relationally, and it's gonna\n> cost ya.\n\nIn general, I agree. We're having a nice relational database but are facing some perfomance issues. My approach is to build a materialized view which exploits the array feature and heavily relies on constant time access on arrays.\n\nThank you!\n\nRegards,\nAndreas\n",
"msg_date": "Tue, 2 Aug 2011 17:16:09 +0200 (CEST)",
"msg_from": "Andreas Brandl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Array access performance"
}
] |
[
{
"msg_contents": "Hi All ,\n\n\nCan you please help me out with the following questions .\n\n\n\nOur application is running on Postgres 7.4.X . I agree that this is a very\nold version of Postgres and we should have upgraded . The issue that we\nfaced is that\n\n\n\n1 . There was a system crash due to a hardware failure .\n\n\n\n2 . When the system came up , we tried to insert a few records into the\ndatabase . However at this point in time we saw that Postgres was taking a\nlot of CPU & memory .\n\n\n\nAround 42% CPU consumption . This was a cause of concern .\n\n\n\n3 . We re-indexed the database and it helped reduce the cpu & memory\nconsumption .\n\n\n\nMy question is\n\n\n\nA ) Isn’t Postgres database resilient enough to handle hardware system\nfailure ? or it sometime results in a corrupt index for its tables ? I read\non the Postgres site that hardware failure can cause corrupt indexes .\nBesides this are there any other scenario which may result in such\ncorruption .\n\nB) If there has been improvement / enhancements done by Postgres regarding\nthe way it handles corrupt indexes can you please pass me more information\nabout the bug Id or some documentation on it ? Our application does not do\nany REINDEXING . I am in a dilemma if we should seriously incorporate it in\nour application .\n\n\n\nI ideally want to push to a higher version of Postgres . If I can prove that\nthere will be significant performance benefits and that crashes won’t occur\nthen I will be able to present a strong case .\n\n\nSince my question is related to Performance & Data corruption i saw on the\nPostgres site that i should provide the following information\n\n\nAddition Info :\n\n\nCPU manufacturer and model : Intel's Itanium Processor\n\nDo you use a RAID controller? yes\n\n\nPCIe SAS SmartArray P410i RAID Controller\n\nPCIe SAS SmartArray P411 RAID Controller\n\n\nIs is Write back caching enabled ?\n\n Total Cache Size (MB)............... 144\n Read Cache........................ N/A\n\n Write Cache....................... N/A\n\nNo of disks : 4\n\n\nHave you *ever* set fsync=off in the postgresql config file?\n\n#fsync = true # turns forced synchronization on or off\n\nI never changed it .\n\n\nHave you had any unexpected power loss lately? Replaced a failed RAID disk?\nHad an operating system crash? Yes system crashed had occured .\n\n\nHope this information helps .\n\n\nRegards,\n\nSumeet\n\nHi All ,Can you please help me out with the following questions .\n \nOur application is running on Postgres 7.4.X . I agree that\nthis is a very old version of Postgres and we should have upgraded . The issue\nthat we faced is that \n \n1 . There was a system crash due to a hardware failure . \n \n2 . When the system came up , we tried to insert a few\nrecords into the database . However at this point in time we saw that Postgres\nwas taking a lot of CPU & memory .\n \nAround 42% CPU consumption . This was a cause of concern . \n \n3 . We re-indexed the database and it helped reduce the cpu\n& memory consumption . \n \nMy question is \n \nA ) Isn’t Postgres database resilient enough to handle\nhardware system failure ? or it sometime results in a corrupt index for its\ntables ? I read on the Postgres site that hardware failure can cause corrupt\nindexes . Besides this are there any other scenario which may result in such\ncorruption . 
\nB) If there has been improvement / enhancements done by\nPostgres regarding the way it handles corrupt indexes can you please pass me\nmore information about the bug Id or some documentation on it ? Our\napplication does not do any REINDEXING . I am in a dilemma if we should\nseriously incorporate it in our application . \n \nI ideally want to push to a higher version of Postgres . If\nI can prove that there will be significant performance benefits and that\ncrashes won’t occur then I will be able to present a strong case . Since my question is related to Performance & Data corruption i saw on the Postgres site that i should provide the following information\nAddition Info :CPU manufacturer and model : Intel's Itanium Processor Do you use a RAID controller? yes \nPCIe SAS SmartArray P410i RAID ControllerPCIe SAS SmartArray P411 RAID ControllerIs is Write back caching enabled ? \n Total Cache Size (MB)............... 144 Read Cache........................ N/A Write Cache....................... N/ANo of disks : 4\nHave you ever set fsync=off in the postgresql config file?#fsync = true # turns forced synchronization on or off\nI never changed it . Have you had any unexpected power loss lately? Replaced a failed RAID disk? Had an operating system crash? Yes system crashed had occured . \nHope this information helps . Regards,Sumeet",
"msg_date": "Wed, 3 Aug 2011 13:05:29 +0530",
"msg_from": "Sumeet Jauhar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suspected Postgres Datacorruption"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 1:35 AM, Sumeet Jauhar <[email protected]> wrote:\n>\n>\n> Our application is running on Postgres 7.4.X . I agree that this is a very\n> old version of Postgres and we should have upgraded . The issue that we\n> faced is that\n\nWow, that is a very old version. It has been out of maintenance for a\nlong time. If there are data eating bugs in it they aren't gonna get\nfixed.\n\n> 1 . There was a system crash due to a hardware failure .\n>\n> 2 . When the system came up , we tried to insert a few records into the\n> database . However at this point in time we saw that Postgres was taking a\n> lot of CPU & memory .\n>\n> Around 42% CPU consumption . This was a cause of concern .\n>\n> 3 . We re-indexed the database and it helped reduce the cpu & memory\n> consumption .\n>\n> My question is\n>\n> A ) Isn’t Postgres database resilient enough to handle hardware system\n> failure ? or it sometime results in a corrupt index for its tables ? I read\n> on the Postgres site that hardware failure can cause corrupt indexes .\n> Besides this are there any other scenario which may result in such\n> corruption .\n\nDepends on the hardware failure. If your RAID controller starts\nwriting garbage to the drive array, how exactly should postgresql fix\nthat? OTOH, if you just have a big boom and the power supply goes\nout, most the time you're fine. Of course, if the drive subsystem is\nlying about fsyncs, then postgresql can't guarantee your data anyway.\nSo, it really depends on your hardware. Standard test to make sure\nyour hardware is ok is to install postgresql, start a lot of\ntransactions at once, and walk around back and pull the power plug.\nIf it comes back up a half dozen times without errors you're probably\nok, but hey, there could still be a corner case out there too. Bonus\npoints if you initiate checkpoint that'll take a few minutes before\nyou pull the plug, increasing the chance you'll find problems.\n\nWith 7.4 there's a real likelihood that there are data loss bugs in\nthere that have never been fixed and never will be.\n\n> B) If there has been improvement / enhancements done by Postgres regarding\n> the way it handles corrupt indexes can you please pass me more information\n> about the bug Id or some documentation on it ? Our application does not do\n> any REINDEXING . I am in a dilemma if we should seriously incorporate it in\n> our application .\n\nOf course, there's been lots of improvements since 7.4 But being a\ndatabase when it encounters errors it tries not to guess too much\nabout what you want. IS a reindex the right thing to do? Maybe,\nmaybe not. That's the job of the DBA to figure out. Regular\nreindexing is not needed and if your particular machine does need it\nyou need to figure out why and change it so that it's not needed. If\nindexes are getting corrupted, chances are so are tables and you'll\nnotice too late.\n\n> I ideally want to push to a higher version of Postgres . If I can prove that\n> there will be significant performance benefits and that crashes won’t occur\n> then I will be able to present a strong case .\n\nHehe. It would be hard to NOT get significant performance\nimprovements moving from 7.4 to 9.0. Heck our load on our production\nservers went from 12 to 3 or so when we went from 8.1 to 8.3. 
Saved\nus a ton on what we would have had to spend to keep 8.1 happy.\nInstall a test version of 9.0 on a laptop, point your test servers at\nit, and watch it outrun your production database for 90% of everything\nyou do.\n\nWe run 8.3 and 8.4 in production and they are literally light years\nahead of 7.4 in terms of stability, performance, and capabilities.\nPlus when you find a problem in one of them, it gets fixed, fast.\nThey're still supported. Just that would be enough to justify an\nupgrade for me.\n",
"msg_date": "Thu, 4 Aug 2011 15:21:57 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
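A rough sketch of one way to generate write activity for the plug-pull test Scott describes; the table name is invented, it assumes a server new enough to have generate_series, and CHECKPOINT requires superuser. Run the insert, issue the checkpoint, cut the power while the server is still busy, then check that the cluster recovers and the table is readable:

CREATE TABLE plug_test (
    id      serial PRIMARY KEY,
    payload text,
    created timestamptz DEFAULT now()
);

-- Sustained write load
INSERT INTO plug_test (payload)
SELECT repeat('x', 200)
FROM generate_series(1, 1000000);

CHECKPOINT;  -- widen the window in which a crash is most likely to bite

-- After power is restored and recovery has completed:
SELECT count(*), max(id) FROM plug_test;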
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Scott Marlowe\n> Sent: Thursday, August 04, 2011 5:22 PM\n> To: Sumeet Jauhar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Suspected Postgres Datacorruption\n> \n> > I ideally want to push to a higher version of Postgres . If I can\n> prove that\n> > there will be significant performance benefits and that crashes won't\n> occur\n> > then I will be able to present a strong case .\n> \n> Hehe. It would be hard to NOT get significant performance\n> improvements moving from 7.4 to 9.0. Heck our load on our production\n> servers went from 12 to 3 or so when we went from 8.1 to 8.3. Saved\n> us a ton on what we would have had to spend to keep 8.1 happy.\n> Install a test version of 9.0 on a laptop, point your test servers at\n> it, and watch it outrun your production database for 90% of everything\n> you do.\n\nAt a previous engagement, when we moved from 7.4 to 8.1 we saw a huge drop in transaction times. I don't remember the numbers but it was substantial. We also suffered very badly from checkpoint problems with 7.4, and we were able to tune them out in 8.1. When we went from 8.1 to 8.3, there wasn't an improvement in response times but we were able to deliver the same level of performance using a fraction of the I/O (due to HOT, autovacuum improvements the checkpoint smoothing stuff).\n\nWe also ran 7.4 for quite a while (on reliable hardware), and never had any corruption problems except for some index corruption issues - but that bug was pretty obscure and was fixed in 7.4\n\nBrad.\n",
"msg_date": "Thu, 4 Aug 2011 22:47:09 +0100",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Nicholson, Brad (Toronto, ON, CA)\n> Sent: Thursday, August 04, 2011 5:47 PM\n> To: Scott Marlowe; Sumeet Jauhar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Suspected Postgres Datacorruption\n> \n> \n> We also ran 7.4 for quite a while (on reliable hardware), and never had\n> any corruption problems except for some index corruption issues - but\n> that bug was pretty obscure and was fixed in 7.4\n\nBy the way - to the original person asking about 7.4 do not view this as an endorsement. I would not trust my data to 7.4 any longer.\n\nBrad.\n",
"msg_date": "Thu, 4 Aug 2011 23:00:42 +0100",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "Yes the very fact that we are using a very very old version of Postgres is\ncertainly causing alot of problems .\n\nOn Fri, Aug 5, 2011 at 2:51 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Aug 3, 2011 at 1:35 AM, Sumeet Jauhar <[email protected]>\n> wrote:\n> >\n> >\n> > Our application is running on Postgres 7.4.X . I agree that this is a\n> very\n> > old version of Postgres and we should have upgraded . The issue that we\n> > faced is that\n>\n> Wow, that is a very old version. It has been out of maintenance for a\n> long time. If there are data eating bugs in it they aren't gonna get\n> fixed.\n>\n> [ Sumeet ] i plan to propose an upgrade soon . This data corruption issue\nseems to be the best push / driver for me to go ahead and implement it .\n\n\n> > 1 . There was a system crash due to a hardware failure .\n> >\n> > 2 . When the system came up , we tried to insert a few records into the\n> > database . However at this point in time we saw that Postgres was taking\n> a\n> > lot of CPU & memory .\n> >\n> > Around 42% CPU consumption . This was a cause of concern .\n> >\n> > 3 . We re-indexed the database and it helped reduce the cpu & memory\n> > consumption .\n> >\n> > My question is\n> >\n> > A ) Isn’t Postgres database resilient enough to handle hardware system\n> > failure ? or it sometime results in a corrupt index for its tables ? I\n> read\n> > on the Postgres site that hardware failure can cause corrupt indexes .\n> > Besides this are there any other scenario which may result in such\n> > corruption .\n>\n> Depends on the hardware failure. If your RAID controller starts\n> writing garbage to the drive array, how exactly should postgresql fix\n> that? OTOH, if you just have a big boom and the power supply goes\n> out, most the time you're fine. Of course, if the drive subsystem is\n> lying about fsyncs, then postgresql can't guarantee your data anyway.\n> So, it really depends on your hardware. Standard test to make sure\n> your hardware is ok is to install postgresql, start a lot of\n> transactions at once, and walk around back and pull the power plug.\n> If it comes back up a half dozen times without errors you're probably\n> ok, but hey, there could still be a corner case out there too. Bonus\n> points if you initiate checkpoint that'll take a few minutes before\n> you pull the plug, increasing the chance you'll find problems.\n>\n> With 7.4 there's a real likelihood that there are data loss bugs in\n> there that have never been fixed and never will be.\n>\n\n [ Sumeet ] The scenario that you have pointed out . ie to go back and\nunplug the powersupply while there are database operations going on seems a\ngood test case . I will do that and see what possibly happens . A faulty\nRAID on the system is bound to cause problems . I agree . It will manifest\nitself in someway .\n\n>\n> > B) If there has been improvement / enhancements done by Postgres\n> regarding\n> > the way it handles corrupt indexes can you please pass me more\n> information\n> > about the bug Id or some documentation on it ? Our application does not\n> do\n> > any REINDEXING . I am in a dilemma if we should seriously incorporate it\n> in\n> > our application .\n>\n> Of course, there's been lots of improvements since 7.4 But being a\n> database when it encounters errors it tries not to guess too much\n> about what you want. IS a reindex the right thing to do? Maybe,\n> maybe not. That's the job of the DBA to figure out. 
Regular\n> reindexing is not needed and if your particular machine does need it\n> you need to figure out why and change it so that it's not needed. If\n> indexes are getting corrupted, chances are so are tables and you'll\n> notice too late.\n>\n> [ Sumee ] Thanks . i was of the opinion that re-indexing could be\nincorporated as a precautionary thing everytime the system crashes . However\nthe hard part is to do it only when the system crashes . and the harder part\nis to know that the system has actually crashed and its not a simple reboot\n.\nDBA should help me . WIll do that .\n\n\n> > I ideally want to push to a higher version of Postgres . If I can prove\n> that\n> > there will be significant performance benefits and that crashes won’t\n> occur\n> > then I will be able to present a strong case .\n>\n> Hehe. It would be hard to NOT get significant performance\n> improvements moving from 7.4 to 9.0. Heck our load on our production\n> servers went from 12 to 3 or so when we went from 8.1 to 8.3. Saved\n> us a ton on what we would have had to spend to keep 8.1 happy.\n> Install a test version of 9.0 on a laptop, point your test servers at\n> it, and watch it outrun your production database for 90% of everything\n> you do.\n>\n> We run 8.3 and 8.4 in production and they are literally light years\n> ahead of 7.4 in terms of stability, performance, and capabilities.\n> Plus when you find a problem in one of them, it gets fixed, fast.\n> They're still supported. Just that would be enough to justify an\n> upgrade for me.\n>\n\n [ Sumeet ] ok so i agree we need to move ahead and shift to a higher\nversion . But how do we decide that . Which one would you say is the\nstablest version of Postgres [ still supported version ] out in the market\nbelow beacuse Brad here says his 8.1 version did have performance impacts\n. Brad - How had you decide on the version . Was it the latest version\navailable at that point in time or there was someother reason ? I am also\npretty sure that upgrading 2 times would not have been easy .\n\nYes the very fact that we are using a very very old version of Postgres is certainly causing alot of problems . On Fri, Aug 5, 2011 at 2:51 AM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Aug 3, 2011 at 1:35 AM, Sumeet Jauhar <[email protected]> wrote:\n\n>\n>\n> Our application is running on Postgres 7.4.X . I agree that this is a very\n> old version of Postgres and we should have upgraded . The issue that we\n> faced is that\n\nWow, that is a very old version. It has been out of maintenance for a\nlong time. If there are data eating bugs in it they aren't gonna get\nfixed.\n[ Sumeet ] i plan to propose an upgrade soon . This data corruption issue seems to be the best push / driver for me to go ahead and implement it . \n\n> 1 . There was a system crash due to a hardware failure .\n>\n> 2 . When the system came up , we tried to insert a few records into the\n> database . However at this point in time we saw that Postgres was taking a\n> lot of CPU & memory .\n>\n> Around 42% CPU consumption . This was a cause of concern .\n>\n> 3 . We re-indexed the database and it helped reduce the cpu & memory\n> consumption .\n>\n> My question is\n>\n> A ) Isn’t Postgres database resilient enough to handle hardware system\n> failure ? or it sometime results in a corrupt index for its tables ? 
I read\n> on the Postgres site that hardware failure can cause corrupt indexes .\n> Besides this are there any other scenario which may result in such\n> corruption .\n\nDepends on the hardware failure. If your RAID controller starts\nwriting garbage to the drive array, how exactly should postgresql fix\nthat? OTOH, if you just have a big boom and the power supply goes\nout, most the time you're fine. Of course, if the drive subsystem is\nlying about fsyncs, then postgresql can't guarantee your data anyway.\nSo, it really depends on your hardware. Standard test to make sure\nyour hardware is ok is to install postgresql, start a lot of\ntransactions at once, and walk around back and pull the power plug.\nIf it comes back up a half dozen times without errors you're probably\nok, but hey, there could still be a corner case out there too. Bonus\npoints if you initiate checkpoint that'll take a few minutes before\nyou pull the plug, increasing the chance you'll find problems.\n\nWith 7.4 there's a real likelihood that there are data loss bugs in\nthere that have never been fixed and never will be. [ Sumeet ] The scenario that you have pointed out . ie to go back and unplug the powersupply while there are database operations going on seems a good test case . I will do that and see what possibly happens . A faulty RAID on the system is bound to cause problems . I agree . It will manifest itself in someway . \n\n\n> B) If there has been improvement / enhancements done by Postgres regarding\n> the way it handles corrupt indexes can you please pass me more information\n> about the bug Id or some documentation on it ? Our application does not do\n> any REINDEXING . I am in a dilemma if we should seriously incorporate it in\n> our application .\n\nOf course, there's been lots of improvements since 7.4 But being a\ndatabase when it encounters errors it tries not to guess too much\nabout what you want. IS a reindex the right thing to do? Maybe,\nmaybe not. That's the job of the DBA to figure out. Regular\nreindexing is not needed and if your particular machine does need it\nyou need to figure out why and change it so that it's not needed. If\nindexes are getting corrupted, chances are so are tables and you'll\nnotice too late.\n [ Sumee ] Thanks . i was of the opinion that re-indexing could be incorporated as a precautionary thing everytime the system crashes . However the hard part is to do it only when the system crashes . and the harder part is to know that the system has actually crashed and its not a simple reboot . \nDBA should help me . WIll do that . \n> I ideally want to push to a higher version of Postgres . If I can prove that\n> there will be significant performance benefits and that crashes won’t occur\n> then I will be able to present a strong case .\n\nHehe. It would be hard to NOT get significant performance\nimprovements moving from 7.4 to 9.0. Heck our load on our production\nservers went from 12 to 3 or so when we went from 8.1 to 8.3. Saved\nus a ton on what we would have had to spend to keep 8.1 happy.\nInstall a test version of 9.0 on a laptop, point your test servers at\nit, and watch it outrun your production database for 90% of everything\nyou do.\n\nWe run 8.3 and 8.4 in production and they are literally light years\nahead of 7.4 in terms of stability, performance, and capabilities.\nPlus when you find a problem in one of them, it gets fixed, fast.\nThey're still supported. 
Just that would be enough to justify an\nupgrade for me.\n [ Sumeet ] ok so i agree we need to move ahead and shift to a higher version . But how do we decide that . Which one would you say is the stablest version of Postgres [ still supported version ] out in the market below beacuse Brad here says his 8.1 version did have performance impacts . Brad - How had you decide on the version . Was it the latest version available at that point in time or there was someother reason ? I am also pretty sure that upgrading 2 times would not have been easy .",
"msg_date": "Fri, 5 Aug 2011 08:03:31 +0530",
"msg_from": "Sumeet Jauhar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 8:33 PM, Sumeet Jauhar <[email protected]> wrote:\n>\n> [ Sumeet ] ok so i agree we need to move ahead and shift to a higher\n> version . But how do we decide that . Which one would you say is the\n> stablest version of Postgres [ still supported version ] out in the market\n> below beacuse Brad here says his 8.1 version did have performance impacts\n> . Brad - How had you decide on the version . Was it the latest version\n> available at that point in time or there was someother reason ? I am also\n> pretty sure that upgrading 2 times would not have been easy .\n\nI would upgrade to either 8.2 or 9.0 and here's my reasons. with 8.2\nyou still have implicit casts, which your application may depend upon.\n Most other changes between 7.4 and 8.2 were pretty small, so if\nyou've got a lot of implicit casts in your SQL, 8.2 will be the least\npainful of the upgrades to late model pgsqls. HOWEVER, 8.2 is getting\npretty old now and performance wise 9.0 will pretty handily beat it.\nIn terms of stability, there are no reports of any versions after\nabout 8.1 or 8.2 being particularly unstable, but keep in mind that\nsupport for 8.1 and 8.2 will be ending / may have ended already, so if\nyou can possibly test against 9.0 and see if it works well enough,\nthen you should really do so. The changes to things like autovacuum\ngetting multi-threaded (8.3) HOT updates (8.3) on disk tracking of\nfree space map (8.4) and a few other big breakthroughs make going to\nthe latest (9.0) or near latest (8.4) much more attractive. And trust\nme, you WILL feel the difference in performance, it's huge from 7.4 to\n8.3 and after that the incremental changes are noticeable, if not as\nhuge.\n",
"msg_date": "Thu, 4 Aug 2011 20:40:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 8:40 PM, Scott Marlowe <[email protected]> wrote:\n> then you should really do so. The changes to things like autovacuum\n> getting multi-threaded (8.3) HOT updates (8.3) on disk tracking of\n\nWait, multithreaded autovac may have been put in place in 8.2 .\nAnyway, my points still stand, just might be off a version here or\nthere.\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Thu, 4 Aug 2011 20:41:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "Thank you . Scott and Brad . Valuable information for sure . I plan to\nbrowse through the documentation for Postgres 9 and identify all the\npotential advantages that it will bring to our application . As\nrightly pointed out 8.2 may be on the path to obsolescence .\n\nOn Friday, August 5, 2011, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 4, 2011 at 8:40 PM, Scott Marlowe <[email protected]> wrote:\n>> then you should really do so. The changes to things like autovacuum\n>> getting multi-threaded (8.3) HOT updates (8.3) on disk tracking of\n>\n> Wait, multithreaded autovac may have been put in place in 8.2 .\n> Anyway, my points still stand, just might be off a version here or\n> there.\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n",
"msg_date": "Fri, 5 Aug 2011 08:28:55 +0530",
"msg_from": "Sumeet Jauhar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> I would upgrade to either 8.2 or 9.0 and here's my reasons. with 8.2\n> you still have implicit casts, which your application may depend upon.\n> Most other changes between 7.4 and 8.2 were pretty small, so if\n> you've got a lot of implicit casts in your SQL, 8.2 will be the least\n> painful of the upgrades to late model pgsqls. HOWEVER, 8.2 is getting\n> pretty old now and performance wise 9.0 will pretty handily beat it.\n> In terms of stability, there are no reports of any versions after\n> about 8.1 or 8.2 being particularly unstable, but keep in mind that\n> support for 8.1 and 8.2 will be ending / may have ended already, so if\n> you can possibly test against 9.0 and see if it works well enough,\n> then you should really do so.\n\nSee:\nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n8.1 is dead already, 8.2 will go off life support this December.\n\nSo if you're getting involved in a major-version upgrade now, you\nreally owe it to yourself to jump to 8.4 or later. IMO anyway.\n\n(FWIW, I know of no reason to think that 8.4->9.0 is a bigger jump\nthan any other major-release bump from the application compatibility\nstandpoint. Scott is correct to identify the removal of some implicit\ncasts-to-text in 8.3 as the single largest pain point we've introduced\nin recent memory. Personally I'm betting that this will be eclipsed\nby the shift to standard_conforming_strings=on in 9.1 ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Aug 2011 23:05:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption "
},
{
"msg_contents": "Sumeet Jauhar wrote:\n>\n> Our application is running on Postgres 7.4.X . I agree that this is a \n> very old version of Postgres and we should have upgraded .\n>\n\nIt's important to know the .X here. The latest 7.4 is 7.4.30: \nhttp://www.postgresql.org/docs/7.4/static/release.html\n\nIf you're running a 7.4 much lower than .30, you almost certainly have a \nversion with corruption bugs related to indexes. There's a bunch of \nthem mentioned in the release notes of many 7.4 versions listed there.\n\n> I ideally want to push to a higher version of Postgres . If I can \n> prove that there will be significant performance benefits and that \n> crashes won�t occur then I will be able to present a strong case .\n>\n\nGo visit http://suckit.blog.hu/2009/09/29/postgresql_history for minute.\n\n8.0 is faster than the 7.4 you're running, and that's showing the speed \nincrease from there. Your application might easily run 10X as fast on a \nnewer PostgreSQL version.\n\nNow, on top of all this, it sounds like you might have a problem with \nyour drives/controller not doing writes reliably. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information. \nIf that's the situation, the version of PostgreSQL you use won't matter \ntoo much--the database will still be unreliable if the hardware is \nconfigured to do the wrong thing.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 09 Aug 2011 00:13:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspected Postgres Datacorruption"
},
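Two quick checks along the lines Greg suggests, both plain SQL that works on old and new servers alike: confirm the exact minor version, and confirm that forced synchronization has not been turned off.

SELECT version();      -- shows the full x.y.z version, e.g. whether it is really 7.4.30
SHOW fsync;            -- must be on for crash safety
SHOW wal_sync_method;  -- the method used to flush WAL to disk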
{
"msg_contents": "On 4/08/2011 1:14 AM, Sumeet Jauhar wrote:\n\n> 1 . There was a system crash due to a hardware failure .\n\n> A ) Isn�t Postgres database resilient enough to handle hardware system\n> failure ? or it sometime results in a corrupt index for its tables ? I\n\nYou should *always* be able to pull the plug out of a box running Pg at \nany point and have it come up just fine. If you can't, it's a bug. They \ndo turn up, but quite rarely and usually relating to odd corner cases in \nnewly introduced features.\n\nIf you've done something like turn off fsync, of course, you've just \ntold PostgreSQL \"I don't care about my data, don't bother keeping it \ncrash safe\" and you get garbage if there's a system crash. But that's \nyour choice to enable if your use case doesn't require data durability. \nYou haven't done that, so this isn't the cause of your issue.\n\nThe only other known case (in a current version) where index corruption \nis expected after a crash is if you are using hash indexes. Hash indexes \nare NOT CRASH SAFE, as per the documentation, and WILL need to be \nreindexed after a crash. Don't use them unless you really know you need \nthem (you don't).\n\nOf course, if you're using 7.4.2 or something ancient, you're missing a \nlot of bug fixes, and some of them DID relate to data durability issues.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 10 Aug 2011 14:05:56 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Suspected Postgres Datacorruption"
}
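Given Craig's warning that hash indexes are not crash safe, an illustrative catalog query (using only standard pg_catalog columns) to find any that would need a REINDEX after a crash:

SELECT n.nspname AS schema_name, c.relname AS index_name
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'
  AND am.amname = 'hash';

-- Each row returned would then need: REINDEX INDEX schema_name.index_name;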
] |
[
{
"msg_contents": "I had done some testing for my application (WIP) and I had executed same SQL\nscript and queries on real physical 64-bit Windows 7 and on virtualized\n64-bit CentOS 6.\n\nBoth database servers are tuned with real having 8 GB RAM and 4 cores,\nvirtualized having 2 GB RAM and 2 virtual cores.\n\nVirtualized server crushed real physical server in performance in both DDL\nand DML scripts.\n\nMy question is simple. Does PostgreSQL perform better on Linux than on\nWindows and how much is it faster in your tests?\n\nThank you for your time.\n\nI had done some testing for my application (WIP) and I had executed same SQL script and queries on real physical 64-bit Windows 7 and on virtualized 64-bit CentOS 6. \nBoth database servers are tuned with real having 8 GB RAM and 4 cores, virtualized having 2 GB RAM and 2 virtual cores. \nVirtualized server crushed real physical server in performance in both DDL and DML scripts. \nMy question is simple. Does PostgreSQL perform better on Linux than on Windows and how much is it faster in your tests? \nThank you for your time.",
"msg_date": "Wed, 3 Aug 2011 18:37:12 +0200",
"msg_from": "Dusan Misic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres performance on Linux and Windows"
},
{
"msg_contents": "On 8/3/2011 11:37 AM, Dusan Misic wrote:\n> I had done some testing for my application (WIP) and I had executed same\n> SQL script and queries on real physical 64-bit Windows 7 and on\n> virtualized 64-bit CentOS 6.\n>\n> Both database servers are tuned with real having 8 GB RAM and 4 cores,\n> virtualized having 2 GB RAM and 2 virtual cores.\n>\n> Virtualized server crushed real physical server in performance in both\n> DDL and DML scripts.\n>\n> My question is simple. Does PostgreSQL perform better on Linux than on\n> Windows and how much is it faster in your tests?\n>\n> Thank you for your time.\n>\n\nGiven the exact same hardware, I think PG will perform better on Linux.\n\nYour question \"how much faster\" is really dependent on usage. If you're \ncpu bound then I'd bet they perform the same. You are cpu bound after \nall, and on the exact same hardware, it should be the same.\n\nIf you have lots of clients, with lots of IO, I think linux would \nperform better, but hard to say how much. I cant recall anyone posting \nbenchmarks from \"the exact same hardware\".\n\nComparing windows on metal vs linux on vm is like comparing apples to \nMissouri. If your test was io bound, and the vmserver was write \ncaching, that's why your vm won so well... but I'd hate to see a power \nfailure.\n\nIt would be interesting to compare windows on metal vs windows on vm \nthough. (Which, I have done linux on metal vs linux on vm, but the \nhardware specs where different (dual amd64 4 sata software raid10 vs \nintel 8-core something with 6-disk scsi hardware raid), but linux on \nmetal won every time.)\n\nI think in the long run, running the system you are best at, will be a \nwin. If you don't know linux much, and run into problems, how much \ntime/money will you spend fixing it. Compared to windows.\n\nIf you have to have the fastest, absolute, system. Linux on metal is \nthe way to go.\n\n(This is all speculation and personal opinion, I have no numbers to back \nanything up)\n\n-Andy\n",
"msg_date": "Wed, 03 Aug 2011 12:05:50 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance on Linux and Windows"
},
{
"msg_contents": "Thank you Andy for your answer.\n\nThat is exactly what I had expected, but it is better to consult with\nexperts on this matter.\n\nAgain, thank you.\n\nDusan\n On Aug 3, 2011 7:05 PM, \"Andy Colson\" <[email protected]> wrote:\n> On 8/3/2011 11:37 AM, Dusan Misic wrote:\n>> I had done some testing for my application (WIP) and I had executed same\n>> SQL script and queries on real physical 64-bit Windows 7 and on\n>> virtualized 64-bit CentOS 6.\n>>\n>> Both database servers are tuned with real having 8 GB RAM and 4 cores,\n>> virtualized having 2 GB RAM and 2 virtual cores.\n>>\n>> Virtualized server crushed real physical server in performance in both\n>> DDL and DML scripts.\n>>\n>> My question is simple. Does PostgreSQL perform better on Linux than on\n>> Windows and how much is it faster in your tests?\n>>\n>> Thank you for your time.\n>>\n>\n> Given the exact same hardware, I think PG will perform better on Linux.\n>\n> Your question \"how much faster\" is really dependent on usage. If you're\n> cpu bound then I'd bet they perform the same. You are cpu bound after\n> all, and on the exact same hardware, it should be the same.\n>\n> If you have lots of clients, with lots of IO, I think linux would\n> perform better, but hard to say how much. I cant recall anyone posting\n> benchmarks from \"the exact same hardware\".\n>\n> Comparing windows on metal vs linux on vm is like comparing apples to\n> Missouri. If your test was io bound, and the vmserver was write\n> caching, that's why your vm won so well... but I'd hate to see a power\n> failure.\n>\n> It would be interesting to compare windows on metal vs windows on vm\n> though. (Which, I have done linux on metal vs linux on vm, but the\n> hardware specs where different (dual amd64 4 sata software raid10 vs\n> intel 8-core something with 6-disk scsi hardware raid), but linux on\n> metal won every time.)\n>\n> I think in the long run, running the system you are best at, will be a\n> win. If you don't know linux much, and run into problems, how much\n> time/money will you spend fixing it. Compared to windows.\n>\n> If you have to have the fastest, absolute, system. Linux on metal is\n> the way to go.\n>\n> (This is all speculation and personal opinion, I have no numbers to back\n> anything up)\n>\n> -Andy\n\nThank you Andy for your answer. \nThat is exactly what I had expected, but it is better to consult with experts on this matter. \nAgain, thank you. \nDusan \n\nOn Aug 3, 2011 7:05 PM, \"Andy Colson\" <[email protected]> wrote:> On 8/3/2011 11:37 AM, Dusan Misic wrote:\n>> I had done some testing for my application (WIP) and I had executed same>> SQL script and queries on real physical 64-bit Windows 7 and on>> virtualized 64-bit CentOS 6.>>>> Both database servers are tuned with real having 8 GB RAM and 4 cores,\n>> virtualized having 2 GB RAM and 2 virtual cores.>>>> Virtualized server crushed real physical server in performance in both>> DDL and DML scripts.>>>> My question is simple. Does PostgreSQL perform better on Linux than on\n>> Windows and how much is it faster in your tests?>>>> Thank you for your time.>>> > Given the exact same hardware, I think PG will perform better on Linux.> > Your question \"how much faster\" is really dependent on usage. If you're \n> cpu bound then I'd bet they perform the same. You are cpu bound after > all, and on the exact same hardware, it should be the same.> > If you have lots of clients, with lots of IO, I think linux would \n> perform better, but hard to say how much. 
I cant recall anyone posting > benchmarks from \"the exact same hardware\".> > Comparing windows on metal vs linux on vm is like comparing apples to \n> Missouri. If your test was io bound, and the vmserver was write > caching, that's why your vm won so well... but I'd hate to see a power > failure.> > It would be interesting to compare windows on metal vs windows on vm \n> though. (Which, I have done linux on metal vs linux on vm, but the > hardware specs where different (dual amd64 4 sata software raid10 vs > intel 8-core something with 6-disk scsi hardware raid), but linux on \n> metal won every time.)> > I think in the long run, running the system you are best at, will be a > win. If you don't know linux much, and run into problems, how much > time/money will you spend fixing it. Compared to windows.\n> > If you have to have the fastest, absolute, system. Linux on metal is > the way to go.> > (This is all speculation and personal opinion, I have no numbers to back > anything up)\n> > -Andy",
"msg_date": "Wed, 3 Aug 2011 19:16:35 +0200",
"msg_from": "Dusan Misic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres performance on Linux and Windows"
},
{
"msg_contents": "Dusan Misic <[email protected]> wrote:\n \n> My question is simple. Does PostgreSQL perform better on Linux\n> than on Windows and how much is it faster in your tests?\n \nWe tested this quite a while back (on 8.0 and 8.1) with identical\nhardware and identical databases running in matching versions of\nPostgreSQL. On both saturation stress tests and load balancing a\nreal live web site between PostgreSQL on Windows and Linux, Linux\ncame out about 40% faster. Who knows what the number would be\ntoday, with current PostgreSQL, Linux, and Windows? Anyway, perhaps\nit's a useful data point for you.\n \nBTW, I wrote a tiny Java program to push data in both directions as\nfast a possible over our network to check for networking problems\n(it really showed up half duplex legs pretty dramatically), and when\neverything was on one switch it ran 30% faster if both ends were\nLinux than when both ends were Windows. I found it interesting that\nwith one end on Linux and one on Windows, it split the difference. \nSo this is not unique to PostgreSQL.\n \n-Kevin\n",
"msg_date": "Wed, 03 Aug 2011 12:29:47 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance on Linux and Windows"
},
{
"msg_contents": "Dusan Misic wrote:\n>\n> I had done some testing for my application (WIP) and I had executed \n> same SQL script and queries on real physical 64-bit Windows 7 and on \n> virtualized 64-bit CentOS 6.\n>\n> Both database servers are tuned with real having 8 GB RAM and 4 cores, \n> virtualized having 2 GB RAM and 2 virtual cores.\n>\n> Virtualized server crushed real physical server in performance in both \n> DDL and DML scripts.\n>\n> My question is simple. Does PostgreSQL perform better on Linux than on \n> Windows and how much is it faster in your tests?\n>\n\nYou didn't mention what tuning you did on the Windows server. If you \nset shared_buffers to a large value, more than around 512MB, that's been \nreported to slow the server down rather than make it faster on that OS.\n\nThe other thing you can easily get wrong in this sort of comparison is \nhaving one server enforce synchronous writes, while the other cheats. \nMany virtualized systems will not flush information to disk properly \nduring writes, which is faster but can lead to database corruption after \na crash. See http://wiki.postgresql.org/wiki/Reliable_Writes for more \ninformation on this general topic. Generally for a VM solution, you \nneed to check if it properly handles the \"fsync\" system call.\n\nComparing performance across two different operating systems fairly is \nreally hard to get right. It's easy to skew the results because of \nsomething unrelated to the difference in database performance, such as \nKevin's commentary about network speed heavily influencing results.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 08 Aug 2011 23:52:39 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance on Linux and Windows"
}
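One way to keep such a cross-platform comparison honest is to confirm that both servers run with the same durability-related settings before benchmarking. A quick check along these lines (run from psql on each server) would surface a box that is effectively cheating on writes because fsync or synchronous_commit is off, although it cannot detect a hypervisor that silently ignores flush requests:

SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'fsync', 'synchronous_commit',
                'full_page_writes', 'wal_sync_method');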
] |
[
{
"msg_contents": "Dear all,\n\n From the last few days, I researched a lot on Postgresql Performance \nTuning due to slow speed of my server.\nMy application selects data from mysql database about 100000 rows , \nprocess it & insert into postgres 2 tables by making about 45 connections.\n\nI set my postgresql parameters in postgresql.conf as below: ( OS : \nUbuntu, RAM : 16 GB, Postgres : 8.4.2 )\n\nmax_connections = 80\nshared_buffers = 2048MB\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\nfsync=off \nfull_page_writes=off \nsynchronous_commit=off \ncheckpoint_segments = 32\ncheckpoint_completion_target = 0.7 \neffective_cache_size = 4096MB\n\n\nAfter this I change my pg_xlog directory to a separate directory other \nthan data directory by symlinking.\n\n\nBy Application issue insert statements through postgresql connections only.\n\nPlease let me know if I missing any other important configuration.\n\n\n\nThanks\n\n\n",
"msg_date": "Thu, 04 Aug 2011 10:26:44 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need to tune for Heavy Write"
},
{
"msg_contents": "Hi Adarsh,\n\nHave you set checkpoint_segments and checkpoint_completion_target the right\nway?\n\nTuning these parameters are a MUST if you want good write performance.\n\nSee this link for more information:\nhttp://www.postgresql.org/docs/current/static/runtime-config-wal.html<http://www.postgresql.org/docs/current/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS>\n\nCheers,\n\nDusan\n\nOn Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]>wrote:\n\n> Dear all,\n>\n> From the last few days, I researched a lot on Postgresql Performance Tuning\n> due to slow speed of my server.\n> My application selects data from mysql database about 100000 rows , process\n> it & insert into postgres 2 tables by making about 45 connections.\n>\n> I set my postgresql parameters in postgresql.conf as below: ( OS : Ubuntu,\n> RAM : 16 GB, Postgres : 8.4.2 )\n>\n> max_connections = 80\n> shared_buffers = 2048MB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> fsync=off full_page_writes=off synchronous_commit=off checkpoint_segments =\n> 32\n> checkpoint_completion_target = 0.7 effective_cache_size = 4096MB\n>\n>\n> After this I change my pg_xlog directory to a separate directory other than\n> data directory by symlinking.\n>\n>\n> By Application issue insert statements through postgresql connections only.\n>\n> Please let me know if I missing any other important configuration.\n>\n>\n>\n> Thanks\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nHi Adarsh, Have you set checkpoint_segments and checkpoint_completion_target the right way? Tuning these parameters are a MUST if you want good write performance. See this link for more information: http://www.postgresql.org/docs/current/static/runtime-config-wal.html \nCheers, DusanOn Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]> wrote:\n\nDear all,\n\n>From the last few days, I researched a lot on Postgresql Performance Tuning due to slow speed of my server.\nMy application selects data from mysql database about 100000 rows , process it & insert into postgres 2 tables by making about 45 connections.\n\nI set my postgresql parameters in postgresql.conf as below: ( OS : Ubuntu, RAM : 16 GB, Postgres : 8.4.2 )\n\nmax_connections = 80\nshared_buffers = 2048MB\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\nfsync=off full_page_writes=off synchronous_commit=off checkpoint_segments = 32\ncheckpoint_completion_target = 0.7 effective_cache_size = 4096MB\n\n\nAfter this I change my pg_xlog directory to a separate directory other than data directory by symlinking.\n\n\nBy Application issue insert statements through postgresql connections only.\n\nPlease let me know if I missing any other important configuration.\n\n\n\nThanks\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 4 Aug 2011 10:18:41 +0200",
"msg_from": "Dusan Misic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
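For reference, a write-oriented starting point for the checkpoint settings discussed here might look like the sketch below. The numbers are illustrative only and need to be validated against the actual workload, disk subsystem and recovery-time requirements.

# illustrative postgresql.conf values, not tuned to this specific server
checkpoint_segments = 64             # default is 3; heavy writers usually need far more
checkpoint_completion_target = 0.9   # spread checkpoint I/O over most of the interval
checkpoint_timeout = 10min
wal_buffers = 16MB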
{
"msg_contents": "On Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]> wrote:\n> After this I change my pg_xlog directory to a separate directory other than\n> data directory by symlinking.\n>(...)\n> Please let me know if I missing any other important configuration.\n\nMoving the pg_xlog to a different directory only helps when that\ndirectory is on a different harddisk (or whatever I/O device).\n\nHTH,\n\nWBL\n-- \n\"Patriotism is the conviction that your country is superior to all\nothers because you were born in it.\" -- George Bernard Shaw\n",
"msg_date": "Thu, 4 Aug 2011 10:34:14 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "To put it simple, you need to set checkpoint_segments way higher than your\ncurrent value!\n\nLink: wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nOn Aug 4, 2011 6:57 AM, \"Adarsh Sharma\" <[email protected]> wrote:\n\nTo put it simple, you need to set checkpoint_segments way higher than your current value! \nLink: wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server \nOn Aug 4, 2011 6:57 AM, \"Adarsh Sharma\" <[email protected]> wrote:",
"msg_date": "Thu, 4 Aug 2011 10:43:25 +0200",
"msg_from": "Dusan Misic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 2:34 AM, Willy-Bas Loos <[email protected]> wrote:\n> On Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]> wrote:\n>> After this I change my pg_xlog directory to a separate directory other than\n>> data directory by symlinking.\n>>(...)\n>> Please let me know if I missing any other important configuration.\n>\n> Moving the pg_xlog to a different directory only helps when that\n> directory is on a different harddisk (or whatever I/O device).\n\nNot entirely true. By simply being on a different mounted file\nsystem this moves the fsync calls on the pg_xlog directories off of\nthe same file system as the main data store. Previous testing has\nshown improvements in performance from just using a different file\nsystem.\n\nThat said, the only real solution to a heavy write load is a heavy\nduty IO subsystem, with lots of drives and battery backed cache.\n",
"msg_date": "Thu, 4 Aug 2011 02:46:02 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "Scott is right. His answer solves the problem in the long run. Even if your\nwrite load increases, it will perform fast enough.\n\nFor now try increasing checkpoint_segments size, restart Postgres for new\nsettings to take effect and try again with your write load.\n\nIf you are not satisfied with write speed, then it is time to upgrade your\nstorage system / aka to increase I/O performance.\n On Aug 4, 2011 10:46 AM, \"Scott Marlowe\" <[email protected]> wrote:\n> On Thu, Aug 4, 2011 at 2:34 AM, Willy-Bas Loos <[email protected]> wrote:\n>> On Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]>\nwrote:\n>>> After this I change my pg_xlog directory to a separate directory other\nthan\n>>> data directory by symlinking.\n>>>(...)\n>>> Please let me know if I missing any other important configuration.\n>>\n>> Moving the pg_xlog to a different directory only helps when that\n>> directory is on a different harddisk (or whatever I/O device).\n>\n> Not entirely true. By simply being on a different mounted file\n> system this moves the fsync calls on the pg_xlog directories off of\n> the same file system as the main data store. Previous testing has\n> shown improvements in performance from just using a different file\n> system.\n>\n> That said, the only real solution to a heavy write load is a heavy\n> duty IO subsystem, with lots of drives and battery backed cache.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nScott is right. His answer solves the problem in the long run. Even if your write load increases, it will perform fast enough. \nFor now try increasing checkpoint_segments size, restart Postgres for new settings to take effect and try again with your write load. \nIf you are not satisfied with write speed, then it is time to upgrade your storage system / aka to increase I/O performance.\n\nOn Aug 4, 2011 10:46 AM, \"Scott Marlowe\" <[email protected]> wrote:> On Thu, Aug 4, 2011 at 2:34 AM, Willy-Bas Loos <[email protected]> wrote:\n>> On Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma <[email protected]> wrote:>>> After this I change my pg_xlog directory to a separate directory other than\n>>> data directory by symlinking.>>>(...)>>> Please let me know if I missing any other important configuration.>>>> Moving the pg_xlog to a different directory only helps when that\n>> directory is on a different harddisk (or whatever I/O device).> > Not entirely true. By simply being on a different mounted file> system this moves the fsync calls on the pg_xlog directories off of\n> the same file system as the main data store. Previous testing has> shown improvements in performance from just using a different file> system.> > That said, the only real solution to a heavy write load is a heavy\n> duty IO subsystem, with lots of drives and battery backed cache.> > -- > Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 4 Aug 2011 11:03:13 +0200",
"msg_from": "Dusan Misic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Scott Marlowe\n> Sent: Thursday, August 04, 2011 4:46 AM\n> To: Willy-Bas Loos\n> Cc: Adarsh Sharma; [email protected]\n> Subject: Re: [PERFORM] Need to tune for Heavy Write\n>\n>\n> > Moving the pg_xlog to a different directory only helps when that\n> > directory is on a different harddisk (or whatever I/O device).\n> \n> Not entirely true. By simply being on a different mounted file\n> system this moves the fsync calls on the pg_xlog directories off of\n> the same file system as the main data store. Previous testing has\n> shown improvements in performance from just using a different file\n> system.\n> \n\nIs this still the case for xfs or ext4 where fsync is properly flushing only the correct blocks to disk, or was this referring to the good old ext3 flush everything on fysnc issue?\n\nBrad.\n\n",
"msg_date": "Thu, 4 Aug 2011 13:41:47 +0100",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "On 4/08/2011 12:56 PM, Adarsh Sharma wrote:\n> Dear all,\n>\n> From the last few days, I researched a lot on Postgresql Performance\n> Tuning due to slow speed of my server.\n> My application selects data from mysql database about 100000 rows ,\n> process it & insert into postgres 2 tables by making about 45 connections.\n\nWhy 45?\n\nDepending on your disk subsystem, that may be way too many for optimum \nthroughput. Or too few, for that matter.\n\nAlso, how are you doing your inserts? Are they being done in a single \nbig transaction per connection, or at least in resonable chunks? If \nyou're doing stand-alone INSERTs autocommit-style you'll see pretty \nshoddy performance.\n\nHave you looked into using COPY to bulk load your data? Possibly using \nthe libpq or jdbc copy APIs, or possibly using server-side COPY?\n\n> fsync=off full_page_writes=off synchronous_commit=off\n\n!!!!\n\nI hope you don't want to KEEP that data if you have a hardware fault or \npower loss. Setting fsync=off is pretty much saying \"I don't mind if you \neat my data\".\n\nKeep. Really. Really. Good. Backups.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 04 Aug 2011 21:49:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "Adarsh Sharma <[email protected]> wrote:\n \n> Postgres : 8.4.2\n \nYou should definitely update to a more recent bug patch level:\n \nhttp://www.postgresql.org/support/versioning\n \n> RAM : 16 GB\n \n> effective_cache_size = 4096MB\n \nThat should probably be more like 12GB to 15GB. It probably won't\naffect the load time here, but could affect other queries.\n \n> My application selects data from mysql database about 100000\n> rows process it & insert into postgres 2 tables by making about 45\n> connections.\n \nHow many cores do you have? How many disk spindles in what sort of\narray with what sort of controller.\n \nQuite possibly you can improve performance dramatically by not\nturning loose a \"thundering herd\" of competing processes.\n \nCan you load the target table without indexes and then build the\nindexes?\n \nCan you use the COPY command (or at least prepared statements) for\nthe inserts to minimize parse/plan time?\n \nAn important setting you're missing is:\n \nwal_buffers = 16MB\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 08:57:30 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
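A rough sketch of the load pattern suggested above is shown below. The table, column, file and index names are invented for illustration, and the COPY form assumes the extract file is readable by the server process (from psql on a client machine, \copy does the same job).

-- load in bulk with the index out of the way, then rebuild it
DROP INDEX IF EXISTS staging_rows_id_idx;        -- hypothetical index name

COPY staging_rows (id, payload, created_at)
  FROM '/var/tmp/mysql_extract.csv' WITH CSV;    -- hypothetical extract file

CREATE INDEX staging_rows_id_idx ON staging_rows (id);
ANALYZE staging_rows;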
{
"msg_contents": "On Thu, Aug 4, 2011 at 7:57 AM, Kevin Grittner\n<[email protected]> wrote:\n>> RAM : 16 GB\n>\n>> effective_cache_size = 4096MB\n>\n> That should probably be more like 12GB to 15GB. It probably won't\n> affect the load time here, but could affect other queries.\n\nActually on a heavily written database a large effective cache size\nmakes things slower.\n",
"msg_date": "Thu, 4 Aug 2011 09:07:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 6:41 AM, Nicholson, Brad (Toronto, ON, CA)\n<[email protected]> wrote:\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-performance-\n>> [email protected]] On Behalf Of Scott Marlowe\n>> Sent: Thursday, August 04, 2011 4:46 AM\n>> To: Willy-Bas Loos\n>> Cc: Adarsh Sharma; [email protected]\n>> Subject: Re: [PERFORM] Need to tune for Heavy Write\n>>\n>>\n>> > Moving the pg_xlog to a different directory only helps when that\n>> > directory is on a different harddisk (or whatever I/O device).\n>>\n>> Not entirely true. By simply being on a different mounted file\n>> system this moves the fsync calls on the pg_xlog directories off of\n>> the same file system as the main data store. Previous testing has\n>> shown improvements in performance from just using a different file\n>> system.\n>>\n>\n> Is this still the case for xfs or ext4 where fsync is properly flushing only the correct blocks to disk, or was this referring to the good old ext3 flush everything on fysnc issue?\n\nGood question. One I do not know the answer to. Since I run my dbs\nwith separate pg_xlog drive sets I've never been able to test that.\n",
"msg_date": "Thu, 4 Aug 2011 10:40:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
{
"msg_contents": "On Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma <[email protected]>wrote:\n\n> Dear all,\n>\n> From the last few days, I researched a lot on Postgresql Performance Tuning\n> due to slow speed of my server.\n> My application selects data from mysql database about 100000 rows , process\n> it & insert into postgres 2 tables by making about 45 connections.\n\n\nIt's already been mentioned, but is worth reinforcing, that if you are\ninserting 100,000 rows in 100,000 transactions, you'll see a huge\nperformance improvement by doing many more inserts per transaction. Try\ndoing at least 500 inserts in each transaction (though you can possibly go\nquite a bit higher than that without any issues, depending upon what other\ntraffic the database is handling in parallel). You almost certainly don't\nneed 45 connections in order to insert only 100,000 rows. I've got a crappy\nVM with 2GB of RAM in which inserting 100,000 relatively narrow rows\nrequires less than 10 seconds if I do it in a single transaction on a single\nconnection. Probably much less than 10 seconds, but the code I just tested\nwith does other work while doing the inserts, so I don't have a pure test at\nhand.\n\nOn Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma <[email protected]> wrote:\nDear all,\n\n>From the last few days, I researched a lot on Postgresql Performance Tuning due to slow speed of my server.\nMy application selects data from mysql database about 100000 rows , process it & insert into postgres 2 tables by making about 45 connections.It's already been mentioned, but is worth reinforcing, that if you are inserting 100,000 rows in 100,000 transactions, you'll see a huge performance improvement by doing many more inserts per transaction. Try doing at least 500 inserts in each transaction (though you can possibly go quite a bit higher than that without any issues, depending upon what other traffic the database is handling in parallel). You almost certainly don't need 45 connections in order to insert only 100,000 rows. I've got a crappy VM with 2GB of RAM in which inserting 100,000 relatively narrow rows requires less than 10 seconds if I do it in a single transaction on a single connection. Probably much less than 10 seconds, but the code I just tested with does other work while doing the inserts, so I don't have a pure test at hand.",
"msg_date": "Thu, 4 Aug 2011 10:40:07 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
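As a concrete illustration of the batching described above (table and column names are made up), committing every few hundred rows instead of every row removes most of the per-transaction overhead:

BEGIN;
INSERT INTO target_table (id, payload) VALUES (1, 'row 1');
INSERT INTO target_table (id, payload) VALUES (2, 'row 2');
-- ... a few hundred more inserts ...
COMMIT;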
{
"msg_contents": "On 05/08/11 05:40, Samuel Gendler wrote:\n>\n>\n> On Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Dear all,\n>\n> From the last few days, I researched a lot on Postgresql\n> Performance Tuning due to slow speed of my server.\n> My application selects data from mysql database about 100000 rows\n> , process it & insert into postgres 2 tables by making about 45\n> connections.\n>\n>\n> It's already been mentioned, but is worth reinforcing, that if you are \n> inserting 100,000 rows in 100,000 transactions, you'll see a huge \n> performance improvement by doing many more inserts per transaction. \n> Try doing at least 500 inserts in each transaction (though you can \n> possibly go quite a bit higher than that without any issues, depending \n> upon what other traffic the database is handling in parallel). You \n> almost certainly don't need 45 connections in order to insert only \n> 100,000 rows. I've got a crappy VM with 2GB of RAM in which inserting \n> 100,000 relatively narrow rows requires less than 10 seconds if I do \n> it in a single transaction on a single connection. Probably much less \n> than 10 seconds, but the code I just tested with does other work while \n> doing the inserts, so I don't have a pure test at hand.\n\nAlso worth mentioning is doing those 500 inserts in *fewer* than 500 \nINSERT operations is likely to be a huge improvement, e.g:\n\nINSERT INTO table VALUES (....),(....);\n\ninstead of\n\nINSERT INTO table VALUES (....);\nINSERT INTO table VALUES (....);\n\nI'd be tempted to do all 500 row insertions in one INSERT statement as \nabove. You might find that 1 connection doing this is fast enough (it is \nonly doing 200 actual INSERT calls in that case to put in 100000 rows).\n\nregards\n\nMark\n\n\n\n\n\n\n\n\n\n\n\n On 05/08/11 05:40, Samuel Gendler wrote:\n \n\n\n\nOn Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma <[email protected]>\n wrote:\n\n Dear all,\n\n From the last few days, I researched a lot on Postgresql\n Performance Tuning due to slow speed of my server.\n My application selects data from mysql database about 100000\n rows , process it & insert into postgres 2 tables by\n making about 45 connections.\n\n\nIt's already been mentioned, but is worth reinforcing, that\n if you are inserting 100,000 rows in 100,000 transactions,\n you'll see a huge performance improvement by doing many more\n inserts per transaction. Try doing at least 500 inserts in\n each transaction (though you can possibly go quite a bit\n higher than that without any issues, depending upon what other\n traffic the database is handling in parallel). You almost\n certainly don't need 45 connections in order to insert only\n 100,000 rows. I've got a crappy VM with 2GB of RAM in which\n inserting 100,000 relatively narrow rows requires less than 10\n seconds if I do it in a single transaction on a single\n connection. Probably much less than 10 seconds, but the code\n I just tested with does other work while doing the inserts, so\n I don't have a pure test at hand.\n\n\n\n Also worth mentioning is doing those 500 inserts in *fewer* than 500\n INSERT operations is likely to be a huge improvement, e.g:\n\n INSERT INTO table VALUES (....),(....);\n\n instead of\n\n INSERT INTO table VALUES (....);\n INSERT INTO table VALUES (....);\n\n I'd be tempted to do all 500 row insertions in one INSERT statement\n as above. 
You might find that 1 connection doing this is fast enough\n (it is only doing 200 actual INSERT calls in that case to put in\n 100000 rows).\n\n regards\n\n Mark",
"msg_date": "Fri, 05 Aug 2011 10:32:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
},
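Filling in the outline above with invented values, a single statement can carry hundreds of rows, so 100,000 rows become a few hundred INSERT calls rather than 100,000:

INSERT INTO target_table (id, payload) VALUES
  (1, 'row 1'),
  (2, 'row 2'),
  (3, 'row 3'),
  -- ... up to a few hundred value lists per statement ...
  (500, 'row 500');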
{
"msg_contents": "On Aug 4, 2011, at 10:07 AM, Scott Marlowe wrote:\n> On Thu, Aug 4, 2011 at 7:57 AM, Kevin Grittner\n> <[email protected]> wrote:\n>>> RAM : 16 GB\n>> \n>>> effective_cache_size = 4096MB\n>> \n>> That should probably be more like 12GB to 15GB. It probably won't\n>> affect the load time here, but could affect other queries.\n> \n> Actually on a heavily written database a large effective cache size\n> makes things slower.\n\neffective_cache_size or shared_buffers? I can see why a large shared_buffers could cause problems, but what effect does effective_cache_size have on a write workload?\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Wed, 17 Aug 2011 16:17:18 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need to tune for Heavy Write"
}
] |
[
{
"msg_contents": "In this example it looks to me like the planner is choosing a Seq Scan resulting in 18x running time compared to running it with enable_seqscan = 'off'. Adding more indexes to public.gene (please see below) seemed to make things worse. I definitely have run VACUUM ANALYZE on everything, manually. What am I missing? Thank you for any feedback.\n\nQuery:\nSELECT *\n FROM gene_af_polyphen\n WHERE dataset_id = '001-1' AND\n (vartype = 'snp' OR\n vartype = 'ins' OR\n vartype = 'del' OR\n vartype = 'sub');\n\nQuery plan:\nhttp://explain.depesz.com/s/qnZ\n\nQuery plan after SET enable_seqscan TO 'off':\nhttp://explain.depesz.com/s/N5q\n\nHardware:\n24GB memory / 8 core running Linux 2.6.32 x86_64\n\nDatabase configuration:\n version | PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n autovacuum | off\n default_transaction_isolation | serializable\n effective_cache_size | 18GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n max_connections | 100\n max_stack_depth | 2MB\n server_encoding | UTF8\n shared_buffers | 6GB\n TimeZone | US/Eastern\n work_mem | 1GB\n\nTable row counts:\npublic.gene\t~1 billion (may have lots of NULLs in several columns)\npublic.af\t38878319\npublic.polyphen\t25821\n\nDatabase:\n\n Table \"public.gene\"\n Column | Type | Modifiers \n-----------------------+------------------------+-----------\n dataset_id | character varying(255) | \n referencename | character varying(255) | \n index | integer | \n locus | integer | \n haplotype | integer | \n chromosome | character varying(255) | \n begincoord | integer | \n endcoord | integer | \n vartype | character varying(255) | \n reference | character varying(255) | \n call | character varying(255) | \n xref | text | \n geneid | integer | \n mrnaacc | character varying(255) | \n proteinacc | character varying(255) | \n symbol | character varying(255) | \n orientation | character(1) | \n exoncategory | character varying(255) | \n exon | integer | \n codingregionknown | character(1) | \n aacategory | character varying(255) | \n nucleotidepos | character varying(255) | \n proteinpos | character varying(255) | \n aaannot | character varying(255) | \n aacall | character varying(255) | \n aaref | character varying(255) | \n allele | character varying(255) | \n component | character varying(255) | \n componentindex | character varying(255) | \n impact | character varying(255) | \n annotationrefsequence | character varying(255) | \n samplesequence | character varying(255) | \n genomerefsequence | character varying(255) | \n pfam | character varying(255) | \n unknown1 | character varying(255) | \nIndexes:\n \"gene_dataset_id_idx\" btree (dataset_id), tablespace \"indexspace\"\n\n Table \"public.af\"\n Column | Type | Modifiers \n-------------+------------------------+-----------\n chromosome | character varying(255) | not null\n endcoord | integer | not null\n rs_id | character varying(255) | \n reference | character varying(255) | not null\n call | character varying(255) | not null\n allele_freq | numeric | \nIndexes:\n \"af_allele_freq_idx\" btree (allele_freq), tablespace \"indexspace\"\n \"af_call_idx\" btree (call), tablespace \"indexspace\"\n \"af_chromosome_idx\" btree (chromosome), tablespace \"indexspace\"\n \"af_endcoord_idx\" btree (endcoord), tablespace \"indexspace\"\n \"af_reference_idx\" btree (reference), tablespace \"indexspace\"\n\n Table \"public.polyphen\"\n Column | Type | Modifiers 
\n-------------------------+------------------------+-----------\n mrnaacc | character varying(255) | not null\n proteinpos | character varying(255) | not null\n annotationrefsequence | character varying(255) | not null\n samplesequence | character varying(255) | not null\n prediction | character varying(255) | \n probability_deleterious | numeric | \nIndexes:\n \"polyphen_annotationrefsequence_idx1\" btree (annotationrefsequence), tablespace \"indexspace\"\n \"polyphen_mrnaacc_idx1\" btree (mrnaacc), tablespace \"indexspace\"\n \"polyphen_proteinpos_idx1\" btree (proteinpos), tablespace \"indexspace\"\n \"polyphen_samplesequence_idx1\" btree (samplesequence), tablespace \"indexspace\"\n\nCREATE VIEW gene_af_polyphen AS\nSELECT gene.dataset_id dataset_id,\n gene.referencename referencename,\n gene.index \"index\",\n gene.locus locus,\n gene.haplotype haplotype,\n gene.chromosome chromosome,\n gene.begincoord begincoord,\n gene.endcoord endcoord,\n gene.vartype vartype,\n gene.reference reference,\n gene.call call,\n gene.xref xref,\n gene.geneid geneid,\n gene.mrnaacc mrnaacc,\n gene.proteinacc proteinacc,\n gene.symbol symbol,\n gene.orientation orientation,\n gene.exoncategory exoncategory,\n gene.exon exon,\n gene.codingregionknown codingregionknown,\n gene.aacategory aacategory,\n gene.nucleotidepos nucleotidepos,\n gene.proteinpos proteinpos,\n gene.aaannot aaannot,\n gene.aacall aacall,\n gene.aaref aaref,\n gene.allele allele,\n gene.component component,\n gene.componentindex componentindex,\n gene.impact impact,\n gene.annotationrefsequence annotationrefsequence,\n gene.samplesequence samplesequence,\n gene.genomerefsequence genomerefsequence,\n gene.pfam pfam,\n gene.unknown1 unknown1,\n af.rs_id rs_id,\n af.allele_freq allele_freq,\n polyphen.prediction prediction,\n polyphen.probability_deleterious probability_deleterious\n FROM gene\n LEFT JOIN af\n ON gene.chromosome = af.chromosome AND\n gene.endcoord = af.endcoord AND\n gene.reference = af.reference AND\n gene.call = af.call\n LEFT JOIN polyphen\n ON gene.mrnaacc = polyphen.mrnaacc AND\n gene.proteinpos = polyphen.proteinpos AND\n gene.annotationrefsequence = polyphen.annotationrefsequence AND\n gene.samplesequence = polyphen.samplesequence;\n\n",
"msg_date": "Thu, 4 Aug 2011 09:40:08 -0400",
"msg_from": "Nassib Nassar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq Scan vs. Index Scan"
},
{
"msg_contents": "Nassib Nassar <[email protected]> wrote: \n \n> In this example it looks to me like the planner is choosing a Seq\n> Scan resulting in 18x running time compared to running it with\n> enable_seqscan = 'off'.\n \nI would try these settings:\n \nrandom_page_cost = 2\ncpu_tuple_cost = 0.02\n \nBased on your estimated cost versus actual run times, there's a good\nchance they'll better model your environment, and serve you well in\ngeneral.\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 10:07:42 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan vs. Index Scan"
}
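These settings can be tried per-session before touching postgresql.conf, which makes it easy to confirm that the planner picks the index scan on its own rather than having to keep enable_seqscan off:

SET random_page_cost = 2;
SET cpu_tuple_cost = 0.02;

EXPLAIN ANALYZE
SELECT *
  FROM gene_af_polyphen
 WHERE dataset_id = '001-1'
   AND (vartype = 'snp' OR
        vartype = 'ins' OR
        vartype = 'del' OR
        vartype = 'sub');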
] |
[
{
"msg_contents": "Hey,\n\n I'm a new user of PostgreSQL. I found one of my tables is taking unexpectedly large space:\n\nselect pg_size_pretty(pg_relation_size('archive_files'));\n pg_size_pretty\n----------------\n1113 MB\nThe structure of this table is like:\n Column | Type | Modifiers\n------------+-------------------+-----------\narchive_id | integer | not null\ndir_no | integer | not null\nfname | character varying | not null\ntype | smallint | not null\nsize | bigint | not null\nmod_date | integer | not null\nblocks | bigint | not null\nblk_offset | bigint | not null\n\nthe field \"fname\" stores file names without any directory names. In our case, each record is expected to take around 300 bytes.\n\nHowever, this table contains 934829 records, which means each record takes about 1.2KB.\n\nI did vaccum, reindex, the size is still the same. Is there anything else that I can do?\n\nThanks!\n\nJohn\n\nHey, I’m a new user of PostgreSQL. I found one of my tables is taking unexpectedly large space: select pg_size_pretty(pg_relation_size('archive_files')); pg_size_pretty ---------------- 1113 MB The structure of this table is like: Column | Type | Modifiers ------------+-------------------+----------- archive_id | integer | not null dir_no | integer | not null fname | character varying | not null type | smallint | not null size | bigint | not null mod_date | integer | not null blocks | bigint | not null blk_offset | bigint | not null the field “fname” stores file names without any directory names. In our case, each record is expected to take around 300 bytes. However, this table contains 934829 records, which means each record takes about 1.2KB. I did vaccum, reindex, the size is still the same. Is there anything else that I can do? Thanks! John",
"msg_date": "Thu, 4 Aug 2011 18:56:56 +0000",
"msg_from": "Jian Shi <[email protected]>",
"msg_from_op": true,
"msg_subject": "table size is bigger than expected"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 2:56 PM, Jian Shi <[email protected]> wrote:\n> Hey,\n>\n> I’m a new user of PostgreSQL. I found one of my tables is taking\n> unexpectedly large space:\n>\n> select\n> pg_size_pretty(pg_relation_size('archive_files'));\n>\n> pg_size_pretty\n>\n> ----------------\n>\n> 1113 MB\n>\n>\n> the field “fname” stores file names without any directory names. In our\n> case, each record is expected to take around 300 bytes.\n>\n> However, this table contains 934829 records, which means each record takes\n> about 1.2KB.\n>\nwhat does this query yield?\n\nselect pg_size_pretty(sum(length(fname))) from archive_files;\n",
"msg_date": "Thu, 4 Aug 2011 16:17:30 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table size is bigger than expected"
},
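It may also help to compare the heap size against the total size (indexes and TOAST included) and the raw data volume in one pass; a rough check, reusing the column from the original post, could be:

SELECT pg_size_pretty(pg_relation_size('archive_files'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('archive_files')) AS total_size,
       pg_size_pretty(sum(length(fname)))                      AS fname_bytes,
       count(*)                                                 AS row_count
  FROM archive_files;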
{
"msg_contents": "On 08/04/2011 11:56 AM, Jian Shi wrote:\n>\n> Hey,\n>\n> I'm a new user of PostgreSQL. I found one of my tables is taking \n> unexpectedly large space:...\n>\n> I did vaccum, reindex, the size is still the same. Is there anything \n> else that I can do?\n>\n>\nDid you try CLUSTER? A basic vacuum only identifies space as reusable, \nit doesn't actually shrink on-disk size.\n\nIf you have workloads that update or delete a small number of tuples per \ntransaction, the autovacuum process should keep things reasonably under \ncontrol. But if you run transactions that do bulk updates or deletes, \nyou may need to intervene. The CLUSTER statement will completely rewrite \nand reindex your table (and will physically reorder the table based on \nthe selected index). Note: CLUSTER requires an exclusive lock on the table.\n\nCheers,\nSteve\n\n\n\n\n\n\n\n On 08/04/2011 11:56 AM, Jian Shi wrote:\n \n\n\n\n\nHey,\n \n I’m a new user of PostgreSQL. I found one\n of my tables is taking unexpectedly large space:...\n \nI did vaccum, reindex, the size is still\n the same. Is there anything else that I can do?\n \n\n\n\n Did you try CLUSTER? A basic vacuum only identifies space as\n reusable, it doesn't actually shrink on-disk size.\n\n If you have workloads that update or delete a small number of tuples\n per transaction, the autovacuum process should keep things\n reasonably under control. But if you run transactions that do bulk\n updates or deletes, you may need to intervene. The CLUSTER statement\n will completely rewrite and reindex your table (and will physically\n reorder the table based on the selected index). Note: CLUSTER\n requires an exclusive lock on the table.\n\n Cheers,\n Steve",
"msg_date": "Thu, 04 Aug 2011 16:47:19 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table size is bigger than expected"
}
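A sketch of the maintenance suggested above, assuming PostgreSQL 8.3 or later for the USING form. The index name is hypothetical, since the original post did not list any indexes on this table, and CLUSTER takes an exclusive lock, so it belongs in a quiet maintenance window.

-- rewrite the table in index order and reclaim dead space
CLUSTER archive_files USING archive_files_pkey;   -- hypothetical index name

-- afterwards, refresh statistics and recheck the size
ANALYZE archive_files;
SELECT pg_size_pretty(pg_relation_size('archive_files'));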
] |
[
{
"msg_contents": "hi, We recently bought a 4 8core 128G memory database server and I am setting it up to replace our old 4 4cores 128G memory database server as a master. The memory related settings that we use on the old machine seem a bit wrong according to the experts on IRC:\nmax_connections = 600shared_buffers = 32GBeffective_cache_size = 64GBwork_mem = 5MBmaintenance_work_mem = 1GB wal_buffers = 64kB\nwe are using ubuntu 10/etc/sysctl.confkernel.shmmax=35433480192kernel.shmall=8650752\nCan anyone suggest better values?\nthanks,Claire\nhi, We recently bought a 4 8core 128G memory database server and I am setting it up to replace our old 4 4cores 128G memory database server as a master. The memory related settings that we use on the old machine seem a bit wrong according to the experts on IRC:max_connections = 600shared_buffers = 32GBeffective_cache_size = 64GBwork_mem =\n 5MBmaintenance_work_mem = 1GB wal_buffers = 64kBwe are using ubuntu 10/etc/sysctl.confkernel.shmmax=35433480192kernel.shmall=8650752Can anyone suggest better values?thanks,Claire",
"msg_date": "Thu, 4 Aug 2011 13:27:13 -0700 (PDT)",
"msg_from": "Claire Chang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Claire Chang <[email protected]> wrote:\n \n> hi, We recently bought a 4 8core 128G memory database server and I\n> am setting it up to replace our old 4 4cores 128G memory database\n> server as a master. The memory related settings that we use on\n> the old machine seem a bit wrong according to the experts on IRC:\n \n> max_connections = 600\n \nYou're probably going to get better performance by setting that to 2\nto 3 times the number of actual cores (don't county hyperthreading\nfor this purpose), and using a connection pooler to funnel the 600\nuser connections down to a smaller number of database connections.\n \n> shared_buffers = 32GB\n \nI seem to remember seeing some benchmarks showing that performance\nfalls off after 10GB or 20GB on that setting.\n \n> effective_cache_size = 64GB\n \nSeems sane.\n \n> work_mem = 5MB\n \nYou could bump that up, especially if you go to the connection pool.\n \n> maintenance_work_mem = 1GB\n \nOK, but I might double that.\n \n> wal_buffers = 64kB\n \nThis should definitely be set to 16MB.\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 15:38:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
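Put together, the suggestions above translate into roughly the following sketch. The values are illustrative starting points rather than measured optima, and they assume an external connection pooler (such as pgbouncer) funnels the 600 client connections down to far fewer database connections.

# illustrative starting point, assuming a pooler in front of the database
max_connections = 100            # pooler keeps roughly 2-3x core count active
shared_buffers = 8GB             # larger values have shown diminishing returns
effective_cache_size = 64GB
work_mem = 16MB                  # per sort/hash node, per connection; raise with care
maintenance_work_mem = 2GB
wal_buffers = 16MB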
{
"msg_contents": "On 08/04/2011 03:38 PM, Kevin Grittner wrote:\n\n> You're probably going to get better performance by setting that to 2\n> to 3 times the number of actual cores (don't county hyperthreading\n> for this purpose), and using a connection pooler to funnel the 600\n> user connections down to a smaller number of database connections.\n\nYour note about Hyperthreading *used* to be true. I'm not sure exactly \nwhat they did to the Intel nehalem cores, but hyperthreading actually \nseems to be much better now. It's not a true multiplier, but our pgbench \nscores were 40% to 60% higher with HT enabled up to at least 5x the \nnumber of cores.\n\nI was honestly shocked at those results, but they were consistent across \nmultiple machines from two separate vendors.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 4 Aug 2011 16:02:11 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "On Thu, Aug 4, 2011 at 2:38 PM, Kevin Grittner\n<[email protected]> wrote:\n> Claire Chang <[email protected]> wrote:\n>\n>> hi, We recently bought a 4 8core 128G memory database server and I\n>> am setting it up to replace our old 4 4cores 128G memory database\n>> server as a master. The memory related settings that we use on\n>> the old machine seem a bit wrong according to the experts on IRC:\n>\n>> max_connections = 600\n>\n> You're probably going to get better performance by setting that to 2\n> to 3 times the number of actual cores (don't county hyperthreading\n> for this purpose), and using a connection pooler to funnel the 600\n> user connections down to a smaller number of database connections.\n>\n>> shared_buffers = 32GB\n>\n> I seem to remember seeing some benchmarks showing that performance\n> falls off after 10GB or 20GB on that setting.\n>\n>> effective_cache_size = 64GB\n>\n> Seems sane.\n>\n>> work_mem = 5MB\n>\n> You could bump that up, especially if you go to the connection pool.\n>\n>> maintenance_work_mem = 1GB\n>\n> OK, but I might double that.\n>\n>> wal_buffers = 64kB\n>\n> This should definitely be set to 16MB.\n\nAgreed with everything so far. A few other points. If you're doing a\nLOT of writing, and the immediate working set will fit in less\nshared_buffers then lower it down to something in the 1 to 4G range\nmax. Lots of write and a large shared_buffer do not mix well. I have\ngotten much better performance from lowering shared_buffers on\nmachines that need to write a lot. I run Ubuntu 10.04 for my big\npostgresql servers right now. With that in mind, here's some\npointers.\n\nI'd recommend adding this to rc.local:\n\n# turns off swap\n/sbin/swapoff -a\n\nI had a few weird kswapd storms where the kernel just seems to get\nconfused about having 128G of ram and swap space. Machine was lagging\nvery hard at odd times of the day until I just turned off swap.\n\nand if you have a caching RAID controller with battery backup then I'd\nadd a line like this:\n\necho noop > /sys/block/sda/queue/scheduler\n\nfor every RAID drive you have. Any other scheduler really just gets\nin the way of a good caching RAID controller.\n\nThere's also some parameters that affect how fast dirty caches are\nwritten out by the OS, worth looking into, but they made no big\ndifference on my 128G 48 core 34 15krpm drive system.\n\nIf you do a lot of inserts / updates / deletes then look at making\nvacuum more aggressive. Also look at making the bgwriter a bit more\naggressive and cranking up the timeout and having lots of checkpoint\nsegments.\n",
"msg_date": "Thu, 4 Aug 2011 15:12:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n> On 08/04/2011 03:38 PM, Kevin Grittner wrote:\n> \n>> You're probably going to get better performance by setting that\n>> to 2 to 3 times the number of actual cores (don't county\n>> hyperthreading for this purpose), and using a connection pooler\n>> to funnel the 600 user connections down to a smaller number of\n>> database connections.\n> \n> Your note about Hyperthreading *used* to be true. I'm not sure\n> exactly what they did to the Intel nehalem cores, but\n> hyperthreading actually seems to be much better now. It's not a\n> true multiplier, but our pgbench scores were 40% to 60% higher\n> with HT enabled up to at least 5x the number of cores.\n \nNote that I didn't recommend not *using* HT, just not counting it\ntoward the core count for purposes of calculating how many active\nconnections to allow. The important question for this purpose isn't\nwhether you ran faster with HT enabled, but where you hit the knee\nin the performance graph.\n \nDid you actually run faster with 5x more active connections than\ncores than with 3x more connections than cores? Was the load\nhitting disk or running entirely from cache? If the former, what was\nyour cache hit rate and how many spindles did you have in what\nconfiguration. I oversimplified slightly from the formula I\nactually have been using based on benchmarks here, which is ((2 *\ncore_count) + effective_spindle_count. Mostly I simplified because\nit's so hard to explain how to calculate an\n\"effective_spindle_count\". ;-)\n \nAnyway, I'm always willing to take advantage of the benchmarking\nwork of others, so I'm very curious about where performance topped\nout for you with HT enabled, and whether disk waits were part of the\nmix.\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 16:36:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "On 08/04/2011 04:36 PM, Kevin Grittner wrote:\n\n> Anyway, I'm always willing to take advantage of the benchmarking\n> work of others, so I'm very curious about where performance topped\n> out for you with HT enabled, and whether disk waits were part of the\n> mix.\n\nHah. Well, it peaked at 2x physical cores, where it ended up being 60% \nfaster than true cores. It started to fall after that, until I hit 64 \nconcurrent connections and it dropped down to 36% faster. I should also \nnote that this is with core turbo enabled and performance mode BIOS \nsettings so it never goes into power saving mode. Without those, our \nresults were inconsistent, with variance of up to 40% per run, on top of \n40% worse performance at concurrency past 2x core count.\n\nI tested along a scale from 1 to 64 concurrent connections at a scale of \n100 so it would fit in memory. I was trying to test some new X5675s \ncores against our old E7450s. The scary part was that a dual X5675 ended \nup being 2.5x faster than a quad E7450 at 24-user concurrency. It's \nunreal. It's a great way to save on per-core licensing fees.\n\nWe're also on an 8.2 database. We're upgrading soon, I promise. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 4 Aug 2011 16:49:28 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> it peaked at 2x physical cores, where it ended up being 60% \n> faster than true cores.\n \nNot sure I understand the terminology here -- \"physical cores\" is\ncounting HT or not?\n \nThanks,\n \n-Kevin\n",
"msg_date": "Thu, 04 Aug 2011 16:54:12 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "On 08/04/2011 04:54 PM, Kevin Grittner wrote:\n\n>> it peaked at 2x physical cores, where it ended up being 60%\n>> faster than true cores.\n>\n> Not sure I understand the terminology here -- \"physical cores\" is\n> counting HT or not?\n\nNo. So with a dual X5675, that's 12 cores. My numbers peaked at \n24-concurrency. At that concurrency, HT was 60% faster than non-HT. \nSorry if I mixed my terminology. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 4 Aug 2011 16:58:03 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> So with a dual X5675, that's 12 cores. My numbers peaked at \n> 24-concurrency. At that concurrency, HT was 60% faster than\n> non-HT. Sorry if I mixed my terminology. :)\n \nNo problem -- I appreciate the information. I just wanted to be\nsure I was understanding it correctly. So, with hyperthreading\nturned on, the optimal number of active connections was twice the\nactual cores. And since the active data set was fully cached, disk\nspindles were not a resource which played any significant role in\nthe test, making the \"effective spindle count\" zero. So this is one\nmore data point confirming the overall accuracy of the formula I\nuse, and providing evidence that it is not affected by use of\nhyperthreading if you base your numbers on actual cores.\n \noptimal pool size = ((2 * actual core count) + effective spindle\ncount)\n\noptimal pool size = ((2 * 12) + 0)\n\noptimal pool size = 24\n \nThanks!\n \n-Kevin\n",
"msg_date": "Fri, 05 Aug 2011 09:00:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "On 08/05/2011 09:00 AM, Kevin Grittner wrote:\n\n> optimal pool size = ((2 * actual core count) + effective spindle\n> count)\n\nHow does that work? If your database fits in memory, your optimal TPS is \nonly constrained by CPU. Any fetches from disk reduce your throughput \nfrom IO Waits. How do you account for SSDs/PCIe cards which act as an \neffective spindle multiplier?\n\nI've seen Java apps that, through the use of several systems using Java \nHibernate pool sharing, are not compatible with connection poolers such \nas PGBouncer. As such, they had 50x CPU count and still managed \n12,000TPS because everything in use was cached. Throw a disk seek or two \nin there, and it drops down to 2000 or less. Throw in a PCIe card, and \npure streams of \"disk\" reads remain at 12,000TPS.\n\nIt just seems a little counter-intuitive. I totally agree that it's not \noptimal to have connections higher than effective threads, but *adding* \nspindles? I'd be more inclined to believe this:\n\noptimal pool size = 3*cores - cores/spindles\n\nThen, as your spindles increase, you're subtracting less and less until \nyou reach optimal 3x.\n\nOne disk, but on a 4-cpu system?\n\n12 - 4 = 8. So you'd have the classic 2x cores.\n\nOn a RAID 1+0 with 4 disk pairs (still 4 cpu)?\n\n12 - 1 = 11.\n\nOn a giant SAN with couple dozen disks or a PCIe card that tests an \norder of magnitude faster than a 6-disk RAID?\n\n12 - [small fraction] = 12\n\nIt still fits your 3x rule, but seems to actually account for the fact \ndisks suck. :p\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 5 Aug 2011 09:21:26 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n> On 08/05/2011 09:00 AM, Kevin Grittner wrote:\n> \n>> optimal pool size = ((2 * actual core count) + effective spindle\n>> count)\n> \n> How does that work? If your database fits in memory, your optimal\n> TPS is only constrained by CPU. Any fetches from disk reduce your\n> throughput from IO Waits.\n \nI think you're misunderstanding the purpose of the formula. I'm not\nsaying that causing processes to wait for disk speeds things up;\nclearly things will run faster if the active data set is cached. \nWhat I'm saying is that if processes are blocked waiting for disk\nthey are not going to be using CPU, and there is room for that many\nadditional processes to be useful, as the CPUs and other drives\nwould otherwise be sitting idle.\n \n> How do you account for SSDs/PCIe cards which act as an \n> effective spindle multiplier?\n \nThe \"effective spindle count\" is basically about \"how many resources\nare reads typically waiting on with this hardware and workload\". \nPerhaps a better name for that could be chosen, but it's the best\nI've come up with.\n \n> I'd be more inclined to believe this:\n> \n> optimal pool size = 3*cores - cores/spindles\n> \n> Then, as your spindles increase, you're subtracting less and less\n> until you reach optimal 3x.\n \nWell, to take an extreme example in another direction, let's\nhypothesize a machine with one CPU and 24 drives, where random disk\naccess is the bottleneck for the workload. My formula would have 26\nprocesses, which would typically be running with 26 blocked waiting\non a read for a cache miss, while the other two processes would be\nserving up responses for cache hits and getting requests for the\nnext actual disk reads ready. Your formula would have two processes\nstruggling to get reads going on 24 spindles while also serving up\ncached data.\n \nJust because disk speed sucks doesn't mean you don't want to do your\ndisk reads in parallel; quite the opposite!\n \n-Kevin\n",
"msg_date": "Fri, 05 Aug 2011 09:58:46 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> which would typically be running with 26 blocked waiting\n> on a read for a cache miss,\n \nI meant 24 there.\n \n-Kevin\n",
"msg_date": "Fri, 05 Aug 2011 10:05:43 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "On 08/05/2011 09:58 AM, Kevin Grittner wrote:\n\n> What I'm saying is that if processes are blocked waiting for disk\n> they are not going to be using CPU, and there is room for that many\n> additional processes to be useful, as the CPUs and other drives\n> would otherwise be sitting idle.\n\nHaha. The way you say that made me think back on the scenario with 4 \ncpus and 1 disk. Naturally that kind of system is IO starved, and will \nprobably sit at IO-wait at 10% or more on any kind of notable activity \nlevel. I was like... \"Well, so long as everything is waiting anyway, why \nnot just increase it to 100?\"\n\nNow, typically you want to avoid context switching. Certain caveats need \nto be made for anything with less than two, or even four cpus because of \nvarious system and Postgres monitoring/maintenance threads. My own \nbenchmarks illustrate (to me, anyway) that generally, performance peaks \nwhen PG threads equal CPU threads *however they're supplied*.\n\nNever minding fudge factor for idling connections waiting on IO, which \nyou said yourself can be problematic the more of them there are. :)\n\nI'd say just put it at 2x, maybe 3x, and call it good. Realistically you \nwon't really notice further tweaking, and a really active system would \nconverge to cpu count through a pooler and be cached to the gills anyway.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 5 Aug 2011 12:09:57 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Claire Chang <[email protected]> wrote: \n> \n>> shared_buffers = 32GB\n>> \n> \n> I seem to remember seeing some benchmarks showing that performance\n> falls off after 10GB or 20GB on that setting.\n> \n\nNot even quite that high. I've never heard of a setting over 10GB being \nanything other than worse than a smaller setting, and that was on \nSolaris. At this point I never consider a value over 8GB, and even that \nneeds to be carefully matched against how heavy the writes on the server \nare. You just can't set shared_buffers to a huge value in PostgreSQL \nyet, and \"huge\" means \">8GB\" right now.\n\nNote that the problems you can run into with too much buffer cache are \nmuch worse with a low setting for checkpoint_segments...and this \nconfiguration doesn't change it at all from the tiny default. That \nshould go to at least 64 on a server this size.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 08 Aug 2011 23:59:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.4 memory related parameters"
}
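As a hedged illustration of the two settings Greg discusses (values taken from his message, not a tested recommendation for this particular server), a postgresql.conf excerpt might look like the commented lines below; the SHOW statements simply confirm what the running server is actually using.

-- Illustrative postgresql.conf excerpt reflecting the advice above
-- (assumed values; they still need to be validated against the real write load):
--   shared_buffers = 8GB            -- stay at or below roughly 8GB
--   checkpoint_segments = 64        -- well above the tiny default
-- After a reload/restart, confirm the active settings:
SHOW shared_buffers;
SHOW checkpoint_segments;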
] |
[
{
"msg_contents": "I have postgresql 9.0.1 on windows 2003 ent with 6GB ram, 4 disk SATA RAID\n10.\nI am running SymmetricDS to replication over WAN. But yesterday there was a\nbig problem, i updated alot of rows and query to gap data of SymmetricDS run\nverry very slowly.\n\nHere is my postgresql.conf to tunning PostgreSQL\neffective_cache_size = 4GB\nwork_mem = 2097151\nshared_buffers = 1GB\n\nHere is query :\nexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data,\nd.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,\nd.transaction_id, d.source_node_id, d.external_data, '' from sym_data d\ninner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id\nand g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;\n\nAnd here is result :\nNested Loop (cost=0.00..1517515125.95 rows=26367212590 width=1403) (actual\ntime=14646.390..7745828.163 rows=2764140 loops=1)\n -> Index Scan using sym_data_pkey on sym_data d (cost=0.00..637148.72\nrows=3129103 width=1403) (actual time=71.989..55643.665 rows=3124631\nloops=1)\n Filter: ((channel_id)::text = 'sale_transaction'::text)\n -> Index Scan using sym_data_gap_pkey on sym_data_gap g\n (cost=0.00..358.37 rows=8426 width=8) (actual time=2.459..2.459 rows=1\nloops=3124631)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n Filter: (g.status = 'GP'::bpchar)\nTotal runtime: 7746577.478 ms\n\nHere is table sym_data it have 437319 rows with data_id between start_id and\nend_id of sym_data_gap has status = 'GP'\n\nCREATE TABLE sym_data\n(\n data_id serial NOT NULL,\n table_name character varying(50) NOT NULL,\n event_type character(1) NOT NULL,\n row_data text,\n pk_data text,\n old_data text,\n trigger_hist_id integer NOT NULL,\n channel_id character varying(20),\n transaction_id character varying(255),\n source_node_id character varying(50),\n external_data character varying(50),\n create_time timestamp without time zone,\n CONSTRAINT sym_data_pkey PRIMARY KEY (data_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE sym_data OWNER TO postgres;\n\n-- Index: idx_d_channel_id\n\n-- DROP INDEX idx_d_channel_id;\n\nCREATE INDEX idx_d_channel_id\n ON sym_data\n USING btree\n (data_id, channel_id);\n\nAnd here is sym_data_gap table it have 57838 rows have status = 'GP'\n\nCREATE TABLE sym_data_gap\n(\n start_id integer NOT NULL,\n end_id integer NOT NULL,\n status character(2),\n create_time timestamp without time zone NOT NULL,\n last_update_hostname character varying(255),\n last_update_time timestamp without time zone NOT NULL,\n CONSTRAINT sym_data_gap_pkey PRIMARY KEY (start_id, end_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE sym_data_gap OWNER TO postgres;\n\n-- Index: idx_dg_status\n\n-- DROP INDEX idx_dg_status;\n\nCREATE INDEX idx_dg_status\n ON sym_data_gap\n USING btree\n (status);\n\nBecause the query run very slowly so data is not replication between to\ndistance. Please help me.\n\nSorry for my English\nTuan Hoang ANh\n\nI have postgresql 9.0.1 on windows 2003 ent with 6GB ram, 4 disk SATA RAID 10.\nI am running SymmetricDS to replication over WAN. 
But yesterday there was a big problem, i updated alot of rows and query to gap data of SymmetricDS run verry very slowly.Here is my postgresql.conf to tunning PostgreSQL\neffective_cache_size = 4GBwork_mem = 2097151shared_buffers = 1GBHere is query :explain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id, d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;\nAnd here is result : Nested Loop (cost=0.00..1517515125.95 rows=26367212590 width=1403) (actual time=14646.390..7745828.163 rows=2764140 loops=1) -> Index Scan using sym_data_pkey on sym_data d (cost=0.00..637148.72 rows=3129103 width=1403) (actual time=71.989..55643.665 rows=3124631 loops=1)\n Filter: ((channel_id)::text = 'sale_transaction'::text) -> Index Scan using sym_data_gap_pkey on sym_data_gap g (cost=0.00..358.37 rows=8426 width=8) (actual time=2.459..2.459 rows=1 loops=3124631)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id)) Filter: (g.status = 'GP'::bpchar)Total runtime: 7746577.478 msHere is table sym_data it have 437319 rows with data_id between start_id and end_id of sym_data_gap has status = 'GP'\nCREATE TABLE sym_data( data_id serial NOT NULL, table_name character varying(50) NOT NULL, event_type character(1) NOT NULL, row_data text,\n pk_data text, old_data text, trigger_hist_id integer NOT NULL, channel_id character varying(20), transaction_id character varying(255), source_node_id character varying(50),\n external_data character varying(50), create_time timestamp without time zone, CONSTRAINT sym_data_pkey PRIMARY KEY (data_id))WITH ( OIDS=FALSE);\nALTER TABLE sym_data OWNER TO postgres;-- Index: idx_d_channel_id-- DROP INDEX idx_d_channel_id;CREATE INDEX idx_d_channel_id ON sym_data\n USING btree (data_id, channel_id);And here is sym_data_gap table it have 57838 rows have status = 'GP'CREATE TABLE sym_data_gap\n( start_id integer NOT NULL, end_id integer NOT NULL, status character(2), create_time timestamp without time zone NOT NULL, last_update_hostname character varying(255),\n last_update_time timestamp without time zone NOT NULL, CONSTRAINT sym_data_gap_pkey PRIMARY KEY (start_id, end_id))WITH ( OIDS=FALSE);ALTER TABLE sym_data_gap OWNER TO postgres;\n-- Index: idx_dg_status-- DROP INDEX idx_dg_status;CREATE INDEX idx_dg_status ON sym_data_gap USING btree (status);\nBecause the query run very slowly so data is not replication between to distance. Please help me.Sorry for my EnglishTuan Hoang ANh",
"msg_date": "Fri, 5 Aug 2011 23:43:33 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 9.0.1 on Windows performance tunning help please"
},
{
"msg_contents": "tuanhoanganh <[email protected]> wrote:\n \n> I have postgresql 9.0.1\n \nhttp://www.postgresql.org/support/versioning\n \n> 6GB ram\n \n> work_mem = 2097151\n \nI think that has the potential to push you into swapping:\n \ncc=> set work_mem = 2097151;\nSET\ncc=> show work_mem;\n work_mem\n-----------\n 2097151kB\n(1 row)\n \nThat's 2GB, and that much can be allocated, potentially several\ntimes, per connection.\n \n> -> Index Scan using sym_data_pkey on sym_data d \n> (cost=0.00..637148.72 rows=3129103 width=1403)\n> (actual time=71.989..55643.665 rows=3124631 loops=1)\n> Filter: ((channel_id)::text = 'sale_transaction'::text)\n \nThis index scan is going to randomly access all tuples in the\ntable's heap. That is probably going to be much slower than a\nsequential scan. It is apparently choosing this index to avoid a\nsort, because of the mis-estimation on the number of rows. Is it\ncritical that the rows be returned in that order? If not, you might\nsee much faster performance by leaving off the ORDER BY clause so\nthat it can use the seqscan.\n \nYou could potentially make queries like this much faster by indexing\non channel_id, or by indexing on data_id WHERE channel_id =\n'sale_transaction'..\n \nYou could also set up optimization barriers with clever use of a CTE\nor an OFFSET 0 to force it to use a seqscan followed by a sort, but\nI would look at the other options first.\n \n-Kevin\n",
"msg_date": "Fri, 05 Aug 2011 12:20:59 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning\n\t help please"
},
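The OFFSET 0 barrier Kevin mentions is never spelled out in the thread; a minimal sketch of what it could look like follows, with the column list abbreviated (the real query selects more columns). Whether the planner then actually switches to a seqscan followed by a sort would still have to be confirmed with EXPLAIN ANALYZE.

-- Sketch only: OFFSET 0 keeps the subquery from being flattened, so the
-- inner join is planned without the outer ORDER BY pushing it toward the
-- index scan on sym_data_pkey. Column list abbreviated for readability.
SELECT *
FROM (
    SELECT d.data_id, d.table_name, d.event_type, d.channel_id
    FROM sym_data d
    JOIN sym_data_gap g
      ON g.status = 'GP'
     AND d.data_id BETWEEN g.start_id AND g.end_id
    WHERE d.channel_id = 'sale_transaction'
    OFFSET 0
) AS sub
ORDER BY data_id ASC;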
{
"msg_contents": "Thanks for your help.\nI create index on channel_id and data_id like your comment.\n\n- Index: idx_d_channel_id2\n\n-- DROP INDEX idx_d_channel_id2;\n\nCREATE INDEX idx_d_channel_id2\n ON sym_data\n USING btree\n (channel_id);\n\n-- Index: idx_d_channel_id3\n\n-- DROP INDEX idx_d_channel_id3;\n\nCREATE INDEX idx_d_channel_id3\n ON sym_data\n USING btree\n (data_id)\n WHERE channel_id::text = 'sale_transaction'::text;\n\n-- Index: idx_d_channel_id4\n\n-- DROP INDEX idx_d_channel_id4;\n\nCREATE INDEX idx_d_channel_id4\n ON sym_data\n USING btree\n (data_id)\n WHERE channel_id::text = 'item'::text;\n\nHere is new explan analyze\n\nexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data,\nd.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,\nd.transaction_id, d.source_node_id, d.external_data, '' from sym_data d\ninner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id\nand g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;\n\nNested Loop (cost=0.00..1512979014.35 rows=26268463088 width=1401) (actual\ntime=25741.704..7650979.311 rows=2764140 loops=1)\n -> Index Scan using idx_d_channel_id3 on sym_data d\n (cost=0.00..1781979.40 rows=3117384 width=1401) (actual\ntime=83.718..55126.002 rows=3124631 loops=1)\n -> Index Scan using sym_data_gap_pkey on sym_data_gap g\n (cost=0.00..358.37 rows=8426 width=8) (actual time=2.428..2.429 rows=1\nloops=3124631)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n Filter: (g.status = 'GP'::bpchar)\nTotal runtime: 7651803.073 ms\n\nBut query performance don't change.\nPlease help me.\n\nTuan Hoang ANh\n\nOn Sat, Aug 6, 2011 at 12:20 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> tuanhoanganh <[email protected]> wrote:\n>\n> > I have postgresql 9.0.1\n>\n> http://www.postgresql.org/support/versioning\n>\n> > 6GB ram\n>\n> > work_mem = 2097151\n>\n> I think that has the potential to push you into swapping:\n>\n> cc=> set work_mem = 2097151;\n> SET\n> cc=> show work_mem;\n> work_mem\n> -----------\n> 2097151kB\n> (1 row)\n>\n> That's 2GB, and that much can be allocated, potentially several\n> times, per connection.\n>\n> > -> Index Scan using sym_data_pkey on sym_data d\n> > (cost=0.00..637148.72 rows=3129103 width=1403)\n> > (actual time=71.989..55643.665 rows=3124631 loops=1)\n> > Filter: ((channel_id)::text = 'sale_transaction'::text)\n>\n> This index scan is going to randomly access all tuples in the\n> table's heap. That is probably going to be much slower than a\n> sequential scan. It is apparently choosing this index to avoid a\n> sort, because of the mis-estimation on the number of rows. Is it\n> critical that the rows be returned in that order? If not, you might\n> see much faster performance by leaving off the ORDER BY clause so\n> that it can use the seqscan.\n>\n> You could potentially make queries like this much faster by indexing\n> on channel_id, or by indexing on data_id WHERE channel_id =\n> 'sale_transaction'..\n>\n> You could also set up optimization barriers with clever use of a CTE\n> or an OFFSET 0 to force it to use a seqscan followed by a sort, but\n> I would look at the other options first.\n>\n> -Kevin\n>\n\nThanks for your help.I create index on channel_id and data_id like your comment. 
- Index: idx_d_channel_id2-- DROP INDEX idx_d_channel_id2;\nCREATE INDEX idx_d_channel_id2 ON sym_data USING btree (channel_id);-- Index: idx_d_channel_id3-- DROP INDEX idx_d_channel_id3;\nCREATE INDEX idx_d_channel_id3 ON sym_data USING btree (data_id) WHERE channel_id::text = 'sale_transaction'::text;-- Index: idx_d_channel_id4\n-- DROP INDEX idx_d_channel_id4;CREATE INDEX idx_d_channel_id4 ON sym_data USING btree (data_id) WHERE channel_id::text = 'item'::text;\nHere is new explan analyzeexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id, d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;\nNested Loop (cost=0.00..1512979014.35 rows=26268463088 width=1401) (actual time=25741.704..7650979.311 rows=2764140 loops=1) -> Index Scan using idx_d_channel_id3 on sym_data d (cost=0.00..1781979.40 rows=3117384 width=1401) (actual time=83.718..55126.002 rows=3124631 loops=1)\n -> Index Scan using sym_data_gap_pkey on sym_data_gap g (cost=0.00..358.37 rows=8426 width=8) (actual time=2.428..2.429 rows=1 loops=3124631) Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n Filter: (g.status = 'GP'::bpchar)Total runtime: 7651803.073 msBut query performance don't change.Please help me.Tuan Hoang ANh\nOn Sat, Aug 6, 2011 at 12:20 AM, Kevin Grittner <[email protected]> wrote:\ntuanhoanganh <[email protected]> wrote:\n\n> I have postgresql 9.0.1\n\nhttp://www.postgresql.org/support/versioning\n\n> 6GB ram\n\n> work_mem = 2097151\n\nI think that has the potential to push you into swapping:\n\ncc=> set work_mem = 2097151;\nSET\ncc=> show work_mem;\n work_mem\n-----------\n 2097151kB\n(1 row)\n\nThat's 2GB, and that much can be allocated, potentially several\ntimes, per connection.\n\n> -> Index Scan using sym_data_pkey on sym_data d\n> (cost=0.00..637148.72 rows=3129103 width=1403)\n> (actual time=71.989..55643.665 rows=3124631 loops=1)\n> Filter: ((channel_id)::text = 'sale_transaction'::text)\n\nThis index scan is going to randomly access all tuples in the\ntable's heap. That is probably going to be much slower than a\nsequential scan. It is apparently choosing this index to avoid a\nsort, because of the mis-estimation on the number of rows. Is it\ncritical that the rows be returned in that order? If not, you might\nsee much faster performance by leaving off the ORDER BY clause so\nthat it can use the seqscan.\n\nYou could potentially make queries like this much faster by indexing\non channel_id, or by indexing on data_id WHERE channel_id =\n'sale_transaction'..\n\nYou could also set up optimization barriers with clever use of a CTE\nor an OFFSET 0 to force it to use a seqscan followed by a sort, but\nI would look at the other options first.\n\n-Kevin",
"msg_date": "Sat, 6 Aug 2011 09:16:12 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning help please"
},
{
"msg_contents": "Tuan --\n\n> \n> Thanks for your help.\n> I create index on channel_id and data_id like your comment. \n...\n<...>\n> \n> explain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id, d.transaction_id, > d.source_node_id, d.external_data, '' from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where\n> d.channel_id='sale_transaction' order by d.data_id asc;\n>\n> Nested Loop (cost=0.00..1512979014.35 rows=26268463088 width=1401) (actual time=25741.704..7650979.311 rows=2764140 loops=1)\n> -> Index Scan using idx_d_channel_id3 on sym_data d (cost=0.00..1781979.40 rows=3117384 width=1401) (actual time=83.718..55126.002 rows=3124631 loops=1)\n> -> Index Scan using sym_data_gap_pkey on sym_data_gap g (cost=0.00..358.37 rows=8426 width=8) (actual time=2.428..2.429 rows=1 loops=3124631)\n> Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n> Filter: (g.status = 'GP'::bpchar)\n> Total runtime: 7651803.073 ms\n> \n> But query performance don't change.\n> Please help me.\n\nDid you run an analyze on the table after building the new indexes ? The row estimates seem to be off wildly,\nalthough that may be a symptom of something else and not related, it is worth ruling out the easily tried.\n\nHTH,\n\nGreg Williamson\n\n",
"msg_date": "Fri, 5 Aug 2011 20:09:43 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning help please"
},
{
"msg_contents": "Yes, I run\nVACUUM VERBOSE ANALYZE sym_data;\nVACUUM VERBOSE ANALYZE sym_data_gap;\nafter create index.\n\nIf i remove ORDER BY, the query run faster.\n\nexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data,\nd.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,\nd.transaction_id, d.source_node_id, d.external_data, '' from sym_data d\ninner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id\nand g.end_id where d.channel_id='sale_transaction';\n\nNested Loop (cost=0.00..1384889042.54 rows=26266634550 width=1400) (actual\ntime=63.546..36699.188 rows=2764140 loops=1)\n -> Index Scan using idx_dg_status on sym_data_gap g (cost=0.00..2802.42\nrows=75838 width=8) (actual time=63.348..122.565 rows=75838 loops=1)\n Index Cond: (status = 'GP'::bpchar)\n -> Index Scan using idx_d_channel_id3 on sym_data d (cost=0.00..13065.83\nrows=346352 width=1400) (actual time=0.027..0.450 rows=36 loops=75838)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\nTotal runtime: 37226.543 ms\n\nOn Sat, Aug 6, 2011 at 10:09 AM, Greg Williamson <[email protected]>wrote:\n\n>\n> Did you run an analyze on the table after building the new indexes ? The\n> row estimates seem to be off wildly,\n> although that may be a symptom of something else and not related, it is\n> worth ruling out the easily tried.\n>\n> HTH,\n>\n> Greg Williamson\n>\n>\n\nYes, I runVACUUM VERBOSE ANALYZE sym_data;VACUUM VERBOSE ANALYZE sym_data_gap;after create index.If i remove ORDER BY, the query run faster.\nexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id, d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where d.channel_id='sale_transaction';\nNested Loop (cost=0.00..1384889042.54 rows=26266634550 width=1400) (actual time=63.546..36699.188 rows=2764140 loops=1) -> Index Scan using idx_dg_status on sym_data_gap g (cost=0.00..2802.42 rows=75838 width=8) (actual time=63.348..122.565 rows=75838 loops=1)\n Index Cond: (status = 'GP'::bpchar) -> Index Scan using idx_d_channel_id3 on sym_data d (cost=0.00..13065.83 rows=346352 width=1400) (actual time=0.027..0.450 rows=36 loops=75838)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))Total runtime: 37226.543 msOn Sat, Aug 6, 2011 at 10:09 AM, Greg Williamson <[email protected]> wrote:\n\nDid you run an analyze on the table after building the new indexes ? The row estimates seem to be off wildly,\nalthough that may be a symptom of something else and not related, it is worth ruling out the easily tried.\n\nHTH,\n\nGreg Williamson",
"msg_date": "Sat, 6 Aug 2011 10:43:50 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning help please"
}
] |
[
{
"msg_contents": "[Please don't top-post; it makes the discussion hard to follow.]\n \n \ntuanhoanganh wrote:\n> Greg Williamson wrote:\n \n>> Did you run an analyze on the table after building the new\n>> indexes? The row estimates seem to be off wildly, although that\n>> may be a symptom of something else\n \nI think that's because the optimizer doesn't know how to estimate the\nrange test properly, and resorts to \"magic numbers\" based on\npercentages of the rows in the table.\n \n> If i remove ORDER BY, the query run faster.\n \nYeah, it thinks there will be 26 billion rows, and that sorting that\nwould be very expensive. You really have only 76 thousand rows,\nwhich wouldn't be so bad. I'm not sure whether this would work, but\nif you need the ordering, you might try:\n \nWITH x AS SELECT * FROM x ORDER BY d.data_id;\n \n-Kevin\n",
"msg_date": "Sat, 06 Aug 2011 11:07:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning\n\t help please"
}
] |
[
{
"msg_contents": "\"Kevin Grittner\" wrote:\n \n> WITH x AS SELECT * FROM x ORDER BY d.data_id;\n \nIt ate part of what I had on that line. (Note to self: don't use\nangle-bracketing in posts.)\n \nTrying again with different punctuation:\n \nWITH x AS [original query] SELECT * FROM x ORDER BY d.data_id;\n \n-Kevin\n\n",
"msg_date": "Sat, 06 Aug 2011 11:26:07 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning\n\t help please"
},
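A concrete rendering of the corrected suggestion, with the column list abbreviated (the real query selects more columns); note that the d alias is not visible outside the CTE, so the outer ORDER BY has to reference data_id directly.

-- One possible spelled-out form of "WITH x AS [original query] SELECT * FROM x ORDER BY ...":
WITH x AS (
    SELECT d.data_id, d.table_name, d.event_type, d.channel_id
    FROM sym_data d
    JOIN sym_data_gap g
      ON g.status = 'GP'
     AND d.data_id BETWEEN g.start_id AND g.end_id
    WHERE d.channel_id = 'sale_transaction'
)
SELECT * FROM x ORDER BY data_id ASC;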
{
"msg_contents": "Thanks for your help. But I can not change the query because it is generate\nby SymmetricDS program. So I only can create index on table and change\nconfig of postgres to tunning the query. Is there any way to do that?\n\nSorry for my English\n\nTuan Hoang ANh\n\nOn Sat, Aug 6, 2011 at 11:26 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> \"Kevin Grittner\" wrote:\n>\n> > WITH x AS SELECT * FROM x ORDER BY d.data_id;\n>\n> It ate part of what I had on that line. (Note to self: don't use\n> angle-bracketing in posts.)\n>\n> Trying again with different punctuation:\n>\n> WITH x AS [original query] SELECT * FROM x ORDER BY d.data_id;\n>\n> -Kevin\n>\n>\n\nThanks for your help. But I can not change the query because it is generate by SymmetricDS program. So I only can create index on table and change config of postgres to tunning the query. Is there any way to do that?\nSorry for my EnglishTuan Hoang ANhOn Sat, Aug 6, 2011 at 11:26 PM, Kevin Grittner <[email protected]> wrote:\n\"Kevin Grittner\" wrote:\n\n> WITH x AS SELECT * FROM x ORDER BY d.data_id;\n\nIt ate part of what I had on that line. (Note to self: don't use\nangle-bracketing in posts.)\n\nTrying again with different punctuation:\n\nWITH x AS [original query] SELECT * FROM x ORDER BY d.data_id;\n\n-Kevin",
"msg_date": "Tue, 9 Aug 2011 01:19:42 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning help please"
},
{
"msg_contents": "tuanhoanganh <[email protected]> wrote:\n \n> I can not change the query because it is generate by SymmetricDS\n> program. So I only can create index on table and change config of\n> postgres to tunning the query. Is there any way to do that?\n \nI'm not familiar with SymetricDS. Would it be possible for you to\ncreate a VIEW which used the faster technique and have SymetricDS\nuse the view?\n \n-Kevin\n",
"msg_date": "Mon, 08 Aug 2011 13:35:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.1 on Windows performance tunning\n\t help please"
}
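Whether SymmetricDS can be pointed at a view is exactly the open question here, so the following is hypothetical: the view name is invented, and it simply wraps the CTE form from earlier in the thread to show the shape of what Kevin is proposing.

-- Hypothetical sketch only; sym_data_sale_ordered is an invented name.
CREATE VIEW sym_data_sale_ordered AS
    WITH x AS (
        SELECT d.*
        FROM sym_data d
        JOIN sym_data_gap g
          ON g.status = 'GP'
         AND d.data_id BETWEEN g.start_id AND g.end_id
        WHERE d.channel_id = 'sale_transaction'
    )
    SELECT * FROM x ORDER BY data_id;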
] |
[
{
"msg_contents": "Hello PG perf junkies, \n\n\nSorry this may get a little long winded. Apologies if the formatting gets\ntrashed. \n\n\n\nBackground:\n\nI have been setting up some new servers for PG and I am getting some odd\nnumbers with zcav, I am hoping a second set of eyes here can point me in the\nright direction. (other tests like bonniee++ (1.03e) and dd also give me odd\n(flat and low) numbers)\n\nI will preface this with, yes I bought greg's book. Yes I read it, and it\nhas helped me in the past, but seem to have hit an oddity. \n\n(hardware,os, and config stuff listed at the end)\n\n\n\n\n\nShort version: my zcav and dd tests look to get I/O bound. My numbers in\nZCAV are flat like and SSD which is odd for 15K rpm disks. \n\n\n\n\nLong version:\n\n\nIn the past when dealing with storage I typically see a large gain with\nmoving from ext3 to XFS, provided I set readahead to 16384 on either\nfilesystem.\n\nI also see typical down ward trends in the MB/s (expected) and upward trends\nin access times (expected) with either file system. \n\n\nThese blades + storage-blades are giving me atypical results .\n\n\nI am not seeing a dramatic down turn in MB/s in zcav nor am I seeing access\ntime really increase. (something I have only seen before when I forget to\nhave readahead set high enough) things are just flat at about 420MB/s in\nzcav @ .6ms for access time with XFS and ~470MB/s @.56ms for ext3.\n\nFWIW I get worthless results with zcav and bonnie++ using 1.03 or 1.96\nsometimes, which isn't something I have had happen before even though greg\ndoes mention it. \n\n\nAlso when running zcav I will see kswapdX (0 and 1 in my two socket case)\nstart to eat significant cpu time (~40-50% each), with dd - kswapd and\npdflush become very active as well. This only happens once free mem gets\nlow. As well zcav or dd looks to get CPU bound at 100% while i/o wait stays\nalmost at 0.0 most of the time. (iostat -x -d shows util % at 98% though). I\nsee this with either XFS or ext3. Also when I cat /proc/zoneinfo it looks\nlike I am getting heavy contention for a single page in DMA while the tests\nare running. (see end of email for zoneinfo)\n\nBonnie is giving me 99% cpu usage reported. Watching it while running it\nbounces between 100 and 99. Kswap goes nuts here as well. \n\n\nI am lead to believe that I may need a 2.6.32 (rhel 6.1) or higher kernel to\nsee some of the kswapd issues go away. (testing that hopefully later this\nweek). Maybe that will take care of everything. I don't know yet. \n\n Side note: Setting vm.swappiness to 10 (or 0) doesn't help, although others\non the RHEL support site indicated it did fix kswap issues for them. \n\n\n\nRunning zcav on my home system (4 disk raid 1+0 3ware controller +BBWC using\next4 ubunut 2.6.38-8 I don't see zcav near 100% and I see lots of i/o wait\nas expected, and my zoneinfo for DMA doesn't sit at 1)\n\nNot going to focus too much on ext3 since I am pretty sure I should be able\nto get better numbers from XFS. \n\n\n\nWith mkfs.xfs I have done some reading and it appears that it can't\nautomatically read the stripsize (aka stripe size to anyone other than HP)\nor the number of disks. 
So I have been using the following:\n\nmkfs.xfs -b size=4k -d su=256k,sw=6,agcount=256\n\n(256K is the default hp stripsize for raid1+0, I have 12 disks in raid 10 so\nI used sw=6, agcount of 256 because that is a (random) number I got from\ngoogle that seemed in the ball park.)\n\n\n\n\n\n\nwhich gives me:\nmeta-data=/dev/cciss/c0d0 isize=256 agcount=256, agsize=839936\nblks\n = sectsz=512 attr=2\ndata = bsize=4096 blocks=215012774, imaxpct=25\n = sunit=64 swidth=384 blks\nnaming =version 2 bsize=4096 ascii-ci=0\nlog =internal log bsize=4096 blocks=32768, version=2\n = sectsz=512 sunit=64 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n\n(if I don't specif the agcount or su,sw stuff I get\nmeta-data=/dev/cciss/c0d0 isize=256 agcount=4, agsize=53753194\nblks\n = sectsz=512 attr=2\ndata = bsize=4096 blocks=215012774, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0\nlog =internal log bsize=4096 blocks=32768, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0)\n\n)\n\n\n\n\n\nSo it seems like I should be giving it the extra parameters at mkfs.xfs\ntime... could someone confirm ? In the past I have never specified the su or\nsw or ag groups I have taken the defaults. But since I am getting odd\nnumbers here I started playing with em. Getting little or no change. \n\n\n\nfor mounting:\nlogbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\n\n\n\n(I know that noatime also means nodiratime according xfs.org, but in the\npast I seem to get better numbers when having both)\n\nI am using nobarrier because I have a battery backed raid cache and the FAQ\n@ XFS.org seems to indicate that is the right choice. \n\n\nFWIW, if I put sunit and swidth in the mount options it seems to change them\nlower (when viewed with xfs_info) so I haven't been putting it in the mount\noptions. \n\n\n\n\nverify readahead:\nblockdev --getra /dev/cciss/c0d0\n16384\n\n\n\n\n\n\n\nIf anyone wants the benchmark outputs I can send them, but basically zcav\nbeing FLAT for bother MB/s and access time tells me something is wrong. And\nit will take days for me to re run all the ones I have done. I didn't save\nmuch once I saw results that don't fit with what I thought I should get.\n\n\n\n\nI haven't done much with pgbench yet as I figure its pointless to move on\nwhile the raw I/O numbers look off to me. At that time I am going to make\nthe call between wal on the OS raid 1 or going to 10 data disks and 2 os and\n2 wal. \n\n\n\n\n\n\nI have gone up to 2.6.18-27(something, wanna say 2 or 4) to see if the issue\nwent away, it didn't. I have gone back to 2.6.18-238.5 and put in a new\nCCISS driver directly from HP, and the issue also does not go away. People\nat work are thinking it might kernel bug that we have somehow never notice\nbefore which is why we are going to look at RHEL 6.1. we tried a 5.3 kernel\nthat someone on rh bugzilla said didn't have the issue but this blade had a\nfit with it - no network, lots of other stuff not working and then it kernel\npanic'd so we quickly gave up on that... \n\n\n\nWe may try and shoehorn in the 6.1 kernel and a few dependencies as well.\nMoving to RHEL 6.1. will mean a long test period before it can go into prod\nand we want to get this new hardware in sooner than that can be done. 
(even\nwith all it's problems its probably still faster than what it is replacing\njust from the 48GB of ram and 3 gen newer CPUS)\n\n\n\n\n\n\n\n\nHardware and config stuff as it sits right now.\n\n\n\nBlade Hardware:\nProLiant BL460c G7 (bios power flag set to high performance)\n2 intel 5660 cpus. (HT left on)\n48GB of ram (12x4GB @ 1333MHz)\nSmart Array P410i (Embedded)\nPoints of interest from hpacucli -\n\t- Hardware Revision: Rev C\n\t- Firmware Version: 3.66\n\t- Cache Board Present: True\n\t- Elevator Sort: Enabled\n\t- Cache Status: OK\n\t- Cache Backup Power Source: Capacitors\n \t- Battery/Capacitor Count: 1\n \t- Battery/Capacitor Status: OK\n\t- Total Cache Size: 512 MB\n \t- Accelerator Ratio: 25% Read / 75% Write\n\t- Strip Size: 256 KB\n\t- 2x 15K RPM 146GB 6Gbps SAS in raid 1 for OS (ext3)\n\t- Array Accelerator: Enabled\n\t- Status: OK\n\t- drives firmware = HPD5\n\nBlade Storage subsystem:\nHP SB2200 (12 disk 15K )\n\nPoints of interest from hpacucli \n\nSmart Array P410i in Slot 3\n Controller Status: OK\n Hardware Revision: Rev C\n Firmware Version: 3.66\n Elevator Sort: Enabled\n Wait for Cache Room: Disabled\n Cache Board Present: True\n Cache Status: OK\n Accelerator Ratio: 25% Read / 75% Write\n Drive Write Cache: Disabled\n Total Cache Size: 1024 MB\n No-Battery Write Cache: Disabled\n Cache Backup Power Source: Capacitors\n Battery/Capacitor Count: 1\n Battery/Capacitor Status: OK\n SATA NCQ Supported: True\n\n\n Logical Drive: 1\n Size: 820.2 GB\n Fault Tolerance: RAID 1+0\n Heads: 255\n Sectors Per Track: 32\n Cylinders: 65535\n Strip Size: 256 KB\n Status: OK\n Array Accelerator: Enabled\n Disk Name: /dev/cciss/c0d0\n Mount Points: /raid 820.2 GB\n OS Status: LOCKED\n\n12 drives in Raid 1+0, using XFS. \n\n\nOS: \nOS: RHEL 5.6 (2.6.18-238.9.1.el5)\nDatabase use: PG 9.0.2 for OLTP. \n\n\nCCISS info:\nfilename:\n/lib/modules/2.6.18-238.9.1.el5/kernel/drivers/block/cciss.ko\nversion: 3.6.22-RH1\ndescription: Driver for HP Controller SA5xxx SA6xxx version 3.6.22-RH1\nauthor: Hewlett-Packard Company\n\nXFS INFO:\nxfsdump-2.2.48-3.el5\nxfsprogs-2.10.2-7.el5\n\nhead of ZONEINFO while zcav is running and kswap is going nuts:\nthe min,low,high of 1 seems odd to me. On other systems these get above 1. \n\nNode 0, zone DMA\n pages free 2493\n min 1\n low 1\n high 1\n active 0\n inactive 0\n scanned 0 (a: 3 i: 3)\n spanned 4096\n present 2393\n nr_anon_pages 0\n nr_mapped 1\n nr_file_pages 0\n nr_slab 0\n nr_page_table_pages 0\n nr_dirty 0\n nr_writeback 0\n nr_unstable 0\n nr_bounce 0\n numa_hit 0\n numa_miss 0\n numa_foreign 0\n numa_interleave 0\n numa_local 0\n numa_other 0\n protection: (0, 3822, 24211, 24211)\n pagesets\n all_unreclaimable: 1\n prev_priority: 12\n start_pfn: 0\n\n\n\nnumastat (probably worthless since I have been pounding on this box for a\nwhile before capturing it)\n\n node0 node1\nnuma_hit 3126413031 247696913\nnuma_miss 95489353 2781917287\nnuma_foreign 2781917287 95489353\ninterleave_hit 81178 97872\nlocal_node 3126297257 247706110\nother_node 95605127 2781908090\n\n\n\n\n\n",
"msg_date": "Mon, 8 Aug 2011 00:14:39 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "XFS options and benchmark woes"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: mark [mailto:[email protected]]\n> Sent: Monday, August 08, 2011 12:15 AM\n> To: '[email protected]'\n> Subject: XFS options and benchmark woes\n> \n> Hello PG perf junkies,\n> \n> \n> Sorry this may get a little long winded. Apologies if the formatting\n> gets trashed.\n> \n> \n> \n> Background:\n> \n> I have been setting up some new servers for PG and I am getting some\n> odd numbers with zcav, I am hoping a second set of eyes here can point\n> me in the right direction. (other tests like bonniee++ (1.03e) and dd\n> also give me odd (flat and low) numbers)\n> \n> I will preface this with, yes I bought greg's book. Yes I read it, and\n> it has helped me in the past, but seem to have hit an oddity.\n> \n> (hardware,os, and config stuff listed at the end)\n> \n> \n> \n> \n> \n> Short version: my zcav and dd tests look to get I/O bound. My numbers\n> in ZCAV are flat like and SSD which is odd for 15K rpm disks.\n\n\nuggg, ZCAV numbers appear to be CPU bound. Not i/o .\n\n> \n> \n> \n> \n> Long version:\n> \n> \n> In the past when dealing with storage I typically see a large gain with\n> moving from ext3 to XFS, provided I set readahead to 16384 on either\n> filesystem.\n> \n> I also see typical down ward trends in the MB/s (expected) and upward\n> trends in access times (expected) with either file system.\n> \n> \n> These blades + storage-blades are giving me atypical results .\n> \n> \n> I am not seeing a dramatic down turn in MB/s in zcav nor am I seeing\n> access time really increase. (something I have only seen before when I\n> forget to have readahead set high enough) things are just flat at about\n> 420MB/s in zcav @ .6ms for access time with XFS and ~470MB/s @.56ms for\n> ext3.\n> \n> FWIW I get worthless results with zcav and bonnie++ using 1.03 or 1.96\n> sometimes, which isn't something I have had happen before even though\n> greg does mention it.\n> \n> \n> Also when running zcav I will see kswapdX (0 and 1 in my two socket\n> case) start to eat significant cpu time (~40-50% each), with dd -\n> kswapd and pdflush become very active as well. This only happens once\n> free mem gets low. As well zcav or dd looks to get CPU bound at 100%\n> while i/o wait stays almost at 0.0 most of the time. (iostat -x -d\n> shows util % at 98% though). I see this with either XFS or ext3. Also\n> when I cat /proc/zoneinfo it looks like I am getting heavy contention\n> for a single page in DMA while the tests are running. (see end of email\n> for zoneinfo)\n> \n> Bonnie is giving me 99% cpu usage reported. Watching it while running\n> it bounces between 100 and 99. Kswap goes nuts here as well.\n> \n> \n> I am lead to believe that I may need a 2.6.32 (rhel 6.1) or higher\n> kernel to see some of the kswapd issues go away. (testing that\n> hopefully later this week). Maybe that will take care of everything. 
I\n> don't know yet.\n> \n> Side note: Setting vm.swappiness to 10 (or 0) doesn't help, although\n> others on the RHEL support site indicated it did fix kswap issues for\n> them.\n> \n> \n> \n> Running zcav on my home system (4 disk raid 1+0 3ware controller +BBWC\n> using ext4 ubunut 2.6.38-8 I don't see zcav near 100% and I see lots of\n> i/o wait as expected, and my zoneinfo for DMA doesn't sit at 1)\n> \n> Not going to focus too much on ext3 since I am pretty sure I should be\n> able to get better numbers from XFS.\n> \n> \n> \n> With mkfs.xfs I have done some reading and it appears that it can't\n> automatically read the stripsize (aka stripe size to anyone other than\n> HP) or the number of disks. So I have been using the following:\n> \n> mkfs.xfs -b size=4k -d su=256k,sw=6,agcount=256\n> \n> (256K is the default hp stripsize for raid1+0, I have 12 disks in raid\n> 10 so I used sw=6, agcount of 256 because that is a (random) number I\n> got from google that seemed in the ball park.)\n> \n> \n> \n> \n> \n> \n> which gives me:\n> meta-data=/dev/cciss/c0d0 isize=256 agcount=256,\n> agsize=839936 blks\n> = sectsz=512 attr=2\n> data = bsize=4096 blocks=215012774,\n> imaxpct=25\n> = sunit=64 swidth=384 blks\n> naming =version 2 bsize=4096 ascii-ci=0\n> log =internal log bsize=4096 blocks=32768, version=2\n> = sectsz=512 sunit=64 blks, lazy-\n> count=1\n> realtime =none extsz=4096 blocks=0, rtextents=0\n> \n> \n> (if I don't specif the agcount or su,sw stuff I get\n> meta-data=/dev/cciss/c0d0 isize=256 agcount=4,\n> agsize=53753194 blks\n> = sectsz=512 attr=2\n> data = bsize=4096 blocks=215012774,\n> imaxpct=25\n> = sunit=0 swidth=0 blks\n> naming =version 2 bsize=4096 ascii-ci=0\n> log =internal log bsize=4096 blocks=32768, version=2\n> = sectsz=512 sunit=0 blks, lazy-\n> count=1\n> realtime =none extsz=4096 blocks=0, rtextents=0)\n> \n> )\n> \n> \n> \n> \n> \n> So it seems like I should be giving it the extra parameters at mkfs.xfs\n> time... could someone confirm ? In the past I have never specified the\n> su or sw or ag groups I have taken the defaults. But since I am getting\n> odd numbers here I started playing with em. Getting little or no\n> change.\n> \n> \n> \n> for mounting:\n> logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\n> \n> \n> \n> (I know that noatime also means nodiratime according xfs.org, but in\n> the past I seem to get better numbers when having both)\n> \n> I am using nobarrier because I have a battery backed raid cache and the\n> FAQ @ XFS.org seems to indicate that is the right choice.\n> \n> \n> FWIW, if I put sunit and swidth in the mount options it seems to change\n> them lower (when viewed with xfs_info) so I haven't been putting it in\n> the mount options.\n> \n> \n> \n> \n> verify readahead:\n> blockdev --getra /dev/cciss/c0d0\n> 16384\n> \n> \n> \n> \n> \n> \n> \n> If anyone wants the benchmark outputs I can send them, but basically\n> zcav being FLAT for bother MB/s and access time tells me something is\n> wrong. And it will take days for me to re run all the ones I have done.\n> I didn't save much once I saw results that don't fit with what I\n> thought I should get.\n> \n> \n> \n> \n> I haven't done much with pgbench yet as I figure its pointless to move\n> on while the raw I/O numbers look off to me. 
At that time I am going to\n> make the call between wal on the OS raid 1 or going to 10 data disks\n> and 2 os and 2 wal.\n> \n> \n> \n> \n> \n> \n> I have gone up to 2.6.18-27(something, wanna say 2 or 4) to see if the\n> issue went away, it didn't. I have gone back to 2.6.18-238.5 and put in\n> a new CCISS driver directly from HP, and the issue also does not go\n> away. People at work are thinking it might kernel bug that we have\n> somehow never notice before which is why we are going to look at RHEL\n> 6.1. we tried a 5.3 kernel that someone on rh bugzilla said didn't\n> have the issue but this blade had a fit with it - no network, lots of\n> other stuff not working and then it kernel panic'd so we quickly gave\n> up on that...\n> \n> \n> \n> We may try and shoehorn in the 6.1 kernel and a few dependencies as\n> well. Moving to RHEL 6.1. will mean a long test period before it can go\n> into prod and we want to get this new hardware in sooner than that can\n> be done. (even with all it's problems its probably still faster than\n> what it is replacing just from the 48GB of ram and 3 gen newer CPUS)\n> \n> \n> \n> \n> \n> \n> \n> \n> Hardware and config stuff as it sits right now.\n> \n> \n> \n> Blade Hardware:\n> ProLiant BL460c G7 (bios power flag set to high performance)\n> 2 intel 5660 cpus. (HT left on)\n> 48GB of ram (12x4GB @ 1333MHz)\n> Smart Array P410i (Embedded)\n> Points of interest from hpacucli -\n> \t- Hardware Revision: Rev C\n> \t- Firmware Version: 3.66\n> \t- Cache Board Present: True\n> \t- Elevator Sort: Enabled\n> \t- Cache Status: OK\n> \t- Cache Backup Power Source: Capacitors\n> \t- Battery/Capacitor Count: 1\n> \t- Battery/Capacitor Status: OK\n> \t- Total Cache Size: 512 MB\n> \t- Accelerator Ratio: 25% Read / 75% Write\n> \t- Strip Size: 256 KB\n> \t- 2x 15K RPM 146GB 6Gbps SAS in raid 1 for OS (ext3)\n> \t- Array Accelerator: Enabled\n> \t- Status: OK\n> \t- drives firmware = HPD5\n> \n> Blade Storage subsystem:\n> HP SB2200 (12 disk 15K )\n> \n> Points of interest from hpacucli\n> \n> Smart Array P410i in Slot 3\n> Controller Status: OK\n> Hardware Revision: Rev C\n> Firmware Version: 3.66\n> Elevator Sort: Enabled\n> Wait for Cache Room: Disabled\n> Cache Board Present: True\n> Cache Status: OK\n> Accelerator Ratio: 25% Read / 75% Write\n> Drive Write Cache: Disabled\n> Total Cache Size: 1024 MB\n> No-Battery Write Cache: Disabled\n> Cache Backup Power Source: Capacitors\n> Battery/Capacitor Count: 1\n> Battery/Capacitor Status: OK\n> SATA NCQ Supported: True\n> \n> \n> Logical Drive: 1\n> Size: 820.2 GB\n> Fault Tolerance: RAID 1+0\n> Heads: 255\n> Sectors Per Track: 32\n> Cylinders: 65535\n> Strip Size: 256 KB\n> Status: OK\n> Array Accelerator: Enabled\n> Disk Name: /dev/cciss/c0d0\n> Mount Points: /raid 820.2 GB\n> OS Status: LOCKED\n> \n> 12 drives in Raid 1+0, using XFS.\n> \n> \n> OS:\n> OS: RHEL 5.6 (2.6.18-238.9.1.el5)\n> Database use: PG 9.0.2 for OLTP.\n> \n> \n> CCISS info:\n> filename: /lib/modules/2.6.18-\n> 238.9.1.el5/kernel/drivers/block/cciss.ko\n> version: 3.6.22-RH1\n> description: Driver for HP Controller SA5xxx SA6xxx version 3.6.22-\n> RH1\n> author: Hewlett-Packard Company\n> \n> XFS INFO:\n> xfsdump-2.2.48-3.el5\n> xfsprogs-2.10.2-7.el5\n> \n> head of ZONEINFO while zcav is running and kswap is going nuts:\n> the min,low,high of 1 seems odd to me. 
On other systems these get above\n> 1.\n> \n> Node 0, zone DMA\n> pages free 2493\n> min 1\n> low 1\n> high 1\n> active 0\n> inactive 0\n> scanned 0 (a: 3 i: 3)\n> spanned 4096\n> present 2393\n> nr_anon_pages 0\n> nr_mapped 1\n> nr_file_pages 0\n> nr_slab 0\n> nr_page_table_pages 0\n> nr_dirty 0\n> nr_writeback 0\n> nr_unstable 0\n> nr_bounce 0\n> numa_hit 0\n> numa_miss 0\n> numa_foreign 0\n> numa_interleave 0\n> numa_local 0\n> numa_other 0\n> protection: (0, 3822, 24211, 24211)\n> pagesets\n> all_unreclaimable: 1\n> prev_priority: 12\n> start_pfn: 0\n> \n> \n> \n> numastat (probably worthless since I have been pounding on this box for\n> a while before capturing it)\n> \n> node0 node1\n> numa_hit 3126413031 247696913\n> numa_miss 95489353 2781917287\n> numa_foreign 2781917287 95489353\n> interleave_hit 81178 97872\n> local_node 3126297257 247706110\n> other_node 95605127 2781908090\n> \n> \n> \n\n\n",
"msg_date": "Mon, 8 Aug 2011 00:18:24 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS options and benchmark woes"
}
] |
[
{
"msg_contents": "Hello PG perf junkies, \n\n\nSorry this may get a little long winded. Apologies if the formatting gets\ntrashed. Also apologies if this double posts. (I originally set it yesterday\nwith the wrong account and the message is stalled - so my bad there) if\nsomeone is a mod and it's still in the wait queue feel free to remove them. \n\n\nShort version: \nmy zcav and dd tests look to get ->CPU bound<-. Yes CPU bound, with junk\nnumbers. My numbers in ZCAV are flat like and SSD which is odd for 15K rpm\ndisks. I am not sure what the point of moving further would be given these\nunexpected poor numbers. Well I knew 12 disks wasn't going to be something\nthat impressed me, I am used to 24, but I was expecting about 40-50% better\nthan what I am getting.\n\n\nBackground:\n\nI have been setting up some new servers for PG and I am getting some odd\nnumbers with zcav, I am hoping a second set of eyes here can point me in the\nright direction. (other tests like bonniee++ (1.03e) and dd also give me odd\n(flat and low) numbers)\n\nI will preface this with, yes I bought greg's book. Yes I read it, and it\nhas helped me in the past, but seem to have hit an oddity. \n\n(hardware,os, and config stuff listed at the end)\n\n\n\n\nLong version:\n\n\nIn the past when dealing with storage I typically see a large gain with\nmoving from ext3 to XFS, provided I set readahead to 16384 on either\nfilesystem.\n\nI also see typical down ward trends in the MB/s (expected) and upward trends\nin access times (expected) with either file system. \n\n\nThese blades + storage-blades are giving me atypical results .\n\n\nI am not seeing a dramatic down turn in MB/s in zcav nor am I seeing access\ntime really increase. (something I have only seen before when I forget to\nhave readahead set high enough) things are just flat at about 420MB/s in\nzcav @ .6ms for access time with XFS and ~470MB/s @.56ms for ext3.\n\nFWIW I get worthless results with zcav and bonnie++ using 1.03 or 1.96\nsometimes, which isn't something I have had happen before even though greg\ndoes mention it. \n\n\nAlso when running zcav I will see kswapdX (0 and 1 in my two socket case)\nstart to eat significant cpu time (~40-50% each), with dd - kswapd and\npdflush become very active as well. This only happens once free mem gets\nlow. As well zcav or dd looks to get CPU bound at 100% while i/o wait stays\nalmost at 0.0 most of the time. (iostat -x -d shows util % at 98% though). I\nsee this with either XFS or ext3. Also when I cat /proc/zoneinfo it looks\nlike I am getting heavy contention for a single page in DMA while the tests\nare running. (see end of email for zoneinfo)\n\nBonnie is giving me 99% cpu usage reported. Watching it while running it\nbounces between 100 and 99. Kswap goes nuts here as well. \n\n\nI am lead to believe that I may need a 2.6.32 (rhel 6.1) or higher kernel to\nsee some of the kswapd issues go away. (testing that hopefully later this\nweek). Maybe that will take care of everything. I don't know yet. \n\n Side note: Setting vm.swappiness to 10 (or 0) doesn't help, although others\non the RHEL support site indicated it did fix kswap issues for them. \n\n\n\nRunning zcav on my home system (4 disk raid 1+0 3ware controller +BBWC using\next4 ubunut 2.6.38-8 I don't see zcav near 100% and I see lots of i/o wait\nas expected, and my zoneinfo for DMA doesn't sit at 1)\n\nNot going to focus too much on ext3 since I am pretty sure I should be able\nto get better numbers from XFS. 
\n\n\n\nWith mkfs.xfs I have done some reading and it appears that it can't\nautomatically read the stripsize (aka stripe size to anyone other than HP)\nor the number of disks. So I have been using the following:\n\nmkfs.xfs -b size=4k -d su=256k,sw=6,agcount=256\n\n(256K is the default hp stripsize for raid1+0, I have 12 disks in raid 10 so\nI used sw=6, agcount of 256 because that is a (random) number I got from\ngoogle that seemed in the ball park.)\n\n\n\n\n\n\nwhich gives me:\nmeta-data=/dev/cciss/c0d0 isize=256 agcount=256, agsize=839936\nblks\n = sectsz=512 attr=2\ndata = bsize=4096 blocks=215012774, imaxpct=25\n = sunit=64 swidth=384 blks\nnaming =version 2 bsize=4096 ascii-ci=0\nlog =internal log bsize=4096 blocks=32768, version=2\n = sectsz=512 sunit=64 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n\n(if I don't specif the agcount or su,sw stuff I get\nmeta-data=/dev/cciss/c0d0 isize=256 agcount=4, agsize=53753194\nblks\n = sectsz=512 attr=2\ndata = bsize=4096 blocks=215012774, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0\nlog =internal log bsize=4096 blocks=32768, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0)\n\n)\n\n\n\n\n\nSo it seems like I should be giving it the extra parameters at mkfs.xfs\ntime... could someone confirm ? In the past I have never specified the su or\nsw or ag groups I have taken the defaults. But since I am getting odd\nnumbers here I started playing with em. Getting little or no change. \n\n\n\nfor mounting:\nlogbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\n\n\n\n(I know that noatime also means nodiratime according xfs.org, but in the\npast I seem to get better numbers when having both)\n\nI am using nobarrier because I have a battery backed raid cache and the FAQ\n@ XFS.org seems to indicate that is the right choice. \n\n\nFWIW, if I put sunit and swidth in the mount options it seems to change them\nlower (when viewed with xfs_info) so I haven't been putting it in the mount\noptions. \n\n\n\n\nverify readahead:\nblockdev --getra /dev/cciss/c0d0\n16384\n\n\n\n\n\n\n\nIf anyone wants the benchmark outputs I can send them, but basically zcav\nbeing FLAT for bother MB/s and access time tells me something is wrong. And\nit will take days for me to re run all the ones I have done. I didn't save\nmuch once I saw results that don't fit with what I thought I should get.\n\n\n\n\nI haven't done much with pgbench yet as I figure its pointless to move on\nwhile the raw I/O numbers look off to me. At that time I am going to make\nthe call between wal on the OS raid 1 or going to 10 data disks and 2 os and\n2 wal. \n\n\n\n\n\n\nI have gone up to 2.6.18-27(something, wanna say 2 or 4) to see if the issue\nwent away, it didn't. I have gone back to 2.6.18-238.5 and put in a new\nCCISS driver directly from HP, and the issue also does not go away. People\nat work are thinking it might kernel bug that we have somehow never notice\nbefore which is why we are going to look at RHEL 6.1. we tried a 5.3 kernel\nthat someone on rh bugzilla said didn't have the issue but this blade had a\nfit with it - no network, lots of other stuff not working and then it kernel\npanic'd so we quickly gave up on that... \n\n\n\nWe may try and shoehorn in the 6.1 kernel and a few dependencies as well.\nMoving to RHEL 6.1. will mean a long test period before it can go into prod\nand we want to get this new hardware in sooner than that can be done. 
(even\nwith all it's problems its probably still faster than what it is replacing\njust from the 48GB of ram and 3 gen newer CPUS)\n\n\n\n\n\n\n\n\nHardware and config stuff as it sits right now.\n\n\n\nBlade Hardware:\nProLiant BL460c G7 (bios power flag set to high performance)\n2 intel 5660 cpus. (HT left on)\n48GB of ram (12x4GB @ 1333MHz)\nSmart Array P410i (Embedded)\nPoints of interest from hpacucli -\n\t- Hardware Revision: Rev C\n\t- Firmware Version: 3.66\n\t- Cache Board Present: True\n\t- Elevator Sort: Enabled\n\t- Cache Status: OK\n\t- Cache Backup Power Source: Capacitors\n \t- Battery/Capacitor Count: 1\n \t- Battery/Capacitor Status: OK\n\t- Total Cache Size: 512 MB\n \t- Accelerator Ratio: 25% Read / 75% Write\n\t- Strip Size: 256 KB\n\t- 2x 15K RPM 146GB 6Gbps SAS in raid 1 for OS (ext3)\n\t- Array Accelerator: Enabled\n\t- Status: OK\n\t- drives firmware = HPD5\n\nBlade Storage subsystem:\nHP SB2200 (12 disk 15K )\n\nPoints of interest from hpacucli \n\nSmart Array P410i in Slot 3\n Controller Status: OK\n Hardware Revision: Rev C\n Firmware Version: 3.66\n Elevator Sort: Enabled\n Wait for Cache Room: Disabled\n Cache Board Present: True\n Cache Status: OK\n Accelerator Ratio: 25% Read / 75% Write\n Drive Write Cache: Disabled\n Total Cache Size: 1024 MB\n No-Battery Write Cache: Disabled\n Cache Backup Power Source: Capacitors\n Battery/Capacitor Count: 1\n Battery/Capacitor Status: OK\n SATA NCQ Supported: True\n\n\n Logical Drive: 1\n Size: 820.2 GB\n Fault Tolerance: RAID 1+0\n Heads: 255\n Sectors Per Track: 32\n Cylinders: 65535\n Strip Size: 256 KB\n Status: OK\n Array Accelerator: Enabled\n Disk Name: /dev/cciss/c0d0\n Mount Points: /raid 820.2 GB\n OS Status: LOCKED\n\n12 drives in Raid 1+0, using XFS. \n\n\nOS: \nOS: RHEL 5.6 (2.6.18-238.9.1.el5)\nDatabase use: PG 9.0.2 for OLTP. \n\n\nCCISS info:\nfilename:\n/lib/modules/2.6.18-238.9.1.el5/kernel/drivers/block/cciss.ko\nversion: 3.6.22-RH1\ndescription: Driver for HP Controller SA5xxx SA6xxx version 3.6.22-RH1\nauthor: Hewlett-Packard Company\n\nXFS INFO:\nxfsdump-2.2.48-3.el5\nxfsprogs-2.10.2-7.el5\n\nXFS mkfs string:\nmkfs.xfs -b size=4k -d su=256k,sw=6,agcount=256\n\n\nmkfs.xfs output:\nmeta-data=/dev/cciss/c0d0 isize=256 agcount=256, agsize=839936\nblks\n = sectsz=512 attr=2\ndata = bsize=4096 blocks=215012774, imaxpct=25\n = sunit=64 swidth=384 blks\nnaming =version 2 bsize=4096 ascii-ci=0\nlog =internal log bsize=4096 blocks=32768, version=2\n = sectsz=512 sunit=64 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n\nhead of ZONEINFO while zcav is running and kswap is going nuts:\nthe min,low,high of 1 seems odd to me. On other systems these get above 1. \n\nNode 0, zone DMA\n pages free 2493\n min 1\n low 1\n high 1\n active 0\n inactive 0\n scanned 0 (a: 3 i: 3)\n spanned 4096\n present 2393\n nr_anon_pages 0\n nr_mapped 1\n nr_file_pages 0\n nr_slab 0\n nr_page_table_pages 0\n nr_dirty 0\n nr_writeback 0\n nr_unstable 0\n nr_bounce 0\n numa_hit 0\n numa_miss 0\n numa_foreign 0\n numa_interleave 0\n numa_local 0\n numa_other 0\n protection: (0, 3822, 24211, 24211)\n pagesets\n all_unreclaimable: 1\n prev_priority: 12\n start_pfn: 0\n\n\n\nnumastat (probably worthless since I have been pounding on this box for a\nwhile before capturing it)\n\n node0 node1\nnuma_hit 3126413031 247696913\nnuma_miss 95489353 2781917287\nnuma_foreign 2781917287 95489353\ninterleave_hit 81178 97872\nlocal_node 3126297257 247706110\nother_node 95605127 2781908090\n\n",
"msg_date": "Mon, 8 Aug 2011 20:06:28 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "benchmark woes and XFS options"
},
{
"msg_contents": "I think your notion that you have an HP CCISS driver in this older \nkernel that just doesn't drive your card very fast is worth exploring. \nWhat I sometimes do in the situation you're in is boot a Linux \ndistribution that comes with a decent live CD, such as Debian or \nUbuntu. Just mount the suspect drive, punch up read-ahead, and re-test \nperformance. That should work well enough to do a simple dd test, and \nprobably well enough to compile and run bonnie++ too. If that gives \ngood performance numbers, it should narrow the list of possible causes \nconsiderably. You really need to separate out \"bad driver\" from the \nother possibilities here given what you've described, and that's a low \nimpact way to do it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 08 Aug 2011 23:42:19 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: benchmark woes and XFS options"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Greg Smith [mailto:[email protected]]\n> Sent: Monday, August 08, 2011 9:42 PM\n> To: mark\n> Cc: [email protected]\n> Subject: Re: [PERFORM] benchmark woes and XFS options\n> \n> I think your notion that you have an HP CCISS driver in this older\n> kernel that just doesn't drive your card very fast is worth exploring.\n> What I sometimes do in the situation you're in is boot a Linux\n> distribution that comes with a decent live CD, such as Debian or\n> Ubuntu. Just mount the suspect drive, punch up read-ahead, and re-test\n> performance. That should work well enough to do a simple dd test, and\n> probably well enough to compile and run bonnie++ too. If that gives\n> good performance numbers, it should narrow the list of possible causes\n> considerably. You really need to separate out \"bad driver\" from the\n> other possibilities here given what you've described, and that's a low\n> impact way to do it.\n> \n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\nThanks Greg.\n\nI will try and give HPSA a whirl and report back. Both with single disk and\nthe whole raid set. I am out of the office this week so I might have some\ndelay before I can do some more detective work. \n\nI don't think I have any gear that won't require either the CCISS or HPSA\ndriver and in SFF drives. But will try and look around. \n\n\n\n-Mark\n\n\n\n",
"msg_date": "Mon, 8 Aug 2011 22:13:58 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: benchmark woes and XFS options"
}
] |
[
{
"msg_contents": "Dear All,\n\nI have some problems with regexp queries performance - common sense tells me\nthat my queries should run faster than they do.\n\nThe database - table in question has 590 K records, table's size is 3.5GB. I\nam effectively querying a single attribute \"subject\" which has an average\nsize of 2KB, so we are doing a query on ~1GB of data. The query looks more\nor less like this:\n\nSELECT T.tender_id FROM archive_tender T WHERE\n(( T.subject !~* '\\\\mpattern1.*\\\\M' ) AND ( T.subject ~* '\\\\mpattern2\\\\M' OR\n[4-5 more similar terms] ) AND T.erased = 0 AND T.rejected = 0\nORDER BY\n tender_id DESC\nLIMIT\n 10000;\n\nThe planner shows seq scan on subject which is OK with regexp match.\n\nNow, the query above takes about 60sec to execute; exactly: 70s for the\nfirst run and 60s for the next runs. In my opinion this is too long: It\nshould take 35 s to read the whole table into RAM (assuming 100 MB/s\ntransfers - half the HDD benchmarked speed). With 12 GB of RAM the whole\ntable should be easily buffered on the operating system level. The regexp\nmatch on 1 GB of data takes 1-2 s (I benchmarked it with a simple pcre\ntest). The system is not in the production mode, so there is no additional\ndatabase activity (no reads, no updates, effectively db is read-only)\n\nTo summarize: any idea how to speed up this query? (please, don't suggest\nregexp indexing - in this application it would be too time consuming to\nimplement them, and besides - as above - I think that Postgres should do\nbetter here even with seq-scan).\n\nServer parameters:\nRAM: 12 GB\nCores: 8\nHDD: SATA; shows 200 MB/s transfer speed\nOS: Linux 64bit; Postgres 8.4\n\n\nSome performance params from postgresql.conf:\nmax_connections = 16\nshared_buffers = 24MB\ntemp_buffers = 128MB\nmax_prepared_transactions = 50\nwork_mem = 128MB\nmaintenance_work_mem = 1GB\neffective_cache_size = 8GB\n\nDatabase is vacuumed.\n\n\nRegards,\n\nGreg\n\nDear All,I have some problems with regexp queries performance - common sense tells me that my queries should run faster than they do.The database - table in question has 590 K records, table's size is 3.5GB. I am effectively querying a single attribute \"subject\" which has an average size of 2KB, so we are doing a query on ~1GB of data. The query looks more or less like this:\nSELECT T.tender_id FROM archive_tender T WHERE(( T.subject !~* '\\\\mpattern1.*\\\\M' ) AND ( T.subject ~* '\\\\mpattern2\\\\M' OR [4-5 more similar terms] ) AND T.erased = 0 AND T.rejected = 0ORDER BY \n tender_id DESCLIMIT 10000;The planner shows seq scan on subject which is OK with regexp match.Now, the query above takes about 60sec to execute; exactly: 70s for the first run and 60s for the next runs. In my opinion this is too long: It should take 35 s to read the whole table into RAM (assuming 100 MB/s transfers - half the HDD benchmarked speed). With 12 GB of RAM the whole table should be easily buffered on the operating system level. The regexp match on 1 GB of data takes 1-2 s (I benchmarked it with a simple pcre test). The system is not in the production mode, so there is no additional database activity (no reads, no updates, effectively db is read-only)\nTo summarize: any idea how to speed up this query? 
(please, don't suggest regexp indexing - in this application it would be too time consuming to implement them, and besides - as above - I think that Postgres should do better here even with seq-scan).\nServer parameters:RAM: 12 GBCores: 8HDD: SATA; shows 200 MB/s transfer speedOS: Linux 64bit; Postgres 8.4Some performance params from postgresql.conf:max_connections = 16 shared_buffers = 24MB \ntemp_buffers = 128MB max_prepared_transactions = 50 work_mem = 128MB maintenance_work_mem = 1GB effective_cache_size = 8GBDatabase is vacuumed.\nRegards,Greg",
"msg_date": "Wed, 10 Aug 2011 16:26:18 +0200",
"msg_from": "Grzegorz Blinowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "poor pefrormance with regexp searches on large tables"
},
{
"msg_contents": "On 10 Srpen 2011, 16:26, Grzegorz Blinowski wrote:\n> Now, the query above takes about 60sec to execute; exactly: 70s for the\n> first run and 60s for the next runs. In my opinion this is too long: It\n> should take 35 s to read the whole table into RAM (assuming 100 MB/s\n> transfers - half the HDD benchmarked speed). With 12 GB of RAM the whole\n> table should be easily buffered on the operating system level. The regexp\n\nAnd is it really in the page cache? I'm not an expert in this field, but\nI'd guess no. Check if it really gets the data from cache using iostat or\nsomething like that. Use fincore to see what's really in the cache, it's\navailable here:\n\nhttp://code.google.com/p/linux-ftools/\n\n> Some performance params from postgresql.conf:\n> max_connections = 16\n> shared_buffers = 24MB\n\nWhy just 24MBs? Have you tried with more memory here, e.g. 256MB or 512MB?\nI'm not suggesting the whole table should fit here (seq scan uses small\nring cache anyway), but 24MB is just the bare minimum to start the DB.\n\n> Database is vacuumed.\n\nJust vacuumed or compacted? The simple vacuum just marks the dead tuples\nas empty, it does not compact the database. So if you've done a lot of\nchanges and then just run vacuum, it may still may occupy a lot of space\non the disk. How did you get that the table size is 3.5GB? Is that the\nsize of the raw data, have you used pg_relation_size or something else?\n\nTomas\n\n\n\n",
"msg_date": "Wed, 10 Aug 2011 17:08:29 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
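As an aside to Tomas's question about where the 3.5GB figure comes from, a sketch of the usual size check, using the table name from the thread (pg_relation_size counts only the main heap; pg_total_relation_size adds TOAST and indexes, which is usually the relevant number for a 2KB text column):

SELECT pg_size_pretty(pg_relation_size('archive_tender'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('archive_tender')) AS heap_plus_toast_and_indexes;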
{
"msg_contents": "Try to use single regular expression.\n\n2011/8/10, Grzegorz Blinowski <[email protected]>:\n> Dear All,\n>\n> I have some problems with regexp queries performance - common sense tells me\n> that my queries should run faster than they do.\n>\n> The database - table in question has 590 K records, table's size is 3.5GB. I\n> am effectively querying a single attribute \"subject\" which has an average\n> size of 2KB, so we are doing a query on ~1GB of data. The query looks more\n> or less like this:\n>\n> SELECT T.tender_id FROM archive_tender T WHERE\n> (( T.subject !~* '\\\\mpattern1.*\\\\M' ) AND ( T.subject ~* '\\\\mpattern2\\\\M' OR\n> [4-5 more similar terms] ) AND T.erased = 0 AND T.rejected = 0\n> ORDER BY\n> tender_id DESC\n> LIMIT\n> 10000;\n>\n> The planner shows seq scan on subject which is OK with regexp match.\n>\n> Now, the query above takes about 60sec to execute; exactly: 70s for the\n> first run and 60s for the next runs. In my opinion this is too long: It\n> should take 35 s to read the whole table into RAM (assuming 100 MB/s\n> transfers - half the HDD benchmarked speed). With 12 GB of RAM the whole\n> table should be easily buffered on the operating system level. The regexp\n> match on 1 GB of data takes 1-2 s (I benchmarked it with a simple pcre\n> test). The system is not in the production mode, so there is no additional\n> database activity (no reads, no updates, effectively db is read-only)\n>\n> To summarize: any idea how to speed up this query? (please, don't suggest\n> regexp indexing - in this application it would be too time consuming to\n> implement them, and besides - as above - I think that Postgres should do\n> better here even with seq-scan).\n>\n> Server parameters:\n> RAM: 12 GB\n> Cores: 8\n> HDD: SATA; shows 200 MB/s transfer speed\n> OS: Linux 64bit; Postgres 8.4\n>\n>\n> Some performance params from postgresql.conf:\n> max_connections = 16\n> shared_buffers = 24MB\n> temp_buffers = 128MB\n> max_prepared_transactions = 50\n> work_mem = 128MB\n> maintenance_work_mem = 1GB\n> effective_cache_size = 8GB\n>\n> Database is vacuumed.\n>\n>\n> Regards,\n>\n> Greg\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Wed, 10 Aug 2011 17:09:16 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
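A minimal sketch of the single-regular-expression rewrite being suggested, applied to the query quoted above; pattern2, pattern3 and pattern4 are placeholders for the real search terms, which were not posted:

SELECT T.tender_id
  FROM archive_tender T
 WHERE T.subject !~* '\\mpattern1.*\\M'
   AND T.subject ~* '\\m(pattern2|pattern3|pattern4)\\M'  -- one pass over subject instead of one per term
   AND T.erased = 0
   AND T.rejected = 0
 ORDER BY T.tender_id DESC
 LIMIT 10000;

The alternation keeps the word-boundary anchors (\m and \M) that the original per-term expressions used, so the match semantics stay the same.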
{
"msg_contents": "Grzegorz Blinowski <[email protected]> wrote:\n \n> Some performance params from postgresql.conf:\n \nPlease paste the result of running the query on this page:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nFor a start, the general advice is usually to start with\nshared_buffers at the lesser of 25% of system RAM or 8 GB, and\nadjust from there based on benchmarks. So you might want to try 4GB\nfor that one.\n\nJust to confirm, you are using 2 Phase Commit? (People sometimes\nmistake the max_prepared_transactions setting for something related\nto prepared statements.)\n \nI concur with previous advice that using one regular expression\nwhich matches all of the terms is going to be a lot faster than\nmatching each small regular expression separately and then combining\nthem.\n \n-Kevin\n",
"msg_date": "Wed, 10 Aug 2011 10:27:57 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
},
{
"msg_contents": "Thnaks for all the help so far, I increased the shared_mem config parameter\n(Postgress didn't accept higher values than default, had to increase\nsystemwide shared mem). The current config (as suggested by Kevin Grittner)\nis as follows:\n\n version | PostgreSQL 8.4.7 on x86_64-redhat-linux-gnu,\ncompiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50), 64-bit\n autovacuum | off\n client_encoding | LATIN2\n effective_cache_size | 8GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector | on\n maintenance_work_mem | 1GB\n max_connections | 16\n max_prepared_transactions | 50\n max_stack_depth | 8MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 1GB\n statement_timeout | 25min\n temp_buffers | 16384\n TimeZone | Europe/Berlin\n work_mem | 128MB\n\n\nHowever, changing shared_mem didn't help. We also checked system I/O stats\nduring the query - and in fact there is almost no IO (even with suboptimal\nshared_memory). So the problem is not disk transfer/access but rather the\nway Postgres handles regexp queries... As I have wirtten it is difficult to\nrewrite the query syntax (the SQL generation in this app is quite complex),\nbut it should be relatively easy to at least join all OR clauses into one\nregexp, I can try this from the psql CLI. I will post an update if anything\ninteresting happens...\n\nCheers,\n\nGreg\n\n\nOn Wed, Aug 10, 2011 at 5:27 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Grzegorz Blinowski <[email protected]> wrote:\n>\n> > Some performance params from postgresql.conf:\n>\n> Please paste the result of running the query on this page:\n>\n> http://wiki.postgresql.org/wiki/Server_Configuration\n>\n> For a start, the general advice is usually to start with\n> shared_buffers at the lesser of 25% of system RAM or 8 GB, and\n> adjust from there based on benchmarks. So you might want to try 4GB\n> for that one.\n>\n> Just to confirm, you are using 2 Phase Commit? (People sometimes\n> mistake the max_prepared_transactions setting for something related\n> to prepared statements.)\n>\n> I concur with previous advice that using one regular expression\n> which matches all of the terms is going to be a lot faster than\n> matching each small regular expression separately and then combining\n> them.\n>\n> -Kevin\n>\n\nThnaks for all the help so far, I increased the shared_mem config parameter (Postgress didn't accept higher values than default, had to increase systemwide shared mem). The current config (as suggested by Kevin Grittner) is as follows:\n version | PostgreSQL 8.4.7 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50), 64-bit autovacuum | off client_encoding | LATIN2\n effective_cache_size | 8GB lc_collate | en_US.UTF-8 lc_ctype | en_US.UTF-8 listen_addresses | * log_rotation_age | 1d log_rotation_size | 0\n log_truncate_on_rotation | on logging_collector | on maintenance_work_mem | 1GB max_connections | 16 max_prepared_transactions | 50 max_stack_depth | 8MB port | 5432\n server_encoding | UTF8 shared_buffers | 1GB statement_timeout | 25min temp_buffers | 16384 TimeZone | Europe/Berlin work_mem | 128MB\nHowever, changing shared_mem didn't help. We also checked system I/O stats during the query - and in fact there is almost no IO (even with suboptimal shared_memory). So the problem is not disk transfer/access but rather the way Postgres handles regexp queries... 
As I have wirtten it is difficult to rewrite the query syntax (the SQL generation in this app is quite complex), but it should be relatively easy to at least join all OR clauses into one regexp, I can try this from the psql CLI. I will post an update if anything interesting happens...\nCheers,GregOn Wed, Aug 10, 2011 at 5:27 PM, Kevin Grittner <[email protected]> wrote:\nGrzegorz Blinowski <[email protected]> wrote:\n\n> Some performance params from postgresql.conf:\n\nPlease paste the result of running the query on this page:\n\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n\nFor a start, the general advice is usually to start with\nshared_buffers at the lesser of 25% of system RAM or 8 GB, and\nadjust from there based on benchmarks. So you might want to try 4GB\nfor that one.\n\nJust to confirm, you are using 2 Phase Commit? (People sometimes\nmistake the max_prepared_transactions setting for something related\nto prepared statements.)\n\nI concur with previous advice that using one regular expression\nwhich matches all of the terms is going to be a lot faster than\nmatching each small regular expression separately and then combining\nthem.\n\n-Kevin",
"msg_date": "Wed, 10 Aug 2011 19:01:33 +0200",
"msg_from": "Grzegorz Blinowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
{
"msg_contents": "Dne 10.8.2011 19:01, Grzegorz Blinowski napsal(a):\n> However, changing shared_mem didn't help. We also checked system I/O\n> stats during the query - and in fact there is almost no IO (even with\n> suboptimal shared_memory). So the problem is not disk transfer/access\n> but rather the way Postgres handles regexp queries... As I have wirtten\n> it is difficult to rewrite the query syntax (the SQL generation in this\n> app is quite complex), but it should be relatively easy to at least join\n> all OR clauses into one regexp, I can try this from the psql CLI. I will\n> post an update if anything interesting happens...\n\nCan you post EXPLAIN ANALYZE, prefferably using explain.depesz.com?\n\nTomas\n",
"msg_date": "Wed, 10 Aug 2011 19:15:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
{
"msg_contents": "Grzegorz Blinowski <[email protected]> wrote:\n \n> the problem is not disk transfer/access but rather the way\n> Postgres handles regexp queries.\n \nAs a diagnostic step, could you figure out some non-regexp way to\nselect about the same percentage of rows with about the same\ndistribution across the table, and compare times? So far I haven't\nseen any real indication that the time is spent in evaluating the\nregular expressions, versus just loading pages from the OS into\nshared buffers and picking out individual tuples and columns from\nthe table. For all we know, the time is mostly spent decompressing\nthe 2K values. Perhaps you need to save them without compression. \nIf they are big enough after compression to be stored out-of-line by\ndefault, you might want to experiment with having them in-line in\nthe tuple.\n \nhttp://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n \n-Kevin\n",
"msg_date": "Wed, 10 Aug 2011 12:17:44 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
},
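A quick way to check the decompression part of that theory from psql, sketched with the column name from the thread (pg_column_size reports the stored, possibly compressed size, while octet_length has to decompress the value to count its bytes):

SELECT count(*)                          AS n_rows,
       avg(pg_column_size(subject))::int AS avg_stored_bytes,
       avg(octet_length(subject))::int   AS avg_uncompressed_bytes
  FROM archive_tender;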
{
"msg_contents": "Grzegorz Blinowski <[email protected]> wrote:\n \n> autovacuum | off\n \nBTW, that's generally not a good idea -- it leaves you much more\nvulnerable to bloat which could cause performance problems to\nmanifest in any number of ways. You might want to calculate your\nheap bloat on this table.\n \n-Kevin\n",
"msg_date": "Wed, 10 Aug 2011 12:22:49 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
},
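With autovacuum turned off, a rough way to see how many dead rows have piled up in the table (a sketch; the counters come from the statistics collector and are available in 8.3 and later):

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'archive_tender';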
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> So far I haven't seen any real indication that the time is spent\n> in evaluating the regular expressions\n \nJust as a reality check here, I ran some counts against a\nmoderately-sized table (half a million rows). Just counting the\nrows unconditionally was about five times as fast as having to pick\nout even a small column for a compare. Taking a substring of a\nbigger (but normally non-TOASTed) value and doing a compare was only\na little slower. Using a regular expression anchored to the front\nof the string to do the equivalent of the compare to the substring\ntook about twice as long as the substring approach. For a\nnon-anchored regular expression where it would normally need to scan\nin a bit, it took twice as long as the anchored regular expression. \nThese times seem like they might leave some room for improvement,\nbut it doesn't seem too outrageous.\n \nEach test run three times.\n \nselect count(*) from \"Case\";\n count\n--------\n 527769\n(1 row)\n \nTime: 47.696 ms\nTime: 47.858 ms\nTime: 47.687 ms\n \nselect count(*) from \"Case\" where \"filingCtofcNo\" = '0878';\n count\n--------\n 198645\n(1 row)\n \nTime: 219.233 ms\nTime: 225.410 ms\nTime: 226.723 ms\n \nselect count(*) from \"Case\"\nwhere substring(\"caption\" from 1 for 5) = 'State';\n count\n--------\n 178142\n(1 row)\n \nTime: 238.160 ms\nTime: 237.114 ms\nTime: 240.388 ms\n \nselect count(*) from \"Case\" where \"caption\" ~ '^State';\n count\n--------\n 178142\n(1 row)\n \nTime: 532.821 ms\nTime: 535.341 ms\nTime: 529.121 ms\n \nselect count(*) from \"Case\" where \"caption\" ~ 'Wisconsin';\n count\n--------\n 157483\n(1 row)\n \nTime: 1167.433 ms\nTime: 1172.282 ms\nTime: 1170.562 ms\n \n-Kevin\n",
"msg_date": "Wed, 10 Aug 2011 13:10:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
},
{
"msg_contents": "A small followup regarding the suggestion to turn off compression - I used:\n\nALTER TABLE archive_tender ALTER COLUMN subject SET STORAGE EXTERNAL\n\nto turn off compression, however I get an impression that \"nothing happend\".\nWhen exactly this alteration takes effect? Perhaps I should reload the\nentire db from backup to change the storage method?\n\nRegards,\n\ngreg\n\n\nOn Wed, Aug 10, 2011 at 7:17 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Grzegorz Blinowski <[email protected]> wrote:\n>\n> > the problem is not disk transfer/access but rather the way\n> > Postgres handles regexp queries.\n>\n> As a diagnostic step, could you figure out some non-regexp way to\n> select about the same percentage of rows with about the same\n> distribution across the table, and compare times? So far I haven't\n> seen any real indication that the time is spent in evaluating the\n> regular expressions, versus just loading pages from the OS into\n> shared buffers and picking out individual tuples and columns from\n> the table. For all we know, the time is mostly spent decompressing\n> the 2K values. Perhaps you need to save them without compression.\n> If they are big enough after compression to be stored out-of-line by\n> default, you might want to experiment with having them in-line in\n> the tuple.\n>\n> http://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n>\n> -Kevin\n>\n\nA small followup regarding the suggestion to turn off compression - I used:ALTER TABLE archive_tender ALTER COLUMN subject SET STORAGE EXTERNAL to turn off compression, however I get an impression that \"nothing happend\". When exactly this alteration takes effect? Perhaps I should reload the entire db from backup to change the storage method?\nRegards,gregOn Wed, Aug 10, 2011 at 7:17 PM, Kevin Grittner <[email protected]> wrote:\nGrzegorz Blinowski <[email protected]> wrote:\n\n> the problem is not disk transfer/access but rather the way\n> Postgres handles regexp queries.\n\nAs a diagnostic step, could you figure out some non-regexp way to\nselect about the same percentage of rows with about the same\ndistribution across the table, and compare times? So far I haven't\nseen any real indication that the time is spent in evaluating the\nregular expressions, versus just loading pages from the OS into\nshared buffers and picking out individual tuples and columns from\nthe table. For all we know, the time is mostly spent decompressing\nthe 2K values. Perhaps you need to save them without compression.\nIf they are big enough after compression to be stored out-of-line by\ndefault, you might want to experiment with having them in-line in\nthe tuple.\n\nhttp://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n\n-Kevin",
"msg_date": "Thu, 11 Aug 2011 10:39:34 +0200",
"msg_from": "Grzegorz Blinowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
{
"msg_contents": "Grzegorz Blinowski <[email protected]> wrote:\n \n> A small followup regarding the suggestion to turn off compression\n> - I used:\n> \n> ALTER TABLE archive_tender ALTER COLUMN subject SET STORAGE\n> EXTERNAL\n> \n> to turn off compression, however I get an impression that \"nothing\n> happend\". When exactly this alteration takes effect? Perhaps I\n> should reload the entire db from backup to change the storage\n> method?\n \nYeah, the storage option just affects future storage of values; it\ndoes not perform a conversion automatically. There are various ways\nyou could cause the rows to be re-written so that they use the new\nTOAST policy for the column. One of the simplest would be to do a\ndata-only dump of the table, truncate the table, and restore the\ndata. If that table is a big enough portion of the database, a\npg_dump of the whole database might be about as simple.\n \n-Kevin\n",
"msg_date": "Thu, 11 Aug 2011 08:56:14 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
},
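A sketch of the dump/truncate/reload Kevin describes, using a server-side COPY (the file path is made up; server-side COPY needs superuser rights and enough free disk space, and the table must have no concurrent writers or incoming foreign keys while this runs):

BEGIN;
LOCK TABLE archive_tender IN ACCESS EXCLUSIVE MODE;
COPY archive_tender TO '/tmp/archive_tender.data';
TRUNCATE archive_tender;
COPY archive_tender FROM '/tmp/archive_tender.data';  -- rows are re-stored under the new EXTERNAL policy
COMMIT;

From psql, \copy does the same thing client-side without superuser rights, at the cost of shipping the data through the client connection.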
{
"msg_contents": "To summarize this thread:\n\nWe have tried most of the suggestions and found two of them effective:\n\n1) collapsing OR expressions in the WHERE clause into one '(...)|(...)'\nregexp resulted in about 60% better search time\n2) changing long attribute storage to EXTERNAL gave 30% better search time\n(but only on the first search - i.e. before data is cached)\n\nSurprisingly, changing shared_mem from 24MB to 1 GB gave no apparent effect.\n\nThanks once again for all your help!!!\n\nRegards,\n\nGreg\n\n\nOn Thu, Aug 11, 2011 at 3:56 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Grzegorz Blinowski <[email protected]> wrote:\n>\n> > A small followup regarding the suggestion to turn off compression\n> > - I used:\n> >\n> > ALTER TABLE archive_tender ALTER COLUMN subject SET STORAGE\n> > EXTERNAL\n> >\n> > to turn off compression, however I get an impression that \"nothing\n> > happend\". When exactly this alteration takes effect? Perhaps I\n> > should reload the entire db from backup to change the storage\n> > method?\n>\n> Yeah, the storage option just affects future storage of values; it\n> does not perform a conversion automatically. There are various ways\n> you could cause the rows to be re-written so that they use the new\n> TOAST policy for the column. One of the simplest would be to do a\n> data-only dump of the table, truncate the table, and restore the\n> data. If that table is a big enough portion of the database, a\n> pg_dump of the whole database might be about as simple.\n>\n> -Kevin\n>\n\nTo summarize this thread:We have tried most of the suggestions and found two of them effective:1) collapsing OR expressions in the WHERE clause into one '(...)|(...)' regexp resulted in about 60% better search time\n2) changing long attribute storage to EXTERNAL gave 30% better search time (but only on the first search - i.e. before data is cached)Surprisingly, changing shared_mem from 24MB to 1 GB gave no apparent effect.\nThanks once again for all your help!!!Regards,GregOn Thu, Aug 11, 2011 at 3:56 PM, Kevin Grittner <[email protected]> wrote:\nGrzegorz Blinowski <[email protected]> wrote:\n\n> A small followup regarding the suggestion to turn off compression\n> - I used:\n>\n> ALTER TABLE archive_tender ALTER COLUMN subject SET STORAGE\n> EXTERNAL\n>\n> to turn off compression, however I get an impression that \"nothing\n> happend\". When exactly this alteration takes effect? Perhaps I\n> should reload the entire db from backup to change the storage\n> method?\n\nYeah, the storage option just affects future storage of values; it\ndoes not perform a conversion automatically. There are various ways\nyou could cause the rows to be re-written so that they use the new\nTOAST policy for the column. One of the simplest would be to do a\ndata-only dump of the table, truncate the table, and restore the\ndata. If that table is a big enough portion of the database, a\npg_dump of the whole database might be about as simple.\n\n-Kevin",
"msg_date": "Fri, 12 Aug 2011 17:07:48 +0200",
"msg_from": "Grzegorz Blinowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: poor pefrormance with regexp searches on large tables"
},
{
"msg_contents": "Grzegorz Blinowski <[email protected]> wrote:\n \n> 2) changing long attribute storage to EXTERNAL gave 30% better\n> search time (but only on the first search - i.e. before data is\n> cached)\n \nThat suggests that all of the following are true:\n \n(1) The long value was previously being compressed and stored\nin-line.\n \n(2) It's now being stored uncompressed, out-of-line in the TOAST\ntable.\n \n(3) Following the TOAST pointers on cached tuples isn't\nsignificantly more or less expensive than decompressing the data.\n \n(4) The smaller base tuple caused fewer page reads from disk, even\nwith the out-of-line storage for the large value.\n \nThe first three aren't surprising; that last one is. Unless there\nis significant bloat of the table, I'm having trouble seeing why\nthat first run is cheaper this way. Make sure your vacuum policy is\naggressive enough; otherwise you will probably see a slow but steady\ndeterioration in performance..\n \n-Kevin\n",
"msg_date": "Fri, 12 Aug 2011 10:25:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor pefrormance with regexp searches on large\n\t tables"
}
] |
[
{
"msg_contents": "Greetings,\n\nI've been hitting a \"out of memory error\" during autovacuum of\nrelatively large tables (compared to the amount of RAM available). I'm\ntrying to trace the cause of the issue; the answer is somewhere below\nand I don't know how to interpret the data. I can solve the issue\nright now by using more memory but that won't help me understand how\nto interpret the data.\nI've pasted data that I thought relevant but feel free to direct me\n(I've read http://wiki.postgresql.org/wiki/Guide_to_reporting_problems).\nI've searched through older answers to the same question back in 2007\nbut that did not really get me anywhere.\n\nThe error message is:\n[10236]: [1-1] user=,db=,remote= ERROR: out of memory\n[10236]: [2-1] user=,db=,remote= DETAIL: Failed on request of size 395973594.\n[10236]: [3-1] user=,db=,remote= CONTEXT: automatic vacuum of table\n\"***.public.serialized_series\"\n\nI can recreate the error by running vacuum verbose serialized_series\nstraight from psql on the box itself.\n\nThanks!\n\nNow on to the data themselves.\n\n--- table sizes ---\n\nSELECT\n nspname,\nrelname,\n pg_size_pretty(pg_relation_size(C.oid)) AS \"size\"\nFROM pg_class C\nLEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\nWHERE nspname NOT IN ('pg_catalog', 'information_schema') and relname\nin ('serialized_series', 'pg_toast_16573');\nnspname | relname | size\n----------+-------------------+---------\npg_toast | pg_toast_16573 | 2200 MB\npublic | serialized_series | 1772 MB\n\nSELECT\n nspname,\n relname,\n relkind as \"type\",\n pg_size_pretty(pg_table_size(C.oid)) AS size,\n pg_size_pretty(pg_indexes_size(C.oid)) AS idxsize,\n pg_size_pretty(pg_total_relation_size(C.oid)) as \"total\"\nFROM pg_class C\nLEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\nWHERE nspname NOT IN ('pg_catalog', 'information_schema') AND\n nspname !~ '^pg_toast' AND\n relkind IN ('r','i') AND\n relname IN ('serialized_series');\n nspname | relname | type | size | idxsize | total\n---------+-------------------+------+---------+---------+---------\n public | serialized_series | r | 4008 MB | 844 MB | 4853 MB\n\n--- table structure ---\nserialized_series has a blob field (that ends up in toast_16573) and a\nbunch of integer ids and timestamps.\n\n--- postgresql.conf (subset) ----\nmax_connections = 200\nshared_buffers = 1971421kB\ntemp_buffers = 8MB\nwork_mem = 9857kB\nmaintenance_work_mem = 752MB\nmax_files_per_process = 1000\neffective_cache_size = 3942842kB\nautovacuum = on\nlog_autovacuum_min_duration = -1\nautovacuum_max_workers = 3\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 50\nautovacuum_analyze_threshold = 50\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_freeze_max_age = 200000000\nautovacuum_vacuum_cost_delay = 20\nautovacuum_vacuum_cost_limit = -1\n\n--- os ---\nLinux ******* 2.6.32-314-ec2 #27-Ubuntu SMP Wed Mar 2 22:53:38 UTC\n2011 x86_64 GNU/Linux\n\n total used free shared buffers cached\nMem: 7700 5521 2179 0 20 5049\n-/+ buffers/cache: 451 7249\nSwap: 0 0 0\n\n--- versions ---\nselect version();\n version\n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.4 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real\n(Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n\nii postgresql 9.0.4-1~lucid1\n object-relational SQL database (supported ve\nii postgresql-9.0 9.0.4-1~lucid1\n object-relational SQL database, version 9.0\nii postgresql-client 9.0.4-1~lucid1\n front-end 
programs for PostgreSQL (supported\nii postgresql-client-9.0 9.0.4-1~lucid1\n front-end programs for PostgreSQL 9.0\nii postgresql-client-common 119~lucid1\n manager for multiple PostgreSQL client versi\nii postgresql-common 119~lucid1\n PostgreSQL database-cluster manager\nii postgresql-contrib 9.0.4-1~lucid1\n additional facilities for PostgreSQL (suppor\nii postgresql-contrib-9.0 9.0.4-1~lucid1\n additional facilities for PostgreSQL\n\n--- kernel params ---\nkernel.shmmax = 8074940416\nkernel.shmall = 1971421\nkernel.shmmni = 4096\n\n--- ulimit for postgres user ---\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 20\nfile size (blocks, -f) unlimited\npending signals (-i) 16382\nmax locked memory (kbytes, -l) 64\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 1024\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) unlimited\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n\n\n--- ipcs ---\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x0052e6a9 1441792 postgres 600 2087182336 17\n\n------ Semaphore Arrays --------\nkey semid owner perms nsems\n0x0052e6a9 18743296 postgres 600 17\n0x0052e6aa 18776065 postgres 600 17\n0x0052e6ab 18808834 postgres 600 17\n0x0052e6ac 18841603 postgres 600 17\n0x0052e6ad 18874372 postgres 600 17\n0x0052e6ae 18907141 postgres 600 17\n0x0052e6af 18939910 postgres 600 17\n0x0052e6b0 18972679 postgres 600 17\n0x0052e6b1 19005448 postgres 600 17\n0x0052e6b2 19038217 postgres 600 17\n0x0052e6b3 19070986 postgres 600 17\n0x0052e6b4 19103755 postgres 600 17\n0x0052e6b5 19136524 postgres 600 17\n\n\n--- postgresql log ---\nTopMemoryContext: 89936 total in 10 blocks; 8576 free (8 chunks); 81360 used\n TopTransactionContext: 24576 total in 2 blocks; 22448 free (26\nchunks); 2128 used\n TOAST to main relid map: 24576 total in 2 blocks; 15984 free (5\nchunks); 8592 used\n AV worker: 24576 total in 2 blocks; 19832 free (8 chunks); 4744 used\n Autovacuum Portal: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used\n Vacuum: 8192 total in 1 blocks; 8080 free (0 chunks); 112 used\n Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used\n smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 used\n TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0\nchunks); 32 used\n Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used\n PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used\n CacheMemoryContext: 817840 total in 20 blocks; 167624 free (2\nchunks); 650216 used\n serialized_series_dim_context_key_dim_metric_key_resolution_idx:\n2048 total in 1 blocks; 128 free (0 chunks); 1920 used\n serialized_series_rolledup_resolution_end_date_start_date_idx: 2048\ntotal in 1 blocks; 440 free (0 chunks); 1608 used\n ix_serialized_series_end_date: 2048 total in 1 blocks; 752 free (0\nchunks); 1296 used\n ix_serialized_series_start_date: 2048 total in 1 blocks; 752 free\n(0 chunks); 1296 used\n ix_serialized_series_dim_metric_key: 2048 total in 1 blocks; 752\nfree (0 chunks); 1296 used\n ix_serialized_series_source_id: 2048 total in 1 blocks; 752 free (0\nchunks); 1296 used\n ix_serialized_series_dim_context_key: 2048 total in 1 blocks; 752\nfree (0 chunks); 1296 used\n 
ix_serialized_series_rolledup: 2048 total in 1 blocks; 752 free (0\nchunks); 1296 used\n serialized_series_pkey: 2048 total in 1 blocks; 752 free (0\nchunks); 1296 used\n pg_index_indrelid_index: 2048 total in 1 blocks; 704 free (0\nchunks); 1344 used\n pg_attrdef_adrelid_adnum_index: 2048 total in 1 blocks; 608 free (0\nchunks); 1440 used\n pg_db_role_setting_databaseid_rol_index: 2048 total in 1 blocks;\n656 free (0 chunks); 1392 used\n pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1496 free (4\nchunks); 1576 used\n pg_foreign_data_wrapper_name_index: 3072 total in 2 blocks; 1744\nfree (3 chunks); 1328 used\n pg_enum_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used\n pg_class_relname_nsp_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_foreign_server_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_statistic_relid_att_inh_index: 3072 total in 2 blocks; 1496 free\n(4 chunks); 1576 used\n pg_cast_source_target_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_language_name_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_amop_fam_strat_index: 3072 total in 2 blocks; 1384 free (2\nchunks); 1688 used\n pg_index_indexrelid_index: 3072 total in 2 blocks; 1696 free (2\nchunks); 1376 used\n pg_ts_template_tmplname_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_ts_config_map_index: 3072 total in 2 blocks; 1496 free (4\nchunks); 1576 used\n pg_opclass_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used\n pg_foreign_data_wrapper_oid_index: 3072 total in 2 blocks; 1744\nfree (3 chunks); 1328 used\n pg_ts_dict_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used\n pg_conversion_default_index: 3072 total in 2 blocks; 1432 free (3\nchunks); 1640 used\n pg_operator_oprname_l_r_n_index: 3072 total in 2 blocks; 1432 free\n(3 chunks); 1640 used\n pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1600 free\n(2 chunks); 1472 used\n pg_enum_typid_label_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_ts_config_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_user_mapping_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_opfamily_am_name_nsp_index: 3072 total in 2 blocks; 1496 free (4\nchunks); 1576 used\n pg_type_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used\n pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_constraint_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_rewrite_rel_rulename_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_ts_parser_prsname_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_ts_config_cfgname_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_ts_parser_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_operator_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_namespace_nspname_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_ts_template_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_amop_opr_fam_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_default_acl_role_nsp_obj_index: 3072 total in 2 blocks; 1496\nfree (4 chunks); 1576 used\n pg_ts_dict_dictname_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_type_typname_nsp_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n 
pg_opfamily_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_class_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used\n pg_proc_proname_args_nsp_index: 3072 total in 2 blocks; 1496 free\n(4 chunks); 1576 used\n pg_attribute_relid_attnum_index: 3072 total in 2 blocks; 1600 free\n(2 chunks); 1472 used\n pg_proc_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used\n pg_language_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_namespace_oid_index: 3072 total in 2 blocks; 1696 free (2\nchunks); 1376 used\n pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1384 free (2\nchunks); 1688 used\n pg_foreign_server_name_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1648 free\n(2 chunks); 1424 used\n pg_conversion_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_user_mapping_user_server_index: 3072 total in 2 blocks; 1648\nfree (2 chunks); 1424 used\n pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1648 free (2\nchunks); 1424 used\n pg_authid_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used\n pg_auth_members_member_role_index: 3072 total in 2 blocks; 1648\nfree (2 chunks); 1424 used\n pg_tablespace_oid_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n pg_database_datname_index: 3072 total in 2 blocks; 1696 free (2\nchunks); 1376 used\n pg_auth_members_role_member_index: 3072 total in 2 blocks; 1648\nfree (2 chunks); 1424 used\n pg_database_oid_index: 3072 total in 2 blocks; 1696 free (2\nchunks); 1376 used\n pg_authid_rolname_index: 3072 total in 2 blocks; 1744 free (3\nchunks); 1328 used\n MdSmgr: 8192 total in 1 blocks; 8000 free (0 chunks); 192 used\n LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used\n Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used\n Postmaster: 57344 total in 3 blocks; 49184 free (323 chunks); 8160 used\n ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used\n[10236]: [1-1] user=,db=,remote= ERROR: out of memory\n[10236]: [2-1] user=,db=,remote= DETAIL: Failed on request of size 395973594.\n[10236]: [3-1] user=,db=,remote= CONTEXT: automatic vacuum of table\n\"******.public.serialized_series\"\n\nThanks,\n\n--\nAlexis Lê-Quôc\n",
"msg_date": "Wed, 10 Aug 2011 11:28:01 -0400",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autovacuum running out of memory"
},
{
"msg_contents": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> I've been hitting a \"out of memory error\" during autovacuum of\n> relatively large tables (compared to the amount of RAM available).\n\n> The error message is:\n> [10236]: [1-1] user=,db=,remote= ERROR: out of memory\n> [10236]: [2-1] user=,db=,remote= DETAIL: Failed on request of size 395973594.\n> [10236]: [3-1] user=,db=,remote= CONTEXT: automatic vacuum of table\n> \"***.public.serialized_series\"\n\n> --- postgresql.conf (subset) ----\n> shared_buffers = 1971421kB\n> work_mem = 9857kB\n> maintenance_work_mem = 752MB\n\nSince the memory map shows that not very much memory has been allocated\nby VACUUM yet, I suspect it's failing while trying to create the work\narray for remembering dead tuple TIDs. It will assume that it can use\nup to maintenance_work_mem for that. (The fact that it didn't ask for\nthe whole 752MB probably means this is a relatively small table in\nwhich there couldn't possibly be that many TIDs.) So the short answer\nis \"reduce maintenance_work_mem to something under 300MB\".\n\nHowever, I find it a bit odd that you're getting this failure in what\nappears to be a 64-bit build. That means you're not running out of\naddress space, so you must actually be out of RAM+swap. Does the\nmachine have only 4GB or so of RAM? If so, that value for\nshared_buffers is unrealistically large; it's not leaving enough RAM for\nother purposes such as this.\n\nWhere did you get the above-quoted parameter settings, anyway? They\nseem a bit weird, as in written to many more decimal places than anyone\ncould really expect to mean anything.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Aug 2011 13:17:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum running out of memory "
},
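Since the error can already be reproduced from psql, that diagnosis is easy to test by hand; maintenance_work_mem can be lowered for a single session (for the autovacuum workers themselves it has to be changed in postgresql.conf and the server reloaded):

SET maintenance_work_mem = '256MB';
VACUUM VERBOSE serialized_series;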
{
"msg_contents": "On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane <[email protected]> wrote:\n> Alexis Le-Quoc <[email protected]> writes:\n>> I've been hitting a \"out of memory error\" during autovacuum of\n>> relatively large tables (compared to the amount of RAM available).\n>\n>> The error message is:\n>> [10236]: [1-1] user=,db=,remote= ERROR: out of memory\n>> [10236]: [2-1] user=,db=,remote= DETAIL: Failed on request of size 395973594.\n>> [10236]: [3-1] user=,db=,remote= CONTEXT: automatic vacuum of table\n>> \"***.public.serialized_series\"\n>\n>> --- postgresql.conf (subset) ----\n>> shared_buffers = 1971421kB\n>> work_mem = 9857kB\n>> maintenance_work_mem = 752MB\n>\n> Since the memory map shows that not very much memory has been allocated\n> by VACUUM yet, I suspect it's failing while trying to create the work\n> array for remembering dead tuple TIDs. It will assume that it can use\n> up to maintenance_work_mem for that. (The fact that it didn't ask for\n> the whole 752MB probably means this is a relatively small table in\n> which there couldn't possibly be that many TIDs.) So the short answer\n> is \"reduce maintenance_work_mem to something under 300MB\".\n>\n> However, I find it a bit odd that you're getting this failure in what\n> appears to be a 64-bit build. That means you're not running out of\n> address space, so you must actually be out of RAM+swap. Does the\n> machine have only 4GB or so of RAM? If so, that value for\n> shared_buffers is unrealistically large; it's not leaving enough RAM for\n> other purposes such as this.\n\nThe box has little under 8GB (it's on EC2, a \"m1.large\" instance)\n\n total used free shared buffers cached\nMem: 7700 6662 1038 0 25 6078\n-/+ buffers/cache: 558 7142\nSwap: 0 0 0\n\nThere is no swap.\n\n> Where did you get the above-quoted parameter settings, anyway? They\n> seem a bit weird, as in written to many more decimal places than anyone\n> could really expect to mean anything.\n\nI have them computed by our configuration management system. Here's\nthe logic behind it (edited from ruby):\n\n# Compute shared memory for procps\npage_size = getconf PAGE_SIZE\nphys_pages = getconf _PHYS_PAGES\nshmall = phys_pages\nshmmax = shmall * page_size\n\nshared_buffers = kb_memory_total / 4\nwork_mem = (kb_memory_total / max_connections / 4)\nmaintenance_work_mem = (kb_memory_total * 100 / (1024 * 1024))\n\nIn turn they come from High-Performance Postgresql 9.0\n(http://www.postgresql.org/about/news.1249)\n\nThanks,\n\n-- \nAlexis Lê-Quôc\n",
"msg_date": "Wed, 10 Aug 2011 14:47:30 -0400",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum running out of memory"
},
{
"msg_contents": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane <[email protected]> wrote:\n>> However, I find it a bit odd that you're getting this failure in what\n>> appears to be a 64-bit build. That means you're not running out of\n>> address space, so you must actually be out of RAM+swap. Does the\n>> machine have only 4GB or so of RAM? If so, that value for\n>> shared_buffers is unrealistically large; it's not leaving enough RAM for\n>> other purposes such as this.\n\n> The box has little under 8GB (it's on EC2, a \"m1.large\" instance)\n> There is no swap.\n\nHmph. Is there other stuff being run on the same instance? Are there a\nwhole lot of active PG processes? Maybe Amazon isn't really giving you\na whole 8GB, or there are weird address space restrictions in the EC2\nenvironment. Anyway I think I'd suggest reducing shared_buffers to 1GB\nor so.\n\n>> Where did you get the above-quoted parameter settings, anyway?\n\n> In turn they come from High-Performance Postgresql 9.0\n> (http://www.postgresql.org/about/news.1249)\n\nI'm sure even Greg wouldn't claim his methods are good to more than one\nor two significant digits.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Aug 2011 14:54:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum running out of memory "
},
{
"msg_contents": "On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane <[email protected]> wrote:\n> Alexis Le-Quoc <[email protected]> writes:\n>> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane <[email protected]> wrote:\n>>> However, I find it a bit odd that you're getting this failure in what\n>>> appears to be a 64-bit build. That means you're not running out of\n>>> address space, so you must actually be out of RAM+swap. Does the\n>>> machine have only 4GB or so of RAM? If so, that value for\n>>> shared_buffers is unrealistically large; it's not leaving enough RAM for\n>>> other purposes such as this.\n>\n>> The box has little under 8GB (it's on EC2, a \"m1.large\" instance)\n>> There is no swap.\n>\n> Hmph. Is there other stuff being run on the same instance? Are there a\n> whole lot of active PG processes? Maybe Amazon isn't really giving you\n> a whole 8GB, or there are weird address space restrictions in the EC2\n> environment. Anyway I think I'd suggest reducing shared_buffers to 1GB\n> or so.\n>\n\nDone and that fixed it. Thanks.\n\nNow this is counter-intuitive (so much for intuition).\nAny pointers to educate myself on why more shared buffers is\ndetrimental? I thought they would only compete with the OS page cache.\nCould it be caused by the \"no-overcommit\" policy that I told the\nkernel to enforce.\n\nAs far as other things running on the same instance, nothing stands\nout. It is a \"dedicated\" db instance.\n\n>>> Where did you get the above-quoted parameter settings, anyway?\n>\n>> In turn they come from High-Performance Postgresql 9.0\n>> (http://www.postgresql.org/about/news.1249)\n>\n> I'm sure even Greg wouldn't claim his methods are good to more than one\n> or two significant digits.\n\nAgreed, they are meaningless. I just did not make the effort to\nautomatically round the values in my ruby code.\n\n-- \nAlexis Lê-Quôc\n",
"msg_date": "Wed, 10 Aug 2011 15:08:49 -0400",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum running out of memory"
},
{
"msg_contents": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane <[email protected]> wrote:\n>> Hmph. Is there other stuff being run on the same instance? Are there a\n>> whole lot of active PG processes? Maybe Amazon isn't really giving you\n>> a whole 8GB, or there are weird address space restrictions in the EC2\n>> environment. Anyway I think I'd suggest reducing shared_buffers to 1GB\n>> or so.\n\n> Done and that fixed it. Thanks.\n\n> Now this is counter-intuitive (so much for intuition).\n> Any pointers to educate myself on why more shared buffers is\n> detrimental?\n\nMy guess is that it's an address space restriction at bottom. Postgres\nstarts (on typical systems) with program text at the beginning of its\naddress space, static data after that, a large hole in the middle, and\nstack up at the top. Then the shared memory block gets allocated\nsomewhere in the hole, at a spot that's more or less at the whim of the\nOS. If a Postgres process subsequently asks for more private memory via\nmalloc, it can only possibly get as much as the distance from the\noriginal static area to the shared memory block's position in the\nprocess's address space.\n\nSo I'm thinking that the EC2 environment is giving you some lowball\naddress for the shared memory block that's constraining the process's\nprivate memory space to just a few hundred meg, even though in a 64-bit\nbuild there's room for umpteen gigabytes. Possibly it's worth filing a\nbug with Amazon about how they should pick a more sensible address ...\nbut first you should confirm that theory by looking at the process\nmemory map (should be visible in /proc someplace).\n\nIt may also be that the problem is not process-address-space related but\nreflects some inefficiency in EC2's overall use of RAM, possibly\nexacerbated by PG's request for a large shared memory block. But you'd\nneed to find an EC2 expert to investigate that idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Aug 2011 15:27:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum running out of memory "
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm testing Streamin replication with one hot standby node and I'm \nexperiencing high delay of hot standby node.\n\nWhen I reach aprox. 50 transactions per second where every transaction \nincludes only simple \"UPDATE status SET number = number + 1 WHERE name = \n'iterations'\", the standby server falls behind master and the delay is several \nminutes. In comparison, Slony1 have no issues with the same test and its delay \nis only several seconds.\n\nAm I missing some configuration that affects speed of streaming replicaiton?\n\nBoth servers are connected using 1Gb network, CPU usage of both servers is \nlow, disk latency is not issue.\n\nmaster server postgresql.conf\n-----\nwal_level = 'hot_standby'\nvacuum_defer_cleanup_age = 10000\ncheckpoint_segments = 8\ncheckpoint_timeout = 30s\narchive_mode = off\nmax_wal_senders = 3\nwal_sender_delay = 10ms\nwal_keep_segments = 128\n-----\n\nstandby server postgresql.conf\n-----\nhot_standby = on\nmax_standby_streaming_delay = 30s\n-----\n\nstandby server recovery.conf\n-----\nstandby_mode = 'on'\nprimary_conninfo = 'host=master port=5432 user=replication2 password=password'\ntrigger_file = '/var/lib/postgresql/9.0/main/replica_trigger'\n-----\n\n\nThank you for any advice\nAntonin Faltynek\n",
"msg_date": "Thu, 11 Aug 2011 16:46:38 +0200",
"msg_from": "Antonin Faltynek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Streaming replication performance"
},
{
"msg_contents": "On Thu, Aug 11, 2011 at 9:46 AM, Antonin Faltynek <[email protected]> wrote:\n> Hi all,\n>\n> I'm testing Streamin replication with one hot standby node and I'm\n> experiencing high delay of hot standby node.\n>\n> When I reach aprox. 50 transactions per second where every transaction\n> includes only simple \"UPDATE status SET number = number + 1 WHERE name =\n> 'iterations'\", the standby server falls behind master and the delay is several\n> minutes. In comparison, Slony1 have no issues with the same test and its delay\n> is only several seconds.\n>\n> Am I missing some configuration that affects speed of streaming replicaiton?\n\nhm -- how are you determining the standby is behind? are you running\nany other queries on the standby?\n\nmerlin\n",
"msg_date": "Fri, 12 Aug 2011 11:47:24 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming replication performance"
},
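For reference, a common way to measure the lag directly on 9.0 instead of watching the test table; these are standard 9.0 functions, with the first statement run on the master and the second on the standby:

-- on the master:
SELECT pg_current_xlog_location();

-- on the standby:
SELECT pg_last_xlog_receive_location(),  -- how far the WAL stream has arrived
       pg_last_xlog_replay_location();   -- how far it has actually been applied

If the receive location keeps up with the master but the replay location falls behind, the bottleneck is replay on the standby rather than the network or the walsender.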
{
"msg_contents": "Dne Pá 12. srpna 2011 18:47:24 Merlin Moncure napsal(a):\n> On Thu, Aug 11, 2011 at 9:46 AM, Antonin Faltynek <[email protected]> wrote:\n> > Hi all,\n> > \n> > I'm testing Streamin replication with one hot standby node and I'm\n> > experiencing high delay of hot standby node.\n> > \n> > When I reach aprox. 50 transactions per second where every transaction\n> > includes only simple \"UPDATE status SET number = number + 1 WHERE name =\n> > 'iterations'\", the standby server falls behind master and the delay is\n> > several minutes. In comparison, Slony1 have no issues with the same test\n> > and its delay is only several seconds.\n> > \n> > Am I missing some configuration that affects speed of streaming\n> > replicaiton?\n> \n> hm -- how are you determining the standby is behind? are you running\n> any other queries on the standby?\n> \n> merlin\n\nSimply, during test I'm generating series of numbers that I'm periodicaly \nchecking on both master and stand by server.\n\nAlso I'm measuring several server metrics, like CPU usage, disk usage, network \nbandwidth.\n\nWhen testing streaming replication network usage does not get over 1Mbps and \ncatching master takes several tens of minutes, while during test of WAL file \nshipping, network usage get to 3-5Mpbs and stand by server does not fall \nbehind master more than several seconds.\n\nThere is no other traffic on stand by server, only periodical check of test \ntable (SELECT * FROM status) every 2s. Extension of this check interval does \nnot affect behavior of streaming replication.\n\nTonda\n",
"msg_date": "Tue, 16 Aug 2011 22:03:16 +0200",
"msg_from": "Pin007 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming replication performance"
}
] |
[
{
"msg_contents": "I have PostgreSQL 8.4.8 on Ubuntu Linux x64. Server is a Core i7 950 with 6GB of RAM. 2GB of RAM us used by Java, some small amount by the kernel / services and the rest is available to PostgreSQL. Hard drive is a single 7200 RPM SATA 1TB Caviar Black HDD. No other applications / processes are running when I perform my tests.\n\nI have an application that performs about 80% reads and 20% writes for a specific billrun. It takes about 60 minutes to complete, and I can have it perform precisely the same queries repeatedly. I have consistently showed that when shared_buffers = 24MB (the default), and wal_buffers = 64kB, the system completes the process in 50 minutes. When I bump shared_buffers to 1500MB, the system slows down and takes 60 minutes to complete the same process. Changing that back to 24MB, but then changing wal_buffers to 16MB has the same impact - performance drops from 50 minutes to about 61 minutes. Changing those two parameters back to the defaults returns the time to 50 minutes.\n\nfsync = off for these tests - not sure if it is relevant. All other settings are at their defaults.\n\nPlease explain why the system is slower with the recommended values for these two settings? The DB is about 74GB, the largest table has 180 million rows.",
"msg_date": "Thu, 11 Aug 2011 16:35:34 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "On Thu, Aug 11, 2011 at 04:35:34PM -0700, Waldo Nell wrote:\n> I have PostgreSQL 8.4.8 on Ubuntu Linux x64. Server is a Core i7 950 with 6GB of RAM. 2GB of RAM us used by Java, some small amount by the kernel / services and the rest is available to PostgreSQL. Hard drive is a single 7200 RPM SATA 1TB Caviar Black HDD. No other applications / processes are running when I perform my tests.\n> \n> I have an application that performs about 80% reads and 20% writes for a specific billrun. It takes about 60 minutes to complete, and I can have it perform precisely the same queries repeatedly. I have consistently showed that when shared_buffers = 24MB (the default), and wal_buffers = 64kB, the system completes the process in 50 minutes. When I bump shared_buffers to 1500MB, the system slows down and takes 60 minutes to complete the same process. Changing that back to 24MB, but then changing wal_buffers to 16MB has the same impact - performance drops from 50 minutes to about 61 minutes. Changing those two parameters back to the defaults returns the time to 50 minutes.\n> \n> fsync = off for these tests - not sure if it is relevant. All other settings are at their defaults.\n> \n> Please explain why the system is slower with the recommended values for these two settings? The DB is about 74GB, the largest table has 180 million rows.\n\nOne guess is that you are using the defaults for other costing parameters and they\ndo not accurately reflect your system. This means that it will be a crap shoot as\nto whether a plan is faster or slower and what will affect the timing.\n\nRegards,\nKen\n",
"msg_date": "Thu, 11 Aug 2011 19:18:50 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "\nOn 2011-08-11, at 17:18 , [email protected] wrote:\n\n> One guess is that you are using the defaults for other costing parameters and they\n> do not accurately reflect your system. This means that it will be a crap shoot as\n> to whether a plan is faster or slower and what will affect the timing.\n\nOk, but I thought the way to best optimise PostgreSQL is to start with the parameters having the biggest impact and work from there. To adjust multiple parameters would not give a clear indication as to the benefit of each, as they may cancel each other out.\n\nTo test your theory, what other parameters should I be looking at? Here are some more with their current values:\n\nrandom_page_cost = 4.0\neffective_cache_size = 128MB\n\nRemember this runs on SATA so random seeks are not as fast as say SSD.",
"msg_date": "Thu, 11 Aug 2011 17:27:14 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "On 08/11/2011 07:35 PM, Waldo Nell wrote:\n> Please explain why the system is slower with the recommended values for these two settings?\n\nIf the other parameters are at their defaults, the server is probably \nexecuting a checkpoint every few seconds running your test. I'd wager \nyour log is filled with checkpoint warnings, about them executing too \nfrequently.\n\nEach time a checkpoint happens, all of shared_buffers is dumped out. So \nif you increase shared_buffers, but don't space the checkpoints out \nmore, it just ends up writing the same data over and over again. Using \na smaller shared_buffers lets the OS deal with that problem instead, so \nit actually ends up being more efficient.\n\nBasically, you can't increase shared_buffers usefully without also \nincreasing checkpoint_segments. All three of shared_buffers, \nwal_buffers, and checkpoint_segments have to go up before you'll see the \nexpected benefit from raising any of them; you can't change a parameter \nat a time and expect an improvement. Try this:\n\nshared_buffers=512MB\nwal_buffers=16MB\ncheckpoint_segments=64\n\nAnd see how that does. If that finally beats your 50 minute record, you \ncan see if further increase to shared_buffers and checkpoint_segments \ncontinue to help from there. Effective upper limits on your server are \nprobably around 2GB for shared_buffers and around 256 for \ncheckpoint_segments; they could be lower if your application uses a lot \nof transitory data (gets read/written once, so the database cache is no \nbetter than the OS one).\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 11 Aug 2011 21:17:02 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
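As a quick way to check the checkpoint diagnosis above, a minimal sketch using standard GUCs available on 8.4; no values here come from the original poster's configuration beyond those already mentioned in the thread.

-- Show the settings that interact here, all in one place
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'wal_buffers', 'checkpoint_segments',
               'checkpoint_timeout', 'checkpoint_completion_target');

-- Setting log_checkpoints = on in postgresql.conf (plus a reload) makes the
-- server log each checkpoint's cause and duration, which shows directly
-- whether checkpoints are firing every few seconds during the billrun.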
{
"msg_contents": "\nOn 2011-08-11, at 18:17 , Greg Smith wrote:\n\n> shared_buffers=512MB\n> wal_buffers=16MB\n> checkpoint_segments=64\n\nThanks for the advice. I tried these values... And it is even worse - went up to 63 minutes (from 60 minutes). Like I said this load is read mostly. My 80 / 20% might be a bit inaccurate, if so it could be more like 90% read and 10% write. I do not see any checkpoint warnings in my logs.\n\nI guess that means the OS cache is better for this particular use case than the postgresql cache?\n\n",
"msg_date": "Fri, 12 Aug 2011 09:28:08 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "On Thu, Aug 11, 2011 at 7:27 PM, Waldo Nell <[email protected]> wrote:\n>\n> On 2011-08-11, at 17:18 , [email protected] wrote:\n>\n>> One guess is that you are using the defaults for other costing parameters and they\n>> do not accurately reflect your system. This means that it will be a crap shoot as\n>> to whether a plan is faster or slower and what will affect the timing.\n>\n> Ok, but I thought the way to best optimise PostgreSQL is to start with the parameters having the biggest impact and work from there. To adjust multiple parameters would not give a clear indication as to the benefit of each, as they may cancel each other out.\n\nA couple points:\n*) shared buffers is a highly nuanced setting that is very workload\ndependent. it mainly affects write heavy loads, and the pattern of\nwriting is very important in terms of the benefits you may or may not\nsee. it also changes checkpoint behavior -- this will typically\nmanifest as a negative change with raising buffers but this can be\nmitigated. if your i/o becomes very bursty after raising this setting\nit's a red flag that more tuning is required.\n\n*) fsync = off: throw the book out on traditional tuning advice. with\nthis setting (dangerously) set, the o/s is essentially responsible for\ni/o patterns so you should focus your tuning efforts there. the\nbenefits of raising shared buffers don't play as much in this case.\n\n> To test your theory, what other parameters should I be looking at? Here are some more with their current values:\n>\n> random_page_cost = 4.0\n> effective_cache_size = 128MB\n\n*) these settings affect query plans. changing them could have no\naffect or dramatic effect depending on the specific queries you have\nand if they or chosen badly due to overly default conservative\nsettings. the postgresql planner has gotten pretty accurate over the\nyears in the sense that you will want to tune these to be as close to\nreality as possible.\n\nIn my opinion before looking at postgresql.conf you need to make sure\nyour queries and their plans are good. fire up pgfouine and see where\nthose 60 minutes are gettings spent. maybe you have a problem query\nthat demands optimization.\n\nmerlin\n",
"msg_date": "Fri, 12 Aug 2011 11:32:53 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
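A minimal sketch of trying the planner-setting suggestion above without touching postgresql.conf: these GUCs can be overridden per session, so a suspect query can be re-planned under different assumptions. The values below are purely illustrative, not recommendations for this particular machine.

SET effective_cache_size = '3GB';  -- illustrative value for a 6GB box
SET random_page_cost = 3.0;        -- illustrative; lower favours index scans
-- then: EXPLAIN ANALYZE <one of the slow billrun queries>;
RESET effective_cache_size;
RESET random_page_cost;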
{
"msg_contents": "\nOn 2011-08-12, at 09:32 , Merlin Moncure wrote:\n\n> In my opinion before looking at postgresql.conf you need to make sure\n> your queries and their plans are good. fire up pgfouine and see where\n> those 60 minutes are gettings spent. maybe you have a problem query\n> that demands optimization.\n\nThanks for your advice, I am sure to look into what you said. I might just add some background information. The process used to take 266 minutes to complete - which I got down to 49 minutes. I spent a LOT of time optimising queries, implementing multithreading to utilise the extra cores and place more load on the DB that way etc. So that being done as best I can for now, I am focussing on the DB itself. I am a firm believer the best place to optimise is first the queries / code THEN the hardware / fine tuning the parameters.\n\nThe fsync = off was because the production system runs on a uber expensive SAN system with multipathing over Fibre Channel, it is on UPS and backup generators in a secure datacenter, and we have daily backups we can fall back to.",
"msg_date": "Fri, 12 Aug 2011 09:54:46 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "Waldo Nell <[email protected]> wrote:\n \n> The fsync = off was because the production system runs on a uber\n> expensive SAN system with multipathing over Fibre Channel, it is\n> on UPS and backup generators in a secure datacenter, and we have\n> daily backups we can fall back to.\n \nTurning fsync off in production may be OK as long as those daily\nbackups aren't in the same building as the uber expensive SAN, and\nit's really OK to fall back on a daily backup if the database server\ncrashes or locks up. By the way, I never trust a backup until I\nhave successfully restored from it; you should probably do that at\nleast on some periodic basis if you can't do it every time.\n \nThe other thing I would point out is that if you are tuning with\ndifferent table sizes, RAM sizes, or I/O performance characteristics\nfrom production, the database tuning in one environment may not have\nmuch to do with what works best in the other environment.\n \nAs for why the recommended settings are having paradoxical effects\nin this environment -- this advice is generally based on systems\nwith fsync on and a RAID controller with battery-backed cache\nconfigured for write-back. I don't know how well the advice\ngeneralizes to a single spindle with fsync off.\n \n-Kevin\n",
"msg_date": "Fri, 12 Aug 2011 12:10:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL\n\t 8.4"
},
{
"msg_contents": "On 08/12/2011 12:28 PM, Waldo Nell wrote:\n> I guess that means the OS cache is better for this particular use case \n> than the postgresql cache?\n\nThere you go. It's not magic; the database cache has some properties \nthat work very well for some workloads. And for others, you might as \nwell let the OS deal with it. The fact that you have the option of \nadjusting the proportions here is a controversial design point, but it \ndoes let you tune to your workload in this sort of case.\n\n> The fsync = off was because the production system runs on a uber expensive SAN system with multipathing over Fibre Channel, it is> on UPS and backup generators in a secure datacenter, and we have daily backups we can fall back to.\n\n\nThe safer alternative is to turn synchronous_commit off and increase \nwal_writer_delay. That may not be quite as fast as turning fsync off, \nbut the risk level is a lot lower too. The first time someone \naccidentally unplugs a part of your server, you'll realize that the UPS \nand generators don't really provide very much protection against the \nthings that actually happen in a data center. Having backups is great, \nbut needing to restore from them is no fun.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 12 Aug 2011 13:15:01 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
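A minimal sketch of the alternative described above; synchronous_commit can even be flipped per session or per transaction, which makes it easy to compare directly against fsync = off. The wal_writer_delay value shown is only an example.

-- Keep fsync = on, relax commit durability instead
SET synchronous_commit = off;

-- postgresql.conf equivalents:
--   synchronous_commit = off
--   wal_writer_delay = 1000ms    -- example; the default is 200ms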
{
"msg_contents": "\nOn 2011-08-12, at 10:10 , Kevin Grittner wrote:\n\n> Turning fsync off in production may be OK as long as those daily\n> backups aren't in the same building as the uber expensive SAN, and\n> it's really OK to fall back on a daily backup if the database server\n> crashes or locks up. By the way, I never trust a backup until I\n> have successfully restored from it; you should probably do that at\n> least on some periodic basis if you can't do it every time.\n\nYes we have daily tape backups that are taken off site. And since we refresh QA from prod at least 4 times a month, we know the backups are good on a frequent basis. Very valid points.\n\n> \n> The other thing I would point out is that if you are tuning with\n> different table sizes, RAM sizes, or I/O performance characteristics\n> from production, the database tuning in one environment may not have\n> much to do with what works best in the other environment.\n\nMy DB is an exact duplicate I took from production, however my testing is on different hardware and since I cannot afford a SAN for testing purposes, I am testing on a 7200rpm SATA drive so yeah I guess that is true... I will need to performance test on production environment.\n\n> \n> As for why the recommended settings are having paradoxical effects\n> in this environment -- this advice is generally based on systems\n> with fsync on and a RAID controller with battery-backed cache\n> configured for write-back. I don't know how well the advice\n> generalizes to a single spindle with fsync off.\n\nThanks, I will be sure to carry on testing on production.\n\n",
"msg_date": "Fri, 12 Aug 2011 10:19:09 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "Waldo Nell <[email protected]> writes:\n> I have PostgreSQL 8.4.8 on Ubuntu Linux x64. Server is a Core i7 950\n> with 6GB of RAM. 2GB of RAM us used by Java, some small amount by the\n> kernel / services and the rest is available to PostgreSQL.\n\n[ and the DB is 74GB, and things get slower when raising shared_buffers\n from 24MB to 1500MB ]\n\nOne other point here is that with the DB so much larger than available\nRAM, you are almost certainly doing lots of I/O (unless your test case\nhas lots of locality of reference). With small shared_buffers, the\nspace for kernel disk cache amounts to 3 or so GB, and that's your\nprimary buffer against duplicate I/Os. When you crank shared_buffers\nup to half that, you now have two buffer pools of about the same size\nindependently trying to cache the most-used parts of the DB. This is\nlikely to not work too well and result in much more I/O. You save some\nshared-buffers-to-kernel-buffers transfers with more shared_buffers, but\nif the amount of disk I/O goes up a lot in consequence, you'll come out\nway behind.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Aug 2011 14:41:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4 "
},
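One way to actually see the double-buffering described above is the contrib module pg_buffercache; a minimal sketch, assuming the module is installed (it was not necessarily installed on the original poster's server).

-- Which relations currently occupy shared_buffers, and how much of it
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;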
{
"msg_contents": "* Waldo Nell ([email protected]) wrote:\n> The fsync = off was because the production system runs on a uber expensive SAN system with multipathing over Fibre Channel, it is on UPS and backup generators in a secure datacenter, and we have daily backups we can fall back to.\n\nSo, two points: #1- the uber-expensive SAN should make twiddling fsync\nhave much less of an effect on performance than in a non-SAN/non-BBWC\nenvironment, so you might validate that you really need it off. #2- the\nSAN, FC, UPS, etc will be of no help if the OS or PG crash. Seems\npretty harsh to resort back to a daily backup in the event the OS reboots\ndue to some wacky NMI, or the ASR going haywire..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 12 Aug 2011 17:07:42 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
}
] |
[
{
"msg_contents": "I have PostgreSQL 8.4.8 on Ubuntu Linux x64. Server is a Core i7 950 with 6GB of RAM. 2GB of RAM us used by Java, some small amount by the kernel / services and the rest is available to PostgreSQL. Hard drive is a single 7200 RPM SATA 1TB Caviar Black HDD. No other applications / processes are running when I perform my tests.\n\nI have an application that performs about 80% reads and 20% writes for a specific billrun. It takes about 60 minutes to complete, and I can have it perform precisely the same queries repeatedly. I have consistently showed that when shared_buffers = 24MB (the default), and wal_buffers = 64kB, the system completes the process in 50 minutes. When I bump shared_buffers to 1500MB, the system slows down and takes 60 minutes to complete the same process. Changing that back to 24MB, but then changing wal_buffers to 16MB has the same impact - performance drops from 50 minutes to about 61 minutes. Changing those two parameters back to the defaults returns the time to 50 minutes.\n\nfsync = off for these tests - not sure if it is relevant. All other settings are at their defaults.\n\nPlease explain why the system is slower with the recommended values for these two settings? The DB is about 74GB, the largest table has 180 million rows.",
"msg_date": "Thu, 11 Aug 2011 17:03:01 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommended optimisations slows down PostgreSQL 8.4"
},
{
"msg_contents": "On 12/08/2011 8:03 AM, Waldo Nell wrote:\n> I have PostgreSQL 8.4.8 on Ubuntu Linux x64. Server is a Core i7 950 with 6GB of RAM. 2GB of RAM us used by Java, some small amount by the kernel / services and the rest is available to PostgreSQL. Hard drive is a single 7200 RPM SATA 1TB Caviar Black HDD. No other applications / processes are running when I perform my tests.\n>\n> I have an application that performs about 80% reads and 20% writes for a specific billrun. It takes about 60 minutes to complete, and I can have it perform precisely the same queries repeatedly. I have consistently showed that when shared_buffers = 24MB (the default), and wal_buffers = 64kB, the system completes the process in 50 minutes. When I bump shared_buffers to 1500MB, the system slows down and takes 60 minutes to complete the same process. Changing that back to 24MB, but then changing wal_buffers to 16MB has the same impact - performance drops from 50 minutes to about 61 minutes. Changing those two parameters back to the defaults returns the time to 50 minutes.\n>\n> fsync = off for these tests - not sure if it is relevant.\n\nIt certainly is if you're doing any writes as part of your testing.\n\nfsync=off is pretty much the same as saying \"eat my data whenever you \nfeel like it, I really don't care about it\". It's a good option for \nbulk-loading data into new empty clusters and for systems that rely on \nreplication plus a tolerance for data loss. For anything else it's a \nreally, really bad idea.\n\n> Please explain why the system is slower with the recommended values for these two settings? The DB is about 74GB, the largest table has 180 million rows.\n\nWithout any performance measurements, EXPLAIN ANALYZE results, etc it's \nvery hard to know. You'd need to collect vmstat and iostat output among \nother things to be able to get any idea.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 12 Aug 2011 10:17:14 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended optimisations slows down PostgreSQL 8.4"
}
] |
[
{
"msg_contents": "\nhi pgpool Expert\n\nmy architecture as follows: \nMaster/Slave with Streaming Replication and pgpool-II \nversion of pgpool-II is pgpool-II.3.0.4 \nversion of PostgreSQL is 9.0.2 \n\nI am using pgpool works as master/slave sub mode stream\n\nand pgpool key configuration is:\n=====================================================================================================\n\nnum_init_children=100\nmax_pool=4\nchild_life_time=60\nconnection_life_time=0\nchild_max_connections=0\nclient_idle_limit=0\nconnection_cache=true\n\n=====================================================================================================\n\nand java jdbc connection test code as fllows:\n\n=====================================================================================================\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.ResultSet;\nimport java.sql.SQLException;\nimport java.sql.Statement;\n\npublic class TestPgpool {\n public static void main(String[] args) { \n\n for(;;){ \n Thread th = new Thread(new TestThread()); \n\n th.start(); \n try { \n\n Thread.sleep(10); \n\n } catch (InterruptedException e) { \n System.out.println(\"1--------------\");\n e.printStackTrace(); \n System.out.println(\"1--------------\");\n } \n } \n } \n\n static class TestThread implements Runnable{ \n\n public void run() { \n\n Connection con = null; \n\n Statement stmt = null; \n try { \n\n Class.forName(\"org.postgresql.Driver\"); \n\n con = DriverManager.getConnection( \n\n \n\"jdbc:postgresql://192.168.1.116:9999/spring250_20100630_705\", \n\n \"postgres\",\"postgres\"); \n\n stmt = con.createStatement(); \n\n String sql = \"SELECT * FROM bb_member limit 1\"; \n\n ResultSet rs = stmt.executeQuery(sql); \n System.out.print(\"OK(\"); \n\n while(rs.next()){ \n\n System.out.print(rs.getInt(1) + \"=\" \n\n + rs.getString(2) + \" \"); \n } \n\n System.out.println(\")\"); \n stmt.close(); \n con.close(); \n } catch (SQLException e) { \n System.out.println(\"2--------------\");\n e.printStackTrace();\n System.out.println(\"2--------------\");\n\n } catch (ClassNotFoundException e) { \n e.printStackTrace(); \n } \n } \n } \n}\n=====================================================================================================\n\nquestion:\n\n\tI do some db falt tests \n\t1)the test code run always connect pgpool,\n\t2)test master or slave go down \n\n\tbut when mster or slave go down ,java code throws exception :\n\t\n\torg.postgresql.util.PSQLException: An I/O error occurred while sending to\nthe backend\n\torg.postgresql.util.PSQLException: The connection attempt failed.\n\t\n\tthe error happened once for little time,then goes normal.\n\nwhat should I do to solve this problem?and show the reason about the matter.\n\nthanks for any help\n\njeno\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgpool-master-or-slave-goes-down-java-access-error-tp4692837p4692837.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 12 Aug 2011 03:58:10 -0700 (PDT)",
"msg_from": "jenopob <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgpool master or slave goes down java access error"
},
{
"msg_contents": "\nOn 2011-08-12, at 03:58 , jenopob wrote:\n\n> \n> \tI do some db falt tests \n> \t1)the test code run always connect pgpool,\n> \t2)test master or slave go down \n> \n> \tbut when mster or slave go down ,java code throws exception :\n> \t\n> \torg.postgresql.util.PSQLException: An I/O error occurred while sending to\n> the backend\n> \torg.postgresql.util.PSQLException: The connection attempt failed.\n> \t\n> \tthe error happened once for little time,then goes normal.\n> \n> what should I do to solve this problem?and show the reason about the matter.\n\nI do not know much about pgpool but it sounds to me like the TCP connection is broken, and threads die and new threads make new connections they start working again. If I understand you correctly, you need to write code that detects this IOException and then retries the connection attempt and re-execute the query.",
"msg_date": "Fri, 12 Aug 2011 09:44:18 -0700",
"msg_from": "Waldo Nell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgpool master or slave goes down java access error"
}
] |
[
{
"msg_contents": "Hello,\n\nI need to compare quiery execution : I have 2 tables partitioned by Datex (\ndaily):\n\nsummary_daily (\n counter | bigint\n datasource_id | integer\n application_id | integer \n action | character(1) \n srcreporter_id | integer \n destreporter_id | integer \n bytes | bigint\n srcusergroup_id | integer \n datex | timestamp with time zone \n root_cause_id | integer \n rule_id | integer \n srcgeo_id | integer \n destgeo_id | integer \n mlapp_id | bigint \n)\n\napp (\n counter | bigint \n bytes | bigint \n action | character(1) \n datex | timestamp with time zone \n datasource_id | integer\n application_id | integer \n mlapp_id | bigint \n root_cause_id | integer\n)\n\n\nThe second table has been created from the first by aggregation.\n\n table Summary has 9 mln rec per partition,\n table App has 7 mln rec per partition\n\nexecution plan looks the same except the actual time is a huge difference.\n\nwork_mem=10mb, \n\ndays/partitions query from Summary query from App \n\n1 2.5 sec 1 sec\n3 5.5 sec 1.5 sec\n7 60 sec 8 sec.\n\nwhen I set session work_mem=60mb query for 7 days takes 8.5 sec vs 60 sec.\n\nhow can I see where/when it is using disk or memory?\n\nexplain analyze SELECT summary_app.action, sum(summary_app.counter),\nsummary_app.mlapp_id, \n summary_app.application_id, sum(summary_app.bytes),\nsummary_app.root_cause_id\n FROM summary_app\n WHERE summary_app.datasource_id = 10 and \n summary_app.datex >= '2011-08-03 00:00:00+00'::timestamp with time zone \n AND summary_app.datex < '2011-08-06 00:00:00+00'::timestamp with time zone \n group by mlapp_id, application_id,action, root_cause_id \n\n\n\nHashAggregate (cost=8223.97..8226.97 rows=200 width=37) (actual\ntime=4505.607..4506.806 rows=3126 loops=1)\n -> Append (cost=0.00..8213.42 rows=703 width=37) (actual\ntime=1071.043..4046.780 rows=283968 loops=1)\n -> Seq Scan on summary_daily_data summary_app (cost=0.00..23.83\nrows=1 width=37) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((datex >= '2011-08-03 00:00:00+00'::timestamp with\ntime zone) AND (datex < '2011-08-06 00:00:00+00'::timestamp with time zone)\nAND (datasource_id = 10))\n -> Bitmap Heap Scan on summ_daily_15191 summary_app \n(cost=1854.89..2764.60 rows=234 width=37) (actual time=1071.041..1343.235\nrows=94656 loops=1)\n Recheck Cond: ((datasource_id = 10) AND (datex >= '2011-08-03\n00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n00:00:00+00'::timestamp with time zone))\n -> BitmapAnd (cost=1854.89..1854.89 rows=234 width=0)\n(actual time=1054.310..1054.310 rows=0 loops=1)\n -> Bitmap Index Scan on ind_fw_15191 \n(cost=0.00..868.69 rows=46855 width=0) (actual time=17.896..17.896\nrows=94656 loops=1)\n Index Cond: (datasource_id = 10)\n -> Bitmap Index Scan on ind_datex_15191 \n(cost=0.00..985.83 rows=46855 width=0) (actual time=1020.834..1020.834\nrows=9370944 loops=1)\n Index Cond: ((datex >= '2011-08-03\n00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n00:00:00+00'::timestamp with time zone))\n\n\nthe same query from the smaller table:\n\n\nHashAggregate (cost=252859.36..253209.94 rows=23372 width=34) (actual\ntime=371.164..372.153 rows=3126 loops=1)\n -> Append (cost=0.00..249353.62 rows=233716 width=34) (actual\ntime=11.028..115.915 rows=225072 loops=1)\n -> Seq Scan on summary_app (cost=0.00..28.03 rows=1 width=37)\n(actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((datex >= '2011-08-03 00:00:00+00'::timestamp with\ntime zone) AND (datex < '2011-08-06 00:00:00+00'::timestamp with time zone)\nAND 
(datasource_id = 10))\n -> Bitmap Heap Scan on summ_app_15191 summary_app \n(cost=2299.40..82014.85 rows=72293 width=34) (actual time=11.027..31.341\nrows=75024 loops=1)\n Recheck Cond: ((datasource_id = 10) AND (datex >= '2011-08-03\n00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n00:00:00+00'::timestamp with time zone))\n -> Bitmap Index Scan on summ_app_fw_datex_15191 \n(cost=0.00..2281.32 rows=72293 width=0) (actual time=10.910..10.910\nrows=75024 loops=1)\n Index Cond: ((datasource_id = 10) AND (datex >=\n'2011-08-03 00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n00:00:00+00'::timestamp with time zone))\n\n\nWhy the difference is so large? How I can tune this query?\n\nthank you.\n\nHelen \n\n \n\n\n \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-see-memory-usage-using-explain-analyze-tp4694681p4694681.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 12 Aug 2011 14:29:51 -0700 (PDT)",
"msg_from": "hyelluas <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to see memory usage using explain analyze ?"
},
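On the "how can I see where/when it is using disk or memory" question, a minimal sketch; the BUFFERS option requires PostgreSQL 9.0 or later (the poster's version is not stated in the thread), and the query is the one quoted above.

SET work_mem = '60MB';   -- the session-level value being compared in the thread

EXPLAIN (ANALYZE, BUFFERS)
SELECT action, sum(counter), mlapp_id, application_id, sum(bytes), root_cause_id
FROM summary_app
WHERE datasource_id = 10
  AND datex >= '2011-08-03 00:00:00+00'
  AND datex <  '2011-08-06 00:00:00+00'
GROUP BY mlapp_id, application_id, action, root_cause_id;

-- Sort nodes report their sort method and memory use in the ANALYZE output;
-- setting log_temp_files = 0 in postgresql.conf additionally logs every
-- temporary file created, i.e. every time work_mem spills to disk.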
{
"msg_contents": "\n\n> -----Original Message-----\n> From: hyelluas [mailto:[email protected]]\n> Sent: Friday, August 12, 2011 5:30 PM\n> To: [email protected]\n> Subject: How to see memory usage using explain analyze ?\n> \n> Hello,\n> \n> I need to compare quiery execution : I have 2 tables partitioned by\n> Datex (\n> daily):\n> \n> summary_daily (\n> counter | bigint\n> datasource_id | integer\n> application_id | integer\n> action | character(1)\n> srcreporter_id | integer\n> destreporter_id | integer\n> bytes | bigint\n> srcusergroup_id | integer\n> datex | timestamp with time zone\n> root_cause_id | integer\n> rule_id | integer\n> srcgeo_id | integer\n> destgeo_id | integer\n> mlapp_id | bigint\n> )\n> \n> app (\n> counter | bigint\n> bytes | bigint\n> action | character(1)\n> datex | timestamp with time zone\n> datasource_id | integer\n> application_id | integer\n> mlapp_id | bigint\n> root_cause_id | integer\n> )\n> \n> \n> The second table has been created from the first by aggregation.\n> \n> table Summary has 9 mln rec per partition,\n> table App has 7 mln rec per partition\n> \n> execution plan looks the same except the actual time is a huge\n> difference.\n> \n> work_mem=10mb,\n> \n> days/partitions query from Summary query from App\n> \n> 1 2.5 sec 1 sec\n> 3 5.5 sec 1.5 sec\n> 7 60 sec 8 sec.\n> \n> when I set session work_mem=60mb query for 7 days takes 8.5 sec vs 60\n> sec.\n> \n> how can I see where/when it is using disk or memory?\n> \n> explain analyze SELECT summary_app.action, sum(summary_app.counter),\n> summary_app.mlapp_id,\n> summary_app.application_id, sum(summary_app.bytes),\n> summary_app.root_cause_id\n> FROM summary_app\n> WHERE summary_app.datasource_id = 10 and\n> summary_app.datex >= '2011-08-03 00:00:00+00'::timestamp with time\n> zone\n> AND summary_app.datex < '2011-08-06 00:00:00+00'::timestamp with time\n> zone\n> group by mlapp_id, application_id,action, root_cause_id\n> \n> \n> \n> HashAggregate (cost=8223.97..8226.97 rows=200 width=37) (actual\n> time=4505.607..4506.806 rows=3126 loops=1)\n> -> Append (cost=0.00..8213.42 rows=703 width=37) (actual\n> time=1071.043..4046.780 rows=283968 loops=1)\n> -> Seq Scan on summary_daily_data summary_app\n> (cost=0.00..23.83\n> rows=1 width=37) (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: ((datex >= '2011-08-03 00:00:00+00'::timestamp\n> with\n> time zone) AND (datex < '2011-08-06 00:00:00+00'::timestamp with time\n> zone)\n> AND (datasource_id = 10))\n> -> Bitmap Heap Scan on summ_daily_15191 summary_app\n> (cost=1854.89..2764.60 rows=234 width=37) (actual\n> time=1071.041..1343.235\n> rows=94656 loops=1)\n> Recheck Cond: ((datasource_id = 10) AND (datex >= '2011-\n> 08-03\n> 00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n> 00:00:00+00'::timestamp with time zone))\n> -> BitmapAnd (cost=1854.89..1854.89 rows=234 width=0)\n> (actual time=1054.310..1054.310 rows=0 loops=1)\n> -> Bitmap Index Scan on ind_fw_15191\n> (cost=0.00..868.69 rows=46855 width=0) (actual time=17.896..17.896\n> rows=94656 loops=1)\n> Index Cond: (datasource_id = 10)\n> -> Bitmap Index Scan on ind_datex_15191\n> (cost=0.00..985.83 rows=46855 width=0) (actual time=1020.834..1020.834\n> rows=9370944 loops=1)\n> Index Cond: ((datex >= '2011-08-03\n> 00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n> 00:00:00+00'::timestamp with time zone))\n> \n> \n> the same query from the smaller table:\n> \n> \n> HashAggregate (cost=252859.36..253209.94 rows=23372 width=34) (actual\n> time=371.164..372.153 
rows=3126 loops=1)\n> -> Append (cost=0.00..249353.62 rows=233716 width=34) (actual\n> time=11.028..115.915 rows=225072 loops=1)\n> -> Seq Scan on summary_app (cost=0.00..28.03 rows=1\nwidth=37)\n> (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: ((datex >= '2011-08-03 00:00:00+00'::timestamp\n> with\n> time zone) AND (datex < '2011-08-06 00:00:00+00'::timestamp with time\n> zone)\n> AND (datasource_id = 10))\n> -> Bitmap Heap Scan on summ_app_15191 summary_app\n> (cost=2299.40..82014.85 rows=72293 width=34) (actual\n> time=11.027..31.341\n> rows=75024 loops=1)\n> Recheck Cond: ((datasource_id = 10) AND (datex >= '2011-\n> 08-03\n> 00:00:00+00'::timestamp with time zone) AND (datex < '2011-08-06\n> 00:00:00+00'::timestamp with time zone))\n> -> Bitmap Index Scan on summ_app_fw_datex_15191\n> (cost=0.00..2281.32 rows=72293 width=0) (actual time=10.910..10.910\n> rows=75024 loops=1)\n> Index Cond: ((datasource_id = 10) AND (datex >=\n> '2011-08-03 00:00:00+00'::timestamp with time zone) AND (datex <\n'2011-\n> 08-06\n> 00:00:00+00'::timestamp with time zone))\n> \n> \n> Why the difference is so large? How I can tune this query?\n> \n> thank you.\n> \n> Helen\n> \n> \n> \n> \n> \n> \n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/How-to-see-memory-usage-using-\n> explain-analyze-tp4694681p4694681.html\n> Sent from the PostgreSQL - performance mailing list archive at\n> Nabble.com.\n\nHelen,\n\nI'm probably a bit late answering your question.\nBut, just in case...\n\nIt looks like one table has \"combined\" index summ_app_fw_datex_15191 on\nboth: datasource_id and datex, which works better than 2 separate\nindexes ind_datex_15191(datex) and ind_fw_15191(datasource_id), that you\nhave on the other table.\nBesides, this:\n\n-> Bitmap Index Scan on ind_datex_15191\n(cost=0.00..985.83 rows=46855 width=0) (actual time=1020.834..1020.834\nrows=9370944 loops=1)\n\nShows that statistics on ind_datex_15191 are completely \"out of wack\"\n(expected rows=46855, actual rows=9370944).\n\nHTH,\nIgor Neyman\n\n",
"msg_date": "Mon, 15 Aug 2011 14:19:45 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to see memory usage using explain analyze ?"
},
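For illustration, what the combined index suggested above would look like on the slower table. The index name is hypothetical, chosen to mirror the summ_app_fw_datex_15191 index that the faster table already has.

CREATE INDEX ind_fw_datex_15191
    ON summ_daily_15191 (datasource_id, datex);

ANALYZE summ_daily_15191;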
{
"msg_contents": "Igor,\n\nthank you , my tests showed better performance against the larger summary\ntables when I splited the index for datasource_id & datex , I use to have a\ncomposed index.\n\nRegarding that index statistics - should I analyze the tables? I thought\nauto vacuum takes care of it.\n\nhelen \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-see-memory-usage-using-explain-analyze-tp4694962p4701919.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 15 Aug 2011 11:32:34 -0700 (PDT)",
"msg_from": "hyelluas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to see memory usage using explain analyze ?"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: hyelluas [mailto:[email protected]]\n> Sent: Monday, August 15, 2011 2:33 PM\n> To: [email protected]\n> Subject: Re: How to see memory usage using explain analyze ?\n> \n> Igor,\n> \n> thank you , my tests showed better performance against the larger\n> summary\n> tables when I splited the index for datasource_id & datex , I use to\n> have a\n> composed index.\n> \n> Regarding that index statistics - should I analyze the tables? I\n> thought\n> auto vacuum takes care of it.\n> \n> helen\n> \n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/How-to-see-memory-usage-using-\n> explain-analyze-tp4694962p4701919.html\n> Sent from the PostgreSQL - performance mailing list archive at\n> Nabble.com.\n\n\nBut, having different sets of indexes, you can't compare execution\nplans.\nIn regards to statistics, you could try to ANALYZE table manually, may\nbe increasing \"default_statistics_target\".\n>From the docs:\n\n\"default_statistics_target (integer)\n\n Sets the default statistics target for table columns that have not\nhad a column-specific target set via ALTER TABLE SET STATISTICS. Larger\nvalues increase the time needed to do ANALYZE, but might improve the\nquality of the planner's estimates. The default is 10. For more\ninformation on the use of statistics by the PostgreSQL query planner,\nrefer to Section 14.2.\"\n\nHTH,\nIgor\n",
"msg_date": "Wed, 17 Aug 2011 11:27:09 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to see memory usage using explain analyze ?"
},
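A minimal sketch of the per-column alternative mentioned in the quoted documentation, so that only the badly-estimated column gets a larger statistics target instead of raising default_statistics_target globally. The table and column names are the ones from this thread; 200 is just an example value.

ALTER TABLE summ_daily_15191 ALTER COLUMN datex SET STATISTICS 200;
ANALYZE summ_daily_15191;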
{
"msg_contents": "Igor,\n\nThank you for the hint, I read about the planner, added \"vacuum analyze \" to\nmy procedures.\n\nThere is no join in my query but GROUP BY that is taking all the time and I\ndon't know how to tune it.\nIt gets executed by the procedure, the execution time requirement is < 4\nsec, \nbut it takes 8-11 sec against 3 partitions , 9 mln rec each, it goes to 22\nsec for 5 partitions.\n \n\nI've been testing PostgreSQL performance for the last 2 months, comparing it\nwhith MySQL, \nPostgreSQL performance with 5+ mln records on the table with 14 columns is\nworse.\nIs 14 columns is a big table for Postgres or 5mln rec is a big table?\n\nThe whole picture is that there are 2 databases : OLTP & \"OLAP\" that use to\nbe on different machines and on different databases.\nThe new project requires to put it on one database & machine.\n\nI preferred Postgres ( poorly designed oltp would not suffer even more on\nmysql) and now I'm trying to tune OLAP db.\n\nthank you.\n\nHelen\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-see-memory-usage-using-explain-analyze-tp4694962p4709415.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 17 Aug 2011 11:51:59 -0700 (PDT)",
"msg_from": "hyelluas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to see memory usage using explain analyze ?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've run a lot of pgbench tests recently (trying to compare various fs,\nblock sizes etc.), and I've noticed several really strange results.\n\nEeach benchmark consists of three simple steps:\n\n1) set-up the database\n2) read-only run (10 clients, 5 minutes)\n3) read-write run (10 clients, 5 minutes)\n\nwith a short read-only warm-up (1 client, 1 minute) before each run.\n\nI've run nearly 200 of these, and in about 10 cases I got something that\nlooks like this:\n\nhttp://www.fuzzy.cz/tmp/pgbench/tps.png\nhttp://www.fuzzy.cz/tmp/pgbench/latency.png\n\ni.e. it runs just fine for about 3:40 and then something goes wrong. The\nbench should take 5:00 minutes, but it somehow locks, does nothing for\nabout 2 minutes and then all the clients end at the same time. So instead\nof 5 minutes the run actually takes about 6:40.\n\nThe question is what went wrong - AFAIK there's nothing else running on\nthe machine that could cause this. I'm looking for possible culprits -\nI'll try to repeat this run and see if it happens again.\n\nThe pgbench log is available here (notice the 10 lines at the end, those\nare the 10 blocked clients) along with the postgres.log\n\nhttp://www.fuzzy.cz/tmp/pgbench/pgbench.log.gz\nhttp://www.fuzzy.cz/tmp/pgbench/pg.log\n\nIgnore the \"immediate shutdown request\" warning (once the benchmark is\nover, I don't need it anymore. Besides that there's just a bunch of\n\"pgstat wait timeout\" warnings (which makes sense, because the pgbench run\ndoes a lot of I/O).\n\nI'd understand a slowdown, but why does it block?\n\nI'm using PostgreSQL 9.0.4, the machine has 2GB of RAM and 1GB of shared\nbuffers. I admit the machine might be configured a bit differently (e.g.\nsmaller shared buffers) but I've seen just about 10 such strange results\nout of 200 runs, so I doubt this is the cause.\n\nI was thinking about something like autovacuum, but I'd expect that to\nhappen much more frequently (same config, same workload, etc.). And it\nhappens with just some file systems.\n\nFor example for ext3/writeback, the STDDEV(latency) looks like this\n(x-axis represents PostgreSQL block size, y-axis fs block size):\n\n http://www.fuzzy.cz/tmp/pgbench/ext3-writeback.png\n\nwhile for ext4/journal:\n\n http://www.fuzzy.cz/tmp/pgbench/ext4-journal.png\n\nthanks\nTomas\n\n",
"msg_date": "Sat, 13 Aug 2011 01:37:19 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange pgbench results (as if blocked at the end)"
},
{
"msg_contents": "On 13/08/2011 7:37 AM, Tomas Vondra wrote:\n\n> I'd understand a slowdown, but why does it block?\n\nMy first guess is that you're having checkpoint issues. Try enabling \nlogging of checkpoints and checkpoint timings and see if anything there \nlines up with the pause you encounter.\n\n--\nCraig Ringer\n\n",
"msg_date": "Sat, 13 Aug 2011 08:18:48 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
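A minimal sketch of checking for the checkpoint issues suspected above, beyond reading the log: sample the background writer statistics before and after a pgbench run and compare the counters. This is the standard pg_stat_bgwriter view (present since 8.3); nothing here is specific to this machine.

-- How many checkpoints ran, and how many buffers they (and the backends) wrote
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

-- log_checkpoints = on in postgresql.conf adds per-checkpoint write/sync/total
-- timings to the server log, which can be lined up against the TPS dips.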
{
"msg_contents": "On 08/12/2011 07:37 PM, Tomas Vondra wrote:\n> I've run nearly 200 of these, and in about 10 cases I got something that\n> looks like this:\n>\n> http://www.fuzzy.cz/tmp/pgbench/tps.png\n> http://www.fuzzy.cz/tmp/pgbench/latency.png\n>\n> i.e. it runs just fine for about 3:40 and then something goes wrong. The\n> bench should take 5:00 minutes, but it somehow locks, does nothing for\n> about 2 minutes and then all the clients end at the same time. So instead\n> of 5 minutes the run actually takes about 6:40.\n> \n\nYou need to run tests like these for 10 minutes to see the full cycle of \nthings; then you'll likely see them on most runs, instead of only 5%. \nIt's probably the case that some of your tests are finishing before the \nfirst checkpoint does, which is why you don't see the bad stuff every time.\n\nThe long pauses are most likely every client blocking once the \ncheckpoint sync runs. When those fsync calls go out, Linux will freeze \nfor quite a while there on ext3. In this example, the drop in TPS/rise \nin latency at around 50:30 is either the beginning of a checkpoint or \nthe dirty_background_ratio threshold in Linux being exceeded; they tend \nto happen around the same time. It executes the write phase for a bit, \nthen gets into the sync phase around 51:40. You can find a couple of \nexamples just like this on my giant test set around what was committed \nas the fsync compaction feature in 9.1, all at \nhttp://www.2ndquadrant.us/pgbench-results/index.htm\n\nThe one most similar to your case is \nhttp://www.2ndquadrant.us/pgbench-results/481/index.html Had that test \nonly run for 5 minutes, it would have looked just like yours, ending \nafter the long pause that's in the middle on my run. The freeze was \nover 3 minutes long in that example. (My server has a fairly fast disk \nsubsystem, probably faster than what you're testing, but it also has 8GB \nof RAM that it can dirty to more than make up for it).\n\nIn my tests, I switched from ext3 to XFS to get better behavior. You \ngot the same sort of benefit from ext4. ext3 just doesn't handle its \nwrite cache filling and then having fsync calls execute very well. I've \ngiven up on that as an unsolvable problem; improving behavior on XFS and \next4 are the only problems worth worrying about now to me.\n\nAnd I keep seeing too many data corruption issues on ext4 to recommend \nanyone use it yet for PostgreSQL, that's why I focused on XFS. ext4 \nstill needs at least a few more months before all the bug fixes it's \ngotten in later kernels are backported to the 2.6.32 versions deployed \nin RHEL6 and Debian Squeeze, the newest Linux distributions my customers \ncare about right now. On RHEL6 for example, go read \nhttp://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.1_Technical_Notes/kernel.html \n, specifically BZ#635199, and you tell me if that sounds like it's \nconsidered stable code yet or not. 
\"The block layer will be updated in \nfuture kernels to provide this more efficient mechanism of ensuring \nordering...these future block layer improvements will change some kernel \ninterfaces...\" Yikes, that does not inspire confidence to me.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 08/12/2011 07:37 PM, Tomas Vondra wrote:\n\nI've run nearly 200 of these, and in about 10 cases I got something that\nlooks like this:\n\nhttp://www.fuzzy.cz/tmp/pgbench/tps.png\nhttp://www.fuzzy.cz/tmp/pgbench/latency.png\n\ni.e. it runs just fine for about 3:40 and then something goes wrong. The\nbench should take 5:00 minutes, but it somehow locks, does nothing for\nabout 2 minutes and then all the clients end at the same time. So instead\nof 5 minutes the run actually takes about 6:40.\n \n\n\nYou need to run tests like these for 10 minutes to see the full cycle\nof things; then you'll likely see them on most runs, instead of only\n5%. It's probably the case that some of your tests are finishing\nbefore the first checkpoint does, which is why you don't see the bad\nstuff every time.\n\nThe long pauses are most likely every client blocking once the\ncheckpoint sync runs. When those fsync calls go out, Linux will freeze\nfor quite a while there on ext3. In this example, the drop in TPS/rise\nin latency at around 50:30 is either the beginning of a checkpoint or\nthe dirty_background_ratio threshold in Linux being exceeded; they tend\nto happen around the same time. It executes the write phase for a bit,\nthen gets into the sync phase around 51:40. You can find a couple of\nexamples just like this on my giant test set around what was committed\nas the fsync compaction feature in 9.1, all at\nhttp://www.2ndquadrant.us/pgbench-results/index.htm\n\nThe one most similar to your case is\nhttp://www.2ndquadrant.us/pgbench-results/481/index.html Had that test\nonly run for 5 minutes, it would have looked just like yours, ending\nafter the long pause that's in the middle on my run. The freeze was\nover 3 minutes long in that example. (My server has a fairly fast disk\nsubsystem, probably faster than what you're testing, but it also has\n8GB of RAM that it can dirty to more than make up for it).\n\nIn my tests, I switched from ext3 to XFS to get better behavior. You\ngot the same sort of benefit from ext4. ext3 just doesn't handle its\nwrite cache filling and then having fsync calls execute very well. \nI've given up on that as an unsolvable problem; improving behavior on\nXFS and ext4 are the only problems worth worrying about now to me.\n\nAnd I keep seeing too many data corruption issues on ext4 to recommend\nanyone use it yet for PostgreSQL, that's why I focused on XFS. ext4\nstill needs at least a few more months before all the bug fixes it's\ngotten in later kernels are backported to the 2.6.32 versions deployed\nin RHEL6 and Debian Squeeze, the newest Linux distributions my\ncustomers care about right now. On RHEL6 for example, go read\nhttp://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.1_Technical_Notes/kernel.html\n, specifically BZ#635199, and you tell me if that sounds like it's\nconsidered stable code yet or not. 
\"The block layer will be updated in\nfuture kernels to provide this more efficient mechanism of ensuring\nordering...these future block layer improvements will change some\nkernel interfaces...\" Yikes, that does not inspire confidence to me.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Fri, 12 Aug 2011 23:09:07 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
{
"msg_contents": "On 13 Srpen 2011, 5:09, Greg Smith wrote:\n> The long pauses are most likely every client blocking once the\n> checkpoint sync runs. When those fsync calls go out, Linux will freeze\n> for quite a while there on ext3. In this example, the drop in TPS/rise\n> in latency at around 50:30 is either the beginning of a checkpoint or\n> the dirty_background_ratio threshold in Linux being exceeded; they tend\n> to happen around the same time. It executes the write phase for a bit,\n> then gets into the sync phase around 51:40. You can find a couple of\n> examples just like this on my giant test set around what was committed\n> as the fsync compaction feature in 9.1, all at\n> http://www.2ndquadrant.us/pgbench-results/index.htm\n> \n> The one most similar to your case is\n> http://www.2ndquadrant.us/pgbench-results/481/index.html Had that test\n> only run for 5 minutes, it would have looked just like yours, ending\n> after the long pause that's in the middle on my run. The freeze was\n> over 3 minutes long in that example. (My server has a fairly fast disk\n> subsystem, probably faster than what you're testing, but it also has 8GB\n> of RAM that it can dirty to more than make up for it).\n\nI guess you're right - I was thinking about checkpoints too, but what\nreally puzzled me was that only some of the runs (with about the same\nworkload) were affected by that.\n\nIt's probably a timing issue - the tests were running for 5 minutes and\ncheckpoint timeout is 5 minutes too. So the runs where the checkpoint\ntimed out early had to write very little data, but when the checkpoint\ntimed out just before the end it had to write much more data.\n\nI've increased the test duration to 10 minutes, decreased the\ncheckpoint timeout to 4 minutes and a checkpoint is issued just before\nthe pgbench. That way the starting position should be more or less the\nsame for all runs.\n\n> In my tests, I switched from ext3 to XFS to get better behavior. You\n> got the same sort of benefit from ext4. ext3 just doesn't handle its\n> write cache filling and then having fsync calls execute very well. I've\n> given up on that as an unsolvable problem; improving behavior on XFS and\n> ext4 are the only problems worth worrying about now to me.\n\nFor production systems, XFS seems like a good choice. The purpose of\nthe tests I've run was merely to see what is the effect of various block\nsize and mount options for available file systems (including\nexperimental ones).\n\nIf interested, you can see the results here http://www.fuzzy.cz/bench/\nalthough it's the first run with runs not long enough (just 5 minutes)\nand some other slightly misconfigured options (shared buffers and\ncheckpoints).\n\n> And I keep seeing too many data corruption issues on ext4 to recommend\n> anyone use it yet for PostgreSQL, that's why I focused on XFS. ext4\n> still needs at least a few more months before all the bug fixes it's\n> gotten in later kernels are backported to the 2.6.32 versions deployed\n> in RHEL6 and Debian Squeeze, the newest Linux distributions my customers\n> care about right now. On RHEL6 for example, go read\n> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.1_Technical_Notes/kernel.html\n> , specifically BZ#635199, and you tell me if that sounds like it's\n> considered stable code yet or not. 
\"The block layer will be updated in\n> future kernels to provide this more efficient mechanism of ensuring\n> ordering...these future block layer improvements will change some kernel\n> interfaces...\" Yikes, that does not inspire confidence to me.\n\nXFS is naturally much more mature / stable than EXT4, but I'm not quite\nsure I want to judge the stability of code based on a comment in release\nnotes. As I understand it, the comment says something like \"things are\nnot working as efficiently as it should, we'll improve that in the\nfuture\" and it relates to the block layer as a whole, not just specific\nfile systems. But I don't have access to the bug #635199, so maybe I\nmissed something.\n\nregards\nTomas\n",
"msg_date": "Sun, 14 Aug 2011 14:51:37 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
{
"msg_contents": "On Sun, Aug 14, 2011 at 6:51 AM, <[email protected]> wrote:\n>\n> I've increased the test duration to 10 minutes, decreased the\n> checkpoint timeout to 4 minutes and a checkpoint is issued just before\n> the pgbench. That way the starting position should be more or less the\n> same for all runs.\n\nAlso look at increasing checkpoint completion target to something\ncloser to 1. 0.8 is a nice starting place.\n",
"msg_date": "Sun, 14 Aug 2011 07:15:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
{
"msg_contents": "On Sun, 14 Aug 2011 07:15:00 -0600, Scott Marlowe\n<[email protected]> wrote:\n> On Sun, Aug 14, 2011 at 6:51 AM, <[email protected]> wrote:\n>>\n>> I've increased the test duration to 10 minutes, decreased the\n>> checkpoint timeout to 4 minutes and a checkpoint is issued just before\n>> the pgbench. That way the starting position should be more or less the\n>> same for all runs.\n> \n> Also look at increasing checkpoint completion target to something\n> closer to 1. 0.8 is a nice starting place.\n\nYes, I've increased that already:\n\ncheckpoints_segments=64 \ncheckpoints_completion_target=0.9\n\nTomas\n",
"msg_date": "Sun, 14 Aug 2011 16:10:01 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
{
"msg_contents": "<[email protected]> writes:\n> On 13 Srpen 2011, 5:09, Greg Smith wrote:\n>> And I keep seeing too many data corruption issues on ext4 to recommend\n>> anyone use it yet for PostgreSQL, that's why I focused on XFS. ext4\n>> still needs at least a few more months before all the bug fixes it's\n>> gotten in later kernels are backported to the 2.6.32 versions deployed\n>> in RHEL6 and Debian Squeeze, the newest Linux distributions my customers\n>> care about right now. On RHEL6 for example, go read\n>> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.1_Technical_Notes/kernel.html\n>> , specifically BZ#635199, and you tell me if that sounds like it's\n>> considered stable code yet or not. \"The block layer will be updated in\n>> future kernels to provide this more efficient mechanism of ensuring\n>> ordering...these future block layer improvements will change some kernel\n>> interfaces...\" Yikes, that does not inspire confidence to me.\n\n> XFS is naturally much more mature / stable than EXT4, but I'm not quite\n> sure I want to judge the stability of code based on a comment in release\n> notes. As I understand it, the comment says something like \"things are\n> not working as efficiently as it should, we'll improve that in the\n> future\" and it relates to the block layer as a whole, not just specific\n> file systems. But I don't have access to the bug #635199, so maybe I\n> missed something.\n\nI do ;-). The reason for the tech note was to point out that RHEL6.1\nwould incorporate backports of upstream kernel changes that broke the\nABI for loadable kernel modules, compared to what it had been in\nRHEL6.0. That's of great interest to third-party software developers\nwho add device or filesystem drivers to RHEL, but I don't think it\nspeaks at all to whether the code is unstable from a user's standpoint.\n(The changes in question were purely for performance, and involved a\nconversion from write barriers in the block layer to flush+fua, whatever\nthat is.) Furthermore, this affected every filesystem not only ext4,\nso it really entirely fails to support Greg's argument.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Aug 2011 13:33:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end) "
},
{
"msg_contents": "On 08/14/2011 08:51 AM, [email protected] wrote:\n> I've increased the test duration to 10 minutes, decreased the\n> checkpoint timeout to 4 minutes and a checkpoint is issued just before\n> the pgbench. That way the starting position should be more or less the\n> same for all runs.\n> \n\nThat's basically what I settled on for pgbench-tools. Force a \ncheckpoint just before the test, so the beginning of each run is aligned \nmore consistently, then run for long enough that you're guaranteed at \nleast one checkpoint finishes[1] (and you might see more than one if you \nfill checkpoint_segments fast enough). I never bothered trying to \ncompress that test cycle down by decreasing checkpoint_timeout. There's \nalready too many things you need to do in order to get this test working \nwell, and I didn't want to include a change I'd never recommend people \nmake on a production server in the mix.\n\n[1] If your checkpoint behavior goes pathological, for example the \nextended checkpoints possible when the background writer fsync queue \nfills, it's not actually guaranteed that the checkpoint will finish \nwithin 5 minutes after it starts. So a 10 minute run doesn't assure \nyou'll a checkpoint begin and end in all circumstances, but it is the \nexpected case.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Sun, 14 Aug 2011 20:28:58 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
},
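A minimal sketch of the run alignment described above, issued right before each pgbench run. CHECKPOINT requires superuser; the statistics reset is optional and only an assumption about one way to keep per-run counters clean, not something the thread prescribes.

-- Force the pending dirty buffers out so every run starts just after a
-- completed checkpoint rather than in the middle of one.
CHECKPOINT;

-- Optional: zero the per-database statistics so the run's counters are clean.
SELECT pg_stat_reset();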
{
"msg_contents": "On 08/14/2011 01:33 PM, Tom Lane wrote:\n> (The changes in question were purely for performance, and involved a\n> conversion from write barriers in the block layer to flush+fua, whatever\n> that is.) Furthermore, this affected every filesystem not only ext4,\n> so it really entirely fails to support Greg's argument.\n> \n\nFUA is \"Force Unit Access\", shorthand for forcing write-through. If \nyou're staring at this from the perspective I am, where I assume that \nevery line of new filesystem code always takes a while from release \nuntil it's stable, it still concerns me. I suspect that both of these \ncode paths--the write barrier one and the flush+FUA one--are fine on \nXFS. One of the reasons XFS was so terrible in its early years was \nbecause it took a long time to get those parts in particular correct, \nwhich was complicated by how badly drives lied about everything back then.\n\nIf these changes shift around which section of the ext4 write path code \nare exercised, it could easily be the case that it moves fsync related \nwork (which has been one of the buggiest parts of that filesystem) from \ncode that has been tested heavily to parts that haven't been hit as hard \nyet by users. My paranoid raving about this particular bug isn't \nstrongly supported; I'll concede that with the additional data you've \nprovided. But I don't think it's completely unfounded either.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 15 Aug 2011 16:36:49 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pgbench results (as if blocked at the end)"
}
] |
[
{
"msg_contents": "News update for anyone else who's trapped like me, waiting for a fix to \nthe Intel 320 SSD bug where they can truncate themselves to 8MB. Over \nthe weekend Intel has announced a firmware fix for the problem is done, \nand is due to ship \"within the next two weeks\": \nhttp://communities.intel.com/thread/24121\n\nOn the larger SSD reliability front, Tom's Hardware surveyed heavy SSD \nusers they're friendly with who use Intel drives. The most interesting \ndata came from Softlayer: \nhttp://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-6.html\n\nThis supports two claims I made before based on my private data that \nwere controversial:\n\n-Annualized SSD failure rates are not significantly lower than \ntraditional drives in the first couple of years. Jury is still out on \nwhether they will spike upwards starting at 3 years as mechanical ones do.\n\n-The most common source of dead drives is sudden, catastrophic \nelectronics failure. These are not predicted by SMART, and have nothing \nto do with hitting the drive's wear limits.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 15 Aug 2011 19:49:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reports from SSD purgatory"
},
{
"msg_contents": "<dons flameproof underpants once more...>\n\nThis comment by the author I think tends to support my theory that most \nof the\nfailures seen are firmware related (and not due to actual hardware \nfailures, which\nas I mentioned in the previous thread are very rare and should occur \nroughly equally\noften in hard drives as SSDs) :\n\n/As we explained in the article, write endurance is a spec'ed failure. \nThat won't happen in the first year, even at enterprise level use. That \nhas nothing to do with our data. We're interested in random failures. \nThe stuff people have been complaining about... BSODs with OCZ drives, \nLPM stuff with m4s, the SSD 320 problem that makes capacity disappear... \netc... Mostly \"soft\" errors. Any hard error that occurs is subject to \nthe \"defective parts per million\" problem that any electrical component \nalso suffers from./\n\nand from the main article body:\n\n/Firmware is the most significant, and we see its impact in play almost \nevery time an SSD problem is reported.\n/\n(Hard drives also suffer from firmware bugs of course)\n\nI think I'm generally encouraged by this article because it suggests \nthat once the firmware bugs are fixed (or if you buy from a vendor less \nlikely to ship with bugs in the first place), then SSD reliability will \nbe much better than it is perceived to be today.\n\n\n\n\n\n\n\n\n\n\n <dons flameproof underpants once more...>\n\n This comment by the author I think tends to support my theory that\n most of the\n failures seen are firmware related (and not due to actual hardware\n failures, which\n as I mentioned in the previous thread are very rare and should occur\n roughly equally\n often in hard drives as SSDs) :\n\nAs we explained in the article, write endurance is a spec'ed\n failure. That won't happen in the first year, even at enterprise\n level use. That has nothing to do with our data. We're interested\n in random failures. The stuff people have been complaining\n about... BSODs with OCZ drives, LPM stuff with m4s, the SSD 320\n problem that makes capacity disappear... etc... Mostly \"soft\"\n errors. Any hard error that occurs is subject to the \"defective\n parts per million\" problem that any electrical component also\n suffers from.\n\n and from the main article body:\n\nFirmware is the most significant, and we see its impact in play\n almost every time an SSD problem is reported. \n\n (Hard drives also suffer from firmware bugs of course) \n\n I think I'm generally encouraged by this article because it suggests\n that once the firmware bugs are fixed (or if you buy from a vendor\n less likely to ship with bugs in the first place), then SSD\n reliability will be much better than it is perceived to be today.",
"msg_date": "Mon, 15 Aug 2011 22:13:51 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On 08/15/2011 07:49 PM, Greg Smith wrote:\n> News update for anyone else who's trapped like me, waiting for a fix \n> to the Intel 320 SSD bug where they can truncate themselves to 8MB. \n> Over the weekend Intel has announced a firmware fix for the problem is \n> done, and is due to ship \"within the next two weeks\": \n> http://communities.intel.com/thread/24121\n\nhttp://communities.intel.com/thread/24205\nhttp://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18363\n\nI can't believe I'm going to end up using FreeDos to fix this problem.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 19 Aug 2011 13:20:27 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "\n\n---- Original message ----\n>Date: Mon, 15 Aug 2011 19:49:52 -0400\n>From: [email protected] (on behalf of Greg Smith <[email protected]>)\n>Subject: [PERFORM] Reports from SSD purgatory \n>To: \"[email protected]\" <[email protected]>\n>\n>News update for anyone else who's trapped like me, waiting for a fix to \n>the Intel 320 SSD bug where they can truncate themselves to 8MB. Over \n>the weekend Intel has announced a firmware fix for the problem is done, \n>and is due to ship \"within the next two weeks\": \n>http://communities.intel.com/thread/24121\n>\n>On the larger SSD reliability front, Tom's Hardware surveyed heavy SSD \n>users they're friendly with who use Intel drives. The most interesting \n>data came from Softlayer: \n>http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-6.html\n>\n>This supports two claims I made before based on my private data that \n>were controversial:\n>\n>-Annualized SSD failure rates are not significantly lower than \n>traditional drives in the first couple of years. Jury is still out on \n>whether they will spike upwards starting at 3 years as mechanical ones do.\n>\n>-The most common source of dead drives is sudden, catastrophic \n>electronics failure. These are not predicted by SMART, and have nothing \n>to do with hitting the drive's wear limits.\n\nIt's worth knowing exactly what that means. Turns out that NAND quality is price specific. There's gooduns and baduns. Is this a failure in the controller(s) or the NAND?\n\nAlso, given that PG is *nix centric and support for TRIM is win centric, having that makes a big difference in performance. \n\n\n>\n>-- \n>Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n>PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Aug 2011 14:48:09 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 1:48 PM, <[email protected]> wrote:\n>\n>\n> ---- Original message ----\n>>Date: Mon, 15 Aug 2011 19:49:52 -0400\n>>From: [email protected] (on behalf of Greg Smith <[email protected]>)\n>>Subject: [PERFORM] Reports from SSD purgatory\n>>To: \"[email protected]\" <[email protected]>\n>>\n>>News update for anyone else who's trapped like me, waiting for a fix to\n>>the Intel 320 SSD bug where they can truncate themselves to 8MB. Over\n>>the weekend Intel has announced a firmware fix for the problem is done,\n>>and is due to ship \"within the next two weeks\":\n>>http://communities.intel.com/thread/24121\n>>\n>>On the larger SSD reliability front, Tom's Hardware surveyed heavy SSD\n>>users they're friendly with who use Intel drives. The most interesting\n>>data came from Softlayer:\n>>http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-6.html\n>>\n>>This supports two claims I made before based on my private data that\n>>were controversial:\n>>\n>>-Annualized SSD failure rates are not significantly lower than\n>>traditional drives in the first couple of years. Jury is still out on\n>>whether they will spike upwards starting at 3 years as mechanical ones do.\n>>\n>>-The most common source of dead drives is sudden, catastrophic\n>>electronics failure. These are not predicted by SMART, and have nothing\n>>to do with hitting the drive's wear limits.\n>\n> It's worth knowing exactly what that means. Turns out that NAND quality is price specific. There's gooduns and baduns. Is this a failure in the controller(s) or the NAND?\n>\n> Also, given that PG is *nix centric and support for TRIM is win centric, having that makes a big difference in performance.\n\none point about TRIM -- no raid controller that I know of supports\ntrim, which suggests it might not even be possible to support. How\nmuch does it help really? Probably not as much as you would think\nbecause newer SSD drives have very sophisticated controllers that make\nit at least partially obsolete.\n\nmerlin\n\nmerlin\n",
"msg_date": "Wed, 24 Aug 2011 13:54:57 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On Wed, 24 Aug 2011, Merlin Moncure wrote:\n\n> On Wed, Aug 24, 2011 at 1:48 PM, <[email protected]> wrote:\n>>\n>>\n>>\n>> Also, given that PG is *nix centric and support for TRIM is win \n>> centric, having that makes a big difference in performance.\n>\n> one point about TRIM -- no raid controller that I know of supports\n> trim, which suggests it might not even be possible to support. How\n> much does it help really? Probably not as much as you would think\n> because newer SSD drives have very sophisticated controllers that make\n> it at least partially obsolete.\n\nif the SSD can know that the user doesn't care about data in a particular \nblock, the SSD can overwrite that block with new data.\n\nSince the SSDs do their writing to new blocks and erase old blocks later, \nthe more empty blocks you have available, the less likely you are to hit a \ngarbage collection pause when you try to write to the drive.\n\nif you are careful to never write temporary files to the drive and only \nuse it for database-like 'update in place' type of things (no 'write a new \nfile and then rename it over the old one' tricks), then TRIM won't make \nany difference because every block that you have ever written to is one \nthat you care about (or close enough to this for practical purposes)\n\nbut if you don't take this care, the drive works to preserve all the data \nblocks that you have ever written to, even if the filesystem has freed \nthem and dosn't care about them. The worst case would be a log strcutured \nfilesystem (btrfs for example) where every write is to a new block and \nthen the old block is freed later.\n\nDavid Lang\n",
"msg_date": "Wed, 24 Aug 2011 12:13:38 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On 24 Srpen 2011, 20:48, [email protected] wrote:\n\n> It's worth knowing exactly what that means. Turns out that NAND quality\n> is price specific. There's gooduns and baduns. Is this a failure in the\n> controller(s) or the NAND?\n\nWhy is that important? It's simply a failure of electronics and it has\nnothing to do with the wear limits. It simply fails without prior warning\nfrom the SMART.\n\n> Also, given that PG is *nix centric and support for TRIM is win centric,\n> having that makes a big difference in performance.\n\nWindows specific? What do you mean? TRIM is a low-level way to tell the\ndrive 'this block is empty and may be used for something else' - it's just\nanother command sent to the drive. It has to be supported by the\nfilesystem, though (e.g. ext4/btrfs support it).\n\nTomas\n\n",
"msg_date": "Wed, 24 Aug 2011 21:32:16 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 2:32 PM, Tomas Vondra <[email protected]> wrote:\n> On 24 Srpen 2011, 20:48, [email protected] wrote:\n>\n>> It's worth knowing exactly what that means. Turns out that NAND quality\n>> is price specific. There's gooduns and baduns. Is this a failure in the\n>> controller(s) or the NAND?\n>\n> Why is that important? It's simply a failure of electronics and it has\n> nothing to do with the wear limits. It simply fails without prior warning\n> from the SMART.\n>\n>> Also, given that PG is *nix centric and support for TRIM is win centric,\n>> having that makes a big difference in performance.\n>\n> Windows specific? What do you mean? TRIM is a low-level way to tell the\n> drive 'this block is empty and may be used for something else' - it's just\n> another command sent to the drive. It has to be supported by the\n> filesystem, though (e.g. ext4/btrfs support it).\n\nWell, it's a fair point that TRIM support is probably more widespread\non windows.\n\nmerlin\n",
"msg_date": "Wed, 24 Aug 2011 14:41:47 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "\n\n---- Original message ----\n>Date: Wed, 24 Aug 2011 21:32:16 +0200\n>From: [email protected] (on behalf of \"Tomas Vondra\" <[email protected]>)\n>Subject: Re: [PERFORM] Reports from SSD purgatory \n>To: [email protected]\n>Cc: [email protected]\n>\n>On 24 Srpen 2011, 20:48, [email protected] wrote:\n>\n>> It's worth knowing exactly what that means. Turns out that NAND quality\n>> is price specific. There's gooduns and baduns. Is this a failure in the\n>> controller(s) or the NAND?\n>\n>Why is that important? It's simply a failure of electronics and it has\n>nothing to do with the wear limits. It simply fails without prior warning\n>from the SMART.\n\nIt matters because if it's the controller, there's nothing one can do about it (the vendor). If it's the NAND, then the vendor/customer can get drives with gooduns rather than baduns. Not necessarily a quick fix, but knowing the quality of the NAND in the SSD you're planning to buy matters.\n>\n>> Also, given that PG is *nix centric and support for TRIM is win centric,\n>> having that makes a big difference in performance.\n>\n>Windows specific? What do you mean? TRIM is a low-level way to tell the\n>drive 'this block is empty and may be used for something else' - it's just\n>another command sent to the drive. It has to be supported by the\n>filesystem, though (e.g. ext4/btrfs support it).\n\nMy point. The firmware and MS have been faster to support TRIM than *nix, linux in particular. Those that won't/can't move to a recent kernel don't get TRIM.\n\n>\n>Tomas\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Aug 2011 15:42:05 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On 8/24/2011 1:32 PM, Tomas Vondra wrote:\n> Why is that important? It's simply a failure of electronics and it has \n> nothing to do with the wear limits. It simply fails without prior \n> warning from the SMART.\n\nIn the cited article (actually in all articles I've read on this \nsubject), the failures were not properly analyzed*.\nTherefore the conclusion that the failures were of electronics \ncomponents is invalid.\nIn the most recent article, people have pointed to it as confirming \nelectronics failures\nbut the article actually states that the majority of failures were \nsuspected to be\nfirmware-related.\n\nWe know that a) there have been failures, but b) not the cause.\n\nWe don't even know for sure that the cause was not cell wear.\nThat's because all we know is that the drives did not report\nwear before failing. The wear reporting mechanism could be broken for \nall we know.\n\n--\n*A \"proper\" analysis would involve either the original manufacturer's FA \nlab, or a qualified independent analysis lab.\n\n\n",
"msg_date": "Wed, 24 Aug 2011 13:43:00 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On 24 Srpen 2011, 21:41, Merlin Moncure wrote:\n> On Wed, Aug 24, 2011 at 2:32 PM, Tomas Vondra <[email protected]> wrote:\n>> On 24 Srpen 2011, 20:48, [email protected] wrote:\n>>> Also, given that PG is *nix centric and support for TRIM is win\n>>> centric,\n>>> having that makes a big difference in performance.\n>>\n>> Windows specific? What do you mean? TRIM is a low-level way to tell the\n>> drive 'this block is empty and may be used for something else' - it's\n>> just\n>> another command sent to the drive. It has to be supported by the\n>> filesystem, though (e.g. ext4/btrfs support it).\n>\n> Well, it's a fair point that TRIM support is probably more widespread\n> on windows.\n\nAFAIK the only versions that supports it natively are Windows 7 and\nWindows Server 2008 R2 - with other versions you're stuck with\ncommand-line tools equal to wiper.sh or hdparm. So I don't see a\nsignificant difference here - with a reasonably new systems (at least\nkernel 2.6.33), the support is about the same.\n\nObviously there more machines with Windows, especially in the field of\ndesktop/laptop, but that does not make the TRIM Windows-specific I guess.\nMost of them runs old versions (without TRIM support) anyway.\n\nTomas\n\n",
"msg_date": "Wed, 24 Aug 2011 21:54:48 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On 24 Srpen 2011, 21:42, [email protected] wrote:\n>\n>\n> ---- Original message ----\n>>Date: Wed, 24 Aug 2011 21:32:16 +0200\n>>From: [email protected] (on behalf of \"Tomas Vondra\"\n>> <[email protected]>)\n>>Subject: Re: [PERFORM] Reports from SSD purgatory\n>>To: [email protected]\n>>Cc: [email protected]\n>>\n>>On 24 Srpen 2011, 20:48, [email protected] wrote:\n>>\n>>> It's worth knowing exactly what that means. Turns out that NAND\n>>> quality\n>>> is price specific. There's gooduns and baduns. Is this a failure in\n>>> the\n>>> controller(s) or the NAND?\n>>\n>>Why is that important? It's simply a failure of electronics and it has\n>>nothing to do with the wear limits. It simply fails without prior warning\n>>from the SMART.\n>\n> It matters because if it's the controller, there's nothing one can do\n> about it (the vendor). If it's the NAND, then the vendor/customer can get\n> drives with gooduns rather than baduns. Not necessarily a quick fix, but\n> knowing the quality of the NAND in the SSD you're planning to buy matters.\n\nOK, now I see the difference. Still, it'll be quite difficult to find out\nwhich NAND manufacturers are good, especially when the drive manufacturer\nmay use more of them at the same time. And as David Boreham pointed out,\nwe don't know why the drives actually failed :-(\n\n>>> Also, given that PG is *nix centric and support for TRIM is win\n>>> centric,\n>>> having that makes a big difference in performance.\n>>\n>>Windows specific? What do you mean? TRIM is a low-level way to tell the\n>>drive 'this block is empty and may be used for something else' - it's\n>> just\n>>another command sent to the drive. It has to be supported by the\n>>filesystem, though (e.g. ext4/btrfs support it).\n>\n> My point. The firmware and MS have been faster to support TRIM than *nix,\n> linux in particular. Those that won't/can't move to a recent kernel don't\n> get TRIM.\n\nFaster? Windows 7 was released on October 2009, Linux supports TRIM since\nFebruary 2010. That's about 3 or 4 months difference - given that it may\neasily take a year to put a new OS / kernel into a production, it's\nnegligible difference. For example most of the corporations / banks I'm\nworking for are still using Windows XP.\n\nDon't get me wrong - I'm not blindly fighting against Windows, I just\ndon't see how this makes the TRIM a windows-specific feature.\n\nTomas\n\n",
"msg_date": "Wed, 24 Aug 2011 22:10:28 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
},
{
"msg_contents": "On Wed, 24 Aug 2011, Tomas Vondra wrote:\n\n> On 24 Srpen 2011, 21:42, [email protected] wrote:\n>>\n>>\n>>\n>> My point. The firmware and MS have been faster to support TRIM than *nix,\n>> linux in particular. Those that won't/can't move to a recent kernel don't\n>> get TRIM.\n>\n> Faster? Windows 7 was released on October 2009, Linux supports TRIM since\n> February 2010. That's about 3 or 4 months difference - given that it may\n> easily take a year to put a new OS / kernel into a production, it's\n> negligible difference. For example most of the corporations / banks I'm\n> working for are still using Windows XP.\n>\n> Don't get me wrong - I'm not blindly fighting against Windows, I just\n> don't see how this makes the TRIM a windows-specific feature.\n\nthe thing is that many people using Linux are using RedHat Enterprise \nLinux 5, which was released several years prior to that, and trim is not \none of the things that Red Hat has backported to their ancient kernel. so \nfor those people it doesn't exist prior to RHEL 6.0 which was released \nmuch more recently.\n\nDavid Lang\n",
"msg_date": "Wed, 24 Aug 2011 13:27:03 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Reports from SSD purgatory"
}
] |
[
{
"msg_contents": "Hi,\nI encountered a problem while trying to improve the performance of a certain\nselect query I have made.\nhere is a simplified code for the function I am using\n\nCREATE OR REPLACE FUNCTION test_func(STR text)\nRETURNS integer AS\n$BODY$\n\nbegin\n\ninsert into plcbug(val) values('begin time before perform');\n\nperform t1.val FROM t1 WHERE\n(COALESCE(rpad(t1.val, 100),'') ) like COALESCE(STR || '%','')\norder by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\ninsert into plcbug(val) values('time after perform');\n\nreturn 1;\nEND;\n$BODY$\nLANGUAGE plpgsql VOLATILE\nCOST 100;\nALTER FUNCTION test_func(text) OWNER TO postgres;\n\n\nplcbug is a table I am using in order to see how much time has past between\nthe perform query.\n\nt1 (about 800,000 records) is:\ncreate table t1 (val varchar(200))\n\nthis is the code of the index for the query\n\nCREATE INDEX ixt1 ON t1 USING btree\n((COALESCE(rpad(val::text, 100), ''::text)) varchar_pattern_ops)\n\n\nthe problem is that for some reason the index is not being used when I try\nto run the function with the STR variable(the running time is about 70\nmilliseconds), but if I am writing the same text instead of using the\nvariable STR then the index is being used(the runing time is about 6\nmilliseconds)\n\nto make it more clear\nCOALESCE(STR || '%','') this is when I use the variable and the function is\nbeing called by\nselect test_func('si')\n\nCOALESCE('si' || '%','') this is when I write the text at hand and the index\nis being used.\n\nI tried to cast the expression with every type I could think of with no\nsuccess of making the index work\n\npostgresql version is 9.0.4 64-bit on windows server 2008 R2.\n\nmore info:\ni did not know how to do \"explain analyze\" for the code inside the function.\nso i did something which i believe still represent the same problem. instead\nof using the variable (STR) i did a select from a very simple, one record\ntable t2, which holds the value.\n\ncreate table t2 (val varchar(200));\ninsert into t2 (val) values ('si');\nanalyze t2;\n\nselect t1.val FROM t1 WHERE\n(COALESCE(rpad(t1.val, 100),'') ) like COALESCE((select val from t2 limit 1)\n|| '%','')\norder by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\nhttp://explain.depesz.com/s/FRb\n\n\nselect t1.val FROM t1 WHERE\n(COALESCE(rpad(t1.val, 100),'') ) like COALESCE('si' || '%','')\norder by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\nhttp://explain.depesz.com/s/2XI\n\n\nThanks in advance for the help!\nEran\n\nHi, I encountered a problem while trying to improve the performance of a certain select query I have made. 
here is a simplified code for the function I am using \nCREATE OR REPLACE FUNCTION test_func(STR text) RETURNS integer AS $BODY$ begin insert into plcbug(val) values('begin time before perform'); \nperform t1.val FROM t1 WHERE (COALESCE(rpad(t1.val, 100),'') ) like COALESCE(STR || '%','') order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5; \ninsert into plcbug(val) values('time after perform'); return 1; END; $BODY$ LANGUAGE plpgsql VOLATILE COST 100; \n\nALTER FUNCTION test_func(text) OWNER TO postgres; plcbug is a table I am using in order to see how much time has past between the perform query.t1 (about 800,000 records) is:\ncreate table t1 (val varchar(200))this is the code of the index for the query CREATE INDEX ixt1 ON t1 USING btree ((COALESCE(rpad(val::text, 100), ''::text)) varchar_pattern_ops)\nthe problem is that for some reason the index is not being used when I try to run the function with the STR variable(the running time is about 70 milliseconds), but if I am writing the same text instead of using the variable STR then the index is being used(the runing time is about 6 milliseconds) \nto make it more clear COALESCE(STR || '%','') this is when I use the variable and the function is being called by select test_func('si') \nCOALESCE('si' || '%','') this is when I write the text at hand and the index is being used. I tried to cast the expression with every type I could think of with no success of making the index work \npostgresql version is 9.0.4 64-bit on windows server 2008 R2.more info:i did not know how to do \"explain analyze\" for the code inside the function. so i did something which i believe still represent the same problem. instead of using the variable (STR) i did a select from a very simple, one record table t2, which holds the value.\ncreate table t2 (val varchar(200));insert into t2 (val) values ('si');analyze t2;select t1.val FROM t1 WHERE (COALESCE(rpad(t1.val, 100),'') ) like COALESCE((select val from t2 limit 1) || '%','') \norder by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5; http://explain.depesz.com/s/FRb\nselect t1.val FROM t1 WHERE \n(COALESCE(rpad(t1.val, 100),'') ) like COALESCE('si' || '%','') order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5; http://explain.depesz.com/s/2XI\nThanks in advance for the help! Eran",
"msg_date": "Tue, 16 Aug 2011 07:30:18 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "index not being used when variable is sent"
},
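For anyone wanting the EXPLAIN ANALYZE the poster could not get from inside the function: a prepared statement is planned without knowing the parameter value, much like the PERFORM above, so a sketch along these lines (the statement name is made up; the query is taken from the post) should reproduce the slower, non-indexed plan:

PREPARE prefix_search(text) AS
  SELECT t1.val FROM t1
   WHERE (COALESCE(rpad(t1.val, 100),'') ) like COALESCE($1 || '%','')
   ORDER BY COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;

EXPLAIN ANALYZE EXECUTE prefix_search('si');  -- generic plan, index not used

DEALLOCATE prefix_search;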
{
"msg_contents": "Eyal Wilde <[email protected]> writes:\n> CREATE OR REPLACE FUNCTION test_func(STR text)\n> ...\n> perform t1.val FROM t1 WHERE\n> (COALESCE(rpad(t1.val, 100),'') ) like COALESCE(STR || '%','')\n> order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\n[ doesn't use index ]\n\nNo, it doesn't. The LIKE index optimization requires the LIKE pattern\nto be a constant at plan time, so that the planner can extract the\npattern's fixed prefix. An expression depending on a function parameter\nis certainly not constant.\n\nIf you really need this to work, you could use EXECUTE USING so that\nthe query is re-planned for each execution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Aug 2011 09:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not being used when variable is sent "
},
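A minimal sketch of what that could look like for the function from the first post (the function name here is made up and the timing inserts are left out; only the query is shown):

CREATE OR REPLACE FUNCTION test_func_dyn(STR text)
RETURNS integer AS
$BODY$
begin

-- Dynamic SQL is planned on every call with the actual value of STR
-- available, which is what lets the fixed-prefix LIKE optimization apply.
EXECUTE $q$
  SELECT t1.val FROM t1
   WHERE COALESCE(rpad(t1.val, 100), '') LIKE COALESCE($1 || '%', '')
   ORDER BY COALESCE(rpad(t1.val, 100), '') USING ~<~ LIMIT 5
$q$ USING STR;

return 1;
END;
$BODY$
LANGUAGE plpgsql;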
{
"msg_contents": "Thanks for the reply.\n\n(i'm sorry for that i didn't really know how to reply to a certain\nmessage...)\n\nwell, i used LIKE, but i actually wanted just \"starts with\".\nthe solution i found without using LIKE is this:\n\nCREATE OR REPLACE FUNCTION test_func(STR text)\nRETURNS integer AS\n$BODY$\ndeclare\n STR2 varchar;\n\nbegin\n\n-- example: if STR is 'abc' then STR2 would be 'abd'\nSTR2 :=\nsubstring(STR,0,length(STR))||chr((ascii(substring(STR,length(STR)))+1));\n\ninsert into plcbug(val) values('begin time before perform');\n\nperform t1.val FROM t1 WHERE\n(COALESCE(rpad((val)::text, 100, ' '::text), ''::text) ~>=~ STR::text) AND\n(COALESCE(rpad((val)::text, 100, ' '::text), ''::text) ~<~ STR2::text)\norder by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\ninsert into plcbug(val) values('time after perform');\n\nreturn 1;\nEND;\n$BODY$\nLANGUAGE plpgsql VOLATILE\nCOST 100;\nALTER FUNCTION test_func(text) OWNER TO postgres;\n\n\n1. is there any more elegant solution?\n2. considering LIKE, practically there are only two cases: the expression\n(variable||'%') may be '%something%' or 'something%' [*], right?? do you\nthink the optimizer can do better by conditionally splitting the plan\naccording to actual value of a variable?\n\n[*] for the sake of the discussion lets forget about '_something'.\n\n\nThanks again.\n\nOn Tue, Aug 16, 2011 at 16:40, Tom Lane <[email protected]> wrote:\n\n> Eyal Wilde <[email protected]> writes:\n> > CREATE OR REPLACE FUNCTION test_func(STR text)\n> > ...\n> > perform t1.val FROM t1 WHERE\n> > (COALESCE(rpad(t1.val, 100),'') ) like COALESCE(STR || '%','')\n> > order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n>\n> [ doesn't use index ]\n>\n> No, it doesn't. The LIKE index optimization requires the LIKE pattern\n> to be a constant at plan time, so that the planner can extract the\n> pattern's fixed prefix. An expression depending on a function parameter\n> is certainly not constant.\n>\n> If you really need this to work, you could use EXECUTE USING so that\n> the query is re-planned for each execution.\n>\n> regards, tom lane\n>\n\nThanks for the reply.(i'm sorry for that i didn't really know how to reply to a certain message...)\nwell, i used LIKE, but i actually wanted just \"starts with\".the solution i found without using LIKE is this:CREATE OR REPLACE FUNCTION test_func(STR text) \nRETURNS integer AS $BODY$ declare STR2 varchar;begin -- example: if STR is 'abc' then STR2 would be 'abd'\n\nSTR2 := substring(STR,0,length(STR))||chr((ascii(substring(STR,length(STR)))+1));insert into plcbug(val) values('begin time before perform'); perform t1.val FROM t1 WHERE \n(COALESCE(rpad((val)::text, 100, ' '::text), ''::text) ~>=~ STR::text) AND (COALESCE(rpad((val)::text, 100, ' '::text), ''::text) ~<~ STR2::text)order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5; \ninsert into plcbug(val) values('time after perform'); return 1; END; $BODY$ LANGUAGE plpgsql VOLATILE COST 100; \n\nALTER FUNCTION test_func(text) OWNER TO postgres; 1. is there any more elegant solution?2. considering LIKE, practically there are only two cases: the expression (variable||'%') may be '%something%' or 'something%' [*], right?? 
do you think the optimizer can do better by conditionally splitting the plan according to actual value of a variable?\n[*] for the sake of the discussion lets forget about '_something'.Thanks again.On Tue, Aug 16, 2011 at 16:40, Tom Lane <[email protected]> wrote:\nEyal Wilde <[email protected]> writes:\n> CREATE OR REPLACE FUNCTION test_func(STR text)\n> ...\n> perform t1.val FROM t1 WHERE\n> (COALESCE(rpad(t1.val, 100),'') ) like COALESCE(STR || '%','')\n> order by COALESCE(rpad(t1.val, 100), '') using ~<~ LIMIT 5;\n\n[ doesn't use index ]\n\nNo, it doesn't. The LIKE index optimization requires the LIKE pattern\nto be a constant at plan time, so that the planner can extract the\npattern's fixed prefix. An expression depending on a function parameter\nis certainly not constant.\n\nIf you really need this to work, you could use EXECUTE USING so that\nthe query is re-planned for each execution.\n\n regards, tom lane",
"msg_date": "Wed, 17 Aug 2011 09:49:05 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index not being used when variable is sent"
},
{
"msg_contents": "On Aug 17, 2011, at 1:49 AM, Eyal Wilde wrote:\n> 1. is there any more elegant solution?\n\nVery possibly, but I'm having a heck of a time trying to figure out what your current code is actually doing.\n\nWhat's the actual problem you're trying to solve here?\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Wed, 17 Aug 2011 15:49:59 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not being used when variable is sent"
}
] |
[
{
"msg_contents": "Hope all is well. I have received tremendous help from this list prior and therefore wanted some more advice. \n\nI bought some new servers and instead of RAID 5 (which I think greatly hindered our writing performance), I configured 6 SCSI 15K drives with RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K drives on a different virtual disk and also Raid 10, a total of 146Gb. I was thinking of putting Postgres' xlog directory on the OS virtual drive. Does this even make sense to do?\n\nThe system memory is 64GB and the CPUs are dual Intel E5645 chips (they are 6-core each). \n\nIt is a dedicated PostgreSQL box and needs to support heavy read and moderately heavy writes. \n\nCurrently, I have this for the current system which as 16Gb Ram:\n\n max_connections = 350\n\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\nwal_buffers = 640kB\n\n# This is what I was helped with before and made reporting queries blaze by\nseq_page_cost = 1.0\nrandom_page_cost = 3.0\ncpu_tuple_cost = 0.5\neffective_cache_size = 8192MB\n\nAny help and input is greatly appreciated. \n\nThank you\n\nOgden",
"msg_date": "Tue, 16 Aug 2011 20:35:03 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning Tips for a new Server"
},
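One convenient way to show the whole picture in threads like this, instead of hand-picking lines from postgresql.conf, is to list everything that differs from the built-in defaults:

SELECT name, setting, unit, source
  FROM pg_settings
 WHERE source NOT IN ('default', 'override')
 ORDER BY name;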
{
"msg_contents": "On 8/16/2011 8:35 PM, Ogden wrote:\n> Hope all is well. I have received tremendous help from this list prior and therefore wanted some more advice.\n>\n> I bought some new servers and instead of RAID 5 (which I think greatly hindered our writing performance), I configured 6 SCSI 15K drives with RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K drives on a different virtual disk and also Raid 10, a total of 146Gb. I was thinking of putting Postgres' xlog directory on the OS virtual drive. Does this even make sense to do?\n>\n> The system memory is 64GB and the CPUs are dual Intel E5645 chips (they are 6-core each).\n>\n> It is a dedicated PostgreSQL box and needs to support heavy read and moderately heavy writes.\n>\n> Currently, I have this for the current system which as 16Gb Ram:\n>\n> max_connections = 350\n>\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 640kB\n>\n> # This is what I was helped with before and made reporting queries blaze by\n> seq_page_cost = 1.0\n> random_page_cost = 3.0\n> cpu_tuple_cost = 0.5\n> effective_cache_size = 8192MB\n>\n> Any help and input is greatly appreciated.\n>\n> Thank you\n>\n> Ogden\n\nWhat seems to be the problem? I mean, if nothing is broke, then don't \nfix it :-)\n\nYou say reporting query's are fast, and the disk's should take care of \nyour slow write problem from before. (Did you test the write \nperformance?) So, whats wrong?\n\n\n-Andy\n",
"msg_date": "Wed, 17 Aug 2011 08:41:46 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 8:41 AM, Andy Colson wrote:\n\n> On 8/16/2011 8:35 PM, Ogden wrote:\n>> Hope all is well. I have received tremendous help from this list prior and therefore wanted some more advice.\n>> \n>> I bought some new servers and instead of RAID 5 (which I think greatly hindered our writing performance), I configured 6 SCSI 15K drives with RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K drives on a different virtual disk and also Raid 10, a total of 146Gb. I was thinking of putting Postgres' xlog directory on the OS virtual drive. Does this even make sense to do?\n>> \n>> The system memory is 64GB and the CPUs are dual Intel E5645 chips (they are 6-core each).\n>> \n>> It is a dedicated PostgreSQL box and needs to support heavy read and moderately heavy writes.\n>> \n>> Currently, I have this for the current system which as 16Gb Ram:\n>> \n>> max_connections = 350\n>> \n>> work_mem = 32MB\n>> maintenance_work_mem = 512MB\n>> wal_buffers = 640kB\n>> \n>> # This is what I was helped with before and made reporting queries blaze by\n>> seq_page_cost = 1.0\n>> random_page_cost = 3.0\n>> cpu_tuple_cost = 0.5\n>> effective_cache_size = 8192MB\n>> \n>> Any help and input is greatly appreciated.\n>> \n>> Thank you\n>> \n>> Ogden\n> \n> What seems to be the problem? I mean, if nothing is broke, then don't fix it :-)\n> \n> You say reporting query's are fast, and the disk's should take care of your slow write problem from before. (Did you test the write performance?) So, whats wrong?\n\n\n I was wondering what the best parameters would be with my new setup. The work_mem obviously will increase as will everything else as it's a 64Gb machine as opposed to a 16Gb machine. The configuration I posted was for a 16Gb machine but this new one is 64Gb. I needed help in how to jump these numbers up. \n\nThank you\n\nOgden",
"msg_date": "Wed, 17 Aug 2011 09:28:56 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Tips for a new Server"
},
{
"msg_contents": "On 17 Srpen 2011, 3:35, Ogden wrote:\n> Hope all is well. I have received tremendous help from this list prior and\n> therefore wanted some more advice.\n>\n> I bought some new servers and instead of RAID 5 (which I think greatly\n> hindered our writing performance), I configured 6 SCSI 15K drives with\n> RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K\n> drives on a different virtual disk and also Raid 10, a total of 146Gb. I\n> was thinking of putting Postgres' xlog directory on the OS virtual drive.\n> Does this even make sense to do?\n\nYes, but it greatly depends on the amount of WAL and your workload. If you\nneed to write a lot of WAL data (e.g. during bulk loading), this may\nsignificantly improve performance. It may also help when you have a\nwrite-heavy workload (a lot of clients updating records, background writer\netc.) as that usually means a lot of seeking (while WAL is written\nsequentially).\n\n> The system memory is 64GB and the CPUs are dual Intel E5645 chips (they\n> are 6-core each).\n>\n> It is a dedicated PostgreSQL box and needs to support heavy read and\n> moderately heavy writes.\n\nWhat is the size of the database? So those are the new servers? What's the\ndifference compared to the old ones? What is the RAID controller, how much\nwrite cache is there?\n\n> Currently, I have this for the current system which as 16Gb Ram:\n>\n> max_connections = 350\n>\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 640kB\n\nAre you really using 350 connections? Something like \"#cpus + #drives\" is\nusually recommended as a sane number, unless the connections are idle most\nof the time. And even in that case a pooling is recommended usually.\n\nAnyway if this worked fine for your workload, I don't think you need to\nchange those settings. I'd probably bump up the wal_buffers to 16MB - it\nmight help a bit, definitely won't hurt and it's so little memory it's not\nworth the effort I guess.\n\n>\n> # This is what I was helped with before and made reporting queries blaze\n> by\n> seq_page_cost = 1.0\n> random_page_cost = 3.0\n> cpu_tuple_cost = 0.5\n> effective_cache_size = 8192MB\n\nAre you sure the cpu_tuple_cost = 0.5 is correct? That seems a bit crazy\nto me, as it says reading a page sequentially is just twice as expensive\nas processing it. This value should be abou 100x lower or something like\nthat.\n\nWhat are the checkpoint settings (segments, completion target). What about\nshared buffers?\n\nTomas\n\n",
"msg_date": "Wed, 17 Aug 2011 16:44:39 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
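A rough way to check whether those 350 connections are mostly busy or mostly idle (and therefore whether a pooler is worth the trouble), assuming a pre-9.2 server where idle sessions report current_query = '<IDLE>':

SELECT (current_query = '<IDLE>') AS idle, count(*)
  FROM pg_stat_activity
 GROUP BY 1;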
{
"msg_contents": "On 17 Srpen 2011, 16:28, Ogden wrote:\n> I was wondering what the best parameters would be with my new setup. The\n> work_mem obviously will increase as will everything else as it's a 64Gb\n> machine as opposed to a 16Gb machine. The configuration I posted was for\n> a 16Gb machine but this new one is 64Gb. I needed help in how to jump\n> these numbers up.\n\nWell, that really depends on how you come to the current work_mem settings.\n\nIf you've decided that with this amount of work_mem the queries run fine\nand higher values don't give you better performance (because the amount of\ndata that needs to be sorted / hashed) fits into the work_mem, then don't\nincrease it.\n\nBut if you've just set it so that the memory is not exhausted, increasing\nit may actually help you.\n\nWhat I think you should review is the amount of shared buffers,\ncheckpoints and page cache settings (see this for example\nhttp://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html).\n\nTomas\n\n",
"msg_date": "Wed, 17 Aug 2011 17:08:31 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
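One way to tell whether a particular query would actually benefit from more work_mem is to look for on-disk sorts in its EXPLAIN ANALYZE output; the table and column below are placeholders, and 32MB is just the value from the old config:

SET work_mem = '32MB';
EXPLAIN ANALYZE
SELECT * FROM big_report_table ORDER BY created_at;
-- "Sort Method: external merge  Disk: ...kB" means the sort spilled to disk
-- and a larger work_mem (or a per-session SET for the reporting job) would
-- help; "Sort Method: quicksort  Memory: ...kB" means it already fits.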
{
"msg_contents": "\nOn Aug 17, 2011, at 9:44 AM, Tomas Vondra wrote:\n\n> On 17 Srpen 2011, 3:35, Ogden wrote:\n>> Hope all is well. I have received tremendous help from this list prior and\n>> therefore wanted some more advice.\n>> \n>> I bought some new servers and instead of RAID 5 (which I think greatly\n>> hindered our writing performance), I configured 6 SCSI 15K drives with\n>> RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K\n>> drives on a different virtual disk and also Raid 10, a total of 146Gb. I\n>> was thinking of putting Postgres' xlog directory on the OS virtual drive.\n>> Does this even make sense to do?\n> \n> Yes, but it greatly depends on the amount of WAL and your workload. If you\n> need to write a lot of WAL data (e.g. during bulk loading), this may\n> significantly improve performance. It may also help when you have a\n> write-heavy workload (a lot of clients updating records, background writer\n> etc.) as that usually means a lot of seeking (while WAL is written\n> sequentially).\n\nThe database is about 200Gb so using /usr/local/pgsql/pg_xlog on a virtual disk with 100Gb should not be a problem with the disk space should it?\n\n>> The system memory is 64GB and the CPUs are dual Intel E5645 chips (they\n>> are 6-core each).\n>> \n>> It is a dedicated PostgreSQL box and needs to support heavy read and\n>> moderately heavy writes.\n> \n> What is the size of the database? So those are the new servers? What's the\n> difference compared to the old ones? What is the RAID controller, how much\n> write cache is there?\n> \n\nI am sorry I overlooked specifying this. The database is about 200Gb and yes these are new servers which bring more power (RAM, CPU) over the last one. The RAID Controller is a Perc H700 and there is 512Mb write cache. The servers are Dells. \n\n>> Currently, I have this for the current system which as 16Gb Ram:\n>> \n>> max_connections = 350\n>> \n>> work_mem = 32MB\n>> maintenance_work_mem = 512MB\n>> wal_buffers = 640kB\n> \n> Are you really using 350 connections? Something like \"#cpus + #drives\" is\n> usually recommended as a sane number, unless the connections are idle most\n> of the time. And even in that case a pooling is recommended usually.\n> \n> Anyway if this worked fine for your workload, I don't think you need to\n> change those settings. I'd probably bump up the wal_buffers to 16MB - it\n> might help a bit, definitely won't hurt and it's so little memory it's not\n> worth the effort I guess.\n\nSo just increasing the wal_buffers is okay? I thought there would be more as the memory in the system is now 4 times as much. Perhaps shared_buffers too (down below). \n\n>> \n>> # This is what I was helped with before and made reporting queries blaze\n>> by\n>> seq_page_cost = 1.0\n>> random_page_cost = 3.0\n>> cpu_tuple_cost = 0.5\n>> effective_cache_size = 8192MB\n> \n> Are you sure the cpu_tuple_cost = 0.5 is correct? That seems a bit crazy\n> to me, as it says reading a page sequentially is just twice as expensive\n> as processing it. This value should be abou 100x lower or something like\n> that.\n\nThese settings are for the old server, keep in mind. It's a 16GB machine (the new one is 64Gb). The value for cpu_tuple_cost should be 0.005? How are the other ones?\n\n\n> What are the checkpoint settings (segments, completion target). 
What about\n> shared buffers?\n\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 5min # range 30s-1h\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 - was 0.5\n#checkpoint_warning = 30s # 0 disables\n\nAnd\n\nshared_buffers = 4096MB \n\n\nThank you very much\n\nOgden\n\n\n",
"msg_date": "Wed, 17 Aug 2011 11:39:21 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Tips for a new Server"
},
{
"msg_contents": "I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n\nThe benchmark results are here:\n\nhttp://malekkoheavyindustry.com/benchmark.html\n\n\nThank you\n\nOgden\nI am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?The benchmark results are here:http://malekkoheavyindustry.com/benchmark.htmlThank youOgden",
"msg_date": "Wed, 17 Aug 2011 13:26:56 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n> \n> The benchmark results are here:\n> \n> http://malekkoheavyindustry.com/benchmark.html\n> \n> \n> Thank you\n> \n> Ogden\n\nThat looks pretty normal to me.\n\nKen\n",
"msg_date": "Wed, 17 Aug 2011 13:31:01 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n\n> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>> \n>> The benchmark results are here:\n>> \n>> http://malekkoheavyindustry.com/benchmark.html\n>> \n>> \n>> Thank you\n>> \n>> Ogden\n> \n> That looks pretty normal to me.\n> \n> Ken\n\nBut such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n\nOgden",
"msg_date": "Wed, 17 Aug 2011 13:32:41 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 17/08/2011 7:26 PM, Ogden wrote:\n> I am using bonnie++ to benchmark our current Postgres system (on RAID \n> 5) with the new one we have, which I have configured with RAID 10. The \n> drives are the same (SAS 15K). I tried the new system with ext3 and \n> then XFS but the results seem really outrageous as compared to the \n> current system, or am I reading things wrong?\n>\n> The benchmark results are here:\n>\n> http://malekkoheavyindustry.com/benchmark.html\n>\nThe results are not completely outrageous, however you don't say what \ndrives, how many and what RAID controller you have in the current and \nnew systems. You might expect that performance from 10/12 disks in RAID \n10 with a good controller. I would say that your current system is \noutrageous in that is is so slow!\n\nCheers,\nGary.\n",
"msg_date": "Wed, 17 Aug 2011 19:33:13 +0100",
"msg_from": "Gary Doades <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n> \n> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n> \n> > On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n> >> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n> >> \n> >> The benchmark results are here:\n> >> \n> >> http://malekkoheavyindustry.com/benchmark.html\n> >> \n> >> \n> >> Thank you\n> >> \n> >> Ogden\n> > \n> > That looks pretty normal to me.\n> > \n> > Ken\n> \n> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n> \n> Ogden\n\nYes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\nresults with EXT4 as well, I suspect, although you did not test that.\n\nRegards,\nKen\n",
"msg_date": "Wed, 17 Aug 2011 13:35:51 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 8/17/2011 1:35 PM, [email protected] wrote:\n> On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n>>\n>> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n>>\n>>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>>\n>>>> The benchmark results are here:\n>>>>\n>>>> http://malekkoheavyindustry.com/benchmark.html\n>>>>\n>>>>\n>>>> Thank you\n>>>>\n>>>> Ogden\n>>>\n>>> That looks pretty normal to me.\n>>>\n>>> Ken\n>>\n>> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n>>\n>> Ogden\n>\n> Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n> results with EXT4 as well, I suspect, although you did not test that.\n>\n> Regards,\n> Ken\n>\n\nA while back I tested ext3 and xfs myself and found xfs performs better \nfor PG. However, I also have a photos site with 100K files (split into \na small subset of directories), and xfs sucks bad on it.\n\nSo my db is on xfs, and my photos are on ext4.\n\nThe numbers between raid5 and raid10 dont really surprise me either. I \nwent from 100 Meg/sec to 230 Meg/sec going from 3 disk raid 5 to 4 disk \nraid 10. (I'm, of course, using SATA drives.... with 4 gig of ram... \nand 2 cores. Everyone with more than 8 cores and 64 gig of ram is off \nmy Christmas list! :-) )\n\n-Andy\n",
"msg_date": "Wed, 17 Aug 2011 13:48:59 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 1:48 PM, Andy Colson wrote:\n\n> On 8/17/2011 1:35 PM, [email protected] wrote:\n>> On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n>>> \n>>> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n>>> \n>>>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>>>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>>> \n>>>>> The benchmark results are here:\n>>>>> \n>>>>> http://malekkoheavyindustry.com/benchmark.html\n>>>>> \n>>>>> \n>>>>> Thank you\n>>>>> \n>>>>> Ogden\n>>>> \n>>>> That looks pretty normal to me.\n>>>> \n>>>> Ken\n>>> \n>>> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n>>> \n>>> Ogden\n>> \n>> Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n>> results with EXT4 as well, I suspect, although you did not test that.\n>> \n>> Regards,\n>> Ken\n>> \n> \n> A while back I tested ext3 and xfs myself and found xfs performs better for PG. However, I also have a photos site with 100K files (split into a small subset of directories), and xfs sucks bad on it.\n> \n> So my db is on xfs, and my photos are on ext4.\n\n\nWhat about the OS itself? I put the Debian linux sysem also on XFS but haven't played around with it too much. Is it better to put the OS itself on ext4 and the /var/lib/pgsql partition on XFS?\n\nThanks\n\nOgden",
"msg_date": "Wed, 17 Aug 2011 13:55:17 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 1:33 PM, Gary Doades wrote:\n\n> On 17/08/2011 7:26 PM, Ogden wrote:\n>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>> \n>> The benchmark results are here:\n>> \n>> http://malekkoheavyindustry.com/benchmark.html\n>> \n> The results are not completely outrageous, however you don't say what drives, how many and what RAID controller you have in the current and new systems. You might expect that performance from 10/12 disks in RAID 10 with a good controller. I would say that your current system is outrageous in that is is so slow!\n> \n> Cheers,\n> Gary.\n\n\nYes, under heavy writes the load would shoot right up which is what caused us to look at upgrading. If it is the RAID 5, it is mind boggling that it could be that much of a difference. I expected a difference, now that much. \n\nThe new system has 6 drives, 300Gb 15K SAS and I've put them into a RAID 10 configuration. The current system is ext3 with RAID 5 over 4 disks on a Perc/5i controller which has half the write cache as the new one (256 Mb vs 512Mb). \n\nOgden",
"msg_date": "Wed, 17 Aug 2011 13:56:17 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 17 Srpen 2011, 18:39, Ogden wrote:\n>> Yes, but it greatly depends on the amount of WAL and your workload. If\n>> you\n>> need to write a lot of WAL data (e.g. during bulk loading), this may\n>> significantly improve performance. It may also help when you have a\n>> write-heavy workload (a lot of clients updating records, background\n>> writer\n>> etc.) as that usually means a lot of seeking (while WAL is written\n>> sequentially).\n>\n> The database is about 200Gb so using /usr/local/pgsql/pg_xlog on a virtual\n> disk with 100Gb should not be a problem with the disk space should it?\n\nI think you've mentioned the database is on 6 drives, while the other\nvolume is on 2 drives, right? That makes the OS drive about 3x slower\n(just a rough estimate). But if the database drive is used heavily, it\nmight help to move the xlog directory to the OS disk. See how is the db\nvolume utilized and if it's fully utilized, try to move the xlog\ndirectory.\n\nThe only way to find out is to actualy try it with your workload.\n\n>> What is the size of the database? So those are the new servers? What's\n>> the difference compared to the old ones? What is the RAID controller, how\n>> much write cache is there?\n>\n> I am sorry I overlooked specifying this. The database is about 200Gb and\n> yes these are new servers which bring more power (RAM, CPU) over the last\n> one. The RAID Controller is a Perc H700 and there is 512Mb write cache.\n> The servers are Dells.\n\nOK, sounds good although I don't have much experience with this controller.\n\n>>> Currently, I have this for the current system which as 16Gb Ram:\n>>>\n>>> max_connections = 350\n>>>\n>>> work_mem = 32MB\n>>> maintenance_work_mem = 512MB\n>>> wal_buffers = 640kB\n>>\n>> Anyway if this worked fine for your workload, I don't think you need to\n>> change those settings. I'd probably bump up the wal_buffers to 16MB - it\n>> might help a bit, definitely won't hurt and it's so little memory it's\n>> not\n>> worth the effort I guess.\n>\n> So just increasing the wal_buffers is okay? I thought there would be more\n> as the memory in the system is now 4 times as much. Perhaps shared_buffers\n> too (down below).\n\nYes, I was just commenting that particular piece of config. Shared buffers\nshould be increased too.\n\n>>> # This is what I was helped with before and made reporting queries\n>>> blaze\n>>> by\n>>> seq_page_cost = 1.0\n>>> random_page_cost = 3.0\n>>> cpu_tuple_cost = 0.5\n>>> effective_cache_size = 8192MB\n>>\n>> Are you sure the cpu_tuple_cost = 0.5 is correct? That seems a bit crazy\n>> to me, as it says reading a page sequentially is just twice as expensive\n>> as processing it. This value should be abou 100x lower or something like\n>> that.\n>\n> These settings are for the old server, keep in mind. It's a 16GB machine\n> (the new one is 64Gb). The value for cpu_tuple_cost should be 0.005? How\n> are the other ones?\n\nThe default values are like this:\n\nseq_page_cost = 1.0\nrandom_page_cost = 4.0\ncpu_tuple_cost = 0.01\ncpu_index_tuple_cost = 0.005\ncpu_operator_cost = 0.0025\n\nIncreasing the cpu_tuple_cost to 0.5 makes it way too expensive I guess,\nso the database believes processing two 8kB pages is just as expensive as\nreading one from the disk. I guess this change penalizes plans that read a\nlot of pages, e.g. sequential scans (and favor index scans etc.). Maybe it\nmakes sense in your case, I'm just wondering why you set it like that.\n\n>> What are the checkpoint settings (segments, completion target). 
What\n>> about\n>> shared buffers?\n>\n>\n> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB\n> each\n> #checkpoint_timeout = 5min # range 30s-1h\n> checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0\n> - 1.0 - was 0.5\n> #checkpoint_warning = 30s # 0 disables\n\nYou need to bump checkpoint segments up, e.g. 64 or maybe even more. This\nmeans how many WAL segments will be available until a checkpoint has to\nhappen. Checkpoint is a process when dirty buffers from shared buffers are\nwritten to the disk, so it may be very I/O intensive. Each segment is\n16MB, so 3 segments is just 48MB of data, while 64 is 1GB.\n\nMore checkpoint segments result in longer recovery in case of database\ncrash (because all the segments since last checkpoint need to be applied).\nBut it's essential for good write performance.\n\nCompletion target seems fine, but I'd consider increasing the timeout too.\n\n> shared_buffers = 4096MB\n\nThe usual recommendation is about 25% of RAM for shared buffers, with 64GB\nof RAM that is 16GB. And you should increase effective_cache_size too.\n\nSee this: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nTomas\n\n",
"msg_date": "Wed, 17 Aug 2011 20:56:49 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
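
To make the advice above concrete, here is a sketch of how the relevant postgresql.conf lines might end up on the 64GB machine. The effective_cache_size and checkpoint_timeout values are illustrative assumptions rather than anything stated in the thread, and every number should be re-validated against the actual workload:

    shared_buffers = 16GB                 # ~25% of 64GB RAM
    effective_cache_size = 48GB           # assumed: roughly what is left for the OS cache
    work_mem = 32MB
    maintenance_work_mem = 512MB
    wal_buffers = 16MB

    checkpoint_segments = 64              # ~1GB of WAL between checkpoints
    checkpoint_timeout = 15min            # assumption; the default is 5min
    checkpoint_completion_target = 0.9

    seq_page_cost = 1.0
    random_page_cost = 3.0
    cpu_tuple_cost = 0.01                 # back to the default, per the comment above
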
{
"msg_contents": "On 17/08/2011 7:56 PM, Ogden wrote:\n> On Aug 17, 2011, at 1:33 PM, Gary Doades wrote:\n>\n>> On 17/08/2011 7:26 PM, Ogden wrote:\n>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>\n>>> The benchmark results are here:\n>>>\n>>> http://malekkoheavyindustry.com/benchmark.html\n>>>\n>> The results are not completely outrageous, however you don't say what drives, how many and what RAID controller you have in the current and new systems. You might expect that performance from 10/12 disks in RAID 10 with a good controller. I would say that your current system is outrageous in that is is so slow!\n>>\n>> Cheers,\n>> Gary.\n>\n> Yes, under heavy writes the load would shoot right up which is what caused us to look at upgrading. If it is the RAID 5, it is mind boggling that it could be that much of a difference. I expected a difference, now that much.\n>\n> The new system has 6 drives, 300Gb 15K SAS and I've put them into a RAID 10 configuration. The current system is ext3 with RAID 5 over 4 disks on a Perc/5i controller which has half the write cache as the new one (256 Mb vs 512Mb).\nHmm... for only 6 disks in RAID 10 I would say that the figures are a \nbit higher than I would expect. The PERC 5 controller is pretty poor in \nmy opinion, PERC 6 a lot better and the new H700's pretty good. I'm \nguessing you have a H700 in your new system.\n\nI've just got a Dell 515 with a H700 and 8 SAS in RAID 10 and I only get \naround 600 MB/s read using ext4 and Ubuntu 10.4 server.\n\nLike I say, your figures are not outrageous, just unexpectedly good :)\n\nCheers,\nGary.\n",
"msg_date": "Wed, 17 Aug 2011 20:04:41 +0100",
"msg_from": "Gary Doades <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
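
The exact bonnie++ invocation behind the linked results is not shown in the thread; for anyone trying to reproduce figures like these, one plausible invocation is sketched below. The paths, machine label and size are assumptions; -s should be roughly twice the machine's RAM so the OS cache cannot absorb the test:

    # run against the filesystem under test, as a non-root user
    bonnie++ -d /var/lib/pgsql/bonnie -s 128g -n 0 -f -m newdb01 -u postgres
    # -s 128g : test file size, ~2x RAM on a 64GB box
    # -n 0    : skip the small-file creation tests
    # -f      : skip the slow per-character phases
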
{
"msg_contents": "\nOn Aug 17, 2011, at 1:56 PM, Tomas Vondra wrote:\n\n> On 17 Srpen 2011, 18:39, Ogden wrote:\n>>> Yes, but it greatly depends on the amount of WAL and your workload. If\n>>> you\n>>> need to write a lot of WAL data (e.g. during bulk loading), this may\n>>> significantly improve performance. It may also help when you have a\n>>> write-heavy workload (a lot of clients updating records, background\n>>> writer\n>>> etc.) as that usually means a lot of seeking (while WAL is written\n>>> sequentially).\n>> \n>> The database is about 200Gb so using /usr/local/pgsql/pg_xlog on a virtual\n>> disk with 100Gb should not be a problem with the disk space should it?\n> \n> I think you've mentioned the database is on 6 drives, while the other\n> volume is on 2 drives, right? That makes the OS drive about 3x slower\n> (just a rough estimate). But if the database drive is used heavily, it\n> might help to move the xlog directory to the OS disk. See how is the db\n> volume utilized and if it's fully utilized, try to move the xlog\n> directory.\n> \n> The only way to find out is to actualy try it with your workload.\n\nThank you for your help. I just wanted to ask then, for now I should also put the xlog directory in the /var/lib/pgsql directory which is on the RAID container that is over 6 drives. You see, I wanted to put it on the container with the 2 drives because just the OS is installed on it and has the space (about 100Gb free). \n\nBut you don't think it will be a problem to put the xlog directory along with everything else on /var/lib/pgsql/data? I had seen someone suggesting separating it for their setup and it sounded like a good idea so I thought why not, but in retrospect and what you are saying with the OS drives being 3x slower, it may be okay just to put them on the 6 drives. \n\nThoughts?\n\nThank you once again for your tremendous help\n\nOgden",
"msg_date": "Wed, 17 Aug 2011 14:09:39 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Tips for a new Server"
},
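
If the decision is to try pg_xlog on the other volume, the usual way to relocate it is a symlink, done while the cluster is shut down. A minimal sketch, assuming the data directory is /var/lib/pgsql/data and the OS volume is mounted at /data2 (both paths are assumptions):

    /etc/init.d/postgresql stop            # or: pg_ctl -D /var/lib/pgsql/data stop

    mkdir /data2/pg_xlog
    chown postgres:postgres /data2/pg_xlog
    mv /var/lib/pgsql/data/pg_xlog/* /data2/pg_xlog/
    rmdir /var/lib/pgsql/data/pg_xlog
    ln -s /data2/pg_xlog /var/lib/pgsql/data/pg_xlog

    /etc/init.d/postgresql start

Moving it back later is the same steps in reverse, so it costs little to benchmark both layouts.
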
{
"msg_contents": "On 8/17/2011 1:55 PM, Ogden wrote:\n>\n> On Aug 17, 2011, at 1:48 PM, Andy Colson wrote:\n>\n>> On 8/17/2011 1:35 PM, [email protected] wrote:\n>>> On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n>>>>\n>>>> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n>>>>\n>>>>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>>>>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>>>>\n>>>>>> The benchmark results are here:\n>>>>>>\n>>>>>> http://malekkoheavyindustry.com/benchmark.html\n>>>>>>\n>>>>>>\n>>>>>> Thank you\n>>>>>>\n>>>>>> Ogden\n>>>>>\n>>>>> That looks pretty normal to me.\n>>>>>\n>>>>> Ken\n>>>>\n>>>> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n>>>>\n>>>> Ogden\n>>>\n>>> Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n>>> results with EXT4 as well, I suspect, although you did not test that.\n>>>\n>>> Regards,\n>>> Ken\n>>>\n>>\n>> A while back I tested ext3 and xfs myself and found xfs performs better for PG. However, I also have a photos site with 100K files (split into a small subset of directories), and xfs sucks bad on it.\n>>\n>> So my db is on xfs, and my photos are on ext4.\n>\n>\n> What about the OS itself? I put the Debian linux sysem also on XFS but haven't played around with it too much. Is it better to put the OS itself on ext4 and the /var/lib/pgsql partition on XFS?\n>\n> Thanks\n>\n> Ogden\n\nI doubt it matters. The OS is not going to batch delete thousands of \nfiles. Once its setup, its pretty constant. I would not worry about it.\n\n-Andy\n",
"msg_date": "Wed, 17 Aug 2011 14:13:00 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Wed, Aug 17, 2011 at 12:56 PM, Tomas Vondra <[email protected]> wrote:\n>\n> I think you've mentioned the database is on 6 drives, while the other\n> volume is on 2 drives, right? That makes the OS drive about 3x slower\n> (just a rough estimate). But if the database drive is used heavily, it\n> might help to move the xlog directory to the OS disk. See how is the db\n> volume utilized and if it's fully utilized, try to move the xlog\n> directory.\n>\n> The only way to find out is to actualy try it with your workload.\n\nThis is a very important point. I've found on most machines with\nhardware caching RAID and 8 or fewer 15k SCSI drives it's just as\nfast to put it all on one big RAID-10 and if necessary partition it to\nput the pg_xlog on its own file system. After that depending on the\nworkload you might need a LOT of drives in the pg_xlog dir or just a\npair. Under normal ops many dbs will use only a tiny % of a\ndedicated pg_xlog. Then something like a site indexer starts to run,\nand writing heavily to the db, and the usage shoots to 100% and it's\nthe bottleneck.\n",
"msg_date": "Wed, 17 Aug 2011 13:14:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
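
A simple way to answer the "is the db volume fully utilized" question before and after moving pg_xlog is iostat from the sysstat package, sampled while the write-heavy job (bulk load, site indexer) is running:

    iostat -x 5
    # watch %util and await for the database volume vs. the OS volume;
    # a device pinned near 100% util while the other sits mostly idle is the
    # situation where a separate pg_xlog volume (or more spindles) pays off
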
{
"msg_contents": "On Wed, Aug 17, 2011 at 1:55 PM, Ogden <[email protected]> wrote:\n\n>\n>\n> What about the OS itself? I put the Debian linux sysem also on XFS but\n> haven't played around with it too much. Is it better to put the OS itself on\n> ext4 and the /var/lib/pgsql partition on XFS?\n>\n>\nWe've always put the OS on whatever default filesystem it uses, and then put\nPGDATA on a RAID 10/XFS and PGXLOG on RAID 1/XFS (and for our larger\ninstallations, we setup another RAID 10/XFS for heavily accessed indexes or\ntables). If you have a battery-backed cache on your controller (and it's\nbeen tested to work), you can increase performance by mounting the XFS\npartitions with \"nobarrier\"...just make sure your battery backup works.\n\nI don't know how current this information is for 9.x (we're still on 8.4),\nbut there is (used to be?) a threshold above which more shared_buffers\ndidn't help. The numbers vary, but somewhere between 8 and 16 GB is\ntypically quoted. We set ours to 25% RAM, but no more than 12 GB (even for\nour machines with 128+ GB of RAM) because that seems to be a breaking point\nfor our workload.\n\nOf course, no advice will take the place of testing with your workload, so\nbe sure to test =)\n\nOn Wed, Aug 17, 2011 at 1:55 PM, Ogden <[email protected]> wrote:\n\nWhat about the OS itself? I put the Debian linux sysem also on XFS but haven't played around with it too much. Is it better to put the OS itself on ext4 and the /var/lib/pgsql partition on XFS?\nWe've always put the OS on whatever default filesystem it uses, and then put PGDATA on a RAID 10/XFS and PGXLOG on RAID 1/XFS (and for our larger installations, we setup another RAID 10/XFS for heavily accessed indexes or tables). If you have a battery-backed cache on your controller (and it's been tested to work), you can increase performance by mounting the XFS partitions with \"nobarrier\"...just make sure your battery backup works.\n\nI don't know how current this information is for 9.x (we're still on 8.4), but there is (used to be?) a threshold above which more shared_buffers didn't help. The numbers vary, but somewhere between 8 and 16 GB is typically quoted. We set ours to 25% RAM, but no more than 12 GB (even for our machines with 128+ GB of RAM) because that seems to be a breaking point for our workload.\nOf course, no advice will take the place of testing with your workload, so be sure to test =)",
"msg_date": "Wed, 17 Aug 2011 14:16:11 -0500",
"msg_from": "J Sisson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
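
For reference, the "nobarrier" suggestion above translates into an fstab entry like the one below (device name and mount point are assumptions). As the poster stresses, this is only safe with a working, monitored battery-backed cache, and with the drives' own write caches disabled:

    /dev/sdb1   /var/lib/pgsql   xfs   noatime,nobarrier   0 0

    # apply without a reboot (the database must be stopped so the volume can be unmounted)
    umount /var/lib/pgsql && mount /var/lib/pgsql
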
{
"msg_contents": "\nOn Aug 17, 2011, at 2:14 PM, Scott Marlowe wrote:\n\n> On Wed, Aug 17, 2011 at 12:56 PM, Tomas Vondra <[email protected]> wrote:\n>> \n>> I think you've mentioned the database is on 6 drives, while the other\n>> volume is on 2 drives, right? That makes the OS drive about 3x slower\n>> (just a rough estimate). But if the database drive is used heavily, it\n>> might help to move the xlog directory to the OS disk. See how is the db\n>> volume utilized and if it's fully utilized, try to move the xlog\n>> directory.\n>> \n>> The only way to find out is to actualy try it with your workload.\n> \n> This is a very important point. I've found on most machines with\n> hardware caching RAID and 8 or fewer 15k SCSI drives it's just as\n> fast to put it all on one big RAID-10 and if necessary partition it to\n> put the pg_xlog on its own file system. After that depending on the\n> workload you might need a LOT of drives in the pg_xlog dir or just a\n> pair. Under normal ops many dbs will use only a tiny % of a\n> dedicated pg_xlog. Then something like a site indexer starts to run,\n> and writing heavily to the db, and the usage shoots to 100% and it's\n> the bottleneck.\n\nI suppose this is my confusion. Or rather I am curious about this. On my current production database the pg_xlog directory is 8Gb (our total database is 200Gb). Does this warrant a totally separate setup (and hardware) than PGDATA?",
"msg_date": "Wed, 17 Aug 2011 14:22:24 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Tips for a new Server"
},
{
"msg_contents": "On 17 Srpen 2011, 21:22, Ogden wrote:\n>> This is a very important point. I've found on most machines with\n>> hardware caching RAID and 8 or fewer 15k SCSI drives it's just as\n>> fast to put it all on one big RAID-10 and if necessary partition it to\n>> put the pg_xlog on its own file system. After that depending on the\n>> workload you might need a LOT of drives in the pg_xlog dir or just a\n>> pair. Under normal ops many dbs will use only a tiny % of a\n>> dedicated pg_xlog. Then something like a site indexer starts to run,\n>> and writing heavily to the db, and the usage shoots to 100% and it's\n>> the bottleneck.\n>\n> I suppose this is my confusion. Or rather I am curious about this. On my\n> current production database the pg_xlog directory is 8Gb (our total\n> database is 200Gb). Does this warrant a totally separate setup (and\n> hardware) than PGDATA?\n\nThis is not about database size, it's about the workload - the way you're\nusing your database. Even a small database may produce a lot of WAL\nsegments, if the workload is write-heavy. So it's impossible to recommend\nsomething except to try that on your own.\n\nTomas\n\n",
"msg_date": "Wed, 17 Aug 2011 21:44:10 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Tips for a new Server"
},
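
One rough way to see how much WAL the workload actually generates - and therefore whether a dedicated xlog volume is worth it - is to sample the current WAL position over a busy interval. pg_current_xlog_location() is available on 8.4; the byte arithmetic between two samples has to be done by hand there (9.2 and later add pg_xlog_location_diff()):

    psql -U postgres -Atc "SELECT now(), pg_current_xlog_location();"
    sleep 3600
    psql -U postgres -Atc "SELECT now(), pg_current_xlog_location();"
    # the difference between the two positions is the WAL volume generated
    # during that hour of normal (or peak) activity
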
{
"msg_contents": "On Aug 17, 2011, at 1:35 PM, [email protected] wrote:\n\n> On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n>> \n>> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n>> \n>>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>> \n>>>> The benchmark results are here:\n>>>> \n>>>> http://malekkoheavyindustry.com/benchmark.html\n>>>> \n>>>> \n>>>> Thank you\n>>>> \n>>>> Ogden\n>>> \n>>> That looks pretty normal to me.\n>>> \n>>> Ken\n>> \n>> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n>> \n>> Ogden\n> \n> Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n> results with EXT4 as well, I suspect, although you did not test that.\n\n\ni tested ext4 and the results did not seem to be that close to XFS. Especially when looking at the Block K/sec for the Sequential Output. \n\nhttp://malekkoheavyindustry.com/benchmark.html\n\nSo XFS would be best in this case?\n\nThank you\n\nOgden\nOn Aug 17, 2011, at 1:35 PM, [email protected] wrote:On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:On Aug 17, 2011, at 1:31 PM, [email protected] wrote:On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?The benchmark results are here:http://malekkoheavyindustry.com/benchmark.htmlThank youOgdenThat looks pretty normal to me.KenBut such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?OgdenYes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similarresults with EXT4 as well, I suspect, although you did not test that.i tested ext4 and the results did not seem to be that close to XFS. Especially when looking at the Block K/sec for the Sequential Output. http://malekkoheavyindustry.com/benchmark.htmlSo XFS would be best in this case?Thank youOgden",
"msg_date": "Wed, 17 Aug 2011 15:40:03 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Wed, Aug 17, 2011 at 03:40:03PM -0500, Ogden wrote:\n> \n> On Aug 17, 2011, at 1:35 PM, [email protected] wrote:\n> \n> > On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n> >> \n> >> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n> >> \n> >>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n> >>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n> >>>> \n> >>>> The benchmark results are here:\n> >>>> \n> >>>> http://malekkoheavyindustry.com/benchmark.html\n> >>>> \n> >>>> \n> >>>> Thank you\n> >>>> \n> >>>> Ogden\n> >>> \n> >>> That looks pretty normal to me.\n> >>> \n> >>> Ken\n> >> \n> >> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n> >> \n> >> Ogden\n> > \n> > Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n> > results with EXT4 as well, I suspect, although you did not test that.\n> \n> \n> i tested ext4 and the results did not seem to be that close to XFS. Especially when looking at the Block K/sec for the Sequential Output. \n> \n> http://malekkoheavyindustry.com/benchmark.html\n> \n> So XFS would be best in this case?\n> \n> Thank you\n> \n> Ogden\n\nIt appears so for at least the Bonnie++ benchmark. I would really try to benchmark\nyour actual DB on both EXT4 and XFS because some of the comparative benchmarks between\nthe two give the win to EXT4 for INSERT/UPDATE database usage with PostgreSQL. Only\nyour application will know for sure....:)\n\nKen\n",
"msg_date": "Wed, 17 Aug 2011 15:56:40 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 3:56 PM, [email protected] wrote:\n\n> On Wed, Aug 17, 2011 at 03:40:03PM -0500, Ogden wrote:\n>> \n>> On Aug 17, 2011, at 1:35 PM, [email protected] wrote:\n>> \n>>> On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:\n>>>> \n>>>> On Aug 17, 2011, at 1:31 PM, [email protected] wrote:\n>>>> \n>>>>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:\n>>>>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>>>>>> \n>>>>>> The benchmark results are here:\n>>>>>> \n>>>>>> http://malekkoheavyindustry.com/benchmark.html\n>>>>>> \n>>>>>> \n>>>>>> Thank you\n>>>>>> \n>>>>>> Ogden\n>>>>> \n>>>>> That looks pretty normal to me.\n>>>>> \n>>>>> Ken\n>>>> \n>>>> But such a jump from the current db01 system to this? Over 20 times difference from the current system to the new one with XFS. Is that much of a jump normal?\n>>>> \n>>>> Ogden\n>>> \n>>> Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would get similar\n>>> results with EXT4 as well, I suspect, although you did not test that.\n>> \n>> \n>> i tested ext4 and the results did not seem to be that close to XFS. Especially when looking at the Block K/sec for the Sequential Output. \n>> \n>> http://malekkoheavyindustry.com/benchmark.html\n>> \n>> So XFS would be best in this case?\n>> \n>> Thank you\n>> \n>> Ogden\n> \n> It appears so for at least the Bonnie++ benchmark. I would really try to benchmark\n> your actual DB on both EXT4 and XFS because some of the comparative benchmarks between\n> the two give the win to EXT4 for INSERT/UPDATE database usage with PostgreSQL. Only\n> your application will know for sure....:)\n> \n> Ken\n\n\nWhat are some good methods that one can use to benchmark PostgreSQL under heavy loads? Ie. to emulate heavy writes? Are there any existing scripts and what not?\n\nThank you\n\nAfra",
"msg_date": "Wed, 17 Aug 2011 16:01:53 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
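
For the "how do I emulate heavy writes" question, the stock answer is pgbench, which ships in contrib and drives a TPC-B-like read/write load. A minimal sketch - the scale factor and client counts are arbitrary starting points, and -j needs the 9.0+ pgbench (drop it on 8.4):

    createdb pgbench_test
    pgbench -i -s 1000 pgbench_test            # initialize roughly 15GB of test data
    pgbench -c 32 -j 4 -T 900 pgbench_test     # 32 clients, 15-minute read/write run
    pgbench -S -c 32 -j 4 -T 900 pgbench_test  # read-only variant for comparison

Running the same pgbench workload against ext4 and XFS (and with/without nobarrier) answers the filesystem question for this particular setup far better than bonnie++ alone.
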
{
"msg_contents": "On 08/17/2011 02:26 PM, Ogden wrote:\n> I am using bonnie++ to benchmark our current Postgres system (on RAID \n> 5) with the new one we have, which I have configured with RAID 10. The \n> drives are the same (SAS 15K). I tried the new system with ext3 and \n> then XFS but the results seem really outrageous as compared to the \n> current system, or am I reading things wrong?\n>\n> The benchmark results are here:\n> http://malekkoheavyindustry.com/benchmark.html\n\nCongratulations--you're now qualified to be a member of the \"RAID5 \nsucks\" club. You can find other members at \nhttp://www.miracleas.com/BAARF/BAARF2.html Reasonable read speeds and \njust terrible write ones are expected if that's on your old hardware. \nYour new results are what I would expect from the hardware you've \ndescribed.\n\nThe only thing that looks weird are your ext4 \"Sequential Output - \nBlock\" results. They should be between the ext3 and the XFS results, \nnot far lower than either. Normally this only comes from using a bad \nset of mount options. With a battery-backed write cache, you'd want to \nuse \"nobarrier\" for example; if you didn't do that, that can crush \noutput rates.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Aug 2011 17:17:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Greg Smith\n> Sent: Wednesday, August 17, 2011 3:18 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++\n> \n> On 08/17/2011 02:26 PM, Ogden wrote:\n> > I am using bonnie++ to benchmark our current Postgres system (on RAID\n> > 5) with the new one we have, which I have configured with RAID 10.\n> The\n> > drives are the same (SAS 15K). I tried the new system with ext3 and\n> > then XFS but the results seem really outrageous as compared to the\n> > current system, or am I reading things wrong?\n> >\n> > The benchmark results are here:\n> > http://malekkoheavyindustry.com/benchmark.html\n> \n> Congratulations--you're now qualified to be a member of the \"RAID5\n> sucks\" club. You can find other members at\n> http://www.miracleas.com/BAARF/BAARF2.html Reasonable read speeds and\n> just terrible write ones are expected if that's on your old hardware.\n> Your new results are what I would expect from the hardware you've\n> described.\n> \n> The only thing that looks weird are your ext4 \"Sequential Output -\n> Block\" results. They should be between the ext3 and the XFS results,\n> not far lower than either. Normally this only comes from using a bad\n> set of mount options. With a battery-backed write cache, you'd want to\n> use \"nobarrier\" for example; if you didn't do that, that can crush\n> output rates.\n> \n\nTo clarify maybe for those new at using non-default mount options.\n\nWith XFS the mount option is nobarrier. With ext4 I think it is barrier=0\n\nSomeone please correct me if I am misleading people or otherwise mistaken.\n\n-mark\n\n",
"msg_date": "Wed, 17 Aug 2011 18:35:29 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 08/17/2011 08:35 PM, mark wrote:\n> With XFS the mount option is nobarrier. With ext4 I think it is barrier=0\n\nhttp://www.mjmwired.net/kernel/Documentation/filesystems/ext4.txt\n\next4 supports both; \"nobarrier\" and \"barrier=0\" mean the same thing. I \ntend to use \"nobarrier\" just because I'm used to that name on XFS systems.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Aug 2011 21:08:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 17, 2011, at 4:16 PM, Greg Smith wrote:\n\n> On 08/17/2011 02:26 PM, Ogden wrote:\n>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>> \n>> The benchmark results are here:\n>> http://malekkoheavyindustry.com/benchmark.html\n>> \n> \n> Congratulations--you're now qualified to be a member of the \"RAID5 sucks\" club. You can find other members at http://www.miracleas.com/BAARF/BAARF2.html Reasonable read speeds and just terrible write ones are expected if that's on your old hardware. Your new results are what I would expect from the hardware you've described.\n> \n> The only thing that looks weird are your ext4 \"Sequential Output - Block\" results. They should be between the ext3 and the XFS results, not far lower than either. Normally this only comes from using a bad set of mount options. With a battery-backed write cache, you'd want to use \"nobarrier\" for example; if you didn't do that, that can crush output rates.\n\n\nIsn't this very dangerous? I have the Dell PERC H700 card - I see that it has 512Mb Cache. Is this the same thing and good enough to switch to nobarrier? Just worried if a sudden power shut down, then data can be lost on this option. \n\nI did not do that with XFS and it did quite well - I know it's up to my app and more testing, but in your experience, what is usually a good filesystem to use? I keep reading conflicting things..\n\nThank you\n\nOgden\n\n\n",
"msg_date": "Wed, 17 Aug 2011 22:48:09 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 18/08/2011 11:48 AM, Ogden wrote:\n> Isn't this very dangerous? I have the Dell PERC H700 card - I see that it has 512Mb Cache. Is this the same thing and good enough to switch to nobarrier? Just worried if a sudden power shut down, then data can be lost on this option.\n>\n>\nYeah, I'm confused by that too. Shouldn't a write barrier flush data to \npersistent storage - in this case, the RAID card's battery backed cache? \nWhy would it force a RAID controller cache flush to disk, too?\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 18 Aug 2011 13:35:56 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 18/08/11 17:35, Craig Ringer wrote:\n> On 18/08/2011 11:48 AM, Ogden wrote:\n>> Isn't this very dangerous? I have the Dell PERC H700 card - I see \n>> that it has 512Mb Cache. Is this the same thing and good enough to \n>> switch to nobarrier? Just worried if a sudden power shut down, then \n>> data can be lost on this option.\n>>\n>>\n> Yeah, I'm confused by that too. Shouldn't a write barrier flush data \n> to persistent storage - in this case, the RAID card's battery backed \n> cache? Why would it force a RAID controller cache flush to disk, too?\n>\n>\n\nIf the card's cache has a battery, then the cache is preserved in the \nadvent of crash/power loss etc - provided it has enough charge, so \nsetting 'writeback' property on arrays is safe. The PERC/SERVERRAID \ncards I'm familiar (LSI Megaraid rebranded models) all switch to \nwrite-though mode if they detect the battery is dangerously discharged \nso this is not normally a problem (but commit/fsync performance will \nfall off a cliff when this happens)!\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 18 Aug 2011 19:07:51 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Thu, Aug 18, 2011 at 1:35 AM, Craig Ringer <[email protected]> wrote:\n> On 18/08/2011 11:48 AM, Ogden wrote:\n>>\n>> Isn't this very dangerous? I have the Dell PERC H700 card - I see that it\n>> has 512Mb Cache. Is this the same thing and good enough to switch to\n>> nobarrier? Just worried if a sudden power shut down, then data can be lost\n>> on this option.\n>>\n>>\n> Yeah, I'm confused by that too. Shouldn't a write barrier flush data to\n> persistent storage - in this case, the RAID card's battery backed cache? Why\n> would it force a RAID controller cache flush to disk, too?\n\nThe \"barrier\" is the linux fs/block way of saying \"these writes need\nto be on persistent media before I can depend on them\". On typical\nspinning media disks, that means out of the disk cache (which is not\npersistent) and on platters. The way it assures that the writes are\non \"persistant media\" is with a \"flush cache\" type of command. The\n\"flush cache\" is a close approximation to \"make sure it's persistent\".\n\nIf your cache is battery backed, it is now persistent, and there is no\nneed to \"flush cache\", hence the nobarrier option if you believe your\ncache is persistent.\n\nNow, make sure that even though your raid cache is persistent, your\ndisks have cache in write-through mode, cause it would suck for your\nraid cache to \"work\", but believe the data is safely on disk and only\nfind out that it was in the disks (small) cache, and you're raid is\nout of sync after an outage because of that... I believe most raid\ncards will handle that correctly for you automatically.\n\na.\n\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Thu, 18 Aug 2011 09:26:17 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "\nOn Aug 18, 2011, at 2:07 AM, Mark Kirkwood wrote:\n\n> On 18/08/11 17:35, Craig Ringer wrote:\n>> On 18/08/2011 11:48 AM, Ogden wrote:\n>>> Isn't this very dangerous? I have the Dell PERC H700 card - I see that it has 512Mb Cache. Is this the same thing and good enough to switch to nobarrier? Just worried if a sudden power shut down, then data can be lost on this option.\n>>> \n>>> \n>> Yeah, I'm confused by that too. Shouldn't a write barrier flush data to persistent storage - in this case, the RAID card's battery backed cache? Why would it force a RAID controller cache flush to disk, too?\n>> \n>> \n> \n> If the card's cache has a battery, then the cache is preserved in the advent of crash/power loss etc - provided it has enough charge, so setting 'writeback' property on arrays is safe. The PERC/SERVERRAID cards I'm familiar (LSI Megaraid rebranded models) all switch to write-though mode if they detect the battery is dangerously discharged so this is not normally a problem (but commit/fsync performance will fall off a cliff when this happens)!\n> \n> Cheers\n> \n> Mark\n\n\nSo a setting such as this:\n\nDevice Name : /dev/sdb\nType : SAS\nRead Policy : No Read Ahead\nWrite Policy : Write Back\nCache Policy : Not Applicable\nStripe Element Size : 64 KB\nDisk Cache Policy : Enabled\n\n\nIs sufficient to enable nobarrier then with these settings?\n\nThank you\n\nOgden",
"msg_date": "Thu, 18 Aug 2011 09:09:30 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Aug 17, 2011, at 4:17 PM, Greg Smith wrote:\n\n> On 08/17/2011 02:26 PM, Ogden wrote:\n>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?\n>> \n>> The benchmark results are here:\n>> http://malekkoheavyindustry.com/benchmark.html\n> \n> Congratulations--you're now qualified to be a member of the \"RAID5 sucks\" club. You can find other members at http://www.miracleas.com/BAARF/BAARF2.html Reasonable read speeds and just terrible write ones are expected if that's on your old hardware. Your new results are what I would expect from the hardware you've described.\n> \n> The only thing that looks weird are your ext4 \"Sequential Output - Block\" results. They should be between the ext3 and the XFS results, not far lower than either. Normally this only comes from using a bad set of mount options. With a battery-backed write cache, you'd want to use \"nobarrier\" for example; if you didn't do that, that can crush output rates.\n\n\nI have mounted the ext4 system with the nobarrier option:\n\n/dev/sdb1 on /var/lib/pgsql type ext4 (rw,noatime,data=writeback,barrier=0,nobh,errors=remount-ro)\n\nYet the results show absolutely a decrease in performance in the ext4 \"Sequential Output - Block\" results:\n\nhttp://malekkoheavyindustry.com/benchmark.html\n\nHowever, the Random seeks is better, even more so than XFS...\n\nAny thoughts as to why this is occurring?\n\nOgden\n\n\n\nOn Aug 17, 2011, at 4:17 PM, Greg Smith wrote:On 08/17/2011 02:26 PM, Ogden wrote:I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?The benchmark results are here:http://malekkoheavyindustry.com/benchmark.htmlCongratulations--you're now qualified to be a member of the \"RAID5 sucks\" club. You can find other members at http://www.miracleas.com/BAARF/BAARF2.html Reasonable read speeds and just terrible write ones are expected if that's on your old hardware. Your new results are what I would expect from the hardware you've described.The only thing that looks weird are your ext4 \"Sequential Output - Block\" results. They should be between the ext3 and the XFS results, not far lower than either. Normally this only comes from using a bad set of mount options. With a battery-backed write cache, you'd want to use \"nobarrier\" for example; if you didn't do that, that can crush output rates.I have mounted the ext4 system with the nobarrier option:/dev/sdb1 on /var/lib/pgsql type ext4 (rw,noatime,data=writeback,barrier=0,nobh,errors=remount-ro)Yet the results show absolutely a decrease in performance in the ext4 \"Sequential Output - Block\" results:http://malekkoheavyindustry.com/benchmark.htmlHowever, the Random seeks is better, even more so than XFS...Any thoughts as to why this is occurring?Ogden",
"msg_date": "Thu, 18 Aug 2011 12:31:28 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 19/08/11 02:09, Ogden wrote:\n> On Aug 18, 2011, at 2:07 AM, Mark Kirkwood wrote:\n>\n>> On 18/08/11 17:35, Craig Ringer wrote:\n>>> On 18/08/2011 11:48 AM, Ogden wrote:\n>>>> Isn't this very dangerous? I have the Dell PERC H700 card - I see that it has 512Mb Cache. Is this the same thing and good enough to switch to nobarrier? Just worried if a sudden power shut down, then data can be lost on this option.\n>>>>\n>>>>\n>>> Yeah, I'm confused by that too. Shouldn't a write barrier flush data to persistent storage - in this case, the RAID card's battery backed cache? Why would it force a RAID controller cache flush to disk, too?\n>>>\n>>>\n>> If the card's cache has a battery, then the cache is preserved in the advent of crash/power loss etc - provided it has enough charge, so setting 'writeback' property on arrays is safe. The PERC/SERVERRAID cards I'm familiar (LSI Megaraid rebranded models) all switch to write-though mode if they detect the battery is dangerously discharged so this is not normally a problem (but commit/fsync performance will fall off a cliff when this happens)!\n>>\n>> Cheers\n>>\n>> Mark\n>\n> So a setting such as this:\n>\n> Device Name : /dev/sdb\n> Type : SAS\n> Read Policy : No Read Ahead\n> Write Policy : Write Back\n> Cache Policy : Not Applicable\n> Stripe Element Size : 64 KB\n> Disk Cache Policy : Enabled\n>\n>\n> Is sufficient to enable nobarrier then with these settings?\n>\n\n\nHmm - that output looks different from the cards I'm familiar with. I'd \nwant to see the manual entries for \"Cache Policy=Not Applicable\" and \n\"Disk Cache Policy=Enabled\" to understand what the settings actually \nmean. Assuming \"Disk Cache Policy=Enabled\" means what I think it does \n(i.e writes are cached in the physical drives cache), this setting seems \nwrong if your card has on board cache + battery, you would want to only \ncache 'em in the *card's* cache (too many caches to keep straight in \none's head, lol).\n\nCheers\n\nMark\n",
"msg_date": "Fri, 19 Aug 2011 12:52:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On 19/08/11 12:52, Mark Kirkwood wrote:\n> On 19/08/11 02:09, Ogden wrote:\n>> On Aug 18, 2011, at 2:07 AM, Mark Kirkwood wrote:\n>>\n>>> On 18/08/11 17:35, Craig Ringer wrote:\n>>>> On 18/08/2011 11:48 AM, Ogden wrote:\n>>>>> Isn't this very dangerous? I have the Dell PERC H700 card - I see \n>>>>> that it has 512Mb Cache. Is this the same thing and good enough to \n>>>>> switch to nobarrier? Just worried if a sudden power shut down, \n>>>>> then data can be lost on this option.\n>>>>>\n>>>>>\n>>>> Yeah, I'm confused by that too. Shouldn't a write barrier flush \n>>>> data to persistent storage - in this case, the RAID card's battery \n>>>> backed cache? Why would it force a RAID controller cache flush to \n>>>> disk, too?\n>>>>\n>>>>\n>>> If the card's cache has a battery, then the cache is preserved in \n>>> the advent of crash/power loss etc - provided it has enough charge, \n>>> so setting 'writeback' property on arrays is safe. The \n>>> PERC/SERVERRAID cards I'm familiar (LSI Megaraid rebranded models) \n>>> all switch to write-though mode if they detect the battery is \n>>> dangerously discharged so this is not normally a problem (but \n>>> commit/fsync performance will fall off a cliff when this happens)!\n>>>\n>>> Cheers\n>>>\n>>> Mark\n>>\n>> So a setting such as this:\n>>\n>> Device Name : /dev/sdb\n>> Type : SAS\n>> Read Policy : No Read Ahead\n>> Write Policy : Write Back\n>> Cache Policy : Not Applicable\n>> Stripe Element Size : 64 KB\n>> Disk Cache Policy : Enabled\n>>\n>>\n>> Is sufficient to enable nobarrier then with these settings?\n>>\n>\n>\n> Hmm - that output looks different from the cards I'm familiar with. \n> I'd want to see the manual entries for \"Cache Policy=Not Applicable\" \n> and \"Disk Cache Policy=Enabled\" to understand what the settings \n> actually mean. Assuming \"Disk Cache Policy=Enabled\" means what I think \n> it does (i.e writes are cached in the physical drives cache), this \n> setting seems wrong if your card has on board cache + battery, you \n> would want to only cache 'em in the *card's* cache (too many caches \n> to keep straight in one's head, lol).\n>\n\nFWIW - here's what our ServerRaid (M5015) output looks like for a RAID 1 \narray configured with writeback, reads not cached on the card's memory, \nphysical disk caches disabled:\n\n$ MegaCli64 -LDInfo -L0 -a0\n\nAdapter 0 -- Virtual Drive Information:\nVirtual Drive: 0 (Target Id: 0)\nName :\nRAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0\nSize : 67.054 GB\nState : Optimal\nStrip Size : 64 KB\nNumber Of Drives : 2\nSpan Depth : 1\nDefault Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache \nif Bad BBU\nCurrent Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache \nif Bad BBU\nAccess Policy : Read/Write\nDisk Cache Policy : Disabled\nEncryption Type : None\n\n",
"msg_date": "Fri, 19 Aug 2011 12:58:58 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
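
For completeness, the card-side settings being discussed can be inspected (and the drive caches disabled) with MegaCli on LSI-based cards such as the H700. The -LDInfo form matches the output quoted above; the other flag spellings are from memory and vary between MegaCli builds, so treat them as assumptions and check MegaCli64 -h if they are rejected:

    MegaCli64 -LDInfo -Lall -aAll                   # virtual drive cache policy, as quoted above
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll        # battery state / charge
    MegaCli64 -LDSetProp -DisDskCache -Lall -aAll   # turn off the physical drives' own write caches
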
{
"msg_contents": "apologies for such a late response to this thread, but there is domething \nI think is _really_ dangerous here.\n\nOn Thu, 18 Aug 2011, Aidan Van Dyk wrote:\n\n> On Thu, Aug 18, 2011 at 1:35 AM, Craig Ringer <[email protected]> wrote:\n>> On 18/08/2011 11:48 AM, Ogden wrote:\n>>>\n>>> Isn't this very dangerous? I have the Dell PERC H700 card - I see that it\n>>> has 512Mb Cache. Is this the same thing and good enough to switch to\n>>> nobarrier? Just worried if a sudden power shut down, then data can be lost\n>>> on this option.\n>>>\n>>>\n>> Yeah, I'm confused by that too. Shouldn't a write barrier flush data to\n>> persistent storage - in this case, the RAID card's battery backed cache? Why\n>> would it force a RAID controller cache flush to disk, too?\n>\n> The \"barrier\" is the linux fs/block way of saying \"these writes need\n> to be on persistent media before I can depend on them\". On typical\n> spinning media disks, that means out of the disk cache (which is not\n> persistent) and on platters. The way it assures that the writes are\n> on \"persistant media\" is with a \"flush cache\" type of command. The\n> \"flush cache\" is a close approximation to \"make sure it's persistent\".\n>\n> If your cache is battery backed, it is now persistent, and there is no\n> need to \"flush cache\", hence the nobarrier option if you believe your\n> cache is persistent.\n>\n> Now, make sure that even though your raid cache is persistent, your\n> disks have cache in write-through mode, cause it would suck for your\n> raid cache to \"work\", but believe the data is safely on disk and only\n> find out that it was in the disks (small) cache, and you're raid is\n> out of sync after an outage because of that... I believe most raid\n> cards will handle that correctly for you automatically.\n\nif you don't have barriers enabled, the data may not get written out of \nmain memory to the battery backed memory on the card as the OS has no \nreason to do the write out of the OS buffers now rather than later.\n\nEvery raid card I have seen has ignored the 'flush cache' type of command \nif it has a battery and that battery is good, so you leave the barriers \nenabled and the card still gives you great performance.\n\nDavid Lang\n",
"msg_date": "Mon, 12 Sep 2011 15:57:48 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 6:57 PM, <[email protected]> wrote:\n\n>> The \"barrier\" is the linux fs/block way of saying \"these writes need\n>> to be on persistent media before I can depend on them\". On typical\n>> spinning media disks, that means out of the disk cache (which is not\n>> persistent) and on platters. The way it assures that the writes are\n>> on \"persistant media\" is with a \"flush cache\" type of command. The\n>> \"flush cache\" is a close approximation to \"make sure it's persistent\".\n>>\n>> If your cache is battery backed, it is now persistent, and there is no\n>> need to \"flush cache\", hence the nobarrier option if you believe your\n>> cache is persistent.\n>>\n>> Now, make sure that even though your raid cache is persistent, your\n>> disks have cache in write-through mode, cause it would suck for your\n>> raid cache to \"work\", but believe the data is safely on disk and only\n>> find out that it was in the disks (small) cache, and you're raid is\n>> out of sync after an outage because of that... I believe most raid\n>> cards will handle that correctly for you automatically.\n>\n> if you don't have barriers enabled, the data may not get written out of main\n> memory to the battery backed memory on the card as the OS has no reason to\n> do the write out of the OS buffers now rather than later.\n\nIt's not quite so simple. The \"sync\" calls (pick your flavour) is\nwhat tells the OS buffers they have to go out. The syscall (on a\nworking FS) won't return until the write and data has reached the\n\"device\" safely, and is considered persistent.\n\nBut in linux, a barrier is actually a \"synchronization\" point, not\njust a \"flush cache\"... It's a \"guarantee everything up to now is\npersistent, I'm going to start counting on it\". But depending on your\ncard, drivers and yes, kernel version, that \"barrier\" is sometimes a\n\"drain/block I/O queue, issue cache flush, wait, write specific data,\nflush, wait, open I/O queue\". 
The double flush is because it needs to\nguarantee everything previous is good before it writes the \"critical\"\npiece, and then needs to guarantee that too.\n\nNow, on good raid hardware it's not usually that bad.\n\nAnd then, just to confuse people more, LVM up until 2.6.29 (so that\nincludes all those RHEL5/CentOS5 installs out there which default to\nusing LVM) didn't handle barriers, it just sort of threw them out as\nit came across them, meaning that you got the performance of\nnobarrier, even if you thought you were using barriers on poor raid\nhardware.\n\n> Every raid card I have seen has ignored the 'flush cache' type of command if\n> it has a battery and that battery is good, so you leave the barriers enabled\n> and the card still gives you great performance.\n\nXFS FAQ goes over much of it, starting at Q24:\n http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache_on_journaled_filesystems.3F\n\nSo, for pure performance, on a battery-backed controller, nobarrier is\nthe recommended *performance* setting.\n\nBut, to throw a wrench into the plan, what happens when during normal\nbattery tests, your raid controller decides the battery is failing...\nof course, it's going to start screaming and send all your monitoring\nalarms off (you're monitoring that, right?), but have you thought to\nmake sure that your FS is remounted with barriers at the first sign of\nbattery trouble?\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Mon, 12 Sep 2011 20:05:58 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
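
The "remount with barriers at the first sign of battery trouble" idea can be wired into monitoring with something like the sketch below. Everything here is an assumption used to illustrate the shape of the check: the MegaCli output format differs between firmware revisions, and whether a plain remount actually flips the barrier option depends on the kernel and filesystem version, so test it before trusting it.

    #!/bin/sh
    # cron this every few minutes; /var/lib/pgsql is assumed to be the XFS data volume
    STATE=$(MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll | awk -F: '/Battery State/ {gsub(/ /, "", $2); print $2}')
    if [ "$STATE" != "Optimal" ]; then
        mount -o remount,barrier /var/lib/pgsql
        echo "BBU state $STATE: barriers re-enabled on /var/lib/pgsql" | mail -s "RAID BBU alert" dba@example.com
    fi
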
{
"msg_contents": "On Mon, 12 Sep 2011, Aidan Van Dyk wrote:\n\n> On Mon, Sep 12, 2011 at 6:57 PM, <[email protected]> wrote:\n>\n>>> The \"barrier\" is the linux fs/block way of saying \"these writes need\n>>> to be on persistent media before I can depend on them\". �On typical\n>>> spinning media disks, that means out of the disk cache (which is not\n>>> persistent) and on platters. �The way it assures that the writes are\n>>> on \"persistant media\" is with a \"flush cache\" type of command. �The\n>>> \"flush cache\" is a close approximation to \"make sure it's persistent\".\n>>>\n>>> If your cache is battery backed, it is now persistent, and there is no\n>>> need to \"flush cache\", hence the nobarrier option if you believe your\n>>> cache is persistent.\n>>>\n>>> Now, make sure that even though your raid cache is persistent, your\n>>> disks have cache in write-through mode, cause it would suck for your\n>>> raid cache to \"work\", but believe the data is safely on disk and only\n>>> find out that it was in the disks (small) cache, and you're raid is\n>>> out of sync after an outage because of that... �I believe most raid\n>>> cards will handle that correctly for you automatically.\n>>\n>> if you don't have barriers enabled, the data may not get written out of main\n>> memory to the battery backed memory on the card as the OS has no reason to\n>> do the write out of the OS buffers now rather than later.\n>\n> It's not quite so simple. The \"sync\" calls (pick your flavour) is\n> what tells the OS buffers they have to go out. The syscall (on a\n> working FS) won't return until the write and data has reached the\n> \"device\" safely, and is considered persistent.\n>\n> But in linux, a barrier is actually a \"synchronization\" point, not\n> just a \"flush cache\"... It's a \"guarantee everything up to now is\n> persistent, I'm going to start counting on it\". But depending on your\n> card, drivers and yes, kernel version, that \"barrier\" is sometimes a\n> \"drain/block I/O queue, issue cache flush, wait, write specific data,\n> flush, wait, open I/O queue\". 
The double flush is because it needs to\n> guarantee everything previous is good before it writes the \"critical\"\n> piece, and then needs to guarantee that too.\n>\n> Now, on good raid hardware it's not usually that bad.\n>\n> And then, just to confuse people more, LVM up until 2.6.29 (so that\n> includes all those RHEL5/CentOS5 installs out there which default to\n> using LVM) didn't handle barriers, it just sort of threw them out as\n> it came across them, meaning that you got the performance of\n> nobarrier, even if you thought you were using barriers on poor raid\n> hardware.\n\nthis is part of the problem.\n\nif you have a simple fs-on-hardware you may be able to get away with the \nbarriers, but if you have fs-on-x-on-y-on-hardware type of thing \n(specifically where LVM is one of the things in the middle), and those \nthings in the middle do not honor barriers, the fsync becomes meaningless \nbecause without propogating the barrier down the stack, the writes that \nthe fsync triggers may not get to the disk.\n\n>> Every raid card I have seen has ignored the 'flush cache' type of command if\n>> it has a battery and that battery is good, so you leave the barriers enabled\n>> and the card still gives you great performance.\n>\n> XFS FAQ goes over much of it, starting at Q24:\n> http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache_on_journaled_filesystems.3F\n>\n> So, for pure performance, on a battery-backed controller, nobarrier is\n> the recommended *performance* setting.\n>\n> But, to throw a wrench into the plan, what happens when during normal\n> battery tests, your raid controller decides the battery is failing...\n> of course, it's going to start screaming and send all your monitoring\n> alarms off (you're monitoring that, right?), but have you thought to\n> make sure that your FS is remounted with barriers at the first sign of\n> battery trouble?\n\nyep.\n\non a good raid card with battery backed cache, the performance difference \nbetween barriers being on and barriers being off should be minimal. 
If \nit's not, I think that you have something else going on.\n\nDavid Lang\n",
"msg_date": "Mon, 12 Sep 2011 17:47:59 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 8:47 PM, <[email protected]> wrote:\n\n>> XFS FAQ goes over much of it, starting at Q24:\n>>\n>> http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache_on_journaled_filesystems.3F\n>>\n>> So, for pure performance, on a battery-backed controller, nobarrier is\n>> the recommended *performance* setting.\n>>\n>> But, to throw a wrench into the plan, what happens when during normal\n>> battery tests, your raid controller decides the battery is failing...\n>> of course, it's going to start screaming and send all your monitoring\n>> alarms off (you're monitoring that, right?), but have you thought to\n>> make sure that your FS is remounted with barriers at the first sign of\n>> battery trouble?\n>\n> yep.\n>\n> on a good raid card with battery backed cache, the performance difference\n> between barriers being on and barriers being off should be minimal. If it's\n> not, I think that you have something else going on.\n\nThe performance boost you'll get is that you don't have the temporary\nstall in parallelization that the barriers have. With barriers, even\nif the controller cache doesn't really flush, you still have the\n\"can't send more writes to the device until the barrier'ed write is\ndone\", so at all those points, you have only a single write command in\nflight. The performance penalty of barriers on good cards comes\nbecause barriers are written to prevent the devices from reordering of\nwrite persistence, and do that by waiting for a write to be\n\"persistent\" before allowing more to be queued to the device.\n\nWith nobarrier, you operate under the assumption that the block device\nwrites are persisted in the order commands are issued to the devices,\nso you never have to \"drain the queue\", as you do in the normal\nbarrier implementation, and can (in theory) always have more request\nthat the raid card can be working on processing, reordering, and\ndispatching to platters for the maximum theoretical throughput...\n\nOf course, linux has completely re-written/changed the\nsync/barrier/flush methods over the past few years, and there is no\nguarantee they don't keep changing the implementation details in the\nfuture, so keep up on the filesystem details of whatever you're\nusing...\n\nSo keep doing burn-ins, with real pull-the-cord tests... They can't\n\"prove\" it's 100% safe, but they can quickly prove when it's not ;-)\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Mon, 12 Sep 2011 22:15:29 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Raid 5 vs Raid 10 Benchmarks Using bonnie++"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI'm using postgres 9.0.3, and here's the OS I'm running this on:\nLinux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64\nx86_64 x86_64 GNU/Linux\n\nI have a fairly straight forward query. I'm doing a group by on an ID, and\nthen calculating some a statistic on the resulting data. The problem I'm\nrunning into is that when I'm calculating the statistics via a function,\nit's twice as slow as when I'm calculating the statistics directly in my\nquery. I want to be able to use a function, since I'll be using this\nparticular calculation in many places.\n\nAny idea of what's going on? Below, I've included my function, and both\nqueries (I removed the type_ids, and just wrote …ids…\n\nHere's my function (I also tried stable):\nCREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\ninteger)\nRETURNS double precision AS $$\nBEGIN\n return a/b/c* 1000000000::double precision;\nEND;\n$$ LANGUAGE plpgsql immutable;\n\n\nThe query that takes 7.6 seconds, when I calculate the statistic from within\nthe query:\nexplain analyze\nselect\n agg.primary_id,\n avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo,\n stddev(agg.a / agg.b / agg.c * 1000000000::double precision) bar\nfrom mytable agg\nwhere agg.type_id in (....ids....)\ngroup by agg.primary_id;\n\nThe execution plan:\n HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual\ntime=7300.414..7331.659 rows=20993 loops=1)\n -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\nrows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230 loops=1)\n Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\nrows=1716127 width=0) (actual time=192.725..192.725 rows=2309230 loops=1)\n Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n Total runtime: 7358.337 ms\n(6 rows)\n\n\n\n\nThe same query, but now I'm calling the function. When I call the function\nit's taking 15.5 seconds.\nexplain analyze select\n agg.primary_id,\n avg(calc_test(agg.a,agg.b,agg.c)) foo,\n stddev(calc_test(agg.a,agg.b,agg.c)) bar\nfrom mytable agg\nwhere agg.type_id in (....ids....)\ngroup by agg.primary_id;\n\nand, here's the execution plan:\n\n HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual\ntime=13660.838..13686.618 rows=20993 loops=1)\n -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\nrows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230 loops=1)\n Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\nrows=1716127 width=0) (actual time=162.834..162.834 rows=2309230 loops=1)\n Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n Total runtime: 13707.560 ms\n\n\nThanks!\n\nAnish\n\nHi everyone,I'm using postgres 9.0.3, and here's the OS I'm running this on:Linux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux\nI have a fairly straight forward query. I'm doing a group by on an ID, and then calculating some a statistic on the resulting data. The problem I'm running into is that when I'm calculating the statistics via a function, it's twice as slow as when I'm calculating the statistics directly in my query. I want to be able to use a function, since I'll be using this particular calculation in many places.\nAny idea of what's going on? 
Below, I've included my function, and both queries (I removed the type_ids, and just wrote …ids…Here's my function (I also tried stable):\nCREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c integer)RETURNS double precision AS $$BEGIN return a/b/c* 1000000000::double precision;END;\n$$ LANGUAGE plpgsql immutable;The query that takes 7.6 seconds, when I calculate the statistic from within the query:explain analyze select agg.primary_id,\n avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo, stddev(agg.a / agg.b / agg.c * 1000000000::double precision) barfrom mytable aggwhere agg.type_id in (....ids....)\ngroup by agg.primary_id;The execution plan: HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual time=7300.414..7331.659 rows=20993 loops=1) -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63 rows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230 loops=1)\n Recheck Cond: (type_id = ANY ('{....ids....}'::integer[])) -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87 rows=1716127 width=0) (actual time=192.725..192.725 rows=2309230 loops=1)\n Index Cond: (type_id = ANY ('{....ids....}'::integer[])) Total runtime: 7358.337 ms(6 rows)The same query, but now I'm calling the function. When I call the function it's taking 15.5 seconds.\nexplain analyze select agg.primary_id, avg(calc_test(agg.a,agg.b,agg.c)) foo, stddev(calc_test(agg.a,agg.b,agg.c)) barfrom mytable aggwhere agg.type_id in (....ids....)\ngroup by agg.primary_id;and, here's the execution plan: HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual time=13660.838..13686.618 rows=20993 loops=1)\n -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63 rows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230 loops=1) Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87 rows=1716127 width=0) (actual time=162.834..162.834 rows=2309230 loops=1) Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n Total runtime: 13707.560 msThanks!Anish",
"msg_date": "Wed, 17 Aug 2011 11:20:39 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Calculating statistic via function rather than with query is slowing\n\tmy query"
},
{
"msg_contents": "Hello\n\n2011/8/17 Anish Kejariwal <[email protected]>:\n> Hi everyone,\n> I'm using postgres 9.0.3, and here's the OS I'm running this on:\n> Linux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64\n> x86_64 x86_64 GNU/Linux\n> I have a fairly straight forward query. I'm doing a group by on an ID, and\n> then calculating some a statistic on the resulting data. The problem I'm\n> running into is that when I'm calculating the statistics via a function,\n> it's twice as slow as when I'm calculating the statistics directly in my\n> query. I want to be able to use a function, since I'll be using this\n> particular calculation in many places.\n> Any idea of what's going on? Below, I've included my function, and both\n> queries (I removed the type_ids, and just wrote …ids…\n> Here's my function (I also tried stable):\n> CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n> integer)\n> RETURNS double precision AS $$\n> BEGIN\n> return a/b/c* 1000000000::double precision;\n> END;\n> $$ LANGUAGE plpgsql immutable;\n>\n\nthis is overhead of plpgsql call. For this simple functions use a SQL\nfunctions instead\n\nCREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n integer)\n RETURNS double precision AS $$\n> SELECT $1/$2/$3* 1000000000::double precision;\n> $$ LANGUAGE sql;\n\nRegards\n\nPavel Stehule\n\n> The query that takes 7.6 seconds, when I calculate the statistic from within\n> the query:\n> explain analyze\n> select\n> agg.primary_id,\n> avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo,\n> stddev(agg.a / agg.b / agg.c * 1000000000::double precision) bar\n> from mytable agg\n> where agg.type_id in (....ids....)\n> group by agg.primary_id;\n> The execution plan:\n> HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual\n> time=7300.414..7331.659 rows=20993 loops=1)\n> -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> rows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230 loops=1)\n> Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\n> rows=1716127 width=0) (actual time=192.725..192.725 rows=2309230 loops=1)\n> Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> Total runtime: 7358.337 ms\n> (6 rows)\n>\n>\n>\n> The same query, but now I'm calling the function. When I call the function\n> it's taking 15.5 seconds.\n> explain analyze select\n> agg.primary_id,\n> avg(calc_test(agg.a,agg.b,agg.c)) foo,\n> stddev(calc_test(agg.a,agg.b,agg.c)) bar\n> from mytable agg\n> where agg.type_id in (....ids....)\n> group by agg.primary_id;\n> and, here's the execution plan:\n> HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual\n> time=13660.838..13686.618 rows=20993 loops=1)\n> -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> rows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230 loops=1)\n> Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\n> rows=1716127 width=0) (actual time=162.834..162.834 rows=2309230 loops=1)\n> Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> Total runtime: 13707.560 ms\n>\n> Thanks!\n> Anish\n",
"msg_date": "Wed, 17 Aug 2011 20:27:54 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
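A quick way to verify that the planner really inlined the SQL-language version of the function is to look at EXPLAIN VERBOSE output: an inlined call shows up as the expanded arithmetic expression rather than as a calc_test(...) call. A minimal sketch, reusing the placeholder names from the thread (mytable, primary_id, a, b, c):

    -- Plain SQL body so the planner can inline it into the calling query.
    CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c integer)
    RETURNS double precision AS $$
        SELECT $1 / $2 / $3 * 1000000000::double precision;
    $$ LANGUAGE sql IMMUTABLE;

    -- If inlining happened, the "Output:" lines of the plan show the raw
    -- expression instead of a calc_test(...) call.
    EXPLAIN VERBOSE
    SELECT agg.primary_id,
           avg(calc_test(agg.a, agg.b, agg.c)) AS foo
    FROM   mytable agg
    GROUP  BY agg.primary_id;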
{
"msg_contents": "Thanks Pavel! that definitely solved it.\n\nUnfortunately, the function I gave you was a simple/short version of what\nthe actual function is going to be. The actual function is going to get\nparameters passed to it, and based on the parameters will go through some\nif...else conditions, and maybe even call another function. Based on that,\nI was definitely hoping to use plpgsql, and the overhead is unfortunate.\n\nIs there any way to get around this overhead? Will I still have the same\noverhead if I use plperl, plpython, pljava, or write the function in C?\n\nAnish\n\n\nOn Wed, Aug 17, 2011 at 11:27 AM, Pavel Stehule <[email protected]>wrote:\n\n> Hello\n>\n> 2011/8/17 Anish Kejariwal <[email protected]>:\n> > Hi everyone,\n> > I'm using postgres 9.0.3, and here's the OS I'm running this on:\n> > Linux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64\n> > x86_64 x86_64 GNU/Linux\n> > I have a fairly straight forward query. I'm doing a group by on an ID,\n> and\n> > then calculating some a statistic on the resulting data. The problem I'm\n> > running into is that when I'm calculating the statistics via a function,\n> > it's twice as slow as when I'm calculating the statistics directly in my\n> > query. I want to be able to use a function, since I'll be using this\n> > particular calculation in many places.\n> > Any idea of what's going on? Below, I've included my function, and both\n> > queries (I removed the type_ids, and just wrote …ids…\n> > Here's my function (I also tried stable):\n> > CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n> > integer)\n> > RETURNS double precision AS $$\n> > BEGIN\n> > return a/b/c* 1000000000::double precision;\n> > END;\n> > $$ LANGUAGE plpgsql immutable;\n> >\n>\n> this is overhead of plpgsql call. For this simple functions use a SQL\n> functions instead\n>\n> CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n> integer)\n> RETURNS double precision AS $$\n> > SELECT $1/$2/$3* 1000000000::double precision;\n> > $$ LANGUAGE sql;\n>\n> Regards\n>\n> Pavel Stehule\n>\n> > The query that takes 7.6 seconds, when I calculate the statistic from\n> within\n> > the query:\n> > explain analyze\n> > select\n> > agg.primary_id,\n> > avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo,\n> > stddev(agg.a / agg.b / agg.c * 1000000000::double precision) bar\n> > from mytable agg\n> > where agg.type_id in (....ids....)\n> > group by agg.primary_id;\n> > The execution plan:\n> > HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual\n> > time=7300.414..7331.659 rows=20993 loops=1)\n> > -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> > rows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230\n> loops=1)\n> > Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> > -> Bitmap Index Scan on mytable_type_id_idx\n> (cost=0.00..28238.87\n> > rows=1716127 width=0) (actual time=192.725..192.725 rows=2309230 loops=1)\n> > Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> > Total runtime: 7358.337 ms\n> > (6 rows)\n> >\n> >\n> >\n> > The same query, but now I'm calling the function. 
When I call the\n> function\n> > it's taking 15.5 seconds.\n> > explain analyze select\n> > agg.primary_id,\n> > avg(calc_test(agg.a,agg.b,agg.c)) foo,\n> > stddev(calc_test(agg.a,agg.b,agg.c)) bar\n> > from mytable agg\n> > where agg.type_id in (....ids....)\n> > group by agg.primary_id;\n> > and, here's the execution plan:\n> > HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual\n> > time=13660.838..13686.618 rows=20993 loops=1)\n> > -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> > rows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230\n> loops=1)\n> > Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> > -> Bitmap Index Scan on mytable_type_id_idx\n> (cost=0.00..28238.87\n> > rows=1716127 width=0) (actual time=162.834..162.834 rows=2309230 loops=1)\n> > Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> > Total runtime: 13707.560 ms\n> >\n> > Thanks!\n> > Anish\n>\n\nThanks Pavel! that definitely solved it. Unfortunately, the function I gave you was a simple/short version of what the actual function is going to be. The actual function is going to get parameters passed to it, and based on the parameters will go through some if...else conditions, and maybe even call another function. Based on that, I was definitely hoping to use plpgsql, and the overhead is unfortunate. \nIs there any way to get around this overhead? Will I still have the same overhead if I use plperl, plpython, pljava, or write the function in C?\nAnishOn Wed, Aug 17, 2011 at 11:27 AM, Pavel Stehule <[email protected]> wrote:\nHello\n\n2011/8/17 Anish Kejariwal <[email protected]>:\n> Hi everyone,\n> I'm using postgres 9.0.3, and here's the OS I'm running this on:\n> Linux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64\n> x86_64 x86_64 GNU/Linux\n> I have a fairly straight forward query. I'm doing a group by on an ID, and\n> then calculating some a statistic on the resulting data. The problem I'm\n> running into is that when I'm calculating the statistics via a function,\n> it's twice as slow as when I'm calculating the statistics directly in my\n> query. I want to be able to use a function, since I'll be using this\n> particular calculation in many places.\n> Any idea of what's going on? Below, I've included my function, and both\n> queries (I removed the type_ids, and just wrote …ids…\n> Here's my function (I also tried stable):\n> CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n> integer)\n> RETURNS double precision AS $$\n> BEGIN\n> return a/b/c* 1000000000::double precision;\n> END;\n> $$ LANGUAGE plpgsql immutable;\n>\n\nthis is overhead of plpgsql call. 
For this simple functions use a SQL\nfunctions instead\n\nCREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n integer)\n RETURNS double precision AS $$\n> SELECT $1/$2/$3* 1000000000::double precision;\n> $$ LANGUAGE sql;\n\nRegards\n\nPavel Stehule\n\n> The query that takes 7.6 seconds, when I calculate the statistic from within\n> the query:\n> explain analyze\n> select\n> agg.primary_id,\n> avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo,\n> stddev(agg.a / agg.b / agg.c * 1000000000::double precision) bar\n> from mytable agg\n> where agg.type_id in (....ids....)\n> group by agg.primary_id;\n> The execution plan:\n> HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual\n> time=7300.414..7331.659 rows=20993 loops=1)\n> -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> rows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230 loops=1)\n> Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\n> rows=1716127 width=0) (actual time=192.725..192.725 rows=2309230 loops=1)\n> Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> Total runtime: 7358.337 ms\n> (6 rows)\n>\n>\n>\n> The same query, but now I'm calling the function. When I call the function\n> it's taking 15.5 seconds.\n> explain analyze select\n> agg.primary_id,\n> avg(calc_test(agg.a,agg.b,agg.c)) foo,\n> stddev(calc_test(agg.a,agg.b,agg.c)) bar\n> from mytable agg\n> where agg.type_id in (....ids....)\n> group by agg.primary_id;\n> and, here's the execution plan:\n> HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual\n> time=13660.838..13686.618 rows=20993 loops=1)\n> -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n> rows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230 loops=1)\n> Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> -> Bitmap Index Scan on mytable_type_id_idx (cost=0.00..28238.87\n> rows=1716127 width=0) (actual time=162.834..162.834 rows=2309230 loops=1)\n> Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n> Total runtime: 13707.560 ms\n>\n> Thanks!\n> Anish",
"msg_date": "Wed, 17 Aug 2011 12:00:54 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
{
"msg_contents": "2011/8/17 Anish Kejariwal <[email protected]>:\n> Thanks Pavel! that definitely solved it.\n> Unfortunately, the function I gave you was a simple/short version of what\n> the actual function is going to be. The actual function is going to get\n> parameters passed to it, and based on the parameters will go through some\n> if...else conditions, and maybe even call another function. Based on that,\n> I was definitely hoping to use plpgsql, and the overhead is unfortunate.\n> Is there any way to get around this overhead? Will I still have the same\n> overhead if I use plperl, plpython, pljava, or write the function in C?\n\nonly SQL and C has zero overhead - SQL because uses inlining and C is\njust readable assambler.\n\nI am thinking, overhead of PL/pgSQL is minimal from languages from your list.\n\nRegards\n\nPavel\n\n>\n> Anish\n>\n> On Wed, Aug 17, 2011 at 11:27 AM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> Hello\n>>\n>> 2011/8/17 Anish Kejariwal <[email protected]>:\n>> > Hi everyone,\n>> > I'm using postgres 9.0.3, and here's the OS I'm running this on:\n>> > Linux 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64\n>> > x86_64 x86_64 GNU/Linux\n>> > I have a fairly straight forward query. I'm doing a group by on an ID,\n>> > and\n>> > then calculating some a statistic on the resulting data. The problem\n>> > I'm\n>> > running into is that when I'm calculating the statistics via a function,\n>> > it's twice as slow as when I'm calculating the statistics directly in my\n>> > query. I want to be able to use a function, since I'll be using this\n>> > particular calculation in many places.\n>> > Any idea of what's going on? Below, I've included my function, and both\n>> > queries (I removed the type_ids, and just wrote …ids…\n>> > Here's my function (I also tried stable):\n>> > CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n>> > integer)\n>> > RETURNS double precision AS $$\n>> > BEGIN\n>> > return a/b/c* 1000000000::double precision;\n>> > END;\n>> > $$ LANGUAGE plpgsql immutable;\n>> >\n>>\n>> this is overhead of plpgsql call. For this simple functions use a SQL\n>> functions instead\n>>\n>> CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c\n>> integer)\n>> RETURNS double precision AS $$\n>> > SELECT $1/$2/$3* 1000000000::double precision;\n>> > $$ LANGUAGE sql;\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> > The query that takes 7.6 seconds, when I calculate the statistic from\n>> > within\n>> > the query:\n>> > explain analyze\n>> > select\n>> > agg.primary_id,\n>> > avg(agg.a / agg.b / agg.c * 1000000000::double precision) foo,\n>> > stddev(agg.a / agg.b / agg.c * 1000000000::double precision) bar\n>> > from mytable agg\n>> > where agg.type_id in (....ids....)\n>> > group by agg.primary_id;\n>> > The execution plan:\n>> > HashAggregate (cost=350380.58..350776.10 rows=9888 width=20) (actual\n>> > time=7300.414..7331.659 rows=20993 loops=1)\n>> > -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n>> > rows=1716127 width=20) (actual time=200.064..2861.600 rows=2309230\n>> > loops=1)\n>> > Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n>> > -> Bitmap Index Scan on mytable_type_id_idx\n>> > (cost=0.00..28238.87\n>> > rows=1716127 width=0) (actual time=192.725..192.725 rows=2309230\n>> > loops=1)\n>> > Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n>> > Total runtime: 7358.337 ms\n>> > (6 rows)\n>> >\n>> >\n>> >\n>> > The same query, but now I'm calling the function. 
When I call the\n>> > function\n>> > it's taking 15.5 seconds.\n>> > explain analyze select\n>> > agg.primary_id,\n>> > avg(calc_test(agg.a,agg.b,agg.c)) foo,\n>> > stddev(calc_test(agg.a,agg.b,agg.c)) bar\n>> > from mytable agg\n>> > where agg.type_id in (....ids....)\n>> > group by agg.primary_id;\n>> > and, here's the execution plan:\n>> > HashAggregate (cost=350380.58..355472.90 rows=9888 width=20) (actual\n>> > time=13660.838..13686.618 rows=20993 loops=1)\n>> > -> Bitmap Heap Scan on mytable agg (cost=28667.90..337509.63\n>> > rows=1716127 width=20) (actual time=170.385..2881.122 rows=2309230\n>> > loops=1)\n>> > Recheck Cond: (type_id = ANY ('{....ids....}'::integer[]))\n>> > -> Bitmap Index Scan on mytable_type_id_idx\n>> > (cost=0.00..28238.87\n>> > rows=1716127 width=0) (actual time=162.834..162.834 rows=2309230\n>> > loops=1)\n>> > Index Cond: (type_id = ANY ('{....ids....}'::integer[]))\n>> > Total runtime: 13707.560 ms\n>> >\n>> > Thanks!\n>> > Anish\n>\n>\n",
"msg_date": "Wed, 17 Aug 2011 21:08:44 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
{
"msg_contents": "On 18/08/2011 3:00 AM, Anish Kejariwal wrote:\n> Thanks Pavel! that definitely solved it.\n>\n> Unfortunately, the function I gave you was a simple/short version of \n> what the actual function is going to be. The actual function is going \n> to get parameters passed to it, and based on the parameters will go \n> through some if...else conditions, and maybe even call another \n> function. Based on that, I was definitely hoping to use plpgsql, and \n> the overhead is unfortunate.\n>\n> Is there any way to get around this overhead? Will I still have the \n> same overhead if I use plperl, plpython, pljava, or write the function \n> in C?\n\nYou can probably still write it as an SQL function if you use CASE WHEN \nappropriately.\n\n--\nCraig Ringer\n\n\n\n\n\n\n On 18/08/2011 3:00 AM, Anish Kejariwal wrote:\n Thanks Pavel! that definitely solved it. �\n \n\nUnfortunately, the function I gave you was a simple/short\n version of what the actual function is going to be. �The actual\n function is going to get parameters passed to it, and based on\n the parameters will go through some if...else conditions, and\n maybe even call another function. �Based on that, I was\n definitely hoping to use plpgsql, and the overhead is�unfortunate. �\n\n\n\nIs\n there any way to get around this overhead? �Will I still\n have the same overhead if I use plperl, plpython, pljava, or\n write the function in C?\n\n\n\n\n You can probably still write it as an SQL function if you use CASE\n WHEN appropriately.\n\n --\n Craig Ringer",
"msg_date": "Thu, 18 Aug 2011 08:05:49 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
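To illustrate Craig's CASE WHEN suggestion, here is a hedged sketch of how a calculation with branches can stay in a single-SELECT SQL function and remain a candidate for inlining; the extra mode parameter and the alternative formula are invented for the example, not taken from the thread:

    CREATE OR REPLACE FUNCTION calc_test(a double precision, b integer, c integer,
                                         mode text)
    RETURNS double precision AS $$
        SELECT CASE
                   WHEN $4 = 'ratio' THEN $1 / $2 / $3 * 1000000000::double precision
                   WHEN $4 = 'sum'   THEN ($1 + $2 + $3)::double precision
                   ELSE NULL
               END;
    $$ LANGUAGE sql IMMUTABLE;

The body is still a single SELECT of a single expression, which is what keeps the inlining path open; once the branching needs loops or multiple statements, that advantage goes away and plpgsql (or C) becomes the more honest choice.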
{
"msg_contents": "Thanks for the help Pavel and Craig. I really appreciate it. I'm going to\ntry a couple of these different options (write a c function, use a sql\nfunction with case statements, and use plperl), so I can see which gives me\nthe realtime performance that I need, and works best for clean code in my\nparticular case.\n\nthanks!\n\nAnish\n\nOn Wed, Aug 17, 2011 at 5:05 PM, Craig Ringer <[email protected]> wrote:\n\n> On 18/08/2011 3:00 AM, Anish Kejariwal wrote:\n>\n> Thanks Pavel! that definitely solved it.\n>\n> Unfortunately, the function I gave you was a simple/short version of what\n> the actual function is going to be. The actual function is going to get\n> parameters passed to it, and based on the parameters will go through some\n> if...else conditions, and maybe even call another function. Based on that,\n> I was definitely hoping to use plpgsql, and the overhead is unfortunate.\n>\n> Is there any way to get around this overhead? Will I still have the same\n> overhead if I use plperl, plpython, pljava, or write the function in C?\n>\n>\n> You can probably still write it as an SQL function if you use CASE WHEN\n> appropriately.\n>\n> --\n> Craig Ringer\n>\n\nThanks for the help Pavel and Craig. I really appreciate it. I'm going to try a couple of these different options (write a c function, use a sql function with case statements, and use plperl), so I can see which gives me the realtime performance that I need, and works best for clean code in my particular case.\nthanks!AnishOn Wed, Aug 17, 2011 at 5:05 PM, Craig Ringer <[email protected]> wrote:\n\n\n On 18/08/2011 3:00 AM, Anish Kejariwal wrote:\n Thanks Pavel! that definitely solved it. \n \n\nUnfortunately, the function I gave you was a simple/short\n version of what the actual function is going to be. The actual\n function is going to get parameters passed to it, and based on\n the parameters will go through some if...else conditions, and\n maybe even call another function. Based on that, I was\n definitely hoping to use plpgsql, and the overhead is unfortunate. \n\n\n\nIs\n there any way to get around this overhead? Will I still\n have the same overhead if I use plperl, plpython, pljava, or\n write the function in C?\n\n\n\n\n You can probably still write it as an SQL function if you use CASE\n WHEN appropriately.\n\n --\n Craig Ringer",
"msg_date": "Wed, 17 Aug 2011 18:03:11 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
{
"msg_contents": "On 18/08/2011 9:03 AM, Anish Kejariwal wrote:\n> Thanks for the help Pavel and Craig. I really appreciate it. I'm \n> going to try a couple of these different options (write a c function, \n> use a sql function with case statements, and use plperl), so I can see \n> which gives me the realtime performance that I need, and works best \n> for clean code in my particular case.\nDo you really mean \"realtime\"? Or just \"fast\"?\n\nIf you have strongly bounded latency requirements, any SQL-based, \ndisk-based system is probably not for you. Especially not one that \nrelies on a statics-based query planner, caching, and periodic \ncheckpoints. I'd be looking into in-memory databases designed for \nrealtime environments where latency is critical.\n\nHard realtime: If this system fails to respond within <x> milliseconds, \nall the time, every time, then something will go \"smash\" or \"boom\" \nexpensively and unrecoverably.\n\nSoft realtime: If this system responds late, the late response is \nexpensive or less useful. Frequent late responses are unacceptable but \nthe occasional one might be endurable.\n\nJust needs to be fast: If it responds late, the user gets irritated \nbecause they're sitting and waiting for a response. Regular long stalls \nare unacceptable, but otherwise the user can put up with it. You're more \nconcerned with average latency than maximum latency.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 18 Aug 2011 13:32:34 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
},
{
"msg_contents": "Hi Craig,\n\nFair point. For now, I mean \"just fast\" - which is 5-15 seconds, but I'd\nlike to get it down to the 1-2 second range.\n\n From the query I provided, I have approximately 30,000 unique keys (what I\ncalled primary_id) that I'm grouping by, and each key has a series of\nnumerical values for each of the type_ids. I'm looking at averages, stddev\nand other statistics across a few hundred type_ids (where agg.type_id in\n....). The part of the query that varies is the user specified type_ids,\nwhich makes it impossible to precalculate my statistics.\n\nI'd like this to eventually scale to a million unique keys, and a thousand\ntype_ids.\n\nFor now Postgres been great for modeling the data, understanding where I hit\nperformance bottle necks, and providing a fast enough user interface. But,\nI'm definitely starting to think about whether I can cache my data (with\nmillions of keys and thousands of type_ids, the data might be too large),\nand whether to look into distributed databases (even thought I can't\nprecompute the stats, my queries are easily distributable across multiple\nprocessors since each processor could take a batch of keys). I might even\nwant to consider a column oriented database - since my keys don't change\noften, I could potentially add new columns when there are new type_ids.\n\nI've been thinking of looking into memcached or hbase. If you have any\nsuggestions on which options I should explore, I'd greatly appreciate it.\n\nSorry, for veering off topic a bit from postgres.\n\nthanks,\nAnish\n\n\n\n\n\n\n\nOn Wed, Aug 17, 2011 at 10:32 PM, Craig Ringer <[email protected]>wrote:\n\n> On 18/08/2011 9:03 AM, Anish Kejariwal wrote:\n>\n>> Thanks for the help Pavel and Craig. I really appreciate it. I'm going\n>> to try a couple of these different options (write a c function, use a sql\n>> function with case statements, and use plperl), so I can see which gives me\n>> the realtime performance that I need, and works best for clean code in my\n>> particular case.\n>>\n> Do you really mean \"realtime\"? Or just \"fast\"?\n>\n> If you have strongly bounded latency requirements, any SQL-based,\n> disk-based system is probably not for you. Especially not one that relies on\n> a statics-based query planner, caching, and periodic checkpoints. I'd be\n> looking into in-memory databases designed for realtime environments where\n> latency is critical.\n>\n> Hard realtime: If this system fails to respond within <x> milliseconds, all\n> the time, every time, then something will go \"smash\" or \"boom\" expensively\n> and unrecoverably.\n>\n> Soft realtime: If this system responds late, the late response is expensive\n> or less useful. Frequent late responses are unacceptable but the occasional\n> one might be endurable.\n>\n> Just needs to be fast: If it responds late, the user gets irritated because\n> they're sitting and waiting for a response. Regular long stalls are\n> unacceptable, but otherwise the user can put up with it. You're more\n> concerned with average latency than maximum latency.\n>\n> --\n> Craig Ringer\n>\n\nHi Craig,Fair point. For now, I mean \"just fast\" - which is 5-15 seconds, but I'd like to get it down to the 1-2 second range.From the query I provided, I have approximately 30,000 unique keys (what I called primary_id) that I'm grouping by, and each key has a series of numerical values for each of the type_ids. I'm looking at averages, stddev and other statistics across a few hundred type_ids (where agg.type_id in ....). 
The part of the query that varies is the user specified type_ids, which makes it impossible to precalculate my statistics.\nI'd like this to eventually scale to a million unique keys, and a thousand type_ids.For now Postgres been great for modeling the data, understanding where I hit performance bottle necks, and providing a fast enough user interface. But, I'm definitely starting to think about whether I can cache my data (with millions of keys and thousands of type_ids, the data might be too large), and whether to look into distributed databases (even thought I can't precompute the stats, my queries are easily distributable across multiple processors since each processor could take a batch of keys). I might even want to consider a column oriented database - since my keys don't change often, I could potentially add new columns when there are new type_ids.\nI've been thinking of looking into memcached or hbase. If you have any suggestions on which options I should explore, I'd greatly appreciate it.Sorry, for veering off topic a bit from postgres.\nthanks,Anish On Wed, Aug 17, 2011 at 10:32 PM, Craig Ringer <[email protected]> wrote:\nOn 18/08/2011 9:03 AM, Anish Kejariwal wrote:\n\nThanks for the help Pavel and Craig. I really appreciate it. I'm going to try a couple of these different options (write a c function, use a sql function with case statements, and use plperl), so I can see which gives me the realtime performance that I need, and works best for clean code in my particular case.\n\nDo you really mean \"realtime\"? Or just \"fast\"?\n\nIf you have strongly bounded latency requirements, any SQL-based, disk-based system is probably not for you. Especially not one that relies on a statics-based query planner, caching, and periodic checkpoints. I'd be looking into in-memory databases designed for realtime environments where latency is critical.\n\nHard realtime: If this system fails to respond within <x> milliseconds, all the time, every time, then something will go \"smash\" or \"boom\" expensively and unrecoverably.\n\nSoft realtime: If this system responds late, the late response is expensive or less useful. Frequent late responses are unacceptable but the occasional one might be endurable.\n\nJust needs to be fast: If it responds late, the user gets irritated because they're sitting and waiting for a response. Regular long stalls are unacceptable, but otherwise the user can put up with it. You're more concerned with average latency than maximum latency.\n\n--\nCraig Ringer",
"msg_date": "Thu, 18 Aug 2011 11:46:06 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Calculating statistic via function rather than with\n\tquery is slowing my query"
}
] |
[
{
"msg_contents": "Hello,\n\nI have an old application that was written on Postgres 8.1.\nThere are a few hundreds tables, 30-40 columns per table, hundreds of\nviews, and all the sql is inside java code.\n\nWe are moving it to 8.4, it seems to be VERY slow.\nThere are 20-30 tables transactions - the objects are spread acrross\nmultiple tables and some tables have data from different objects. \n\nI need a short term tuning strategy minimizing rewrite & redesign.\n\nShould I start with replacing the sql with procedures?\n \nShould I start with replacing the views with the procedures to save time on\nrecreating an execution plan and parsing?\n\nShould I start with tuning server parameters ?\n \nall your suggestions are greatly appreciated!\n\nthank you.\n\nHelen\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/tunning-strategy-needed-tp4710245p4710245.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 17 Aug 2011 15:40:11 -0700 (PDT)",
"msg_from": "hyelluas <[email protected]>",
"msg_from_op": true,
"msg_subject": "tunning strategy needed"
},
{
"msg_contents": "On 18/08/2011 6:40 AM, hyelluas wrote:\n> Hello,\n>\n> I have an old application that was written on Postgres 8.1.\n> There are a few hundreds tables, 30-40 columns per table, hundreds of\n> views, and all the sql is inside java code.\n>\n> We are moving it to 8.4, it seems to be VERY slow.\n> There are 20-30 tables transactions - the objects are spread acrross\n> multiple tables and some tables have data from different objects.\n>\n> I need a short term tuning strategy minimizing rewrite& redesign.\n>\n>\n\n- Turn on auto explain and slow query logging\n\n- Examine the slow queries and plans. Run them manually with EXPLAIN \nANALYZE. Check that the statistics make sense and if they're inaccurate, \nincrease the statistics targets on those columns/tables then re-ANALYZE.\n\n- If the stats are accurate but the query is still slow, try playing \nwith the cost parameters and see if you get a better result, then test \nthose settings server-wide to see if they improve overall performance.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 18 Aug 2011 13:45:48 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tunning strategy needed"
},
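A sketch of what the first two suggestions can look like in practice. auto_explain and the GUC names are standard PostgreSQL/contrib, but the thresholds and the table/column chosen for a larger statistics target are placeholders only:

    -- postgresql.conf (preloading auto_explain needs a server restart):
    --   shared_preload_libraries = 'auto_explain'
    --   auto_explain.log_min_duration = '500ms'
    --   log_min_duration_statement = 500

    -- For a column whose row-count estimates look wrong, raise its target,
    -- re-analyze, and compare estimated vs. actual rows in the plan.
    ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;
    ANALYZE some_table;
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_column = 42;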
{
"msg_contents": "On 18 Srpen 2011, 0:40, hyelluas wrote:\n\n> Should I start with replacing the sql with procedures?\n>\n> Should I start with replacing the views with the procedures to save time\n> on\n> recreating an execution plan and parsing?\n>\n> Should I start with tuning server parameters ?\n\nYes, you should start by tuning the server as a whole. Did you just\ninstall the DB and restored your database? Have you tuned the config?\n\nTell us what are the basic performance-related parameters, i.e.\n\nshared_buffers\neffective_cache_size\ncheckpoint_segments\ncheckpoint_completion_target\nwork_mem\nmaintainance_work_mem\nseq_page_cost\nrandom_page_cost\n\nand more information about the hardware and setup too (RAM, database size).\n\nThere's a quite nice guide regarding general tuning:\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nTomas\n\n",
"msg_date": "Thu, 18 Aug 2011 10:52:01 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tunning strategy needed"
},
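When collecting the settings Tomas asks about, pulling them from the running server shows what is actually in effect, including defaults that were never edited in the config file; a small sketch:

    SELECT name, setting, unit, source
    FROM   pg_settings
    WHERE  name IN ('shared_buffers', 'effective_cache_size',
                    'checkpoint_segments', 'checkpoint_completion_target',
                    'work_mem', 'maintenance_work_mem',
                    'seq_page_cost', 'random_page_cost')
    ORDER  BY name;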
{
"msg_contents": "\nthank you, it is a great article.\n\n the current on 8.1 checkpoint_segments = 3.\n it looks too low for me , but I'm not tuning the 8.1 schema.\n \nI'm looking for a generic approach of improving that beast while moving it\nto 8.4 \n\nI'm trying to understand the internals for Views vs. Functions\n\nthere was none on 8.1 and I'm using pgpsql for 8.4.\n\nDoes it make sence to put view's sql into a function? would it save on\nre-creating the execution plan and parsing? \n\n\nthank you.\nHelen\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/tunning-strategy-needed-tp4710245p4717048.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 19 Aug 2011 14:14:53 -0700 (PDT)",
"msg_from": "hyelluas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tunning strategy needed"
}
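On the question of wrapping a view's SQL into a function to avoid re-planning: plpgsql does cache plans for the statements inside a function for the life of the session, but the more direct tool for reusing parse/plan work from Java is a prepared statement, which the JDBC driver can issue on your behalf. A hedged sketch with invented table and parameter names:

    -- Parsed and planned once per session, executed many times.
    PREPARE recent_orders (integer, date) AS
        SELECT o.id, o.total
        FROM   orders o
        WHERE  o.customer_id = $1
          AND  o.created_on >= $2;

    EXECUTE recent_orders(42, DATE '2011-08-01');
    EXECUTE recent_orders(7,  DATE '2011-08-15');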
] |
[
{
"msg_contents": "I'm in the process of upgrading from postgres 7.4.8 to 9.0.4 and wanted to run my decisions past some folks who can give me some input on whether my decisions make sense or not. \n\nIt's basically a LAPP configuration and on a busy day we probably get in the neighborhood of a million hits.\n\n\nServer Info:\n\n- 4 dual-core AMD Opteron 2212 processors, 2010.485 MHz\n- 64GB RAM\n- 16 67GB RAID 1 drives and 1 464GB RAID 10 drive (all ext3)\n- Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n\nThere are 3 separate databases:\n\nDB1 is 10GB and consists of multiple tables that I've spread out so that the 3 most used have their data and indexes on 6 separate RAID1 drives, the 3 next busiest have data & index on 3 drives, and the remaining tables and indexes are on the RAID10 drive. The WAL for all is on a separate RAID1 drive.\n\nThe others are very write-heavy, started as one table within the original DB, and were split out on an odd/even id # in an effort to get better performance:\n\nDB2 is 25GB with data, index, and WAL all on separate RAID1 drives.\nDB3 is 15GB with data, index, and WAL on separate RAID1 drives.\n\nHere are the changes I made to postgres.conf. The only differences between the conf file for DB1 and those for DB2 & 3 are the port and effective_cache_size (which I made slightly smaller -- 8 GB instead of 10 -- for the 2 write-heavy DBs). The 600 max connections are often idle and don't get explicitly closed in the application. I'm looking at connection pooling as well.\n\n autovacuum = on\n\n autovacuum_analyze_threshold = 250\n\n autovacuum_freeze_max_age = 200000000\n\n autovacuum_max_workers = 3\n\n autovacuum_naptime = 10min\n\n autovacuum_vacuum_cost_delay = 20ms\n\n autovacuum_vacuum_cost_limit = -1\n\n autovacuum_vacuum_threshold = 250\n\n checkpoint_completion_target = 0.7\n\n checkpoint_segments = 64\n\n checkpoint_timeout = 5min\n\n checkpoint_warning = 30s\n\n deadlock_timeout = 3s\n\n effective_cache_size = 10GB\n\n log_autovacuum_min_duration = 1s\n\n maintenance_work_mem = 256MB\n\n max_connections = 600\n\n max_locks_per_transaction = 64\n\n max_stack_depth = 8MB\n\n shared_buffers = 4GB\n\n vacuum_cost_delay = 10ms\n\n wal_buffers = 32MB\n\n wal_level = minimal\n\n work_mem = 128MB\n\n\n\n\nANY comments or suggestions would be greatly appreciated. \n\nThank you,\nMidge\n\n\n\n\n\n\n\n\n\n\nI'm in the process of upgrading from \npostgres 7.4.8 to 9.0.4 and wanted to run my decisions past some folks who can \ngive me some input on whether my decisions make sense or not. \n \n\nIt's \nbasically a LAPP configuration and on a busy day we probably get in the \nneighborhood of a million hits.\n \nServer Info:\n \n- 4 dual-core AMD Opteron 2212 processors, \n2010.485 MHz\n- 64GB RAM\n- 16 67GB RAID 1 drives and 1 464GB RAID 10 \ndrive (all ext3)\n- Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 \nEDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n \nThere are 3 separate databases:\n \nDB1 is 10GB and consists of multiple tables that \nI've spread out so that the 3 most used have their data and indexes on 6 \nseparate RAID1 drives, the 3 next busiest have data & index on 3 drives, and \nthe remaining tables and indexes are on the RAID10 drive. 
The WAL for all is on \na separate RAID1 drive.\n \nThe others are very write-heavy, started as one \ntable within the original DB, and were split out on an odd/even id # in an \neffort to get better performance:\n \nDB2 is 25GB with data, index, and WAL all on \nseparate RAID1 drives.\nDB3 is 15GB with data, index, and WAL on separate \nRAID1 drives.\n \nHere are the changes I made to postgres.conf. The \nonly differences between the conf file for DB1 and those for DB2 & 3 are the \nport and effective_cache_size (which I made slightly smaller -- 8 GB instead of \n10 -- for the 2 write-heavy DBs). The 600 max connections are often idle and \ndon't get explicitly closed in the application. I'm looking at connection \npooling as well.\n \n\n autovacuum = \non\n autovacuum_analyze_threshold = \n250\n autovacuum_freeze_max_age = \n200000000\n autovacuum_max_workers = \n3\n autovacuum_naptime = \n10min\n autovacuum_vacuum_cost_delay = \n20ms\n autovacuum_vacuum_cost_limit = \n-1\n autovacuum_vacuum_threshold = 250\n checkpoint_completion_target = \n0.7\n checkpoint_segments = \n64\n checkpoint_timeout = \n5min\n checkpoint_warning = \n30s\n deadlock_timeout = 3s\n effective_cache_size = \n10GB\n log_autovacuum_min_duration = 1s\n maintenance_work_mem = \n256MB\n max_connections = 600\n max_locks_per_transaction = \n64\n max_stack_depth = 8MB\n shared_buffers = 4GB\n vacuum_cost_delay = \n10ms\n wal_buffers \n= 32MB\n wal_level = \nminimal\n work_mem = \n128MB\n \n \nANY comments or suggestions would be greatly \nappreciated. \n \nThank you,\nMidge",
"msg_date": "Thu, 18 Aug 2011 14:55:50 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "settings input for upgrade"
},
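Since most of the 600 connections sit idle, it may help to measure how many are actually working before sizing per-connection memory around them; on 9.0, pg_stat_activity marks idle backends with the literal query text '<IDLE>' (the state column only arrived in later releases), so a rough count could look like this:

    SELECT count(*) AS total_connections,
           sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle_connections
    FROM   pg_stat_activity;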
{
"msg_contents": "On Thu, Aug 18, 2011 at 11:55 PM, Midge Brown <[email protected]> wrote:\n> I'm in the process of upgrading from postgres 7.4.8 to 9.0.4 and wanted to\n> run my decisions past some folks who can give me some input on whether my\n> decisions make sense or not.\n\nI am not sure what decisions you actually refer to here: in your\nposting I can only see description of the current setup but no\ndecisions for the upgrade (i.e. changed parameters, other physical\nlayout etc.).\n\n> The others are very write-heavy, started as one table within the original\n> DB, and were split out on an odd/even id # in an effort to get better\n> performance:\n\nDid it pay off? I mean you planned to increase performance and did\nthis actually happen? Apart from reserving IO bandwidth (which you\nachieved by placing data on different disks) you basically only added\nreserved memory for each instance by separating them. Or are there\nany other effects achieved by separating (like reduced lock contention\non some globally shared resource, distribution of CPU for logging)?\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sat, 20 Aug 2011 11:38:58 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "Robert, \n\nI was largely looking for input on whether I may have inadvertently shot myself in the foot with some of the choices I made when setting up postgresql 9.0, which is on different hardware than was the 7.4 setup.\n\nThe splitting of one table to two separate databases was done on 7.4 and did make a positive change in write performance. I was including that information only in an attempt to provide as much detail as possible.\n\n- Midge\n ----- Original Message ----- \n From: Robert Klemme \n To: Midge Brown \n Cc: [email protected] \n Sent: Saturday, August 20, 2011 2:38 AM\n Subject: Re: [PERFORM] settings input for upgrade\n\n\n On Thu, Aug 18, 2011 at 11:55 PM, Midge Brown <[email protected]> wrote:\n > I'm in the process of upgrading from postgres 7.4.8 to 9.0.4 and wanted to\n > run my decisions past some folks who can give me some input on whether my\n > decisions make sense or not.\n\n I am not sure what decisions you actually refer to here: in your\n posting I can only see description of the current setup but no\n decisions for the upgrade (i.e. changed parameters, other physical\n layout etc.).\n\n > The others are very write-heavy, started as one table within the original\n > DB, and were split out on an odd/even id # in an effort to get better\n > performance:\n\n Did it pay off? I mean you planned to increase performance and did\n this actually happen? Apart from reserving IO bandwidth (which you\n achieved by placing data on different disks) you basically only added\n reserved memory for each instance by separating them. Or are there\n any other effects achieved by separating (like reduced lock contention\n on some globally shared resource, distribution of CPU for logging)?\n\n Kind regards\n\n robert\n\n -- \n remember.guy do |as, often| as.you_can - without end\n http://blog.rubybestpractices.com/\n\n\n\n\n\n\nRobert, \n \nI was largely looking for input on whether I may \nhave inadvertently shot myself in the foot with some of the choices I made when \nsetting up postgresql 9.0, which is on different hardware than was the 7.4 \nsetup.\n \nThe splitting of one table to two separate \ndatabases was done on 7.4 and did make a positive change in write performance. I \nwas including that information only in an attempt to provide as much detail as \npossible.\n \n- Midge\n\n----- Original Message ----- \nFrom:\nRobert Klemme \nTo: Midge Brown \nCc: [email protected]\n\nSent: Saturday, August 20, 2011 2:38 \n AM\nSubject: Re: [PERFORM] settings input for \n upgrade\nOn Thu, Aug 18, 2011 at 11:55 PM, Midge Brown <[email protected]> \n wrote:> I'm in the process of upgrading from postgres 7.4.8 to 9.0.4 \n and wanted to> run my decisions past some folks who can give me some \n input on whether my> decisions make sense or not.I am not sure \n what decisions you actually refer to here: in yourposting I can only see \n description of the current setup but nodecisions for the upgrade (i.e. \n changed parameters, other physicallayout etc.).> The others are \n very write-heavy, started as one table within the original> DB, and \n were split out on an odd/even id # in an effort to get better> \n performance:Did it pay off? I mean you planned to increase \n performance and didthis actually happen? Apart from reserving IO \n bandwidth (which youachieved by placing data on different disks) you \n basically only addedreserved memory for each instance by separating \n them. 
Or are thereany other effects achieved by separating (like \n reduced lock contentionon some globally shared resource, distribution of \n CPU for logging)?Kind regardsrobert-- remember.guy \n do |as, often| as.you_can - without endhttp://blog.rubybestpractices.com/",
"msg_date": "Sat, 20 Aug 2011 11:33:45 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "On Thu, Aug 18, 2011 at 3:55 PM, Midge Brown <[email protected]> wrote:\n> Here are the changes I made to postgres.conf. The only differences between\n> the conf file for DB1 and those for DB2 & 3 are the port and\n> effective_cache_size (which I made slightly smaller -- 8 GB instead of 10 --\n> for the 2 write-heavy DBs). The 600 max connections are often idle and don't\n> get explicitly closed in the application. I'm looking at connection pooling\n> as well.\n\n> work_mem = 128MB\n\nI'd lower this unless you are certain that something like 16MB just\nisn't gonna get similar performance. Even with mostly connections\nidle, 128M is a rather large work_mem. Remember it's per sort, per\nconnection. It can quickly cause the kernel to dump file cache that\nkeeps the machine running fast if a couple dozen connections run a\nhandful of large sorts at once. What happens is that while things run\nsmooth when there's low to medium load, under high load the machine\nwill start thrashing trying to allocate too much work_mem and then\njust slow to a crawl.\n",
"msg_date": "Sat, 20 Aug 2011 22:01:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
},
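One way to act on that advice without hurting the handful of queries that genuinely need big sorts is to keep the server-wide work_mem modest and raise it only where it matters; the role name, sizes, and the example query below are placeholders:

    -- postgresql.conf: work_mem = 16MB   (server-wide default)

    -- Give one reporting role more room for its sorts and hashes.
    ALTER ROLE reporting SET work_mem = '128MB';

    -- Or raise it for a single statement only, inside a transaction.
    BEGIN;
    SET LOCAL work_mem = '256MB';
    SELECT customer_id, sum(total)
    FROM   orders
    GROUP  BY customer_id
    ORDER  BY 2 DESC;
    COMMIT;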
{
"msg_contents": "Thank you! \n ----- Original Message ----- \n From: Scott Marlowe \n To: Midge Brown \n Cc: [email protected] \n Sent: Saturday, August 20, 2011 9:01 PM\n Subject: Re: [PERFORM] settings input for upgrade\n\n\n On Thu, Aug 18, 2011 at 3:55 PM, Midge Brown wrote:\n > Here are the changes I made to postgres.conf. The only differences between\n > the conf file for DB1 and those for DB2 & 3 are the port and\n > effective_cache_size (which I made slightly smaller -- 8 GB instead of 10 --\n > for the 2 write-heavy DBs). The 600 max connections are often idle and don't\n > get explicitly closed in the application. I'm looking at connection pooling\n > as well.\n\n > work_mem = 128MB\n\n I'd lower this unless you are certain that something like 16MB just\n isn't gonna get similar performance. Even with mostly connections\n idle, 128M is a rather large work_mem. Remember it's per sort, per\n connection. It can quickly cause the kernel to dump file cache that\n keeps the machine running fast if a couple dozen connections run a\n handful of large sorts at once. What happens is that while things run\n smooth when there's low to medium load, under high load the machine\n will start thrashing trying to allocate too much work_mem and then\n just slow to a crawl.\n\n\n\n\n\n\nThank you! \n\n----- Original Message ----- \nFrom:\nScott \n Marlowe \nTo: Midge Brown \nCc: [email protected]\n\nSent: Saturday, August 20, 2011 9:01 \n PM\nSubject: Re: [PERFORM] settings input for \n upgrade\nOn Thu, Aug 18, 2011 at 3:55 PM, Midge \n Brown wrote:> Here are the changes I made to postgres.conf. The \n only differences between> the conf file for DB1 and those for DB2 & \n 3 are the port and> effective_cache_size (which I made slightly smaller \n -- 8 GB instead of 10 --> for the 2 write-heavy DBs). The 600 max \n connections are often idle and don't> get explicitly closed in the \n application. I'm looking at connection pooling> as well.> \n work_mem = 128MBI'd lower this unless you are certain that something \n like 16MB justisn't gonna get similar performance. Even with mostly \n connectionsidle, 128M is a rather large work_mem. Remember it's per \n sort, perconnection. It can quickly cause the kernel to dump file \n cache thatkeeps the machine running fast if a couple dozen connections run \n ahandful of large sorts at once. What happens is that while things \n runsmooth when there's low to medium load, under high load the \n machinewill start thrashing trying to allocate too much work_mem and \n thenjust slow to a crawl.",
"msg_date": "Sat, 20 Aug 2011 22:44:58 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "On Sat, Aug 20, 2011 at 8:33 PM, Midge Brown <[email protected]> wrote:\n> Robert,\n>\n> I was largely looking for input on whether I may have inadvertently shot\n> myself in the foot with some of the choices I made when setting up\n> postgresql 9.0, which is on different hardware than was the 7.4 setup.\n\nOK, I though the config change was the diff for the other two database\nand not for 9.0.\n\n> The splitting of one table to two separate databases was done on 7.4 and did\n> make a positive change in write performance. I was including that\n> information only in an attempt to provide as much detail as possible.\n\nGood to know! Thanks!\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sun, 21 Aug 2011 13:15:07 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "On 08/18/2011 05:55 PM, Midge Brown wrote:\n> DB1 is 10GB and consists of multiple tables that I've spread out so \n> that the 3 most used have their data and indexes on 6 separate RAID1 \n> drives, the 3 next busiest have data & index on 3 drives, and the \n> remaining tables and indexes are on the RAID10 drive. The WAL for all \n> is on a separate RAID1 drive.\n> DB2 is 25GB with data, index, and WAL all on separate RAID1 drives.\n> DB3 is 15GB with data, index, and WAL on separate RAID1 drives.\n\nAnytime you have a set of disks and a set of databases/tables to lay out \nonto them, there are two main options to consider:\n\n-Put all of them into a single RAID10 array. Performance will be high \nnow matter what subset of the database is being used. But if one \nparticular part of a database is really busy, it can divert resources \naway from the rest.\n\n-Break the database into fine-grained pieces and carefully lay out each \nof them on disk. Performance of any individual chunk will be steady \nhere. But if only a subset of the data is being used, server resources \nwill be idle. All of the disks that don't have data related to that \nwill be unused.\n\nConsider two configurations following these ideas:\n\n1) 12 disks are placed into a large RAID10 array. Peak transfer rate \nwill be about 600MB/s on sequential scans.\n\n2) 6 RAID1 arrays are created and the database is manually laid out onto \nthose disks. Peak transfer rate from any one section will be closer to \n100MB/s.\n\nEach of these is optimizing for a different use scenario. Here's the \nbest case for each:\n\n-One user is active, and they're hitting one of the database sections. \nIn setup (1) they might get 600MB/s, the case where it shows the most \nbenefit. In setup (2), they'd only get 100MB/s.\n\n-10 users are pounding one section of the database; 1 user is hitting a \ndifferent section. In setup (2), all 10 users will be fighting over \naccess to one section of the disk, each getting (at best) 10MB/s of its \ntransfers. The nature of random I/O means that it will likely be much \nworse for them. Meanwhile, the user hitting the other database section \nwill still be merrily chugging away getting their 100MB/s. Had setup \n(1) been used, you'd have 11 users fighting over 600MB/s, so at best \n55MB/s for each. And with the random mix, it could be much worse.\n\nWhich of these is better? Well, (1) is guaranteed to use your hardware \nto its fullest capability. There are some situations where contention \nover the disk array will cause performance to be lower for some people, \ncompared to if they had an isolated environment split up more like (2). \nBut the rest of the time, (2) will have taken a large number of disks \nand left them idle. The second example shows this really well. The \nmere fact that you have such a huge aggregate speed available means that \nthe big array really doesn't necessarily suffer that badly from a heavy \nload. It has 6X as much capacity to handle them. You really need to \nhave a >6:1 misbalance in access before the carefully laid out version \npulls ahead. In every other case, the big array wins.\n\nYou can defend (2) as the better choice if you have really compelling, \nhard data proving use of the various parts of the data is split quite \nevenly among the expected incoming workload. If you have response time \nlatency targets that require separating resources evenly among the \nvarious types of users, it can also make sense there. 
I don't know if \nthe data you've been collecting from your older version is good enough \nto know that for sure or not.\n\nIn every other case, you'd be better off just dumping the whole pile \ninto a single, large array, and letting the array and operating system \nfigure out how to schedule things best. That why this is the normal \npractice for building PostgreSQL systems. The sole exception is that \nsplitting out the pg_xlog filesystem can usually be justified in a \nlarger array. The fact that it's always sequential I/O means that \nmixing its work with the rest of the server doesn't work as well as \ngiving it a dedicated pair of drives to write to, where it doesn't ever \nstop to seek somewhere else.\n> wal_buffers = 32MB\n\nThis might as well drop to 16MB. And you've already gotten some \nwarnings about work_mem. Switching to a connection pooler would help \nwith that, too.\n\n\n> autovacuum_analyze_threshold = 250\n>\n> autovacuum_naptime = 10min\n>\n> autovacuum_vacuum_threshold = 250\n>\n> vacuum_cost_delay = 10ms\n>\n\nThis strikes me as more customization than you really should be doing to \nautovacuum, if you haven't been running on a recent version of \nPostgreSQL yet. You shouldn't ever need to touch the thresholds for \nexample. Those only matter on really small tables; once something gets \nbig enough to really matter, the threshold part is really small compared \nto the scale factor one. And the defaults are picked partly so that \ncleanup of the system catalog tables is done frequently enough. You're \nslowing that cleanup by moving the thresholds upward so much, and that's \nnot a great idea.\n\nFor similar reasons, you really shouldn't be touching autovacuum_naptime \nunless there's really good evidence it's necessary for your environment.\n\nChanging things such that regular vacuums executed at the command line \nhappen with a cost delay like this should be fine though. Those will \nhappen using twice as many resources as the autovacuum ones, but not run \nas fast as possible as in the normal case.\n\n> deadlock_timeout = 3s\n\nYou probably don't want to increase this. When you reach the point \nwhere you want to find slow lock issues by turning on log_lock_waits, \nyou're just going to put it right back to the default again--or lower it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 08/18/2011 05:55 PM, Midge Brown wrote:\n\n \nDB1 is 10GB and consists of multiple\ntables that I've spread out so that the 3 most used have their data and\nindexes on 6 separate RAID1 drives, the 3 next busiest have data &\nindex on 3 drives, and the remaining tables and indexes are on the\nRAID10 drive. The WAL for all is on a separate RAID1 drive.\nDB2 is 25GB with data, index, and\nWAL all on separate RAID1 drives.\nDB3 is 15GB with data, index, and\nWAL on separate RAID1 drives.\n\n\nAnytime you have a set of disks and a set of databases/tables to lay\nout onto them, there are two main options to consider:\n\n-Put all of them into a single RAID10 array. Performance will be high\nnow matter what subset of the database is being used. But if one\nparticular part of a database is really busy, it can divert resources\naway from the rest.\n\n-Break the database into fine-grained pieces and carefully lay out each\nof them on disk. Performance of any individual chunk will be steady\nhere. But if only a subset of the data is being used, server resources\nwill be idle. 
All of the disks that don't have data related to that\nwill be unused.\n\nConsider two configurations following these ideas:\n\n1) 12 disks are placed into a large RAID10 array. Peak transfer rate\nwill be about 600MB/s on sequential scans.\n\n2) 6 RAID1 arrays are created and the database is manually laid out\nonto those disks. Peak transfer rate from any one section will be\ncloser to 100MB/s.\n\nEach of these is optimizing for a different use scenario. Here's the\nbest case for each:\n\n-One user is active, and they're hitting one of the database sections. \nIn setup (1) they might get 600MB/s, the case where it shows the most\nbenefit. In setup (2), they'd only get 100MB/s.\n\n-10 users are pounding one section of the database; 1 user is hitting a\ndifferent section. In setup (2), all 10 users will be fighting over\naccess to one section of the disk, each getting (at best) 10MB/s of its\ntransfers. The nature of random I/O means that it will likely be much\nworse for them. Meanwhile, the user hitting the other database section\nwill still be merrily chugging away getting their 100MB/s. Had setup\n(1) been used, you'd have 11 users fighting over 600MB/s, so at best\n55MB/s for each. And with the random mix, it could be much worse.\n\nWhich of these is better? Well, (1) is guaranteed to use your hardware\nto its fullest capability. There are some situations where contention\nover the disk array will cause performance to be lower for some people,\ncompared to if they had an isolated environment split up more like\n(2). But the rest of the time, (2) will have taken a large number of\ndisks and left them idle. The second example shows this really well. \nThe mere fact that you have such a huge aggregate speed available means\nthat the big array really doesn't necessarily suffer that badly from a\nheavy load. It has 6X as much capacity to handle them. You really\nneed to have a >6:1 misbalance in access before the carefully laid\nout version pulls ahead. In every other case, the big array wins.\n\nYou can defend (2) as the better choice if you have really compelling,\nhard data proving use of the various parts of the data is split quite\nevenly among the expected incoming workload. If you have response time\nlatency targets that require separating resources evenly among the\nvarious types of users, it can also make sense there. I don't know if\nthe data you've been collecting from your older version is good enough\nto know that for sure or not.\n\nIn every other case, you'd be better off just dumping the whole pile\ninto a single, large array, and letting the array and operating system\nfigure out how to schedule things best. That why this is the normal\npractice for building PostgreSQL systems. The sole exception is that\nsplitting out the pg_xlog filesystem can usually be justified in a\nlarger array. The fact that it's always sequential I/O means that\nmixing its work with the rest of the server doesn't work as well as\ngiving it a dedicated pair of drives to write to, where it doesn't ever\nstop to seek somewhere else.\n \n\n wal_buffers\n= 32MB\n\n\nThis might as well drop to 16MB. And you've already gotten some\nwarnings about work_mem. 
Switching to a connection pooler would help\nwith that, too.\n\n\n\n autovacuum_analyze_threshold\n= 250\n autovacuum_naptime =\n10min\n\n\n autovacuum_vacuum_threshold = 250\n\n vacuum_cost_delay = 10ms\n\n\n\nThis strikes me as more customization than you really should be doing\nto autovacuum, if you haven't been running on a recent version of\nPostgreSQL yet. You shouldn't ever need to touch the thresholds for\nexample. Those only matter on really small tables; once something gets\nbig enough to really matter, the threshold part is really small\ncompared to the scale factor one. And the defaults are picked partly\nso that cleanup of the system catalog tables is done frequently\nenough. You're slowing that cleanup by moving the thresholds upward so\nmuch, and that's not a great idea. \n\nFor similar reasons, you really shouldn't be touching\nautovacuum_naptime unless there's really good evidence it's necessary\nfor your environment.\n\nChanging things such that regular vacuums executed at the command line\nhappen with a cost delay like this should be fine though. Those will\nhappen using twice as many resources as the autovacuum ones, but not\nrun as fast as possible as in the normal case.\n\n\n deadlock_timeout\n= 3s\n\n\n\nYou probably don't want to increase this. When you reach the point\nwhere you want to find slow lock issues by turning on log_lock_waits,\nyou're just going to put it right back to the default again--or lower\nit.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Sun, 21 Aug 2011 15:20:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
},
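One way (a generic sketch, not something posted in the thread) to review exactly this kind of hand-tuning is to list every parameter that has been changed from its built-in default:

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override')
    ORDER BY name;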
{
"msg_contents": "On Sun, Aug 21, 2011 at 1:20 PM, Greg Smith <[email protected]> wrote:\n> deadlock_timeout = 3s\n>\n> You probably don't want to increase this. When you reach the point where\n> you want to find slow lock issues by turning on log_lock_waits, you're just\n> going to put it right back to the default again--or lower it.\n\nAll of these random changes brings up the REAL subject, that changes\nshould be made by measuring performance before and after each change\nset and justifying each change. Just randomly throwing what seem like\ngood changes at the database is a surefire recipe for disaster.\n",
"msg_date": "Sun, 21 Aug 2011 19:26:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
},
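A low-risk way to follow that advice is to trial one setting per session and compare timings on a representative query before editing postgresql.conf; the table and column names below are placeholders only:

    EXPLAIN ANALYZE SELECT some_column, count(*) FROM some_big_table GROUP BY some_column ORDER BY 2 DESC;  -- baseline
    SET work_mem = '16MB';   -- the single candidate change
    EXPLAIN ANALYZE SELECT some_column, count(*) FROM some_big_table GROUP BY some_column ORDER BY 2 DESC;  -- rerun and compare
    RESET work_mem;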
{
"msg_contents": "Thank you. I'll set work_mem back to 16MB and see what happens from there.\n-Midge\n ----- Original Message ----- \n From: Scott Marlowe \n To: Midge Brown \n Cc: [email protected] \n Sent: Saturday, August 20, 2011 9:01 PM\n Subject: Re: [PERFORM] settings input for upgrade\n\n\n On Thu, Aug 18, 2011 at 3:55 PM, Midge Brown <[email protected]> wrote:\n > Here are the changes I made to postgres.conf. The only differences between\n > the conf file for DB1 and those for DB2 & 3 are the port and\n > effective_cache_size (which I made slightly smaller -- 8 GB instead of 10 --\n > for the 2 write-heavy DBs). The 600 max connections are often idle and don't\n > get explicitly closed in the application. I'm looking at connection pooling\n > as well.\n\n > work_mem = 128MB\n\n I'd lower this unless you are certain that something like 16MB just\n isn't gonna get similar performance. Even with mostly connections\n idle, 128M is a rather large work_mem. Remember it's per sort, per\n connection. It can quickly cause the kernel to dump file cache that\n keeps the machine running fast if a couple dozen connections run a\n handful of large sorts at once. What happens is that while things run\n smooth when there's low to medium load, under high load the machine\n will start thrashing trying to allocate too much work_mem and then\n just slow to a crawl.\n\n\n\n\n\n\n\nThank you. I'll set work_mem back to 16MB and see \nwhat happens from there.\n-Midge\n\n----- Original Message ----- \nFrom:\nScott \n Marlowe \nTo: Midge Brown \nCc: [email protected]\n\nSent: Saturday, August 20, 2011 9:01 \n PM\nSubject: Re: [PERFORM] settings input for \n upgrade\nOn Thu, Aug 18, 2011 at 3:55 PM, Midge Brown <[email protected]> \n wrote:> Here are the changes I made to postgres.conf. The only \n differences between> the conf file for DB1 and those for DB2 & 3 \n are the port and> effective_cache_size (which I made slightly smaller \n -- 8 GB instead of 10 --> for the 2 write-heavy DBs). The 600 max \n connections are often idle and don't> get explicitly closed in the \n application. I'm looking at connection pooling> as well.> \n work_mem = 128MBI'd lower this unless you are certain that something \n like 16MB justisn't gonna get similar performance. Even with mostly \n connectionsidle, 128M is a rather large work_mem. Remember it's per \n sort, perconnection. It can quickly cause the kernel to dump file \n cache thatkeeps the machine running fast if a couple dozen connections run \n ahandful of large sorts at once. What happens is that while things \n runsmooth when there's low to medium load, under high load the \n machinewill start thrashing trying to allocate too much work_mem and \n thenjust slow to a crawl.",
"msg_date": "Mon, 22 Aug 2011 09:34:14 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "Thank you so much for the input, and the detail provided. \n\nI'll be making the configuration changes, probably over the course of the week, checking the affect after each (as reminded by Scott Marlowe). I was pushed to put the new version into production over the weekend, which at least may provide me with some accurate feedback, and so will see what happens for a bit before addressing the disk/drive layout. \n\n-Midge\n\n ----- Original Message ----- \n From: Greg Smith \n To: [email protected] \n Sent: Sunday, August 21, 2011 12:20 PM\n Subject: Re: [PERFORM] settings input for upgrade\n\n\n On 08/18/2011 05:55 PM, Midge Brown wrote:\n\n\n DB1 is 10GB and consists of multiple tables that I've spread out so that the 3 most used have their data and indexes on 6 separate RAID1 drives, the 3 next busiest have data & index on 3 drives, and the remaining tables and indexes are on the RAID10 drive. The WAL for all is on a separate RAID1 drive. \n DB2 is 25GB with data, index, and WAL all on separate RAID1 drives.\n DB3 is 15GB with data, index, and WAL on separate RAID1 drives.\n\n Anytime you have a set of disks and a set of databases/tables to lay out onto them, there are two main options to consider:\n\n -Put all of them into a single RAID10 array. Performance will be high now matter what subset of the database is being used. But if one particular part of a database is really busy, it can divert resources away from the rest.\n\n -Break the database into fine-grained pieces and carefully lay out each of them on disk. Performance of any individual chunk will be steady here. But if only a subset of the data is being used, server resources will be idle. All of the disks that don't have data related to that will be unused.\n\n Consider two configurations following these ideas:\n\n 1) 12 disks are placed into a large RAID10 array. Peak transfer rate will be about 600MB/s on sequential scans.\n\n 2) 6 RAID1 arrays are created and the database is manually laid out onto those disks. Peak transfer rate from any one section will be closer to 100MB/s.\n\n Each of these is optimizing for a different use scenario. Here's the best case for each:\n\n -One user is active, and they're hitting one of the database sections. In setup (1) they might get 600MB/s, the case where it shows the most benefit. In setup (2), they'd only get 100MB/s.\n\n -10 users are pounding one section of the database; 1 user is hitting a different section. In setup (2), all 10 users will be fighting over access to one section of the disk, each getting (at best) 10MB/s of its transfers. The nature of random I/O means that it will likely be much worse for them. Meanwhile, the user hitting the other database section will still be merrily chugging away getting their 100MB/s. Had setup (1) been used, you'd have 11 users fighting over 600MB/s, so at best 55MB/s for each. And with the random mix, it could be much worse.\n\n Which of these is better? Well, (1) is guaranteed to use your hardware to its fullest capability. There are some situations where contention over the disk array will cause performance to be lower for some people, compared to if they had an isolated environment split up more like (2). But the rest of the time, (2) will have taken a large number of disks and left them idle. The second example shows this really well. The mere fact that you have such a huge aggregate speed available means that the big array really doesn't necessarily suffer that badly from a heavy load. 
It has 6X as much capacity to handle them. You really need to have a >6:1 misbalance in access before the carefully laid out version pulls ahead. In every other case, the big array wins.\n\n You can defend (2) as the better choice if you have really compelling, hard data proving use of the various parts of the data is split quite evenly among the expected incoming workload. If you have response time latency targets that require separating resources evenly among the various types of users, it can also make sense there. I don't know if the data you've been collecting from your older version is good enough to know that for sure or not.\n\n In every other case, you'd be better off just dumping the whole pile into a single, large array, and letting the array and operating system figure out how to schedule things best. That why this is the normal practice for building PostgreSQL systems. The sole exception is that splitting out the pg_xlog filesystem can usually be justified in a larger array. The fact that it's always sequential I/O means that mixing its work with the rest of the server doesn't work as well as giving it a dedicated pair of drives to write to, where it doesn't ever stop to seek somewhere else.\n \n wal_buffers = 32MB\n\n This might as well drop to 16MB. And you've already gotten some warnings about work_mem. Switching to a connection pooler would help with that, too.\n\n\n\n autovacuum_analyze_threshold = 250 \n autovacuum_naptime = 10min\n\n\n\n autovacuum_vacuum_threshold = 250\n\n vacuum_cost_delay = 10ms\n\n\n This strikes me as more customization than you really should be doing to autovacuum, if you haven't been running on a recent version of PostgreSQL yet. You shouldn't ever need to touch the thresholds for example. Those only matter on really small tables; once something gets big enough to really matter, the threshold part is really small compared to the scale factor one. And the defaults are picked partly so that cleanup of the system catalog tables is done frequently enough. You're slowing that cleanup by moving the thresholds upward so much, and that's not a great idea. \n\n For similar reasons, you really shouldn't be touching autovacuum_naptime unless there's really good evidence it's necessary for your environment.\n\n Changing things such that regular vacuums executed at the command line happen with a cost delay like this should be fine though. Those will happen using twice as many resources as the autovacuum ones, but not run as fast as possible as in the normal case.\n\n\n deadlock_timeout = 3s \n\n You probably don't want to increase this. When you reach the point where you want to find slow lock issues by turning on log_lock_waits, you're just going to put it right back to the default again--or lower it.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nThank you so much for the input, and the detail \nprovided. \n \nI'll be making the configuration changes, \nprobably over the course of the week, checking the affect after each (as \nreminded by Scott Marlowe). I was pushed to put the new version into \nproduction over the weekend, which at least may provide me with some \naccurate feedback, and so will see what happens for a bit before addressing \nthe disk/drive layout. 
\n \n-Midge\n \n\n----- Original Message ----- \nFrom:\nGreg \n Smith \nTo: [email protected]\n\nSent: Sunday, August 21, 2011 12:20 \n PM\nSubject: Re: [PERFORM] settings input for \n upgrade\nOn 08/18/2011 05:55 PM, Midge Brown wrote:\n\n DB1 is 10GB and consists of \n multiple tables that I've spread out so that the 3 most used have their \n data and indexes on 6 separate RAID1 drives, the 3 next busiest have data \n & index on 3 drives, and the remaining tables and indexes are on the \n RAID10 drive. The WAL for all is on a separate RAID1 drive.\nDB2 is 25GB with data, index, and WAL all on \n separate RAID1 drives.\nDB3 is 15GB with data, index, and WAL on \n separate RAID1 drives.Anytime you have a set of \n disks and a set of databases/tables to lay out onto them, there are two main \n options to consider:-Put all of them into a single RAID10 array. \n Performance will be high now matter what subset of the database is being \n used. But if one particular part of a database is really busy, it can \n divert resources away from the rest.-Break the database into \n fine-grained pieces and carefully lay out each of them on disk. \n Performance of any individual chunk will be steady here. But if only a \n subset of the data is being used, server resources will be idle. All of \n the disks that don't have data related to that will be unused.Consider \n two configurations following these ideas:1) 12 disks are placed into a \n large RAID10 array. Peak transfer rate will be about 600MB/s on \n sequential scans.2) 6 RAID1 arrays are created and the database is \n manually laid out onto those disks. Peak transfer rate from any one \n section will be closer to 100MB/s.Each of these is optimizing for a \n different use scenario. Here's the best case for each:-One user \n is active, and they're hitting one of the database sections. In setup \n (1) they might get 600MB/s, the case where it shows the most benefit. In \n setup (2), they'd only get 100MB/s.-10 users are pounding one section \n of the database; 1 user is hitting a different section. In setup (2), \n all 10 users will be fighting over access to one section of the disk, each \n getting (at best) 10MB/s of its transfers. The nature of random I/O means that \n it will likely be much worse for them. Meanwhile, the user hitting the \n other database section will still be merrily chugging away getting their \n 100MB/s. Had setup (1) been used, you'd have 11 users fighting over \n 600MB/s, so at best 55MB/s for each. And with the random mix, it could \n be much worse.Which of these is better? Well, (1) is guaranteed \n to use your hardware to its fullest capability. There are some \n situations where contention over the disk array will cause performance to be \n lower for some people, compared to if they had an isolated environment split \n up more like (2). But the rest of the time, (2) will have taken a large \n number of disks and left them idle. The second example shows this really \n well. The mere fact that you have such a huge aggregate speed available \n means that the big array really doesn't necessarily suffer that badly from a \n heavy load. It has 6X as much capacity to handle them. You really \n need to have a >6:1 misbalance in access before the carefully laid out \n version pulls ahead. In every other case, the big array wins.You \n can defend (2) as the better choice if you have really compelling, hard data \n proving use of the various parts of the data is split quite evenly among the \n expected incoming workload. 
If you have response time latency targets \n that require separating resources evenly among the various types of users, it \n can also make sense there. I don't know if the data you've been \n collecting from your older version is good enough to know that for sure or \n not.In every other case, you'd be better off just dumping the whole \n pile into a single, large array, and letting the array and operating system \n figure out how to schedule things best. That why this is the normal \n practice for building PostgreSQL systems. The sole exception is that \n splitting out the pg_xlog filesystem can usually be justified in a larger \n array. The fact that it's always sequential I/O means that mixing its \n work with the rest of the server doesn't work as well as giving it a dedicated \n pair of drives to write to, where it doesn't ever stop to seek somewhere \n else. \n \n wal_buffers = \n 32MBThis \n might as well drop to 16MB. And you've already gotten some warnings \n about work_mem. Switching to a connection pooler would help with that, \n too.\n\n autovacuum_analyze_threshold = \n 250\n autovacuum_naptime = \n 10min\n\n\n autovacuum_vacuum_threshold \n = 250\n vacuum_cost_delay = \n 10msThis strikes me as more \n customization than you really should be doing to autovacuum, if you haven't \n been running on a recent version of PostgreSQL yet. You shouldn't \n ever need to touch the thresholds for example. Those only matter on \n really small tables; once something gets big enough to really matter, the \n threshold part is really small compared to the scale factor one. And the \n defaults are picked partly so that cleanup of the system catalog tables is \n done frequently enough. You're slowing that cleanup by moving the \n thresholds upward so much, and that's not a great idea. For \n similar reasons, you really shouldn't be touching autovacuum_naptime unless \n there's really good evidence it's necessary for your environment.Changing things such \n that regular vacuums executed at the command line happen with a cost delay \n like this should be fine though. Those will happen using twice as many \n resources as the autovacuum ones, but not run as fast as possible as in the \n normal case.\n\n deadlock_timeout = \n 3s You \n probably don't want to increase this. When you reach the point where you \n want to find slow lock issues by turning on log_lock_waits, you're just going \n to put it right back to the default again--or lower it.-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Mon, 22 Aug 2011 09:48:40 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: settings input for upgrade"
},
{
"msg_contents": "On 08/22/2011 12:48 PM, Midge Brown wrote:\n> I was pushed to put the new version into production over the weekend, \n> which at least may provide me with some accurate feedback, and so will \n> see what happens for a bit before addressing the disk/drive layout.\n\nThe good news is that deploying onto the split up configuration will \ngive you lots of data to collect about just how the load is split up \nover the various parts of the database. You can just monitor which \ndrives the I/O goes to. If there are some that are really underused, \nand can arrange downtime and disk space to merge them together in the \nfuture, that may be an option for improving performance one day.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 08/22/2011 12:48 PM, Midge Brown wrote:\n\n \nI was pushed to put the new version\ninto production over the weekend, which at least may provide me\nwith some accurate feedback, and so will see what happens for a bit\nbefore addressing the disk/drive layout.\n\nThe good news is that deploying onto the split up configuration will\ngive you lots of data to collect about just how the load is split up\nover the various parts of the database. You can just monitor which\ndrives the I/O goes to. If there are some that are really underused, \nand can arrange downtime and disk space to merge them together in the\nfuture, that may be an option for improving performance one day.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Mon, 22 Aug 2011 12:53:00 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: settings input for upgrade"
}
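Since each busy table sits on its own spindle pair in this layout, the per-table block-read counters give a rough proxy for which drives are doing physical I/O. A sketch, not from the thread (counters are cumulative, so reset first for a fresh observation window):

    SELECT pg_stat_reset();   -- resets statistics for the current database
    -- ... let the normal workload run for a while, then:
    SELECT relname, heap_blks_read, idx_blks_read
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read + idx_blks_read DESC
    LIMIT 10;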
] |
[
{
"msg_contents": "Is there any performance benefit of using constant size tuples?\n",
"msg_date": "Fri, 19 Aug 2011 11:03:48 +0200",
"msg_from": "Krzysztof Chodak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Variable versus constrant size tuples"
},
{
"msg_contents": "On 8/19/2011 4:03 AM, Krzysztof Chodak wrote:\n> Is there any performance benefit of using constant size tuples?\n>\n\nIf you are referring to varchar(80) vs text, then no, there is no benefit.\n\n-Andy\n",
"msg_date": "Fri, 19 Aug 2011 10:03:10 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Variable versus constrant size tuples"
},
{
"msg_contents": "On Fri, Aug 19, 2011 at 4:03 AM, Krzysztof Chodak\n<[email protected]> wrote:\n> Is there any performance benefit of using constant size tuples?\n\nnot really. If your tuple size is under a known maximum length, then\na toast table doesn't have to be created. that's a pretty minor\ndetail though.\n\nmerlin\n",
"msg_date": "Fri, 19 Aug 2011 10:03:52 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Variable versus constrant size tuples"
},
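To check whether a particular table even has an associated TOAST table, something like the following works (a generic sketch; 'mytable' is a placeholder):

    SELECT c.relname, c.reltoastrelid <> 0 AS has_toast_table
    FROM pg_class c
    WHERE c.relname = 'mytable';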
{
"msg_contents": "Thank you. Now I see that page consists of record pointers list build\nfrom offset and length so there is no benefit of having constant\nlength here.\n\nOn Fri, Aug 19, 2011 at 17:03, Merlin Moncure <[email protected]> wrote:\n> On Fri, Aug 19, 2011 at 4:03 AM, Krzysztof Chodak\n> <[email protected]> wrote:\n>> Is there any performance benefit of using constant size tuples?\n>\n> not really. If your tuple size is under a known maximum length, then\n> a toast table doesn't have to be created. that's a pretty minor\n> detail though.\n>\n> merlin\n>\n",
"msg_date": "Fri, 19 Aug 2011 17:19:34 +0200",
"msg_from": "Krzysztof Chodak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Variable versus constrant size tuples"
}
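The line-pointer layout described above (each item pointer holding an offset and a length) can be inspected directly with the pageinspect contrib module (loaded from contrib on 8.4, CREATE EXTENSION pageinspect on 9.1 and later); the table name is a placeholder:

    SELECT lp, lp_off, lp_len
    FROM heap_page_items(get_raw_page('mytable', 0));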
] |
[
{
"msg_contents": "\nI'm buying a bunch of new machines (all will run an application that heavily\nwrites to PG). These machines will have 2 spindle groups in a RAID-1 config.\nDrives will be either 15K SAS, or 10K SATA (I haven't decided if it is \nbetter\nto buy the faster drives, or drives that are identical to the ones we are\nalready running in our production servers, thus achieving commonality in\nspares across all machines).\n\nController choice looks to be between Adaptec 6405, with the \nsupercapacitor unit;\nor LSI 9260-4i with its BBU. Price is roughly the same.\n\nWould be grateful for any thoughts on this choice.\n\nThanks.\n\n\n",
"msg_date": "Mon, 22 Aug 2011 20:42:56 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID Controllers"
},
{
"msg_contents": "On Mon, Aug 22, 2011 at 8:42 PM, David Boreham <[email protected]> wrote:\n>\n> I'm buying a bunch of new machines (all will run an application that heavily\n> writes to PG). These machines will have 2 spindle groups in a RAID-1 config.\n> Drives will be either 15K SAS, or 10K SATA (I haven't decided if it is\n> better\n> to buy the faster drives, or drives that are identical to the ones we are\n> already running in our production servers, thus achieving commonality in\n> spares across all machines).\n>\n> Controller choice looks to be between Adaptec 6405, with the supercapacitor\n> unit;\n> or LSI 9260-4i with its BBU. Price is roughly the same.\n>\n> Would be grateful for any thoughts on this choice.\n\nIf you're running linux and thus stuck with the command line on the\nLSI, I'd recommend anything else. MegaRAID is the hardest RAID\ncontrol software to use I've ever seen. If you can spring for the\nmoney, get the Areca 1680:\nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16816151023 Be\nsure and get the battery unit for it. You can configure it from an\nexternal ethernet connector very easily, and the performance is\noutstandingly good.\n",
"msg_date": "Mon, 22 Aug 2011 22:55:33 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "\nOn 8/22/2011 9:42 PM, David Boreham wrote:\n> I'm buying a bunch of new machines (all will run an application that heavily\n> writes to PG). These machines will have 2 spindle groups in a RAID-1 config.\n> Drives will be either 15K SAS, or 10K SATA (I haven't decided if it is\n> better\n> to buy the faster drives, or drives that are identical to the ones we are\n> already running in our production servers, thus achieving commonality in\n> spares across all machines).\n>\n> Controller choice looks to be between Adaptec 6405, with the\n> supercapacitor unit;\n> or LSI 9260-4i with its BBU. Price is roughly the same.\n>\n> Would be grateful for any thoughts on this choice.\n\nI'm by no means an expert but it seems to me if you're going to choose \nbetween two 6 GB/s cards you may as well put SAS2 drives in. I have two \nAdaptec 6445 cards in one of my boxes and several other Adaptec series 5 \ncontrollers in others. They suit my needs and I haven't had any \nproblems with them. I think it has been mentioned previously but they \ndo tend to run hot so plenty of airflow would be good.\n\nBob\n\n",
"msg_date": "Tue, 23 Aug 2011 06:14:52 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "On 8/23/2011 5:14 AM, Robert Schnabel wrote:\n>\n> I'm by no means an expert but it seems to me if you're going to choose \n> between two 6 GB/s cards you may as well put SAS2 drives in. I have \n> two Adaptec 6445 cards in one of my boxes and several other Adaptec \n> series 5 controllers in others. They suit my needs and I haven't had \n> any problems with them. I think it has been mentioned previously but \n> they do tend to run hot so plenty of airflow would be good.\n\nThanks. Good point about airflow. By SAS I meant 6Gbit SAS drives. But \nwe have many servers already with 10k raptors and it is tempting to use \nthose since we would be able to use a common pool of spare drives across \nall servers. 15K rpm is tempting though. I'm not sure if the DB \ntransaction commit rate scales up linearly when BBU is used (it would \nwithout BBU).\n\n\n",
"msg_date": "Tue, 23 Aug 2011 07:42:59 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "On August 22, 2011 09:55:33 PM Scott Marlowe wrote:\n>\n> If you're running linux and thus stuck with the command line on the\n> LSI, I'd recommend anything else. MegaRAID is the hardest RAID\n> control software to use I've ever seen. If you can spring for the\n> money, get the Areca 1680:\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023 Be\n> sure and get the battery unit for it. You can configure it from an\n> external ethernet connector very easily, and the performance is\n> outstandingly good.\n\nI second the Areca recommendation - excellent controllers. The 1880s are even \nbetter.\n",
"msg_date": "Tue, 23 Aug 2011 10:11:43 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "On 8/22/2011 10:55 PM, Scott Marlowe wrote:\n> If you're running linux and thus stuck with the command line on the\n> LSI, I'd recommend anything else. MegaRAID is the hardest RAID\n> control software to use I've ever seen. If you can spring for the\n> money, get the Areca 1680:\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023 Be\n> sure and get the battery unit for it. You can configure it from an\n> external ethernet connector very easily, and the performance is\n> outstandingly good.\nThanks. I took a look at Areca. The fan on the controller board is a big\nwarning signal for me (those fans are in my experience the single most\nunreliable component ever used in computers).\n\nCan you say a bit more about the likely problems with the CLI ?\nI'm thinking that I configure the card once, and copy the config\nto all the other boxes, so even if it's as obscure as Cisco IOS,\nhow bad can it be ? Is the concern more with things like a rebuild;\nmonitoring for drive failures -- that kind of constant management\ntask ?\n\nHow about Adaptec on Linux ? The supercapacitor and NAND\nflash idea looks like a good one, provided the firmware doesn't\nhave bugs (true with any write back controller though).\n\n\n",
"msg_date": "Tue, 23 Aug 2011 16:42:26 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "On Tue, Aug 23, 2011 at 4:42 PM, David Boreham <[email protected]> wrote:\n> On 8/22/2011 10:55 PM, Scott Marlowe wrote:\n>>\n>> If you're running linux and thus stuck with the command line on the\n>> LSI, I'd recommend anything else. MegaRAID is the hardest RAID\n>> control software to use I've ever seen. If you can spring for the\n>> money, get the Areca 1680:\n>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023 Be\n>> sure and get the battery unit for it. You can configure it from an\n>> external ethernet connector very easily, and the performance is\n>> outstandingly good.\n>\n> Thanks. I took a look at Areca. The fan on the controller board is a big\n> warning signal for me (those fans are in my experience the single most\n> unreliable component ever used in computers).\n\nI've been using Arecas for years. A dozen or more. Zero fan failures.\n 1 bad card, it came bad.\n\n> Can you say a bit more about the likely problems with the CLI ?\n\nThe MegaCLI interface is the single most difficult user interface I've\never used. Non-obvious and difficult syntax, google it. You'll get\nplenty of hits.\n\n> I'm thinking that I configure the card once, and copy the config\n> to all the other boxes, so even if it's as obscure as Cisco IOS,\n\nI've dealt with IOS and it's super easy to work with compared to MegaCLI.\n\n> how bad can it be ? Is the concern more with things like a rebuild;\n> monitoring for drive failures -- that kind of constant management\n> task ?\n\nAll of it. I've used it before just enough to never want to touch it\nagain. There's a cheat sheet here:\nhttp://tools.rapidsoft.de/perc/perc-cheat-sheet.html\n\n> How about Adaptec on Linux ? The supercapacitor and NAND\n> flash idea looks like a good one, provided the firmware doesn't\n> have bugs (true with any write back controller though).\n\nI haven't used the newer cards. Older ones had a bad rep for\nperformance but apparently their newer ones can be pretty darned good.\n",
"msg_date": "Tue, 23 Aug 2011 17:01:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controllers"
},
{
"msg_contents": "On 08/23/2011 06:42 PM, David Boreham wrote:\n> I took a look at Areca. The fan on the controller board is a big\n> warning signal for me (those fans are in my experience the single most\n> unreliable component ever used in computers).\n\nI have one of their really early/cheap models here, purchased in early \n2007 . The fan on it just died last month. Since this is my home \nsystem, I just got a replacement at Radio Shack and spliced the right \nconnector onto it; had it been a production server I would have bought a \nspare fan with the system.\n\nTo put this into perspective, that system is on its 3rd power supply, \nand has gone through at least 4 drive failures since installation.\n\n> Can you say a bit more about the likely problems with the CLI ?\n\nLet's see...this week I needed to figure out how to turn off the \nindividual drive caches on a LSI system, they are set at the factory to \n\"use disk's default\" which is really strange--leaves me not even sure \nwhat state that is. The magic incantation for that one was:\n\nMegaCli -LDSetProp DisDskCache -LALL -aALL\n\nThere's a certainly a learning curve there.\n\n> I'm thinking that I configure the card once, and copy the config\n> to all the other boxes, so even if it's as obscure as Cisco IOS,\n> how bad can it be ? Is the concern more with things like a rebuild;\n> monitoring for drive failures -- that kind of constant management\n> task ?\n\nYou can't just copy the configurations around. All you have are these \nlow-level things that fix individual settings. To get the same \nconfiguration on multiple systems, you need to script all of the \nchanges, and hope that all of the systems ship with the same defaults.\n\nWhat I do is dump the entire configuration and review that carefully for \neach deployment. It helps to have a checklist and patience.\n\n> How about Adaptec on Linux ? The supercapacitor and NAND\n> flash idea looks like a good one, provided the firmware doesn't\n> have bugs (true with any write back controller though).\n\nI only have one server with a recent Adaptec controller, a 5405. That \nseemed to be the generation of cards where Adaptec got their act \ntogether on Linux again, they benchmarked well in reviews and the \ndrivers seem reasonable. It's worked fine for the small server it's \ndeployed in. I haven't been able to test a larger array with one of \nthem yet, but it sounds like you're not planning to run one of those \nanyway. If I had 24 drives to connect, I'd prefer an LSI controller \njust because I know those scale fine to that level; I'm not sure how \nwell Adaptec does there. Haven't found anyone brave enough to try that \ntest yet.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 23 Aug 2011 23:28:11 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controllers"
}
] |
[
{
"msg_contents": "I am in the progress of an 8.3 to 8.4 upgrade for a customer. I seem to \nhave stumbled upon what looks like a regression. The two databases \n(8.3.14 and 8.4.8) have identical tuning parameters (where that makes \nsense) and run on identical hardware. Both databases are regularly \nvacuumed and analyzed (not by autovacuum), and performing an ANALYZE \ndoes not change the plans shown below.\n\nThe query below is a slightly simplified version of what a report uses. \nI note that 8.3 chooses a better plan (approx twice as fast) and 8.4 is \nmassively *overestimating* the rows from the top hash join (8.3 \nmassively underestimates them mind you). This row overestimating becomes \nmore of an issue when the remaining subqueries etc are added into the \nquery below - to the point where the 8.4 runtime goes into days unless. \nNow I have some ways around that (convert NOT IN to NOT EXISTS), but \nthere seems to be nothing I can do to speed this base query, which \nunfortunately is a common construction in the application. Any ideas?\n\nQuery and plans:\n\n EXPLAIN ANALYZE\n SELECT 1\n FROM correspondence c\n JOIN audit_log gen_al on (gen_al.audit_id = generated_audit_id)\n JOIN correspondence_master cm using(corresp_master_id)\n JOIN person p on(p.person_id = cm.person_id)\n\n WHERE c.corresp_type_id IN ('CL11', 'CL11A', 'CL12', 'CL15', \n'CL15A', 'CL16', 'DM_1', 'DM_2')\n AND cm.person_id IS NOT NULL\n AND gen_al.audit_timestamp > ('2011-08-19 \n13:05'::timestamp - '6 months'::interval)\n AND p.active = true\n AND p.exclude_walklist_alt = false\n AND p.postal_address_id is null\n AND p.unpublished = false\n AND p.enrolment_status_id in ('E', 'T')\n AND p.person_type in ('M', 'D', 'O')\n\nQUERY PLAN 8.3\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1710864.04..1835979.46 rows=33873 width=0) (actual \ntime=34520.372..39762.042 rows=220336 loops=1)\n Hash Cond: (p.person_id = cm.person_id)\n -> Seq Scan on person p (cost=0.00..115154.11 rows=2566020 \nwidth=4) (actual time=0.066..3347.560 rows=3101074 loops=1)\n Filter: (active AND (NOT exclude_walklist_alt) AND \n(postal_address_id IS NULL) AND (NOT unpublished) AND \n(enrolment_status_id = ANY ('{E,T}'::text[])) AND (person_type = ANY \n('{M,D,O}'::text[])))\n -> Hash (cost=1703761.54..1703761.54 rows=568200 width=4) (actual \ntime=34519.041..34519.041 rows=251911 loops=1)\n -> Hash Join (cost=793298.59..1703761.54 rows=568200 \nwidth=4) (actual time=8383.900..34414.612 rows=251911 loops=1)\n Hash Cond: (cm.corresp_master_id = c.corresp_master_id)\n -> Seq Scan on correspondence_master cm \n(cost=0.00..778644.91 rows=33636281 width=12) (actual \ntime=0.045..9651.799 rows=33966957 loops=1)\n Filter: (person_id IS NOT NULL)\n -> Hash (cost=785006.92..785006.92 rows=663333 \nwidth=8) (actual time=8260.951..8260.951 rows=358582 loops=1)\n -> Hash Join (cost=233226.31..785006.92 \nrows=663333 width=8) (actual time=7042.396..8140.430 rows=358582 loops=1)\n Hash Cond: (c.generated_audit_id = \ngen_al.audit_id)\n -> Bitmap Heap Scan on correspondence c \n(cost=74599.32..527309.84 rows=4103876 width=16) (actual \ntime=869.067..2474.081 rows=5297729 loops=1)\n Recheck Cond: (corresp_type_id = ANY \n('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Bitmap Index Scan on \ncorresp_type_fk (cost=0.00..73573.35 rows=4103876 width=0) (actual 
\ntime=834.201..834.201 rows=5297729 loops=1)\n Index Cond: (corresp_type_id = \nANY ('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Hash (cost=111084.39..111084.39 \nrows=2897808 width=8) (actual time=3373.619..3373.619 rows=2854079 loops=1)\n -> Index Scan using \naudit_log_audit_timestamp on audit_log gen_al (cost=0.00..111084.39 \nrows=2897808 width=8) (actual time=0.164..2322.894 rows=2854079 loops=1)\n Index Cond: (audit_timestamp > \n'2011-02-19 13:05:00'::timestamp without time zone)\n Total runtime: 39787.258 ms\n\n\n\nQUERY PLAN 8.4\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=914274.32..2577396.93 rows=3757429 width=0) (actual \ntime=11425.505..62959.205 rows=220336 loops=1)\n Hash Cond: (c.generated_audit_id = gen_al.audit_id)\n -> Hash Join (cost=761247.58..2313627.39 rows=3757429 width=8) \n(actual time=8335.459..58598.883 rows=3087353 loops=1)\n Hash Cond: (cm.person_id = p.person_id)\n -> Hash Join (cost=604036.44..2031002.71 rows=4202314 \nwidth=12) (actual time=3850.257..49067.879 rows=3586540 loops=1)\n Hash Cond: (cm.corresp_master_id = c.corresp_master_id)\n -> Seq Scan on correspondence_master cm \n(cost=0.00..777996.60 rows=33964871 width=12) (actual \ntime=0.024..10206.688 rows=33966957 loops=1)\n Filter: (person_id IS NOT NULL)\n -> Hash (cost=530987.51..530987.51 rows=4202314 \nwidth=16) (actual time=3848.577..3848.577 rows=5297729 loops=1)\n -> Bitmap Heap Scan on correspondence c \n(cost=76346.23..530987.51 rows=4202314 width=16) (actual \ntime=673.660..2272.497 rows=5297729 loops=1)\n Recheck Cond: (corresp_type_id = ANY \n('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Bitmap Index Scan on corresp_type_fk \n(cost=0.00..75295.65 rows=4202314 width=0) (actual time=640.301..640.301 \nrows=5297729 loops=1)\n Index Cond: (corresp_type_id = ANY \n('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Hash (cost=115091.03..115091.03 rows=2567289 width=4) \n(actual time=4484.737..4484.737 rows=3101076 loops=1)\n -> Seq Scan on person p (cost=0.00..115091.03 \nrows=2567289 width=4) (actual time=0.013..3406.661 rows=3101076 loops=1)\n Filter: (active AND (NOT exclude_walklist_alt) AND \n(postal_address_id IS NULL) AND (NOT unpublished) AND \n(enrolment_status_id = ANY ('{E,T}'::text[])) AND (person_type = ANY \n('{M,D,O}'::text[])))\n -> Hash (cost=107100.99..107100.99 rows=2799260 width=8) (actual \ntime=2962.039..2962.039 rows=2854070 loops=1)\n -> Index Scan using audit_log_audit_timestamp on audit_log \ngen_al (cost=0.00..107100.99 rows=2799260 width=8) (actual \ntime=0.181..2055.802 rows=2854070 loops=1)\n Index Cond: (audit_timestamp > '2011-02-19 \n13:05:00'::timestamp without time zone)\n Total runtime: 63007.358 ms\n\n\nCheers\n\nMark\n",
"msg_date": "Tue, 23 Aug 2011 16:52:15 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.4 optimization regression?"
},
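The NOT IN subqueries mentioned above are not shown in the post, so this is only a generic sketch of the rewrite being described: 8.4 can plan NOT EXISTS as an anti-join, while NOT IN usually cannot be optimized that way because of its NULL semantics. Table and column names are placeholders:

    -- original style:
    --   SELECT t.* FROM some_table t WHERE t.some_id NOT IN (SELECT o.some_id FROM other_table o);
    -- rewritten:
    SELECT t.*
    FROM some_table t
    WHERE NOT EXISTS (SELECT 1 FROM other_table o WHERE o.some_id = t.some_id);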
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> I am in the progress of an 8.3 to 8.4 upgrade for a customer. I seem to \n> have stumbled upon what looks like a regression. The two databases \n> (8.3.14 and 8.4.8) have identical tuning parameters (where that makes \n> sense) and run on identical hardware. Both databases are regularly \n> vacuumed and analyzed (not by autovacuum), and performing an ANALYZE \n> does not change the plans shown below.\n\nHmmm ... this is structurally a pretty simple query, so I'm surprised\nthat 8.3 and 8.4 see it very much differently. The relation-level\nestimates and plan choices are very nearly the same; the only thing\nthat's changed much is the estimates of the join sizes, and there were\nnot that many changes in the join selectivity estimation for simple\ninner joins. I wonder whether you are seeing a bad side-effect of this\npatch:\n\nhttp://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7f3eba30\n\nThat code would only be reached when one or both join columns lack MCV\nlists in pg_stats; if you had analyzed, the only reason for that to be\nthe case is if the column is unique (or nearly so, in ANALYZE's opinion).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Aug 2011 23:15:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 optimization regression? "
},
{
"msg_contents": "On 24/08/11 15:15, Tom Lane wrote:\n> Mark Kirkwood<[email protected]> writes:\n>> I am in the progress of an 8.3 to 8.4 upgrade for a customer. I seem to\n>> have stumbled upon what looks like a regression. The two databases\n>> (8.3.14 and 8.4.8) have identical tuning parameters (where that makes\n>> sense) and run on identical hardware. Both databases are regularly\n>> vacuumed and analyzed (not by autovacuum), and performing an ANALYZE\n>> does not change the plans shown below.\n> Hmmm ... this is structurally a pretty simple query, so I'm surprised\n> that 8.3 and 8.4 see it very much differently. The relation-level\n> estimates and plan choices are very nearly the same; the only thing\n> that's changed much is the estimates of the join sizes, and there were\n> not that many changes in the join selectivity estimation for simple\n> inner joins. I wonder whether you are seeing a bad side-effect of this\n> patch:\n>\n> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7f3eba30\n>\n> That code would only be reached when one or both join columns lack MCV\n> lists in pg_stats; if you had analyzed, the only reason for that to be\n> the case is if the column is unique (or nearly so, in ANALYZE's opinion).\n>\n\nRight that will be the case - audit_id is primary key for audit_log. \nStats entries for the join columns look like:\n\n=# SELECT tablename \n,attname,n_distinct,most_common_vals,most_common_freqs,histogram_bounds \nFROM pg_stats WHERE tablename IN ('correspondence','audit_log') AND \nattname IN ('audit_id','generated_audit_id');\n-[ RECORD 1 \n]-----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | correspondence\nattname | generated_audit_id\nn_distinct | 4625\nmost_common_vals | \n{11983812,15865407,4865496,717803,842478,725709,7255002,2389608,4604147,9996442,8693810,4604145,5916872,2389606,3135764,3307895,10527855,7254994,8959356,9595632,6279892,9595640,2604937,5916870,6279950,1180586,2604768,1180638,11526036,4451499,5252795,6279919,6279955,8958886,2604929,6279904,7543722,8959031,2604804,7543823,8958930,8959226,1180650,2604871,3530205,6279960,11051216,11051224,3530140,7838365,15060203,1180309,1180423,3530177,7543749,7543790,8959026,8959083,12834024,1180447,1180632,1180664,2604779,2604901,2604943,6279944,6280027,7543820,8958992,8959011,3530107,6279923,7543085,15866296,1180470,1180473,2604846,2604874,2604892,6279977,6280046,7543496,8958904,8958914,1180281,1180497,2604801,2604973,3529965,6280051,7543654,7543667,7543815,2604840,2604852,2604877,6279947,6279991,6280016,6280095}\nmost_common_freqs | 
\n{0.0787667,0.0769333,0.00906667,0.00886667,0.00826667,0.00593333,0.00326667,0.003,0.00293333,0.0027,0.00266667,0.00263333,0.00256667,0.0025,0.00246667,0.00203333,0.00203333,0.00196667,0.0019,0.00186667,0.00183333,0.0018,0.00173333,0.00173333,0.00173333,0.0017,0.0017,0.00166667,0.00166667,0.00163333,0.00163333,0.00163333,0.00163333,0.00163333,0.0016,0.0016,0.0016,0.0016,0.00156667,0.00156667,0.00156667,0.00156667,0.00153333,0.00153333,0.00153333,0.00153333,0.00153333,0.00153333,0.0015,0.0015,0.0015,0.00146667,0.00146667,0.00146667,0.00146667,0.00146667,0.00146667,0.00146667,0.00146667,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.00143333,0.0014,0.0014,0.0014,0.0014,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00136667,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.0013,0.0013,0.0013,0.0013,0.0013,0.0013,0.0013}\nhistogram_bounds | \n{-614,149124,436276,734992,1111802,1180324,1180449,1180481,1180507,1180610,1180640,1180656,1180672,1475625,1671884,1882852,2257454,2521497,2604750,2604785,2604821,2604857,2604895,2604923,2604957,2683740,3050195,3264561,3529673,3529821,3529894,3530041,3530072,3530093,3530125,3530151,3530181,3530216,3655474,3947599,4230064,4451407,4451648,4604143,4899541,5229325,5442183,5783894,6044973,6279792,6279830,6279872,6279934,6279988,6280024,6280057,6280087,6448106,6666623,6935161,7223774,7543005,7543220,7543548,7543678,7543706,7543733,7543763,7543785,7543831,7730234,8168222,8473126,8704950,8958785,8958894,8958920,8958946,8958981,8959021,8959054,8960124,8963427,9092223,9393810,9649295,9915513,10116459,10340456,10533434,10908764,11474630,12282455,13428124,14054953,14755339,15060207,15769093,16442810,17071416,17860068}\n-[ RECORD 2 \n]-----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | audit_log\nattname | audit_id\nn_distinct | -1\nmost_common_vals |\nmost_common_freqs |\nhistogram_bounds | 
\n{-899,172915,346206,520991,707646,900140,1090647,1274076,1455922,1631357,1802760,1992032,2160450,2341946,2514505,2670638,2851069,3031271,3190297,3359936,3536716,3706348,3899491,4067528,4232343,4405734,4574480,4753591,4930502,5122384,5287148,5460009,5657326,5824340,6020883,6214608,6409401,6606366,6779433,6945221,7123123,7294108,7495488,7649303,7816323,7997936,8191973,8362771,8526974,8733309,8911487,9099916,9289773,9472155,9661398,9825969,10004845,10176201,10351232,10527642,10680265,10853519,11040326,11229650,11422181,11605451,11806172,11985734,12171654,12364324,12559368,12729402,12912927,13073102,13287145,13455458,13649471,13826738,14004258,14187125,14356543,14539334,14715631,14895857,15060855,15231913,15404735,15577098,15742060,15901413,16088450,16270629,16458319,16650444,16826581,17003138,17158176,17315993,17497551,17687046,17879372}\n\n\nCheers\n\nMark\n\n\n\n\n",
"msg_date": "Wed, 24 Aug 2011 15:48:56 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "On 24/08/11 15:15, Tom Lane wrote:\n>\n> Hmmm ... this is structurally a pretty simple query, so I'm surprised\n> that 8.3 and 8.4 see it very much differently. The relation-level\n> estimates and plan choices are very nearly the same; the only thing\n> that's changed much is the estimates of the join sizes, and there were\n> not that many changes in the join selectivity estimation for simple\n> inner joins. I wonder whether you are seeing a bad side-effect of this\n> patch:\n>\n> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7f3eba30\n>\n\nHere is what the plan looks like with that patch reversed (it is back to \n8.3 speed too).\n\nQUERY PLAN 8.4 - 7f3eba30\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=948567.18..1882454.51 rows=349702 width=0) (actual \ntime=12320.037..38146.697 rows=217427 loops=1)\n Hash Cond: (cm.person_id = p.person_id)\n -> Hash Join (cost=791689.32..1702481.12 rows=581887 width=4) \n(actual time=7492.004..32727.783 rows=248441 loops=1)\n Hash Cond: (cm.corresp_master_id = c.corresp_master_id)\n -> Seq Scan on correspondence_master cm \n(cost=0.00..777460.25 rows=34003380 width=12) (actual \ntime=0.016..8977.181 rows=33960209 loops=1)\n Filter: (person_id IS NOT NULL)\n -> Hash (cost=783297.43..783297.43 rows=671351 width=8) \n(actual time=7375.019..7375.019 rows=354456 loops=1)\n -> Hash Join (cost=231577.28..783297.43 rows=671351 \nwidth=8) (actual time=6374.538..7257.067 rows=354456 loops=1)\n Hash Cond: (c.generated_audit_id = gen_al.audit_id)\n -> Bitmap Heap Scan on correspondence c \n(cost=77121.49..532445.85 rows=4247118 width=16) (actual \ntime=742.738..2790.225 rows=5293603 loops=1)\n Recheck Cond: (corresp_type_id = ANY \n('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Bitmap Index Scan on corresp_type_fk \n(cost=0.00..76059.71 rows=4247118 width=0) (actual time=708.164..708.164 \nrows=5293603 loops=1)\n Index Cond: (corresp_type_id = ANY \n('{CL11,CL11A,CL12,CL15,CL15A,CL16,DM_1,DM_2}'::text[]))\n -> Hash (cost=108073.47..108073.47 rows=2827066 \nwidth=8) (actual time=2759.145..2759.145 rows=2819891 loops=1)\n -> Index Scan using \naudit_log_audit_timestamp on audit_log gen_al (cost=0.00..108073.47 \nrows=2827066 width=8) (actual time=0.085..1800.175 rows=2819891 loops=1)\n Index Cond: (audit_timestamp > \n'2011-02-19 13:05:00'::timestamp without time zone)\n -> Hash (cost=115044.00..115044.00 rows=2549829 width=4) (actual \ntime=4827.310..4827.310 rows=3101177 loops=1)\n -> Seq Scan on person p (cost=0.00..115044.00 rows=2549829 \nwidth=4) (actual time=0.061..3600.767 rows=3101177 loops=1)\n Filter: (active AND (NOT exclude_walklist_alt) AND \n(postal_address_id IS NULL) AND (NOT unpublished) AND \n(enrolment_status_id = ANY ('{E,T}'::text[])) AND (person_type = ANY \n('{M,D,O}'::text[])))\n Total runtime: 38171.865 ms\n\n\nCheers\n\nMark\n\n",
"msg_date": "Wed, 24 Aug 2011 17:22:05 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "On 24/08/11 17:22, Mark Kirkwood wrote:\n> On 24/08/11 15:15, Tom Lane wrote:\n>>\n>> Hmmm ... this is structurally a pretty simple query, so I'm surprised\n>> that 8.3 and 8.4 see it very much differently. The relation-level\n>> estimates and plan choices are very nearly the same; the only thing\n>> that's changed much is the estimates of the join sizes, and there were\n>> not that many changes in the join selectivity estimation for simple\n>> inner joins. I wonder whether you are seeing a bad side-effect of this\n>> patch:\n>>\n>> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7f3eba30 \n>>\n>>\n>\n> Here is what the plan looks like with that patch reversed (it is back \n> to 8.3 speed too).\n>\n> QUERY PLAN 8.4 - 7f3eba30 (better plan snipped)\n>\n>\n\nI note from the commit message that the fix test case was from Grzegorz \nJaskiewicz (antijoin against a small subset of a relation). I was not \nable to find this in the archives - Grzegorz do you recall the actual \ntest case? I thought it might be useful for me to spend some time \nstudying both cases and seeing if I can come up with any tweaks that \nwould let both your and my queries work well!\n\nCheers\n\nMark\n\n",
"msg_date": "Tue, 30 Aug 2011 10:38:55 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "2011/8/29 Mark Kirkwood <[email protected]>:\n\n> I note from the commit message that the fix test case was from Grzegorz\n> Jaskiewicz (antijoin against a small subset of a relation). I was not able\n> to find this in the archives - Grzegorz do you recall the actual test case?\n> I thought it might be useful for me to spend some time studying both cases\n> and seeing if I can come up with any tweaks that would let both your and my\n> queries work well!\n\nSorry, I don't remember that particular example. If I complained about\nit, it would have been on this list or the general list.\nI'll have a look by date.\n\n-- \nGJ\n",
"msg_date": "Tue, 30 Aug 2011 10:43:48 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "On 30/08/11 21:43, Grzegorz Jaśkiewicz wrote:\n> 2011/8/29 Mark Kirkwood<[email protected]>:\n>\n>> I note from the commit message that the fix test case was from Grzegorz\n>> Jaskiewicz (antijoin against a small subset of a relation). I was not able\n>> to find this in the archives - Grzegorz do you recall the actual test case?\n>> I thought it might be useful for me to spend some time studying both cases\n>> and seeing if I can come up with any tweaks that would let both your and my\n>> queries work well!\n> Sorry, I don't remember that particular example. If I complained about\n> it, it would have been on this list or the general list.\n> I'll have a look by date.\n>\n\nThanks - however I think I have managed to make up a good test case that \nshows the particular commit working. More on that to come!\n\nCheers\n\nMark\n",
"msg_date": "Wed, 31 Aug 2011 17:11:56 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "On 24/08/11 15:15, Tom Lane wrote:\n> Mark Kirkwood<[email protected]> writes:\n>> I am in the progress of an 8.3 to 8.4 upgrade for a customer. I seem to\n>> have stumbled upon what looks like a regression. The two databases\n>> (8.3.14 and 8.4.8) have identical tuning parameters (where that makes\n>> sense) and run on identical hardware. Both databases are regularly\n>> vacuumed and analyzed (not by autovacuum), and performing an ANALYZE\n>> does not change the plans shown below.\n> Hmmm ... this is structurally a pretty simple query, so I'm surprised\n> that 8.3 and 8.4 see it very much differently. The relation-level\n> estimates and plan choices are very nearly the same; the only thing\n> that's changed much is the estimates of the join sizes, and there were\n> not that many changes in the join selectivity estimation for simple\n> inner joins. I wonder whether you are seeing a bad side-effect of this\n> patch:\n>\n> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7f3eba30\n>\n> That code would only be reached when one or both join columns lack MCV\n> lists in pg_stats; if you had analyzed, the only reason for that to be\n> the case is if the column is unique (or nearly so, in ANALYZE's opinion).\n>\n> \t\n\nI've come up with (hopefully) a good set of semi, anti and regular joins \nto demonstrate the effect of this commit. I've attached them, and the \nschema generator (I believe I've used this before for optimization \nexamples...).\n\nAlso I've tried out an experimental patch to make joins like the one I'm \nhaving trouble with *and* also the anti joins the commit was for - get \nbetter row estimates.\n\nSo firstly consider an anti join (these are run against git HEAD rather \nthan 8.4):\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 100000 \nAND NOT EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-12-01'::timestamp );\n\nWith commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=426079.34..765699.66 rows=1599293 width=0) \n(actual time=29907.716..47255.825 rows=1839193 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.373..11838.738 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29883.980..29883.980 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.339..29295.764 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\nWithout commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=426079.34..760501.96 rows=1 width=0) (actual \ntime=30409.336..47919.613 rows=1839193 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.359..12081.372 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=30392.235..30392.235 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.384..29806.407 rows=401678 loops=1)\n 
Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\nNote the rows estimate for the anti join is hopelessly wrong, so clearly \nthe commitdoes the job here (I think this models the test case for said \ncommit)!\n\nNow some joins:\n\nEXPLAIN ANALYZE SELECT 1 FROM NODE n JOIN nodekeyword nk ON (n.nodeid = \nnk.nodeid) WHERE n.updated > '2011-01-01'::timestamp AND nk.keywordid < \n100000;\n\nWith commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=501666.88..871512.65 rows=1991560 width=0) (actual \ntime=30032.836..53073.731 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.327..14393.629 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=30017.777..30017.777 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.005..23272.287 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\nWithout commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=501666.88..871510.70 rows=1991365 width=0) (actual \ntime=30549.498..54852.399 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.331..13760.417 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=30534.464..30534.464 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.005..23696.167 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\nAnother join:\n\nEXPLAIN ANALYZE SELECT 1 FROM NODE n JOIN nodekeyword nk ON (n.nodeid = \nnk.nodeid) WHERE n.updated > '2011-12-01'::timestamp AND nk.keywordid < \n100000;\n\nWith commit:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=426079.34..764424.63 rows=392267 width=0) (actual \ntime=29295.966..45578.876 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.452..12367.760 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29273.571..29273.571 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=10.899..28678.818 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\nWithout commit:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=426079.34..762064.41 rows=156245 width=0) (actual \ntime=29179.313..44605.243 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.486..11546.469 
rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29156.889..29156.889 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=10.915..28545.553 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\nSo in the case where we filer out a large percentage of the rows the \ncommit inflates the estimates...consider a more extreme example:\n\n\nEXPLAIN ANALYZE SELECT 1 FROM NODE n JOIN nodekeyword nk ON (n.nodeid = \nnk.nodeid) WHERE n.updated > '2011-12-27'::timestamp AND nk.keywordid < \n10000;\n\nWith commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..569488.45 rows=16344 width=0) (actual \ntime=55.452..65341.135 rows=604 loops=1)\n -> Seq Scan on node n (cost=0.00..419643.00 rows=16344 width=4) \n(actual time=13.537..46138.214 rows=14952 loops=1)\n Filter: (updated > '2011-12-27 00:00:00'::timestamp without \ntime zone)\n -> Index Scan using nodekeyword_pk on nodekeyword nk \n(cost=0.00..9.16 rows=1 width=4) (actual time=1.277..1.279 rows=0 \nloops=14952)\n Index Cond: ((nodeid = n.nodeid) AND (keywordid < 10000))\n\n\nWithout commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..569488.45 rows=631 width=0) (actual \ntime=43.969..64988.036 rows=604 loops=1)\n -> Seq Scan on node n (cost=0.00..419643.00 rows=16344 width=4) \n(actual time=2.060..46065.879 rows=14952 loops=1)\n Filter: (updated > '2011-12-27 00:00:00'::timestamp without \ntime zone)\n -> Index Scan using nodekeyword_pk on nodekeyword nk \n(cost=0.00..9.16 rows=1 width=4) (actual time=1.259..1.260 rows=0 \nloops=14952)\n Index Cond: ((nodeid = n.nodeid) AND (keywordid < 10000))\n\n\nSo clearly this commit is not so good for this type of join (this models \nthe case I posted initially).\n\nNow four semi joins:\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 100000 \nAND EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-12-01'::timestamp );\n\nWith commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..753629.40 rows=392267 width=0) \n(actual time=28405.965..43724.471 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.767..11561.340 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=28391.293..28391.293 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.038..27820.097 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\nWithout commit:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..780417.56 rows=1991560 width=0) \n(actual time=29447.638..44738.280 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on 
nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.771..11501.350 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29433.952..29433.952 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.040..28850.800 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\nClearly row estimation is hopelessly broken *without* this commit here.\n\nAnother semi join:\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 100000 \nAND EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-01-01'::timestamp );\n\nWith commit:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..871512.65 rows=1991560 width=0) \n(actual time=29048.154..51230.453 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.423..13430.618 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=29024.442..29024.442 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.010..22384.904 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\nWithout commit:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..871512.65 rows=1991560 width=0) \n(actual time=28914.970..51162.918 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.504..13780.506 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=28891.705..28891.705 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.008..22082.459 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\nAnother semi join:\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 10000 \nAND EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-01-01'::timestamp );\n\nWith commit:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..823736.17 rows=192921 width=0) \n(actual time=30120.347..49646.175 rows=199050 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.359..16335.889 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=30072.444..30072.444 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.009..23409.799 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\nWithout commit:\n 
QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..823736.17 rows=192921 width=0) \n(actual time=29395.513..48857.600 rows=199050 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.528..16261.983 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=29348.826..29348.826 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.009..22505.930 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\nFinal semi join:\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 10000 \nAND EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-12-01'::timestamp );\n\nWith commit:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..730392.78 rows=192921 width=0) \n(actual time=29060.665..44713.615 rows=16003 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.366..15064.457 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29026.017..29026.017 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.039..28441.039 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\nWithout commit:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..730392.78 rows=192921 width=0) \n(actual time=28969.107..43725.339 rows=16003 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.486..14198.613 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=28935.248..28935.248 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.047..28343.005 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\nWell this guy is too sneaky for either case :-(\n\nWe seem to need a patch variant that *only* clamps the estimates in the \nanti or semi join case, e.g (note against git HEAD):\n\ndiff --git a/src/backend/utils/adt/selfuncs.c \nb/src/backend/utils/adt/selfuncs.c\nindex e065826..bf5002f 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -2257,11 +2257,6 @@ eqjoinsel_inner(Oid operator,\n double nullfrac1 = stats1 ? stats1->stanullfrac : 0.0;\n double nullfrac2 = stats2 ? 
stats2->stanullfrac : 0.0;\n\n- if (vardata1->rel)\n- nd1 = Min(nd1, vardata1->rel->rows);\n- if (vardata2->rel)\n- nd2 = Min(nd2, vardata2->rel->rows);\n-\n selec = (1.0 - nullfrac1) * (1.0 - nullfrac2);\n if (nd1 > nd2)\n selec /= nd1;\n\n\nNow run all the queries, 1st the anti join:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=426079.34..765699.66 rows=1599293 width=0) \n(actual time=30121.008..48503.453 rows=1839193 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.623..12853.522 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=30108.058..30108.058 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.347..29508.393 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\nAnd the 3 joins:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=501666.88..871510.70 rows=1991365 width=0) (actual \ntime=30148.073..52370.308 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.291..13300.233 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=30124.453..30124.453 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.009..23334.774 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=426079.34..762064.41 rows=156245 width=0) (actual \ntime=29954.251..46014.379 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.420..12126.142 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29936.578..29936.578 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=10.934..29357.789 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..569488.45 rows=631 width=0) (actual \ntime=44.065..67179.686 rows=604 loops=1)\n -> Seq Scan on node n (cost=0.00..419643.00 rows=16344 width=4) \n(actual time=2.165..48523.075 rows=14952 loops=1)\n Filter: (updated > '2011-12-27 00:00:00'::timestamp without \ntime zone)\n -> Index Scan using nodekeyword_pk on nodekeyword nk \n(cost=0.00..9.16 rows=1 width=4) (actual time=1.241..1.242 rows=0 \nloops=14952)\n Index Cond: ((nodeid = n.nodeid) AND (keywordid < 10000))\n\n\nAnd the 4 semi joins...\n\n QUERY 
PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..753629.40 rows=392267 width=0) \n(actual time=29355.949..45220.958 rows=160587 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=5.731..11983.132 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=29342.387..29342.387 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.039..28763.514 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..871512.65 rows=1991560 width=0) \n(actual time=30823.334..53136.910 rows=1993866 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=12.555..13881.366 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=30800.017..30800.017 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.010..24028.932 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=501666.88..823736.17 rows=192921 width=0) \n(actual time=29278.861..48346.647 rows=199050 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.523..15809.390 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=29232.161..29232.161 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.009..22541.899 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=426079.34..730392.78 rows=192921 width=0) \n(actual time=28594.210..42976.001 rows=16003 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=12.581..13810.924 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=392267 width=4) (actual \ntime=28560.258..28560.258 rows=401678 loops=1)\n Buckets: 4096 Batches: 16 Memory Usage: 891kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=392267 \nwidth=4) (actual time=0.048..27983.235 rows=401678 loops=1)\n Filter: (updated > '2011-12-01 00:00:00'::timestamp \nwithout time zone)\n\n(this last one was wildly inaccurate pre patching)\n\nSo this looks quite encouraging (unless I have overlooked a set of \nqueries that now perform worse - which could be the case), thoughts?\n\nregards\n\nMark",
"msg_date": "Wed, 31 Aug 2011 18:07:32 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
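The schema generator Mark refers to was attached to the original post and is not reproduced in this archive. A minimal sketch of tables consistent with the plans above (table, column and index names are taken from the EXPLAIN output; the row counts and value distributions are rough guesses, not Mark's actual generator):

-- Hypothetical reconstruction, for experimentation only.
CREATE TABLE node (
    nodeid  integer PRIMARY KEY,
    updated timestamp NOT NULL
);

CREATE TABLE nodekeyword (
    nodeid    integer NOT NULL REFERENCES node (nodeid),
    keywordid integer NOT NULL,
    CONSTRAINT nodekeyword_pk PRIMARY KEY (nodeid, keywordid)
);

-- Roughly 5M nodes with "updated" spread over one year, and 2M keyword links
-- with keywordid values below 100000, so that predicates such as
-- "updated > '2011-12-01'" and "keywordid < 100000" select fractions of the
-- same order as those seen in the plans above.
INSERT INTO node
SELECT i, timestamp '2011-01-01' + random() * interval '365 days'
FROM generate_series(1, 5000000) AS i;

INSERT INTO nodekeyword
SELECT i, (i % 100000) + 1
FROM generate_series(1, 2000000) AS i;

ANALYZE node;
ANALYZE nodekeyword;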
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> [ assorted examples showing that commit\n> 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 has got problems ]\n\nThanks for the test cases. After playing with these for a bit I believe\nI've figured out the error in my previous thinking. Clamping the\nndistinct value like that can improve matters when applied to the inside\nrelation of a semi or anti join, but in all other cases it's just wrong.\nIf you think about what is happening in eqjoinsel_inner with the patch,\nwe are reducing the ndistinct estimate for the join key column\nproportionally to the selectivity of whatever baserel restrictions\napply. This then results in proportionally increasing the selectivity\nnumber for the join condition --- in other words, we're more or less\ncancelling out the effects of one or the other relation's base\nrestrictions. So that's pretty broken in general. The reason it is\nimportant for semi/antijoin inner relations is that this is actually the\nonly way that restrictions applied to the inner rel get to impact the\njoin size estimate at all, since set_joinrel_size_estimates is not going\nto factor the inner rel size into what it multiplies the join selectivity\nagainst.\n\nIn short, I was mistakenly extrapolating from the observation that it\nhelped to hack the ndistinct estimate for a semijoin's inner rel, to\nthe conclusion that we should do that for all join input rels.\n\nSo, not only are you correct that we should revert the changes to\neqjoinsel_inner, but what's happening in eqjoinsel_semi is wrong too.\nIt should only be clamping the ndistinct value for the inner side.\nAnd I think it needs to be taking that into account for the case where\nit does have MCVs as well as the case where it doesn't.\n\nSo I'll go back to this with hopefully a clearer picture of what's\nhappening. Thanks again for the test cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Aug 2011 18:21:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 optimization regression? "
},
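To make the cancellation concrete: with no MCV lists, the selfuncs.c snippet quoted earlier reduces to

    selec = (1 - nullfrac1) * (1 - nullfrac2) / max(nd1, nd2)

A rough worked example with round numbers chosen purely for illustration (they are not taken from Mark's tables): ignore nulls, let the inner side's join key have nd2 = 5,000,000 distinct values and the outer side far fewer, and let a WHERE clause on the inner relation keep only 400,000 rows. Unclamped, selec = 1/5,000,000; with nd2 clamped to the 400,000 restricted rows, selec = 1/400,000, which is 12.5 times larger. The estimated join size grows by the same factor, so the effect of the inner relation's restriction is largely cancelled out, which is the inflation visible in the plain-join plans above.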
{
"msg_contents": "I wrote:\n> Mark Kirkwood <[email protected]> writes:\n>> [ assorted examples showing that commit\n>> 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 has got problems ]\n\n> ...\n> So, not only are you correct that we should revert the changes to\n> eqjoinsel_inner, but what's happening in eqjoinsel_semi is wrong too.\n\nI've retested these examples with the patches I committed yesterday.\nSix of the eight examples are estimated pretty nearly dead on, while the\nother two are estimated about 50% too high (still a lot better than\nbefore). AFAICT there's no easy way to improve those estimates further;\neqjoinsel_semi just plain hasn't got enough information to know how many\nmatches there will be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Sep 2011 19:13:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 optimization regression? "
},
{
"msg_contents": "On 02/09/11 11:13, Tom Lane wrote:\n> I wrote:\n>> Mark Kirkwood<[email protected]> writes:\n>>> [ assorted examples showing that commit\n>>> 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 has got problems ]\n>> ...\n>> So, not only are you correct that we should revert the changes to\n>> eqjoinsel_inner, but what's happening in eqjoinsel_semi is wrong too.\n> I've retested these examples with the patches I committed yesterday.\n> Six of the eight examples are estimated pretty nearly dead on, while the\n> other two are estimated about 50% too high (still a lot better than\n> before). AFAICT there's no easy way to improve those estimates further;\n> eqjoinsel_semi just plain hasn't got enough information to know how many\n> matches there will be.\n>\n>\n\nJust noticed your two commits this morning and ran them through the \nexamples too - results look really good! Not only are the plain join \nqueries looking way better but that last semi join that was way off is \nnow being estimated pretty close. Should be interesting to see how much \nthis improves more complex queries!\n\nCheers\n\nMark\n\n",
"msg_date": "Fri, 02 Sep 2011 11:18:58 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "On 02/09/11 11:18, Mark Kirkwood wrote:\n> On 02/09/11 11:13, Tom Lane wrote:\n>> I wrote:\n>>> Mark Kirkwood<[email protected]> writes:\n>>>> [ assorted examples showing that commit\n>>>> 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 has got problems ]\n>>> ...\n>>> So, not only are you correct that we should revert the changes to\n>>> eqjoinsel_inner, but what's happening in eqjoinsel_semi is wrong too.\n>> I've retested these examples with the patches I committed yesterday.\n>> Six of the eight examples are estimated pretty nearly dead on, while the\n>> other two are estimated about 50% too high (still a lot better than\n>> before). AFAICT there's no easy way to improve those estimates further;\n>> eqjoinsel_semi just plain hasn't got enough information to know how many\n>> matches there will be.\n>>\n>>\n>\n> Just noticed your two commits this morning and ran them through the \n> examples too - results look really good! Not only are the plain join \n> queries looking way better but that last semi join that was way off is \n> now being estimated pretty close. Should be interesting to see how \n> much this improves more complex queries!\n>\n>\n\nWhile this is still fresh in your mind, a couple of additional anti join \nqueries are still managing to sneak past estimation:\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 100000 \nAND NOT EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-01-01'::timestamp );\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=501666.88..851597.05 rows=1 width=0) (actual \ntime=29956.971..50933.702 rows=5914 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=1991560 \nwidth=4) (actual time=13.352..13765.749 rows=1999780 loops=1)\n Filter: (keywordid < 100000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=29345.238..29345.238 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.010..22731.316 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\nEXPLAIN ANALYZE SELECT 1 FROM nodekeyword nk WHERE nk.keywordid < 10000 \nAND NOT EXISTS (SELECT 1 FROM node n WHERE n.nodeid = nk.nodeid AND \nn.updated > '2011-01-01'::timestamp );\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=501666.88..821806.96 rows=1 width=0) (actual \ntime=46497.231..49196.057 rows=566 loops=1)\n Hash Cond: (nk.nodeid = n.nodeid)\n -> Seq Scan on nodekeyword nk (cost=0.00..297414.03 rows=192921 \nwidth=4) (actual time=19.916..16250.224 rows=199616 loops=1)\n Filter: (keywordid < 10000)\n -> Hash (cost=419643.00..419643.00 rows=4999510 width=4) (actual \ntime=29901.178..29901.178 rows=4985269 loops=1)\n Buckets: 4096 Batches: 256 Memory Usage: 699kB\n -> Seq Scan on node n (cost=0.00..419643.00 rows=4999510 \nwidth=4) (actual time=0.008..23207.964 rows=4985269 loops=1)\n Filter: (updated > '2011-01-01 00:00:00'::timestamp \nwithout time zone)\n\n\n\n\n\n\n",
"msg_date": "Fri, 02 Sep 2011 11:54:23 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4 optimization regression?"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> While this is still fresh in your mind, a couple of additional anti join \n> queries are still managing to sneak past estimation:\n\nYeah, those are estimating that all the outer rows have join partners,\nbecause there are more distinct values in the sub-select than there are\nin the outer relation. AFAICS there are not any errors in the\nstatistics, it's just that the estimation rule falls down here.\n\nIf you've heard of a better estimator for semijoin/antijoin selectivity,\nI'm all ears. The best idea I have at the moment is to put an arbitrary\nupper limit on the estimated selectivity, but that would be, well,\narbitrary.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Sep 2011 18:14:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 optimization regression? "
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nI am working on an alert script to track the number of connections with the\nhost IPs to the Postgres cluster.\n\n1. I need all the host IPs making a connection to Postgres Cluster (even for\na fraction of second).\n2. I would also want to track number of IDLE connections, IDLE IN\nTRANSACTION connections and length of the connections as well.\n\nI would be making use of pg_stat_activity and also thought of enabling\nlogging the host ips in the db server log files which seems to be expensive\nfor me (in terms of IO and logfile size).\n\nPlease let me know you if there are any alternatives.\n\nThanks\nVenkat\n\nHello Everyone,I am working on an alert script to track the number of connections with the host IPs to the Postgres cluster.1. I need all the host IPs making a connection to Postgres Cluster (even for a fraction of second).\n2. I would also want to track number of IDLE connections, IDLE IN TRANSACTION connections and length of the connections as well.I would be making use of pg_stat_activity and also thought of enabling logging the host ips in the db server log files which seems to be expensive for me (in terms of IO and logfile size).\nPlease let me know you if there are any alternatives.ThanksVenkat",
"msg_date": "Wed, 24 Aug 2011 13:05:45 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "On Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:\n> Hello Everyone,\n> \n> I am working on an alert script to track the number of connections with the\n> host IPs to the Postgres cluster.\n> \n> 1. I need all the host IPs making a connection to Postgres Cluster (even for\n> a fraction of second).\n\nYou should set log_connections to on.\n\n> 2. I would also want to track number of IDLE connections, IDLE IN\n> TRANSACTION connections and length of the connections as well.\n> \n\nIDLE and IDLE in transactions are the kind of informations you get in\npg_stat_activity.\n\nLength of connections, you can get it with log_disconnections.\n\n> I would be making use of pg_stat_activity and also thought of enabling\n> logging the host ips in the db server log files which seems to be expensive\n> for me (in terms of IO and logfile size).\n> \n\nUsing pg_stat_activity won't get you really small connections. You need\nlog_connections for that, and log_disconnections for the duration of\nconnections. So you'll have to work on a tool that could get some\ninformations with queries on pg_stat_activity, and that could read\nPostgreSQL log files.\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Wed, 24 Aug 2011 09:49:04 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
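For reference, the logging approach Guillaume describes comes down to a few postgresql.conf settings. A minimal sketch (parameter names as in 8.4; the prefix format is just one reasonable choice):

# log every connection attempt and every disconnection (the latter includes session duration)
log_connections = on
log_disconnections = on
# prefix each log line with timestamp, remote host:port, user and database
log_line_prefix = '%t %r %u %d '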
{
"msg_contents": "Thanks Guillaume !!\n\nBut, if put log_connections to on and log_disconnections to on wouldn't the\nPostgres be logging in lot of data ?\n\nWill this not be IO intensive ? I understand that this is the best way, but,\nwould want to know if there is an other way to reduce IO ( may be through\nqueries to catalog tables ).\n\nThanks\nVenkat\n\nOn Wed, Aug 24, 2011 at 1:19 PM, Guillaume Lelarge\n<[email protected]>wrote:\n\n> On Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:\n> > Hello Everyone,\n> >\n> > I am working on an alert script to track the number of connections with\n> the\n> > host IPs to the Postgres cluster.\n> >\n> > 1. I need all the host IPs making a connection to Postgres Cluster (even\n> for\n> > a fraction of second).\n>\n> You should set log_connections to on.\n>\n> > 2. I would also want to track number of IDLE connections, IDLE IN\n> > TRANSACTION connections and length of the connections as well.\n> >\n>\n> IDLE and IDLE in transactions are the kind of informations you get in\n> pg_stat_activity.\n>\n> Length of connections, you can get it with log_disconnections.\n>\n> > I would be making use of pg_stat_activity and also thought of enabling\n> > logging the host ips in the db server log files which seems to be\n> expensive\n> > for me (in terms of IO and logfile size).\n> >\n>\n> Using pg_stat_activity won't get you really small connections. You need\n> log_connections for that, and log_disconnections for the duration of\n> connections. So you'll have to work on a tool that could get some\n> informations with queries on pg_stat_activity, and that could read\n> PostgreSQL log files.\n>\n>\n> --\n> Guillaume\n> http://blog.guillaume.lelarge.info\n> http://www.dalibo.com\n>\n>\n\nThanks Guillaume !!But, if put log_connections to on and log_disconnections to on wouldn't the Postgres be logging in lot of data ?Will this not be IO intensive ? I understand that this is the best way, but, would want to know if there is an other way to reduce IO ( may be through queries to catalog tables ).\nThanksVenkatOn Wed, Aug 24, 2011 at 1:19 PM, Guillaume Lelarge <[email protected]> wrote:\nOn Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:\n> Hello Everyone,\n>\n> I am working on an alert script to track the number of connections with the\n> host IPs to the Postgres cluster.\n>\n> 1. I need all the host IPs making a connection to Postgres Cluster (even for\n> a fraction of second).\n\nYou should set log_connections to on.\n\n> 2. I would also want to track number of IDLE connections, IDLE IN\n> TRANSACTION connections and length of the connections as well.\n>\n\nIDLE and IDLE in transactions are the kind of informations you get in\npg_stat_activity.\n\nLength of connections, you can get it with log_disconnections.\n\n> I would be making use of pg_stat_activity and also thought of enabling\n> logging the host ips in the db server log files which seems to be expensive\n> for me (in terms of IO and logfile size).\n>\n\nUsing pg_stat_activity won't get you really small connections. You need\nlog_connections for that, and log_disconnections for the duration of\nconnections. So you'll have to work on a tool that could get some\ninformations with queries on pg_stat_activity, and that could read\nPostgreSQL log files.\n\n\n--\nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com",
"msg_date": "Wed, 24 Aug 2011 16:37:02 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "pg_stat_activity keeps track of all this information.\n\nselect * from pg_stat_activity where datname='databasename';\n\n\n\nVenkat Balaji wrote:\n> Thanks Guillaume !!\n>\n> But, if put log_connections to on and log_disconnections to on \n> wouldn't the Postgres be logging in lot of data ?\n>\n> Will this not be IO intensive ? I understand that this is the best \n> way, but, would want to know if there is an other way to reduce IO ( \n> may be through queries to catalog tables ).\n>\n> Thanks\n> Venkat\n>\n> On Wed, Aug 24, 2011 at 1:19 PM, Guillaume Lelarge \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> On Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:\n> > Hello Everyone,\n> >\n> > I am working on an alert script to track the number of\n> connections with the\n> > host IPs to the Postgres cluster.\n> >\n> > 1. I need all the host IPs making a connection to Postgres\n> Cluster (even for\n> > a fraction of second).\n>\n> You should set log_connections to on.\n>\n> > 2. I would also want to track number of IDLE connections, IDLE IN\n> > TRANSACTION connections and length of the connections as well.\n> >\n>\n> IDLE and IDLE in transactions are the kind of informations you get in\n> pg_stat_activity.\n>\n> Length of connections, you can get it with log_disconnections.\n>\n> > I would be making use of pg_stat_activity and also thought of\n> enabling\n> > logging the host ips in the db server log files which seems to\n> be expensive\n> > for me (in terms of IO and logfile size).\n> >\n>\n> Using pg_stat_activity won't get you really small connections. You\n> need\n> log_connections for that, and log_disconnections for the duration of\n> connections. So you'll have to work on a tool that could get some\n> informations with queries on pg_stat_activity, and that could read\n> PostgreSQL log files.\n>\n>\n> --\n> Guillaume\n> http://blog.guillaume.lelarge.info\n> http://www.dalibo.com\n>\n>\n\n\n\n\n\n\n\npg_stat_activity keeps track of all this information.\n\nselect * from pg_stat_activity where datname='databasename';\n\n\n\nVenkat Balaji wrote:\nThanks Guillaume !!\n \n\nBut, if put log_connections to on and log_disconnections to on\nwouldn't the Postgres be logging in lot of data ?\n\n\nWill this not be IO intensive ? I understand that this is the\nbest way, but, would want to know if there is an other way to reduce IO\n( may be through queries to catalog tables ).\n\n\nThanks\nVenkat\n\nOn Wed, Aug 24, 2011 at 1:19 PM, Guillaume\nLelarge <[email protected]>\nwrote:\n\nOn Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji\nwrote:\n> Hello Everyone,\n>\n> I am working on an alert script to track the number of connections\nwith the\n> host IPs to the Postgres cluster.\n>\n> 1. I need all the host IPs making a connection to Postgres Cluster\n(even for\n> a fraction of second).\n\n\nYou should set log_connections to on.\n\n> 2. I would also want to track number of IDLE connections, IDLE IN\n> TRANSACTION connections and length of the connections as well.\n>\n\n\nIDLE and IDLE in transactions are the kind of informations you get in\npg_stat_activity.\n\nLength of connections, you can get it with log_disconnections.\n\n> I would be making use of pg_stat_activity and also thought of\nenabling\n> logging the host ips in the db server log files which seems to be\nexpensive\n> for me (in terms of IO and logfile size).\n>\n\n\nUsing pg_stat_activity won't get you really small connections. 
You need\nlog_connections for that, and log_disconnections for the duration of\nconnections. So you'll have to work on a tool that could get some\ninformations with queries on pg_stat_activity, and that could read\nPostgreSQL log files.\n\n\n--\nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com",
"msg_date": "Wed, 24 Aug 2011 16:39:18 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
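Since the discussion keeps coming back to pg_stat_activity, a per-host summary query may be more useful than selecting everything. A sketch, assuming the 8.4-era layout of the view, where idle sessions are reported through current_query:

-- one row per client host: total sessions, idle counts, and the oldest session
SELECT client_addr,
       count(*) AS connections,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle,
       sum(CASE WHEN current_query = '<IDLE> in transaction' THEN 1 ELSE 0 END) AS idle_in_transaction,
       max(now() - backend_start) AS longest_session
FROM pg_stat_activity
GROUP BY client_addr
ORDER BY connections DESC;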
{
"msg_contents": "But, the information vanishes if the application logs off.\n\nI am looking for an alternative to track the total amount of the connections\nwith the host IPs through a Cron job.\n\nWhat could be the frequency of cron ?\n\nI know the best is using log_connections and log_disconnections parameters,\nbut, information logged would be too high and is also IO intensive.\n\nThanks\nVenkat\n\nOn Wed, Aug 24, 2011 at 4:39 PM, Adarsh Sharma <[email protected]>wrote:\n\n> **\n> pg_stat_activity keeps track of all this information.\n>\n> select * from pg_stat_activity where datname='databasename';\n>\n>\n>\n>\n> Venkat Balaji wrote:\n>\n> Thanks Guillaume !!\n>\n> But, if put log_connections to on and log_disconnections to on wouldn't\n> the Postgres be logging in lot of data ?\n>\n> Will this not be IO intensive ? I understand that this is the best way,\n> but, would want to know if there is an other way to reduce IO ( may be\n> through queries to catalog tables ).\n>\n> Thanks\n> Venkat\n>\n> On Wed, Aug 24, 2011 at 1:19 PM, Guillaume Lelarge <[email protected]\n> > wrote:\n>\n>> On Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:\n>> > Hello Everyone,\n>> >\n>> > I am working on an alert script to track the number of connections with\n>> the\n>> > host IPs to the Postgres cluster.\n>> >\n>> > 1. I need all the host IPs making a connection to Postgres Cluster (even\n>> for\n>> > a fraction of second).\n>>\n>> You should set log_connections to on.\n>>\n>> > 2. I would also want to track number of IDLE connections, IDLE IN\n>> > TRANSACTION connections and length of the connections as well.\n>> >\n>>\n>> IDLE and IDLE in transactions are the kind of informations you get in\n>> pg_stat_activity.\n>>\n>> Length of connections, you can get it with log_disconnections.\n>>\n>> > I would be making use of pg_stat_activity and also thought of enabling\n>> > logging the host ips in the db server log files which seems to be\n>> expensive\n>> > for me (in terms of IO and logfile size).\n>> >\n>>\n>> Using pg_stat_activity won't get you really small connections. You need\n>> log_connections for that, and log_disconnections for the duration of\n>> connections. So you'll have to work on a tool that could get some\n>> informations with queries on pg_stat_activity, and that could read\n>> PostgreSQL log files.\n>>\n>>\n>> --\n>> Guillaume\n>> http://blog.guillaume.lelarge.info\n>> http://www.dalibo.com\n>>\n>>\n>\n>\n\nBut, the information vanishes if the application logs off.I am looking for an alternative to track the total amount of the connections with the host IPs through a Cron job.What could be the frequency of cron ?\nI know the best is using log_connections and log_disconnections parameters, but, information logged would be too high and is also IO intensive.ThanksVenkat\nOn Wed, Aug 24, 2011 at 4:39 PM, Adarsh Sharma <[email protected]> wrote:\n\n\npg_stat_activity keeps track of all this information.\n\nselect * from pg_stat_activity where datname='databasename';\n\n\n\nVenkat Balaji wrote:\nThanks Guillaume !!\n \n\nBut, if put log_connections to on and log_disconnections to on\nwouldn't the Postgres be logging in lot of data ?\n\n\nWill this not be IO intensive ? 
I understand that this is the\nbest way, but, would want to know if there is an other way to reduce IO\n( may be through queries to catalog tables ).\n\n\nThanks\nVenkat\n\nOn Wed, Aug 24, 2011 at 1:19 PM, Guillaume\nLelarge <[email protected]>\nwrote:\n\nOn Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji\nwrote:\n> Hello Everyone,\n>\n> I am working on an alert script to track the number of connections\nwith the\n> host IPs to the Postgres cluster.\n>\n> 1. I need all the host IPs making a connection to Postgres Cluster\n(even for\n> a fraction of second).\n\n\nYou should set log_connections to on.\n\n> 2. I would also want to track number of IDLE connections, IDLE IN\n> TRANSACTION connections and length of the connections as well.\n>\n\n\nIDLE and IDLE in transactions are the kind of informations you get in\npg_stat_activity.\n\nLength of connections, you can get it with log_disconnections.\n\n> I would be making use of pg_stat_activity and also thought of\nenabling\n> logging the host ips in the db server log files which seems to be\nexpensive\n> for me (in terms of IO and logfile size).\n>\n\n\nUsing pg_stat_activity won't get you really small connections. You need\nlog_connections for that, and log_disconnections for the duration of\nconnections. So you'll have to work on a tool that could get some\ninformations with queries on pg_stat_activity, and that could read\nPostgreSQL log files.\n\n\n--\nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com",
"msg_date": "Wed, 24 Aug 2011 16:51:14 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "On Wed, 2011-08-24 at 16:51 +0530, Venkat Balaji wrote:\n> But, the information vanishes if the application logs off.\n> \n\nThat's why you need a tool to track this.\n\n> I am looking for an alternative to track the total amount of the connections\n> with the host IPs through a Cron job.\n> \n\nIf you only want the number of connections, you can check_postgres.\n> What could be the frequency of cron ?\n> \n\nI don't think you can go below one second.\n\n> I know the best is using log_connections and log_disconnections parameters,\n> but, information logged would be too high and is also IO intensive.\n> \n\nSure. But if you want connection duration, that's the only way.\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Wed, 24 Aug 2011 14:16:57 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
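A minimal sketch of the counting query such a cron job could run (assuming a 9.0/9.1-era pg_stat_activity, where idle sessions show current_query = '<IDLE>'; on 9.2 and later the relevant columns are state and query instead):

    SELECT client_addr,
           count(*) AS total_connections,
           sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle,
           sum(CASE WHEN current_query = '<IDLE> in transaction' THEN 1 ELSE 0 END) AS idle_in_transaction
    FROM pg_stat_activity
    GROUP BY client_addr
    ORDER BY total_connections DESC;

This only counts whatever is connected at the moment the query runs, so it answers "how many, from where, right now" but not the short-lived-connection question discussed below.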
{
"msg_contents": "On 08/24/2011 07:07 AM, Venkat Balaji wrote:\n> But, if put log_connections to on and log_disconnections to on \n> wouldn't the Postgres be logging in lot of data ?\n> Will this not be IO intensive ? I understand that this is the best \n> way, but, would want to know if there is an other way to reduce IO ( \n> may be through queries to catalog tables ).\n>\n\nYour requirements include: \" I need all the host IPs making a \nconnection to Postgres Cluster (even for a fraction of second).\"\n\nThe only way to do this is to log every connection. Any other approach \nfor grabbing the data, such as looking at pg_stat_activity, will \nsometimes miss one.\n\nIf you're willing to lose a connection sometimes, a cron job that polls \npg_stat_activity and saves a summary of what it finds will normally use \nless resources. But connections that start and end between runs will be \nmissed.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 24 Aug 2011 11:33:22 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
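One possible shape for that polled summary, sketched with a hypothetical conn_snapshot table (column names taken from the 9.0-era pg_stat_activity view; adjust for newer releases):

    CREATE TABLE conn_snapshot (
        snap_time     timestamptz NOT NULL DEFAULT now(),
        client_addr   inet,
        usename       name,
        datname       name,
        current_query text
    );

    -- run from cron, e.g. once a minute:
    INSERT INTO conn_snapshot (client_addr, usename, datname, current_query)
    SELECT client_addr, usename, datname, current_query
    FROM pg_stat_activity;

As noted above, anything that connects and disconnects between two runs never shows up in the snapshots.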
{
"msg_contents": "On Wed, Aug 24, 2011 at 9:33 AM, Greg Smith <[email protected]> wrote:\n\n> On 08/24/2011 07:07 AM, Venkat Balaji wrote:\n>\n>> But, if put log_connections to on and log_disconnections to on wouldn't\n>> the Postgres be logging in lot of data ?\n>> Will this not be IO intensive ? I understand that this is the best way,\n>> but, would want to know if there is an other way to reduce IO ( may be\n>> through queries to catalog tables ).\n>>\n>>\n> Your requirements include: \" I need all the host IPs making a connection\n> to Postgres Cluster (even for a fraction of second).\"\n>\n> The only way to do this is to log every connection. Any other approach for\n> grabbing the data, such as looking at pg_stat_activity, will sometimes miss\n> one.\n>\n> If you're willing to lose a connection sometimes, a cron job that polls\n> pg_stat_activity and saves a summary of what it finds will normally use less\n> resources. But connections that start and end between runs will be missed.\n>\n>\nI suppose you could use tcpdump on a separate system with a mirrored switch\nport and have it log TCP SYN and FIN packets on port 5432 to your database\nserver only. Keeps all I/O off your database server.\n\n tcpdump -w port5423.log -n \"tcp and port 5432 and tcp[tcpflags] &\n(tcp-syn|tcp-fin) != 0 and host IP\"\n\nHTH.\n\nGreg\n\nOn Wed, Aug 24, 2011 at 9:33 AM, Greg Smith <[email protected]> wrote:\nOn 08/24/2011 07:07 AM, Venkat Balaji wrote:\n\nBut, if put log_connections to on and log_disconnections to on wouldn't the Postgres be logging in lot of data ?\nWill this not be IO intensive ? I understand that this is the best way, but, would want to know if there is an other way to reduce IO ( may be through queries to catalog tables ).\n\n\n\nYour requirements include: \" I need all the host IPs making a connection to Postgres Cluster (even for a fraction of second).\"\n\nThe only way to do this is to log every connection. Any other approach for grabbing the data, such as looking at pg_stat_activity, will sometimes miss one.\n\nIf you're willing to lose a connection sometimes, a cron job that polls pg_stat_activity and saves a summary of what it finds will normally use less resources. But connections that start and end between runs will be missed.\n\nI suppose you could use tcpdump on a separate system with a mirrored switch port and have it log TCP SYN and FIN packets on port 5432 to your database server only. Keeps all I/O off your database server.\n tcpdump -w port5423.log -n \"tcp and port 5432 and tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 and host IP\"HTH.Greg",
"msg_date": "Wed, 24 Aug 2011 10:46:37 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "> I suppose you could use tcpdump on a separate system with a mirrored switch\n> port and have it log TCP SYN and FIN packets on port 5432 to your database\n> server only. Keeps all I/O off your database server.\n> tcpdump -w port5423.log -n \"tcp and port 5432 and tcp[tcpflags] &\n> (tcp-syn|tcp-fin) != 0 and host IP\"\n\nThat's an excellent idea, but note that this will also log\nunsuccessful connection attempts (that is, successful TCP connections\nthat fail PostgreSQL authentication) without much of a way to\ndistinguish the two, especially if the connections are encrypted.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Wed, 24 Aug 2011 10:08:16 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 5:21 AM, Venkat Balaji <[email protected]> wrote:\n> But, the information vanishes if the application logs off.\n> I am looking for an alternative to track the total amount of the connections\n> with the host IPs through a Cron job.\n> What could be the frequency of cron ?\n> I know the best is using log_connections and log_disconnections parameters,\n> but, information logged would be too high and is also IO intensive.\n\nReally? Have you tested how much IO it will generate? My guess is\nnot that much. And on a database server it should be a miniscule\namount compared to what your DB is doing the rest of the time.\nEliminating this choice is premature optimization IMHO.\n",
"msg_date": "Wed, 24 Aug 2011 12:23:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "lately i did sth similar in one of our servers, to keep track of active, idle\nand idle in transaction connections so as to make some optimization in the\nconnection pooling and i didn't notice any serious io activity there (had\nthe cron job run every minute). so imho unless the server is seriously io\nbound at the moment, you won't notice any difference\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-track-number-of-connections-and-hosts-to-Postgres-cluster-tp4729546p4732518.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 24 Aug 2011 17:32:35 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to track number of connections and hosts to Postgres cluster"
},
{
"msg_contents": "Thanks to all for your very helpful replies !\n\nAs Greg Smith rightly said, i faced a problem of missing connections between\nthe runs. I even ran the cron every less than a second, but, still that\nwould become too many runs per second and later i need to take the burden of\ncalculating every thing from the log.\n\nI did not really calculate the IO load while the logging is on. I would\nswitch on \"log_connections\" and \"log_disconnections\" to log the number of\nconnections and duration of a connection.\n\nIf i notice high IO's and huge log generation, then i think Greg Spileburg\nhas suggested a good idea of using tcpdump on a different server. I would\nuse this utility and see how it works (never used it before). Greg\nSpileburg, please help me with any sources of documents you have to use\n\"tcpdump\".\n\nThanks again and sorry for replying late on this !\n\nRegards,\nVenkat\n\nOn Thu, Aug 25, 2011 at 6:02 AM, MirrorX <[email protected]> wrote:\n\n> lately i did sth similar in one of our servers, to keep track of active,\n> idle\n> and idle in transaction connections so as to make some optimization in the\n> connection pooling and i didn't notice any serious io activity there (had\n> the cron job run every minute). so imho unless the server is seriously io\n> bound at the moment, you won't notice any difference\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/How-to-track-number-of-connections-and-hosts-to-Postgres-cluster-tp4729546p4732518.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks to all for your very helpful replies !As Greg Smith rightly said, i faced a problem of missing connections between the runs. I even ran the cron every less than a second, but, still that would become too many runs per second and later i need to take the burden of calculating every thing from the log.\nI did not really calculate the IO load while the logging is on. I would switch on \"log_connections\" and \"log_disconnections\" to log the number of connections and duration of a connection.\nIf i notice high IO's and huge log generation, then i think Greg Spileburg has suggested a good idea of using tcpdump on a different server. I would use this utility and see how it works (never used it before). Greg Spileburg, please help me with any sources of documents you have to use \"tcpdump\".\nThanks again and sorry for replying late on this !Regards,VenkatOn Thu, Aug 25, 2011 at 6:02 AM, MirrorX <[email protected]> wrote:\nlately i did sth similar in one of our servers, to keep track of active, idle\nand idle in transaction connections so as to make some optimization in the\nconnection pooling and i didn't notice any serious io activity there (had\nthe cron job run every minute). so imho unless the server is seriously io\nbound at the moment, you won't notice any difference\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-track-number-of-connections-and-hosts-to-Postgres-cluster-tp4729546p4732518.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 30 Aug 2011 11:25:47 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
},
{
"msg_contents": "On Mon, Aug 29, 2011 at 11:55 PM, Venkat Balaji <[email protected]> wrote:\n> If i notice high IO's and huge log generation, then i think Greg Spileburg\n> has suggested a good idea of using tcpdump on a different server. I would\n> use this utility and see how it works (never used it before). Greg\n> Spileburg, please help me with any sources of documents you have to use\n> \"tcpdump\".\n\nThere's also a lot to be said for dumping to a dedicated local drive\nwith fsync turned off. They're logs so you can chance losing them by\nputting them on a cheap fast 7200 rpm SATA drive. If your logs take\nup more than a few megs a second then they are coming out really fast.\n Do you know what your log generation rate in bytes/second is?\n",
"msg_date": "Tue, 30 Aug 2011 00:39:43 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
},
{
"msg_contents": "Hi Scott,\n\nLog generation rate -\n\n500MB size of log file is generated within minimum 3 mins to maximum of 20\nmins depending on the database behavior.\n\nI did not understand the \"fsync\" stuff you mentioned. Please help me know\nhow would fsync is related to log generation or logging host IPs in the log\nfile ?\n\nThanks\nVenkat\n\nOn Tue, Aug 30, 2011 at 12:09 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Aug 29, 2011 at 11:55 PM, Venkat Balaji <[email protected]>\n> wrote:\n> > If i notice high IO's and huge log generation, then i think Greg\n> Spileburg\n> > has suggested a good idea of using tcpdump on a different server. I would\n> > use this utility and see how it works (never used it before). Greg\n> > Spileburg, please help me with any sources of documents you have to use\n> > \"tcpdump\".\n>\n> There's also a lot to be said for dumping to a dedicated local drive\n> with fsync turned off. They're logs so you can chance losing them by\n> putting them on a cheap fast 7200 rpm SATA drive. If your logs take\n> up more than a few megs a second then they are coming out really fast.\n> Do you know what your log generation rate in bytes/second is?\n>\n\nHi Scott,Log generation rate -500MB size of log file is generated within minimum 3 mins to maximum of 20 mins depending on the database behavior.I did not understand the \"fsync\" stuff you mentioned. Please help me know how would fsync is related to log generation or logging host IPs in the log file ?\nThanksVenkatOn Tue, Aug 30, 2011 at 12:09 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Aug 29, 2011 at 11:55 PM, Venkat Balaji <[email protected]> wrote:\n\n> If i notice high IO's and huge log generation, then i think Greg Spileburg\n> has suggested a good idea of using tcpdump on a different server. I would\n> use this utility and see how it works (never used it before). Greg\n> Spileburg, please help me with any sources of documents you have to use\n> \"tcpdump\".\n\nThere's also a lot to be said for dumping to a dedicated local drive\nwith fsync turned off. They're logs so you can chance losing them by\nputting them on a cheap fast 7200 rpm SATA drive. If your logs take\nup more than a few megs a second then they are coming out really fast.\n Do you know what your log generation rate in bytes/second is?",
"msg_date": "Fri, 2 Sep 2011 11:16:16 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
},
{
"msg_contents": "On Thu, Sep 1, 2011 at 11:46 PM, Venkat Balaji <[email protected]> wrote:\n> Hi Scott,\n> Log generation rate -\n> 500MB size of log file is generated within minimum 3 mins to maximum of 20\n> mins depending on the database behavior.\n> I did not understand the \"fsync\" stuff you mentioned. Please help me know\n> how would fsync is related to log generation or logging host IPs in the log\n\nSo you're generating logs at a rate of about 166MB a minute or 2.7MB/s\n Seagates from the early 90s are faster than that. Are you logging\nmore than just connections and disconnections? If you log just those\nwhat's the rate?\n\nfsync is when the OS says to write to disk and the disk confirms the\nwrite is complete. It probably doesn't matter here whether the file\nsystem is using a journaling method that's real safe or not, and you\ncan go to something like ext2 where there's no journaling and probably\ndo fine on a dedicated SATA drive or pair if you want them redundant.\n\nThe real issue then is what to do with old log files. Right now\nyou're creating them at 10G an hour, or 240G a day. So you'll need\nsome cron job to go in and delete the old ones. Still with a 1TB\ndrive it'll take about 4 days to fill up, so it's not like you're\ngonna run out of space in a few minutes or anything.\n\nSince log files are pretty much written sequentially they don't need\nthe fastest drives ever made. Most modern 7200RPM 3.5\" SATA drives\ncan write at least at 50 or 60 MB/s on their slowest portions. Just\nrotate them hourly or daily or whatever and process them and delete\nthem.\n",
"msg_date": "Fri, 2 Sep 2011 00:42:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
},
{
"msg_contents": "Hi Scott,\n\nYes, we are logging connections and disconnections with duration as well.\n\nWe have process of rolling out at every 500MB and old log files are deleted\nbefore a certain period of time.\n\nThanks a lot for your help !\n\nRegards,\nVenkat\n\nOn Fri, Sep 2, 2011 at 12:12 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Sep 1, 2011 at 11:46 PM, Venkat Balaji <[email protected]>\n> wrote:\n> > Hi Scott,\n> > Log generation rate -\n> > 500MB size of log file is generated within minimum 3 mins to maximum of\n> 20\n> > mins depending on the database behavior.\n> > I did not understand the \"fsync\" stuff you mentioned. Please help me know\n> > how would fsync is related to log generation or logging host IPs in the\n> log\n>\n> So you're generating logs at a rate of about 166MB a minute or 2.7MB/s\n> Seagates from the early 90s are faster than that. Are you logging\n> more than just connections and disconnections? If you log just those\n> what's the rate?\n>\n> fsync is when the OS says to write to disk and the disk confirms the\n> write is complete. It probably doesn't matter here whether the file\n> system is using a journaling method that's real safe or not, and you\n> can go to something like ext2 where there's no journaling and probably\n> do fine on a dedicated SATA drive or pair if you want them redundant.\n>\n> The real issue then is what to do with old log files. Right now\n> you're creating them at 10G an hour, or 240G a day. So you'll need\n> some cron job to go in and delete the old ones. Still with a 1TB\n> drive it'll take about 4 days to fill up, so it's not like you're\n> gonna run out of space in a few minutes or anything.\n>\n> Since log files are pretty much written sequentially they don't need\n> the fastest drives ever made. Most modern 7200RPM 3.5\" SATA drives\n> can write at least at 50 or 60 MB/s on their slowest portions. Just\n> rotate them hourly or daily or whatever and process them and delete\n> them.\n>\n\nHi Scott,Yes, we are logging connections and disconnections with duration as well.We have process of rolling out at every 500MB and old log files are deleted before a certain period of time.\nThanks a lot for your help !Regards,VenkatOn Fri, Sep 2, 2011 at 12:12 PM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Sep 1, 2011 at 11:46 PM, Venkat Balaji <[email protected]> wrote:\n\n> Hi Scott,\n> Log generation rate -\n> 500MB size of log file is generated within minimum 3 mins to maximum of 20\n> mins depending on the database behavior.\n> I did not understand the \"fsync\" stuff you mentioned. Please help me know\n> how would fsync is related to log generation or logging host IPs in the log\n\nSo you're generating logs at a rate of about 166MB a minute or 2.7MB/s\n Seagates from the early 90s are faster than that. Are you logging\nmore than just connections and disconnections? If you log just those\nwhat's the rate?\n\nfsync is when the OS says to write to disk and the disk confirms the\nwrite is complete. It probably doesn't matter here whether the file\nsystem is using a journaling method that's real safe or not, and you\ncan go to something like ext2 where there's no journaling and probably\ndo fine on a dedicated SATA drive or pair if you want them redundant.\n\nThe real issue then is what to do with old log files. Right now\nyou're creating them at 10G an hour, or 240G a day. So you'll need\nsome cron job to go in and delete the old ones. 
Still with a 1TB\ndrive it'll take about 4 days to fill up, so it's not like you're\ngonna run out of space in a few minutes or anything.\n\nSince log files are pretty much written sequentially they don't need\nthe fastest drives ever made. Most modern 7200RPM 3.5\" SATA drives\ncan write at least at 50 or 60 MB/s on their slowest portions. Just\nrotate them hourly or daily or whatever and process them and delete\nthem.",
"msg_date": "Sun, 4 Sep 2011 13:39:43 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
},
{
"msg_contents": "On Tue, Aug 30, 2011 at 12:55 AM, Venkat Balaji <[email protected]> wrote:\n> Thanks to all for your very helpful replies !\n> As Greg Smith rightly said, i faced a problem of missing connections between\n> the runs. I even ran the cron every less than a second, but, still that\n> would become too many runs per second and later i need to take the burden of\n> calculating every thing from the log.\n> I did not really calculate the IO load while the logging is on. I would\n> switch on \"log_connections\" and \"log_disconnections\" to log the number of\n> connections and duration of a connection.\n\nyet another reason why we need connection and disconnection triggers\n(especially the former, since disconnection triggers can be kludged\nwith an on_proc_exit/dblink hook).\n\nmerlin\n",
"msg_date": "Mon, 12 Sep 2011 12:00:08 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: How to track number of connections and hosts to\n\tPostgres cluster"
}
] |
[
{
"msg_contents": "I have a index in a table. The value of the reltuples value in the pg_class\ntable for this index is less than the number of rows in the table where the\nindex is present.\nFor eg. if i have 800 rows in the table , the reltuples in the pg_class for\nthe index show the value as 769.\nWhat are the scenarios under which this kind of behaviour occurs?\nAlso iam not able to execute the queries involving the indexed column in the\nwhere clause.\nPlease suggest the probable cause for this. \nAlso if i do a reindex on the table, the reltuple values updates to the\ncurrent no. of rows ie. 800.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/reltuples-value-less-than-rows-in-the-table-tp4729917p4729917.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 24 Aug 2011 03:04:36 -0700 (PDT)",
"msg_from": "parul <[email protected]>",
"msg_from_op": true,
"msg_subject": "reltuples value less than rows in the table."
},
{
"msg_contents": "parul <[email protected]> wrote:\n \n> I have a index in a table. The value of the reltuples value in the\n> pg_class table for this index is less than the number of rows in\n> the table where the index is present.\n> For eg. if i have 800 rows in the table , the reltuples in the\n> pg_class for the index show the value as 769.\n> What are the scenarios under which this kind of behaviour occurs?\n \nThe fine manual explains it here:\n \nhttp://www.postgresql.org/docs/9.0/interactive/catalog-pg-class.html\n \nTo quote:\n \n| Number of rows in the table. This is only an estimate used by the\n| planner. It is updated by VACUUM, ANALYZE, and a few DDL commands\n| such as CREATE INDEX.\n \nOther operations which affect the number of rows in the table won't\nchange this column, so it can be off by a bit until the next\nautovacuum or explicit operation which sets a new estimate.\n \n> Also iam not able to execute the queries involving the indexed\n> column in the where clause.\n \nAre you saying that you expect that a plan would have been chosen\nwhich would have used the index, but it is choosing some other plan?\nIf so, your best bet is to present the actual query.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 25 Aug 2011 09:43:46 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reltuples value less than rows in the table."
}
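A quick way to see this behaviour, using a hypothetical table t and its index t_idx:

    SELECT relname, relkind, reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname IN ('t', 't_idx');

    SELECT count(*) FROM t;   -- exact row count, for comparison

    ANALYZE t;                -- refreshes the reltuples estimate (VACUUM and CREATE INDEX/REINDEX do too)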
] |
[
{
"msg_contents": "\nApologies if this has already been posted here (I hadn't seen it before \ntoday, and\ncan't find a previous post).\nThis will be of interest to anyone looking at using SSDs for database \nstorage :\nhttp://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-enterprise-server-storage-application-specification-addendum.html\n\n\n",
"msg_date": "Wed, 24 Aug 2011 10:58:55 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel 320 SSD info"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 11:58 AM, David Boreham <[email protected]> wrote:\n>\n> Apologies if this has already been posted here (I hadn't seen it before\n> today, and\n> can't find a previous post).\n> This will be of interest to anyone looking at using SSDs for database\n> storage :\n> http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-enterprise-server-storage-application-specification-addendum.html\n\nhm, I think they need to reconcile those numbers with the ones on this\npage: http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-320-series.html\n\n600 write ips vs 3.7k/23k.\n\nmerlin\n",
"msg_date": "Wed, 24 Aug 2011 12:17:57 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On 8/24/2011 11:17 AM, Merlin Moncure wrote:\n>\n> hm, I think they need to reconcile those numbers with the ones on this\n> page: http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-320-series.html\n>\n> 600 write ips vs 3.7k/23k.\n>\n>\n\nThey do provide an explanation (and what I find interesting about this \ndocument is that they are basically \"coming clean\" about the real \nworst-case performance, which I personally find refreshing and \nencouraging). The difference is that the high number is achieved if the \ndrive does not need to perform a block erase to process the write (this \nis true most of the time since the capacity is over-provisioned and \nthere is an expectation that GC will have generated free blocks in the \nbackground). The low number is the performance under worst-case \nconditions where the drive is a) full and b) no blocks have been \ntrimmed, and c) GC wasn't able to run yet.\n\nI suspect that in production use it will be possible to predict in \nadvance when the drive is approaching the point where it will run out of \nfree blocks, and hence perform poorly. Whether or not this is possible \nis a big question for us in planning our transition to SSDs in production.\n\nAnyone using SSDs should be aware of how they work and the possible \nworst case performance. This article helps with that !\n\n\n\n\n",
"msg_date": "Wed, 24 Aug 2011 11:23:15 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "According to the specs for database storage:\n\n\"Random 4KB arites: Up to 600 IOPS\"\n\nIs that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much faster than mechanical disks.\n\nHas anyone done any performance benchmark of 320 used as a DB storage? Is it really that slow?\n\n\n________________________________\nFrom: David Boreham <[email protected]>\nTo: [email protected]\nSent: Wednesday, August 24, 2011 12:58 PM\nSubject: [PERFORM] Intel 320 SSD info\n\n\nApologies if this has already been posted here (I hadn't seen it before today, and\ncan't find a previous post).\nThis will be of interest to anyone looking at using SSDs for database storage :\nhttp://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-enterprise-server-storage-application-specification-addendum.html\n\n\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nAccording to the specs for database storage:\"Random 4KB arites: Up to 600 IOPS\"Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much faster than mechanical disks.Has anyone done any performance benchmark of 320 used as a DB storage? Is it really that slow?From: David Boreham <[email protected]>To:\n [email protected]: Wednesday, August 24, 2011 12:58 PMSubject: [PERFORM] Intel 320 SSD infoApologies if this has already been posted here (I hadn't seen it before today, andcan't find a previous post).This will be of interest to anyone looking at using SSDs for database storage :http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-enterprise-server-storage-application-specification-addendum.html-- Sent via pgsql-performance mailing list ([email protected])To make changes to your\n subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 24 Aug 2011 10:23:49 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On 8/24/2011 11:23 AM, Andy wrote:\n> According to the specs for database storage:\n>\n> \"Random 4KB arites: Up to 600 IOPS\"\n>\n> Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not \n> much faster than mechanical disks.\n>\n>\nThe underlying (Flash block) write rate really is terrible (and slower \nthan most rotating disks).\n\nThe trick with SSD is that firmware performs all kinds of stunts to make \nthe performance seen\nby the OS much higher (most of the time !). This is akin to write-back \ncaching in a raid controller,\nfor example, where much higher write rates than the physical drives \nsupport are achievable.\n\n\n\n\n\n\n\n\n On 8/24/2011 11:23 AM, Andy wrote:\n \n\nAccording to the specs for database storage:\n\n\n\"Random 4KB arites: Up to 600 IOPS\"\n\n\nIs that for real? 600 IOPS is *atrociously terrible*\n for an SSD. Not much faster than mechanical disks.\n\n\n\n\n\n The underlying (Flash block) write rate really is terrible (and\n slower than most rotating disks).\n\n The trick with SSD is that firmware performs all kinds of stunts to\n make the performance seen\n by the OS much higher (most of the time !). This is akin to\n write-back caching in a raid controller,\n for example, where much higher write rates than the physical drives\n support are achievable.",
"msg_date": "Wed, 24 Aug 2011 11:25:27 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On 08/24/2011 01:23 PM, Andy wrote:\n> According to the specs for database storage:\n>\n> \"Random 4KB arites: Up to 600 IOPS\"\n>\n> Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not \n> much faster than mechanical disks.\n>\n> Has anyone done any performance benchmark of 320 used as a DB storage? \n> Is it really that slow?\n>\n\nMany SSDs that claim better are cheating though, or only quoting under \nconditions that don't take into account drive cleanup operations. \nThat's fine in a non-database context, but if the data isn't any good to \nyou unless it's guaranteed to be safe either on disk or on a \nnon-volatile cache, Intel's numbers are the more relevant ones. I \nwouldn't assume other drives really are better unless it's in a true \napples to apples fair comparison.\n\nI published some numbers at \nhttp://archives.postgresql.org/message-id/[email protected] \nthat suggested 400 TPS was the worst-case for this drive on database \nrandom writes running the pgbench workload, which does a couple of \nwrites per commit. That's only 2X as fast as a typical mechanical hard \ndrive running the same workload. On random reads, the performance gap \nis much bigger, in favor of the SSD.\n\nI've measured the performance of this drive from a couple of directions \nnow, and it always comes out the same. For PostgreSQL, reading or \nwriting 8K blocks, I'm seeing completely random workloads hit a \nworst-case of 20MB/s; that's just over 2500 IOPS. It's quite possible \nthat number can go lower under pressure of things like internal drive \ngarbage collection however, which I believe is going into the 600 IOPS \nfigure. I haven't tried to force that yet--drive is too useful to me to \ntry and burn it out doing tests like that at the moment.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 08/24/2011 01:23 PM, Andy wrote:\n\n\nAccording to the specs for database storage:\n\n\n\"Random 4KB arites: Up to 600 IOPS\"\n\n\nIs that for real? 600 IOPS is *atrociously terrible* for\nan SSD. Not much faster than mechanical disks.\n\n\nHas anyone done any performance benchmark of 320 used as a\nDB storage? Is it really that slow?\n\n\n\n\nMany SSDs that claim better are cheating though, or only quoting under\nconditions that don't take into account drive cleanup operations. \nThat's fine in a non-database context, but if the data isn't any good\nto you unless it's guaranteed to be safe either on disk or on a\nnon-volatile cache, Intel's numbers are the more relevant ones. I\nwouldn't assume other drives really are better unless it's in a true\napples to apples fair comparison.\n\nI published some numbers at\nhttp://archives.postgresql.org/message-id/[email protected]\nthat suggested 400 TPS was the worst-case for this drive on database\nrandom writes running the pgbench workload, which does a couple of\nwrites per commit. That's only 2X as fast as a typical mechanical hard\ndrive running the same workload. On random reads, the performance gap\nis much bigger, in favor of the SSD.\n\nI've measured the performance of this drive from a couple of directions\nnow, and it always comes out the same. For PostgreSQL, reading or\nwriting 8K blocks, I'm seeing completely random workloads hit a\nworst-case of 20MB/s; that's just over 2500 IOPS. 
It's quite possible\nthat number can go lower under pressure of things like internal drive\ngarbage collection however, which I believe is going into the 600 IOPS\nfigure. I haven't tried to force that yet--drive is too useful to me\nto try and burn it out doing tests like that at the moment.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Wed, 24 Aug 2011 13:41:29 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On 8/24/2011 11:41 AM, Greg Smith wrote:\n>\n>\n> I've measured the performance of this drive from a couple of \n> directions now, and it always comes out the same. For PostgreSQL, \n> reading or writing 8K blocks, I'm seeing completely random workloads \n> hit a worst-case of 20MB/s; that's just over 2500 IOPS. It's quite \n> possible that number can go lower under pressure of things like \n> internal drive garbage collection however, which I believe is going \n> into the 600 IOPS figure. I haven't tried to force that yet--drive is \n> too useful to me to try and burn it out doing tests like that at the \n> moment.\n\nI hope someone from Intel is reading -- it would be well worth their \nwhile to just send you a few drives,\nsince you are set up to perform the right test, and can provide \nimpartial results.\n\n\n",
"msg_date": "Wed, 24 Aug 2011 11:42:14 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "\n\n---- Original message ----\n>Date: Wed, 24 Aug 2011 11:25:27 -0600\n>From: [email protected] (on behalf of David Boreham <[email protected]>)\n>Subject: Re: [PERFORM] Intel 320 SSD info \n>To: [email protected]\n>\n> On 8/24/2011 11:23 AM, Andy wrote:\n>\n> According to the specs for database storage:\n> \"Random 4KB arites: Up to 600 IOPS\"\n> Is that for real? 600 IOPS is *atrociously\n> terrible* for an SSD. Not much faster than\n> mechanical disks.\n>\n> The underlying (Flash block) write rate really is\n> terrible (and slower than most rotating disks).\n\nAt the lowest physical level, yes. It's much simpler to flip the flux in the rust (I know, they've moved on from rust, but I can't give up the image) than to change state in NAND. But that's hardly the point.\n\n>\n> The trick with SSD is that firmware performs all\n> kinds of stunts to make the performance seen\n> by the OS much higher (most of the time !).\n\nIt's not an illusion. Check the AnandTech (or Tom's or whoever you prefer) tests for sequential speeds vs. HDD. You may have to go back a year or so, since they've mostly stopped trying to graph HDD and SSD at the same time. Random is a worse comparison for HDD. Yes, there are legitimate issues with power loss, especially in consumer and prosumer drives. That's why STEC and Violin and Fusion-io and Texas Memory exist. Whether bespoke controllers will continue to be shipped is up in the air. At one time mainframes had bespoke DASD. Not for more than a decade; they run on the same HDD you can buy at Newegg; better QA, but the same drive.\n\n\n> This is\n> akin to write-back caching in a raid controller,\n> for example, where much higher write rates than the\n> physical drives support are achievable.\n\nNot really. Some SSD have lots o RAM cache, others have none at all (notably, SandForce controller). See this: http://www.storagesearch.com/ram-in-flash-ssd.html\n",
"msg_date": "Wed, 24 Aug 2011 13:46:10 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 12:23 PM, Andy <[email protected]> wrote:\n> According to the specs for database storage:\n> \"Random 4KB arites: Up to 600 IOPS\"\n> Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much\n> faster than mechanical disks.\n> Has anyone done any performance benchmark of 320 used as a DB storage? Is it\n> really that slow?\n\nI have one experience with 320 SSD that replaced a 4 drive RAID 10 10k\nraid. The site users and administrator in question gave summarized\nthe before/after experience thusly: \"PFM\" (Pure Magic). Workload-wise\nit was a largish database (200gb+), 50% read, 50% write, mixed\nolap/oltp.\n\nmerlin\n",
"msg_date": "Wed, 24 Aug 2011 13:27:30 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On 08/24/2011 01:42 PM, David Boreham wrote:\n> On 8/24/2011 11:41 AM, Greg Smith wrote:\n>>\n>>\n>> I've measured the performance of this drive from a couple of \n>> directions now, and it always comes out the same. For PostgreSQL, \n>> reading or writing 8K blocks, I'm seeing completely random workloads \n>> hit a worst-case of 20MB/s; that's just over 2500 IOPS. It's quite \n>> possible that number can go lower under pressure of things like \n>> internal drive garbage collection however, which I believe is going \n>> into the 600 IOPS figure. I haven't tried to force that yet--drive \n>> is too useful to me to try and burn it out doing tests like that at \n>> the moment.\n>\n> I hope someone from Intel is reading -- it would be well worth their \n> while to just send you a few drives,\n> since you are set up to perform the right test, and can provide \n> impartial results.\n\nDon't worry, they are. With the big firmware bug in the 320 series \nlingering over things, it wasn't really worth wandering down that road \nyet until this week. Now that they can ship me drives that are expected \nto work, I can pick back up work on performance testing them.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 24 Aug 2011 16:11:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
},
{
"msg_contents": "On Wed, Aug 24, 2011 at 10:23 AM, Andy <[email protected]> wrote:\n> According to the specs for database storage:\n> \"Random 4KB arites: Up to 600 IOPS\"\n> Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much\n> faster than mechanical disks.\n\nKeep in mind that the 600 IOPS is over the entire disk. performance\nis much better over smaller spans - I suspect the 23,000 IOPS you\nmight see on the larger disks over an 8GB span are best case scenario,\nthough.\n\nMoral of the story? If you want the most performance, over-size your\nSSD and \"short-stroke\" it. Interesting to see that the 300/600GB\ndrives lose random write IOPS on the 100% span test over the smaller\ndisks - wonder if you limit access to the first 160GB if performance\nmatches the 160GB disk. I kind of suspect that once you get to 20k+\nrandom write IOPS over 8GB you've hit a controller limit on the SSD\nsince performance there reaches it's peak with the 300GB drive and the\n160GB drive is less than 10% slower.\n\n> Has anyone done any performance benchmark of 320 used as a DB storage? Is it\n> really that slow?\n\nHave the 120GB in my notebook. Could run some tests if people are interested.\n\n-Dave\n",
"msg_date": "Wed, 24 Aug 2011 13:50:35 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 320 SSD info"
}
] |
[
{
"msg_contents": "Hi. I have a table called work (id bigserial, userid int4, kind1 enum, kind2 enum, kind3 enim, value bigint, modified timestamp)\nTable will have about 2*10^6 rows (at same time - overall it can have higher IDs but old records are eventually deleted (moved to separate archive table) therefore the IDs can grow huge). After insert on table work, every row will be updated (value will be reduced, till value = 0 (work is finished)). For each row there will be from 1 to \nmaybe 10 updates on two cells (value, modified). After work is completed (value = 0) it's record will be moved to archive table.\nkind1 is an enum with two values (a and b)\ni'm using:\n- alter table work set fillfactor 50\n- btree index on value, fillfactor 50\n- btree index on kind1, fillfactor 50\n\nmy question:\n1. what can i do to perform this selects faster:\nSELECT id, value FROM work WHERE value>=$1 AND kind1=$2 AND kind2=$3 AND kind3=$4 FOR UPDATE;\nSELECT id, value FROM work WHERE userid=$1 AND kind1=$1 AND kind2=$3 AND kind3=$4 FOR UPDATE;\n2. How about inheriting and partitioning? I'm thinking about creating two tables, one for kind1(a) and second for kind1(b), will it help in performance?\n3. Is btree best for index on enum?\n4. How about creating index on complex keys like (user_id,kind1,kind2,kind3) and (price,kind1,kind2,kind3)? \n\n\nI have PostgreSQL 9.0.4.\nThanks in advance\n\nHi. I have a table called work (id bigserial, userid int4, kind1 enum, kind2 enum, kind3 enim, value bigint, modified timestamp)Table will have about 2*10^6 rows (at same time - overall it can have higher IDs but old records are eventually deleted (moved to separate archive table) therefore the IDs can grow huge). After insert on table work, every row will be updated (value will be reduced, till value = 0 (work is finished)). For each row there will be from 1 to maybe 10 updates on two cells (value, modified). After work is completed (value = 0) it's record will be moved to archive table.kind1 is an enum with two values (a and b)i'm using:- alter table work set fillfactor 50- btree index on value, fillfactor 50- btree index on kind1, fillfactor 50my question:1. what can i do to perform\n this selects faster:SELECT id, value FROM work WHERE value>=$1 AND kind1=$2 AND kind2=$3 AND kind3=$4 FOR UPDATE;SELECT id, value FROM work WHERE userid=$1 AND kind1=$1 AND kind2=$3 AND kind3=$4 FOR UPDATE;2. How about inheriting and partitioning? I'm thinking about creating two tables, one for kind1(a) and second for kind1(b), will it help in performance?3. Is btree best for index on enum?4. How about creating index on complex keys like (user_id,kind1,kind2,kind3) and (price,kind1,kind2,kind3)? I have PostgreSQL 9.0.4.Thanks in advance",
"msg_date": "Mon, 29 Aug 2011 10:13:39 +0100 (BST)",
"msg_from": "Tasdassa Asdasda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance with many updates"
},
{
"msg_contents": "On 29 Srpen 2011, 11:13, Tasdassa Asdasda wrote:\n> Hi. I have a table called work (id bigserial, userid int4, kind1 enum,\n> kind2 enum, kind3 enim, value bigint, modified timestamp)\n> Table will have about 2*10^6 rows (at same time - overall it can have\n> higher IDs but old records are eventually deleted (moved to separate\n> archive table) therefore the IDs can grow huge). After insert on table\n> work, every row will be updated (value will be reduced, till value = 0\n> (work is finished)). For each row there will be from 1 to\n> maybe 10 updates on two cells (value, modified). After work is completed\n> (value = 0) it's record will be moved to archive table.\n> kind1 is an enum with two values (a and b)\n\nOK, how many clients are updating the table concurrently? Is there a\nsingle client or multiple ones?\n\n> i'm using:\n> - alter table work set fillfactor 50\n> - btree index on value, fillfactor 50\n> - btree index on kind1, fillfactor 50\n\nI'd use significantly higher fillfactor - I'd probably start with 90 and\nsee if decreasing it improves the performance. My guess is it won't or\nmaybe it will even hurt performance. Fillfactor 50 means only 50% of the\nspace is used initially, so the table occupies almost 2x the space (so\nmore data needs to be read/written etc).\n\nPrepare a short simulation of your workload and run it with various\nfillfactor settings - that's the best way to see the effect.\n\n> my question:\n> 1. what can i do to perform this selects faster:\n> SELECT id, value FROM work WHERE value>=$1 AND kind1=$2 AND kind2=$3 AND\n> kind3=$4 FOR UPDATE;\n> SELECT id, value FROM work WHERE userid=$1 AND kind1=$1 AND kind2=$3 AND\n> kind3=$4 FOR UPDATE;\n\nWell, that really depends on the data. What is the selectivity of the\nconditions, i.e. how many rows match each part? You can either create an\nindex on each column separately or one index on multiple columns. Try this\n\nINDEX ON (kind1), INDEX ON (kind2), INDEX ON (kind3)\nINDEX ON (kind1, kind2, kind3)\n\nHow does the 'value' relate to the other columns? You could create an\nindex on this column too, but that would prevent HOT and thus the\nfillfactor is pointless.\n\n> 2. How about inheriting and partitioning? I'm thinking about creating two\n> tables, one for kind1(a) and second for kind1(b), will it help in\n> performance?\n\nIt could help, especially if constraint_exclusion is on.\n\n> 3. Is btree best for index on enum?\n\nThe real problem here is selectivity - how many rows match the condition.\nIf too many rows match it, random access is ineffective.\n\nTry it - the only other option is 'hash' indexes, and there are serious\ndisadvantages (just equality, no crash safety etc.).\n\nOr you can try partial indexes:\n\n http://www.postgresql.org/docs/8.4/static/indexes-partial.html\n\ni.e. instead of\n\nCREATE INDEX ... ON table (kind1, kind2, kind3);\n\ndo something like\n\nCREATE INDEX index_a ON table (kind2, kind3) WHERE (kind1 = 'a');\nCREATE INDEX index_b ON table (kind2, kind3) WHERE (kind1 = 'b');\n\n> 4. How about creating index on complex keys like\n> (user_id,kind1,kind2,kind3) and (price,kind1,kind2,kind3)?\n\nWell, that's one of the options. 
But really, given the small amount of\ninformation you've provided, this whole e-mail is rather a speculation\nbased on my imagination of what the statistical features of the data might\nbe.\n\nThe best solution is to try that - create the various indexes, run EXPLAIN\nANALYZE and post it to http://explain.depesz.com (so that we can see the\nresults).\n\nTomas\n\n",
"msg_date": "Mon, 29 Aug 2011 22:38:06 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with many updates"
}
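One way to check whether those updates actually come out as HOT updates (which is what a lowered fillfactor is meant to enable, and what an index on the updated column prevents) is to compare the counters in pg_stat_user_tables, for example:

    SELECT relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_update_pct
    FROM pg_stat_user_tables
    WHERE relname = 'work';

If hot_update_pct stays near zero, that matches the point above that the fillfactor is then largely pointless.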
] |
[
{
"msg_contents": "Hello,\n\nI asked that question on StackOverflow, but didn't get any valuable\nresponse, so I'll ask it here. :)\n\nI have such query:\n\nSELECT \"spoleczniak_tablica\".\"id\", \"spoleczniak_tablica\".\"postac_id\",\n\"spoleczniak_tablica\".\"hash\", \"spoleczniak_tablica\".\"typ\",\n\"spoleczniak_tablica\".\"ikona\", \"spoleczniak_tablica\".\"opis\",\n\"spoleczniak_tablica\".\"cel\", \"spoleczniak_tablica\".\"data\",\n\"postac_postacie\".\"id\",\n\"postac_postacie\".\"user_id\", \"postac_postacie\".\"avatar\",\n\"postac_postacie\".\"ikonka\",\n\"postac_postacie\".\"imie\", \"postac_postacie\".\"nazwisko\",\n\"postac_postacie\".\"pseudonim\",\n\"postac_postacie\".\"plec\", \"postac_postacie\".\"wzrost\", \"postac_postacie\".\"waga\",\n\"postac_postacie\".\"ur_tydz\", \"postac_postacie\".\"ur_rok\",\n\"postac_postacie\".\"ur_miasto_id\",\n\"postac_postacie\".\"akt_miasto_id\", \"postac_postacie\".\"kasa\",\n\"postac_postacie\".\"punkty\",\n\"postac_postacie\".\"zmeczenie\", \"postac_postacie\".\"zdrowie\",\n\"postac_postacie\".\"kariera\"\nFROM \"spoleczniak_tablica\" INNER JOIN \"postac_postacie\" ON\n(\"spoleczniak_tablica\".\"postac_id\" = \"postac_postacie\".\"id\") WHERE\nspoleczniak_tablica.postac_id = 1 or spoleczniak_tablica.id in(select\nwpis_id from\nspoleczniak_oznaczone where etykieta_id in(select tag_id from\nspoleczniak_subskrypcje where\npostac_id = 1)) or (spoleczniak_tablica.postac_id in(select obserwowany_id from\nspoleczniak_obserwatorium where obserwujacy_id = 1) and hash not\nin('dyskusja', 'kochanie',\n'szturniecie')) or (spoleczniak_tablica.cel = 1 and\nspoleczniak_tablica.hash in('dyskusja',\n'kochanie', 'obserwatorium', 'szturchniecie')) or spoleczniak_tablica.hash =\n'administracja-info' or exists(select 1 from spoleczniak_komentarze\nwhere kredka_id =\nspoleczniak_tablica.id and postac_id = 1) ORDER BY\n\"spoleczniak_tablica\".\"id\" DESC LIMIT\n21;\n\nand it's real performance bottleneck for us. 
It's one of the most\noften executed query on our site.\n\nHere is EXPLAIN ANALYZE:\n\n Limit (cost=52.69..185979.44 rows=21 width=283) (actual\ntime=5.981..149.110 rows=21 loops=1)\n -> Nested Loop (cost=52.69..27867127142.57 rows=3147528\nwidth=283) (actual time=5.981..149.103 rows=21 loops=1)\n -> Index Scan Backward using spoleczniak_tablica_pkey on\nspoleczniak_tablica (cost=52.69..27866103743.37 rows=3147528\nwidth=194) (actual time=5.971..148.963 rows=21 loops=1)\n Filter: ((postac_id = 1) OR (SubPlan 1) OR ((hashed\nSubPlan 2) AND ((hash)::text <> ALL\n('{dyskusja,kochanie,szturniecie}'::text[]))) OR ((cel = 1) AND\n((hash)::text = ANY\n('{dyskusja,kochanie,obserwatorium,szturchniecie}'::text[]))) OR\n((hash)::text = 'administracja-info'::text) OR (alternatives: SubPlan\n3 or hashed SubPlan 4))\n SubPlan 1\n -> Materialize (cost=13.28..11947.85 rows=1264420\nwidth=4) (actual time=0.000..0.024 rows=485 loops=2137)\n -> Nested Loop (cost=13.28..685.75\nrows=1264420 width=4) (actual time=0.119..0.664 rows=485 loops=1)\n -> HashAggregate (cost=5.89..5.90\nrows=1 width=4) (actual time=0.015..0.017 rows=7 loops=1)\n -> Index Scan using\nspoleczniak_subskrypcje_postac_id on spoleczniak_subskrypcje\n(cost=0.00..5.89 rows=2 width=4) (actual time=0.005..0.009 rows=7\nloops=1)\n Index Cond: (postac_id = 1)\n -> Bitmap Heap Scan on\nspoleczniak_oznaczone (cost=7.38..674.96 rows=391 width=8) (actual\ntime=0.019..0.082 rows=69 loops=7)\n Recheck Cond: (etykieta_id =\nspoleczniak_subskrypcje.tag_id)\n -> Bitmap Index Scan on\nspoleczniak_oznaczone_etykieta_id (cost=0.00..7.29 rows=391 width=0)\n(actual time=0.013..0.013 rows=69 loops=7)\n Index Cond: (etykieta_id =\nspoleczniak_subskrypcje.tag_id)\n SubPlan 2\n -> Index Scan using\nspoleczniak_obserwatorium_obserwujacy_id on spoleczniak_obserwatorium\n(cost=0.00..39.36 rows=21 width=4) (actual time=0.006..0.030 rows=26\nloops=1)\n Index Cond: (obserwujacy_id = 1)\n SubPlan 3\n -> Bitmap Heap Scan on spoleczniak_komentarze\n(cost=18.67..20.68 rows=1 width=0) (never executed)\n Recheck Cond: ((kredka_id =\nspoleczniak_tablica.id) AND (postac_id = 1))\n -> BitmapAnd (cost=18.67..18.67 rows=1\nwidth=0) (never executed)\n -> Bitmap Index Scan on\nspoleczniak_komentarze_kredka_id (cost=0.00..2.98 rows=24 width=0)\n(never executed)\n Index Cond: (kredka_id =\nspoleczniak_tablica.id)\n -> Bitmap Index Scan on\nspoleczniak_komentarze_postac_id (cost=0.00..15.44 rows=890 width=0)\n(never executed)\n Index Cond: (postac_id = 1)\n SubPlan 4\n -> Index Scan using spoleczniak_komentarze_postac_id\non spoleczniak_komentarze (cost=0.00..1610.46 rows=890 width=4)\n(actual time=0.013..2.983 rows=3605 loops=1)\n Index Cond: (postac_id = 1)\n -> Index Scan using postac_postacie_pkey on postac_postacie\n(cost=0.00..0.31 rows=1 width=89) (actual time=0.004..0.005 rows=1\nloops=21)\n Index Cond: (id = spoleczniak_tablica.postac_id)\n Total runtime: 149.211 ms (in rush hours runtime is ~600 ms)\n\nIf I delete ORDER BY clause, runtime is less than 30 ms. As you can\nsee - it's big table, more than 3 000 000 records. Any hints how to\noptimize this query?\n",
"msg_date": "Tue, 30 Aug 2011 07:36:20 +0200",
"msg_from": "Szymon Kosok <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query optimization help"
},
{
"msg_contents": "Hi,\n\nOn 30 August 2011 15:36, Szymon Kosok <[email protected]> wrote:\n> Hello,\n>\n> I asked that question on StackOverflow, but didn't get any valuable\n> response, so I'll ask it here. :)\n>\n> I have such query:\n\nCould you please re-post your explain using this web site:\nhttp://explain.depesz.com/ and post links to Stackoverflow question?\nWhat is your Postgres version? Database settings?\nI see huge discrepancy between predicted and actual row numbers (like\n1264420 vs 485). I would try the following:\n\n- check column statistics (pg_stasts) and focus on the following\ncolumns: n_distinct, null_frac, most_common_vals. If they are way-off\nfrom the actual values then you should tweak (auto)analyze process:\nrun manual/auto analyse more often (check pg_stat_user_tables),\nincrease default_statistics_target (per column or global)\n\n- try to disable nested loop join (set enable_nestloop=off)\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Tue, 30 Aug 2011 16:09:42 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization help"
},
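Roughly what those checks could look like (table and column names taken from the query in this thread; the target of 500 is only an example):

    SELECT attname, n_distinct, null_frac, most_common_vals
    FROM pg_stats
    WHERE tablename = 'spoleczniak_tablica';

    SELECT relname, n_live_tup, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'spoleczniak_tablica';

    -- if the estimates are far off, raise the per-column target and re-analyze:
    ALTER TABLE spoleczniak_tablica ALTER COLUMN postac_id SET STATISTICS 500;
    ANALYZE spoleczniak_tablica;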
{
"msg_contents": "2011/8/30 Ondrej Ivanič <[email protected]>:\n> Could you please re-post your explain using this web site:\n> http://explain.depesz.com/ and post links to Stackoverflow question?\n\nHere it is: http://explain.depesz.com/s/Iaa\n\n> - try to disable nested loop join (set enable_nestloop=off)\n\nEven worse performance (http://explain.depesz.com/s/mMi).\n\nMy configuration:http://pastie.org/2453148 (copied and pasted only\nuncommented important variables). It's decent hardware. i7, 16 GB of\nRAM, 3x2 RAID 10 (7200rpm) for OS + data, RAID 1 (2 disks, 7200rpm)\nfor WAL, RAID controller with BBU and 512 MB memory cache (cache is\nset to write only).\n\nPS. Sorry Ondrej, accidentally I've sent reply to you, not to list.\n",
"msg_date": "Tue, 30 Aug 2011 09:44:10 +0200",
"msg_from": "Szymon Kosok <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization help"
},
{
"msg_contents": "Hi,\n\n2011/8/30 Szymon Kosok <[email protected]>:\n> 2011/8/30 Ondrej Ivanič <[email protected]>:\n>> Could you please re-post your explain using this web site:\n>> http://explain.depesz.com/ and post links to Stackoverflow question?\n>\n> Here it is: http://explain.depesz.com/s/Iaa\n>\n>> - try to disable nested loop join (set enable_nestloop=off)\n\nThanks, I would try to \"materialise\" spoleczniak_tablica table. Your\nquery looks like this:\nselect ...\nfrom spoleczniak_tablica\ninner join ...\nwhere ...\norder by spoleczniak_tablica.id desc\nlimit 21\n\nSo I would rewrite your query like this:\nselect ...\nfrom (\n select ...\n from spoleczniak_tablica\n where ....\n order by spoleczniak_tablica.id desc\n limit 21\n) as x\ninner join ...\n\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Tue, 30 Aug 2011 18:20:08 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization help"
}
] |
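A concrete form of the rewrite sketched at the end of the thread above, with the WHERE clause copied from the original query and the long column lists abbreviated to t.* and p.*. It is only a sketch: it assumes every qualifying spoleczniak_tablica row has a matching postac_postacie row, otherwise pushing LIMIT 21 below the join can change the result.

    SELECT t.*, p.*
    FROM (
        SELECT *
        FROM spoleczniak_tablica
        WHERE postac_id = 1
           OR id IN (SELECT wpis_id FROM spoleczniak_oznaczone
                     WHERE etykieta_id IN (SELECT tag_id FROM spoleczniak_subskrypcje
                                           WHERE postac_id = 1))
           OR (postac_id IN (SELECT obserwowany_id FROM spoleczniak_obserwatorium
                             WHERE obserwujacy_id = 1)
               AND hash NOT IN ('dyskusja', 'kochanie', 'szturniecie'))
           OR (cel = 1 AND hash IN ('dyskusja', 'kochanie', 'obserwatorium', 'szturchniecie'))
           OR hash = 'administracja-info'
           OR EXISTS (SELECT 1 FROM spoleczniak_komentarze
                      WHERE kredka_id = spoleczniak_tablica.id AND postac_id = 1)
        ORDER BY id DESC
        LIMIT 21
    ) AS t
    INNER JOIN postac_postacie p ON p.id = t.postac_id
    ORDER BY t.id DESC;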
[
{
"msg_contents": "Hello,\n\nI asked that question on StackOverflow, but didn't get any valuable\nresponse, so I'll ask it here. :)\n\nI have such query:\n\nSELECT \"spoleczniak_tablica\".\"id\", \"spoleczniak_tablica\".\"postac_id\",\n\"spoleczniak_tablica\".\"hash\", \"spoleczniak_tablica\".\"typ\",\n\"spoleczniak_tablica\".\"ikona\", \"spoleczniak_tablica\".\"opis\",\n\"spoleczniak_tablica\".\"cel\", \"spoleczniak_tablica\".\"data\",\n\"postac_postacie\".\"id\",\n\"postac_postacie\".\"user_id\", \"postac_postacie\".\"avatar\",\n\"postac_postacie\".\"ikonka\",\n\"postac_postacie\".\"imie\", \"postac_postacie\".\"nazwisko\",\n\"postac_postacie\".\"pseudonim\",\n\"postac_postacie\".\"plec\", \"postac_postacie\".\"wzrost\", \"postac_postacie\".\"waga\",\n\"postac_postacie\".\"ur_tydz\", \"postac_postacie\".\"ur_rok\",\n\"postac_postacie\".\"ur_miasto_id\",\n\"postac_postacie\".\"akt_miasto_id\", \"postac_postacie\".\"kasa\",\n\"postac_postacie\".\"punkty\",\n\"postac_postacie\".\"zmeczenie\", \"postac_postacie\".\"zdrowie\",\n\"postac_postacie\".\"kariera\"\nFROM \"spoleczniak_tablica\" INNER JOIN \"postac_postacie\" ON\n(\"spoleczniak_tablica\".\"postac_id\" = \"postac_postacie\".\"id\") WHERE\nspoleczniak_tablica.postac_id = 1 or spoleczniak_tablica.id in(select\nwpis_id from\nspoleczniak_oznaczone where etykieta_id in(select tag_id from\nspoleczniak_subskrypcje where\npostac_id = 1)) or (spoleczniak_tablica.postac_id in(select obserwowany_id from\nspoleczniak_obserwatorium where obserwujacy_id = 1) and hash not\nin('dyskusja', 'kochanie',\n'szturniecie')) or (spoleczniak_tablica.cel = 1 and\nspoleczniak_tablica.hash in('dyskusja',\n'kochanie', 'obserwatorium', 'szturchniecie')) or spoleczniak_tablica.hash =\n'administracja-info' or exists(select 1 from spoleczniak_komentarze\nwhere kredka_id =\nspoleczniak_tablica.id and postac_id = 1) ORDER BY\n\"spoleczniak_tablica\".\"id\" DESC LIMIT\n21;\n\nand it's real performance bottleneck for us. 
It's one of the most\noften executed query on our site.\n\nHere is EXPLAIN ANALYZE:\n\n Limit (cost=52.69..185979.44 rows=21 width=283) (actual\ntime=5.981..149.110 rows=21 loops=1)\n -> Nested Loop (cost=52.69..27867127142.57 rows=3147528\nwidth=283) (actual time=5.981..149.103 rows=21 loops=1)\n -> Index Scan Backward using spoleczniak_tablica_pkey on\nspoleczniak_tablica (cost=52.69..27866103743.37 rows=3147528\nwidth=194) (actual time=5.971..148.963 rows=21 loops=1)\n Filter: ((postac_id = 1) OR (SubPlan 1) OR ((hashed\nSubPlan 2) AND ((hash)::text <> ALL\n('{dyskusja,kochanie,szturniecie}'::text[]))) OR ((cel = 1) AND\n((hash)::text = ANY\n('{dyskusja,kochanie,obserwatorium,szturchniecie}'::text[]))) OR\n((hash)::text = 'administracja-info'::text) OR (alternatives: SubPlan\n3 or hashed SubPlan 4))\n SubPlan 1\n -> Materialize (cost=13.28..11947.85 rows=1264420\nwidth=4) (actual time=0.000..0.024 rows=485 loops=2137)\n -> Nested Loop (cost=13.28..685.75\nrows=1264420 width=4) (actual time=0.119..0.664 rows=485 loops=1)\n -> HashAggregate (cost=5.89..5.90\nrows=1 width=4) (actual time=0.015..0.017 rows=7 loops=1)\n -> Index Scan using\nspoleczniak_subskrypcje_postac_id on spoleczniak_subskrypcje\n(cost=0.00..5.89 rows=2 width=4) (actual time=0.005..0.009 rows=7\nloops=1)\n Index Cond: (postac_id = 1)\n -> Bitmap Heap Scan on\nspoleczniak_oznaczone (cost=7.38..674.96 rows=391 width=8) (actual\ntime=0.019..0.082 rows=69 loops=7)\n Recheck Cond: (etykieta_id =\nspoleczniak_subskrypcje.tag_id)\n -> Bitmap Index Scan on\nspoleczniak_oznaczone_etykieta_id (cost=0.00..7.29 rows=391 width=0)\n(actual time=0.013..0.013 rows=69 loops=7)\n Index Cond: (etykieta_id =\nspoleczniak_subskrypcje.tag_id)\n SubPlan 2\n -> Index Scan using\nspoleczniak_obserwatorium_obserwujacy_id on spoleczniak_obserwatorium\n(cost=0.00..39.36 rows=21 width=4) (actual time=0.006..0.030 rows=26\nloops=1)\n Index Cond: (obserwujacy_id = 1)\n SubPlan 3\n -> Bitmap Heap Scan on spoleczniak_komentarze\n(cost=18.67..20.68 rows=1 width=0) (never executed)\n Recheck Cond: ((kredka_id =\nspoleczniak_tablica.id) AND (postac_id = 1))\n -> BitmapAnd (cost=18.67..18.67 rows=1\nwidth=0) (never executed)\n -> Bitmap Index Scan on\nspoleczniak_komentarze_kredka_id (cost=0.00..2.98 rows=24 width=0)\n(never executed)\n Index Cond: (kredka_id =\nspoleczniak_tablica.id)\n -> Bitmap Index Scan on\nspoleczniak_komentarze_postac_id (cost=0.00..15.44 rows=890 width=0)\n(never executed)\n Index Cond: (postac_id = 1)\n SubPlan 4\n -> Index Scan using spoleczniak_komentarze_postac_id\non spoleczniak_komentarze (cost=0.00..1610.46 rows=890 width=4)\n(actual time=0.013..2.983 rows=3605 loops=1)\n Index Cond: (postac_id = 1)\n -> Index Scan using postac_postacie_pkey on postac_postacie\n(cost=0.00..0.31 rows=1 width=89) (actual time=0.004..0.005 rows=1\nloops=21)\n Index Cond: (id = spoleczniak_tablica.postac_id)\n Total runtime: 149.211 ms (in rush hours runtime is ~600 ms)\n\nIf I delete ORDER BY clause, runtime is less than 30 ms. As you can\nsee - it's big table, more than 3 000 000 records. Any hints how to\noptimize this query?\n\n(I've sent that message without joining mailing first, If i'll do a\ndouble post, please forgive me)\n",
"msg_date": "Tue, 30 Aug 2011 07:41:12 +0200",
"msg_from": "Szymon Kosok <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query optimization help"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm looking for summaries (or best practices) on SSD usage with PostgreSQL.\nMy use case is mainly a \"read-only\" database.\nAre there any around?\n\nYours, Stefan\n",
"msg_date": "Tue, 30 Aug 2011 19:23:10 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Summaries on SSD usage?"
},
{
"msg_contents": "On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:\n> I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.\n> My use case is mainly a \"read-only\" database.\n> Are there any around?\n\nI'm not sure, but for read-only why not just put more memory in the server? It'll be a lot cheaper than SSDs.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Thu, 1 Sep 2011 16:28:22 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "You mean something like \"Unlogged Tables\" in PostgreSQL 9.1 (=\nin-memory database) or simply a large ramdisk?\n\nYours, Stefan\n\n2011/9/1 Jim Nasby <[email protected]>:\n> On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:\n>> I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.\n>> My use case is mainly a \"read-only\" database.\n>> Are there any around?\n>\n> I'm not sure, but for read-only why not just put more memory in the server? It'll be a lot cheaper than SSDs.\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> 512.569.9461 (cell) http://jim.nasby.net\n>\n>\n>\n",
"msg_date": "Fri, 2 Sep 2011 00:15:51 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "On 2011-09-01 23:28, Jim Nasby wrote:\n> On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:\n>> I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.\n>> My use case is mainly a \"read-only\" database.\n>> Are there any around?\n> I'm not sure, but for read-only why not just put more memory in the server? It'll be a lot cheaper than SSDs\nIt is \"really expensive\" to go over 512GB memory and the performance \nregression for\njust hitting disk in a system where you assume everything is in memory is\nreally huge. SSD makes the \"edge\" be a bit smoother than rotating drives \ndo.\n\nJesper\n\n-- \nJesper\n\n",
"msg_date": "Fri, 02 Sep 2011 06:14:50 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller <[email protected]> wrote:\n> Hi,\n>\n> I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.\n> My use case is mainly a \"read-only\" database.\n> Are there any around?\n\nHow big is your DB?\nWhat kind of reads are most common, random access or sequential?\nHow big of a dataset do you pull out at once with a query.\n\nSSDs are usually not a big winner for read only databases.\nIf the dataset is small (dozen or so gigs) get more RAM to fit it in\nIf it's big and sequentially accessed, then build a giant RAID-10 or RAID-6\nIf it's big and randomly accessed then buy a bunch of SSDs and RAID them\n",
"msg_date": "Thu, 1 Sep 2011 22:58:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "On 09/01/2011 11:14 PM, Jesper Krogh wrote:\n\n> It is \"really expensive\" to go over 512GB memory and the performance\n> regression for just hitting disk in a system where you assume\n> everything is in memory is really huge. SSD makes the \"edge\" be a bit\n> smoother than rotating drives do.\n\nIronically, this is actually the topic of my presentation at Postgres \nOpen. We transitioned to NVRAM PCI cards for exactly this reason. Having \na giant database in cache is great, until a few reads come from your \nslow backing disks, or heaven-forbid, you have to restart your database \nduring a high transactional period.\n\nLemme tell ya... no RAID-10 in the world can supply 12k TPS with little \nto no warning. A good set of SSDs or PCI cards can.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 2 Sep 2011 09:30:07 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "2011/9/2 Scott Marlowe <[email protected]>:\n> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller <[email protected]> wrote:\n> How big is your DB?\n> What kind of reads are most common, random access or sequential?\n> How big of a dataset do you pull out at once with a query.\n>\n> SSDs are usually not a big winner for read only databases.\n> If the dataset is small (dozen or so gigs) get more RAM to fit it in\n> If it's big and sequentially accessed, then build a giant RAID-10 or RAID-6\n> If it's big and randomly accessed then buy a bunch of SSDs and RAID them\n\nMy dataset is a mirror of OpenStreetMap updated daily. For Switzerland\nit's about 10 GB total disk space used (half for tables, half for\nindexes) based on 2 GB raw XML input. Europe would be about 70 times\nlarger (130 GB) and world has 250 GB raw input.\n\nIt's both randomly (= index scan?) and sequentially (= seq scan?)\naccessed with queries like: \" SELECT * FROM osm_point WHERE tags @>\nhstore('tourism','zoo') AND name ILIKE 'Zoo%' \". You can try it\nyourself online, e.g.\nhttp://labs.geometa.info/postgisterminal/?xapi=node[tourism=zoo]\n\nSo I'm still unsure what's better: SSD, NVRAM (PCI card) or plain RAM?\nAnd I'm eager to understand if unlogged tables could help anyway.\n\nYours, Stefan\n",
"msg_date": "Sat, 3 Sep 2011 00:04:30 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Summaries on SSD usage?"
},
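Whether a workload like the osm_point example above is really random-access depends partly on indexing: hstore's @> containment operator can use a GIN index, which is worth verifying before concluding that faster storage is the fix. A sketch with a made-up index name; the EXPLAIN line is just the query from the message:

    CREATE INDEX osm_point_tags_idx ON osm_point USING gin (tags);
    EXPLAIN ANALYZE
    SELECT * FROM osm_point
    WHERE tags @> hstore('tourism','zoo') AND name ILIKE 'Zoo%';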
{
"msg_contents": "On 2011-09-03 00:04, Stefan Keller wrote:\n> 2011/9/2 Scott Marlowe<[email protected]>:\n>> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller<[email protected]> wrote:\n>> How big is your DB?\n>> What kind of reads are most common, random access or sequential?\n>> How big of a dataset do you pull out at once with a query.\n>>\n>> SSDs are usually not a big winner for read only databases.\n>> If the dataset is small (dozen or so gigs) get more RAM to fit it in\n>> If it's big and sequentially accessed, then build a giant RAID-10 or RAID-6\n>> If it's big and randomly accessed then buy a bunch of SSDs and RAID them\n> My dataset is a mirror of OpenStreetMap updated daily. For Switzerland\n> it's about 10 GB total disk space used (half for tables, half for\n> indexes) based on 2 GB raw XML input. Europe would be about 70 times\n> larger (130 GB) and world has 250 GB raw input.\n>\n> It's both randomly (= index scan?) and sequentially (= seq scan?)\n> accessed with queries like: \" SELECT * FROM osm_point WHERE tags @>\n> hstore('tourism','zoo') AND name ILIKE 'Zoo%' \". You can try it\n> yourself online, e.g.\n> http://labs.geometa.info/postgisterminal/?xapi=node[tourism=zoo]\n>\n> So I'm still unsure what's better: SSD, NVRAM (PCI card) or plain RAM?\n> And I'm eager to understand if unlogged tables could help anyway\n\nIt's not that hard to figure out.. take some of your \"typical\" queries.\nsay the one above.. Change the search-term to something \"you'd expect\nthe user to enter in a minute, but hasn't been run\". (could be \"museum\" \ninstead\nof \"zoo\".. then you run it with \\timing and twice.. if the two queries are\n\"close\" to each other in timing, then you only hit memory anyway and\nneither SSD, NVRAM or more RAM will buy you anything. Faster memory\nand faster CPU-cores will.. if you have a significant speedup to the\nsecond run, then more RAM, NVRAM, SSD is a good fix.\n\nTypically I have slow-query-logging turned on, permanently set to around \n250ms.\nIf I find queries in the log that \"i didnt expect\" to take above 250ms then\nI'd start to investigate if query-plans are correct .. and so on..\n\nThe above numbers are \"raw-data\" size and now how PG uses them.. or?\nAnd you havent told anything about the size of your current system.\n\nJesper\n",
"msg_date": "Sat, 03 Sep 2011 08:49:27 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Summaries on SSD usage?"
},
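The measurement procedure described above, spelled out; the 250ms threshold is the figure from the message, and the 'museum' search term is just the stand-in for a value a user might enter next:

    # postgresql.conf -- permanent slow-statement logging
    log_min_duration_statement = 250ms

    -- in psql: run the same statement twice with timing on
    \timing
    SELECT * FROM osm_point WHERE tags @> hstore('tourism','museum');
    SELECT * FROM osm_point WHERE tags @> hstore('tourism','museum');

If the second run is dramatically faster than the first, the first one was paying for disk reads, and that is the case where more RAM, NVRAM or SSD helps.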
{
"msg_contents": "2011/9/3 Jesper Krogh <[email protected]>:\n> On 2011-09-03 00:04, Stefan Keller wrote:\n> It's not that hard to figure out.. take some of your \"typical\" queries.\n> say the one above.. Change the search-term to something \"you'd expect\n> the user to enter in a minute, but hasn't been run\". (could be \"museum\"\n> instead\n> of \"zoo\".. then you run it with \\timing and twice.. if the two queries are\n> \"close\" to each other in timing, then you only hit memory anyway and\n> neither SSD, NVRAM or more RAM will buy you anything. Faster memory\n> and faster CPU-cores will.. if you have a significant speedup to the\n> second run, then more RAM, NVRAM, SSD is a good fix.\n>\n> Typically I have slow-query-logging turned on, permanently set to around\n> 250ms.\n> If I find queries in the log that \"i didnt expect\" to take above 250ms then\n> I'd start to investigate if query-plans are correct .. and so on..\n>\n> The above numbers are \"raw-data\" size and now how PG uses them.. or?\n> And you havent told anything about the size of your current system.\n\nIts definitely the case that the second query run is much faster\n(first ones go up to 30 seconds and more...).\n\nPG uses the raw data for Switzerlad like this: 10 GB total disk space\nbased on 2 GB raw XML input. Table osm_point is one of the four big\ntables and uses 984 MB for table and 1321 MB for indexes (where hstore\nis the biggest from id, name and geometry).\n\nStefan\n",
"msg_date": "Sat, 3 Sep 2011 10:56:14 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Summaries on SSD usage?"
},
{
"msg_contents": "Shaun,\n\n2011/9/2 Shaun Thomas <[email protected]>:\n> Ironically, this is actually the topic of my presentation at Postgres Open.>\n\nDo you think my problem would now be solved with NVRAM PCI card?\n\nStefan\n\n---------- Forwarded message ----------\nFrom: Stefan Keller <[email protected]>\nDate: 2011/9/3\nSubject: Re: [PERFORM] Summaries on SSD usage?\nTo: Jesper Krogh <[email protected]>\nCc: [email protected]\n\n\n2011/9/3 Jesper Krogh <[email protected]>:\n> On 2011-09-03 00:04, Stefan Keller wrote:\n> It's not that hard to figure out.. take some of your \"typical\" queries.\n> say the one above.. Change the search-term to something \"you'd expect\n> the user to enter in a minute, but hasn't been run\". (could be \"museum\"\n> instead\n> of \"zoo\".. then you run it with \\timing and twice.. if the two queries are\n> \"close\" to each other in timing, then you only hit memory anyway and\n> neither SSD, NVRAM or more RAM will buy you anything. Faster memory\n> and faster CPU-cores will.. if you have a significant speedup to the\n> second run, then more RAM, NVRAM, SSD is a good fix.\n>\n> Typically I have slow-query-logging turned on, permanently set to around\n> 250ms.\n> If I find queries in the log that \"i didnt expect\" to take above 250ms then\n> I'd start to investigate if query-plans are correct .. and so on..\n>\n> The above numbers are \"raw-data\" size and now how PG uses them.. or?\n> And you havent told anything about the size of your current system.\n\nIts definitely the case that the second query run is much faster\n(first ones go up to 30 seconds and more...).\n\nPG uses the raw data for Switzerlad like this: 10 GB total disk space\nbased on 2 GB raw XML input. Table osm_point is one of the four big\ntables and uses 984 MB for table and 1321 MB for indexes (where hstore\nis the biggest from id, name and geometry).\n\nStefan\n",
"msg_date": "Tue, 6 Sep 2011 15:45:57 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Summaries on SSD usage?"
},
{
"msg_contents": "On 09/06/2011 08:45 AM, Stefan Keller wrote:\n\n> Do you think my problem would now be solved with NVRAM PCI card?\n\nThat's a tough call. Part of the reason I'm doing the presentation is \nbecause there are a lot of other high OLTP databases out there which \nhave (or will) reached critical mass where cache can't fulfill generic \ndatabase requests anymore.\n\nAs an example, we were around 11k database transactions per second on \n250GB of data with 32GB of RAM. The first thing we tried was bumping it \nup to 64GB, and that kinda worked. But what you'll find, is that an \nautovacuum, or a nightly vacuum, will occasionally hit a large table and \nflush all of that handy cached data down the tubes, and then your \ndatabase starts choking trying to keep up with the requests.\n\nEven a large, well equipped RAID can only really offer 2500-ish TPS \nbefore you start getting into the larger and more expensive SANs, so you \neither have to pre-load your memory with dd or pgfincore, or if your \nrandom access patterns actually exceed your RAM, you need a bigger disk \npool or tiered storage. And by tiered storage, I mean tablespaces, with \ncritical high-TPS tables located on a PCIe card or a pool of modern \n(capacitor-backed, firmware GC) SSDs.\n\nYour case looks more like you have just a couple big-ass queries/tables \nthat occasionally give you trouble. If optimizing the queries, index \ntweaks, and other sundry tools can't help anymore, you may have to start \ndragging ou the bigger guns. But if you can afford it, having some NVRam \nstorage around as a top-tier tablespace for critical-need data is \nprobably good practice these days.\n\nThey're expensive, though. Even the cheap ones start around $5k. Just \nremember you're paying for the performance in this case, and not storage \ncapacity. Some vendors have demo hardware they'll let you use to \ndetermine if it applies to your case, so you might want to contact \nFusionIO, RAMSAN, Virident, or maybe OCZ.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 6 Sep 2011 09:07:14 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Summaries on SSD usage?"
}
] |
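The "tiered storage, I mean tablespaces" point in the last message maps onto ordinary DDL; a minimal sketch with made-up names and paths (the directory must already exist and be owned by the database server's OS user, and ALTER ... SET TABLESPACE physically copies the table):

    CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
    ALTER TABLE hot_transactions SET TABLESPACE fast_ssd;
    CREATE INDEX hot_transactions_ts_idx ON hot_transactions (created_at)
        TABLESPACE fast_ssd;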
[
{
"msg_contents": "Hi all,\n\nI have read things someplace saying not exists was better than not in... \nor something like that. Not sure if that was for in/exists and not \nin/not exists, and for a lot of records or not.\n\nHere is my setup:\n\nMy website has a general table, let say 60k rows. Its mostly read-only. \n Every once and a while we get updated data, so I:\ncreate schema upd;\ncreate table upd.general(like public.general);\n\nThen I dump the new data into upd.general. (This has many table's and \nsteps, I'm simplifying it here).\n\nFor the last step, I want to:\n\nbegin;\ndelete from public.general where gid in (select gid from upd.general);\ninsert into public.general select * from upd.general;\n... 7 other tables same way ...\ncommit;\n\n\nMost of the time upd.general will be < 500 rows. Every once and a while \nthings get messed up and we just update the entire database, so count(*) \nupd.general == count(*) public.general.\n\nMy question is:\nfast is nice, but safe and less resource intensive is better, so which \nwould I probably like better:\n\ndelete from public.general where gid in (select gid from upd.general);\n\nor\n\n-- currently dont have and index, so\ncreate index general_pk on upd.general(gid);\ndelete from public.general a where exists(select 1 from upd.general b \nwhere a.gid=b.gid);\n\n\nThanks for any suggestions,\n\n-Andy\n",
"msg_date": "Tue, 30 Aug 2011 15:30:23 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": true,
"msg_subject": "IN or EXISTS"
},
{
"msg_contents": "On 31/08/2011 4:30 AM, Andy Colson wrote:\n> Hi all,\n>\n> I have read things someplace saying not exists was better than not \n> in... or something like that. Not sure if that was for in/exists and \n> not in/not exists, and for a lot of records or not.\n>\n`EXISTS' may perform faster than `IN', yes. Using `IN' it is necessary \nto build a list of values then iterate over them to check for a match. \nBy contrast, `EXISTS' may use a simple index lookup or the like to test \nfor the presence of a value.\n\nOn the other hand, the `IN' subquery is uncorrelated needs only run \nonce, where the `EXISTS' subquery is correlated and has to run once for \nevery outer record. That means that the `IN' list approach can be a lot \nfaster where the subquery in question is relatively time consuming for \nthe number of values it returns. For example, if the `IN' query returns \nonly 5 values and takes 100ms, you're scanning 1 million records in the \nouter query, and the subquery `EXISTS' version would take 50ms, using \n`IN' is a no-brainer since 1 million times 50ms will be a lot slower \nthan 1 times 100ms plus the time required to scan 5 elements 1 million \ntimes.\n\nAnother complication is the possible presence of NULL in an IN list. \nGetting NULLs in `IN' lists is a common source of questions on this \nlist, because people are quite surprised by how it works. EXISTS avoids \nthe NULL handling issue (and in the process demonstrates how woefully \ninconsistent SQL's handling of NULL really is).\n\nTheoretically the query planner could transform:\n\nSELECT * from y WHERE y.id IN (SELECT DISTINCT z.y_id FROM z WHERE \nz.y_id IS NOT NULL);\n\ninto:\n\nSELECT * FROM y WHERE EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id)\n\n... or vice versa depending on which it thought would be faster. AFAIK \nit doesn't currently do this. To be able to do it the planner would need \nto know how to estimate the cost of scanning an `IN' result list. It'd \nalso need to be able to use constraints on the target table to prove \nthat the result of the `IN' may not contain nulls. To transform the \nEXISTS version into the IN version where it'd be more efficient, it'd \nalso have to be able to use constraints on the target table to prove \nthat results of a SELECT would be unique without explicit deduplication.\n\nAll this makes me wonder ... does Pg currently support sorting IN lists \nand using a binary search? It'd be pretty nice to be able to prove that:\n\nSELECT * from y WHERE y.id IN (SELECT z.y_id FROM z);\n\nis equvalent to:\n\nSELECT * FROM y WHERE y.id IN (SELECT DISTINCT z.y_id FROM z WHERE z_id \nIS NOT NULL)\n\n... and either transform it to an EXISTS test or add an ORDER BY z_id \nand flag the resultset as sorted so a binary search could be done on it \nwhenever a row hits the IN test.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 31 Aug 2011 09:33:28 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN or EXISTS"
},
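The NULL surprise mentioned above is easy to reproduce in a self-contained snippet (table and column names borrowed from the example in the message): one NULL in the subquery makes NOT IN return nothing, while NOT EXISTS still behaves the way most people expect.

    CREATE TEMP TABLE y (id int);
    CREATE TEMP TABLE z (y_id int);
    INSERT INTO y VALUES (1), (2);
    INSERT INTO z VALUES (1), (NULL);

    SELECT * FROM y WHERE id NOT IN (SELECT y_id FROM z);      -- 0 rows
    SELECT * FROM y
    WHERE NOT EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id);    -- returns id = 2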
{
"msg_contents": "On 8/30/2011 8:33 PM, Craig Ringer wrote:\n> On 31/08/2011 4:30 AM, Andy Colson wrote:\n>> Hi all,\n>>\n>> I have read things someplace saying not exists was better than not\n>> in... or something like that. Not sure if that was for in/exists and\n>> not in/not exists, and for a lot of records or not.\n>>\n> `EXISTS' may perform faster than `IN', yes. Using `IN' it is necessary\n> to build a list of values then iterate over them to check for a match.\n> By contrast, `EXISTS' may use a simple index lookup or the like to test\n> for the presence of a value.\n>\n> On the other hand, the `IN' subquery is uncorrelated needs only run\n> once, where the `EXISTS' subquery is correlated and has to run once for\n> every outer record. That means that the `IN' list approach can be a lot\n> faster where the subquery in question is relatively time consuming for\n> the number of values it returns. For example, if the `IN' query returns\n> only 5 values and takes 100ms, you're scanning 1 million records in the\n> outer query, and the subquery `EXISTS' version would take 50ms, using\n> `IN' is a no-brainer since 1 million times 50ms will be a lot slower\n> than 1 times 100ms plus the time required to scan 5 elements 1 million\n> times.\n>\n> Another complication is the possible presence of NULL in an IN list.\n> Getting NULLs in `IN' lists is a common source of questions on this\n> list, because people are quite surprised by how it works. EXISTS avoids\n> the NULL handling issue (and in the process demonstrates how woefully\n> inconsistent SQL's handling of NULL really is).\n>\n> Theoretically the query planner could transform:\n>\n> SELECT * from y WHERE y.id IN (SELECT DISTINCT z.y_id FROM z WHERE\n> z.y_id IS NOT NULL);\n>\n> into:\n>\n> SELECT * FROM y WHERE EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id)\n>\n> ... or vice versa depending on which it thought would be faster. AFAIK\n> it doesn't currently do this. To be able to do it the planner would need\n> to know how to estimate the cost of scanning an `IN' result list. It'd\n> also need to be able to use constraints on the target table to prove\n> that the result of the `IN' may not contain nulls. To transform the\n> EXISTS version into the IN version where it'd be more efficient, it'd\n> also have to be able to use constraints on the target table to prove\n> that results of a SELECT would be unique without explicit deduplication.\n>\n> All this makes me wonder ... does Pg currently support sorting IN lists\n> and using a binary search? It'd be pretty nice to be able to prove that:\n>\n> SELECT * from y WHERE y.id IN (SELECT z.y_id FROM z);\n>\n> is equvalent to:\n>\n> SELECT * FROM y WHERE y.id IN (SELECT DISTINCT z.y_id FROM z WHERE z_id\n> IS NOT NULL)\n>\n> ... and either transform it to an EXISTS test or add an ORDER BY z_id\n> and flag the resultset as sorted so a binary search could be done on it\n> whenever a row hits the IN test.\n>\n> --\n> Craig Ringer\n>\n\nYeah... my current code uses IN. Most of my updates are small, so my \ninner list is 500 integers. It runs fine. What I'm worried about is \nwhen I update the entire table, so my inner list is 60k integers. Maybe \nI'm just worrying for naught. 
I tested a table with 100k rows, ran both \nwith explain analyzes, and they look the same:\n\n Delete (cost=11186.26..20817.60 rows=25911 width=12) (actual \ntime=408.138..408.138 rows=0 loops=1)\n -> Hash Semi Join (cost=11186.26..20817.60 rows=25911 width=12) \n(actual time=61.997..182.573 rows=105434 loops=1)\n Hash Cond: (public.general.gid = upd.general.gid)\n -> Seq Scan on general (cost=0.00..9113.11 rows=25911 \nwidth=10) (actual time=0.004..42.364 rows=105434 loops=1)\n -> Hash (cost=9868.34..9868.34 rows=105434 width=10) (actual \ntime=61.958..61.958 rows=105434 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 4531kB\n -> Seq Scan on general (cost=0.00..9868.34 rows=105434 \nwidth=10) (actual time=0.003..34.372 rows=105434 loops=1)\n\nWith or without an index, (even if I ANALYZE it) it still does a table \nscan and builds a hash. Both IN and EXISTS act the same way.\n\nI assume:\nBuckets: 16384 Batches: 1 Memory Usage: 4531kB\n\nThat means a total of 4.5 meg of ram was used for the hash, so if my \nwork_mem was lower than that it would swap? (or choose a different plan?)\n\nI'll only ever be running one update at a time, so I'm not worried about \nmultiple connections running at once.\n\nAnyway, I'll just leave it alone (and stop optimizing things that dont \nneed it)\n\n-Andy\n",
"msg_date": "Wed, 31 Aug 2011 08:59:59 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IN or EXISTS"
},
{
"msg_contents": "On 31 Srpen 2011, 15:59, Andy Colson wrote:\n> I assume:\n> Buckets: 16384 Batches: 1 Memory Usage: 4531kB\n>\n> That means a total of 4.5 meg of ram was used for the hash, so if my\n> work_mem was lower than that it would swap? (or choose a different plan?)\n\nWhy don't you try that? Just set the work_mem to 1MB or so and run the query.\n\nI think it'll use the same plan but multiple batches - read just part of\nthe inner table so that the hash table fits into work_mem, scan the outer\ntable etc. The downside is it'd rescan the outer table several times.\n\nTomas\n\n",
"msg_date": "Wed, 31 Aug 2011 16:19:36 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN or EXISTS"
},
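The suggested experiment in runnable form, using the delete from earlier in the thread; SET LOCAL keeps the lowered work_mem inside the transaction, and the ROLLBACK is there because EXPLAIN ANALYZE really executes the DELETE:

    BEGIN;
    SET LOCAL work_mem = '1MB';
    EXPLAIN ANALYZE
    DELETE FROM public.general
    WHERE gid IN (SELECT gid FROM upd.general);
    ROLLBACK;

With work_mem below the reported hash memory usage, the Hash node in the output should show more than one batch.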
{
"msg_contents": "On Wed, 2011-08-31 at 09:33 +0800, Craig Ringer wrote:\n> On the other hand, the `IN' subquery is uncorrelated needs only run \n> once, where the `EXISTS' subquery is correlated and has to run once for \n> every outer record.\n\nIf the EXISTS looks semantically similar to an IN (aside from NULL\nsemantics), then it can be made into a semijoin. It doesn't require\nre-executing any part of the plan.\n\nI don't think there are any cases where [NOT] IN is an improvement, am I\nmistaken?\n\n> Another complication is the possible presence of NULL in an IN list. \n> Getting NULLs in `IN' lists is a common source of questions on this \n> list, because people are quite surprised by how it works. EXISTS avoids \n> the NULL handling issue (and in the process demonstrates how woefully \n> inconsistent SQL's handling of NULL really is).\n\nAbsolutely. The NULL behavior of IN is what makes it hard to optimize,\nand therefore you should use EXISTS instead if the semantics are\nsuitable.\n\n> Theoretically the query planner could transform:\n> \n> SELECT * from y WHERE y.id IN (SELECT DISTINCT z.y_id FROM z WHERE \n> z.y_id IS NOT NULL);\n> \n> into:\n> \n> SELECT * FROM y WHERE EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id)\n> \n> ... or vice versa depending on which it thought would be faster.\n\nAlthough those two queries are semantically the same (I think), a lot of\nvery similar pairs of queries are not equivalent. For instance, if it\nwas a NOT IN you couldn't change that to a NOT EXISTS.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 22 Sep 2011 18:32:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN or EXISTS"
}
] |
[
{
"msg_contents": "Hello all,\nI have a query which takes about 20 minutes to execute and retrieves \n2000-odd records. The explain for the query is pasted here\nhttp://explain.depesz.com/s/52f\nThe same query, with similar data structures/indexes and data comes back \nin 50 seconds in Oracle. We just ported the product to PostgreSQL and are \ntesting it. Any input on what to look for?\n\nPossible relevant parameters are \nshared_buffers = 4GB \ntemp_buffers = 8MB \nwork_mem = 96MB \nmaintenance_work_mem = 1GB \neffective_cache_size = 8GB \ndefault_statistics_target = 50 \n\nIt is a machine with 16 GB RAM.\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello all,\nI have a query which takes about 20\nminutes to execute and retrieves 2000-odd records. The explain for the\nquery is pasted here\nhttp://explain.depesz.com/s/52f\nThe same query, with similar data structures/indexes\nand data comes back in 50 seconds in Oracle. We just ported the product\nto PostgreSQL and are testing it. Any input on what to look for?\n\nPossible relevant parameters are \nshared_buffers = 4GB \n \ntemp_buffers = 8MB \n \nwork_mem = 96MB \n \nmaintenance_work_mem = 1GB \n \neffective_cache_size = 8GB \n \ndefault_statistics_target = 50 \n \n\nIt is a machine with 16 GB RAM.\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 14:30:31 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance issue"
},
{
"msg_contents": "On 31.08.2011 12:00, Jayadevan M wrote:\n> Hello all,\n> I have a query which takes about 20 minutes to execute and retrieves\n> 2000-odd records. The explain for the query is pasted here\n> http://explain.depesz.com/s/52f\n> The same query, with similar data structures/indexes and data comes back\n> in 50 seconds in Oracle. We just ported the product to PostgreSQL and are\n> testing it. Any input on what to look for?\n>\n> Possible relevant parameters are\n> shared_buffers = 4GB\n> temp_buffers = 8MB\n> work_mem = 96MB\n> maintenance_work_mem = 1GB\n> effective_cache_size = 8GB\n> default_statistics_target = 50\n>\n> It is a machine with 16 GB RAM.\n\nPlease run EXPLAIN ANALYZE on the query and post that, it's hard to say \nwhat's wrong from just the query plan, without knowing where the time is \nactually spent. And the schema of the tables involved, and any indexes \non them. (see also http://wiki.postgresql.org/wiki/SlowQueryQuestions)\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 31 Aug 2011 12:34:28 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Where is the query? And also paste the \\d to show the tables and\nindexes.\n\n-Sushant.\n\nOn Wed, 2011-08-31 at 14:30 +0530, Jayadevan M wrote:\n> Hello all, \n> I have a query which takes about 20 minutes to execute and retrieves\n> 2000-odd records. The explain for the query is pasted here \n> http://explain.depesz.com/s/52f \n> The same query, with similar data structures/indexes and data comes\n> back in 50 seconds in Oracle. We just ported the product to PostgreSQL\n> and are testing it. Any input on what to look for? \n> \n> Possible relevant parameters are \n> shared_buffers = 4GB \n> temp_buffers = 8MB \n> work_mem = 96MB \n> maintenance_work_mem = 1GB \n> effective_cache_size = 8GB \n> default_statistics_target = 50 \n> \n> It is a machine with 16 GB RAM. \n> Regards, \n> Jayadevan\n> \n> \n> \n> \n> \n> DISCLAIMER: \n> \n> \"The information in this e-mail and any attachment is intended only\n> for the person to whom it is addressed and may contain confidential\n> and/or privileged material. If you have received this e-mail in error,\n> kindly contact the sender and destroy all copies of the original\n> communication. IBS makes no warranty, express or implied, nor\n> guarantees the accuracy, adequacy or completeness of the information\n> contained in this email or any attachment and is not liable for any\n> errors, defects, omissions, viruses or for resultant loss or damage,\n> if any, direct or indirect.\"\n> \n> \n> \n> \n\n\n",
"msg_date": "Wed, 31 Aug 2011 15:07:20 +0530",
"msg_from": "Sushant Sinha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Hello,\n\n> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say \n> what's wrong from just the query plan, without knowing where the time is \n\n> actually spent. And the schema of the tables involved, and any indexes \n> on them. (see also http://wiki.postgresql.org/wiki/SlowQueryQuestions)\nThe details of the tables and indexes may take a bit of effort to explain. \nWill do that.\nI remembered that a similar query took about 90 seconds to run a few days \nago. Now that is also taking a few minutes to run. In between, we made \nsome changes to a few tables (the tables are about 9-10 GB each). This was \nto fix some issue in conversion from CHARACTER VARYING to BOOLEAN on \nPostgreSQL (some columns in Oracle were of type VARCHAR, to store BOOLEAN \nvalues. We changed that to BOOLEAN in PostgreSQL to resolve some issues at \nthe jdbc level). The alters were of similar type - \n\nALTER TABLE cusdynatr ALTER tstflg TYPE boolean USING CASE WHEN tstflg = \n'1' THEN true WHEN tstflg = '0' then FALSE END;\n\nDo such alters result in fragmentation at storage level?\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n\n> Please run EXPLAIN ANALYZE on the query and post that, it's hard to\nsay \n> what's wrong from just the query plan, without knowing where the time\nis \n> actually spent. And the schema of the tables involved, and any indexes\n\n> on them. (see also http://wiki.postgresql.org/wiki/SlowQueryQuestions)\nThe details of the tables and indexes may take a bit of effort to explain.\nWill do that.\nI remembered that a similar query took about 90 seconds\nto run a few days ago. Now that is also taking a few minutes to run. In\nbetween, we made some changes to a few tables (the tables are about 9-10\nGB each). This was to fix some issue in conversion from CHARACTER VARYING\nto BOOLEAN on PostgreSQL (some columns in Oracle were of type VARCHAR,\nto store BOOLEAN values. We changed that to BOOLEAN in PostgreSQL to resolve\nsome issues at the jdbc level). The alters were of similar type - \n\nALTER TABLE cusdynatr ALTER tstflg TYPE\nboolean USING CASE WHEN tstflg = '1' THEN true WHEN tstflg = '0' then FALSE\nEND;\n\nDo such alters result in fragmentation at storage\nlevel?\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 15:37:40 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue"
},
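On the fragmentation question above: an ALTER ... TYPE with a USING expression rewrites the table and rebuilds its indexes, so the old storage is not left fragmented, but it is still worth re-analyzing afterwards so the planner has fresh statistics for the converted column. A sketch based on the statement quoted in the message:

    ALTER TABLE cusdynatr
        ALTER COLUMN tstflg TYPE boolean
        USING CASE WHEN tstflg = '1' THEN true
                   WHEN tstflg = '0' THEN false END;
    ANALYZE cusdynatr;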
{
"msg_contents": "Hello,\n> \n> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say \n> what's wrong from just the query plan, without knowing where the time is \n\n> actually spent. \nHere is the explain analyze\nhttp://explain.depesz.com/s/MY1\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n> \n> Please run EXPLAIN ANALYZE on the query and post that, it's hard to\nsay \n> what's wrong from just the query plan, without knowing where the time\nis \n> actually spent. \nHere is the explain analyze\nhttp://explain.depesz.com/s/MY1\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 16:21:46 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Hello,\n\n> > \n> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to \nsay \n> > what's wrong from just the query plan, without knowing where the time \nis \n> > actually spent. \n> Here is the explain analyze \n> http://explain.depesz.com/s/MY1 \nGoing through the url tells me that statistics may be off. I will try \nanalyzing the tables. That should help?\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n\n> > \n> > Please run EXPLAIN ANALYZE on the query and post that, it's hard\nto say \n> > what's wrong from just the query plan, without knowing where\nthe time is \n> > actually spent. \n> Here is the explain analyze \n> http://explain.depesz.com/s/MY1\n\nGoing through the url tells me that statistics may be off. I will try analyzing\nthe tables. That should help?\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 16:49:37 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue"
},
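Since the configuration in the first message has default_statistics_target = 50 (below the 100 that 8.4 and later default to), a fresh ANALYZE plus a higher per-column target is the cheapest experiment; memmst and memshpsta below are taken from the plan's sort key, and the target of 200 is only illustrative:

    -- when were the tables last analyzed?
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY relname;

    ALTER TABLE memmst ALTER COLUMN memshpsta SET STATISTICS 200;
    ANALYZE VERBOSE memmst;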
{
"msg_contents": "Missed out looping in community...\n\nOn Wed, Aug 31, 2011 at 5:01 PM, Venkat Balaji <[email protected]>wrote:\n\n> Could you help us know the tables and columns on which Indexes are built ?\n>\n> Query is performing sorting based on key upper(column) and that is where i\n> believe the cost is high.\n>\n> The 'upper' function is used up in the where clause?\n>\n> Thanks\n> Venkat\n>\n>\n> On Wed, Aug 31, 2011 at 4:49 PM, Jayadevan M <[email protected]\n> > wrote:\n>\n>> Hello,\n>>\n>> > >\n>> > > Please run EXPLAIN ANALYZE on the query and post that, it's hard to\n>> say\n>> > > what's wrong from just the query plan, without knowing where the time\n>> is\n>> > > actually spent.\n>> > Here is the explain analyze\n>> > http://explain.depesz.com/s/MY1\n>>\n>> Going through the url tells me that statistics may be off. I will try\n>> analyzing the tables. That should help?\n>> Regards,\n>> Jayadevan\n>>\n>>\n>>\n>>\n>>\n>> DISCLAIMER:\n>>\n>> \"The information in this e-mail and any attachment is intended only for\n>> the person to whom it is addressed and may contain confidential and/or\n>> privileged material. If you have received this e-mail in error, kindly\n>> contact the sender and destroy all copies of the original communication. IBS\n>> makes no warranty, express or implied, nor guarantees the accuracy, adequacy\n>> or completeness of the information contained in this email or any attachment\n>> and is not liable for any errors, defects, omissions, viruses or for\n>> resultant loss or damage, if any, direct or indirect.\"\n>>\n>>\n>>\n>>\n>>\n>\n\nMissed out looping in community...On Wed, Aug 31, 2011 at 5:01 PM, Venkat Balaji <[email protected]> wrote:\nCould you help us know the tables and columns on which Indexes are built ?Query is performing sorting based on key upper(column) and that is where i believe the cost is high.\nThe 'upper' function is used up in the where clause?\nThanksVenkatOn Wed, Aug 31, 2011 at 4:49 PM, Jayadevan M <[email protected]> wrote:\nHello,\n\n> > \n> > Please run EXPLAIN ANALYZE on the query and post that, it's hard\nto say \n> > what's wrong from just the query plan, without knowing where\nthe time is \n> > actually spent. \n> Here is the explain analyze \n> http://explain.depesz.com/s/MY1\n\nGoing through the url tells me that statistics may be off. I will try analyzing\nthe tables. That should help?\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 17:02:58 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "On 31 Srpen 2011, 13:19, Jayadevan M wrote:\n> Hello,\n>\n>> >\n>> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to\n> say\n>> > what's wrong from just the query plan, without knowing where the time\n> is\n>> > actually spent.\n>> Here is the explain analyze\n>> http://explain.depesz.com/s/MY1\n> Going through the url tells me that statistics may be off. I will try\n> analyzing the tables. That should help?\n> Regards,\n> Jayadevan\n\nThat could help, but not necessarily.\n\nA really interesting part is the sort near the bottom -\n\n-> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual\ntime=25.926..711784.723 rows=2673340321 loops=1)\n Sort Key: memmst.memshpsta\n Sort Method: quicksort Memory: 206kB\n -> Nested Loop (cost=0.01..1887.62 rows=215 width=61) (actual\ntime=0.088..23.445 rows=1121 loops=1)\n\nHow can a sort ge 1121 rows at the input and return 2673340321 rows at the\noutput? Not sure where this comes from.\n\nBTW what PostgreSQL version is this?\n\nTomas\n\n",
"msg_date": "Wed, 31 Aug 2011 13:41:55 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "> \n> A really interesting part is the sort near the bottom -\n> \n> -> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual\n> time=25.926..711784.723 rows=2673340321 loops=1)\n> Sort Key: memmst.memshpsta\n> Sort Method: quicksort Memory: 206kB\n> -> Nested Loop (cost=0.01..1887.62 rows=215 width=61) (actual\n> time=0.088..23.445 rows=1121 loops=1)\n> \n> How can a sort ge 1121 rows at the input and return 2673340321 rows at \nthe\n> output? Not sure where this comes from.\n> \n> BTW what PostgreSQL version is this?\nPostgreSQL 9.0.4 on x86_64-pc-solaris2.10\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n> \n> A really interesting part is the sort near the bottom -\n> \n> -> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual\n> time=25.926..711784.723 rows=2673340321 loops=1)\n> Sort Key: memmst.memshpsta\n> Sort Method: quicksort Memory: 206kB\n> -> Nested Loop (cost=0.01..1887.62 rows=215\nwidth=61) (actual\n> time=0.088..23.445 rows=1121 loops=1)\n> \n> How can a sort ge 1121 rows at the input and return 2673340321 rows\nat the\n> output? Not sure where this comes from.\n> \n> BTW what PostgreSQL version is this?\nPostgreSQL 9.0.4 on x86_64-pc-solaris2.10\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 31 Aug 2011 17:27:21 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Jayadevan M wrote:\n \n>> And the schema of the tables involved, and any indexes on them.\n \n> The details of the tables and indexes may take a bit of effort to\n> explain. Will do that.\n \nIn psql you can do \\d to get a decent summary.\n \nWithout seeing the query and the table definitions, it's hard to give\nadvice; especially when a sort step increases the number of rows.\nI'm guessing there is incorrect usage of some set-returning function.\n \n-Kevin\n",
"msg_date": "Wed, 31 Aug 2011 07:40:29 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Here goes....I think it might be difficult to go through all these\ndefinitions.. \nPRGMEMACCMST \n\n Table \"public.prgmemaccmst\" \n Column | Type | Modifiers \n--------------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n prgcod | character varying(5) | not null \n memshpnum | character varying(30) | not null \n accsta | character varying(1) | not null \n accstachgdat | timestamp without time zone | not null \n expdat | timestamp without time zone | \n tircod | character varying(5) | \n tirexpdat | timestamp without time zone | \n crdexpdat | timestamp without time zone | \n tiraltdat | timestamp without time zone | \n crdlmtalwflg | boolean | \n lstactdat | timestamp without time zone | \n enrsrc | character varying(1) | not null \n enrsrccod | character varying(15) | \n enrdat | timestamp without time zone | not null \n acrpntflg | boolean | \n usrcod | character varying(25) | \n upddat | timestamp without time zone | \n erlrgn | character varying(20) | \n susflg | character varying(1) | \n fstactdat | timestamp without time zone | \n fstacractnum | character varying(12) | \n acccrtdat | timestamp without time zone | not null \n lsttirprcdat | timestamp without time zone | \n enrtircod | character varying(5) | \nIndexes: \n \"prgmemaccmst_pkey\" PRIMARY KEY, btree (cmpcod, prgcod, memshpnum) \n \"prgmemaccmst_accsta_idx\" btree (accsta) \n \"prgmemaccmst_enrdat_idx\" btree (enrdat) \n \"prgmemaccmst_tircod_idx\" btree (tircod) \n \"prgmemaccmst_tirexpdat_ind\" btree (tirexpdat) \n\n \n\nEAIMEMPFLMST \n View \"public.eaimempflmst\" \n Column | Type | Modifiers | Storage |\nDescription \n-----------+-----------------------------+-----------+----------+------------- \n cmpcod | character varying(5) | | extended | \n memshpnum | character varying(30) | | extended | \n memshptyp | character varying(1) | | extended | \n memshpsta | character varying(1) | | extended | \n pin | character varying(50) | | extended | \n sctqst | character varying(200) | | extended | \n sctans | character varying(200) | | extended | \n rtoclmcnt | smallint | | plain | \n usrcod | character varying(25) | | extended | \n upddat | timestamp without time zone | | plain | \n cusnum | character varying(11) | | extended | \nView definition: \n SELECT memmst.cmpcod, memmst.memshpnum, memmst.memshptyp, memmst.memshpsta,\nmemmst.pin, memmst.sctqst, memmst.sctans, memmst.rtoclmcnt, memmst.usrcod,\nmemmst.upddat, memmst.cusnum \n FROM memmst; \n\nmemmst \n Table \"public.memmst\" \n Column | Type | Modifiers \n-----------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n memshpnum | character varying(30) | not null \n memshptyp | character varying(1) | not null \n memshpsta | character varying(1) | not null \n pin | character varying(50) | not null \n sctqst | character varying(200) | \n sctans | character varying(200) | \n rtoclmcnt | smallint | \n usrcod | character varying(25) | \n upddat | timestamp without time zone | \n cusnum | character varying(11) | \n weblgn | boolean | \n rsncod | character varying(1) | \n lgntrycnt | smallint | \n lgntrytim | timestamp without time zone | \n rempinchg | boolean | \nIndexes: \n \"memmst_pkey\" PRIMARY KEY, btree (cmpcod, memshpnum) \n \"memmst_idx\" UNIQUE, btree (cusnum, memshpnum, cmpcod) \n \"memmst_upddat_idx\" btree (upddat) \n \n\n View \"public.eaicuspflcntinf\" \n Column | Type | Modifiers | Storage |\nDescription 
\n-----------+-----------------------------+-----------+----------+------------- \n cmpcod | character varying(5) | | extended | \n cusnum | character varying(11) | | extended | \n adrtyp | character varying(1) | | extended | \n adrlinone | character varying(150) | | extended | \n adrlintwo | character varying(150) | | extended | \n cty | character varying(100) | | extended | \n stt | character varying(100) | | extended | \n ctr | character varying(5) | | extended | \n zipcod | character varying(30) | | extended | \n emladr | character varying(100) | | extended | \n phnnum | character varying(50) | | extended | \n celisdcod | character varying(5) | | extended | \n celaracod | character varying(5) | | extended | \n celnum | character varying(50) | | extended | \n fax | character varying(50) | | extended | \n skypid | character varying(25) | | extended | \n upddat | timestamp without time zone | | plain | \n pstinvflg | boolean | | plain | \n emlinvflg | boolean | | plain | \nView definition: \n SELECT cuscntinf.cmpcod, cuscntinf.cusnum, cuscntinf.adrtyp,\ncuscntinf.adrlinone, cuscntinf.adrlintwo, cuscntinf.cty, cuscntinf.stt,\ncuscntinf.ctr, cuscntinf.zipcod, cuscntinf.emladr, cuscntinf.phnnum,\ncuscntinf.celisdcod, cuscntinf.celaracod, cuscntinf.celnum, cuscntinf.fax,\ncuscntinf.skypid, cuscntinf.upddat, cuscntinf.pstinvflg, cuscntinf.emlinvflg \n FROM cuscntinf; \n\ncuscntinf \n Table \"public.cuscntinf\" \n Column | Type | Modifiers \n--------------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n cusnum | character varying(11) | not null \n adrtyp | character varying(1) | not null \n adrlinone | character varying(150) | \n adrlintwo | character varying(150) | \n cty | character varying(100) | \n stt | character varying(100) | \n ctr | character varying(5) | \n zipcod | character varying(30) | \n emladr | character varying(100) | \n phnisdcod | character varying(5) | \n phnaracod | character varying(5) | \n phnnum | character varying(50) | \n celisdcod | character varying(5) | \n celaracod | character varying(5) | \n celnum | character varying(50) | \n faxisdcod | character varying(5) | \n faxaracod | character varying(5) | \n fax | character varying(50) | \n skypid | character varying(25) | \n upddat | timestamp without time zone | not null \n emlinvflg | boolean | \n pstinvflg | boolean | \n pstbnccnt | smallint | \n emlhrdbnccnt | smallint | default 0 \n emlmdmbnccnt | smallint | default 0 \n emlsftbnccnt | smallint | default 0 \n lstemlbncdat | timestamp without time zone | \n smsnotsnd | boolean | \nIndexes: \n \"cuscntinf_pkey\" PRIMARY KEY, btree (cmpcod, cusnum, adrtyp) \n \"cuscntinf_celaracod_idx\" btree (celaracod, cusnum, cmpcod) \n \"cuscntinf_celisdcod_idx\" btree (celisdcod, cusnum, cmpcod) \n \"cuscntinf_celnum_idx\" btree (celnum, cusnum, cmpcod) \n \"cuscntinf_emladr_idx\" btree (upper(emladr::text)) \n \"cuscntinf_upddat_idx\" btree (upddat) \n \nCOMONETIM \n Table \"public.comonetim\" \n Column | Type | Modifiers \n--------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n fldcod | character varying(50) | not null \n fldval | character varying(100) | not null \n flddes | character varying(100) | \n usrcod | character varying(25) | \n seqnum | smallint | \n upddat | timestamp without time zone | \n prvcod | character varying(10) | \nIndexes: \n \"comonetim_pkey\" PRIMARY KEY, btree (cmpcod, fldcod, fldval) \n\nCOMONETIM \n Table \"public.comonetim\" \n Column | Type | Modifiers 
\n--------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n fldcod | character varying(50) | not null \n fldval | character varying(100) | not null \n flddes | character varying(100) | \n usrcod | character varying(25) | \n seqnum | smallint | \n upddat | timestamp without time zone | \n prvcod | character varying(10) | \nIndexes: \n \"comonetim_pkey\" PRIMARY KEY, btree (cmpcod, fldcod, fldval) \n\nEAICUSPFLINDINF \n View \"public.eaicuspflindinf\" \n Column | Type | Modifiers | Storage | Description \n--------+-----------------------------+-----------+----------+------------- \n cmpcod | character varying(5) | | extended | \n cusnum | character varying(11) | | extended | \n prflng | character varying(5) | | extended | \n prfadr | character varying(1) | | extended | \n memtle | character varying(5) | | extended | \n gvnnam | character varying(80) | | extended | \n famnam | character varying(80) | | extended | \n initls | character varying(80) | | extended | \n dspnam | character varying(170) | | extended | \n memgnd | character varying(1) | | extended | \n mrlsta | character varying(1) | | extended | \n memdob | timestamp without time zone | | plain | \n idrnum | character varying(18) | | extended | \n pstnum | character varying(30) | | extended | \n cntres | character varying(5) | | extended | \n stfidn | character varying(15) | | extended | \n cmpnam | character varying(80) | | extended | \n dsg | character varying(80) | | extended | \n idttyp | character varying(1) | | extended | \n incbnd | character varying(2) | | extended | \n memnly | character varying(20) | | extended | \n upddat | timestamp without time zone | | plain | \nView definition: \n SELECT cusindinf.cmpcod, cusindinf.cusnum, cusindinf.prflng,\ncusindinf.prfadr, cusindinf.memtle, cusindinf.gvnnam, cusindinf.famnam,\ncusindinf.initls, cusindinf.dspnam, cusindinf.memgnd, cusindinf.mrlsta,\ncusindinf.memdob, cusindinf.idrnum, cusindinf.pstnum, cusindinf.cntres,\ncusindinf.stfidn, cusindinf.cmpnam, cusindinf.dsg, cusindinf.idttyp,\ncusindinf.incbnd, cusindinf.memnly, cusindinf.upddat \n FROM cusindinf; \n\n cusindinf \n Table \"public.cusindinf\" \n Column | Type | Modifiers \n--------+-----------------------------+----------- \n cmpcod | character varying(5) | not null \n cusnum | character varying(11) | not null \n prflng | character varying(5) | not null \n prfadr | character varying(1) | not null \n memtle | character varying(5) | not null \n gvnnam | character varying(80) | not null \n famnam | character varying(80) | not null \n initls | character varying(80) | \n dspnam | character varying(170) | \n memgnd | character varying(1) | not null \n mrlsta | character varying(1) | \n memdob | timestamp without time zone | \n pstnum | character varying(30) | \n cntres | character varying(5) | not null \n stfidn | character varying(15) | \n cmpnam | character varying(80) | \n dsg | character varying(80) | \n idttyp | character varying(1) | \n incbnd | character varying(2) | \n memnly | character varying(20) | \n idrnum | character varying(18) | \n upddat | timestamp without time zone | not null \nIndexes: \n \"cusindinf_pkey\" PRIMARY KEY, btree (cmpcod, cusnum) \n \"cusindinf_idrnum_idx\" btree (idrnum, cusnum, cmpcod) \n \"cusindinf_idx1\" btree (upper(gvnnam::text)) \n \"cusindinf_idx2\" btree (upper(famnam::text)) \n \"cusindinf_idx3\" btree (upper(cmpnam::text)) \n \"cusindinf_idx4\" btree (upper((gvnnam::text || ' '::text) ||\nfamnam::text)) \n \"cusindinf_upddat_idx\" btree 
(upddat) \n \n\nQuery - \nSELECT PFLMST.MEMSHPNUM, \n PFLMST.MEMSHPTYP, \n ACCMST.PRGCOD, \n CNTINF.EMLADR, \n CNTINF.CELISDCOD, \n CNTINF.CELARACOD, \n CNTINF.CELNUM, \n CNTINF.ADRLINONE , \n CNTINF.ZIPCOD, \n CNTINF.ADRTYP, \n ONE.FLDDES ACCSTA, \n ONE1.FLDDES MEMSHPSTA, \n INDINF.CMPNAM EMPNAM, \n INDINF.PRFADR, \n INDINF.GVNNAM GVNNAM, \n INDINF.FAMNAM FAMNAM, \n INDINF.MEMDOB MEMDOB \nFROM PRGMEMACCMST ACCMST \nJOIN EAIMEMPFLMST PFLMST \nON ACCMST.CMPCOD = PFLMST.CMPCOD \nAND ACCMST.MEMSHPNUM = PFLMST.MEMSHPNUM \nJOIN EAICUSPFLCNTINF CNTINF \nON CNTINF.CMPCOD = PFLMST.CMPCOD \nAND CNTINF.CUSNUM = PFLMST.CUSNUM \nJOIN COMONETIM ONE \nON ONE.CMPCOD =ACCMST.CMPCOD \nAND ONE.FLDCOD='program.member.accountStatus' \nAND ONE.FLDVAL=ACCMST.ACCSTA \nJOIN COMONETIM ONE1 \nON ONE1.CMPCOD =ACCMST.CMPCOD \nAND ONE1.FLDCOD='common.member.membershipStatus' \nAND ONE1.FLDVAL=PFLMST.MEMSHPSTA \nLEFT JOIN EAICUSPFLINDINF INDINF \nON INDINF.CMPCOD = PFLMST.CMPCOD \nAND INDINF.CUSNUM = PFLMST.CUSNUM \nWHERE ACCMST.CMPCOD= 'SA' \nAND UPPER(INDINF.FAMNAM) LIKE 'PRICE' \n || '%' \nORDER BY UPPER(INDINF.GVNNAM), \n UPPER(INDINF.FAMNAM), \n UPPER(INDINF.CMPNAM) \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-performance-issue-tp4753453p4764725.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 2 Sep 2011 21:48:43 -0700 (PDT)",
"msg_from": "Jayadevan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Order by ...upper(xyz), do you have functional index on these ?\n",
"msg_date": "Sun, 4 Sep 2011 11:38:02 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
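A sketch of the kind of expression index being asked about here, built to match the ORDER BY UPPER(...) columns of the query posted above. It is illustrative only: the schema shown earlier already carries single-column upper() indexes on cusindinf, and the index name below is made up.

    CREATE INDEX cusindinf_upper_sort_idx
        ON cusindinf (upper(gvnnam), upper(famnam), upper(cmpnam));

Whether the planner can actually use such an index for the sort depends on the rest of the plan (the joins and the LIKE filter), so this is a possible experiment rather than a guaranteed fix.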
{
"msg_contents": "Jayadevan M wrote:\n \n> Here is the explain analyze\n> http://explain.depesz.com/s/MY1\n \n> PostgreSQL 9.0.4 on x86_64-pc-solaris2.10\n \n> work_mem = 96MB\n \nThanks for posting the query and related schema. I tried working\nthrough it, but I keep coming back to this sort, and wondering how a\nsort can have 1121 rows as input and 2673340321 rows as output. Does\nanyone have any ideas on what could cause that?\n \n -> Sort (cost=1895.95..1896.49 rows=215 width=61)\n (actual time=25.926..711784.723\n rows=2673340321 loops=1)\n Sort Key: memmst.memshpsta\n Sort Method: quicksort Memory: 206kB\n -> Nested Loop (cost=0.01..1887.62 rows=215 width=61)\n (actual time=0.088..23.445\n rows=1121 loops=1)\n \n-Kevin\n",
"msg_date": "Sun, 04 Sep 2011 09:30:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Thanks for posting the query and related schema. I tried working\n> through it, but I keep coming back to this sort, and wondering how a\n> sort can have 1121 rows as input and 2673340321 rows as output. Does\n> anyone have any ideas on what could cause that?\n\nMergejoin rescan. There really are only 1121 rows in the data, but\nthe parent merge join is pulling them over and over again --- evidently\nthere are a lot of equal keys in the data. The EXPLAIN ANALYZE\nmachinery counts each fetch as a new row, even after a mark/restore.\n\nThe planner does know about that effect and will penalize merge joins\nwhen it realizes there are a lot of duplicate keys in the input. In\nthis case I'm thinking that the drastic underestimate of the size of the\nother side of the join results in not penalizing the merge enough.\n\n(On the other hand, hash joins don't like equal keys that much either...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Sep 2011 11:18:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue "
},
{
"msg_contents": "I don't think I understood all that. Anyway, is there a way to fix this -\neither by rewriting the query or by creating an index? The output does match\nwhat I am expecting. It does take more than 10 times the time taken by\nOracle for the same result, with PostgreSQL taking more than 20 minutes. I\nam sort of stuck on this since this query does get executed often. By the\nway, changing the filter from FAMNAM to GIVENNAME fetches results in 90\nseconds. Probably there is a difference in the cardinality of values in\nthese 2 columns.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-performance-issue-tp4753453p4768047.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sun, 4 Sep 2011 11:06:31 -0700 (PDT)",
"msg_from": "Jayadevan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "On 4 Září 2011, 20:06, Jayadevan wrote:\n> I don't think I understood all that. Anyway, is there a way to fix this -\n> either by rewriting the query or by creating an index? The output does\n> match\n> what I am expecting. It does take more than 10 times the time taken by\n> Oracle for the same result, with PostgreSQL taking more than 20 minutes. I\n> am sort of stuck on this since this query does get executed often. By the\n> way, changing the filter from FAMNAM to GIVENNAME fetches results in 90\n> seconds. Probably there is a difference in the cardinality of values in\n> these 2 columns.\n\nTom Lane explained why sort produces more rows (2673340321) than it gets\non the input (1121), or why it seems like that - it's a bit complicated\nbecause of the merge join.\n\nI'd try to increase statistics target - it's probably 100, change it to\n1000, run ANALYZE and try the query (it may improve the plan without the\nneed to mess with the query).\n\nIf that does not help, you'll have to change the query probably. The\nproblem is the explain analyze you've provided\n(http://explain.depesz.com/s/MY1) does not match the query from your\nyesterday's post so we can't really help with it. I do have some ideas of\nhow to change the query, but it's really wild guessing without the query\nplan.\n\nTomas\n\n",
"msg_date": "Sun, 4 Sep 2011 22:18:15 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
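A minimal sketch of the statistics-target change suggested above, assuming the misestimated column is the family name used in the filter (table and column names are taken from the schema posted earlier in the thread; 1000 is just the suggested target):

    ALTER TABLE cusindinf ALTER COLUMN famnam SET STATISTICS 1000;
    ANALYZE cusindinf;

The same effect can be had cluster-wide by raising default_statistics_target in postgresql.conf, at the cost of longer ANALYZE runs and larger statistics entries.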
{
"msg_contents": "Hello,\n> \n> If that does not help, you'll have to change the query probably. The\n> problem is the explain analyze you've provided\n> (http://explain.depesz.com/s/MY1) does not match the query from your\n> yesterday's post so we can't really help with it.\nThanks for the pointers. I think I posted the same plan, may be the \nvariable values changed. Anyway, I changed the query and now it comes back \nin 2 seconds. Here is the plan\nhttp://explain.depesz.com/s/n9S\nInteresting observation - PostgreSQL takes from 2 seconds to 20 minutes \nfetch the same data set of 2212 records, with slightly modified queries. \nOracle is consistent (taking under 1 minute in both cases), though not \nconsistently faster. The modified query is \nSELECT PFLMST.MEMSHPNUM,\n PFLMST.MEMSHPTYP,\n ACCMST.PRGCOD,\n CNTINF.EMLADR,\n CNTINF.CELISDCOD,\n CNTINF.CELARACOD,\n CNTINF.CELNUM,\n CNTINF.ADRLINONE ,\n CNTINF.ZIPCOD,\n CNTINF.ADRTYP,\n (select ONE.FLDDES from COMONETIM ONE\n WHERE ONE.CMPCOD =ACCMST.CMPCOD\n AND ONE.FLDCOD='program.member.accountStatus'\n AND ONE.FLDVAL=ACCMST.ACCSTA)ACCSTA,\n (SELECT ONE1.FLDDES FROM COMONETIM ONE1\n WHERE ONE1.CMPCOD =ACCMST.CMPCOD\n AND ONE1.FLDCOD='common.member.membershipStatus'\n AND ONE1.FLDVAL=PFLMST.MEMSHPSTA )MEMSHPSTA,\n INDINF.CMPNAM EMPNAM,\n INDINF.PRFADR,\n INDINF.GVNNAM GVNNAM,\n INDINF.FAMNAM FAMNAM,\n INDINF.MEMDOB MEMDOB\n FROM PRGMEMACCMST ACCMST\n JOIN EAIMEMPFLMST PFLMST\n ON ACCMST.CMPCOD = PFLMST.CMPCOD\n AND ACCMST.MEMSHPNUM = PFLMST.MEMSHPNUM\n JOIN EAICUSPFLCNTINF CNTINF\n ON CNTINF.CMPCOD = PFLMST.CMPCOD\n AND CNTINF.CUSNUM = PFLMST.CUSNUM\n LEFT JOIN EAICUSPFLINDINF INDINF\n ON INDINF.CMPCOD = PFLMST.CMPCOD\n AND INDINF.CUSNUM = PFLMST.CUSNUM\n WHERE ACCMST.CMPCOD= 'SA'\n AND UPPER(INDINF.FAMNAM) LIKE 'PRICE'\n || '%'\n ORDER BY UPPER(INDINF.GVNNAM),\n UPPER(INDINF.FAMNAM),\n UPPER(INDINF.CMPNAM) \n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n> \n> If that does not help, you'll have to change the query probably. The\n> problem is the explain analyze you've provided\n> (http://explain.depesz.com/s/MY1)\ndoes not match the query from your\n> yesterday's post so we can't really help with it.\nThanks for the pointers. I think I posted the same\nplan, may be the variable values changed. Anyway, I changed the query and\nnow it comes back in 2 seconds. Here is the plan\nhttp://explain.depesz.com/s/n9S\nInteresting observation - PostgreSQL takes from 2\nseconds to 20 minutes fetch the same data set of 2212 records, with slightly\nmodified queries. Oracle is consistent (taking under 1 minute in both cases),\nthough not consistently faster. 
The modified query is \nSELECT PFLMST.MEMSHPNUM,\n PFLMST.MEMSHPTYP,\n ACCMST.PRGCOD,\n CNTINF.EMLADR,\n CNTINF.CELISDCOD,\n CNTINF.CELARACOD,\n CNTINF.CELNUM,\n CNTINF.ADRLINONE ,\n CNTINF.ZIPCOD,\n CNTINF.ADRTYP,\n (select ONE.FLDDES from COMONETIM ONE\n WHERE ONE.CMPCOD =ACCMST.CMPCOD\n AND ONE.FLDCOD='program.member.accountStatus'\n AND ONE.FLDVAL=ACCMST.ACCSTA)ACCSTA,\n (SELECT ONE1.FLDDES FROM COMONETIM ONE1\n WHERE ONE1.CMPCOD =ACCMST.CMPCOD\n AND ONE1.FLDCOD='common.member.membershipStatus'\n AND ONE1.FLDVAL=PFLMST.MEMSHPSTA )MEMSHPSTA,\n INDINF.CMPNAM EMPNAM,\n INDINF.PRFADR,\n INDINF.GVNNAM GVNNAM,\n INDINF.FAMNAM FAMNAM,\n INDINF.MEMDOB MEMDOB\n FROM PRGMEMACCMST ACCMST\n JOIN EAIMEMPFLMST PFLMST\n ON ACCMST.CMPCOD = PFLMST.CMPCOD\n AND ACCMST.MEMSHPNUM = PFLMST.MEMSHPNUM\n JOIN EAICUSPFLCNTINF CNTINF\n ON CNTINF.CMPCOD = PFLMST.CMPCOD\n AND CNTINF.CUSNUM = PFLMST.CUSNUM\n LEFT JOIN EAICUSPFLINDINF INDINF\n ON INDINF.CMPCOD = PFLMST.CMPCOD\n AND INDINF.CUSNUM = PFLMST.CUSNUM\n WHERE ACCMST.CMPCOD= 'SA'\n AND UPPER(INDINF.FAMNAM) LIKE 'PRICE'\n || '%'\n ORDER BY UPPER(INDINF.GVNNAM),\n UPPER(INDINF.FAMNAM),\n UPPER(INDINF.CMPNAM) \n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Mon, 5 Sep 2011 09:49:08 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "Based on my initial hunch that something resulting from all the ALTERS was\nmaking PostgreSQL planner end up with bad plans, I tried a pg_dump and\npg_restore. Now the 'bad' query comes back in 70 seconds (compared to 20\nminutes earlier) and the rewritten query still comes back in 2 seconds. So\nwe will stick with the re-written query.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-performance-issue-tp4753453p4773061.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 5 Sep 2011 20:30:12 -0700 (PDT)",
"msg_from": "Jayadevan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
}
] |
[
{
"msg_contents": "Hi all,\n\n \n\nI am running a simple query:\n\n \n\nSELECT * FROM public.\"Frame\"\n\n \n\nTime taken:\n\n35.833 ms (i.e. roughly 35 seconds)\n\n \n\nNumber of rows:\n\n121830\n\nNumber of columns:\n\n38\n\n \n\nThis is extremely slow for a database server.\n\nCan anyone help me in finding the problem?\n\nThanks,\n\nKOtto\n\n \n\nClient: pgAdmin III\n\n \n\nInformation:\n\n \n\nTable definition for \"Frame\":\n\nCREATE TABLE \"Frame\"\n\n(\n\n \"ID\" bigint NOT NULL,\n\n \"Series.ID\" bigint NOT NULL,\n\n filename text NOT NULL,\n\n \"Frame UID\" text NOT NULL,\n\n \"Instance Number\" integer,\n\n \"Image Type\" text,\n\n \"Scanning Sequence\" text,\n\n \"Sequence Variant\" text,\n\n \"Scan Options\" text,\n\n \"MR Acquisition Type\" text,\n\n \"Sequence Name\" text,\n\n \"Angio Flag\" text,\n\n \"Repetition Time\" double precision,\n\n \"Echo Time\" double precision,\n\n \"Inversion Time\" double precision,\n\n \"Number of Averages\" double precision,\n\n \"Imaging Frequency\" double precision,\n\n \"Imaged Nucleus\" text,\n\n \"Echo Number\" text,\n\n \"Magnetic Field Strength\" double precision,\n\n \"Spacing Between Slices\" double precision,\n\n \"Number of Phase Encoding Steps\" integer,\n\n \"Echo Train Length\" integer,\n\n \"Protocol Name\" text,\n\n \"Trigger Time\" double precision,\n\n \"Nominal Interval\" integer,\n\n \"Cardiac Number of Images\" integer,\n\n \"SAR\" double precision,\n\n \"Image Position Patient\" text,\n\n \"Image Orientation Patient\" text,\n\n \"Slice Location\" double precision,\n\n \"Rows\" integer,\n\n \"Columns\" integer,\n\n \"Pixel Spacing\" text,\n\n \"Transfer Syntax UID\" text,\n\n \"SOP Instance UID\" text,\n\n \"Temporal Position Identifier\" integer,\n\n \"Number Of Temporal Positions\" integer,\n\n CONSTRAINT \"Frame_pkey\" PRIMARY KEY (\"ID\"),\n\n CONSTRAINT \"Frame_ID_key\" UNIQUE (\"ID\")\n\n)\n\nWITH (\n\n OIDS=FALSE\n\n);\n\nALTER TABLE \"Frame\" OWNER TO \"MDDBClient\";\n\nGRANT ALL ON TABLE \"Frame\" TO \"MDDBClient\";\n\nGRANT ALL ON TABLE \"Frame\" TO public;\n\n \n\nPostGreSQL : 9.0\n\nHistory: Query has always been slow\n\nHardware: Win 7 enterprise 64bit with SP1, 3.0GB RAM, Intel Xeon 3050 @\n2.13Ghz dual, 500GB HD (WD5000AAKS).\n\n \n\nExplain:\n\n\"Seq Scan on \"Frame\" (cost=0.00..9537.30 rows=121830 width=541) (actual\ntime=0.047..93.318 rows=121830 loops=1)\"\n\n\"Total runtime: 100.686 ms\"\n\n \n\nAuto Vacuum: Vacuum just performed.\n\n \n\nGUC:\n\n\"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\"\n\n\"bytea_output\";\"escape\"\n\n\"client_encoding\";\"UNICODE\"\n\n\"effective_cache_size\";\"2GB\"\n\n\"lc_collate\";\"English_United States.1252\"\n\n\"lc_ctype\";\"English_United States.1252\"\n\n\"listen_addresses\";\"*\"\n\n\"log_destination\";\"stderr\"\n\n\"log_line_prefix\";\"%t \"\n\n\"logging_collector\";\"on\"\n\n\"max_connections\";\"100\"\n\n\"max_stack_depth\";\"2MB\"\n\n\"port\";\"5432\"\n\n\"server_encoding\";\"UTF8\"\n\n\"shared_buffers\";\"32MB\"\n\n\"TimeZone\";\"CET\"\n\n\"work_mem\";\"16MB\"\n\n\nHi all, I am running a simple query: SELECT * FROM public.“Frame” Time taken:35.833 ms (i.e. 
roughly 35 seconds) Number of rows:121830Number of columns:38 This is extremely slow for a database server.Can anyone help me in finding the problem?Thanks,KOtto Client: pgAdmin III Information: Table definition for “Frame”:CREATE TABLE \"Frame\"( \"ID\" bigint NOT NULL, \"Series.ID\" bigint NOT NULL, filename text NOT NULL, \"Frame UID\" text NOT NULL, \"Instance Number\" integer, \"Image Type\" text, \"Scanning Sequence\" text, \"Sequence Variant\" text, \"Scan Options\" text, \"MR Acquisition Type\" text, \"Sequence Name\" text, \"Angio Flag\" text, \"Repetition Time\" double precision, \"Echo Time\" double precision, \"Inversion Time\" double precision, \"Number of Averages\" double precision, \"Imaging Frequency\" double precision, \"Imaged Nucleus\" text, \"Echo Number\" text, \"Magnetic Field Strength\" double precision, \"Spacing Between Slices\" double precision, \"Number of Phase Encoding Steps\" integer, \"Echo Train Length\" integer, \"Protocol Name\" text, \"Trigger Time\" double precision, \"Nominal Interval\" integer, \"Cardiac Number of Images\" integer, \"SAR\" double precision, \"Image Position Patient\" text, \"Image Orientation Patient\" text, \"Slice Location\" double precision, \"Rows\" integer, \"Columns\" integer, \"Pixel Spacing\" text, \"Transfer Syntax UID\" text, \"SOP Instance UID\" text, \"Temporal Position Identifier\" integer, \"Number Of Temporal Positions\" integer, CONSTRAINT \"Frame_pkey\" PRIMARY KEY (\"ID\"), CONSTRAINT \"Frame_ID_key\" UNIQUE (\"ID\"))WITH ( OIDS=FALSE);ALTER TABLE \"Frame\" OWNER TO \"MDDBClient\";GRANT ALL ON TABLE \"Frame\" TO \"MDDBClient\";GRANT ALL ON TABLE \"Frame\" TO public; PostGreSQL : 9.0History: Query has always been slowHardware: Win 7 enterprise 64bit with SP1, 3.0GB RAM, Intel Xeon 3050 @ 2.13Ghz dual, 500GB HD (WD5000AAKS). Explain:\"Seq Scan on \"Frame\" (cost=0.00..9537.30 rows=121830 width=541) (actual time=0.047..93.318 rows=121830 loops=1)\"\"Total runtime: 100.686 ms\" Auto Vacuum: Vacuum just performed. GUC:\"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\"\"bytea_output\";\"escape\"\"client_encoding\";\"UNICODE\"\"effective_cache_size\";\"2GB\"\"lc_collate\";\"English_United States.1252\"\"lc_ctype\";\"English_United States.1252\"\"listen_addresses\";\"*\"\"log_destination\";\"stderr\"\"log_line_prefix\";\"%t \"\"logging_collector\";\"on\"\"max_connections\";\"100\"\"max_stack_depth\";\"2MB\"\"port\";\"5432\"\"server_encoding\";\"UTF8\"\"shared_buffers\";\"32MB\"\"TimeZone\";\"CET\"\"work_mem\";\"16MB\"",
"msg_date": "Wed, 31 Aug 2011 15:04:53 +0200",
"msg_from": "\"Kai Otto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow performance"
},
{
"msg_contents": "When you ran it, did it really feel like 30 seconds? Or did it come \nright back real quick?\n\nBecause your report says:\n\n > 35.833 ms\n\nThats ms, or milliseconds, or 0.035 seconds.\n\n-Andy\n\n\nOn 8/31/2011 8:04 AM, Kai Otto wrote:\n> Hi all,\n>\n> I am running a simple query:\n>\n> SELECT * FROM public.�Frame�\n>\n> Time taken:\n>\n> 35.833 ms (i.e. roughly 35 seconds)\n>\n> Number of rows:\n>\n> 121830\n>\n> Number of columns:\n>\n> 38\n>\n> *This is extremely slow for a database server.*\n>\n> *Can anyone help me in finding the problem?*\n>\n> Thanks,\n>\n> KOtto\n>\n> *Client:* pgAdmin III\n>\n> *_Information:_*\n>\n> *Table definition for �Frame�:*\n>\n> CREATE TABLE \"Frame\"\n>\n> (\n>\n> \"ID\" bigint NOT NULL,\n>\n> \"Series.ID\" bigint NOT NULL,\n>\n> filename text NOT NULL,\n>\n> \"Frame UID\" text NOT NULL,\n>\n> \"Instance Number\" integer,\n>\n> \"Image Type\" text,\n>\n> \"Scanning Sequence\" text,\n>\n> \"Sequence Variant\" text,\n>\n> \"Scan Options\" text,\n>\n> \"MR Acquisition Type\" text,\n>\n> \"Sequence Name\" text,\n>\n> \"Angio Flag\" text,\n>\n> \"Repetition Time\" double precision,\n>\n> \"Echo Time\" double precision,\n>\n> \"Inversion Time\" double precision,\n>\n> \"Number of Averages\" double precision,\n>\n> \"Imaging Frequency\" double precision,\n>\n> \"Imaged Nucleus\" text,\n>\n> \"Echo Number\" text,\n>\n> \"Magnetic Field Strength\" double precision,\n>\n> \"Spacing Between Slices\" double precision,\n>\n> \"Number of Phase Encoding Steps\" integer,\n>\n> \"Echo Train Length\" integer,\n>\n> \"Protocol Name\" text,\n>\n> \"Trigger Time\" double precision,\n>\n> \"Nominal Interval\" integer,\n>\n> \"Cardiac Number of Images\" integer,\n>\n> \"SAR\" double precision,\n>\n> \"Image Position Patient\" text,\n>\n> \"Image Orientation Patient\" text,\n>\n> \"Slice Location\" double precision,\n>\n> \"Rows\" integer,\n>\n> \"Columns\" integer,\n>\n> \"Pixel Spacing\" text,\n>\n> \"Transfer Syntax UID\" text,\n>\n> \"SOP Instance UID\" text,\n>\n> \"Temporal Position Identifier\" integer,\n>\n> \"Number Of Temporal Positions\" integer,\n>\n> CONSTRAINT \"Frame_pkey\" PRIMARY KEY (\"ID\"),\n>\n> CONSTRAINT \"Frame_ID_key\" UNIQUE (\"ID\")\n>\n> )\n>\n> WITH (\n>\n> OIDS=FALSE\n>\n> );\n>\n> ALTER TABLE \"Frame\" OWNER TO \"MDDBClient\";\n>\n> GRANT ALL ON TABLE \"Frame\" TO \"MDDBClient\";\n>\n> GRANT ALL ON TABLE \"Frame\" TO public;\n>\n> *PostGreSQL :* 9.0\n>\n> *History:* Query has always been slow\n>\n> *Hardware: *Win 7 enterprise 64bit with SP1, 3.0GB RAM, Intel Xeon 3050\n> @ 2.13Ghz dual, 500GB HD (/WD5000AAKS/).\n>\n> *Explain:*\n>\n> \"Seq Scan on \"Frame\" (cost=0.00..9537.30 rows=121830 width=541) (actual\n> time=0.047..93.318 rows=121830 loops=1)\"\n>\n> \"Total runtime: 100.686 ms\"\n>\n> *Auto Vacuum: *Vacuum just performed.\n>\n> **\n>\n> *GUC:*\n>\n> \"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\"\n>\n> \"bytea_output\";\"escape\"\n>\n> \"client_encoding\";\"UNICODE\"\n>\n> \"effective_cache_size\";\"2GB\"\n>\n> \"lc_collate\";\"English_United States.1252\"\n>\n> \"lc_ctype\";\"English_United States.1252\"\n>\n> \"listen_addresses\";\"*\"\n>\n> \"log_destination\";\"stderr\"\n>\n> \"log_line_prefix\";\"%t \"\n>\n> \"logging_collector\";\"on\"\n>\n> \"max_connections\";\"100\"\n>\n> \"max_stack_depth\";\"2MB\"\n>\n> \"port\";\"5432\"\n>\n> \"server_encoding\";\"UTF8\"\n>\n> \"shared_buffers\";\"32MB\"\n>\n> \"TimeZone\";\"CET\"\n>\n> \"work_mem\";\"16MB\"\n>\n\n",
"msg_date": "Wed, 31 Aug 2011 13:26:57 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance"
},
{
"msg_contents": "On August 31, 2011 11:26:57 AM Andy Colson wrote:\n> When you ran it, did it really feel like 30 seconds? Or did it come\n> right back real quick?\n> \n> Because your report says:\n> > 35.833 ms\n> \n> Thats ms, or milliseconds, or 0.035 seconds.\n> \n\nI think the \".\" is a thousands separator in some locales, possibly the reason \nfor confusion.\n",
"msg_date": "Wed, 31 Aug 2011 11:51:36 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance"
},
{
"msg_contents": "On 8/31/2011 1:51 PM, Alan Hodgson wrote:\n> On August 31, 2011 11:26:57 AM Andy Colson wrote:\n>> When you ran it, did it really feel like 30 seconds? Or did it come\n>> right back real quick?\n>>\n>> Because your report says:\n>> > 35.833 ms\n>>\n>> Thats ms, or milliseconds, or 0.035 seconds.\n>>\n>\n> I think the \".\" is a thousands separator in some locales, possibly the reason\n> for confusion.\n>\n\nD'oh. I'm zero for two today. Guess I'll call it a day.\n\n-Andy\n",
"msg_date": "Wed, 31 Aug 2011 13:56:56 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance"
},
{
"msg_contents": "\"Kai Otto\" <[email protected]> wrote:\n \n> Time taken:\n> \n> 35.833 ms (i.e. roughly 35 seconds)\n \nWhich is it? 35 ms or 35 seconds?\n \n> Number of rows:\n> \n> 121830\n> \n> Number of columns:\n> \n> 38\n \n> This is extremely slow for a database server.\n> \n> Can anyone help me in finding the problem?\n \n> \"Seq Scan on \"Frame\" (cost=0.00..9537.30 rows=121830 width=541)\n> (actual time=0.047..93.318 rows=121830 loops=1)\"\n> \n> \"Total runtime: 100.686 ms\"\n \nAssuming 35 seconds for the 121 K rows, it would seem that you're\ntaking less than 1 ms per row on the database server, which may not\nbe too bad, depending on how many of them are read from disk. The\nrest of the time would seem to be in the network and the client. \nThat's where you need to fix something if you want it to be faster.\n \nWith only a fraction of 1% of the run time being on the database\nserver, any attempt to tune things there can't improve performance\nby more than that fraction of a percent.\n \n-Kevin\n",
"msg_date": "Wed, 31 Aug 2011 13:58:57 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance"
},
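One way to separate server-side execution time from network transfer and client rendering, along the lines described above, is to compare two timings in psql (a sketch; the table name is the one from the original post):

    \timing on
    EXPLAIN ANALYZE SELECT * FROM public."Frame";  -- executes on the server, ships no rows to the client
    SELECT * FROM public."Frame";                  -- adds network transfer and client-side rendering

If the first stays around 100 ms while the second takes tens of seconds, the time is being spent outside the database server.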
{
"msg_contents": "On August 31, 2011 11:56:56 AM Andy Colson wrote:\n> On 8/31/2011 1:51 PM, Alan Hodgson wrote:\n> > On August 31, 2011 11:26:57 AM Andy Colson wrote:\n> >> When you ran it, did it really feel like 30 seconds? Or did it come\n> >> right back real quick?\n> >> \n> >> Because your report says:\n> >> > 35.833 ms\n> >> \n> >> Thats ms, or milliseconds, or 0.035 seconds.\n> > \n> > I think the \".\" is a thousands separator in some locales, possibly the\n> > reason for confusion.\n> \n> D'oh. I'm zero for two today. Guess I'll call it a day.\n> \n\nOh, no, you were right. I think the original poster is European, though and \nmisread the output.\n",
"msg_date": "Wed, 31 Aug 2011 13:05:16 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance"
},
{
"msg_contents": "Hi all,\n\nThanks for the replies and sorry for the late response, I have been away\nfor a few days.\n\nConcerning the performance: 1 ms per row seems slow knowing that the\nentire database is less then 64MB and therefore should easily fit into\nmemory and the client (pgAdmin III) runs on the server.\n\nI am going to test another database to check the performance of the\nhardware.\n\n-Kai\n\n> -----Original Message-----\n> From: Kevin Grittner [mailto:[email protected]]\n> Sent: Wednesday, August 31, 2011 8:59 PM\n> To: Kai Otto; [email protected]\n> Subject: Re: [PERFORM] Slow performance\n> \n> \"Kai Otto\" <[email protected]> wrote:\n> \n> > Time taken:\n> >\n> > 35.833 ms (i.e. roughly 35 seconds)\n> \n> Which is it? 35 ms or 35 seconds?\n> \n> > Number of rows:\n> >\n> > 121830\n> >\n> > Number of columns:\n> >\n> > 38\n> \n> > This is extremely slow for a database server.\n> >\n> > Can anyone help me in finding the problem?\n> \n> > \"Seq Scan on \"Frame\" (cost=0.00..9537.30 rows=121830 width=541)\n> > (actual time=0.047..93.318 rows=121830 loops=1)\"\n> >\n> > \"Total runtime: 100.686 ms\"\n> \n> Assuming 35 seconds for the 121 K rows, it would seem that you're\n> taking less than 1 ms per row on the database server, which may not\n> be too bad, depending on how many of them are read from disk. The\n> rest of the time would seem to be in the network and the client.\n> That's where you need to fix something if you want it to be faster.\n> \n> With only a fraction of 1% of the run time being on the database\n> server, any attempt to tune things there can't improve performance\n> by more than that fraction of a percent.\n> \n> -Kevin\n",
"msg_date": "Mon, 5 Sep 2011 11:21:35 +0200",
"msg_from": "\"Kai Otto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow performance"
}
] |
[
{
"msg_contents": "I'm running a labour-intensive series of queries on a medium-sized dataset (~100,000 rows) with geometry objects and both gist and btree indices.\n\nThe queries are embedded in plpgsql, and have multiple updates, inserts and deletes to the tables as well as multiple selects which require the indices to function correctly for any kind of performance.\n\nMy problem is that I can't embed a vacuum analyze to reset the indices and speed up processing, and the queries get slower and slower as the un-freed space builds up.\n\n From my understanding, transaction commits within batches are not allowed (so no vacuum embedded within queries). Are there plans to change this? Is there a way to reclaim dead space for tables that have repeated inserts, updates and deletes on them? I have tried a simple analyze, and this does not quite cut it. I'm getting seq-scans after the first round of processing instead of hitting the index correctly.\n\nMy apologies if this is directed at the wrong forum, and thank you for your help.\n\n-cris pond\n\n",
"msg_date": "Fri, 2 Sep 2011 17:25:00 -0700 (PDT)",
"msg_from": "C Pond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Embedded VACUUM"
},
{
"msg_contents": "On 3/09/2011 8:25 AM, C Pond wrote:\n> I'm running a labour-intensive series of queries on a medium-sized dataset (~100,000 rows) with geometry objects and both gist and btree indices.\n>\n> The queries are embedded in plpgsql, and have multiple updates, inserts and deletes to the tables as well as multiple selects which require the indices to function correctly for any kind of performance.\n>\n> My problem is that I can't embed a vacuum analyze to reset the indices and speed up processing, and the queries get slower and slower as the un-freed space builds up.\n>\n> From my understanding, transaction commits within batches are not allowed (so no vacuum embedded within queries). Are there plans to change this? Is there a way to reclaim dead space for tables that have repeated inserts, updates and deletes on them?\nNot, AFAIK, until the transaction doing the deletes/updates commits and \nso do any older SERIALIZABLE transactions as well as any older running \nREAD COMMITTED statements.\n\nThis is one of the areas where Pg's lack of true stored procedures bites \nyou. You'll need to do the work via an out-of-process helper over a \nregular connection, or do your work via dblink to achieve the same effect.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 05 Sep 2011 17:26:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Embedded VACUUM"
}
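A minimal sketch of the dblink route mentioned above, assuming the dblink contrib module is installed; the connection string, connection name and table name are placeholders. The maintenance command runs on its own connection and therefore in its own transaction, but it can only reclaim rows whose deleting or updating transactions have already committed:

    SELECT dblink_connect('maint', 'dbname=mydb');
    SELECT dblink_exec('maint', 'VACUUM ANALYZE my_geom_table');
    SELECT dblink_disconnect('maint');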
] |
[
{
"msg_contents": "Dear list,\n\nwe are encountering serious performance problems with our database. \nQueries which took around 100ms or less last week now take several seconds.\n\nThe database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on \nhardware as follows:\n8-core Intel Xeon CPU with 2.83GHz\n48 GB RAM\nRAID 5 with 8 SAS disks\nPostgreSQL 8.4.8 (installed from the Ubuntu repository).\n\nAdditionally to the DB the machine also hosts a few virtual machines. In \nthe past everything worked very well and the described problem occurs \njust out of the blue. We don't know of any postgresql config changes or \nanything else which might explain the performance reduction.\nWe have a number of DBs running in the cluster, and the problem seems to \naffect all of them.\n\nWe checked the performance of the RAID .. which is reasonable for eg. \n\"hdparm -tT\". Memory is well used, but not swapping.\nvmstat shows, that the machine isn't using the swap and the load \nshouldn't be also to high:\n root@host:~# vmstat\n procs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 0 0 0 308024 884812 40512932 0 0 464 168 353 92 \n4 2 84 9\n\nBonnie++ results given below, I am no expert at interpreting those :-)\n\n\nActivating log_min_duration shows for instance this query --- there are \nnow constantly queries which take absurdely long.\n\n2011-09-02 22:38:18 CEST LOG: Dauer: 25520.374 ms Anweisung: SELECT \nkeyword_id FROM keywords.table_x WHERE keyword=E'diplomaten'\n\ndb=# explain analyze SELECT keyword_id FROM keywords.table_x WHERE \nkeyword=E'diplomaten';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_table_x_keyword on table_x (cost=0.00..8.29 \nrows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n Index Cond: ((keyword)::text = 'diplomaten'::text)\n Total runtime: 0.087 ms\n(3 Zeilen)\n\ndb=# \\d keywords.table_x\n Tabelle �keywords.table_x�\n Spalte | Typ \n| Attribute\n------------+-------------------+------------------------------------------------------------------------------------------------------\n keyword_id | integer | not null Vorgabewert \nnextval('keywords.table_x_keyword_id_seq'::regclass)\n keyword | character varying |\n so | double precision |\nIndexe:\n \"table_x_pkey\" PRIMARY KEY, btree (keyword_id) CLUSTER\n \"idx_table_x_keyword\" btree (keyword)\nFremdschl�sselverweise von:\n TABLE \"keywords.table_x_has\" CONSTRAINT \n\"table_x_has_keyword_id_fkey\" FOREIGN KEY (keyword_id) REFERENCES \nkeywords.table_x(keyword_id) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\n\nCould you be so kind and give us any advice how to track down the \nproblem or comment on possible reasons???\n\nThank you very much in advance!!!\n\nRegards,\n heinz + gerhard\n\n\n\n\n\n name \n| current_setting\n----------------------------+-------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, \ncompiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n archive_command | /usr/local/sbin/weblyzard-wal-archiver.sh \n%p %f\n archive_mode | on\n checkpoint_segments | 192\n effective_cache_size | 25000MB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n full_page_writes | on\n geqo | on\n lc_collate | de_AT.UTF-8\n lc_ctype | 
de_AT.UTF-8\n listen_addresses | *\n log_line_prefix | %t\n log_min_duration_statement | 3s\n maintenance_work_mem | 500MB\n max_connections | 250\n max_stack_depth | 2MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 7000MB\n ssl | on\n TimeZone | localtime\n unix_socket_directory | /var/run/postgresql\n work_mem | 256MB\n\n\nResults of Bonnie++\n\nVersion 1.96 ------Sequential Output------ --Sequential Input- \n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\nvoyager 95G 1400 93 27804 3 16324 2 2925 96 41636 3 \n374.9 4\nLatency 7576us 233s 164s 15647us 13120ms \n3302ms\nVersion 1.96 ------Sequential Create------ --------Random \nCreate--------\nvoyager -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\n 16 141 0 +++++ +++ 146 0 157 0 +++++ +++ \n172 0\nLatency 1020ms 128us 9148ms 598ms 37us \n485ms\n1.96,1.96,voyager,1,1314988752,95G,,1400,93,27804,3,16324,2,2925,96,41636,3,374.9,4,16,,,,,141,0,+++++,+++,146,0,157,0,+++++,+++,172,0,7576us,233s,164s,15647us,13120ms,3302ms,1020ms,128us,9148ms,598ms,37us,485ms\n\n\n\n\n\n",
"msg_date": "Sat, 03 Sep 2011 09:26:44 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sudden drop in DBb performance"
},
{
"msg_contents": "Hi.\nAutoexplain module allow to log plans and statistics of live queries. Try it.\n\n2011/9/3, Gerhard Wohlgenannt <[email protected]>:\n> Dear list,\n>\n> we are encountering serious performance problems with our database.\n> Queries which took around 100ms or less last week now take several seconds.\n>\n> The database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on\n> hardware as follows:\n> 8-core Intel Xeon CPU with 2.83GHz\n> 48 GB RAM\n> RAID 5 with 8 SAS disks\n> PostgreSQL 8.4.8 (installed from the Ubuntu repository).\n>\n> Additionally to the DB the machine also hosts a few virtual machines. In\n> the past everything worked very well and the described problem occurs\n> just out of the blue. We don't know of any postgresql config changes or\n> anything else which might explain the performance reduction.\n> We have a number of DBs running in the cluster, and the problem seems to\n> affect all of them.\n>\n> We checked the performance of the RAID .. which is reasonable for eg.\n> \"hdparm -tT\". Memory is well used, but not swapping.\n> vmstat shows, that the machine isn't using the swap and the load\n> shouldn't be also to high:\n> root@host:~# vmstat\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us\n> sy id wa\n> 0 0 0 308024 884812 40512932 0 0 464 168 353 92\n> 4 2 84 9\n>\n> Bonnie++ results given below, I am no expert at interpreting those :-)\n>\n>\n> Activating log_min_duration shows for instance this query --- there are\n> now constantly queries which take absurdely long.\n>\n> 2011-09-02 22:38:18 CEST LOG: Dauer: 25520.374 ms Anweisung: SELECT\n> keyword_id FROM keywords.table_x WHERE keyword=E'diplomaten'\n>\n> db=# explain analyze SELECT keyword_id FROM keywords.table_x WHERE\n> keyword=E'diplomaten';\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_table_x_keyword on table_x (cost=0.00..8.29\n> rows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n> Index Cond: ((keyword)::text = 'diplomaten'::text)\n> Total runtime: 0.087 ms\n> (3 Zeilen)\n>\n> db=# \\d keywords.table_x\n> Tabelle »keywords.table_x«\n> Spalte | Typ\n> | Attribute\n> ------------+-------------------+------------------------------------------------------------------------------------------------------\n> keyword_id | integer | not null Vorgabewert\n> nextval('keywords.table_x_keyword_id_seq'::regclass)\n> keyword | character varying |\n> so | double precision |\n> Indexe:\n> \"table_x_pkey\" PRIMARY KEY, btree (keyword_id) CLUSTER\n> \"idx_table_x_keyword\" btree (keyword)\n> Fremdschlüsselverweise von:\n> TABLE \"keywords.table_x_has\" CONSTRAINT\n> \"table_x_has_keyword_id_fkey\" FOREIGN KEY (keyword_id) REFERENCES\n> keywords.table_x(keyword_id) ON UPDATE CASCADE ON DELETE CASCADE\n>\n>\n>\n>\n> Could you be so kind and give us any advice how to track down the\n> problem or comment on possible reasons???\n>\n> Thank you very much in advance!!!\n>\n> Regards,\n> heinz + gerhard\n>\n>\n>\n>\n>\n> name\n> | current_setting\n> ----------------------------+-------------------------------------------------------------------------------------------------------------\n> version | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu,\n> compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n> archive_command | 
/usr/local/sbin/weblyzard-wal-archiver.sh\n> %p %f\n> archive_mode | on\n> checkpoint_segments | 192\n> effective_cache_size | 25000MB\n> external_pid_file | /var/run/postgresql/8.4-main.pid\n> full_page_writes | on\n> geqo | on\n> lc_collate | de_AT.UTF-8\n> lc_ctype | de_AT.UTF-8\n> listen_addresses | *\n> log_line_prefix | %t\n> log_min_duration_statement | 3s\n> maintenance_work_mem | 500MB\n> max_connections | 250\n> max_stack_depth | 2MB\n> port | 5432\n> server_encoding | UTF8\n> shared_buffers | 7000MB\n> ssl | on\n> TimeZone | localtime\n> unix_socket_directory | /var/run/postgresql\n> work_mem | 256MB\n>\n>\n> Results of Bonnie++\n>\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> voyager 95G 1400 93 27804 3 16324 2 2925 96 41636 3\n> 374.9 4\n> Latency 7576us 233s 164s 15647us 13120ms\n> 3302ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> voyager -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 141 0 +++++ +++ 146 0 157 0 +++++ +++\n> 172 0\n> Latency 1020ms 128us 9148ms 598ms 37us\n> 485ms\n> 1.96,1.96,voyager,1,1314988752,95G,,1400,93,27804,3,16324,2,2925,96,41636,3,374.9,4,16,,,,,141,0,+++++,+++,146,0,157,0,+++++,+++,172,0,7576us,233s,164s,15647us,13120ms,3302ms,1020ms,128us,9148ms,598ms,37us,485ms\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Mon, 5 Sep 2011 09:27:52 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
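A minimal sketch of how the auto_explain module mentioned above is typically enabled for a single superuser session on 8.4; the threshold matches the 3s log_min_duration_statement already in use and is only an example:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '3s';
    SET auto_explain.log_analyze = true;

To capture plans from all backends it can instead be preloaded through shared_preload_libraries in postgresql.conf.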
{
"msg_contents": "On 3 Září 2011, 9:26, Gerhard Wohlgenannt wrote:\n> Dear list,\n>\n> we are encountering serious performance problems with our database.\n> Queries which took around 100ms or less last week now take several\n> seconds.\n>\n> The database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on\n> hardware as follows:\n> 8-core Intel Xeon CPU with 2.83GHz\n> 48 GB RAM\n> RAID 5 with 8 SAS disks\n> PostgreSQL 8.4.8 (installed from the Ubuntu repository).\n>\n> Additionally to the DB the machine also hosts a few virtual machines. In\n> the past everything worked very well and the described problem occurs\n> just out of the blue. We don't know of any postgresql config changes or\n> anything else which might explain the performance reduction.\n> We have a number of DBs running in the cluster, and the problem seems to\n> affect all of them.\n\nWhat are the virtual machines doing? Are you sure they are not doing a lot\nof IO?\n\n>\n> We checked the performance of the RAID .. which is reasonable for eg.\n> \"hdparm -tT\". Memory is well used, but not swapping.\n> vmstat shows, that the machine isn't using the swap and the load\n> shouldn't be also to high:\n> root@host:~# vmstat\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us\n> sy id wa\n> 0 0 0 308024 884812 40512932 0 0 464 168 353 92\n> 4 2 84 9\n>\n> Bonnie++ results given below, I am no expert at interpreting those :-)\n>\n>\n> Activating log_min_duration shows for instance this query --- there are\n> now constantly queries which take absurdely long.\n>\n> 2011-09-02 22:38:18 CEST LOG: Dauer: 25520.374 ms Anweisung: SELECT\n> keyword_id FROM keywords.table_x WHERE keyword=E'diplomaten'\n>\n> db=# explain analyze SELECT keyword_id FROM keywords.table_x WHERE\n> keyword=E'diplomaten';\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_table_x_keyword on table_x (cost=0.00..8.29\n> rows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n> Index Cond: ((keyword)::text = 'diplomaten'::text)\n> Total runtime: 0.087 ms\n> (3 Zeilen)\n>\n> db=# \\d keywords.table_x\n> Tabelle »keywords.table_x«\n> Spalte | Typ\n> | Attribute\n> ------------+-------------------+------------------------------------------------------------------------------------------------------\n> keyword_id | integer | not null Vorgabewert\n> nextval('keywords.table_x_keyword_id_seq'::regclass)\n> keyword | character varying |\n> so | double precision |\n> Indexe:\n> \"table_x_pkey\" PRIMARY KEY, btree (keyword_id) CLUSTER\n> \"idx_table_x_keyword\" btree (keyword)\n> Fremdschlüsselverweise von:\n> TABLE \"keywords.table_x_has\" CONSTRAINT\n> \"table_x_has_keyword_id_fkey\" FOREIGN KEY (keyword_id) REFERENCES\n> keywords.table_x(keyword_id) ON UPDATE CASCADE ON DELETE CASCADE\n\nBut in this explain analyze, the query finished in 41 ms. 
Use auto-explain\ncontrib module to see the explain plan of the slow execution.\n\n> Could you be so kind and give us any advice how to track down the\n> problem or comment on possible reasons???\n\nOne of the things\n\n>\n> Thank you very much in advance!!!\n>\n> Regards,\n> heinz + gerhard\n>\n>\n>\n>\n>\n> name\n> | current_setting\n> ----------------------------+-------------------------------------------------------------------------------------------------------------\n> version | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu,\n> compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n> archive_command | /usr/local/sbin/weblyzard-wal-archiver.sh\n> %p %f\n> archive_mode | on\n> checkpoint_segments | 192\n> effective_cache_size | 25000MB\n> external_pid_file | /var/run/postgresql/8.4-main.pid\n> full_page_writes | on\n> geqo | on\n> lc_collate | de_AT.UTF-8\n> lc_ctype | de_AT.UTF-8\n> listen_addresses | *\n> log_line_prefix | %t\n> log_min_duration_statement | 3s\n> maintenance_work_mem | 500MB\n> max_connections | 250\n> max_stack_depth | 2MB\n> port | 5432\n> server_encoding | UTF8\n> shared_buffers | 7000MB\n> ssl | on\n> TimeZone | localtime\n> unix_socket_directory | /var/run/postgresql\n> work_mem | 256MB\n>\n>\n> Results of Bonnie++\n>\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> voyager 95G 1400 93 27804 3 16324 2 2925 96 41636 3\n> 374.9 4\n> Latency 7576us 233s 164s 15647us 13120ms\n> 3302ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> voyager -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 141 0 +++++ +++ 146 0 157 0 +++++ +++\n> 172 0\n> Latency 1020ms 128us 9148ms 598ms 37us\n> 485ms\n>\n\nThat seems a bit slow ... 27MB/s for writes and 41MB/s forreads is ait\nslow with 8 drives.\n\nTomas\n\n",
"msg_date": "Mon, 5 Sep 2011 09:48:45 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 09/05/2011 02:48 AM, Tomas Vondra wrote:\n> On 3 Září 2011, 9:26, Gerhard Wohlgenannt wrote:\n>> Dear list,\n>>\n>> we are encountering serious performance problems with our database.\n>> Queries which took around 100ms or less last week now take several\n>> seconds.\n\n>> Results of Bonnie++\n>>\n>> Version 1.96 ------Sequential Output------ --Sequential Input-\n>> --Random-\n>> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP\n>> voyager 95G 1400 93 27804 3 16324 2 2925 96 41636 3\n>> 374.9 4\n>> Latency 7576us 233s 164s 15647us 13120ms\n>> 3302ms\n>> Version 1.96 ------Sequential Create------ --------Random\n>> Create--------\n>> voyager -Create-- --Read--- -Delete-- -Create-- --Read---\n>> -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> /sec %CP\n>> 16 141 0 +++++ +++ 146 0 157 0 +++++ +++\n>> 172 0\n>> Latency 1020ms 128us 9148ms 598ms 37us\n>> 485ms\n>>\n>\n> That seems a bit slow ... 27MB/s for writes and 41MB/s forreads is ait\n> slow with 8 drives.\n>\n> Tomas\n>\n>\n\nAgreed, that's really slow. A single SATA drive will get 60 MB/s. Did you run Bonnie while the VM's were up and running?\n\n root@host:~# vmstat\n procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 0 0 308024 884812 40512932 0 0 464 168 353 92 4 2 84 9\n\n\nOnly one line? That does not help much. Can you run it as 'vmstat 2' and let it run while a few slow queries are performed? Then paste all the lines?\n\n\n-Andy\n",
"msg_date": "Mon, 05 Sep 2011 08:51:15 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 09/05/2011 03:51 PM, Andy Colson wrote:\n> On 09/05/2011 02:48 AM, Tomas Vondra wrote:\n>> On 3 Září 2011, 9:26, Gerhard Wohlgenannt wrote:\n>>> Dear list,\n>>>\n>>> we are encountering serious performance problems with our database.\n>>> Queries which took around 100ms or less last week now take several\n>>> seconds.\n>\n>>> Results of Bonnie++\n>>>\n>>> Version 1.96 ------Sequential Output------ --Sequential Input-\n>>> --Random-\n>>> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>>> --Seeks--\n>>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>>> /sec %CP\n>>> voyager 95G 1400 93 27804 3 16324 2 2925 96 41636 3\n>>> 374.9 4\n>>> Latency 7576us 233s 164s 15647us 13120ms\n>>> 3302ms\n>>> Version 1.96 ------Sequential Create------ --------Random\n>>> Create--------\n>>> voyager -Create-- --Read--- -Delete-- -Create-- --Read---\n>>> -Delete--\n>>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>>> /sec %CP\n>>> 16 141 0 +++++ +++ 146 0 157 0 +++++ +++\n>>> 172 0\n>>> Latency 1020ms 128us 9148ms 598ms 37us\n>>> 485ms\n>>>\n>>\n>> That seems a bit slow ... 27MB/s for writes and 41MB/s forreads is ait\n>> slow with 8 drives.\n>>\n>> Tomas\n>>\n>>\n>\n> Agreed, that's really slow. A single SATA drive will get 60 MB/s. \n> Did you run Bonnie while the VM's were up and running?\n>\n> root@host:~# vmstat\n> procs -----------memory---------- ---swap-- -----io---- -system-- \n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us \n> sy id wa\n> 0 0 0 308024 884812 40512932 0 0 464 168 353 92 \n> 4 2 84 9\n>\n>\n> Only one line? That does not help much. Can you run it as 'vmstat 2' \n> and let it run while a few slow queries are performed? Then paste all \n> the lines?\n>\n>\n> -Andy\n\nHi Andy,\n\nthanks a lot for your help.\n\nBelow please find the results of vmstat 2 over some periode of time .. 
\nwith normal database / system load.\n\n 0 0 1344332 237196 104140 31468412 0 0 330 102 4322 7130 \n4 2 90 4\n 1 1 1344332 236708 104144 31469000 0 0 322 105 2096 3723 \n1 2 92 5\n 2 1 1344204 240924 104156 31462484 350 0 1906 234 3687 4512 \n12 3 77 9\n 0 0 1344200 238372 104168 31462452 0 0 8 109 4050 8376 \n8 3 86 3\n 0 0 1344200 232668 104168 31462468 0 0 12 158 2036 3633 \n2 2 92 3\n 0 3 1344196 282784 104180 31413384 4 0 1768 343 2490 4391 \n1 2 84 13\n 1 1 1344196 278188 104192 31416080 0 0 1392 341 2215 3850 \n1 2 82 15\n 0 0 1344120 276964 104608 31416904 90 0 634 304 2390 3949 \n4 2 86 8\n 1 1 1344120 277096 104628 31417752 0 0 492 378 2394 3866 \n2 1 87 10\n 0 1 1344120 274476 104628 31418620 0 0 260 233 1997 3255 \n2 1 91 6\n 1 1 1344120 276584 104628 31418808 0 0 128 208 2015 3266 \n2 1 91 6\n 0 0 1343672 272352 106288 31418788 694 0 1346 344 2170 3660 \n3 1 89 6\n 0 1 1343632 270220 107648 31419152 48 0 468 490 2356 3622 \n4 2 88 5\n 0 0 1343624 270708 107660 31419344 20 0 228 138 2086 3518 \n2 3 91 4\n 0 1 1343612 268732 107660 31419584 12 0 168 112 2100 3585 \n3 2 91 3\n 0 0 1343544 266616 107660 31420112 14 0 154 73 2059 3719 \n3 2 93 3\n 0 1 1343540 267368 107684 31420168 0 0 78 260 2256 3970 \n3 2 90 6\n 0 1 1343540 268352 107692 31420356 0 0 94 284 2239 4086 \n2 2 89 6\n 0 0 1343540 274064 107692 31423584 0 0 1622 301 2322 4258 \n2 3 83 13\n 0 2 1343440 273064 107704 31423696 96 0 106 180 2158 3795 \n3 2 90 5\n 0 0 1342184 262888 107708 31426040 840 0 2014 146 2309 3713 \n5 3 83 9\n 0 0 1342184 261904 107732 31426128 0 0 60 158 1893 3510 \n1 3 91 5\n 2 0 1342184 258680 107732 31427436 0 0 794 114 2160 3647 \n2 3 90 5\n 0 2 1342176 258184 107744 31428308 24 0 310 116 1943 3335 \n2 2 91 4\n 1 0 1342172 259068 107756 31428700 2 0 138 143 1976 3468 \n1 1 93 5\n 0 0 1342172 258084 107756 31429948 0 0 620 88 2117 3565 \n3 1 90 6\n 0 0 1342172 258456 107952 31430028 0 0 62 305 2174 3827 \n1 2 91 6\n 1 0 1342172 257480 107952 31430636 0 0 300 256 2316 3959 \n3 2 86 8\n 0 0 1342172 257720 107952 31430772 0 0 46 133 2411 4047 \n3 2 91 3\n 1 2 1342172 257844 107976 31430776 0 0 136 184 2111 3841 \n1 1 92 6\n 1 2 1342172 338376 107576 31349412 0 0 462 8615 2655 5508 \n5 3 79 13\n 1 2 1342172 340772 107580 31351080 0 0 682 377 2503 4022 \n2 1 87 10\n 1 2 1342172 335688 107596 31351992 0 0 548 306 2480 3867 \n4 1 86 9\n 0 2 1342168 337432 107608 31352704 0 0 224 188 1919 3158 \n1 1 93 6\n 0 0 1342168 337804 107608 31353020 0 0 154 249 1933 3175 \n1 1 92 6\n 0 1 1342168 335944 107636 31353464 0 0 212 173 1912 3280 \n4 2 89 5\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 0 0 1342168 336936 107636 31353512 0 0 14 183 1911 3426 \n2 1 93 4\n 0 1 1342168 334440 107656 31353736 0 0 264 372 2119 3400 \n6 2 84 8\n 0 0 1342164 334084 107680 31354468 0 0 302 413 2361 3613 \n2 1 87 10\n 2 0 1342160 342764 107680 31354916 8 0 184 332 2142 3117 \n1 1 90 7\n 0 1 1342160 343788 107680 31355808 0 0 360 211 2247 3249 \n1 2 91 5\n 2 1 1342156 340804 107704 31355904 0 0 88 280 2287 3448 \n2 2 90 6\n 0 1 1342156 344276 107704 31356464 0 0 316 276 2050 3298 \n1 2 90 7\n 0 0 1342156 344160 107712 31356576 0 0 4 225 1884 3194 \n1 3 90 6\n 0 0 1342152 342548 107724 31356688 0 0 52 231 1963 3232 \n1 3 89 6\n 2 1 1342152 343664 107724 31356764 0 0 104 348 2643 3614 \n3 2 88 8\n 1 1 1342144 341060 107760 31357080 16 0 120 307 2511 3474 \n4 3 87 7\n 1 0 1342140 342332 107780 31357500 8 0 206 193 2243 3448 \n4 2 89 
5\n 1 0 1342136 339472 107780 31357508 0 0 32 142 4290 3799 \n6 3 87 4\n 0 0 1342136 341160 107780 31357992 0 0 216 171 2613 3995 \n4 2 88 5\n 0 0 1342136 342168 107820 31357988 0 0 26 140 2347 3753 \n3 4 89 4\n 0 0 1342136 342532 107820 31358128 0 0 36 155 2119 3653 \n2 1 91 5\n 2 0 1342136 341564 107828 31358144 0 0 0 151 1973 3486 \n4 2 90 4\n 1 1 1342136 342076 107852 31358416 0 0 148 284 2251 3857 \n6 2 84 8\n 0 1 1342136 339944 107852 31359284 0 0 482 478 2902 5210 \n4 2 84 10\n 0 1 1342136 342184 107852 31359836 0 0 238 372 2292 4063 \n2 1 88 9\n\ncheers gerhard\n\n",
"msg_date": "Mon, 05 Sep 2011 16:08:42 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "hi,\n\nthanks a lot for your help!\n\n>> Dear list,\n>>\n>> we are encountering serious performance problems with our database.\n>> Queries which took around 100ms or less last week now take several\n>> seconds.\n>>\n>> The database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on\n>> hardware as follows:\n>> 8-core Intel Xeon CPU with 2.83GHz\n>> 48 GB RAM\n>> RAID 5 with 8 SAS disks\n>> PostgreSQL 8.4.8 (installed from the Ubuntu repository).\n>>\n>> Additionally to the DB the machine also hosts a few virtual machines. In\n>> the past everything worked very well and the described problem occurs\n>> just out of the blue. We don't know of any postgresql config changes or\n>> anything else which might explain the performance reduction.\n>> We have a number of DBs running in the cluster, and the problem seems to\n>> affect all of them.\n> What are the virtual machines doing? Are you sure they are not doing a lot\n> of IO?\n\nwe also have a ssd-disk in the machine, and the virtual machines do most \nof their IO on that. But there sure also is some amount of IO onto the \nsystems raid array. maybe we should consider having a dedicated database \nserver.\n\n>> We checked the performance of the RAID .. which is reasonable for eg.\n>> \"hdparm -tT\". Memory is well used, but not swapping.\n>> vmstat shows, that the machine isn't using the swap and the load\n>> shouldn't be also to high:\n>> root@host:~# vmstat\n>> procs -----------memory---------- ---swap-- -----io---- -system--\n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in cs us\n>> sy id wa\n>> 0 0 0 308024 884812 40512932 0 0 464 168 353 92\n>> 4 2 84 9\n>>\n>> Bonnie++ results given below, I am no expert at interpreting those :-)\n>>\n>>\n>> Activating log_min_duration shows for instance this query --- there are\n>> now constantly queries which take absurdely long.\n>>\n>> 2011-09-02 22:38:18 CEST LOG: Dauer: 25520.374 ms Anweisung: SELECT\n>> keyword_id FROM keywords.table_x WHERE keyword=E'diplomaten'\n>>\n>> db=# explain analyze SELECT keyword_id FROM keywords.table_x WHERE\n>> keyword=E'diplomaten';\n>> QUERY\n>> PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Index Scan using idx_table_x_keyword on table_x (cost=0.00..8.29\n>> rows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n>> Index Cond: ((keyword)::text = 'diplomaten'::text)\n>> Total runtime: 0.087 ms\n>> (3 Zeilen)\n>>\n>> db=# \\d keywords.table_x\n>> Tabelle »keywords.table_x«\n>> Spalte | Typ\n>> | Attribute\n>> ------------+-------------------+------------------------------------------------------------------------------------------------------\n>> keyword_id | integer | not null Vorgabewert\n>> nextval('keywords.table_x_keyword_id_seq'::regclass)\n>> keyword | character varying |\n>> so | double precision |\n>> Indexe:\n>> \"table_x_pkey\" PRIMARY KEY, btree (keyword_id) CLUSTER\n>> \"idx_table_x_keyword\" btree (keyword)\n>> Fremdschlüsselverweise von:\n>> TABLE \"keywords.table_x_has\" CONSTRAINT\n>> \"table_x_has_keyword_id_fkey\" FOREIGN KEY (keyword_id) REFERENCES\n>> keywords.table_x(keyword_id) ON UPDATE CASCADE ON DELETE CASCADE\n> But in this explain analyze, the query finished in 41 ms. Use auto-explain\n> contrib module to see the explain plan of the slow execution.\n\nthanks. 
we will use auto_explain as soon as some long running updates \nare finished (don't want to kill them)\n\ncheers gerhard\n",
"msg_date": "Mon, 05 Sep 2011 16:14:50 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
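A minimal sketch of the auto_explain setup recommended above, assuming the stock 8.4 contrib module; the threshold shown is illustrative, not taken from the thread:

  -- per-session, from psql, before running the suspect workload
  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = '5s';   -- log plans of statements slower than 5 seconds
  SET auto_explain.log_analyze = true;        -- include actual timings/row counts (adds overhead)

Server-wide it can instead be preloaded with shared_preload_libraries = 'auto_explain' in postgresql.conf and the same auto_explain.* settings there; on 8.4/9.0 that may additionally require listing the prefix in custom_variable_classes.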
{
"msg_contents": "On 5 Září 2011, 15:51, Andy Colson wrote:\n> On 09/05/2011 02:48 AM, Tomas Vondra wrote:\n>> That seems a bit slow ... 27MB/s for writes and 41MB/s forreads is ait\n>> slow with 8 drives.\n>>\n>> Tomas\n>>\n>>\n>\n> Agreed, that's really slow. A single SATA drive will get 60 MB/s. Did\n> you run Bonnie while the VM's were up and running?\n>\n> root@host:~# vmstat\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 0 0 0 308024 884812 40512932 0 0 464 168 353 92 4\n> 2 84 9\n>\n>\n> Only one line? That does not help much. Can you run it as 'vmstat 2' and\n> let it run while a few slow queries are performed? Then paste all the\n> lines?\n\nAnd maybe a few lines from \"iostat -x 2\" too.\n\nBTW what kind of raid is it? Is it hw or sw based? Have you checked health\nof the drives?\n\nAre you sure there's nothing else using the drives (e.g. one of the VMs,\nrebuild of the array or something like that)?\n\nTomas\n\n",
"msg_date": "Mon, 5 Sep 2011 16:15:23 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "hi,\n\nthanks a lot for your help!\n\n>> Dear list,\n>>\n>> we are encountering serious performance problems with our database.\n>> Queries which took around 100ms or less last week now take several\n>> seconds.\n>>\n>> The database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on\n>> hardware as follows:\n>> 8-core Intel Xeon CPU with 2.83GHz\n>> 48 GB RAM\n>> RAID 5 with 8 SAS disks\n>> PostgreSQL 8.4.8 (installed from the Ubuntu repository).\n>>\n>> Additionally to the DB the machine also hosts a few virtual machines. In\n>> the past everything worked very well and the described problem occurs\n>> just out of the blue. We don't know of any postgresql config changes or\n>> anything else which might explain the performance reduction.\n>> We have a number of DBs running in the cluster, and the problem seems to\n>> affect all of them.\n> What are the virtual machines doing? Are you sure they are not doing a lot\n> of IO?\n\nwe also have a ssd-disk in the machine, and the virtual machines do most \nof their IO on that. But there sure also is some amount of I/O onto the \nsystems raid array coming from the virtual machines. maybe we should \nconsider having a dedicated database server.\n\n>> We checked the performance of the RAID .. which is reasonable for eg.\n>> \"hdparm -tT\". Memory is well used, but not swapping.\n>> vmstat shows, that the machine isn't using the swap and the load\n>> shouldn't be also to high:\n>> root@host:~# vmstat\n>> procs -----------memory---------- ---swap-- -----io---- -system--\n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in cs us\n>> sy id wa\n>> 0 0 0 308024 884812 40512932 0 0 464 168 353 92\n>> 4 2 84 9\n>>\n>> Bonnie++ results given below, I am no expert at interpreting those :-)\n>>\n>>\n>> Activating log_min_duration shows for instance this query --- there are\n>> now constantly queries which take absurdely long.\n>>\n>> 2011-09-02 22:38:18 CEST LOG: Dauer: 25520.374 ms Anweisung: SELECT\n>> keyword_id FROM keywords.table_x WHERE keyword=E'diplomaten'\n>>\n>> db=# explain analyze SELECT keyword_id FROM keywords.table_x WHERE\n>> keyword=E'diplomaten';\n>> QUERY\n>> PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Index Scan using idx_table_x_keyword on table_x (cost=0.00..8.29\n>> rows=1 width=4) (actual time=0.039..0.041 rows=1 loops=1)\n>> Index Cond: ((keyword)::text = 'diplomaten'::text)\n>> Total runtime: 0.087 ms\n>> (3 Zeilen)\n>>\n>> db=# \\d keywords.table_x\n>> Tabelle »keywords.table_x«\n>> Spalte | Typ\n>> | Attribute\n>> ------------+-------------------+------------------------------------------------------------------------------------------------------\n>> keyword_id | integer | not null Vorgabewert\n>> nextval('keywords.table_x_keyword_id_seq'::regclass)\n>> keyword | character varying |\n>> so | double precision |\n>> Indexe:\n>> \"table_x_pkey\" PRIMARY KEY, btree (keyword_id) CLUSTER\n>> \"idx_table_x_keyword\" btree (keyword)\n>> Fremdschlüsselverweise von:\n>> TABLE \"keywords.table_x_has\" CONSTRAINT\n>> \"table_x_has_keyword_id_fkey\" FOREIGN KEY (keyword_id) REFERENCES\n>> keywords.table_x(keyword_id) ON UPDATE CASCADE ON DELETE CASCADE\n> But in this explain analyze, the query finished in 41 ms. Use auto-explain\n> contrib module to see the explain plan of the slow execution.\n\nthanks. 
we will use auto_explain as soon as some long running updates \nare finished (don't want to kill them)\n\ncheers gerhard\n",
"msg_date": "Mon, 05 Sep 2011 16:15:27 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 5 Září 2011, 16:08, Gerhard Wohlgenannt wrote:\n> Below please find the results of vmstat 2 over some periode of time ..\n> with normal database / system load.\n\nWhat does a \"normal load\" mean? Does that mean a time when the queries are\nslow?\n\nAre you sure the machine really has 48GB of RAM? Because from the vmstat\noutput it seems like there's just 32GB.\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 0 1342168 336936 107636 31353512 0 0 14 183 1911 3426\n2 1 93 4\n\n\n1342168 + 336936 + 107636 + 31353512 = 33140252 ~ 31GB\n\nBTW there's 1.3GB of swap, although it's not used heavily (according to\nthe vmstat output).\n\nOtherwise I don't see anything wrong in the output. What is the size of\nthe database (use pg_database_size to get it)? Did it grow significantly\nrecently?\n\nTomas\n\n",
"msg_date": "Mon, 5 Sep 2011 16:39:30 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
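The size check Tomas asks for, as a sketch using only built-ins (pg_size_pretty and pg_database_size):

  SELECT datname,
         pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
   ORDER BY pg_database_size(datname) DESC;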
{
"msg_contents": "On 09/05/2011 09:39 AM, Tomas Vondra wrote:\n> On 5 Září 2011, 16:08, Gerhard Wohlgenannt wrote:\n>> Below please find the results of vmstat 2 over some periode of time ..\n>> with normal database / system load.\n>\n> What does a \"normal load\" mean? Does that mean a time when the queries are\n> slow?\n>\n> Are you sure the machine really has 48GB of RAM? Because from the vmstat\n> output it seems like there's just 32GB.\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 0 0 1342168 336936 107636 31353512 0 0 14 183 1911 3426\n> 2 1 93 4\n>\n>\n> 1342168 + 336936 + 107636 + 31353512 = 33140252 ~ 31GB\n>\n> BTW there's 1.3GB of swap, although it's not used heavily (according to\n> the vmstat output).\n>\n> Otherwise I don't see anything wrong in the output. What is the size of\n> the database (use pg_database_size to get it)? Did it grow significantly\n> recently?\n>\n> Tomas\n>\n\nYeah, its interesting that it swapped in memory, but never out. Looking at this vmstat, it does not look like a hardware problem.(Assuming \"normal load\" means slow queries)\n\n> Did it grow significantly recently?\n\nThat's a good thought, maybe the stats are old and you have bad plans? It could also be major updates to the data too (as opposed to growth).\n\nGerhard, have you done an 'explain analyze' on any of your slow queries? Have you done an analyze lately?\n\n-Andy\n",
"msg_date": "Mon, 05 Sep 2011 10:17:42 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
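A hedged sketch of the staleness check Andy suggests, using the standard pg_stat_user_tables view (the LIMIT is arbitrary):

  SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 20;

  ANALYZE VERBOSE;   -- refresh planner statistics for the current database if they look old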
{
"msg_contents": "On Mon, Sep 5, 2011 at 8:08 AM, Gerhard Wohlgenannt <[email protected]> wrote:\n> Below please find the results of vmstat 2 over some periode of time .. with\n> normal database / system load.\n>\n> 0 0 1344332 237196 104140 31468412 0 0 330 102 4322 7130 4 2\n> 90 4\n> 1 1 1344332 236708 104144 31469000 0 0 322 105 2096 3723 1 2\n> 92 5\n> 2 1 1344204 240924 104156 31462484 350 0 1906 234 3687 4512 12 3\n> 77 9\n> 0 0 1344200 238372 104168 31462452 0 0 8 109 4050 8376 8 3\n> 86 3\n> 0 0 1344200 232668 104168 31462468 0 0 12 158 2036 3633 2 2\n> 92 3\n> 0 3 1344196 282784 104180 31413384 4 0 1768 343 2490 4391 1 2\n> 84 13\n> 1 1 1344196 278188 104192 31416080 0 0 1392 341 2215 3850 1 2\n> 82 15\n> 0 0 1344120 276964 104608 31416904 90 0 634 304 2390 3949 4 2\n> 86 8\n> 1 1 1344120 277096 104628 31417752 0 0 492 378 2394 3866 2 1\n> 87 10\n> 0 1 1344120 274476 104628 31418620 0 0 260 233 1997 3255 2 1\n> 91 6\n> 1 1 1344120 276584 104628 31418808 0 0 128 208 2015 3266 2 1\n> 91 6\n> 0 0 1343672 272352 106288 31418788 694 0 1346 344 2170 3660 3 1\n> 89 6\n> 0 1 1343632 270220 107648 31419152 48 0 468 490 2356 3622 4 2\n> 88 5\n> 0 0 1343624 270708 107660 31419344 20 0 228 138 2086 3518 2 3\n> 91 4\n> 0 1 1343612 268732 107660 31419584 12 0 168 112 2100 3585 3 2\n> 91 3\n> 0 0 1343544 266616 107660 31420112 14 0 154 73 2059 3719 3 2\n> 93 3\n> 0 1 1343540 267368 107684 31420168 0 0 78 260 2256 3970 3 2\n> 90 6\n> 0 1 1343540 268352 107692 31420356 0 0 94 284 2239 4086 2 2\n> 89 6\n> 0 0 1343540 274064 107692 31423584 0 0 1622 301 2322 4258 2 3\n> 83 13\n> 0 2 1343440 273064 107704 31423696 96 0 106 180 2158 3795 3 2\n> 90 5\n> 0 0 1342184 262888 107708 31426040 840 0 2014 146 2309 3713 5 3\n> 83 9\n> 0 0 1342184 261904 107732 31426128 0 0 60 158 1893 3510 1 3\n> 91 5\n> 2 0 1342184 258680 107732 31427436 0 0 794 114 2160 3647 2 3\n> 90 5\n> 0 2 1342176 258184 107744 31428308 24 0 310 116 1943 3335 2 2\n> 91 4\n> 1 0 1342172 259068 107756 31428700 2 0 138 143 1976 3468 1 1\n> 93 5\n> 0 0 1342172 258084 107756 31429948 0 0 620 88 2117 3565 3 1\n> 90 6\n> 0 0 1342172 258456 107952 31430028 0 0 62 305 2174 3827 1 2\n> 91 6\n> 1 0 1342172 257480 107952 31430636 0 0 300 256 2316 3959 3 2\n> 86 8\n> 0 0 1342172 257720 107952 31430772 0 0 46 133 2411 4047 3 2\n> 91 3\n> 1 2 1342172 257844 107976 31430776 0 0 136 184 2111 3841 1 1\n> 92 6\n> 1 2 1342172 338376 107576 31349412 0 0 462 8615 2655 5508 5 3\n> 79 13\n> 1 2 1342172 340772 107580 31351080 0 0 682 377 2503 4022 2 1\n> 87 10\n> 1 2 1342172 335688 107596 31351992 0 0 548 306 2480 3867 4 1\n> 86 9\n> 0 2 1342168 337432 107608 31352704 0 0 224 188 1919 3158 1 1\n> 93 6\n> 0 0 1342168 337804 107608 31353020 0 0 154 249 1933 3175 1 1\n> 92 6\n> 0 1 1342168 335944 107636 31353464 0 0 212 173 1912 3280 4 2\n> 89 5\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 0 1342168 336936 107636 31353512 0 0 14 183 1911 3426 2 1\n> 93 4\n> 0 1 1342168 334440 107656 31353736 0 0 264 372 2119 3400 6 2\n> 84 8\n> 0 0 1342164 334084 107680 31354468 0 0 302 413 2361 3613 2 1\n> 87 10\n> 2 0 1342160 342764 107680 31354916 8 0 184 332 2142 3117 1 1\n> 90 7\n> 0 1 1342160 343788 107680 31355808 0 0 360 211 2247 3249 1 2\n> 91 5\n> 2 1 1342156 340804 107704 31355904 0 0 88 280 2287 3448 2 2\n> 90 6\n> 0 1 1342156 344276 107704 31356464 0 0 316 276 2050 3298 1 2\n> 90 7\n> 0 0 1342156 344160 107712 31356576 0 0 4 225 1884 3194 1 3\n> 90 6\n> 0 0 1342152 
342548 107724 31356688 0 0 52 231 1963 3232 1 3\n> 89 6\n> 2 1 1342152 343664 107724 31356764 0 0 104 348 2643 3614 3 2\n> 88 8\n> 1 1 1342144 341060 107760 31357080 16 0 120 307 2511 3474 4 3\n> 87 7\n> 1 0 1342140 342332 107780 31357500 8 0 206 193 2243 3448 4 2\n> 89 5\n> 1 0 1342136 339472 107780 31357508 0 0 32 142 4290 3799 6 3\n> 87 4\n> 0 0 1342136 341160 107780 31357992 0 0 216 171 2613 3995 4 2\n> 88 5\n> 0 0 1342136 342168 107820 31357988 0 0 26 140 2347 3753 3 4\n> 89 4\n> 0 0 1342136 342532 107820 31358128 0 0 36 155 2119 3653 2 1\n> 91 5\n> 2 0 1342136 341564 107828 31358144 0 0 0 151 1973 3486 4 2\n> 90 4\n> 1 1 1342136 342076 107852 31358416 0 0 148 284 2251 3857 6 2\n> 84 8\n> 0 1 1342136 339944 107852 31359284 0 0 482 478 2902 5210 4 2\n> 84 10\n> 0 1 1342136 342184 107852 31359836 0 0 238 372 2292 4063 2 1\n> 88 9\n\nYour IO Wait is actually pretty high. On an 8 core machine, 12.5%\nmeans one core is doing nothing but waiting for IO.\n",
"msg_date": "Mon, 5 Sep 2011 12:45:57 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 09/05/2011 01:45 PM, Scott Marlowe wrote:\n> On Mon, Sep 5, 2011 at 8:08 AM, Gerhard Wohlgenannt<[email protected]> wrote:\n>> Below please find the results of vmstat 2 over some periode of time .. with\n>> normal database / system load.\n>>\n2 1 1344204 240924 104156 31462484 350 0 1906 234 3687 4512 12 3 77 9\n>\n> Your IO Wait is actually pretty high. On an 8 core machine, 12.5%\n> means one core is doing nothing but waiting for IO.\n>\n\nMy server is 2-core, so these numbers looked fine by me. I need to remember core count when I look at these.\n\nSo the line above, for 2 core's would not worry me a bit, but on 8 cores, it pretty much means one core was pegged (with 9% wait? Or is it one core was pegged, and another was 72% io wait?)\n\nI have always loved the vmstat output, but its starting to get confusing when you have to take core's into account. (And my math was never strong in the first place :-) )\n\nGood catch, thanks Scott.\n\n-Andy\n",
"msg_date": "Mon, 05 Sep 2011 14:07:42 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 5 Září 2011, 21:07, Andy Colson wrote:\n> On 09/05/2011 01:45 PM, Scott Marlowe wrote:\n>> On Mon, Sep 5, 2011 at 8:08 AM, Gerhard Wohlgenannt<[email protected]>\n>> wrote:\n>>> Below please find the results of vmstat 2 over some periode of time ..\n>>> with\n>>> normal database / system load.\n>>>\n> 2 1 1344204 240924 104156 31462484 350 0 1906 234 3687 4512 12 3\n> 77 9\n>>\n>> Your IO Wait is actually pretty high. On an 8 core machine, 12.5%\n>> means one core is doing nothing but waiting for IO.\n>>\n>\n> My server is 2-core, so these numbers looked fine by me. I need to\n> remember core count when I look at these.\n>\n> So the line above, for 2 core's would not worry me a bit, but on 8 cores,\n> it pretty much means one core was pegged (with 9% wait? Or is it one core\n> was pegged, and another was 72% io wait?)\n\nAFAIK it's as if one core was 72% io wait. Anyway that's exactly why I was\nasking for \"iostat -x\" because the util% gives a better idea of what's\ngoing on.\n\n> I have always loved the vmstat output, but its starting to get confusing\n> when you have to take core's into account. (And my math was never strong\n> in the first place :-) )\n\nThat's why I love dstat, just do this\n\n$ dstat -C 0,1,2,3,4,5,6,7\n\nand you know all you need.\n\n> Good catch, thanks Scott.\n\nYes, good catch.\n\nStill, this does not explain why the queries were running fast before, and\nwhy the RAID array is so sluggish. Not to mention that we don't know what\nwere the conditions when collecting those numbers (were the VMs off or\nrunning?).\n\nTomas\n\n",
"msg_date": "Mon, 5 Sep 2011 22:38:56 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
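For reference, the monitoring commands being requested in this subthread, as plain invocations (sysstat and dstat must be installed; nothing here is specific to this server):

  iostat -x 2                 # per-device utilisation, queue depth and await, 2-second samples
  dstat -C 0,1,2,3,4,5,6,7    # per-core CPU breakdown, so iowait concentrated on one core stands out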
{
"msg_contents": "Thanks a lot to everybody for their helpful hints!!!\n\nI am running all these benchmarks while the VMs are up .. with the \nsystem under something like \"typical\" loads ..\n\nThe RAID is hardware based. On of my colleagues will check if there is \nany hardware problem on the RAID (the disks) today, but nothing no \nerrors have been reported.\n\nplease find below the results of\niostat -x 2\nvmstat 2\n\nhmm, looks like we definitely do have a problem with I/O load?!\nbtw: dm-19 is the logical volume where the /var (postgresql) is on ..\n\ncheers gerhard\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 1 16 1370892 434996 33840 28938348 1 1 615 312 9 4 \n5 2 81 12\n 0 15 1370892 440832 33840 28938380 0 0 4 136 2086 3899 \n0 4 12 84\n 1 16 1370892 447008 33864 28938380 0 0 0 27 2442 4252 \n1 5 10 83\n 1 11 1370892 452272 33864 28938380 0 0 12 5 2106 3886 \n0 4 12 83\n 2 4 1370892 315880 33888 28941396 0 0 1522 3084 2213 4120 \n4 3 57 37\n 0 10 1370892 240900 33628 28934060 0 0 1060 17275 3396 4793 \n3 3 55 40\n 1 5 1370892 238172 33044 28905652 0 0 148 267 3943 5284 \n2 3 26 69\n 2 2 1370916 232932 31960 28694024 0 12 1170 5625 3037 6336 \n6 7 61 26\n 1 2 1370912 232788 27588 28697216 10 0 1016 3848 2780 5669 \n8 5 56 31\n 1 4 1370908 2392224 27608 28144712 0 0 936 8811 2514 5244 \n8 6 61 25\n 0 1 1370908 2265428 27612 28153188 0 0 4360 1598 2822 4784 \n13 3 69 15\n 1 2 1370908 2041260 27612 28176788 0 0 11842 474 3679 4255 \n12 4 78 6\n 0 3 1370908 2199880 27624 28272112 0 0 47638 569 7798 5495 \n11 4 70 14\n 0 3 1370908 2000752 27624 28318692 0 0 23492 275 5084 5161 \n10 3 71 17\n 1 0 1370908 1691000 27624 28365060 0 0 22920 117 4961 5426 \n12 5 69 15\n 1 0 1370908 2123512 27624 28367576 0 0 1244 145 2053 3728 \n12 3 83 2\n 2 0 1370908 1740724 27636 28403748 0 0 18272 190 2920 4188 \n12 4 76 8\n 2 0 1370908 1305856 27636 28460172 0 0 28174 493 3744 4750 \n11 6 68 15\n 1 2 1370908 973412 27644 28529640 0 0 34614 305 3419 4522 \n12 5 69 13\n 2 2 1370904 1790820 27656 28659080 2 0 64376 389 5527 5374 \n12 7 69 12\n 1 2 1370904 1384100 27656 28750336 0 0 45740 351 4898 5381 \n13 6 68 13\n 1 0 1370904 954200 27656 28864252 0 0 56544 413 4596 5470 \n13 7 66 14\n 1 0 1370904 1597264 27656 28865756 0 0 926 391 2009 3502 \n11 4 81 4\n 3 2 1370904 1219180 27668 28868244 0 0 1160 500 2180 3772 \n11 5 80 4\n 2 7 1370900 809128 27680 28869020 0 0 298 21875 2417 3936 \n11 5 49 35\n 0 9 1370900 1693360 27680 28869032 0 0 8 0 2756 4174 \n8 5 28 59\n 1 2 1370900 1531100 27688 28871104 0 0 1034 7849 2646 4571 \n10 3 72 15\n\n\n\niostat -x 2:\n\nLinux 2.6.32-33-server (voyager) 06.09.2011 _x86_64_ \n(8 CPU)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 5,02 0,00 2,41 11,60 0,00 80,97\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nsda 3,05 5,22 1,05 0,67 117,54 45,72 \n95,37 0,01 3,94 0,75 0,13\nsdb 10,02 148,15 157,91 93,49 10019,50 5098,93 \n60,14 4,53 18,04 2,30 57,75\ndm-0 0,00 0,00 3,03 4,87 24,21 38,96 \n8,00 0,45 56,83 0,06 0,05\ndm-1 0,00 0,00 1,07 0,87 93,32 6,77 \n51,59 0,01 2,71 0,42 0,08\ndm-2 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 6,30 6,30 0,00\ndm-3 0,00 0,00 0,19 0,32 1,54 2,55 \n8,00 0,03 63,61 2,72 0,14\ndm-4 0,00 0,00 0,19 0,88 1,54 7,05 \n8,00 0,04 33,91 12,84 1,38\ndm-5 0,00 0,00 0,10 0,04 0,83 0,33 \n8,00 0,00 16,22 2,63 0,04\ndm-6 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 4,88 4,88 0,00\ndm-7 0,00 0,00 0,00 0,00 0,00 0,00 
\n8,00 0,00 4,37 4,37 0,00\ndm-8 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 4,69 4,69 0,00\ndm-9 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 5,71 5,71 0,00\ndm-10 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 4,65 4,65 0,00\ndm-11 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 4,17 4,17 0,00\ndm-12 0,00 0,00 0,11 1,34 0,90 10,73 \n8,00 0,12 76,31 12,61 1,83\ndm-13 0,00 0,00 0,01 0,00 0,09 0,01 \n8,00 0,00 18,70 1,26 0,00\ndm-14 0,00 0,00 1,83 1,39 14,66 11,10 \n8,00 0,18 55,46 2,77 0,89\ndm-15 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 5,35 5,31 0,00\ndm-16 0,00 0,00 0,18 0,02 4,00 0,38 \n21,08 0,00 21,20 5,95 0,12\ndm-17 0,00 0,00 0,00 0,00 0,01 0,01 \n18,76 0,00 30,79 26,47 0,00\ndm-18 0,00 0,00 1,19 0,02 11,05 0,19 \n9,24 0,00 3,57 1,20 0,15\ndm-19 0,00 0,00 159,62 202,37 9949,08 5022,90 \n41,36 0,60 29,19 1,55 56,27\ndm-20 0,00 0,00 2,39 2,31 19,13 18,48 \n8,00 0,18 39,23 1,29 0,61\ndm-21 0,00 0,00 0,62 2,44 5,00 19,53 \n8,00 0,11 34,84 5,41 1,66\ndm-22 0,00 0,00 0,01 0,03 0,09 0,24 \n8,00 0,00 21,67 0,53 0,00\ndm-23 0,00 0,00 0,75 0,66 6,02 5,32 \n8,00 0,04 26,32 4,89 0,69\ndm-24 0,00 0,00 0,00 0,00 0,00 0,00 \n8,00 0,00 5,67 5,67 0,00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0,88 0,00 5,27 81,72 0,00 12,13\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nsda 0,00 0,50 0,00 3,00 0,00 12,00 \n4,00 0,00 0,00 0,00 0,00\nsdb 0,00 559,00 0,00 523,50 0,00 19148,00 \n36,58 143,87 278,68 1,91 100,00\ndm-0 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-1 0,00 0,00 0,00 1,50 0,00 12,00 \n8,00 0,00 0,00 0,00 0,00\ndm-2 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-3 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-4 0,00 0,00 0,00 4,00 0,00 32,00 \n8,00 0,53 132,50 48,75 19,50\ndm-5 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-6 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-7 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-8 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-9 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-10 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-11 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-12 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-13 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-14 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-15 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-16 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-17 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-18 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-19 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 316,54 0,00 0,00 100,00\ndm-20 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-21 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-22 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-23 0,00 0,00 0,00 1,50 0,00 12,00 \n8,00 0,08 53,33 36,67 5,50\ndm-24 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 1,40 0,00 5,36 53,87 0,00 39,37\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nsda 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\nsdb 0,00 118,50 9,00 627,00 196,00 15220,50 \n24,24 137,49 209,69 1,57 100,00\ndm-0 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-1 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-2 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 
0,00 0,00\ndm-3 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-4 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-5 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-6 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-7 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-8 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-9 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-10 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-11 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-12 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-13 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-14 0,00 0,00 0,00 3,00 0,00 24,00 \n8,00 0,36 58,33 31,67 9,50\ndm-15 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-16 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-17 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-18 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-19 0,00 0,00 4,00 300,50 68,00 9562,00 \n31,63 226,15 730,23 3,28 100,00\ndm-20 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-21 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-22 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-23 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\ndm-24 0,00 0,00 0,00 0,00 0,00 0,00 \n0,00 0,00 0,00 0,00 0,00\n\n\n\n\n\n\n>> Agreed, that's really slow. A single SATA drive will get 60 MB/s. Did\n>> you run Bonnie while the VM's were up and running?\n>>\n>> root@host:~# vmstat\n>> procs -----------memory---------- ---swap-- -----io---- -system--\n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in cs us sy\n>> id wa\n>> 0 0 0 308024 884812 40512932 0 0 464 168 353 92 4\n>> 2 84 9\n>>\n>>\n>> Only one line? That does not help much. Can you run it as 'vmstat 2' and\n>> let it run while a few slow queries are performed? Then paste all the\n>> lines?\n> And maybe a few lines from \"iostat -x 2\" too.\n>\n> BTW what kind of raid is it? Is it hw or sw based? Have you checked health\n> of the drives?\n>\n> Are you sure there's nothing else using the drives (e.g. one of the VMs,\n> rebuild of the array or something like that)?\n>\n> Tomas\n>\n\n",
"msg_date": "Tue, 06 Sep 2011 10:26:58 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "hi,\n\n> What does a \"normal load\" mean? Does that mean a time when the queries are\n> slow?\n\nyes, we are have slow queries (according to postgresql.log) with such load\n> Are you sure the machine really has 48GB of RAM? Because from the vmstat\n> output it seems like there's just 32GB.\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 0 0 1342168 336936 107636 31353512 0 0 14 183 1911 3426\n> 2 1 93 4\n>\n>\n> 1342168 + 336936 + 107636 + 31353512 = 33140252 ~ 31GB\n\nstrange.\nwe paid for 48G :-) and top and free show 48G\n/root# free\n total used free shared buffers cached\nMem: 49564860 49310444 254416 0 30908 30329576\n-/+ buffers/cache: 18949960 30614900\nSwap: 20971512 1370960 19600552\n\n\n> Otherwise I don't see anything wrong in the output. What is the size of\n> the database (use pg_database_size to get it)? Did it grow significantly\n> recently?\n>\n\nthere are a number of databases in the cluster on that machine,\nin the filesystem it adds up to 271G\n\ncheers gerhard\n",
"msg_date": "Tue, 06 Sep 2011 10:32:09 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "\n> That's a good thought, maybe the stats are old and you have bad \n> plans? It could also be major updates to the data too (as opposed to \n> growth).\n\nwe have made checks for number of dead tuples etc recently, but looks \nok. and as \"everything\" in the database seems to be very slow atm, I \nguess the problem is not caused by bad plans for specific tables/queries.\n>\n> Gerhard, have you done an 'explain analyze' on any of your slow \n> queries? Have you done an analyze lately?\n>\n\nyes we added the 'auto_explain' module to log/analyze queries >= 5000ms.\na sample result from the logs (there is lots of stuff in the logs, I \nselected this query because it is very simple):\n\n2011-09-06 04:00:35 CEST ANWEISUNG: INSERT into \nkeywords.table_x_site_impact (content_id, site_impact_id, site_impact) \nVALUES (199083087, 1, 1.000000)\n2011-09-06 04:00:35 CEST LOG: Dauer: 15159.723 ms Anweisung: INSERT \ninto keywords.table_x_site_impact (content_id, site_impact_id, \nsite_impact) VALUES (199083087, 1 , 1.000000)\n2011-09-06 04:00:35 CEST LOG: duration: 15159.161 ms plan:\n Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.017..0.019 rows=1 loops=1)\n Output: \nnextval('keywords.table_x_site_impact_internal_id_seq'::regclass), \n199083087::bigint, 1::smallint, 1::double precision\n\n\n\n",
"msg_date": "Tue, 06 Sep 2011 10:40:40 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "\n> That's why I love dstat, just do this\n>\n> $ dstat -C 0,1,2,3,4,5,6,7\n>\n> and you know all you need.\n\ndstat looks like a very nice tool, results below ..\n(now the system load seems a bit lower then before when generating \nresults for vmstat and iostat)\n>> Good catch, thanks Scott.\n> Yes, good catch.\n>\n> Still, this does not explain why the queries were running fast before, and\n> why the RAID array is so sluggish. Not to mention that we don't know what\n> were the conditions when collecting those numbers (were the VMs off or\n> running?).\n>\nthe VMs were running. they are in something like production use, so i \nshouldn't just turn them off .. :-)\nand the processes in the VMs cause a big portion of the DB load, so \nturning them off would distort the results ...\n\nand thanks again for all the replies!!! :-)\n\n\n~# dstat -C 0,1,2,3,4,5,6,7\n-------cpu0-usage--------------cpu1-usage--------------cpu2-usage--------------cpu3-usage--------------cpu4-usage--------------cpu5-usage--------------cpu6-usage--------------cpu7-usage------ \n-dsk/total- -net/total- ---paging-- ---system--\nusr sys idl wai hiq siq:usr sys idl wai hiq siq:usr sys idl wai hiq \nsiq:usr sys idl wai hiq siq:usr sys idl wai hiq siq:usr sys idl wai hiq \nsiq:usr sys idl wai hiq siq:usr sys idl wai hiq siq| read writ| recv \nsend| in out | int csw\n 7 1 75 17 0 0: 4 5 84 7 0 0: 5 3 80 12 0 \n0: 4 3 85 9 0 0: 7 2 75 16 0 0: 4 2 87 8 0 \n0: 7 2 75 16 0 0: 4 1 87 8 0 0|5071k 2578k| 0 0 \n|9760B 9431B|2468 4126\n 0 0 98 2 0 0: 0 0 98 2 0 0: 6 2 22 71 0 \n0: 5 0 76 19 0 0: 0 12 82 6 0 0: 3 7 88 2 0 \n0: 3 1 84 12 0 0: 2 0 94 4 0 0|5160k 1376k| 60k \n225k| 0 0 |2101 3879\n 11 1 84 4 0 0: 2 0 93 6 0 0: 3 4 72 22 0 \n0: 2 2 92 3 0 1: 10 13 22 54 0 1: 6 7 75 12 0 \n0: 3 0 87 10 0 0: 12 0 81 7 0 0|6640k 1683k| 140k \n240k| 0 0 |2860 4617\n 1 1 29 68 0 1: 12 0 80 8 0 0: 6 0 78 16 0 \n1: 3 1 80 16 0 0: 14 14 57 16 0 0: 0 11 78 12 0 \n0: 9 1 83 7 0 0: 0 0 96 4 0 0|4448k 1266k| 102k \n336k| 0 0 |2790 4645\n 0 0 89 11 0 0: 1 0 98 1 0 0: 14 0 57 29 0 \n0: 1 1 89 9 0 0: 1 15 41 43 0 0: 3 15 75 7 0 \n0: 3 2 60 35 0 0: 0 0 95 5 0 0| 18M 1622k| 97k \n285k| 0 0 |3303 4764\n 0 0 96 4 0 0: 0 0 99 0 0 1: 1 2 14 83 0 \n0: 1 25 17 57 0 0: 1 0 87 12 0 0: 1 0 19 80 0 \n0: 3 3 0 94 0 0: 0 0 48 52 0 0|1320k 19M| 40k \n113k| 0 0 |2909 4709\n 1 0 63 36 0 0: 5 2 88 5 0 0: 34 2 0 63 1 \n0: 8 8 72 12 0 0: 0 9 85 6 0 0: 1 2 84 13 0 \n0: 2 1 60 37 0 0: 1 1 62 36 0 0|9160k 5597k| 52k \n143k| 32k 0 |2659 4650\n 4 0 43 53 0 0: 2 0 93 5 0 0: 9 0 63 28 0 \n0: 3 1 89 7 0 0: 2 9 72 16 0 1: 0 13 81 6 0 \n0: 9 1 52 38 0 0: 3 0 84 13 0 0|4980k 1358k| 106k \n239k| 0 0 |2993 5158\n 2 1 90 7 0 0: 2 0 95 3 0 0: 2 3 82 13 0 \n0: 0 0 87 13 0 0: 6 10 32 52 0 0: 2 10 82 6 0 \n0: 5 0 86 9 0 0: 10 5 81 4 0 0|4376k 2949k| 119k \n295k| 0 0 |2729 4630\n 1 0 93 6 0 0: 2 0 91 6 1 0: 15 4 71 11 0 \n0: 7 2 90 1 0 0: 13 10 12 65 0 0: 2 13 41 45 0 \n0: 1 0 97 2 0 0: 1 0 94 5 0 0|3896k 15M| 87k \n242k| 0 0 |2809 5514\n 2 0 98 0 0 0: 0 0 73 27 0 0: 0 0 100 0 0 \n0: 2 1 29 68 0 0: 4 5 0 92 0 0: 2 5 92 2 0 \n0: 0 0 100 0 0 0: 1 0 77 22 0 0| 172k 19M| 40k \n127k| 0 0 |2221 4069\n 0 0 48 52 0 0: 0 0 97 3 0 0: 0 0 92 8 0 \n0: 3 0 91 6 0 0: 2 10 10 78 0 0: 4 10 81 6 0 \n0: 2 0 29 69 0 0: 1 0 26 73 0 0| 652k 6931k| 66k \n233k| 0 0 |2416 4389\n 6 2 72 21 0 0: 3 1 86 10 0 0: 7 0 60 34 0 \n0: 2 2 91 6 0 0: 1 13 78 9 0 0: 2 8 84 6 0 \n0: 2 0 79 19 0 0: 0 2 87 11 0 0|2784k 1456k| 96k \n206k| 0 0 |2854 5226\n 9 4 50 37 0 0: 3 3 84 10 0 0: 4 0 84 12 0 \n0: 2 3 86 9 0 0: 
10 2 73 15 0 0: 3 5 84 8 0 \n0: 8 4 81 6 0 0: 1 2 84 13 0 0|2952k 1374k| 133k \n305k| 0 0 |3249 5076\n 9 1 78 13 0 0: 4 4 83 9 0 0: 3 1 68 28 0 \n0: 3 3 82 12 0 0: 9 0 64 26 0 1: 2 1 83 13 0 \n1: 9 3 63 24 0 1: 3 1 91 5 0 0|3648k 1420k| 188k \n444k| 0 0 |3560 5981\n 3 1 63 33 0 0: 0 1 86 13 0 0: 1 0 67 32 0 \n0: 1 2 84 12 0 1: 4 2 49 45 0 0: 6 3 82 9 0 \n0: 1 1 93 5 0 0: 1 1 91 7 0 0|2980k 1385k| 181k \n457k| 0 0 |3023 5230\n 3 5 90 2 0 0: 3 9 84 4 0 0: 2 2 55 41 0 \n0: 21 3 69 7 0 0: 1 3 76 20 0 0: 2 1 93 4 0 \n0: 1 3 67 29 0 0: 0 1 93 6 0 0|2796k 1359k| 104k \n237k| 0 0 |2339 4598\n 6 5 74 15 0 0: 0 5 87 8 0 0: 4 0 91 5 0 \n0: 0 0 98 2 0 0: 11 0 74 16 0 0: 3 1 87 9 0 \n0: 26 4 53 17 0 0: 0 1 92 7 0 0|1920k 1401k| 107k \n278k| 0 0 |2352 4480\n 2 0 91 7 0 0: 1 10 84 5 0 0: 4 0 93 3 0 \n0: 3 2 93 2 0 0: 6 2 84 8 0 0: 1 0 97 2 0 \n0: 2 0 85 13 0 0: 3 2 89 6 0 0|1508k 1397k| 134k \n313k| 0 0 |2374 4547\n 2 0 74 24 0 0: 0 0 97 3 0 0: 1 0 97 2 0 \n0: 3 2 95 0 0 0: 1 0 96 3 0 0: 4 2 91 4 0 \n0: 0 5 89 6 0 0: 3 1 91 6 0 0|1464k 1464k| 68k \n143k| 0 0 |1839 3950\n 4 0 91 5 0 0: 5 4 89 3 0 0: 1 0 98 1 0 \n0: 0 1 97 2 0 0: 2 2 68 28 0 0: 1 0 91 8 0 \n0: 2 8 70 20 0 0: 1 5 91 2 0 1|1060k 1903k| 133k \n142k| 0 0 |2358 4720\n 3 0 91 6 0 0: 4 3 88 5 0 0: 1 0 95 4 0 \n0: 1 2 94 3 0 0: 3 1 53 43 0 0: 3 2 93 2 0 \n0: 0 2 86 13 0 0: 0 4 89 7 0 0|1580k 1332k| 91k \n273k| 0 0 |2249 4564\n 4 0 79 16 0 1: 3 6 84 8 0 0: 1 0 92 7 0 \n0: 2 3 88 7 0 0: 5 2 86 7 0 0: 2 4 90 3 0 \n1: 4 1 81 14 0 0: 7 3 81 10 0 0|1128k 1343k| 122k \n263k| 0 0 |3262 5503\n 0 2 60 38 0 0: 3 5 90 3 0 0: 0 11 78 12 0 \n0: 1 5 91 3 0 0: 2 4 90 4 0 0: 4 2 89 5 0 \n0: 3 0 94 3 0 0: 4 2 87 8 0 0|1252k 1255k| 329k \n494k| 0 0 |3223 5661\n 2 0 84 13 0 1: 3 6 84 7 0 0: 3 7 83 8 0 \n0: 5 15 72 8 0 0: 2 1 96 1 0 0: 1 2 93 4 0 \n0: 4 1 65 30 0 0: 1 0 89 10 0 0|1660k 2513k| 148k \n460k| 0 0 |2597 5129\n 1 2 92 5 0 0: 0 1 41 58 0 0: 10 5 83 2 0 \n0: 3 18 78 2 0 0: 3 0 93 4 0 0: 3 0 97 0 0 \n0: 4 0 80 17 0 0: 2 0 91 7 0 0| 652k 15M| 72k \n198k| 0 0 |2340 6382\n 0 1 88 11 0 0: 3 3 88 7 0 0: 2 1 80 17 0 \n0: 0 13 75 13 0 0: 1 1 73 25 0 0: 1 1 93 5 0 \n0: 0 1 94 5 0 0: 1 0 86 13 0 0|1568k 1155k| 68k \n166k| 0 0 |2348 4738\n 3 0 54 43 0 0: 4 9 82 6 0 0: 2 0 88 11 0 \n0: 2 5 88 6 0 0: 1 0 88 11 0 0: 2 0 93 5 0 \n0: 3 1 80 17 0 0: 0 0 88 12 0 0|1200k 1319k| 113k \n233k| 0 0 |2660 5168\n 0 1 91 8 0 0: 4 6 87 4 0 0: 1 0 72 27 0 \n0: 2 2 70 26 0 0: 2 0 85 13 0 0: 1 0 94 5 0 \n0: 2 0 95 3 0 0: 1 1 89 9 0 0|1032k 1346k| 119k \n234k| 0 0 |2398 4808\n 0 1 73 26 0 0: 3 5 83 10 0 0: 1 2 96 1 0 \n0: 4 0 91 5 0 0: 2 2 90 6 0 0: 2 0 94 4 0 \n0: 3 4 70 23 0 0: 3 6 85 6 0 0|1272k 1522k| 117k \n279k| 0 0 |2739 5237\n 1 0 56 43 0 0: 0 2 86 12 0 0: 2 1 89 8 0 \n0: 2 2 92 4 0 0: 7 1 82 10 0 0: 4 1 92 3 0 \n0: 5 2 80 13 0 0: 4 4 87 6 0 0|1080k 1637k| 141k \n345k| 0 0 |2631 5142\n 0 0 65 35 0 0: 0 1 89 10 0 0: 2 0 84 14 0 \n0: 1 3 92 4 0 0: 0 0 89 11 0 0: 2 0 91 7 0 \n0: 2 5 87 6 0 0: 2 7 87 5 0 0|1012k 1412k| 95k \n215k| 0 0 |2304 4762\n 1 0 93 6 0 0: 3 1 95 1 0 0: 2 1 97 0 0 \n0: 3 3 91 3 0 0: 2 0 72 26 0 0: 1 0 93 6 0 \n0: 0 7 70 23 0 0: 2 8 82 8 0 0|1152k 1548k| 101k \n225k| 0 0 |2334 4706\n 1 0 93 6 0 0: 2 3 93 2 0 0: 5 2 91 3 0 \n0: 6 3 91 1 0 0: 5 2 85 8 0 0: 2 1 91 6 0 \n0: 1 5 60 34 0 0: 2 6 86 6 0 0|1096k 1553k| 126k \n289k| 0 0 |2471 4766\n 0 0 63 37 0 0: 0 0 90 10 0 0: 1 0 99 0 0 \n0: 0 0 93 7 0 0: 1 0 86 13 0 0: 2 2 87 9 0 \n0: 0 7 70 24 0 0: 2 3 93 3 0 0|1256k 1262k| 110k \n277k| 0 0 |2281 4534\n 0 0 24 76 0 0: 2 1 81 16 0 0: 1 3 44 52 0 \n0: 2 2 65 31 0 0: 1 2 
75 22 0 0: 1 0 94 5 0 \n0: 0 4 17 79 0 0: 5 3 93 0 0 0| 240k 19M| 83k \n200k| 0 0 |2388 4131\n 1 0 76 23 0 0: 1 1 94 4 0 0: 5 3 81 11 0 \n0: 2 4 80 14 0 0: 2 1 56 41 0 0: 0 0 77 23 0 \n0: 3 7 74 17 0 0: 1 5 90 5 0 0|1108k 3127k| 113k \n262k| 0 0 |2584 4930\n 2 1 90 8 0 0: 2 0 93 5 0 0: 2 1 75 22 0 \n0: 2 1 83 15 0 0: 0 1 79 20 0 0: 0 0 92 7 0 \n1: 2 6 77 14 0 1: 2 8 87 4 0 0|1220k 1358k| 152k \n318k| 0 12k|2590 5104\n 3 0 81 16 0 0: 3 1 88 9 0 0: 3 1 93 3 0 \n0: 2 3 90 5 0 0: 1 0 93 6 0 0: 4 1 92 4 0 \n0: 5 13 37 46 0 0: 2 7 81 10 0 0|1256k 1437k| 167k \n336k| 0 0 |2734 5356\n 0 0 93 7 0 0: 1 2 88 9 0 0: 2 0 92 5 0 \n1: 1 2 94 3 0 0: 0 0 50 50 0 0: 0 0 99 1 0 \n0: 3 11 81 5 0 0: 3 7 86 4 0 0|1108k 1448k| 109k \n225k| 0 0 |2081 4316\n 3 0 42 55 0 0: 0 3 92 5 0 0: 0 0 100 0 0 \n0: 1 0 92 7 0 0: 3 1 69 27 0 0: 1 0 66 33 0 \n0: 4 7 56 33 0 0: 3 3 61 33 0 0| 720k 17M| 104k \n290k| 0 0 |2300 4168\n 0 2 59 39 0 0: 3 7 75 15 0 0: 0 6 89 5 0 \n0: 12 9 75 4 0 0: 1 0 81 18 0 0: 2 1 94 3 0 \n0: 1 8 56 35 0 0: 3 5 57 35 0 1| 428k 8241k| 96k \n259k| 0 124k|2082 5154\n 1 1 90 8 0 0: 2 2 46 49 0 0: 1 0 93 6 0 \n0: 9 4 83 4 0 0: 0 0 95 5 0 0: 1 0 98 1 0 \n0: 6 7 76 11 0 0: 1 4 85 10 0 0| 592k 14M| 82k \n194k| 0 0 |2195 5839\n 4 1 82 13 0 0: 3 0 87 10 0 0: 4 3 81 12 0 \n0: 0 0 82 18 0 0: 1 1 88 11 0 0: 1 0 90 9 0 \n0: 5 7 64 24 0 0: 5 3 86 6 0 0|1876k 1273k| 145k \n323k| 0 0 |2544 4949\n 0 1 66 33 0 0: 5 1 89 6 0 0: 0 1 88 12 0 \n0: 1 3 82 14 0 0: 1 0 86 12 0 1: 1 1 88 10 0 \n0: 8 3 82 8 0 0: 6 7 84 3 0 0|2544k 1259k| 125k \n280k| 0 0 |2487 4776\n 0 1 85 14 0 0: 0 0 100 0 0 0: 3 0 95 2 0 \n0: 5 3 91 2 0 0: 0 0 94 6 0 0: 6 1 84 10 0 \n0: 1 10 50 39 0 0: 4 5 82 9 0 0| 896k 1378k| 92k \n181k| 0 0 |2452 4823\n 0 0 94 6 0 0: 2 1 97 0 0 0: 3 0 94 3 0 \n0: 0 0 89 11 0 0: 2 3 83 12 0 0: 2 2 83 13 0 \n0: 2 9 50 40 0 0: 5 10 66 19 0 0| 848k 1237k| 98k \n204k| 0 0 |2309 4733\n 3 0 93 3 0 1: 2 9 83 6 0 0: 1 0 98 1 0 \n0: 1 3 96 0 0 0: 3 2 84 12 0 0: 1 0 88 11 0 \n0: 1 0 39 60 0 0: 0 4 88 8 0 0| 992k 1113k| 99k \n212k| 0 0 |2318 4696\n 10 0 87 3 0 0: 1 8 90 2 0 0: 2 0 91 7 0 \n0: 4 2 90 4 0 0: 4 2 78 16 0 0: 3 0 89 9 0 \n0: 4 0 90 7 0 0: 0 0 99 1 0 0| 964k 1165k| 119k \n291k| 0 0 |2505 4842\n 8 3 83 6 0 0: 7 0 80 13 0 0: 2 6 67 25 0 \n0: 4 6 80 10 0 0: 2 1 77 20 0 0: 2 1 87 8 0 \n2: 2 1 82 16 0 0: 1 0 95 4 0 0| 888k 1154k| 158k \n331k| 0 0 |2759 5495\n 6 0 91 3 0 0: 4 1 93 2 0 0: 0 10 85 5 0 \n0: 1 10 85 5 0 0: 2 1 37 60 0 0: 0 0 88 12 0 \n0: 5 0 83 12 0 0: 2 1 88 9 0 0| 992k 1143k| 149k \n300k| 0 0 |2606 5054\n 5 0 75 19 0 1: 7 1 81 11 0 0: 4 11 73 12 0 \n0: 6 16 62 16 0 0: 5 1 88 6 0 0: 6 1 84 8 0 \n0: 1 4 71 24 0 1: 1 2 91 6 0 0|1372k 1109k| 193k \n434k| 0 0 |3216 5887\n 3 1 90 6 0 0: 10 3 80 7 0 0: 0 2 92 6 0 \n0: 2 0 94 4 0 0: 0 2 65 33 0 0: 2 0 95 3 0 \n0: 0 7 76 18 0 0: 5 6 79 10 0 0|1044k 1077k| 191k \n409k| 0 0 |2815 5311\n 1 0 90 9 0 0: 6 4 89 2 0 0: 3 0 92 5 0 \n0: 6 1 93 0 0 0: 3 3 45 50 0 0: 1 1 88 10 0 \n0: 3 7 88 3 0 0: 2 7 85 7 0 0| 800k 1214k| 135k \n282k| 0 0 |2570 5140\n 7 1 89 3 0 0: 0 3 91 6 0 0: 1 2 93 4 0 \n0: 3 4 90 3 0 0: 3 1 39 57 0 0: 1 0 88 11 0 \n0: 1 4 79 17 0 0: 3 2 85 9 0 1|1060k 1076k| 108k \n225k| 0 0 |2681 5304\n 1 1 85 13 0 0: 4 5 90 2 0 0: 2 5 91 2 0 \n0: 4 8 83 5 0 0: 2 0 41 58 0 0: 2 2 92 4 0 \n0: 0 2 62 36 0 0: 0 1 90 9 0 0|1956k 948k| 153k \n360k| 0 0 |2802 5189\n 2 0 92 6 0 0: 4 2 85 9 0 0: 0 2 95 3 0 \n0: 1 2 93 4 0 0: 2 3 34 61 0 0: 1 3 85 11 0 \n0: 3 0 55 42 0 0: 1 0 91 8 0 0|1768k 921k| 170k \n367k| 0 0 |2511 4701\n\n\n",
"msg_date": "Tue, 06 Sep 2011 10:55:57 +0200",
"msg_from": "Gerhard Wohlgenannt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden drop in DBb performance"
},
{
"msg_contents": "On 6 Září 2011, 10:26, Gerhard Wohlgenannt wrote:\n> Thanks a lot to everybody for their helpful hints!!!\n>\n> I am running all these benchmarks while the VMs are up .. with the\n> system under something like \"typical\" loads ..\n>\n> The RAID is hardware based. On of my colleagues will check if there is\n> any hardware problem on the RAID (the disks) today, but nothing no\n> errors have been reported.\n>\n> please find below the results of\n> iostat -x 2\n> vmstat 2\n>\n> hmm, looks like we definitely do have a problem with I/O load?!\n> btw: dm-19 is the logical volume where the /var (postgresql) is on ..\n\nWell, it definitely looks like that. Something is doing a lot of writes on\nthat drive - the drive is 100% utilized, i.e. it's a bottleneck. You need\nto find out what is writing the data - try iotop or something like that.\n\nAnd it's probably the reason why the bonnie results were so poor.\n\nTomas\n\n",
"msg_date": "Tue, 6 Sep 2011 11:04:38 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
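A sketch of the iotop step Tomas suggests for finding the writer (iotop needs root; the flags only filter the output):

  iotop -o -a    # -o: show only processes actually doing I/O, -a: accumulated totals since start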
{
"msg_contents": "On 6 Září 2011, 10:55, Gerhard Wohlgenannt wrote:\n>\n>> That's why I love dstat, just do this\n>>\n>> $ dstat -C 0,1,2,3,4,5,6,7\n>>\n>> and you know all you need.\n>\n> dstat looks like a very nice tool, results below ..\n> (now the system load seems a bit lower then before when generating\n> results for vmstat and iostat)\n>>> Good catch, thanks Scott.\n>> Yes, good catch.\n>>\n>> Still, this does not explain why the queries were running fast before,\n>> and\n>> why the RAID array is so sluggish. Not to mention that we don't know\n>> what\n>> were the conditions when collecting those numbers (were the VMs off or\n>> running?).\n>>\n> the VMs were running. they are in something like production use, so i\n> shouldn't just turn them off .. :-)\n> and the processes in the VMs cause a big portion of the DB load, so\n> turning them off would distort the results ...\n\nDistort the results? If you want to measure the RAID performance, you have\nto do that when there are no other processes using it.\n\n> and thanks again for all the replies!!! :-)\n\nPlease, use something like pastebin.com to post there results. It was\nbearable for the vmstat output but this is alamost unreadable due to the\nwrapping.\n\n> ~# dstat -C 0,1,2,3,4,5,6,7\n> -------cpu0-usage--------------cpu1-usage--------------cpu2-usage--------------cpu3-usage--------------cpu4-usage--------------cpu5-usage--------------cpu6-usage--------------cpu7-usage------\n> -dsk/total- -net/total- ---paging-- ---system--\n> usr sys idl wai hiq siq:usr sys idl wai hiq siq:usr sys idl wai hiq\n> siq:usr sys idl wai hiq siq:usr sys idl wai hiq siq:usr sys idl wai hiq\n> siq:usr sys idl wai hiq siq:usr sys idl wai hiq siq| read writ| recv\n> send| in out | int csw\n> 7 1 75 17 0 0: 4 5 84 7 0 0: 5 3 80 12 0\n> 0: 4 3 85 9 0 0: 7 2 75 16 0 0: 4 2 87 8 0\n> 0: 7 2 75 16 0 0: 4 1 87 8 0 0|5071k 2578k| 0 0\n> |9760B 9431B|2468 4126\n...\n\nBut if I read that correctly, the wait for the cores is 17%, 7%, 12%, 9%,\n16%, 8%, 16% and 8%, and the cores are mostly idle (idl is about 85%). So\nit seems there's a low number of processes, switched between the cpus and\nmost of the time they're waiting for the I/O.\n\nGiven the low values for disk I/O and the iostat output we've seen before,\nit's obvious there's a lot of random I/O (mostly writes).\n\nLet's speculate for a while what could cause this (in arbitrary order):\n\n1) Checkpoints. Something is doing a lot of writes, and with DB that often\nmeans a checkpoint is in progress. I'm not sure about your\ncheckpoint_timeout, but you do have 192 segments and about 7GB of shared\nbuffers. That means there may be a lot of dirty buffers (even 100% of the\nbuffers).\n\nYou're using RAID5 and that really is not a write-friendly RAID version.\nWe don't know actual performance as the bonnie was run with VMs accessing\nthe volume, but RAID10 usually performs much better.\n\nEnable log_checkpoints in the config and see what's going on. You can also\nuse iotop to see what processes are doing the writes (it might be a\nbackground writer, ...).\n\n2) The RAID is broken and can't handle the load it handled fine before.\nThis is not very likely, as you've mentioned that there were no warnings\netc.\n\n3) There are some new jobs that do a lot of I/O. Is there anything new\nthat wasn't running before? I guess you'd mention that.\n\n4) The database significantly grew in a short period of time, and the\nactive part now does not fit into the RAM (page cache), so the data has to\nbe retrieved from the disk. 
And it's not just about the database size,\nit's about the active part of the database - if you're suddenly accessing\nmore data, the cache may not be large enough.\n\nThis is usually a gradual process (cache hit ratio slowly decreases as the\ndatabase grows), but if the database grew rapidly ... This could be\ncaused by MVCC, i.e. there may be a lot of dead tuples - have you done a\nbig UPDATE / DELETE or something like that recently?\n\nregards\nTomas\n\n",
"msg_date": "Tue, 6 Sep 2011 18:50:19 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
},
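A hedged sketch of the checkpoint checks from point 1 above; the setting values are illustrative, and pg_stat_bgwriter is the standard 8.4 view:

  # postgresql.conf
  log_checkpoints = on                  # logs each checkpoint with buffers written and write/sync times
  checkpoint_completion_target = 0.9    # spread checkpoint writes over more of the interval

  -- many checkpoints_req relative to checkpoints_timed means checkpoints are
  -- being forced by WAL volume rather than by checkpoint_timeout
  SELECT checkpoints_timed, checkpoints_req,
         buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;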
{
"msg_contents": "Hi,\n\nOn 03.09.2011 09:26, Gerhard Wohlgenannt wrote:\n> Activating log_min_duration shows for instance this query --- there are\n> now constantly queries which take absurdely long.\n\n2 things you should check:\n\n- if your /var/lib/postgresql is on an ext3 fs, I've seen such things \nbefore due to the changes around Linux 2.6.2X that caused long stalls \nduring fsync() (see https://lwn.net/Articles/328363/ etc.). But if all \nyour queries take longer than they should and not only a few, that \nprobably isn't the reason.\n\n- regarding your hardware RAID-5, I've seen some controllers switch the \nwrite cache off quietly for various reasons (battery too old, issues \nwith disks). Check if it is still enabled, without write caching \nenabled, most RAID-controllers will just suck.\n\nRegards,\n Marinos\n",
"msg_date": "Tue, 13 Sep 2011 01:11:12 +0200",
"msg_from": "Marinos Yannikos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drop in DBb performance"
}
] |
[
{
"msg_contents": "\nHi, \n\nI have a database server that's part of a web stack and is experiencing prolonged load average spikes of up to 400+ when the db is restarted and first accessed by the other parts of the stack and has generally poor performance on even simple select queries.\n\nThere are 30 DBs in total on the server coming in at 226GB. The one that's used the most is 67GB and there are another 29 that come to 159GB. \n\nI'd really appreciate it if you could review my configurations below and make any suggestions that might help alleviate the performance issues. I've been looking more into the shared buffers to the point of installing the contrib module to check what they're doing, possibly installing more RAM as the most used db @ 67GB might appreciate it, or moving the most used DB onto another set of disks, possible SSD.\n\n\nPostgreSQL 9.0.4\nPgbouncer 1.4.1\n\nLinux 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux\n\nCentOS release 5.6 (Final)\n\n4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz] ( 24 physical cores )\n32GB DDR3 RAM\n1 x Adaptec 5805 Z SATA/SAS RAID with battery backup\n4 x Seagate Cheetah ST3300657SS 300GB 15RPM SAS drives in RAID 10\n1 x 500GB 7200RPM SATA disk\n\nPostgres and the OS reside on the same ex3 filesystem, whilst query and archive logging go onto the SATA disk which is also ext3.\n\n\n name | current_setting \n--------------------------------+-------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n archive_command | tar jcf /disk1/db-wal/%f.tar.bz2 %p\n archive_mode | on\n autovacuum | off\n checkpoint_completion_target | 0.9\n checkpoint_segments | 10\n client_min_messages | notice\n effective_cache_size | 17192MB\n external_pid_file | /var/run/postgresql/9-main.pid\n fsync | off\n full_page_writes | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | \n log_checkpoints | on\n log_destination | stderr\n log_directory | /disk1/pg_log\n log_error_verbosity | verbose\n log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n log_line_prefix | %m %u %h \n log_min_duration_statement | 250ms\n log_min_error_statement | error\n log_min_messages | notice\n log_rotation_age | 1d\n logging_collector | on\n maintenance_work_mem | 32MB\n max_connections | 1000\n max_prepared_transactions | 25\n max_stack_depth | 4MB\n port | 6432\n server_encoding | UTF8\n shared_buffers | 8GB\n superuser_reserved_connections | 3\n synchronous_commit | on\n temp_buffers | 5120\n TimeZone | UTC\n unix_socket_directory | /var/run/postgresql\n wal_buffers | 10MB\n wal_level | archive\n wal_sync_method | fsync\n work_mem | 16MB\n\n\nPgbouncer config\n\n[databases]\n* = port=6432\n[pgbouncer]\nuser=postgres\npidfile = /tmp/pgbouncer.pid\nlisten_addr = \nlisten_port = 5432\nunix_socket_dir = /var/run/postgresql\nauth_type = trust \nauth_file = /etc/pgbouncer/userlist.txt\nadmin_users = postgres\nstats_users = postgres\npool_mode = session \nserver_reset_query = DISCARD ALL;\nserver_check_query = select 1\nserver_check_delay = 10 \nserver_idle_timeout = 5\nserver_lifetime = 0\nmax_client_conn = 4096 \ndefault_pool_size = 100\nlog_connections = 1\nlog_disconnections = 1\nlog_pooler_errors = 1\nclient_idle_timeout = 30\nreserve_pool_size = 800\n\n\nThanks in advance\n\nRichard\n\n",
"msg_date": "Mon, 5 Sep 2011 11:28:19 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rather large LA"
},
{
"msg_contents": "On 5/09/2011 6:28 PM, Richard Shaw wrote:\n> max_connections | 1000\n\nWoah! No wonder you have \"stampeding herd\" problems after a DB or server \nrestart and are having performance issues.\n\nWhen you have 1000 clients trying to do work at once, they'll all be \nfighting over memory, disk I/O bandwidth, and CPU power which is nowhere \nnear sufficient to allow them to all actually achieve something all at \nonce. You'll have a lot of overhead as the OS tries to be fair and allow \neach to make progress - at the expense of overall throughput.\n\nIf most of those connections are idle most of the time - say, they're \nperistent connections from some webapp that requrires one connection per \nwebserver thread - then the situation isn't so bad. They're still \ncosting you backend RAM and various housekeeping overhead (including \ntask switching) related to lock management and shared memory, though.\n\nConsider using a connection pooler like PgPool-II or PgBouncer if your \napplication is suitable. Most apps will be quite happy using pooled \nconnections; only a few things like advisory locking and HOLD cursors \nwork poorly with pooled connections. Using a pool allows you to reduce \nthe number of actively working and busy connections to the real Pg \nbackend to something your hardware can cope with, which should \ndramatically increase performance and reduce startup load spikes. The \ngeneral very rough rule of thumb for number of active connections is \n\"number of CPU cores + number of HDDs\" but of course this is only \nincredibly rough and depends a lot on your workload and DB.\n\nIdeally PostgreSQL would take care of this pooling inside the server, \nbreaking the \"one connection = one worker backend\" equivalence. \nUnfortunately the server's process-based design makes that harder than \nit could be. There's also a lot of debate about whether pooling is even \nthe core DB server's job and if it is, which of the several possible \napproaches is the most appropriate. Then there's the issue of whether \nin-server connection pooling is even appropriate without admission \ncontrol - which brings up the \"admission control is insanely hard\" \nproblem. So for now, pooling lives outside the server in projects like \nPgPool-II and PgBouncer.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 05 Sep 2011 18:49:50 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "\nHi Craig,\n\nApologies, I should have made that clearer, I am using PgBouncer 1.4.1 in front of Postgres and included the config at the bottom of my original mail\n\nRegards\n\nRichard\n\n.........\n\nOn 5 Sep 2011, at 11:49, Craig Ringer wrote:\n\n> On 5/09/2011 6:28 PM, Richard Shaw wrote:\n>> max_connections | 1000\n> \n> Woah! No wonder you have \"stampeding herd\" problems after a DB or server restart and are having performance issues.\n> \n> When you have 1000 clients trying to do work at once, they'll all be fighting over memory, disk I/O bandwidth, and CPU power which is nowhere near sufficient to allow them to all actually achieve something all at once. You'll have a lot of overhead as the OS tries to be fair and allow each to make progress - at the expense of overall throughput.\n> \n> If most of those connections are idle most of the time - say, they're peristent connections from some webapp that requrires one connection per webserver thread - then the situation isn't so bad. They're still costing you backend RAM and various housekeeping overhead (including task switching) related to lock management and shared memory, though.\n> \n> Consider using a connection pooler like PgPool-II or PgBouncer if your application is suitable. Most apps will be quite happy using pooled connections; only a few things like advisory locking and HOLD cursors work poorly with pooled connections. Using a pool allows you to reduce the number of actively working and busy connections to the real Pg backend to something your hardware can cope with, which should dramatically increase performance and reduce startup load spikes. The general very rough rule of thumb for number of active connections is \"number of CPU cores + number of HDDs\" but of course this is only incredibly rough and depends a lot on your workload and DB.\n> \n> Ideally PostgreSQL would take care of this pooling inside the server, breaking the \"one connection = one worker backend\" equivalence. Unfortunately the server's process-based design makes that harder than it could be. There's also a lot of debate about whether pooling is even the core DB server's job and if it is, which of the several possible approaches is the most appropriate. Then there's the issue of whether in-server connection pooling is even appropriate without admission control - which brings up the \"admission control is insanely hard\" problem. So for now, pooling lives outside the server in projects like PgPool-II and PgBouncer.\n> \n> --\n> Craig Ringer\n\n",
"msg_date": "Mon, 5 Sep 2011 11:55:20 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On 5/09/2011 6:55 PM, Richard Shaw wrote:\n> Hi Craig,\n>\n> Apologies, I should have made that clearer, I am using PgBouncer 1.4.1 in front of Postgres and included the config at the bottom of my original mail\n>\nAh, I see. The point still stands: your hardware can *not* efficiently \ndo work for 1000 concurrent backend workers. Reduce the maximum number \nof workers by setting a lower cap on the pool size and a lower \nmax_connections. This won't work (you'll run out of pooler connections) \nunless you also set PgBouncer to transaction pooling mode instead of the \ndefault session pooling mode, which you're currently using. It is \n*important* to read the documentation on this before doing it, as there \nare implications for apps that use extra-transactional features like \nHOLD cursors and advisory locks.\n\nSee: http://pgbouncer.projects.postgresql.org/doc/usage.html\n\nIt may also be necessary to set PgBouncer to block (wait) rather than \nreport an error when there is no pooled connection available to start a \nnew transaction on. I'm not sure what PgBouncer's default behavior for \nthat is and didn't see anything immediately clear in the pgbouncer(5) \nini file man page.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 05 Sep 2011 19:16:45 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
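A minimal sketch of the transaction-pooling change discussed above, reusing the parameter names from the PgBouncer and postgresql.conf listings quoted in this thread; the pool sizes are illustrative guesses for a 24-core, 4-spindle box, not tested values:

    ; pgbouncer.ini -- pool per transaction instead of per session, and shrink the server pool
    pool_mode = transaction
    default_pool_size = 30        ; roughly CPU cores + spindles, then tune from measurements
    reserve_pool_size = 10
    max_client_conn = 4096        ; clients can still connect; they queue for a free server slot

    # postgresql.conf -- backend slots only need to cover the pool plus superuser slack
    max_connections = 100

As noted above, check first that the application does not rely on session state (HOLD cursors, advisory locks, session-level SET, temp tables) before switching modes.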
{
"msg_contents": "On 09/05/2011 05:28 AM, Richard Shaw wrote:\n>\n> Hi,\n>\n> I have a database server that's part of a web stack and is experiencing prolonged load average spikes of up to 400+ when the db is restarted and first accessed by the other parts of the stack and has generally poor performance on even simple select queries.\n>\n\nIs the slowness new? Or has it always been a bit slow? Have you checked for bloat on your tables/indexes?\n\nWhen you start up, does it peg a cpu or sit around doing IO?\n\nHave you reviewed the server logs?\n\n\nautovacuum | off\n\nWhy? I assume that's a problem.\n\nfsync | off\n\nSeriously?\n\n\n-Andy\n\n\n\n> There are 30 DBs in total on the server coming in at 226GB. The one that's used the most is 67GB and there are another 29 that come to 159GB.\n>\n> I'd really appreciate it if you could review my configurations below and make any suggestions that might help alleviate the performance issues. I've been looking more into the shared buffers to the point of installing the contrib module to check what they're doing, possibly installing more RAM as the most used db @ 67GB might appreciate it, or moving the most used DB onto another set of disks, possible SSD.\n>\n>\n> PostgreSQL 9.0.4\n> Pgbouncer 1.4.1\n>\n> Linux 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux\n>\n> CentOS release 5.6 (Final)\n>\n> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz] ( 24 physical cores )\n> 32GB DDR3 RAM\n> 1 x Adaptec 5805 Z SATA/SAS RAID with battery backup\n> 4 x Seagate Cheetah ST3300657SS 300GB 15RPM SAS drives in RAID 10\n> 1 x 500GB 7200RPM SATA disk\n>\n> Postgres and the OS reside on the same ex3 filesystem, whilst query and archive logging go onto the SATA disk which is also ext3.\n>\n>\n> name | current_setting\n> --------------------------------+-------------------------------------------------------------------------------------------------------------------\n> version | PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n> archive_command | tar jcf /disk1/db-wal/%f.tar.bz2 %p\n> archive_mode | on\n> autovacuum | off\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 10\n> client_min_messages | notice\n> effective_cache_size | 17192MB\n> external_pid_file | /var/run/postgresql/9-main.pid\n> fsync | off\n> full_page_writes | on\n> lc_collate | en_US.UTF-8\n> lc_ctype | en_US.UTF-8\n> listen_addresses |\n> log_checkpoints | on\n> log_destination | stderr\n> log_directory | /disk1/pg_log\n> log_error_verbosity | verbose\n> log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n> log_line_prefix | %m %u %h\n> log_min_duration_statement | 250ms\n> log_min_error_statement | error\n> log_min_messages | notice\n> log_rotation_age | 1d\n> logging_collector | on\n> maintenance_work_mem | 32MB\n> max_connections | 1000\n> max_prepared_transactions | 25\n> max_stack_depth | 4MB\n> port | 6432\n> server_encoding | UTF8\n> shared_buffers | 8GB\n> superuser_reserved_connections | 3\n> synchronous_commit | on\n> temp_buffers | 5120\n> TimeZone | UTC\n> unix_socket_directory | /var/run/postgresql\n> wal_buffers | 10MB\n> wal_level | archive\n> wal_sync_method | fsync\n> work_mem | 16MB\n>\n>\n> Pgbouncer config\n>\n> [databases]\n> * = port=6432\n> [pgbouncer]\n> user=postgres\n> pidfile = /tmp/pgbouncer.pid\n> listen_addr =\n> listen_port = 5432\n> unix_socket_dir = /var/run/postgresql\n> auth_type = trust\n> auth_file = /etc/pgbouncer/userlist.txt\n> admin_users = 
postgres\n> stats_users = postgres\n> pool_mode = session\n> server_reset_query = DISCARD ALL;\n> server_check_query = select 1\n> server_check_delay = 10\n> server_idle_timeout = 5\n> server_lifetime = 0\n> max_client_conn = 4096\n> default_pool_size = 100\n> log_connections = 1\n> log_disconnections = 1\n> log_pooler_errors = 1\n> client_idle_timeout = 30\n> reserve_pool_size = 800\n>\n>\n> Thanks in advance\n>\n> Richard\n>\n>\n\n",
"msg_date": "Mon, 05 Sep 2011 08:39:10 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
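One rough way to answer the bloat question Andy raises is the standard statistics view, which already exists on 9.0; a quick sketch:

    -- dead-row counts and last vacuum times for the busiest tables
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

If n_dead_tup keeps climbing between the nightly cron vacuums, the manual schedule is not keeping up and the tables and indexes will bloat.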
{
"msg_contents": "\nHi Andy,\n\nIt's not a new issue no, It's a legacy system that is in no way ideal but is also not in a position to be overhauled. Indexes are correct, tables are up to 25 million rows. \n\nOn startup, it hits CPU more than IO, I'll provide some additional stats after I restart it tonight.\n\nServer logs have been reviewed and where possible, slow queries have been fixed. \n\nAutovacuum has been disabled and set to run manually via cron during a quiet period and fsync has recently been turned off to gauge any real world performance increase, there is battery backup on the raid card providing some level of resilience.\n\nThanks\n\nRichard\n\n\nOn 5 Sep 2011, at 14:39, Andy Colson wrote:\n\n> On 09/05/2011 05:28 AM, Richard Shaw wrote:\n>> \n>> Hi,\n>> \n>> I have a database server that's part of a web stack and is experiencing prolonged load average spikes of up to 400+ when the db is restarted and first accessed by the other parts of the stack and has generally poor performance on even simple select queries.\n>> \n> \n> Is the slowness new? Or has it always been a bit slow? Have you checked for bloat on your tables/indexes?\n> \n> When you start up, does it peg a cpu or sit around doing IO?\n> \n> Have you reviewed the server logs?\n> \n> \n> autovacuum | off\n> \n> Why? I assume that's a problem.\n> \n> fsync | off\n> \n> Seriously?\n> \n> \n> -Andy\n> \n> \n> \n>> There are 30 DBs in total on the server coming in at 226GB. The one that's used the most is 67GB and there are another 29 that come to 159GB.\n>> \n>> I'd really appreciate it if you could review my configurations below and make any suggestions that might help alleviate the performance issues. I've been looking more into the shared buffers to the point of installing the contrib module to check what they're doing, possibly installing more RAM as the most used db @ 67GB might appreciate it, or moving the most used DB onto another set of disks, possible SSD.\n>> \n>> \n>> PostgreSQL 9.0.4\n>> Pgbouncer 1.4.1\n>> \n>> Linux 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux\n>> \n>> CentOS release 5.6 (Final)\n>> \n>> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz] ( 24 physical cores )\n>> 32GB DDR3 RAM\n>> 1 x Adaptec 5805 Z SATA/SAS RAID with battery backup\n>> 4 x Seagate Cheetah ST3300657SS 300GB 15RPM SAS drives in RAID 10\n>> 1 x 500GB 7200RPM SATA disk\n>> \n>> Postgres and the OS reside on the same ex3 filesystem, whilst query and archive logging go onto the SATA disk which is also ext3.\n>> \n>> \n>> name | current_setting\n>> --------------------------------+-------------------------------------------------------------------------------------------------------------------\n>> version | PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n>> archive_command | tar jcf /disk1/db-wal/%f.tar.bz2 %p\n>> archive_mode | on\n>> autovacuum | off\n>> checkpoint_completion_target | 0.9\n>> checkpoint_segments | 10\n>> client_min_messages | notice\n>> effective_cache_size | 17192MB\n>> external_pid_file | /var/run/postgresql/9-main.pid\n>> fsync | off\n>> full_page_writes | on\n>> lc_collate | en_US.UTF-8\n>> lc_ctype | en_US.UTF-8\n>> listen_addresses |\n>> log_checkpoints | on\n>> log_destination | stderr\n>> log_directory | /disk1/pg_log\n>> log_error_verbosity | verbose\n>> log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n>> log_line_prefix | %m %u %h\n>> log_min_duration_statement | 250ms\n>> log_min_error_statement | 
error\n>> log_min_messages | notice\n>> log_rotation_age | 1d\n>> logging_collector | on\n>> maintenance_work_mem | 32MB\n>> max_connections | 1000\n>> max_prepared_transactions | 25\n>> max_stack_depth | 4MB\n>> port | 6432\n>> server_encoding | UTF8\n>> shared_buffers | 8GB\n>> superuser_reserved_connections | 3\n>> synchronous_commit | on\n>> temp_buffers | 5120\n>> TimeZone | UTC\n>> unix_socket_directory | /var/run/postgresql\n>> wal_buffers | 10MB\n>> wal_level | archive\n>> wal_sync_method | fsync\n>> work_mem | 16MB\n>> \n>> \n>> Pgbouncer config\n>> \n>> [databases]\n>> * = port=6432\n>> [pgbouncer]\n>> user=postgres\n>> pidfile = /tmp/pgbouncer.pid\n>> listen_addr =\n>> listen_port = 5432\n>> unix_socket_dir = /var/run/postgresql\n>> auth_type = trust\n>> auth_file = /etc/pgbouncer/userlist.txt\n>> admin_users = postgres\n>> stats_users = postgres\n>> pool_mode = session\n>> server_reset_query = DISCARD ALL;\n>> server_check_query = select 1\n>> server_check_delay = 10\n>> server_idle_timeout = 5\n>> server_lifetime = 0\n>> max_client_conn = 4096\n>> default_pool_size = 100\n>> log_connections = 1\n>> log_disconnections = 1\n>> log_pooler_errors = 1\n>> client_idle_timeout = 30\n>> reserve_pool_size = 800\n>> \n>> \n>> Thanks in advance\n>> \n>> Richard\n>> \n>> \n> \n\n",
"msg_date": "Mon, 5 Sep 2011 14:57:43 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "I think that wal_segments are too low, try 30.\n\n2011/9/5, Andy Colson <[email protected]>:\n> On 09/05/2011 05:28 AM, Richard Shaw wrote:\n>>\n>> Hi,\n>>\n>> I have a database server that's part of a web stack and is experiencing\n>> prolonged load average spikes of up to 400+ when the db is restarted and\n>> first accessed by the other parts of the stack and has generally poor\n>> performance on even simple select queries.\n>>\n>\n> Is the slowness new? Or has it always been a bit slow? Have you checked\n> for bloat on your tables/indexes?\n>\n> When you start up, does it peg a cpu or sit around doing IO?\n>\n> Have you reviewed the server logs?\n>\n>\n> autovacuum | off\n>\n> Why? I assume that's a problem.\n>\n> fsync | off\n>\n> Seriously?\n>\n>\n> -Andy\n>\n>\n>\n>> There are 30 DBs in total on the server coming in at 226GB. The one\n>> that's used the most is 67GB and there are another 29 that come to 159GB.\n>>\n>> I'd really appreciate it if you could review my configurations below and\n>> make any suggestions that might help alleviate the performance issues.\n>> I've been looking more into the shared buffers to the point of installing\n>> the contrib module to check what they're doing, possibly installing more\n>> RAM as the most used db @ 67GB might appreciate it, or moving the most\n>> used DB onto another set of disks, possible SSD.\n>>\n>>\n>> PostgreSQL 9.0.4\n>> Pgbouncer 1.4.1\n>>\n>> Linux 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64\n>> x86_64 GNU/Linux\n>>\n>> CentOS release 5.6 (Final)\n>>\n>> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz] ( 24 physical cores )\n>> 32GB DDR3 RAM\n>> 1 x Adaptec 5805 Z SATA/SAS RAID with battery backup\n>> 4 x Seagate Cheetah ST3300657SS 300GB 15RPM SAS drives in RAID 10\n>> 1 x 500GB 7200RPM SATA disk\n>>\n>> Postgres and the OS reside on the same ex3 filesystem, whilst query and\n>> archive logging go onto the SATA disk which is also ext3.\n>>\n>>\n>> name |\n>> current_setting\n>> --------------------------------+-------------------------------------------------------------------------------------------------------------------\n>> version | PostgreSQL 9.0.4 on\n>> x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red\n>> Hat 4.1.2-48), 64-bit\n>> archive_command | tar jcf /disk1/db-wal/%f.tar.bz2 %p\n>> archive_mode | on\n>> autovacuum | off\n>> checkpoint_completion_target | 0.9\n>> checkpoint_segments | 10\n>> client_min_messages | notice\n>> effective_cache_size | 17192MB\n>> external_pid_file | /var/run/postgresql/9-main.pid\n>> fsync | off\n>> full_page_writes | on\n>> lc_collate | en_US.UTF-8\n>> lc_ctype | en_US.UTF-8\n>> listen_addresses |\n>> log_checkpoints | on\n>> log_destination | stderr\n>> log_directory | /disk1/pg_log\n>> log_error_verbosity | verbose\n>> log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n>> log_line_prefix | %m %u %h\n>> log_min_duration_statement | 250ms\n>> log_min_error_statement | error\n>> log_min_messages | notice\n>> log_rotation_age | 1d\n>> logging_collector | on\n>> maintenance_work_mem | 32MB\n>> max_connections | 1000\n>> max_prepared_transactions | 25\n>> max_stack_depth | 4MB\n>> port | 6432\n>> server_encoding | UTF8\n>> shared_buffers | 8GB\n>> superuser_reserved_connections | 3\n>> synchronous_commit | on\n>> temp_buffers | 5120\n>> TimeZone | UTC\n>> unix_socket_directory | /var/run/postgresql\n>> wal_buffers | 10MB\n>> wal_level | archive\n>> wal_sync_method | fsync\n>> work_mem | 16MB\n>>\n>>\n>> Pgbouncer config\n>>\n>> 
[databases]\n>> * = port=6432\n>> [pgbouncer]\n>> user=postgres\n>> pidfile = /tmp/pgbouncer.pid\n>> listen_addr =\n>> listen_port = 5432\n>> unix_socket_dir = /var/run/postgresql\n>> auth_type = trust\n>> auth_file = /etc/pgbouncer/userlist.txt\n>> admin_users = postgres\n>> stats_users = postgres\n>> pool_mode = session\n>> server_reset_query = DISCARD ALL;\n>> server_check_query = select 1\n>> server_check_delay = 10\n>> server_idle_timeout = 5\n>> server_lifetime = 0\n>> max_client_conn = 4096\n>> default_pool_size = 100\n>> log_connections = 1\n>> log_disconnections = 1\n>> log_pooler_errors = 1\n>> client_idle_timeout = 30\n>> reserve_pool_size = 800\n>>\n>>\n>> Thanks in advance\n>>\n>> Richard\n>>\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Mon, 5 Sep 2011 17:14:14 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On 09/05/2011 08:57 AM, Richard Shaw wrote:\n>\n> Hi Andy,\n>\n> It's not a new issue no, It's a legacy system that is in no way ideal but is also not in a position to be overhauled. Indexes are correct, tables are up to 25 million rows.\n>\n> On startup, it hits CPU more than IO, I'll provide some additional stats after I restart it tonight.\n>\n> Server logs have been reviewed and where possible, slow queries have been fixed.\n>\n> Autovacuum has been disabled and set to run manually via cron during a quiet period and fsync has recently been turned off to gauge any real world performance increase, there is battery backup on the raid card providing some level of resilience.\n>\n> Thanks\n>\n> Richard\n>\n>\n\nSo I'm guessing that setting fsync off did not help your performance problems. And you say CPU is high, so I think we can rule out disk IO problems.\n\n> possibly installing more RAM as the most used db @ 67GB might appreciate it\n\nThat would only be if every row of that 67 gig is being used. If its history stuff that never get's looked up, then adding more ram wont help because none of that data is being loaded anyway. Out of that 67 Gig, what is the working size? (Not really a number you can look up, I'm looking for more of an empirical little/some/lots/most).\n\npgpool:\n\nmax_client_conn = 4096\nreserve_pool_size = 800\n\nI've not used pgpool, but these seem really high. Does that mean pgpool will create 4K connectsions to the backend? Or does it mean it'll allow 4K connections to pgpool but only 800 connections to the backend.\n\nI wonder...When you startup, if you watch vmstat for a bit, do you have tons of context switches? If its not IO, and you dont say \"OMG, CPU is pegged!\" so I assume its not CPU bound, I wonder if there are so many processes fighting for resources they are stepping on each other.\n\nWhen you get up and running (and its slow), what does this display:\n\nps ax|grep postgr|wc --lines\n\nThat and a minute of 'vmstat 2' would be neet to see as well.\n\n-Andy\n\n\n\n",
"msg_date": "Mon, 05 Sep 2011 11:47:15 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:\n> Autovacuum has been disabled and set to run manually via cron during a quiet\n> period and fsync has recently been turned off to gauge any real world\n> performance increase, there is battery backup on the raid card providing\n> some level of resilience.\nThat doesn't help you against a failure due to fsync() off as the BBU can only \nprotect data that actually has been written to disk. Without fsync=on no \nguarantee about that exists.\n\nAndres\n",
"msg_date": "Mon, 05 Sep 2011 19:24:17 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On September 5, 2011, Richard Shaw <[email protected]> wrote:\n> Hi Andy,\n> \n> It's not a new issue no, It's a legacy system that is in no way ideal but\n> is also not in a position to be overhauled. Indexes are correct, tables\n> are up to 25 million rows.\n> \n> On startup, it hits CPU more than IO, I'll provide some additional stats\n> after I restart it tonight.\n\nI bet it's I/O bound until a good chunk of the active data gets cached. Run \na vmstat 1 while it's that busy, I bet most of the cpu time is really in \nio_wait. \n\n\nOn September 5, 2011, Richard Shaw <[email protected]> wrote:\n> Hi Andy,\n> \n> It's not a new issue no, It's a legacy system that is in no way ideal but\n> is also not in a position to be overhauled. Indexes are correct, tables\n> are up to 25 million rows.\n> \n> On startup, it hits CPU more than IO, I'll provide some additional stats\n> after I restart it tonight.\n\nI bet it's I/O bound until a good chunk of the active data gets cached. Run a vmstat 1 while it's that busy, I bet most of the cpu time is really in io_wait.",
"msg_date": "Mon, 5 Sep 2011 13:05:06 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On Mon, Sep 5, 2011 at 11:24 AM, Andres Freund <[email protected]> wrote:\n> On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:\n>> Autovacuum has been disabled and set to run manually via cron during a quiet\n>> period and fsync has recently been turned off to gauge any real world\n>> performance increase, there is battery backup on the raid card providing\n>> some level of resilience.\n> That doesn't help you against a failure due to fsync() off as the BBU can only\n> protect data that actually has been written to disk. Without fsync=on no\n> guarantee about that exists.\n\nFurther, if you've got a bbu cache on the RAID card the gains from\nfsync=off wll be low / nonexistent.\n",
"msg_date": "Mon, 5 Sep 2011 14:23:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
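The safer trade-off Andres and Scott are pointing at is usually spelled like this in postgresql.conf; a hedged sketch, not a drop-in config:

    fsync = on                   # keep crash safety; the BBU write cache makes the flushes cheap
    synchronous_commit = off     # a crash may lose the last few commits, but never corrupts data
    full_page_writes = on

This keeps the cluster recoverable after a crash; as Andres notes, some bulk-write workloads may still see a difference from fsync=off, so it is worth measuring before relying on it.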
{
"msg_contents": "\nvmstat 1 and iostat -x output \n\nNormal\n\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 3 0 2332 442428 73904 31287344 0 0 89 42 0 0 7 5 85 3 0\n 4 1 2332 428428 73904 31288288 0 0 1440 0 6553 29066 5 2 91 1 0\n 4 1 2332 422688 73904 31288688 0 0 856 0 4480 18860 3 1 95 1 0\n 0 0 2332 476072 73920 31289444 0 0 544 1452 4478 19103 3 1 95 0 0\n 3 0 2332 422288 73920 31290572 0 0 1268 496 5565 23410 5 3 91 1 0\n\ncavg-cpu: %user %nice %system %iowait %steal %idle\n 5.11 0.01 2.58 2.56 0.00 89.74\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda2 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 4.89 0.00 2.94 3.14 0.00 89.04\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 1.00 0.00 285.00 0.00 4808.00 0.00 16.87 2.46 8.29 3.02 86.20\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda2 1.00 0.00 285.00 0.00 4808.00 0.00 16.87 2.46 8.29 3.02 86.20\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdb 0.00 161.50 0.00 6.50 0.00 1344.00 206.77 0.00 0.69 0.15 0.10\nsdb1 0.00 161.50 0.00 6.50 0.00 1344.00 206.77 0.00 0.69 0.15 0.10\n\n\nAfter Restart\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 2 34 2332 5819012 75632 25855368 0 0 89 42 0 0 7 5 85 3 0\n 4 39 2332 5813344 75628 25833588 0 0 5104 324 5480 27047 3 1 84 11 0\n 2 47 2332 5815212 75336 25812064 0 0 4356 1664 5627 28695 3 1 84 12 0\n 2 40 2332 5852452 75340 25817496 0 0 5632 828 5817 28832 3 1 84 11 0\n 1 45 2332 5835704 75348 25817072 0 0 4960 1004 5111 25782 2 1 88 9 0\n 2 42 2332 5840320 75356 25811632 0 0 3884 492 5405 27858 3 1 88 8 0\n 0 47 2332 5826648 75348 25805296 0 0 4432 1268 5888 29556 3 1 83 13 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 3.26 0.00 1.69 25.21 0.00 69.84\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.50 45.00 520.00 2.50 8316.00 380.00 16.64 71.70 118.28 1.92 100.10\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda2 0.50 45.00 520.00 2.50 8316.00 380.00 16.64 71.70 118.28 1.92 100.10\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdb 0.00 196.50 0.00 10.50 0.00 1656.00 157.71 0.01 0.67 0.52 0.55\nsdb1 0.00 196.50 0.00 10.50 0.00 1656.00 157.71 0.01 0.67 0.52 0.55\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 3.97 0.00 1.71 20.88 0.00 73.44\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 1.00 0.00 532.00 0.00 8568.00 0.00 16.11 73.73 148.44 1.88 100.05\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda2 1.00 0.00 532.00 0.00 8568.00 0.00 16.11 73.73 148.44 1.88 100.05\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdb 0.00 106.50 0.00 11.50 0.00 944.00 82.09 0.00 0.39 0.30 0.35\nsdb1 0.00 106.50 0.00 11.50 0.00 944.00 82.09 0.00 0.39 0.30 0.35\n\nRegards\n\nRichard\n\n.........\n\nOn 5 Sep 2011, at 21:05, Alan Hodgson wrote:\n\n> On September 
5, 2011, Richard Shaw <[email protected]> wrote:\n> > Hi Andy,\n> > \n> > It's not a new issue no, It's a legacy system that is in no way ideal but\n> > is also not in a position to be overhauled. Indexes are correct, tables\n> > are up to 25 million rows.\n> > \n> > On startup, it hits CPU more than IO, I'll provide some additional stats\n> > after I restart it tonight.\n> \n> I bet it's I/O bound until a good chunk of the active data gets cached. Run a vmstat 1 while it's that busy, I bet most of the cpu time is really in io_wait. \n\n",
"msg_date": "Mon, 5 Sep 2011 23:36:09 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On Mon, Sep 5, 2011 at 4:36 PM, Richard Shaw <[email protected]> wrote:\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n> sda 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\n> sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> sda2 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\n\nSo what is /dev/sda2 mounted as?\n",
"msg_date": "Mon, 5 Sep 2011 17:31:01 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "\n/\n\nOS and Postgres on same mount point\n\nOn 6 Sep 2011, at 00:31, Scott Marlowe wrote:\n\n> On Mon, Sep 5, 2011 at 4:36 PM, Richard Shaw <[email protected]> wrote:\n>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n>> sda 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\n>> sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>> sda2 1.00 143.00 523.50 108.00 8364.00 2008.00 16.42 2.78 4.41 1.56 98.35\n> \n> So what is /dev/sda2 mounted as?\n\n",
"msg_date": "Tue, 6 Sep 2011 15:19:34 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On Monday 05 Sep 2011 22:23:32 Scott Marlowe wrote:\n> On Mon, Sep 5, 2011 at 11:24 AM, Andres Freund <[email protected]> wrote:\n> > On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:\n> >> Autovacuum has been disabled and set to run manually via cron during a\n> >> quiet period and fsync has recently been turned off to gauge any real\n> >> world performance increase, there is battery backup on the raid card\n> >> providing some level of resilience.\n> > \n> > That doesn't help you against a failure due to fsync() off as the BBU can\n> > only protect data that actually has been written to disk. Without\n> > fsync=on no guarantee about that exists.\n> \n> Further, if you've got a bbu cache on the RAID card the gains from\n> fsync=off wll be low / nonexistent.\nThats not necessarily true. If you have a mixed load of many small writes and \nsome parallel huge writes (especially in combination with big indexes) \nfsync=off still can give you quite big performance increases. Even in the \npresenence of synchronous_commit=off.\n\nAndres\n",
"msg_date": "Tue, 6 Sep 2011 16:51:17 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On September 5, 2011 03:36:09 PM you wrote:\n> After Restart\n> \n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------ r b swpd free buff cache si so bi bo in\n> cs us sy id wa st 2 34 2332 5819012 75632 25855368 0 0 89 \n> 42 0 0 7 5 85 3 0 4 39 2332 5813344 75628 25833588 0 0 \n> 5104 324 5480 27047 3 1 84 11 0 2 47 2332 5815212 75336 25812064 \n> 0 0 4356 1664 5627 28695 3 1 84 12 0 2 40 2332 5852452 75340\n> 25817496 0 0 5632 828 5817 28832 3 1 84 11 0 1 45 2332\n> 5835704 75348 25817072 0 0 4960 1004 5111 25782 2 1 88 9 0 2\n> 42 2332 5840320 75356 25811632 0 0 3884 492 5405 27858 3 1\n> 88 8 0 0 47 2332 5826648 75348 25805296 0 0 4432 1268 5888\n> 29556 3 1 83 13 0\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 3.26 0.00 1.69 25.21 0.00 69.84\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util sda 0.50 45.00 520.00 \n> 2.50 8316.00 380.00 16.64 71.70 118.28 1.92 100.10 sda1 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 sda2 0.50 45.00 520.00 2.50 8316.00 \n> 380.00 16.64 71.70 118.28 1.92 100.10 sda3 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> sdb 0.00 196.50 0.00 10.50 0.00 1656.00 157.71 \n> 0.01 0.67 0.52 0.55 sdb1 0.00 196.50 0.00 10.50 \n> 0.00 1656.00 157.71 0.01 0.67 0.52 0.55\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 3.97 0.00 1.71 20.88 0.00 73.44\n\nYeah 20% I/O wait I imagine feels pretty slow. 8 cores? \n",
"msg_date": "Tue, 6 Sep 2011 12:07:56 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "\n24 :)\n\n4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz] \n\nOn 6 Sep 2011, at 20:07, Alan Hodgson wrote:\n\n> On September 5, 2011 03:36:09 PM you wrote:\n>> After Restart\n>> \n>> procs -----------memory---------- ---swap-- -----io---- --system--\n>> -----cpu------ r b swpd free buff cache si so bi bo in\n>> cs us sy id wa st 2 34 2332 5819012 75632 25855368 0 0 89 \n>> 42 0 0 7 5 85 3 0 4 39 2332 5813344 75628 25833588 0 0 \n>> 5104 324 5480 27047 3 1 84 11 0 2 47 2332 5815212 75336 25812064 \n>> 0 0 4356 1664 5627 28695 3 1 84 12 0 2 40 2332 5852452 75340\n>> 25817496 0 0 5632 828 5817 28832 3 1 84 11 0 1 45 2332\n>> 5835704 75348 25817072 0 0 4960 1004 5111 25782 2 1 88 9 0 2\n>> 42 2332 5840320 75356 25811632 0 0 3884 492 5405 27858 3 1\n>> 88 8 0 0 47 2332 5826648 75348 25805296 0 0 4432 1268 5888\n>> 29556 3 1 83 13 0\n>> \n>> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 3.26 0.00 1.69 25.21 0.00 69.84\n>> \n>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n>> avgqu-sz await svctm %util sda 0.50 45.00 520.00 \n>> 2.50 8316.00 380.00 16.64 71.70 118.28 1.92 100.10 sda1 \n>> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n>> 0.00 0.00 0.00 sda2 0.50 45.00 520.00 2.50 8316.00 \n>> 380.00 16.64 71.70 118.28 1.92 100.10 sda3 0.00 \n>> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>> sdb 0.00 196.50 0.00 10.50 0.00 1656.00 157.71 \n>> 0.01 0.67 0.52 0.55 sdb1 0.00 196.50 0.00 10.50 \n>> 0.00 1656.00 157.71 0.01 0.67 0.52 0.55\n>> \n>> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 3.97 0.00 1.71 20.88 0.00 73.44\n> \n> Yeah 20% I/O wait I imagine feels pretty slow. 8 cores? \n\n",
"msg_date": "Tue, 6 Sep 2011 20:11:10 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On September 6, 2011 12:11:10 PM Richard Shaw wrote:\n> 24 :)\n> \n> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz]\n> \n\nNice box.\n\nStill I/O-bound, though. SSDs would help a lot, I would think.\n",
"msg_date": "Tue, 6 Sep 2011 12:21:59 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "Thanks for the advice, It's one under consideration at the moment. What are your thoughts on increasing RAM and shared_buffers?\n\n\nOn 6 Sep 2011, at 20:21, Alan Hodgson wrote:\n\n> On September 6, 2011 12:11:10 PM Richard Shaw wrote:\n>> 24 :)\n>> \n>> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz]\n>> \n> \n> Nice box.\n> \n> Still I/O-bound, though. SSDs would help a lot, I would think.\n\n",
"msg_date": "Tue, 6 Sep 2011 20:35:35 +0100",
"msg_from": "Richard Shaw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "On September 6, 2011 12:35:35 PM Richard Shaw wrote:\n> Thanks for the advice, It's one under consideration at the moment. What\n> are your thoughts on increasing RAM and shared_buffers?\n> \n\nIf it's running OK after the startup rush, and it seems to be, I would leave \nthem alone. More RAM is always good, but I don't see it helping with this \nparticular issue.\n",
"msg_date": "Tue, 6 Sep 2011 12:47:55 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "If you are not doing so already, another approach to preventing the slam at\nstartup would be to implement some form of caching either in memcache or an\nhttp accelerator such as varnish (https://www.varnish-cache.org/). Depending\non your application and the usage patterns, you might be able to fairly\neasily insert varnish into your web stack.\n\nDamon\n\nOn Tue, Sep 6, 2011 at 12:47 PM, Alan Hodgson <[email protected]> wrote:\n\n> On September 6, 2011 12:35:35 PM Richard Shaw wrote:\n> > Thanks for the advice, It's one under consideration at the moment. What\n> > are your thoughts on increasing RAM and shared_buffers?\n> >\n>\n> If it's running OK after the startup rush, and it seems to be, I would\n> leave\n> them alone. More RAM is always good, but I don't see it helping with this\n> particular issue.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIf you are not doing so already, another approach to preventing the slam at startup would be to implement some form of caching either in memcache or an http accelerator such as varnish (https://www.varnish-cache.org/). Depending on your application and the usage patterns, you might be able to fairly easily insert varnish into your web stack.\nDamonOn Tue, Sep 6, 2011 at 12:47 PM, Alan Hodgson <[email protected]> wrote:\nOn September 6, 2011 12:35:35 PM Richard Shaw wrote:\n> Thanks for the advice, It's one under consideration at the moment. What\n> are your thoughts on increasing RAM and shared_buffers?\n>\n\nIf it's running OK after the startup rush, and it seems to be, I would leave\nthem alone. More RAM is always good, but I don't see it helping with this\nparticular issue.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 6 Sep 2011 13:37:41 -0700",
"msg_from": "Damon Snyder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
},
{
"msg_contents": "Hello.\n\nAs it turned out to be iowait, I'd recommend to try to load at least \nsome hot relations into FS cache with dd on startup. With a lot of RAM \non FreeBSD I even sometimes use this for long queries that require a lot \nof index scans.\nThis converts random IO into sequential IO that is much much faster.\nYou can try it even while your DB starting - if it works you will see \nIOwait drop and user time raise.\nWhat I do on FreeBSD (as I don't have enough RAM to load all the DB into \nRAM) is:\n1) ktrace on backend process[es]. Linux seems to have similar tool\n2) Find files that take a lot of long reads\n3) dd this files to /dev/null\n\nIn this way you can find hot files. As soon as you have them (or if you \ncan afford to load everything), you can put dd into startup scripts. Or \nI can imagine an automatic script that will do such things for some time \nafter startup.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Wed, 07 Sep 2011 09:54:25 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rather large LA"
}
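A rough shell sketch of the warm-up Vitalii describes, assuming a stock data directory layout; the path, port and database name are placeholders, not values taken from the thread:

    # warm the OS cache for one database's files after a restart
    DBOID=$(psql -p 6432 -At -d postgres -c "SELECT oid FROM pg_database WHERE datname = 'busiest_db'")
    find /var/lib/pgsql/9.0/data/base/$DBOID -type f \
        -exec dd if={} of=/dev/null bs=8M \; 2>/dev/null

This is the sequential-read effect described above: one linear pass through the files instead of the random index reads the first queries would otherwise trigger.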
] |
[
{
"msg_contents": "Hi everyone, \n\n \n\nMy question is, if I have a table with 500,000 rows, and a SELECT of one row\nis returned in 10 milliseconds, if the table has 6,000,000 of rows and\neverything is OK (statistics, vacuum etc) \n\ncan i suppose that elapsed time will be near to 10?\n\n \n\n \n\n \n\n \n\n\nHi everyone, My question is, if I have a table with 500,000 rows, and a SELECT of one row is returned in 10 milliseconds, if the table has 6,000,000 of rows and everything is OK (statistics, vacuum etc) can i suppose that elapsed time will be near to 10?",
"msg_date": "Tue, 6 Sep 2011 14:31:33 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how fast index works?"
},
{
"msg_contents": "On 9/6/11 11:31 AM, Anibal David Acosta wrote:\n>\n> Hi everyone,\n>\n> My question is, if I have a table with 500,000 rows, and a SELECT of one row is returned in 10 milliseconds, if the table has 6,000,000 of rows and everything is OK (statistics, vacuum etc)\n>\n> can i suppose that elapsed time will be near to 10?\n>\n\nTheoretically the index is a B-tree with log(N) performance, so a larger table could be slower. But in a real database, the entire subtree might fall together in one spot on the disk, so retrieving a record from a 500,000 record database could take the same time as a 6,000,000 record database.\n\nOn the other hand, if you do a lot of updates and don't have your autovacuum parameters set right, a 500,000 record index might get quite bloated and slow as it digs through several disk blocks to find one record.\n\nThere is no simple answer to your question. In a well-maintained database, 6,000,000 records are not a problem.\n\nCraig\n\n\n\n\n\n\n\n On 9/6/11 11:31 AM, Anibal David Acosta wrote:\n \n\n\n\n\nHi everyone, \n \nMy question is, if I have a table with\n 500,000 rows, and a SELECT of one row is returned in 10\n milliseconds, if the table has 6,000,000 of rows and\n everything is OK (statistics, vacuum etc) \ncan i suppose that elapsed time will be\n near to 10?\n\n\n\n Theoretically the index is a B-tree with log(N) performance, so a\n larger table could be slower. But in a real database, the entire\n subtree might fall together in one spot on the disk, so retrieving a\n record from a 500,000 record database could take the same time as a\n 6,000,000 record database.\n\n On the other hand, if you do a lot of updates and don't have your\n autovacuum parameters set right, a 500,000 record index might get\n quite bloated and slow as it digs through several disk blocks to\n find one record.\n\n There is no simple answer to your question. In a well-maintained\n database, 6,000,000 records are not a problem.\n\n Craig",
"msg_date": "Tue, 06 Sep 2011 12:17:50 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how fast index works?"
},
{
"msg_contents": "Thanks!\n\n \n\n \n\nDe: [email protected]\n[mailto:[email protected]] En nombre de Craig James\nEnviado el: martes, 06 de septiembre de 2011 03:18 p.m.\nPara: [email protected]\nAsunto: Re: [PERFORM] how fast index works?\n\n \n\nOn 9/6/11 11:31 AM, Anibal David Acosta wrote: \n\nHi everyone, \n\n \n\nMy question is, if I have a table with 500,000 rows, and a SELECT of one row\nis returned in 10 milliseconds, if the table has 6,000,000 of rows and\neverything is OK (statistics, vacuum etc) \n\ncan i suppose that elapsed time will be near to 10?\n\n\nTheoretically the index is a B-tree with log(N) performance, so a larger\ntable could be slower. But in a real database, the entire subtree might\nfall together in one spot on the disk, so retrieving a record from a 500,000\nrecord database could take the same time as a 6,000,000 record database.\n\nOn the other hand, if you do a lot of updates and don't have your autovacuum\nparameters set right, a 500,000 record index might get quite bloated and\nslow as it digs through several disk blocks to find one record.\n\nThere is no simple answer to your question. In a well-maintained database,\n6,000,000 records are not a problem.\n\nCraig\n\n\nThanks! De: [email protected] [mailto:[email protected]] En nombre de Craig JamesEnviado el: martes, 06 de septiembre de 2011 03:18 p.m.Para: [email protected]: Re: [PERFORM] how fast index works? On 9/6/11 11:31 AM, Anibal David Acosta wrote: Hi everyone, My question is, if I have a table with 500,000 rows, and a SELECT of one row is returned in 10 milliseconds, if the table has 6,000,000 of rows and everything is OK (statistics, vacuum etc) can i suppose that elapsed time will be near to 10?Theoretically the index is a B-tree with log(N) performance, so a larger table could be slower. But in a real database, the entire subtree might fall together in one spot on the disk, so retrieving a record from a 500,000 record database could take the same time as a 6,000,000 record database.On the other hand, if you do a lot of updates and don't have your autovacuum parameters set right, a 500,000 record index might get quite bloated and slow as it digs through several disk blocks to find one record.There is no simple answer to your question. In a well-maintained database, 6,000,000 records are not a problem.Craig",
"msg_date": "Tue, 6 Sep 2011 16:28:12 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how fast index works?"
},
{
"msg_contents": "On 7/09/2011 2:31 AM, Anibal David Acosta wrote:\n>\n> Hi everyone,\n>\n> My question is, if I have a table with 500,000 rows, and a SELECT of \n> one row is returned in 10 milliseconds, if the table has 6,000,000 of \n> rows and everything is OK (statistics, vacuum etc)\n>\n> can i suppose that elapsed time will be near to 10?\n>\n>\n\nIt's not that simple. In addition to the performance scaling Craig James \nmentioned, there are cache effects.\n\nYour 500,000 row index might fit entirely in RAM. This means that no \ndisk access is required to query and search it, making it extremely \nfast. If the index on the larger table does NOT fit entirely in RAM, or \ncompetes for cache space with other things so it isn't always cached in \nRAM, then it might be vastly slower.\n\nThis is hard to test, because it's not easy to empty the caches. On \nLinux you can the the VM's drop_caches feature, but that drops *all* \ncaches, including cached disk data from running programs, the PostgreSQL \nsystem catalogs, etc. That makes it a rather unrealistic test when the \nonly thing you really want to remove from cache is your index and the \ntable associated with it.\n\nThe best way to test whether data of a certain size will perform well is \nto create dummy data of that size and test with it. Anything else is \nguesswork.\n\n--\nCraig Ringer\n\n\n\n\n\n\n On 7/09/2011 2:31 AM, Anibal David Acosta wrote:\n \n\n\n\n\nHi everyone, \n \nMy question is, if I have a table with\n 500,000 rows, and a SELECT of one row is returned in 10\n milliseconds, if the table has 6,000,000 of rows and\n everything is OK (statistics, vacuum etc) \ncan i suppose that elapsed time will be\n near to 10?\n\n\n\n\n\n It's not that simple. In addition to the performance scaling Craig\n James mentioned, there are cache effects.\n\n Your 500,000 row index might fit entirely in RAM. This means that no\n disk access is required to query and search it, making it extremely\n fast. If the index on the larger table does NOT fit entirely in RAM,\n or competes for cache space with other things so it isn't always\n cached in RAM, then it might be vastly slower.\n\n This is hard to test, because it's not easy to empty the caches. On\n Linux you can the the VM's drop_caches feature, but that drops *all*\n caches, including cached disk data from running programs, the\n PostgreSQL system catalogs, etc. That makes it a rather unrealistic\n test when the only thing you really want to remove from cache is\n your index and the table associated with it.\n\n The best way to test whether data of a certain size will perform\n well is to create dummy data of that size and test with it. Anything\n else is guesswork.\n\n --\n Craig Ringer",
"msg_date": "Wed, 07 Sep 2011 09:03:38 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how fast index works?"
},
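A sketch of the "create dummy data and test it" advice, using generate_series; the table name, row counts and key value are made up for illustration:

    -- build a 6-million-row dummy table and time an indexed lookup
    CREATE TABLE probe AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 6000000) AS g;
    CREATE INDEX probe_id_idx ON probe (id);
    ANALYZE probe;

    EXPLAIN ANALYZE SELECT * FROM probe WHERE id = 123456;

Repeat with 500,000 rows and compare the two plans; the difference, or lack of one, on your own hardware answers the question more reliably than any rule of thumb.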
{
"msg_contents": "On Tue, Sep 6, 2011 at 1:31 PM, Anibal David Acosta <[email protected]> wrote:\n> Hi everyone,\n>\n>\n>\n> My question is, if I have a table with 500,000 rows, and a SELECT of one row\n> is returned in 10 milliseconds, if the table has 6,000,000 of rows and\n> everything is OK (statistics, vacuum etc)\n>\n> can i suppose that elapsed time will be near to 10?\n\nThe problem with large datasets does not come from the index, but that\nthey increase cache pressure. On today's typical servers it's all\nabout cache, and the fact that disks (at least non ssd drives) are\nseveral orders of magnitude slower than memory. Supposing you had\ninfinite memory holding your data files in cache or infinitely fast\ndisks, looking up a record from a trillion record table would still be\nfaster than reading a record from a hundred record table that had to\nfault to a spinning disk to pull up the data.\n\nmerlin\n",
"msg_date": "Thu, 8 Sep 2011 12:35:58 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how fast index works?"
},
{
"msg_contents": "Hi all,\nI am working on project to migrate PostgreSQL from V8.2 to 9.0 and optimise\nthe new DB\nhas any one done some thing like that before ?\nmy main Task is the Optimisation part so please share some thoughts\n\nRegards\n\nHany\n\nOn Wed, Sep 7, 2011 at 1:03 PM, Craig Ringer <[email protected]> wrote:\n\n> On 7/09/2011 2:31 AM, Anibal David Acosta wrote:\n>\n> Hi everyone, ****\n>\n> ** **\n>\n> My question is, if I have a table with 500,000 rows, and a SELECT of one\n> row is returned in 10 milliseconds, if the table has 6,000,000 of rows and\n> everything is OK (statistics, vacuum etc) ****\n>\n> can i suppose that elapsed time will be near to 10?****\n>\n> **\n> **\n>\n>\n> It's not that simple. In addition to the performance scaling Craig James\n> mentioned, there are cache effects.\n>\n> Your 500,000 row index might fit entirely in RAM. This means that no disk\n> access is required to query and search it, making it extremely fast. If the\n> index on the larger table does NOT fit entirely in RAM, or competes for\n> cache space with other things so it isn't always cached in RAM, then it\n> might be vastly slower.\n>\n> This is hard to test, because it's not easy to empty the caches. On Linux\n> you can the the VM's drop_caches feature, but that drops *all* caches,\n> including cached disk data from running programs, the PostgreSQL system\n> catalogs, etc. That makes it a rather unrealistic test when the only thing\n> you really want to remove from cache is your index and the table associated\n> with it.\n>\n> The best way to test whether data of a certain size will perform well is to\n> create dummy data of that size and test with it. Anything else is guesswork.\n>\n> --\n> Craig Ringer\n>\n\nHi all,\nI am working on project to migrate PostgreSQL from V8.2 to 9.0 and optimise the new DB \nhas any one done some thing like that before ?\nmy main Task is the Optimisation part so please share some thoughts \n \nRegards\n \nHany\nOn Wed, Sep 7, 2011 at 1:03 PM, Craig Ringer <[email protected]> wrote:\n\nOn 7/09/2011 2:31 AM, Anibal David Acosta wrote: \n\n\nHi everyone, \n \nMy question is, if I have a table with 500,000 rows, and a SELECT of one row is returned in 10 milliseconds, if the table has 6,000,000 of rows and everything is OK (statistics, vacuum etc) \ncan i suppose that elapsed time will be near to 10?\nIt's not that simple. In addition to the performance scaling Craig James mentioned, there are cache effects.Your 500,000 row index might fit entirely in RAM. This means that no disk access is required to query and search it, making it extremely fast. If the index on the larger table does NOT fit entirely in RAM, or competes for cache space with other things so it isn't always cached in RAM, then it might be vastly slower.\nThis is hard to test, because it's not easy to empty the caches. On Linux you can the the VM's drop_caches feature, but that drops *all* caches, including cached disk data from running programs, the PostgreSQL system catalogs, etc. That makes it a rather unrealistic test when the only thing you really want to remove from cache is your index and the table associated with it.\nThe best way to test whether data of a certain size will perform well is to create dummy data of that size and test with it. Anything else is guesswork.--Craig Ringer",
"msg_date": "Fri, 9 Sep 2011 09:04:13 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how fast index works?"
},
{
"msg_contents": "Hany ABOU-GHOURY <[email protected]> wrote:\n \n> I am working on project to migrate PostgreSQL from V8.2 to 9.0 and\n> optimise the new DB\n \nPlease don't hijack a thread to start a new topic. Start a new\nthread with a subject line which describes the new topic.\n \n-Kevin\n",
"msg_date": "Thu, 08 Sep 2011 16:30:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how fast index works?"
}
] |
[
{
"msg_contents": "Hi!\n\n \n\nI have a table not too big but with aprox. 5 millions of rows, this table\nmust have 300 to 400 select per second. But also must have 10~20\ndelete/insert/update per second.\n\n \n\nSo, I need to know if the insert/delete/update really affect the select\nperformance and how to deal with it.\n\n \n\nThe table structure is very simple:\n\n \n\naccount_id integer (PK)\n\nservice_id integer (PK)\n\nenabled char(1)\n\n \n\nThe index created on this has the same 3 columns.\n\n \n\nMost of time the table has more insert or delete than update, when update\noccur the column changed is enabled;\n\n \n\nThanks!\n\n \n\n \n\n\nHi! I have a table not too big but with aprox. 5 millions of rows, this table must have 300 to 400 select per second. But also must have 10~20 delete/insert/update per second. So, I need to know if the insert/delete/update really affect the select performance and how to deal with it. The table structure is very simple: account_id integer (PK)service_id integer (PK)enabled char(1) The index created on this has the same 3 columns. Most of time the table has more insert or delete than update, when update occur the column changed is enabled; Thanks!",
"msg_date": "Thu, 8 Sep 2011 08:51:13 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how delete/insert/update affects select performace?"
},
{
"msg_contents": "On 8 Září 2011, 14:51, Anibal David Acosta wrote:\n> Hi!\n>\n>\n>\n> I have a table not too big but with aprox. 5 millions of rows, this table\n> must have 300 to 400 select per second. But also must have 10~20\n> delete/insert/update per second.\n>\n> So, I need to know if the insert/delete/update really affect the select\n> performance and how to deal with it.\n\nYes, insert/update do affect query performance, because whenever a row is\nmodified a new copy is created. So the table might grow over time, and\nbigger tables mean more data to read.\n\nThere are two ways to prevent this:\n\n1) autovacuum - has to be configured properly (watch the table size and\nnumber of rows, and if it grows then make it a bit more aggressive)\n\n2) HOT\n\n> The table structure is very simple:\n>\n> account_id integer (PK)\n>\n> service_id integer (PK)\n>\n> enabled char(1)\n>\n> The index created on this has the same 3 columns.\n>\n> Most of time the table has more insert or delete than update, when update\n> occur the column changed is enabled;\n\nSo there's one index on all three columns? I'd remove the \"enabled\" from\nthe index, it's not going to help much I guess and it makes HOT possible\n(the modified column must not be indexed). Plus there will be one less\nindex (the other two columns are already a PK, so there's a unique index).\n\nTomas\n\n",
"msg_date": "Thu, 8 Sep 2011 16:10:19 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how delete/insert/update affects select performace?"
},
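A sketch of the index change Tomas suggests, with hypothetical table and index names since the original DDL is not shown in the thread:

    -- drop the 3-column index: the (account_id, service_id) primary key already covers the lookup,
    -- and leaving "enabled" out of every index lets updates of that flag qualify as HOT
    DROP INDEX IF EXISTS account_service_idx;

    -- optionally, make autovacuum more aggressive for just this hot table (8.4+ syntax)
    ALTER TABLE account_service
        SET (autovacuum_vacuum_scale_factor = 0.02,
             autovacuum_analyze_scale_factor = 0.02);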
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> I have a table not too big but with aprox. 5 millions of rows,\n> this table must have 300 to 400 select per second. But also must\n> have 10~20 delete/insert/update per second.\n> \n> So, I need to know if the insert/delete/update really affect the\n> select performance and how to deal with it.\n \nIn addition to the advice from Tomas (which was all good) you should\nbe aware that depending on the version of PostgreSQL (which you\ndidn't mention), your hardware (which you didn't describe), and your\nconfiguration (which you didn't show) the data modification can make\nyou vulnerable to a phenomenon where a checkpoint can cause a\nblockage of all disk I/O for a matter of minutes, causing even\nsimple SELECT statements which normally run in under a millisecond\nto run for minutes. This is more likely to occur in a system which\nhas been aggressively tuned for maximum throughput -- you may need\nto balance throughput needs against response time needs.\n \nEvery one of the last several major releases of PostgreSQL has\ngotten better at preventing this problem, so your best protection\nfrom it is to use a recent version.\n \nThere's a good chance that you won't run into this, but if you do,\nyou can generally correct it by reducing your shared_buffers setting\nor making your background writer more aggressive.\n \n-Kevin\n",
"msg_date": "Thu, 08 Sep 2011 09:50:52 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how delete/insert/update affects select\n\t performace?"
},
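If that checkpoint stall does appear, the usual first step is to make checkpoints visible and spread their writes out; a hedged postgresql.conf sketch with illustrative values:

    log_checkpoints = on                  # log how long each checkpoint takes and what triggered it
    checkpoint_segments = 32              # more WAL between checkpoints
    checkpoint_completion_target = 0.9    # spread each checkpoint's writes over most of the interval

Lowering shared_buffers or making the background writer more aggressive, as Kevin describes, are the other knobs to reach for.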
{
"msg_contents": "Postgres 9.0 on windows server 2008 r2\nHW is a dell dual processor with 16gb of ram .\n\nTthe reason I add the enabled column to index is because a select won't need\nto read the table to get this value\n\nMy select is : exists(select * from table where account_id=X and\nservice_id=Y and enabled='T')\n\nSo, do you think I must remove the enabled from index?\n\nThanks\n\n\n\n\n-----Mensaje original-----\nDe: Kevin Grittner [mailto:[email protected]] \nEnviado el: jueves, 08 de septiembre de 2011 10:51 a.m.\nPara: Anibal David Acosta; [email protected]\nAsunto: Re: [PERFORM] how delete/insert/update affects select performace?\n\n\"Anibal David Acosta\" <[email protected]> wrote:\n \n> I have a table not too big but with aprox. 5 millions of rows, this \n> table must have 300 to 400 select per second. But also must have 10~20 \n> delete/insert/update per second.\n> \n> So, I need to know if the insert/delete/update really affect the \n> select performance and how to deal with it.\n \nIn addition to the advice from Tomas (which was all good) you should be\naware that depending on the version of PostgreSQL (which you didn't\nmention), your hardware (which you didn't describe), and your configuration\n(which you didn't show) the data modification can make you vulnerable to a\nphenomenon where a checkpoint can cause a blockage of all disk I/O for a\nmatter of minutes, causing even simple SELECT statements which normally run\nin under a millisecond to run for minutes. This is more likely to occur in\na system which has been aggressively tuned for maximum throughput -- you may\nneed to balance throughput needs against response time needs.\n \nEvery one of the last several major releases of PostgreSQL has gotten better\nat preventing this problem, so your best protection from it is to use a\nrecent version.\n \nThere's a good chance that you won't run into this, but if you do, you can\ngenerally correct it by reducing your shared_buffers setting or making your\nbackground writer more aggressive.\n \n-Kevin\n\n",
"msg_date": "Thu, 8 Sep 2011 12:40:07 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how delete/insert/update affects select\t performace?"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> Tthe reason I add the enabled column to index is because a select\n> won't need to read the table to get this value\n \nThat's not true in PostgreSQL, although there is an effort to\nsupport that optimization, at least to some degree. In all current\nversions of PostgreSQL, it will always need to read the heap to\ndetermine whether the index entry is pointing at a version of the\nrow which is visible to your transaction. Adding the enabled column\nto an index will prevent faster HOT updates to that column.\n \n> My select is : exists(select * from table where account_id=X and\n> service_id=Y and enabled='T')\n \nOn the other hand, if you have very many rows where enabled is not\n'T', and you are generally searching for where enabled = 'T', you\nmight want a partial index (an index with a WHERE clause in its\ndefinition). If enabled only has two states, you will probably get\nbetter performance using a boolean column.\n \n-Kevin\n",
"msg_date": "Thu, 08 Sep 2011 12:00:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how delete/insert/update affects select\n\t performace?"
},
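A sketch of both suggestions, again with hypothetical object names:

    -- store the flag as a real boolean
    ALTER TABLE account_service
        ALTER COLUMN enabled TYPE boolean USING enabled = 'T',
        ALTER COLUMN enabled SET NOT NULL;

    -- index only the rare disabled rows, and only if queries actually look for them
    CREATE INDEX account_service_disabled_idx
        ON account_service (account_id, service_id)
        WHERE NOT enabled;

Note that the ALTER TYPE rewrites the 5-million-row table under an exclusive lock, so it needs a maintenance window.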
{
"msg_contents": ">On the other hand, if you have very many rows where enabled is not 'T', and\nyou are generally searching for where enabled = 'T', you might want a\npartial index (an index with a WHERE clause in its definition). If >enabled\nonly has two states, you will probably get better performance using a\nboolean column.\n\nMaybe 1% or 2% are enabled='F' all others are 'T'\n\nAnother question Kevin (thanks for your time)\n\nWhen an insert/update occur, the index is \"reindexed\" how index deals with\nnew or deleted rows.\n\nWhay happened with select, it wait that index \"reindex\" or rebuild or\nsomething? Or just select view another \"version\" of the table?\n\nThanks\n\n\n\n\n-----Mensaje original-----\nDe: Kevin Grittner [mailto:[email protected]] \nEnviado el: jueves, 08 de septiembre de 2011 01:01 p.m.\nPara: Anibal David Acosta; [email protected]\nCC: 'Tomas Vondra'\nAsunto: RE: [PERFORM] how delete/insert/update affects select performace?\n\n\"Anibal David Acosta\" <[email protected]> wrote:\n \n> Tthe reason I add the enabled column to index is because a select \n> won't need to read the table to get this value\n \nThat's not true in PostgreSQL, although there is an effort to support that\noptimization, at least to some degree. In all current versions of\nPostgreSQL, it will always need to read the heap to determine whether the\nindex entry is pointing at a version of the row which is visible to your\ntransaction. Adding the enabled column to an index will prevent faster HOT\nupdates to that column.\n \n> My select is : exists(select * from table where account_id=X and \n> service_id=Y and enabled='T')\n \nOn the other hand, if you have very many rows where enabled is not 'T', and\nyou are generally searching for where enabled = 'T', you might want a\npartial index (an index with a WHERE clause in its definition). If enabled\nonly has two states, you will probably get better performance using a\nboolean column.\n \n-Kevin\n\n",
"msg_date": "Thu, 8 Sep 2011 15:34:00 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how delete/insert/update affects select\t performace?"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> Maybe 1% or 2% are enabled='F' all others are 'T'\n \nThen an index on this column is almost certainly going to be\ncounter-productive. The only index on this column which *might*\nmake sense is WHERE enabled = 'F', and only if you run queries for\nthat often enough to outweigh the added maintenance cost. If it's\nalways one of those two values, I would use boolean (with NOT NULL\nif appropriate).\n \n> When an insert/update occur, the index is \"reindexed\" how index\n> deals with new or deleted rows.\n \nIgnoring details of HOT updates, where less work is done if no\nindexed column is updated and there is room for the new version of\nthe row (tuple) on the same page, an UPDATE is almost exactly like a\nDELETE and an INSERT in the same transaction. A new tuple (from an\nINSERT or UPDATE) is added to the index(es), and if you query\nthrough the index, it will see entries for both the old and new\nversions of the row; this is why it must visit both versions -- to\ncheck tuple visibility. Eventually the old tuples and their index\nentries are cleaned up through a \"vacuum\" process (autovacuum or an\nexplicit VACUUM command). Until then queries do extra work visiting\nand ignoring the old tuples. (That is why people who turn off\nautovacuum almost always regret it later.)\n \n> Whay happened with select, it wait that index \"reindex\" or rebuild\n> or something? Or just select view another \"version\" of the table?\n \nThe new information is immediately *added*, but there may be other\ntransactions which should still see the old state of the table, so\ncleanup of old tuples and their index entries must wait for those\ntransactions to complete.\n \nSee this for more information:\n \nhttp://www.postgresql.org/docs/9.0/interactive/mvcc.html\n \n-Kevin\n",
"msg_date": "Thu, 08 Sep 2011 14:52:43 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how delete/insert/update affects select\n\t performace?"
},
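One hedged way to watch the dead-tuple buildup described above is the standard
statistics view; the table name is again a stand-in.

-- Live vs. dead tuples and the last cleanup times for one table:
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'accounts';

-- Manual cleanup of old tuples and their index entries, if autovacuum falls behind:
VACUUM ANALYZE accounts;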
{
"msg_contents": "On 09/08/2011 12:40 PM, Anibal David Acosta wrote:\n> Postgres 9.0 on windows server 2008 r2\n> HW is a dell dual processor with 16gb of ram .\n> \n\nThe general guidelines for Windows servers such as \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server recommend \na fairly small setting for the shared_buffers parameters on Windows--no \nmore than 512MB. That makes your server a bit less likely to run in the \nnasty checkpoint spike issues Kevin was alluding to. I don't think \nwe've seen any reports of that on Windows. The problem is worst on Linux.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 08 Sep 2011 21:28:40 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how delete/insert/update affects select\t performace?"
},
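As a rough illustration of that advice (assumed figures, not a tested
recommendation), the relevant postgresql.conf lines on a 16GB Windows box might
read roughly as below. shared_buffers only takes effect after a server restart;
the current value can be checked from psql with SHOW shared_buffers;.

shared_buffers = 512MB          # kept small on Windows, per the wiki guidance
effective_cache_size = 12GB     # most of the remaining RAM is left to the OS file cache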
{
"msg_contents": "Even if I have a server with 16GB of ram, I must set the shared_buffer to\n512MB on windows?\n\nIn the wiki page they talk about 1/4 of ram, in my case that represent a\nshared_buffer = 4GB, that is incorrect?\n\nI have 8 GB of ram for each processor, each processor is a quad core with\nhyperthreading, that means 16 \"processors\" o something like that. Windows\nshow 16 in task manager.\n\nIf I can't configure more than 512MB of shared_buffer all other RAM is\nunnecessary?\n\nThanks for your time.\n\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]] En nombre de Greg Smith\nEnviado el: jueves, 08 de septiembre de 2011 09:29 p.m.\nPara: [email protected]\nAsunto: Re: [PERFORM] how delete/insert/update affects select performace?\n\nOn 09/08/2011 12:40 PM, Anibal David Acosta wrote:\n> Postgres 9.0 on windows server 2008 r2 HW is a dell dual processor \n> with 16gb of ram .\n> \n\nThe general guidelines for Windows servers such as\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server recommend a\nfairly small setting for the shared_buffers parameters on Windows--no more\nthan 512MB. That makes your server a bit less likely to run in the nasty\ncheckpoint spike issues Kevin was alluding to. I don't think we've seen any\nreports of that on Windows. The problem is worst on Linux.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 8 Sep 2011 22:36:47 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how delete/insert/update affects select\t performace?"
}
] |
[
{
"msg_contents": "\"Anibal David Acosta\" wrote:\n \n>> The general guidelines for Windows servers such as\n>> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>> recommend a fairly small setting for the shared_buffers parameters\n>> on Windows--no more than 512MB.\n \n> Even if I have a server with 16GB of ram, I must set the\n> shared_buffer to 512MB on windows?\n> \n> In the wiki page they talk about 1/4 of ram, in my case that\n> represent a shared_buffer = 4GB, that is incorrect?\n \nThere's an effective maximum, depending on the platform. On Linux\nthat seems to be somewhere in the 8GB to 10GB. On Windows it is much\nless.\n \n> If I can't configure more than 512MB of shared_buffer all other RAM\n> is unnecessary?\n \nThe RAM not used for other purposes is used by the OS for caching. \nPostgreSQL goes through that cache, so the extra memory will be used;\nit's a question of what balance between PostgreSQL buffers and OS\ncache gives the best performance.\n \n-Kevin\n",
"msg_date": "Fri, 09 Sep 2011 07:04:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how delete/insert/update affects select\n\t performace?"
}
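Spelled out with assumed numbers, the arithmetic described above might run: with
shared_buffers at 512MB and the OS reporting roughly 11.5GB of file cache under
normal load, effective_cache_size lands around 12GB. It can be set per session to
see how the planner reacts before changing postgresql.conf:

SET effective_cache_size = '12GB';   -- session-level trial, illustrative value
SHOW effective_cache_size;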
] |
[
{
"msg_contents": "Hello performance wizards! (Sorry for the re-post if this appears twice - I see no evidence e-mailing to pgsql-perfomrance is working yet.)\n \nMy client has migrated his 8.3 hosted DB to new machines \nrunning PG 9.0. It’s time to look at the config settings. \n\n \nImmediately below are the config settings. \n\n \nThe specifics of the DB and how it is used is below \nthat, but in general let me say that this is a full-time ETL system, with only a \nhandful of actual “users” and automated processes over 300 connections running \n“import” programs 24/7.\n \nI appreciate the help,\n \nCarlo\n \nThe host system:\n \nIntel® Xeon® Processor X5560 (8M Cache, 2.80 GHz, 6.40 \nGT/s Intel® QPI) x 2, dual quad core\n48 GB RAM\nRAID 10, 6 X 600 GB 15krpm \nSAS)\nLINUX Redhat/Centos \n2.6.18-164.el5\n \nSys admin says that battery-backup RAID controller and \nconsequent write settings should have no impact on performance. Is this \ntrue?\n \nCurrent config and my thoughts on what to do with it. If \nit isn’t mentioned here, the values are default \nvalues:\n \n# \n=========================================================== \n\nmax_connections = \n300\nshared_buffers = \n500MB # At 48GB of RAM, could we go to 2GB\n \n# - what is the impact on LINX config?\neffective_cache_size = \n2457MB # Sys admin says assume 25% of 48GB\n \n# is used by OS and other apps\nwork_mem = \n512MB # Complex reads are called many times a second \n\n \n# from each connection, so what should this be?\nmaintenance_work_mem = \n256MB # Should this be bigger - 1GB at least?\ncheckpoint_segments = \n128 # There is lots of write activity; this is high \n\n \n# but could it be higher? \n\n#checkpoint_completion_target \nnot set; \n# Recommendation appears to \nbe .9 for our 128 checkpoint segments\n \ndefault_statistics_target = \n200 # Deprecated?\n \n#autovacuum_freeze_max_age \nnot set; \n# recommendation is \n1,000,000 for non-activity. \n \n# What is the metric for \nwal_buffers setting?\nwal_buffers = \n4MB # Looks low, recommendation appears to be 16MB. \n\n \n# Is it really \"set it and forget it\"?\n \n#synchronous_commit not set; \n\n# Recommendation is to turn \nthis off and leave fsync on\n \n#fsync not set; \n\n# Recommendation is to \nleave this on\n \n#wal_level not set; \n\n# Do we only needed for \nreplication?\n \n#max_wal_senders not set; \n\n# Do we only needed for \nreplication?\n \n# The issue of \nvacuum/analyze is a tricky one.\n# Data imports are running \n24/7. One the DB is seeded, the vast majority\n# of write activity is \nupdates, and not to indexed columns. \n# Deletions are vary \nrare.\nvacuum_cost_delay = \n20ms\n \n# The background writer has \nnot been addressed at all.\n# Can our particular setup \nbenefit from changing \n# the bgwriter \nvalues?\nbgwriter_lru_maxpages = \n100 # This is the default; \n \nlisten_addresses = \n'*'\nport = \n5432\nlog_destination = \n'stderr'\nlogging_collector = \non\nlog_directory = \n'pg_log'\nlog_filename = \n'postgresql-%a.log'\nlog_truncate_on_rotation = \non\nlog_rotation_age = \n1d\nlog_rotation_size = \n0\nlog_line_prefix = \n'%t'\ntrack_counts = \non\n# \n=========================================================== \n\n \n\nThe DB is pretty large, and organized by schema. The \nmost active are:\n \n1) \nOne “Core” schema\na. \n100 tables\nb. \nTypical row counts in the low \nmillions.\nc. \nThis represents the enterprise’s core data. \n\nd. \nEqual read/write activity\n2) \nMultiple “Import” schemas\na. 
\nContain several thousand raw “flat file” \ntables\nb. \nRagged column structure, up to hundreds of \ncolumns\nc. \nErratic row counts, from dozens of rows to 1 \nmillion\nd. \nEach table sequentially read once, only \nstatus fields are written back\n3) \nOne “Audit” schema\na. \nA new log table is created every \nmonth\nb. \nTypical row count is 200 \nmillion\nc. \nLog every write to the “Core”\nd. \nAlmost entirely write operations, but the few \nread operations that are done have to be fast owing to the size of the \ntables\ne. \nLinks the “Core” data to the “Import” \ndata\n \nThere are next to no “users” on the system – each \nconnection services a constantly running import process which takes the incoming \n“import” data, analyzes the “core” data and decides how to distil the import \ninto the core.\n \nAnalytical Processes are not \nreport-oriented\nThe “Core” reads are mostly single row \nresults\nThe “Import” reads are 1,000 row \npages\nThere is next to no use of aggregate \nqueries\n \nTransactional Processes are a steady stream of \nwrites\nNot bursty or sporadic\nOverwhelmingly inserts and updates, next to no \ndeletes\nEach transaction represents 10 – 50 writes to the “core” \nschema\n",
"msg_date": "Fri, 9 Sep 2011 16:34:48 +0000",
"msg_from": "Carlo Stonebanks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
{
"msg_contents": "Carlo Stonebanks <[email protected]> wrote:\n \n> this is a full-time ETL system, with only a handful of actual\n> *users* and automated processes over 300 connections running\n> *import* programs 24/7\n \n> Intel* Xeon* Processor X5560 (8M Cache, 2.80 GHz, 6.40 \n> GT/s Intel* QPI) x 2, dual quad core 48 GB RAM\n> RAID 10, 6 X 600 GB 15krpm SAS)\n \nSo, eight cores and six spindles. You are probably going to see\n*much* better throughput if you route those 300 workers through\nabout 22 connections. Use a connection pooler which limits active\ntransactions to that and queues up requests to start a transaction.\n \n> Sys admin says that battery-backup RAID controller and \n> consequent write settings should have no impact on performance.\n \nWith only six drives, I your OS, WAL files, indexes, and heap files\nare all in the same RAID? If so, your sys admin is wrong -- you\nwant the controller configured for write-back (with automatic switch\nto write-through on low or failed battery, if possible).\n \n> max_connections = 300\n \nToo high. Both throughput and latency should improve with correct\nuse of a connection pooler.\n \n> shared_buffers = \n> 500MB # At 48GB of RAM, could we go to 2GB\n \nYou might benefit from as much as 8GB, but only testing with your\nactual load will show for sure.\n \n> effective_cache_size = \n> 2457MB # Sys admin says assume 25% of 48GB\n \nAdd together the shared_buffers setting and whatever the OS tells\nyou is used for cache under your normal load. It's usually 75% of\nRM or higher. (NOTE: This doesn't cause any allocation of RAM; it's\na hint to the cost calculations.)\n \n> work_mem = \n> 512MB # Complex reads are called many times a second\n \nMaybe, if you use the connection pooler as described above. Each\nconnection can allocate this multiple times. So with 300\nconnections you could very easily start using 150GB of RAM in\naddition to your shared buffers; causing a swap storm followed by\nOOM crashes. If you stay with 300 connections this *must* be\nreduced by at least an order of magnitude.\n \n> # from each connection, so what should this be?\n> maintenance_work_mem = \n> 256MB # Should this be bigger - 1GB at least?\n \nI'd go to 1 or 2 GB.\n \n> checkpoint_segments = \n> 128 # There is lots of write activity; this is high \n \nOK\n \n> # but could it be higher?\n \nIMO, there's unlikely to be much benefit beyond that.\n \n> #checkpoint_completion_target not set; \n> # Recommendation appears to be .9 for our 128 checkpoint segments\n \n0.9 is probably a good idea.\n \n> default_statistics_target = \n> 200 # Deprecated?\n \nDepends on your data. The default is 100. You might want to leave\nthat in general and boost it for specific columns where you find it\nis needed. Higher values improve estimates and can lead to better\nquery plans, but boost ANALYZE times and query planning time.\n \n> # What is the metric for wal_buffers setting?\n> wal_buffers = \n> 4MB # Looks low, recommendation appears to be 16MB.\n \n16MB is good.\n \n> # Is it really \"set it and forget it\"?\n \nYeah.\n \n> #synchronous_commit not set; \n> \n> # Recommendation is to turn this off and leave fsync on\n \nIf this is off, it makes lack of write-back on the controller a lot\nless painful. 
Even with write-back it can improve performance some.\nIt does mean that on a crash you can lose some committed\ntransactions (typically less than a second's worth), but you will\nstill have database integrity.\n \n> #fsync not set; \n> \n> # Recommendation is to leave this on\n \nUnless you want to rebuild your database from scratch or restore\nfrom backup on an OS crash, leave this on.\n \n> #wal_level not set; \n> \n> # Do we only needed for replication?\n \nThe lowest level just supports crash recovery. The next level\nsupports archiving, for recovery from a PITR-style backup. The\nthird level is needed to support hot standby (a replicated server on\nwhich you can run targets as it is updated).\n \n> # The issue of vacuum/analyze is a tricky one.\n> # Data imports are running 24/7. One the DB is seeded, the vast\n> # majority of write activity is updates, and not to indexed\n> # columns. Deletions are vary rare.\n> vacuum_cost_delay = \n> 20ms\n \nYou could try that. I would monitor for bloat and make things more\naggressive if needed. If you are not vacuuming aggressively enough,\nperformance will slowly degrade. If you let it go too far, recovery\ncan be a lot of work.\n \n> # The background writer has not been addressed at all.\n> # Can our particular setup benefit from changing the bgwriter\n> # values?\n \nProbably not. If you find that your interactive users have periods\nwhere queries seem to \"freeze\" for a few minutes at a time and then\nreturn to normal levels of performance, you might need to make this\nmore aggressive.\n \n-Kevin\n",
"msg_date": "Fri, 09 Sep 2011 13:16:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n\t (re-post)"
},
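Pulled together in one place, the figures suggested in this reply might end up in
postgresql.conf roughly as below. This is a sketch of the numbers under discussion,
not a benchmarked recommendation; the work_mem value in particular is an assumed
placeholder an order of magnitude below the original 512MB.

max_connections = 25                 # with an external connection pooler in front
shared_buffers = 4GB                 # "up to 8GB"; verify under the real load
work_mem = 32MB                      # per sort/hash, per connection; keep modest
maintenance_work_mem = 1GB
checkpoint_segments = 128
checkpoint_completion_target = 0.9
wal_buffers = 16MB
synchronous_commit = off             # only if losing ~1s of commits on a crash is acceptable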
{
"msg_contents": "On Fri, Sep 9, 2011 at 5:38 PM, Kevin Grittner\n<[email protected]> wrote:\n> This is getting back to that issue of using only enough processes at\n> one time to keep all the bottleneck resources fully utilized. Some\n> people tend to assuem that if they throw a few more concurrent\n> processes into the mix, it'll all get done sooner. There are a\n> great many benchmarks which show otherwise.\n\nOn the other hand, in order to benefit from synchro scans and stuff\nlike that, one has to increase concurrency beyond what is normally\nconsidered optimal.\n\nI have an application that really benefits from synchro scans, but\nfrom time to time, when planets are aligned wrong, the extra\nconcurrency does hurt.\n",
"msg_date": "Fri, 9 Sep 2011 18:04:55 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
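For reference, the synchronized-scan behaviour mentioned here is governed by a
setting that has defaulted to on since 8.3, and it can be checked or toggled per
session when the extra concurrency hurts:

SHOW synchronize_seqscans;           -- 'on' by default
SET synchronize_seqscans = off;      -- per-session override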
{
"msg_contents": "On Fri, Sep 9, 2011 at 3:16 PM, Kevin Grittner\n<[email protected]> wrote:\n> Add together the shared_buffers setting and whatever the OS tells\n> you is used for cache under your normal load. It's usually 75% of\n> RM or higher. (NOTE: This doesn't cause any allocation of RAM; it's\n> a hint to the cost calculations.)\n\nIn the manual[0] it says to take into account the number of concurrent\naccess to different indices and tables:\n\n\"\n Sets the planner's assumption about the effective size of the\ndisk cache that is available to a single query. This is factored into\nestimates of the cost of using an index; a higher value makes it more\nlikely index scans will be used, a lower value makes it more likely\nsequential scans will be used. When setting this parameter you should\nconsider both PostgreSQL's shared buffers and the portion of the\nkernel's disk cache that will be used for PostgreSQL data files. Also,\ntake into account the expected number of concurrent queries on\ndifferent tables, since they will have to share the available space.\nThis parameter has no effect on the size of shared memory allocated by\nPostgreSQL, nor does it reserve kernel disk cache; it is used only for\nestimation purposes. The default is 128 megabytes (128MB).\n\"\n\nHowever, every mail I've seen on the list, and every bibliography\nseems to ignore that. Does PG consider it automatically now, and\nadmins only have to input the amount of system memory? (in which case\nPG could autoconfigure itself by querying /proc), is the manual wrong,\nor is the advise given everywher just ignoring that bit?\n\n\n[0] http://www.postgresql.org/docs/9.0/static/runtime-config-query.html\n",
"msg_date": "Fri, 9 Sep 2011 18:05:36 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
{
"msg_contents": "Claudio Freire <[email protected]> wrote:\n> On Fri, Sep 9, 2011 at 3:16 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> Add together the shared_buffers setting and whatever the OS tells\n>> you is used for cache under your normal load. It's usually 75%\n>> of RM or higher. (NOTE: This doesn't cause any allocation of\n>> RAM; it's a hint to the cost calculations.)\n> \n> In the manual[0] it says to take into account the number of\n> concurrent access to different indices and tables:\n \nHmm. I suspect that the manual is technically correct, except that\nit probably only matters in terms of how many connections will\nconcurrently be executing long-running queries which might access\nlarge swaths of large indexes. In many environments, there are a\nlot of maintenance and small query processes, and only occasional\nqueries where this setting would matter. I've always had good\nresults (so far) on the effective assumption that only one such\nquery will run at a time. (That is probably helped by the fact that\nwe normally submit jobs which run such queries to a job queue\nmanager which runs them one at a time...)\n \nThis is getting back to that issue of using only enough processes at\none time to keep all the bottleneck resources fully utilized. Some\npeople tend to assuem that if they throw a few more concurrent\nprocesses into the mix, it'll all get done sooner. There are a\ngreat many benchmarks which show otherwise.\n \n-Kevin\n",
"msg_date": "Fri, 09 Sep 2011 16:08:52 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n\t (re-post)"
},
{
"msg_contents": "Claudio Freire <[email protected]> wrote:\n \n> On the other hand, in order to benefit from synchro scans and\n> stuff like that, one has to increase concurrency beyond what is\n> normally considered optimal.\n \nThat's a good example of why any general configuration advice should\njust be used as a starting point. There's no substitute for\nbenchmarking your own real workload on your own hardware.\n \nOn the third hand, though, you have to be very careful about\ninterpreting these results -- if you used a configuration with a\nsmall effective_cache_size so you could get a lot of benefit from\nthe synchro scans, you might have suppressed choice of an index\nwhich would have allowed them to run faster with lower concurrency.\nOr you might have had to cut your work_mem to a small enough size\n(to avoid OOM errors) to force a totally different plan. So to get\na meaningful comparison, you have to change a number of variables at\nonce.\n \nGood benchmarking is really hard.\n \n-Kevin\n",
"msg_date": "Fri, 09 Sep 2011 16:09:54 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n\t (re-post)"
},
{
"msg_contents": "Hi Kevin,\n\nFirst, thanks for taking the time. I wish I could write back with quick,\nterse questions to your detailed reply - but I'm sorry, this is still going\nto be a wordy post.\n\n>> max_connections = 300\n>Too high. Both throughput and latency should improve with correct use of\n>a connection pooler.\n\nEven for 300 stateful applications that can remain connected for up to a\nweek, continuously distilling data (imports)? The 300 is overkill, a sys\nadmin raised it from 100 when multiple large projects were loaded and the\nserver refused the additional connections. We can take large imports and\nbreak them into multiple smaller ones which the operators are doing to try\nand improve import performance. It does result in some improvement, but I\nthink they have gone over the top and the answer is to improve DB and OS\nperformance. Perhaps I don't understand how connection pooling will work\nwith stateful apps that are continuously reading and writing (the apps are\nDB I/O bound).\n \n> you want the controller configured for write-back (with automatic switch\n> to write-through on low or failed battery, if possible).\n\nFor performance or safety reasons? Since the sys admin thinks there's no\nperformance benefit from this, I would like to be clear on why we should do\nthis.\n\n>> Can our particular setup benefit from changing the bgwriter values?\n> Probably not. If you find that your interactive users have periods\n> where queries seem to \"freeze\" for a few minutes at a time and then\n> return to normal levels of performance, you might need to make this\n> more aggressive.\n\nWe actually experience this. Once again, remember the overwhelming use of\nthe system is long-running import threads with continuous connections. Every\nnow and then the imports behave as if they are suddenly taking a deep\nbreath, slowing down. Sometimes, so much we cancel the import and restart\n(the imports pick up where they left off).\n\nWhat would the bg_writer settings be in this case?\n\nThanks again for your time,\n\nCarlo\n\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: September 9, 2011 2:16 PM\nTo: [email protected]; Carlo Stonebanks\nSubject: Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config\n(re-post)\n\nCarlo Stonebanks <[email protected]> wrote:\n \n> this is a full-time ETL system, with only a handful of actual\n> *users* and automated processes over 300 connections running\n> *import* programs 24/7\n \n> Intel* Xeon* Processor X5560 (8M Cache, 2.80 GHz, 6.40 \n> GT/s Intel* QPI) x 2, dual quad core 48 GB RAM\n> RAID 10, 6 X 600 GB 15krpm SAS)\n \nSo, eight cores and six spindles. You are probably going to see\n*much* better throughput if you route those 300 workers through\nabout 22 connections. Use a connection pooler which limits active\ntransactions to that and queues up requests to start a transaction.\n \n> Sys admin says that battery-backup RAID controller and \n> consequent write settings should have no impact on performance.\n \nWith only six drives, I your OS, WAL files, indexes, and heap files\nare all in the same RAID? If so, your sys admin is wrong -- you\nwant the controller configured for write-back (with automatic switch\nto write-through on low or failed battery, if possible).\n \n> max_connections = 300\n \nToo high. 
Both throughput and latency should improve with correct\nuse of a connection pooler.\n \n> shared_buffers = \n> 500MB # At 48GB of RAM, could we go to 2GB\n \nYou might benefit from as much as 8GB, but only testing with your\nactual load will show for sure.\n \n> effective_cache_size = \n> 2457MB # Sys admin says assume 25% of 48GB\n \nAdd together the shared_buffers setting and whatever the OS tells\nyou is used for cache under your normal load. It's usually 75% of\nRM or higher. (NOTE: This doesn't cause any allocation of RAM; it's\na hint to the cost calculations.)\n \n> work_mem = \n> 512MB # Complex reads are called many times a second\n \nMaybe, if you use the connection pooler as described above. Each\nconnection can allocate this multiple times. So with 300\nconnections you could very easily start using 150GB of RAM in\naddition to your shared buffers; causing a swap storm followed by\nOOM crashes. If you stay with 300 connections this *must* be\nreduced by at least an order of magnitude.\n \n> # from each connection, so what should this be?\n> maintenance_work_mem = \n> 256MB # Should this be bigger - 1GB at least?\n \nI'd go to 1 or 2 GB.\n \n> checkpoint_segments = \n> 128 # There is lots of write activity; this is high \n \nOK\n \n> # but could it be higher?\n \nIMO, there's unlikely to be much benefit beyond that.\n \n> #checkpoint_completion_target not set; \n> # Recommendation appears to be .9 for our 128 checkpoint segments\n \n0.9 is probably a good idea.\n \n> default_statistics_target = \n> 200 # Deprecated?\n \nDepends on your data. The default is 100. You might want to leave\nthat in general and boost it for specific columns where you find it\nis needed. Higher values improve estimates and can lead to better\nquery plans, but boost ANALYZE times and query planning time.\n \n> # What is the metric for wal_buffers setting?\n> wal_buffers = \n> 4MB # Looks low, recommendation appears to be 16MB.\n \n16MB is good.\n \n> # Is it really \"set it and forget it\"?\n \nYeah.\n \n> #synchronous_commit not set; \n> \n> # Recommendation is to turn this off and leave fsync on\n \nIf this is off, it makes lack of write-back on the controller a lot\nless painful. Even with write-back it can improve performance some.\nIt does mean that on a crash you can lose some committed\ntransactions (typically less than a second's worth), but you will\nstill have database integrity.\n \n> #fsync not set; \n> \n> # Recommendation is to leave this on\n \nUnless you want to rebuild your database from scratch or restore\nfrom backup on an OS crash, leave this on.\n \n> #wal_level not set; \n> \n> # Do we only needed for replication?\n \nThe lowest level just supports crash recovery. The next level\nsupports archiving, for recovery from a PITR-style backup. The\nthird level is needed to support hot standby (a replicated server on\nwhich you can run targets as it is updated).\n \n> # The issue of vacuum/analyze is a tricky one.\n> # Data imports are running 24/7. One the DB is seeded, the vast\n> # majority of write activity is updates, and not to indexed\n> # columns. Deletions are vary rare.\n> vacuum_cost_delay = \n> 20ms\n \nYou could try that. I would monitor for bloat and make things more\naggressive if needed. If you are not vacuuming aggressively enough,\nperformance will slowly degrade. 
If you let it go too far, recovery\ncan be a lot of work.\n \n> # The background writer has not been addressed at all.\n> # Can our particular setup benefit from changing the bgwriter\n> # values?\n \nProbably not. If you find that your interactive users have periods\nwhere queries seem to \"freeze\" for a few minutes at a time and then\nreturn to normal levels of performance, you might need to make this\nmore aggressive.\n \n-Kevin\n\n",
"msg_date": "Sat, 10 Sep 2011 10:54:37 -0400",
"msg_from": "Carlo Stonebanks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
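Before touching the background writer, one hedged way to check whether the
periodic "deep breath" slowdowns line up with checkpoints is the bgwriter
statistics view together with checkpoint logging:

-- Requested vs. timed checkpoints and who is writing buffers (since last stats reset):
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
  FROM pg_stat_bgwriter;

-- And in postgresql.conf (a reload is enough):
-- log_checkpoints = on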
{
"msg_contents": "Hi Kevin,\n\n\n\n(sorry for late reply, PG forums seem to have problems with my e-mail client, now trying web mail)\n \nFirst, thanks for taking the time. I wish I could write back with quick, terse questions to your detailed reply - but I'm sorry, this is still going to be a wordy post.\n\n\n\n>> max_connections = 300\n\n>Too high. Both throughput and latency should improve with correct use \n\n>of a connection pooler.\n\n\n\nEven for 300 stateful applications that can remain connected for up to a week, continuously distilling data (imports)? The 300 is overkill, a sys admin raised it from 100 when multiple large projects were loaded and the server refused the additional connections. We can take large imports and break them into multiple smaller ones which the operators are doing to try and improve import performance. It does result in some improvement, but I think they have gone over the top and the answer is to improve DB and OS performance. Perhaps I don't understand how connection pooling will work with stateful apps that are continuously reading and writing (the apps are DB I/O bound).\n\n \n> you want the controller configured for write-back (with automatic \n\n> switch to write-through on low or failed battery, if possible).\n\n\n\nFor performance or safety reasons? Since the sys admin thinks there's no performance benefit from this, I would like to be clear on why we should do this.\n\n\n\n>> Can our particular setup benefit from changing the bgwriter values?\n\n> Probably not. If you find that your interactive users have periods \n\n> where queries seem to \"freeze\" for a few minutes at a time and then \n\n> return to normal levels of performance, you might need to make this \n\n> more aggressive.\n\n\n\nWe actually experience this. Once again, remember the overwhelming use of the system is long-running import threads with continuous connections. Every now and then the imports behave as if they are suddenly taking a deep breath, slowing down. Sometimes, so much we cancel the import and restart (the imports pick up where they left off).\n\n\n\nWhat would the bg_writer settings be in this case?\n\n\n\nThanks again for your time,\n\n\n\nCarlo\n\n \n> Date: Fri, 9 Sep 2011 13:16:28 -0500\n> From: [email protected]\n> To: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config\t (re-post)\n> \n> Carlo Stonebanks <[email protected]> wrote:\n> \n> > this is a full-time ETL system, with only a handful of actual\n> > *users* and automated processes over 300 connections running\n> > *import* programs 24/7\n> \n> > Intel* Xeon* Processor X5560 (8M Cache, 2.80 GHz, 6.40 \n> > GT/s Intel* QPI) x 2, dual quad core 48 GB RAM\n> > RAID 10, 6 X 600 GB 15krpm SAS)\n> \n> So, eight cores and six spindles. You are probably going to see\n> *much* better throughput if you route those 300 workers through\n> about 22 connections. Use a connection pooler which limits active\n> transactions to that and queues up requests to start a transaction.\n> \n> > Sys admin says that battery-backup RAID controller and \n> > consequent write settings should have no impact on performance.\n> \n> With only six drives, I your OS, WAL files, indexes, and heap files\n> are all in the same RAID? If so, your sys admin is wrong -- you\n> want the controller configured for write-back (with automatic switch\n> to write-through on low or failed battery, if possible).\n> \n> > max_connections = 300\n> \n> Too high. 
Both throughput and latency should improve with correct\n> use of a connection pooler.\n> \n> > shared_buffers = \n> > 500MB # At 48GB of RAM, could we go to 2GB\n> \n> You might benefit from as much as 8GB, but only testing with your\n> actual load will show for sure.\n> \n> > effective_cache_size = \n> > 2457MB # Sys admin says assume 25% of 48GB\n> \n> Add together the shared_buffers setting and whatever the OS tells\n> you is used for cache under your normal load. It's usually 75% of\n> RM or higher. (NOTE: This doesn't cause any allocation of RAM; it's\n> a hint to the cost calculations.)\n> \n> > work_mem = \n> > 512MB # Complex reads are called many times a second\n> \n> Maybe, if you use the connection pooler as described above. Each\n> connection can allocate this multiple times. So with 300\n> connections you could very easily start using 150GB of RAM in\n> addition to your shared buffers; causing a swap storm followed by\n> OOM crashes. If you stay with 300 connections this *must* be\n> reduced by at least an order of magnitude.\n> \n> > # from each connection, so what should this be?\n> > maintenance_work_mem = \n> > 256MB # Should this be bigger - 1GB at least?\n> \n> I'd go to 1 or 2 GB.\n> \n> > checkpoint_segments = \n> > 128 # There is lots of write activity; this is high \n> \n> OK\n> \n> > # but could it be higher?\n> \n> IMO, there's unlikely to be much benefit beyond that.\n> \n> > #checkpoint_completion_target not set; \n> > # Recommendation appears to be .9 for our 128 checkpoint segments\n> \n> 0.9 is probably a good idea.\n> \n> > default_statistics_target = \n> > 200 # Deprecated?\n> \n> Depends on your data. The default is 100. You might want to leave\n> that in general and boost it for specific columns where you find it\n> is needed. Higher values improve estimates and can lead to better\n> query plans, but boost ANALYZE times and query planning time.\n> \n> > # What is the metric for wal_buffers setting?\n> > wal_buffers = \n> > 4MB # Looks low, recommendation appears to be 16MB.\n> \n> 16MB is good.\n> \n> > # Is it really \"set it and forget it\"?\n> \n> Yeah.\n> \n> > #synchronous_commit not set; \n> > \n> > # Recommendation is to turn this off and leave fsync on\n> \n> If this is off, it makes lack of write-back on the controller a lot\n> less painful. Even with write-back it can improve performance some.\n> It does mean that on a crash you can lose some committed\n> transactions (typically less than a second's worth), but you will\n> still have database integrity.\n> \n> > #fsync not set; \n> > \n> > # Recommendation is to leave this on\n> \n> Unless you want to rebuild your database from scratch or restore\n> from backup on an OS crash, leave this on.\n> \n> > #wal_level not set; \n> > \n> > # Do we only needed for replication?\n> \n> The lowest level just supports crash recovery. The next level\n> supports archiving, for recovery from a PITR-style backup. The\n> third level is needed to support hot standby (a replicated server on\n> which you can run targets as it is updated).\n> \n> > # The issue of vacuum/analyze is a tricky one.\n> > # Data imports are running 24/7. One the DB is seeded, the vast\n> > # majority of write activity is updates, and not to indexed\n> > # columns. Deletions are vary rare.\n> > vacuum_cost_delay = \n> > 20ms\n> \n> You could try that. I would monitor for bloat and make things more\n> aggressive if needed. If you are not vacuuming aggressively enough,\n> performance will slowly degrade. 
If you let it go too far, recovery\n> can be a lot of work.\n> \n> > # The background writer has not been addressed at all.\n> > # Can our particular setup benefit from changing the bgwriter\n> > # values?\n> \n> Probably not. If you find that your interactive users have periods\n> where queries seem to \"freeze\" for a few minutes at a time and then\n> return to normal levels of performance, you might need to make this\n> more aggressive.\n> \n> -Kevin\n",
"msg_date": "Tue, 13 Sep 2011 18:56:53 +0000",
"msg_from": "Carlo Stonebanks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n (re-post)"
},
{
"msg_contents": "On 09/14/2011 02:56 AM, Carlo Stonebanks wrote:\n\n> Even for 300 stateful applications that can remain connected for up to a\n> week, continuously distilling data (imports)?\n\nIf they're all doing active work all that time you can still benefit \nfrom a pooler.\n\nSay your server can service 50 connections at optimum speed, and any \nmore result in reduced overall throughput. You have 300 apps with \nstatements they want to run. Your pooler will basically queue them, so \nat any one time 50 are doing work and 250 are waiting for database \naccess. This should _improve_ database throughput by reducing contention \nif 50 worker connections is your sweet spot. However, it will also \nincrease latency for service for those workers because they may have to \nwait a while before their transaction runs, even though their \ntransaction will complete much faster.\n\nYou'd probably want to pool at the transaction level, so once a client \ngets a connection it keeps it for the lifetime of that transaction and \nthe connection is handed back to the pool when the transaction commits \nor rolls back.\n\n>> you want the controller configured for write-back (with automatic\n>> switch to write-through on low or failed battery, if possible).\n>\n> For performance or safety reasons? Since the sys admin thinks there's no\n> performance benefit from this, I would like to be clear on why we should\n> do this.\n\nfsync!\n\nIf your workload is read-only, it won't help you much. If your workload \nis write-heavy or fairly balanced it'll make a HUGE difference, because \nfsync() on commit won't have to wait for disk I/O, only I/O to the RAID \ncard's cache controller.\n\nYou can also play with commit_delay and synchronous_commit to trade \nguarantees of data persistence off against performance. Don't mind \nlosing up to 5 mins of commits if you lose power? These options are for you.\n\nWhatever you do, do NOT set fsync=off. It should be called \"Eat my data \nif anything goes even slightly wrong=on\"; it does have legitimate uses, \nbut they're not yours.\n\n>> > Can our particular setup benefit from changing the bgwriter values?\n>> Probably not. If you find that your interactive users have periods\n>> where queries seem to \"freeze\" for a few minutes at a time and then\n>> return to normal levels of performance, you might need to make this\n>> more aggressive.\n>\n> We actually experience this. Once again, remember the overwhelming use\n> of the system is long-running import threads with continuous\n> connections. Every now and then the imports behave as if they are\n> suddenly taking a deep breath, slowing down. Sometimes, so much we\n> cancel the import and restart (the imports pick up where they left off).\n\nThis could definitely be checkpointing issues. Enable checkpoint logging.\n\n> What would the bg_writer settings be in this case?\n\nYou need to tune it for your workload I'm afraid. See the manual and \nmailing list discussions.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 14 Sep 2011 09:52:07 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
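As an illustration of the durability trade-off described here, synchronous_commit
can also be relaxed for just the import transactions rather than globally; the
table name below is hypothetical.

BEGIN;
SET LOCAL synchronous_commit = off;   -- this transaction only; fsync stays on
-- ... bulk import statements ...
INSERT INTO import_batch_status (batch_id, state) VALUES (1234, 'loaded');
COMMIT;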
{
"msg_contents": "Thanks guys, So, would you say that transaction pooling has a load-balancing effect because of its granularity compared to session pooling? I'm concerned about the side-effects of transaction pooling, like the sessiion-level features we would always have to look out for. Wouldn't this require a code review? Just reading UDF Session State=No on this page got my attention: http://wiki.postgresql.org/wiki/PgBouncer If we go with transaction pooling, will we get any sort of warnings or exceptions when apps and stored pgUDF's are violating transaction pooling features, or will things just quietly go wrong, with one session getting a side-effect from another session's state? Carlo> Date: Wed, 14 Sep 2011 09:52:07 +0800\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)\n> \n> On 09/14/2011 02:56 AM, Carlo Stonebanks wrote:\n> \n> > Even for 300 stateful applications that can remain connected for up to a\n> > week, continuously distilling data (imports)?\n> \n> If they're all doing active work all that time you can still benefit \n> from a pooler.\n> \n> Say your server can service 50 connections at optimum speed, and any \n> more result in reduced overall throughput. You have 300 apps with \n> statements they want to run. Your pooler will basically queue them, so \n> at any one time 50 are doing work and 250 are waiting for database \n> access. This should _improve_ database throughput by reducing contention \n> if 50 worker connections is your sweet spot. However, it will also \n> increase latency for service for those workers because they may have to \n> wait a while before their transaction runs, even though their \n> transaction will complete much faster.\n> \n> You'd probably want to pool at the transaction level, so once a client \n> gets a connection it keeps it for the lifetime of that transaction and \n> the connection is handed back to the pool when the transaction commits \n> or rolls back.\n> \n> >> you want the controller configured for write-back (with automatic\n> >> switch to write-through on low or failed battery, if possible).\n> >\n> > For performance or safety reasons? Since the sys admin thinks there's no\n> > performance benefit from this, I would like to be clear on why we should\n> > do this.\n> \n> fsync!\n> \n> If your workload is read-only, it won't help you much. If your workload \n> is write-heavy or fairly balanced it'll make a HUGE difference, because \n> fsync() on commit won't have to wait for disk I/O, only I/O to the RAID \n> card's cache controller.\n> \n> You can also play with commit_delay and synchronous_commit to trade \n> guarantees of data persistence off against performance. Don't mind \n> losing up to 5 mins of commits if you lose power? These options are for you.\n> \n> Whatever you do, do NOT set fsync=off. It should be called \"Eat my data \n> if anything goes even slightly wrong=on\"; it does have legitimate uses, \n> but they're not yours.\n> \n> >> > Can our particular setup benefit from changing the bgwriter values?\n> >> Probably not. If you find that your interactive users have periods\n> >> where queries seem to \"freeze\" for a few minutes at a time and then\n> >> return to normal levels of performance, you might need to make this\n> >> more aggressive.\n> >\n> > We actually experience this. 
Once again, remember the overwhelming use\n> > of the system is long-running import threads with continuous\n> > connections. Every now and then the imports behave as if they are\n> > suddenly taking a deep breath, slowing down. Sometimes, so much we\n> > cancel the import and restart (the imports pick up where they left off).\n> \n> This could definitely be checkpointing issues. Enable checkpoint logging.\n> \n> > What would the bg_writer settings be in this case?\n> \n> You need to tune it for your workload I'm afraid. See the manual and \n> mailing list discussions.\n> \n> --\n> Craig Ringer\n",
"msg_date": "Thu, 15 Sep 2011 04:21:49 +0000",
"msg_from": "Carlo Stonebanks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n (re-post)"
}
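To make the "quietly go wrong" risk above concrete: PgBouncer's transaction mode generally raises no warning when a client leaves state on a server backend; the state simply stays on whichever connection ran that transaction, and a later transaction from the same client may be served by a different backend. A minimal SQL sketch of the statements a code review would hunt for (table, channel and lock-key names are made up):

    SET search_path TO reporting;                -- session GUC; use SET LOCAL inside the transaction instead
    PREPARE get_acct (int) AS
        SELECT * FROM accounts WHERE id = $1;    -- prepared statements live on a single backend
    CREATE TEMP TABLE import_scratch (id int);   -- temp tables are per-backend
    LISTEN import_done;                          -- LISTEN/NOTIFY registrations are per-backend
    SELECT pg_advisory_lock(42);                 -- session-level advisory locks outlive the transaction

Every statement above succeeds without any error, which is exactly why the failures only show up later, downstream of the pooler.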
] |
[
{
"msg_contents": "Hello,\n\nDoes anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\ncertain cases to improve networking IO performance?\n\nI mean \"simple\" query is which doesn't require any CPU processing, for ex\nSELECT column_a FROM table_b WHERE date = \"2001-10-05\"\n\nI need this to devise the best logic for the system with PostgreSQL as\na layer. Certainly I could study PostgreSQL sources or test it with a\nsimple application but I hope PostgreSQL experts are aware of this\nfeature.\n\nThank you.\n\n-- \nKind regards,\nAntonio Rodriges\n",
"msg_date": "Fri, 9 Sep 2011 20:55:42 +0300",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "On Fri, Sep 9, 2011 at 11:55 AM, Antonio Rodriges <[email protected]> wrote:\n> Hello,\n>\n> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\n> certain cases to improve networking IO performance?\n>\n> I mean \"simple\" query is which doesn't require any CPU processing, for ex\n> SELECT column_a FROM table_b WHERE date = \"2001-10-05\"\n>\n> I need this to devise the best logic for the system with PostgreSQL as\n> a layer. Certainly I could study PostgreSQL sources or test it with a\n> simple application but I hope PostgreSQL experts are aware of this\n> feature.\n\nThat's all up to your hardware and OS, not postgresql\n",
"msg_date": "Fri, 9 Sep 2011 13:37:36 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "Scott, regardless of operating system support for DMA, an application\nmay not benefit from it if it doesn't use appropriate system calls.\n\nPostgreSQL is implemented mostly in C, so I do not know whether it\nneeds to use special procedure calls, however this is true, for\nexample, for Java\nhttp://www.ibm.com/developerworks/library/j-zerocopy/\n\n2011/9/9 Scott Marlowe <[email protected]>:\n> On Fri, Sep 9, 2011 at 11:55 AM, Antonio Rodriges <[email protected]> wrote:\n>> Hello,\n>>\n>> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\n>> certain cases to improve networking IO performance?\n>>\n>> I mean \"simple\" query is which doesn't require any CPU processing, for ex\n>> SELECT column_a FROM table_b WHERE date = \"2001-10-05\"\n>>\n>> I need this to devise the best logic for the system with PostgreSQL as\n>> a layer. Certainly I could study PostgreSQL sources or test it with a\n>> simple application but I hope PostgreSQL experts are aware of this\n>> feature.\n>\n> That's all up to your hardware and OS, not postgresql\n>\n\n\n\n-- \nKind regards,\nAntonio Rodriges\n",
"msg_date": "Fri, 9 Sep 2011 22:58:19 +0300",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "Look at developer faq.\n\n2011/9/9, Antonio Rodriges <[email protected]>:\n> Hello,\n>\n> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\n> certain cases to improve networking IO performance?\n>\n> I mean \"simple\" query is which doesn't require any CPU processing, for ex\n> SELECT column_a FROM table_b WHERE date = \"2001-10-05\"\n>\n> I need this to devise the best logic for the system with PostgreSQL as\n> a layer. Certainly I could study PostgreSQL sources or test it with a\n> simple application but I hope PostgreSQL experts are aware of this\n> feature.\n>\n> Thank you.\n>\n> --\n> Kind regards,\n> Antonio Rodriges\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Fri, 9 Sep 2011 22:03:12 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "Antonio Rodriges <[email protected]> wrote:\n \n> PostgreSQL is implemented mostly in C, so I do not know whether it\n> needs to use special procedure calls, however this is true, for\n> example, for Java\n> http://www.ibm.com/developerworks/library/j-zerocopy/\n \nAfter scanning that I'm inclined to think that the only place that\nzerocopy techniques might make sense is in the context of BLOBs. \nThat's not currently happening, and not on anyone's radar as far as\nI know.\n \n-Kevin\n",
"msg_date": "Fri, 09 Sep 2011 15:29:48 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "On 10/09/2011 1:55 AM, Antonio Rodriges wrote:\n> Hello,\n>\n> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\n> certain cases to improve networking IO performance?\n\n From what you described in your message it sounds like what you really \nwant is not DMA, but use of something like the sendfile() system call \n(http://www.freebsd.org/cgi/man.cgi?query=sendfile&sektion=2 \n<http://www.freebsd.org/cgi/man.cgi?query=sendfile&sektion=2>), where \nthe kernel is told to send data over the network with no further \ninteraction with the application.\n\nPostgreSQL does not, and can not, do this for regular query results. The \nsample query you posted most certainly DOES require CPU processing \nduring its execution: Any index being used must be traversed, which \nrequires logic. Date comparisons must be performed, possibly including \ntimezone conversions, and non-matching rows must be discarded. If a \nsequential scan is being done, a bitmap showing holes in the file may be \nconsulted to decide where to scan and where to skip, and checks of row \nversions to determine visibility must be done. Once the data has been \nselected, it must be formatted into proper PostgreSQL v3 network \nprotocol messages, which involves function calls to data type output \nfunctions among many other things. Only then does the data get written \nto a network socket.\n\nNeedless to say, it's not like Pg is just picking a file to open and \ndoing a loop where it reads from the file and writes to a socket.\n\nThat said, PostgreSQL benefits from the DMA the operating system does \nwhen handling system calls. For example, if your network interface \nsupports DMA buffer access then PostgreSQL will benefit from that. \nSimilarly, Pg benefits from the kernel-to-hardware DMA for disk I/O etc. \nBeyond that I doubt there's much. PostgreSQL's use of shared_buffers for \nread data means data will get copied on read, and writes go through the \nOS's buffer cache, so there's unlikely to be direct DMA between \nPostgreSQL buffers and the disk hardware for example.\n\n\nTheoretically PostgreSQL could use something like sendfile() for sending \nlarge object (blob) data and bytea data, but to do so it'd have to \nchange how that data is stored. Currently blob data is stored in (often \ncompressed) 8kb chunks in the pg_largeobject table. This data has to be \nassembled and possibly decompressed to be transmitted. Similar things \napply for bytea fields of tables. In addition, the data is usually sent \nover the text protocol, which means it has to be encoded to hex or (for \nolder versions) octal escapes. That encoding is incompatible with the \nuse of an API like sendfile() .\n\nSo, in practice, PostgreSQL can _not_ use the kinds of direct \nkernel-level disk-to-network sending you seem to be referring to. That \nsort of thing is mostly designed for file servers, web servers, etc \nwhere at some point in the process they end up dumping a disk file down \na network socket without transforming the data. Even they don't benefit \nfrom it as much these days because of the wider use of encryption and \ncompression.\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 10 Sep 2011 08:00:18 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
},
{
"msg_contents": "Thank you a lot, Craig, that's really insightful and exhaustively\ncomplete explanation I've expected!\n\n2011/9/10 Craig Ringer <[email protected]>:\n> On 10/09/2011 1:55 AM, Antonio Rodriges wrote:\n>>\n>> Hello,\n>>\n>> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in\n>> certain cases to improve networking IO performance?\n>\n> From what you described in your message it sounds like what you really want\n> is not DMA, but use of something like the sendfile() system call\n> (http://www.freebsd.org/cgi/man.cgi?query=sendfile&sektion=2\n> <http://www.freebsd.org/cgi/man.cgi?query=sendfile&sektion=2>), where the\n> kernel is told to send data over the network with no further interaction\n> with the application.\n>\n> PostgreSQL does not, and can not, do this for regular query results. The\n> sample query you posted most certainly DOES require CPU processing during\n> its execution: Any index being used must be traversed, which requires logic.\n> Date comparisons must be performed, possibly including timezone conversions,\n> and non-matching rows must be discarded. If a sequential scan is being done,\n> a bitmap showing holes in the file may be consulted to decide where to scan\n> and where to skip, and checks of row versions to determine visibility must\n> be done. Once the data has been selected, it must be formatted into proper\n> PostgreSQL v3 network protocol messages, which involves function calls to\n> data type output functions among many other things. Only then does the data\n> get written to a network socket.\n>\n> Needless to say, it's not like Pg is just picking a file to open and doing a\n> loop where it reads from the file and writes to a socket.\n>\n> That said, PostgreSQL benefits from the DMA the operating system does when\n> handling system calls. For example, if your network interface supports DMA\n> buffer access then PostgreSQL will benefit from that. Similarly, Pg benefits\n> from the kernel-to-hardware DMA for disk I/O etc. Beyond that I doubt\n> there's much. PostgreSQL's use of shared_buffers for read data means data\n> will get copied on read, and writes go through the OS's buffer cache, so\n> there's unlikely to be direct DMA between PostgreSQL buffers and the disk\n> hardware for example.\n>\n>\n> Theoretically PostgreSQL could use something like sendfile() for sending\n> large object (blob) data and bytea data, but to do so it'd have to change\n> how that data is stored. Currently blob data is stored in (often compressed)\n> 8kb chunks in the pg_largeobject table. This data has to be assembled and\n> possibly decompressed to be transmitted. Similar things apply for bytea\n> fields of tables. In addition, the data is usually sent over the text\n> protocol, which means it has to be encoded to hex or (for older versions)\n> octal escapes. That encoding is incompatible with the use of an API like\n> sendfile() .\n>\n> So, in practice, PostgreSQL can _not_ use the kinds of direct kernel-level\n> disk-to-network sending you seem to be referring to. That sort of thing is\n> mostly designed for file servers, web servers, etc where at some point in\n> the process they end up dumping a disk file down a network socket without\n> transforming the data. Even they don't benefit from it as much these days\n> because of the wider use of encryption and compression.\n>\n> --\n> Craig Ringer\n>\n\n\n\n-- \nKind regards,\nAntonio Rodriges\n",
"msg_date": "Sat, 10 Sep 2011 15:06:54 +0300",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL insights: does it use DMA?"
}
] |
[
{
"msg_contents": "Sometimes I read that postgres performance is degraded over the time and\nsomething people talk about backup and restore database solve the problem.\n\n \n\nIt is really true?\n\n \n\nI have postgres 9.0 on a windows machine with The autovacuum is ON\n\n \n\nI have some configuration tables \n\nAnd a couple of transactional table.\n\nTransactional table has about 4 millions of rows inserted per day.\n\nIn the midnight all rows are moved to a historical table and in the\nhistorical table rows are about 2 months, any transaction older than 2\nmonths are deleted daily.\n\n \n\n \n\nSo, my question is, if Should I expect same performance over time (example:\nafter 1 year) or should I expect a degradation and must implements come\ntechnics like backup restore every certain time?\n\n \n\nThanks!!\n\n \n\n \n\n \n\n \n\n \n\n \n\n\nSometimes I read that postgres performance is degraded over the time and something people talk about backup and restore database solve the problem. It is really true? I have postgres 9.0 on a windows machine with The autovacuum is ON I have some configuration tables And a couple of transactional table.Transactional table has about 4 millions of rows inserted per day.In the midnight all rows are moved to a historical table and in the historical table rows are about 2 months, any transaction older than 2 months are deleted daily. So, my question is, if Should I expect same performance over time (example: after 1 year) or should I expect a degradation and must implements come technics like backup restore every certain time? Thanks!!",
"msg_date": "Sat, 10 Sep 2011 12:55:46 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "should i expected performance degradation over time"
},
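For the nightly move described above, a minimal sketch of a batch that keeps the live table small (table names are hypothetical). TRUNCATE gives the space back immediately, whereas deleting four million rows a day leaves dead tuples for vacuum to chew through:

    BEGIN;
    INSERT INTO transactions_hist
        SELECT * FROM transactions_live;
    TRUNCATE transactions_live;   -- only safe if nothing writes to the live table during this window
    COMMIT;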
{
"msg_contents": "On 09/10/2011 11:55 AM, Anibal David Acosta wrote:\n> Sometimes I read that postgres performance is degraded over the time and something people talk about backup and restore database solve the problem.\n>\n> It is really true?\n>\n> I have postgres 9.0 on a windows machine with The autovacuum is ON\n>\n> I have some configuration tables\n>\n> And a couple of transactional table.\n>\n> Transactional table has about 4 millions of rows inserted per day.\n>\n> In the midnight all rows are moved to a historical table and in the historical table rows are about 2 months, any transaction older than 2 months are deleted daily.\n>\n> So, my question is, if Should I expect same performance over time (example: after 1 year) or should I expect a degradation and must implements come technics like backup restore every certain time?\n>\n> Thanks!!\n>\n\nYes. And no. Things have changed over that last few versions. In older version of PG I recall hearing about table bloat problems that were really bad, and backup/restore would fix it. (Vacuum full would probably also have fixed it).\n\n\"Vacuum full\", in older versions was a last chance, bring a gun to a knife fight, nothing else has worked, fix table bloat solution. Its not dis-similar from backup/restore.\n\nIn newer versions of PG, autovacuum, vacuum and vacuum full are all much nicer and work better. I really doubt you'll need to resort to backup/restore to fix problems.\n\nJust remember: the harder you hit a table, the less chance autovacuum will have to clean it up. So you might need manual vacuum. autovacuum will cancel itself if the table is getting hit, where-as manual vacuum wont.\n\nKeeping on top of vacuum will keep your tables slim and trim. If things get out of hand, they'll balloon into problems. Vacuum full at that point should clean it up. But, if you ignore the problem for two years, and have super really bad table bloat, well, maybe backup/restore is best.\n\n\n-Andy\n",
"msg_date": "Sat, 10 Sep 2011 12:20:41 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should i expected performance degradation over time"
},
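A quick way to check whether autovacuum is keeping up, using the standard pg_stat_user_tables view (the table named in the manual VACUUM is made up; what counts as "too many" dead tuples is a judgment call):

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;

    -- manual maintenance on a heavily churned table, e.g. right after the nightly purge:
    VACUUM ANALYZE transactions_hist;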
{
"msg_contents": "On Sat, Sep 10, 2011 at 10:55 AM, Anibal David Acosta <[email protected]> wrote:\n> Sometimes I read that postgres performance is degraded over the time and\n> something people talk about backup and restore database solve the problem.\n>\n> It is really true?\n\nYes and no. If you let things get out of hand, a backup and restore\nmay be your best choice.\n\n> I have postgres 9.0 on a windows machine with The autovacuum is ON\n\nGood start\n\n> Transactional table has about 4 millions of rows inserted per day.\n>\n> In the midnight all rows are moved to a historical table and in the\n> historical table rows are about 2 months, any transaction older than 2\n> months are deleted daily.\n\nYou should look into table partitioning then. but as long as vacuum\nkeeps up you're probably still ok. Look at the check_postgresql.pl\nscript by the same guy who wrote Bucardo. It'll keep you advised of\nhow much bloat your tables have.\n\n> So, my question is, if Should I expect same performance over time (example:\n> after 1 year) or should I expect a degradation and must implements come\n> technics like backup restore every certain time?\n\nIf you maintain your db properly, performance should stay good. If\nyou ignore bloat issues you might have some issues.\n",
"msg_date": "Sat, 10 Sep 2011 12:30:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should i expected performance degradation over time"
},
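A sketch of the partitioning Scott suggests, using the inheritance scheme available in 9.0 (table names and the monthly split are assumptions). Ageing data out becomes a DROP TABLE instead of a bulk DELETE, which is both instant and bloat-free:

    CREATE TABLE transactions_hist (
        id       bigint,
        tx_time  timestamptz NOT NULL,
        payload  text
    );

    CREATE TABLE transactions_hist_2011_10 (
        CHECK (tx_time >= DATE '2011-10-01' AND tx_time < DATE '2011-11-01')
    ) INHERITS (transactions_hist);

    -- with constraint_exclusion = partition (the default), queries filtered on
    -- tx_time skip children that cannot match; expiring a month is then just:
    DROP TABLE transactions_hist_2011_08;

The cost of the scheme is that inserts have to be routed to the right child, either by the application or by an insert trigger on the parent.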
{
"msg_contents": "Do you know if check_postgresql.pl can run on windows (with perl installed)?\n\nBecause our postgres installation is running on a Windows 2008 R2 server but\ncan't find any tool like this for windows :(\n\nThanks!\n\n\n-----Mensaje original-----\nDe: Scott Marlowe [mailto:[email protected]] \nEnviado el: sábado, 10 de septiembre de 2011 02:30 p.m.\nPara: Anibal David Acosta\nCC: [email protected]\nAsunto: Re: [PERFORM] should i expected performance degradation over time\n\nOn Sat, Sep 10, 2011 at 10:55 AM, Anibal David Acosta <[email protected]>\nwrote:\n> Sometimes I read that postgres performance is degraded over the time \n> and something people talk about backup and restore database solve the\nproblem.\n>\n> It is really true?\n\nYes and no. If you let things get out of hand, a backup and restore may be\nyour best choice.\n\n> I have postgres 9.0 on a windows machine with The autovacuum is ON\n\nGood start\n\n> Transactional table has about 4 millions of rows inserted per day.\n>\n> In the midnight all rows are moved to a historical table and in the \n> historical table rows are about 2 months, any transaction older than 2 \n> months are deleted daily.\n\nYou should look into table partitioning then. but as long as vacuum keeps\nup you're probably still ok. Look at the check_postgresql.pl script by the\nsame guy who wrote Bucardo. It'll keep you advised of how much bloat your\ntables have.\n\n> So, my question is, if Should I expect same performance over time\n(example:\n> after 1 year) or should I expect a degradation and must implements \n> come technics like backup restore every certain time?\n\nIf you maintain your db properly, performance should stay good. If you\nignore bloat issues you might have some issues.\n\n",
"msg_date": "Tue, 11 Oct 2011 15:20:12 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: should i expected performance degradation over time"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 2:20 PM, Anibal David Acosta <[email protected]> wrote:\n> Do you know if check_postgresql.pl can run on windows (with perl installed)?\n>\n> Because our postgres installation is running on a Windows 2008 R2 server but\n> can't find any tool like this for windows :(\n>\n> Thanks!\n\nIt's written in Perl, so I would think you could get it to work. But\nif not, you can always extract the big ol' query that it runs from the\nscript and run it some other way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 28 Oct 2011 15:03:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: should i expected performance degradation over time"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI have a database cluster running PostgreSQL 8.2\n\nand I have **new Linux virtualized database environment running PostgreSQL\n9.0\n\n\nMy question is how to ensure that database schemas are always performing and\nscalable and databases optimized and entirely migrated\n\nThanks in advance!\n\nHany\n\n\n\nHi,\n\nI have a database cluster running PostgreSQL 8.2\n\nand I have new Linux virtualized database environment running PostgreSQL 9.0\n\nMy question is how to ensure that database schemas are always performing and scalable and databases optimized and entirely migrated\nThanks in advance!\nHany",
"msg_date": "Sun, 11 Sep 2011 22:54:06 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Databases optimization"
},
{
"msg_contents": "I doubt you'll get much useful feedback because your question is too\nbroad for a mailing list answer. If you're looking for basic\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy), although I don't think he spent much time on running\nvirtualized (which certainly could affect things). Then if you have\n*specific* hardware or query questions, this list is a great resource.\n\n[1]: http://www.2ndquadrant.com/books/postgresql-9-0-high-performance/\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Sun, 11 Sep 2011 15:22:09 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Databases optimization"
},
{
"msg_contents": "Thanks Maciek.\n\nI really do not know where to start or how to explain my question\nI am newbie to Postgres. I will try to get more information from the\ndevelopment team and SA's\n\nCheers\nHany\n\n\nOn Mon, Sep 12, 2011 at 10:22 AM, Maciek Sakrejda <[email protected]>wrote:\n\n> I doubt you'll get much useful feedback because your question is too\n> broad for a mailing list answer. If you're looking for basic\n> performance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\n> Performance\" [1] (disclaimer: I used to work with Greg and got a free\n> copy), although I don't think he spent much time on running\n> virtualized (which certainly could affect things). Then if you have\n> *specific* hardware or query questions, this list is a great resource.\n>\n> [1]: http://www.2ndquadrant.com/books/postgresql-9-0-high-performance/\n> ---\n> Maciek Sakrejda | System Architect | Truviso\n>\n> 1065 E. Hillsdale Blvd., Suite 215\n> Foster City, CA 94404\n> (650) 242-3500 Main\n> www.truviso.com\n>\n\nThanks Maciek.I really do not know where to start or how to explain my question I am newbie to Postgres. I will try to get more information from the development team and SA's\nCheersHanyOn Mon, Sep 12, 2011 at 10:22 AM, Maciek Sakrejda <[email protected]> wrote:\nI doubt you'll get much useful feedback because your question is too\nbroad for a mailing list answer. If you're looking for basic\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy), although I don't think he spent much time on running\nvirtualized (which certainly could affect things). Then if you have\n*specific* hardware or query questions, this list is a great resource.\n\n[1]: http://www.2ndquadrant.com/books/postgresql-9-0-high-performance/\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com",
"msg_date": "Mon, 12 Sep 2011 10:48:55 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Databases optimization"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]>wrote:\n\n> performance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\n> Performance\" [1] (disclaimer: I used to work with Greg and got a free\n> copy)\n>\n> I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent\nresource\n(I recommend it even for non-PostgreSQL admins because it goes so in-depth\non Linux tuning) so whether you get it for free or not, it's worth the time\nit takes\nto read and absorb the info.\n\nI've never run PostgreSQL virtualized, but I can say that if it's anything\nlike\nrunning SQL Server virtualized, it's not a terribly good idea.\n\nOn Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]> wrote:\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy)I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent resource (I recommend it even for non-PostgreSQL admins because it goes so in-depth on Linux tuning) so whether you get it for free or not, it's worth the time it takes \nto read and absorb the info.I've never run PostgreSQL virtualized, but I can say that if it's anything likerunning SQL Server virtualized, it's not a terribly good idea.",
"msg_date": "Sun, 11 Sep 2011 21:46:57 -0500",
"msg_from": "J Sisson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Databases optimization"
},
{
"msg_contents": "I have a production PostGres v8.2 database on luinx and a test PostGres V9.0\ndatabase on a test linux server\nI am going to do migration but do not want to do that before making sure the\nperformance of the new test Postgres 9.0 database performance is as good as\nthe current production Postgres 8.2\n\nMy question is:\n\nIs there a script that I can run on Postgres V8.2 and PostGres 9.0 that\nallows me test performance and make comparisons\n\n\n\nThanks guys\n\n\n\nOn Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n\n> On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]>wrote:\n>\n>> performance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\n>> Performance\" [1] (disclaimer: I used to work with Greg and got a free\n>> copy)\n>>\n>> I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent\n> resource\n> (I recommend it even for non-PostgreSQL admins because it goes so in-depth\n> on Linux tuning) so whether you get it for free or not, it's worth the time\n> it takes\n> to read and absorb the info.\n>\n> I've never run PostgreSQL virtualized, but I can say that if it's anything\n> like\n> running SQL Server virtualized, it's not a terribly good idea.\n>\n>\n\nI have a production PostGres v8.2 database on luinx and a test PostGres V9.0 database on a test linux serverI am going to do migration but do not want to do that before making sure the performance of the new test Postgres 9.0 database performance is as good as the current production Postgres 8.2\nMy question is:Is there a script that I can run on Postgres V8.2 and PostGres 9.0 that allows me test performance and make comparisons \nThanks guysOn Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n\nOn Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]> wrote:\n\n\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy)I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent resource (I recommend it even for non-PostgreSQL admins because it goes so in-depth on Linux tuning) so whether you get it for free or not, it's worth the time it takes \n\n\nto read and absorb the info.I've never run PostgreSQL virtualized, but I can say that if it's anything likerunning SQL Server virtualized, it's not a terribly good idea.",
"msg_date": "Tue, 13 Sep 2011 11:28:48 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Databases optimization"
},
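A low-effort first pass, before reaching for a full replay tool, is to run a handful of representative production queries on both servers with timing enabled (contrib's pgbench can add a synthetic number on top). A psql sketch; the query shown is only a placeholder for your own slow ones:

    \timing on
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM orders
    WHERE created_at > now() - interval '7 days';

EXPLAIN ANALYZE reports the actual execution time on each server, so the same statement run on the 8.2 box and on the new 9.0 box gives a direct, if rough, comparison.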
{
"msg_contents": "You may want to try pgreplay ... we've tried it for a similar scenario, and\nso far, it's pretty promising.\n\nI do wish it was able to be loaded from a pgfouine formatted log file, or\nfrom another db ... but that's OK.\n\n\n-- \nAnthony Presley\n\nOn Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY <[email protected]> wrote:\n\n>\n> I have a production PostGres v8.2 database on luinx and a test PostGres\n> V9.0 database on a test linux server\n> I am going to do migration but do not want to do that before making sure\n> the performance of the new test Postgres 9.0 database performance is as good\n> as the current production Postgres 8.2\n>\n> My question is:\n>\n> Is there a script that I can run on Postgres V8.2 and PostGres 9.0 that\n> allows me test performance and make comparisons\n>\n>\n>\n> Thanks guys\n>\n>\n>\n> On Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n>\n>> On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]>wrote:\n>>\n>>> performance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\n>>> Performance\" [1] (disclaimer: I used to work with Greg and got a free\n>>> copy)\n>>>\n>>> I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent\n>> resource\n>> (I recommend it even for non-PostgreSQL admins because it goes so in-depth\n>>\n>> on Linux tuning) so whether you get it for free or not, it's worth the\n>> time it takes\n>> to read and absorb the info.\n>>\n>> I've never run PostgreSQL virtualized, but I can say that if it's anything\n>> like\n>> running SQL Server virtualized, it's not a terribly good idea.\n>>\n>\n\nYou may want to try pgreplay ... we've tried it for a similar scenario, and so far, it's pretty promising.I do wish it was able to be loaded from a pgfouine formatted log file, or from another db ... but that's OK.\n-- Anthony PresleyOn Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY <[email protected]> wrote:\nI have a production PostGres v8.2 database on luinx and a test PostGres V9.0 database on a test linux server\nI am going to do migration but do not want to do that before making sure the performance of the new test Postgres 9.0 database performance is as good as the current production Postgres 8.2\nMy question is:Is there a script that I can run on Postgres V8.2 and PostGres 9.0 that allows me test performance and make comparisons \nThanks guysOn Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n\nOn Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]> wrote:\n\n\n\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy)I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent resource (I recommend it even for non-PostgreSQL admins because it goes so in-depth on Linux tuning) so whether you get it for free or not, it's worth the time it takes \n\n\n\nto read and absorb the info.I've never run PostgreSQL virtualized, but I can say that if it's anything likerunning SQL Server virtualized, it's not a terribly good idea.",
"msg_date": "Mon, 12 Sep 2011 21:49:48 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Databases optimization"
},
{
"msg_contents": "Hi Anthony,\n\nI will try that thank you very much for your help\n\nCheers\nHany\n\n\n\nOn Tue, Sep 13, 2011 at 2:49 PM, Anthony Presley <[email protected]>wrote:\n\n> You may want to try pgreplay ... we've tried it for a similar scenario, and\n> so far, it's pretty promising.\n>\n> I do wish it was able to be loaded from a pgfouine formatted log file, or\n> from another db ... but that's OK.\n>\n>\n> --\n> Anthony Presley\n>\n>\n> On Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY <[email protected]>wrote:\n>\n>>\n>> I have a production PostGres v8.2 database on luinx and a test PostGres\n>> V9.0 database on a test linux server\n>> I am going to do migration but do not want to do that before making sure\n>> the performance of the new test Postgres 9.0 database performance is as good\n>> as the current production Postgres 8.2\n>>\n>> My question is:\n>>\n>> Is there a script that I can run on Postgres V8.2 and PostGres 9.0 that\n>> allows me test performance and make comparisons\n>>\n>>\n>>\n>> Thanks guys\n>>\n>>\n>>\n>> On Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n>>\n>>> On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]>wrote:\n>>>\n>>>> performance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\n>>>> Performance\" [1] (disclaimer: I used to work with Greg and got a free\n>>>> copy)\n>>>>\n>>>> I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent\n>>> resource\n>>> (I recommend it even for non-PostgreSQL admins because it goes so\n>>> in-depth\n>>> on Linux tuning) so whether you get it for free or not, it's worth the\n>>> time it takes\n>>> to read and absorb the info.\n>>>\n>>> I've never run PostgreSQL virtualized, but I can say that if it's\n>>> anything like\n>>> running SQL Server virtualized, it's not a terribly good idea.\n>>>\n>>\n\nHi Anthony,I will try that thank you very much for your helpCheersHanyOn Tue, Sep 13, 2011 at 2:49 PM, Anthony Presley <[email protected]> wrote:\nYou may want to try pgreplay ... we've tried it for a similar scenario, and so far, it's pretty promising.\nI do wish it was able to be loaded from a pgfouine formatted log file, or from another db ... but that's OK.\n-- Anthony PresleyOn Mon, Sep 12, 2011 at 6:28 PM, Hany ABOU-GHOURY <[email protected]> wrote:\nI have a production PostGres v8.2 database on luinx and a test PostGres V9.0 database on a test linux server\nI am going to do migration but do not want to do that before making sure the performance of the new test Postgres 9.0 database performance is as good as the current production Postgres 8.2\nMy question is:Is there a script that I can run on Postgres V8.2 and PostGres 9.0 that allows me test performance and make comparisons \nThanks guysOn Mon, Sep 12, 2011 at 2:46 PM, J Sisson <[email protected]> wrote:\n\nOn Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda <[email protected]> wrote:\n\n\n\n\nperformance guidelines, I recommend Greg Smith's \"PostgreSQL 9.0 High\nPerformance\" [1] (disclaimer: I used to work with Greg and got a free\ncopy)I'll second that. \"PostgreSQL 9.0 High Performance\" is an excellent resource (I recommend it even for non-PostgreSQL admins because it goes so in-depth on Linux tuning) so whether you get it for free or not, it's worth the time it takes \n\n\n\n\nto read and absorb the info.I've never run PostgreSQL virtualized, but I can say that if it's anything likerunning SQL Server virtualized, it's not a terribly good idea.",
"msg_date": "Tue, 13 Sep 2011 15:14:10 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Databases optimization"
}
] |
[
{
"msg_contents": "I have been a MySQL user for years, including owning a few\nmulti-gigabyte databases for my websites, and using it to host\nalgebra.com (about 12 GB database).\n\nI have had my ups and downs with MySQL. The ups were ease of use and\ndecent performance for small databases such as algebra.com. The downs\nwere things like twenty hour REPAIR TABLE operations on a 35 GB\ntable, etc.\n\nRight now I have a personal (one user) project to create a 5-10\nTerabyte data warehouse. The largest table will consume the most space\nand will take, perhaps, 200,000,000 rows.\n\nI want to use it to obtain valuable business intelligence and to make\nmoney.\n\nI expect it to grow, never shrink, and to be accessed via batch\nqueries. I do not care for batch queries to be super fast, for example\nan hour per query would be just fine.\n\nHowever, while an hour is fine, two weeks per query is NOT fine.\n\nI have a server with about 18 TB of storage and 48 GB of RAM, and 12\nCPU cores.\n\nMy initial plan was to use MySQL, InnoDB, and deal with problems as\nthey arise. Perhaps, say, I would implement my own joining\nprocedures.\n\nAfter reading some disparaging stuff about InnoDB performance on large\ndatasets, however, I am getting cold feet. I have a general feeling\nthat, perhaps, I will not be able to succeed with MySQL, or, perhaps,\nwith either MySQL and Postgres.\n\nI do not know much about Postgres, but I am very eager to learn and\nsee if I can use it for my purposes more effectively than MySQL.\n\nI cannot shell out $47,000 per CPU for Oracle for this project.\n\nTo be more specific, the batch queries that I would do, I hope,\nwould either use small JOINS of a small dataset to a large dataset, or\njust SELECTS from one big table.\n\nSo... Can Postgres support a 5-10 TB database with the use pattern\nstated above?\n\nThanks!\n\ni\n\nI have been a MySQL user for years, including owning a fewmulti-gigabyte databases for my websites, and using it to hostalgebra.com (about 12 GB database).\nI have had my ups and downs with MySQL. The ups were ease of use anddecent performance for small databases such as algebra.com. The downswere things like twenty hour REPAIR TABLE operations on a 35 GB\ntable, etc.Right now I have a personal (one user) project to create a 5-10Terabyte data warehouse. The largest table will consume the most spaceand will take, perhaps, 200,000,000 rows.\nI want to use it to obtain valuable business intelligence and to makemoney.I expect it to grow, never shrink, and to be accessed via batchqueries. I do not care for batch queries to be super fast, for example\nan hour per query would be just fine.However, while an hour is fine, two weeks per query is NOT fine.I have a server with about 18 TB of storage and 48 GB of RAM, and 12\nCPU cores.My initial plan was to use MySQL, InnoDB, and deal with problems asthey arise. Perhaps, say, I would implement my own joiningprocedures.\n\nAfter reading some disparaging stuff about InnoDB performance on largedatasets, however, I am getting cold feet. I have a general feelingthat, perhaps, I will not be able to succeed with MySQL, or, perhaps,\nwith either MySQL and Postgres.I do not know much about Postgres, but I am very eager to learn andsee if I can use it for my purposes more effectively than MySQL.\nI cannot shell out $47,000 per CPU for Oracle for this project.To be more specific, the batch queries that I would do, I hope,would either use small JOINS of a small dataset to a large dataset, or\njust SELECTS from one big table.So... 
Can Postgres support a 5-10 TB database with the use patternstated above?Thanks!i",
"msg_date": "Sun, 11 Sep 2011 07:35:22 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> CPU cores.\n\n1 or 2 fast cores is plenty for what you're doing. But the drive\narray and how it's configured etc are very important. There's a huge\ndifference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\ndata warehouse.)\n\n> I do not know much about Postgres, but I am very eager to learn and\n> see if I can use it for my purposes more effectively than MySQL.\n> I cannot shell out $47,000 per CPU for Oracle for this project.\n> To be more specific, the batch queries that I would do, I hope,\n\nHopefully if needs be you can spend some small percentage of that for\na fast IO subsystem is needed.\n\n> would either use small JOINS of a small dataset to a large dataset, or\n> just SELECTS from one big table.\n> So... Can Postgres support a 5-10 TB database with the use pattern\n> stated above?\n\nI use it on a ~3TB DB and it works well enough. Fast IO is the key\nhere. Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\nrandom writing.\n",
"msg_date": "Sun, 11 Sep 2011 06:52:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "For 10 TB table and 3hours, disks should have a transfer about 1GB/s (seqscan).\n\n2011/9/11, Scott Marlowe <[email protected]>:\n> On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n>> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n>> CPU cores.\n>\n> 1 or 2 fast cores is plenty for what you're doing. But the drive\n> array and how it's configured etc are very important. There's a huge\n> difference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n> 36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\n> data warehouse.)\n>\n>> I do not know much about Postgres, but I am very eager to learn and\n>> see if I can use it for my purposes more effectively than MySQL.\n>> I cannot shell out $47,000 per CPU for Oracle for this project.\n>> To be more specific, the batch queries that I would do, I hope,\n>\n> Hopefully if needs be you can spend some small percentage of that for\n> a fast IO subsystem is needed.\n>\n>> would either use small JOINS of a small dataset to a large dataset, or\n>> just SELECTS from one big table.\n>> So... Can Postgres support a 5-10 TB database with the use pattern\n>> stated above?\n>\n> I use it on a ~3TB DB and it works well enough. Fast IO is the key\n> here. Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\n> random writing.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Sun, 11 Sep 2011 15:36:41 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
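For reference, the arithmetic behind that figure: 10 TB is roughly 10,000 GB and 3 hours is 10,800 seconds, so reading the whole table once in that window needs about 10,000 GB / 10,800 s, or roughly 0.9 GB/s of sustained sequential throughput, i.e. about 1 GB/s as stated.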
{
"msg_contents": "On Sun, Sep 11, 2011 at 7:52 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n> > I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> > CPU cores.\n>\n> 1 or 2 fast cores is plenty for what you're doing.\n\n\nI need those cores to perform other tasks, like image manipulation with\nimagemagick, XML forming and parsing etc.\n\n\n> But the drive\n> array and how it's configured etc are very important. There's a huge\n> difference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n> 36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\n> data warehouse.)\n\n\nWell, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\nconfiguration.\n\nThey are managed by a 3WARE 9750 RAID CARD.\n\nI would say that I am not very concerned with linear relationship of read\nspeed to disk speed. If that stuff is somewhat slow, it is OK with me.\n\nWhat I want to avoid is severe degradation of performance due to size (time\ncomplexity greater than O(1)), disastrous REPAIR TABLE operations etc.\n\n\n> I do not know much about Postgres, but I am very eager to learn and\n> > see if I can use it for my purposes more effectively than MySQL.\n> > I cannot shell out $47,000 per CPU for Oracle for this project.\n> > To be more specific, the batch queries that I would do, I hope,\n>\n> Hopefully if needs be you can spend some small percentage of that for\n> a fast IO subsystem is needed.\n>\n>\n\nI am actually open for suggestions here.\n\n\n> > would either use small JOINS of a small dataset to a large dataset, or\n> > just SELECTS from one big table.\n> > So... Can Postgres support a 5-10 TB database with the use pattern\n> > stated above?\n>\n> I use it on a ~3TB DB and it works well enough. Fast IO is the key\n> here. Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\n> random writing.\n>\n\nI do not plan to do a lot of random writing. My current design is that my\nperl scripts write to a temporary table every week, and then I do INSERT..ON\nDUPLICATE KEY UPDATE.\n\nBy the way, does that INSERT UPDATE functionality or something like this\nexist in Postgres?\n\ni\n\nOn Sun, Sep 11, 2011 at 7:52 AM, Scott Marlowe <[email protected]> wrote:\nOn Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> CPU cores.\n\n1 or 2 fast cores is plenty for what you're doing. I need those cores to perform other tasks, like image manipulation with imagemagick, XML forming and parsing etc. \n But the drive\narray and how it's configured etc are very important. There's a huge\ndifference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\ndata warehouse.)Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6 configuration.They are managed by a 3WARE 9750 RAID CARD. \nI would say that I am not very concerned with linear relationship of read speed to disk speed. If that stuff is somewhat slow, it is OK with me. What I want to avoid is severe degradation of performance due to size (time complexity greater than O(1)), disastrous REPAIR TABLE operations etc. 
\n\n> I do not know much about Postgres, but I am very eager to learn and\n> see if I can use it for my purposes more effectively than MySQL.\n> I cannot shell out $47,000 per CPU for Oracle for this project.\n> To be more specific, the batch queries that I would do, I hope,\n\nHopefully if needs be you can spend some small percentage of that for\na fast IO subsystem is needed.\nI am actually open for suggestions here. \n\n> would either use small JOINS of a small dataset to a large dataset, or\n> just SELECTS from one big table.\n> So... Can Postgres support a 5-10 TB database with the use pattern\n> stated above?\n\nI use it on a ~3TB DB and it works well enough. Fast IO is the key\nhere. Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\nrandom writing.\nI do not plan to do a lot of random writing. My current design is that my perl scripts write to a temporary table every week, and then I do INSERT..ON DUPLICATE KEY UPDATE. \nBy the way, does that INSERT UPDATE functionality or something like this exist in Postgres?i",
"msg_date": "Sun, 11 Sep 2011 08:59:16 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "2011/9/11 pasman pasmański <[email protected]>\n\n> For 10 TB table and 3hours, disks should have a transfer about 1GB/s\n> (seqscan).\n>\n>\n\nI have 6 Gb/s disk drives, so it should be not too far, maybe 5 hours for a\nseqscan.\n\ni\n\n\n> 2011/9/11, Scott Marlowe <[email protected]>:\n> > On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n> >> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> >> CPU cores.\n> >\n> > 1 or 2 fast cores is plenty for what you're doing. But the drive\n> > array and how it's configured etc are very important. There's a huge\n> > difference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n> > 36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\n> > data warehouse.)\n> >\n> >> I do not know much about Postgres, but I am very eager to learn and\n> >> see if I can use it for my purposes more effectively than MySQL.\n> >> I cannot shell out $47,000 per CPU for Oracle for this project.\n> >> To be more specific, the batch queries that I would do, I hope,\n> >\n> > Hopefully if needs be you can spend some small percentage of that for\n> > a fast IO subsystem is needed.\n> >\n> >> would either use small JOINS of a small dataset to a large dataset, or\n> >> just SELECTS from one big table.\n> >> So... Can Postgres support a 5-10 TB database with the use pattern\n> >> stated above?\n> >\n> > I use it on a ~3TB DB and it works well enough. Fast IO is the key\n> > here. Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\n> > random writing.\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n>\n> --\n> ------------\n> pasman\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2011/9/11 pasman pasmański <[email protected]>\n\nFor 10 TB table and 3hours, disks should have a transfer about 1GB/s (seqscan).\nI have 6 Gb/s disk drives, so it should be not too far, maybe 5 hours for a seqscan.i \n\n\n2011/9/11, Scott Marlowe <[email protected]>:\n> On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <[email protected]> wrote:\n>> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n>> CPU cores.\n>\n> 1 or 2 fast cores is plenty for what you're doing. But the drive\n> array and how it's configured etc are very important. There's a huge\n> difference between 10 2TB 7200RPM SATA drives in a software RAID-5 and\n> 36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be ok for\n> data warehouse.)\n>\n>> I do not know much about Postgres, but I am very eager to learn and\n>> see if I can use it for my purposes more effectively than MySQL.\n>> I cannot shell out $47,000 per CPU for Oracle for this project.\n>> To be more specific, the batch queries that I would do, I hope,\n>\n> Hopefully if needs be you can spend some small percentage of that for\n> a fast IO subsystem is needed.\n>\n>> would either use small JOINS of a small dataset to a large dataset, or\n>> just SELECTS from one big table.\n>> So... Can Postgres support a 5-10 TB database with the use pattern\n>> stated above?\n>\n> I use it on a ~3TB DB and it works well enough. Fast IO is the key\n> here. 
Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of\n> random writing.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n--\n------------\npasman\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 11 Sep 2011 09:00:36 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected]> wrote:\n> Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\n> configuration.\n> They are managed by a 3WARE 9750 RAID CARD.\n>\n> I would say that I am not very concerned with linear relationship of read\n> speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n\nWith Raid 6 you'll have abysmal performance on write operations.\nIn data warehousing, there's lots of writes to temporary files, for\nsorting and stuff like that.\n\nYou should either migrate to raid 10, or set up a separate array for\ntemporary files, perhaps raid 0.\n",
"msg_date": "Sun, 11 Sep 2011 16:16:21 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
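A sketch of the "separate array for temporary files" option, assuming such an array were mounted with a postgres-owned, empty directory at /mnt/fast_tmp/pgtemp (the path and tablespace name are made up):

    CREATE TABLESPACE fast_tmp LOCATION '/mnt/fast_tmp/pgtemp';

    -- point sort/hash spill files at it, per session:
    SET temp_tablespaces = 'fast_tmp';
    -- or for every session, in postgresql.conf:
    --   temp_tablespaces = 'fast_tmp'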
{
"msg_contents": "On 09/11/2011 07:35 AM, Igor Chudov wrote:\n> I have been a MySQL user for years, including owning a few\n> multi-gigabyte databases for my websites, and using it to host\n> algebra.com <http://algebra.com> (about 12 GB database).\n>\n> I have had my ups and downs with MySQL. The ups were ease of use and\n> decent performance for small databases such as algebra.com <http://algebra.com>. The downs\n> were things like twenty hour REPAIR TABLE operations on a 35 GB\n> table, etc.\n>\n> Right now I have a personal (one user) project to create a 5-10\n> Terabyte data warehouse. The largest table will consume the most space\n> and will take, perhaps, 200,000,000 rows.\n>\n> I want to use it to obtain valuable business intelligence and to make\n> money.\n>\n> I expect it to grow, never shrink, and to be accessed via batch\n> queries. I do not care for batch queries to be super fast, for example\n> an hour per query would be just fine.\n>\n> However, while an hour is fine, two weeks per query is NOT fine.\n>\n> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> CPU cores.\n>\n> My initial plan was to use MySQL, InnoDB, and deal with problems as\n> they arise. Perhaps, say, I would implement my own joining\n> procedures.\n>\n> After reading some disparaging stuff about InnoDB performance on large\n> datasets, however, I am getting cold feet. I have a general feeling\n> that, perhaps, I will not be able to succeed with MySQL, or, perhaps,\n> with either MySQL and Postgres.\n>\n> I do not know much about Postgres, but I am very eager to learn and\n> see if I can use it for my purposes more effectively than MySQL.\n>\n> I cannot shell out $47,000 per CPU for Oracle for this project.\n>\n> To be more specific, the batch queries that I would do, I hope,\n> would either use small JOINS of a small dataset to a large dataset, or\n> just SELECTS from one big table.\n>\n> So... Can Postgres support a 5-10 TB database with the use pattern\n> stated above?\n>\n> Thanks!\n>\n> i\n>\n\nThat is a scale or two larger than I have experience with. I converted my website database from mysql to PG, and it has several db's between 1 and 10 gig. There are parts of the website that were faster with mysql, and there are parts faster with PG. One spot, because PG has superior join support on select statements, I was able to change the code to generate a single more complicated sql statement vs. mysql that had to fire off several simpler statements. Its a search screen where you can type in 15'ish different options. I was able to generate a single sql statement which joins 8 some odd tables and plenty of where statements. PG runs it in the blink of an eye. Its astonishing compared to the pain of mysql. If you ever have to write your own join, or your own lookup function, that's a failure of your database.\n\nOne spot that was slower was a batch insert of data. Its not so much slower that it was a problem. I use COPY on PG vs prepared insert's on mysql. It was pretty close, but mysql still won.\n\nSeeing as you can setup and test both databases, have you considered a trial run?\n\nThings to watch for:\n\n\nI think the same amount of data will use more disk space in PG than in mysql.\n\nImporting data into PG should use COPY and multiple connections at the same time.\n\nPG will only use multi-core if you use multiple connections. 
(each connecion uses one core).\n\nHuge result sets (like a select statement that returns 1,000,000 rows) will be slow.\n\nPG is a much fuller database than mysql, and as such you can influence its join types, and function calls. (table scan vs index, immutable function vs stable, perl function vs sql). So if at first it appears slow, you have a million options. I think the only option you have in mysql is to pull the data back and code it yourself.\n\nUpgrading to major versions of PG may or may not be painful. (mysql sometimes works seamlessly between versions, it appears brilliant. But I have had problems with an update, and when it goes bad, you dont have a lot of options). In the past PG's only method of upgrade was a full backup of old, restore in new. Things have gotten better, there is new pg_upgrade support (still kinda new though), and there is some 3rd party replication support where you replicate your 9.0 database to a new 9.1 database, and at some point you promote the new 9.1 database as the new master. Or something like that. I've only read posts about it, never done it. But with that much data, you'll need an upgrade plan.\n\nAll in all, if I can summarize my personal view: mysql is fast at the expense of safety and usability. (mysql still cannot do update's with subselects). PG is safe and usable at the expense of speed, and you wont be disappointed by the speed.\n\n-Andy\n",
"msg_date": "Sun, 11 Sep 2011 09:16:27 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire <[email protected]>wrote:\n\n> On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected]> wrote:\n> > Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a\n> RAID-6\n> > configuration.\n> > They are managed by a 3WARE 9750 RAID CARD.\n> >\n> > I would say that I am not very concerned with linear relationship of read\n> > speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n>\n> With Raid 6 you'll have abysmal performance on write operations.\n> In data warehousing, there's lots of writes to temporary files, for\n> sorting and stuff like that.\n>\n> You should either migrate to raid 10, or set up a separate array for\n> temporary files, perhaps raid 0.\n>\n\nThanks. I will rebuild the RAID array early next week and I will see if I\nhave a Raid 10 option with that card.\n\nQuantitatively, what would you say is the write speed difference between\nRAID 10 and RAID 6?\n\nOn Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire <[email protected]> wrote:\nOn Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected]> wrote:\n> Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\n> configuration.\n> They are managed by a 3WARE 9750 RAID CARD.\n>\n> I would say that I am not very concerned with linear relationship of read\n> speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n\nWith Raid 6 you'll have abysmal performance on write operations.\nIn data warehousing, there's lots of writes to temporary files, for\nsorting and stuff like that.\n\nYou should either migrate to raid 10, or set up a separate array for\ntemporary files, perhaps raid 0.\nThanks. I will rebuild the RAID array early next week and I will see if I have a Raid 10 option with that card.Quantitatively, what would you say is the write speed difference between RAID 10 and RAID 6?",
"msg_date": "Sun, 11 Sep 2011 09:21:35 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 09/11/2011 08:59 AM, Igor Chudov wrote:\n>\n>\n> I do not plan to do a lot of random writing. My current design is that my perl scripts write to a temporary table every week, and then I do INSERT..ON DUPLICATE KEY UPDATE.\n>\n> By the way, does that INSERT UPDATE functionality or something like this exist in Postgres?\n>\n> i\n\nYou have two options:\n\n1) write a function like:\ncreate function doinsert(_id integer, _value text) returns void as $$\nbegin\n update thetable set value = _value where id = _id;\n if not found then\n insert into thetable(id, value) values (_id, _value);\n end if\nend;\n$$ language plpgsql;\n\n2) use two sql statements:\n-- update the existing\nupdate realTable set value = (select value from tmp where tmp.id = realTable.id)\nwhere exists (select value from tmp where tmp.id = realTable.id);\n\n-- insert the missing\ninsert into realTable(id, value)\nselect id, value from tmp where not exists(select 1 from realTable where tmp.id = realTable.id);\n\n\n-Andy\n",
"msg_date": "Sun, 11 Sep 2011 09:23:20 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
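A minimal sketch of how option 2 above can be driven as a single batch load, reusing Andy's table names (realTable, tmp). The temp staging table, the COPY source path, and the join-style UPDATE are assumptions about the surrounding load job, not part of the original message; and as later posts in this thread point out, it is not safe if several loaders touch the same keys concurrently.

-- Sketch only: staging table, file path and column list are hypothetical.
BEGIN;

CREATE TEMP TABLE tmp (id integer, value text) ON COMMIT DROP;
COPY tmp FROM '/path/to/weekly_extract.csv' WITH CSV;   -- hypothetical source file

-- update the existing rows (join form of Andy's first statement)
UPDATE realTable
   SET value = tmp.value
  FROM tmp
 WHERE tmp.id = realTable.id;

-- insert the missing rows
INSERT INTO realTable (id, value)
SELECT id, value
  FROM tmp
 WHERE NOT EXISTS (SELECT 1 FROM realTable WHERE realTable.id = tmp.id);

COMMIT;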
{
"msg_contents": "On Sun, Sep 11, 2011 at 4:16 PM, Andy Colson <[email protected]> wrote:\n> Upgrading to major versions of PG may or may not be painful. (mysql\n> sometimes works seamlessly between versions, it appears brilliant. But I\n> have had problems with an update, and when it goes bad, you dont have a lot\n> of options). In the past PG's only method of upgrade was a full backup of\n> old, restore in new. Things have gotten better, there is new pg_upgrade\n> support (still kinda new though), and there is some 3rd party replication\n> support where you replicate your 9.0 database to a new 9.1 database, and at\n> some point you promote the new 9.1 database as the new master. Or something\n> like that. I've only read posts about it, never done it. But with that\n> much data, you'll need an upgrade plan.\n\nI have used slony to do database migration. It is a pain to set up,\nbut it saves you hours of downtime.\nBasically, you replicate your 9.0 database into a 9.1 slave while the\n9.0 is still hot and working, so you only have a very small downtime.\nIt's an option, but it's a lot of work to set up, only warranted if\nyou really cannot afford the downtime.\n",
"msg_date": "Sun, 11 Sep 2011 16:27:44 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 09/11/2011 09:21 AM, Igor Chudov wrote:\n>\n>\n> On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire <[email protected] <mailto:[email protected]>> wrote:\n>\n> On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected] <mailto:[email protected]>> wrote:\n> > Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\n> > configuration.\n> > They are managed by a 3WARE 9750 RAID CARD.\n> >\n> > I would say that I am not very concerned with linear relationship of read\n> > speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n>\n> With Raid 6 you'll have abysmal performance on write operations.\n> In data warehousing, there's lots of writes to temporary files, for\n> sorting and stuff like that.\n>\n> You should either migrate to raid 10, or set up a separate array for\n> temporary files, perhaps raid 0.\n>\n>\n> Thanks. I will rebuild the RAID array early next week and I will see if I have a Raid 10 option with that card.\n>\n> Quantitatively, what would you say is the write speed difference between RAID 10 and RAID 6?\n>\n\nNote that using RAID 10, while faster, cuts your usable space in half. 12 2TB drives in raid 10 == 6 drives * 2TB == 12 TB total space. That's not big enough, is it?\n\n-Andy\n",
"msg_date": "Sun, 11 Sep 2011 09:36:25 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 4:21 PM, Igor Chudov <[email protected]> wrote:\n> Quantitatively, what would you say is the write speed difference between\n> RAID 10 and RAID 6?\n\nhttps://support.nstein.com/blog/archives/73\n\nThere you can see a comparison with 4 drives, and raid 10 is twice as fast.\nSince raid 5/6 doesn't scale write performance at all (it performs as\na single drive), it's quite expected. 12 drives would probably be\naround 6 times as fast as raid 6.\n\nYou definitely should do some benchmarks to confirm, though.\n\nAnd Andy is right, you'll have a lot less space. If raid 10 doesn't\ngive you enough room, just leave two spare drives for a raid 0\ntemporary partition. That will be at least twice as fast as doing\ntemporary tables on the raid 6.\n\nYou'll obviously have to get creative, tons of options.\n",
"msg_date": "Sun, 11 Sep 2011 16:44:18 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 17:23, Andy Colson <[email protected]> wrote:\n> On 09/11/2011 08:59 AM, Igor Chudov wrote:\n>> By the way, does that INSERT UPDATE functionality or something like this exist in Postgres?\n> You have two options:\n> 1) write a function like:\n> create function doinsert(_id integer, _value text) returns void as\n> 2) use two sql statements:\n\nUnfortunately both of these options have caveats. Depending on your\nI/O speed, you might need to use multiple loader threads to saturate\nthe write bandwidth.\n\nHowever, neither option is safe from race conditions. If you need to\nload data from multiple threads at the same time, they won't see each\nother's inserts (until commit) and thus cause unique violations. If\nyou could somehow partition their operation by some key, so threads\nare guaranteed not to conflict each other, then that would be perfect.\nThe 2nd option given by Andy is probably faster.\n\nYou *could* code a race-condition-safe function, but that would be a\nno-go on a data warehouse, since each call needs a separate\nsubtransaction which involves allocating a transaction ID.\n\n----\n\nWhich brings me to another important point: don't do lots of small\nwrite transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides\nbeing inefficient, they introduce a big maintenance burden. In\nPostgreSQL's MVCC, each tuple contains a reference to the 32-bit\ntransaction ID that inserted it (xmin). After hitting the maximum\n32-bit value transaction ID, the number \"wraps around\". To prevent old\nrows from appearing as new, a \"vacuum freeze\" process will run after\npassing autovacuum_freeze_max_age transactions (200 million by\ndefault) to update all old rows in your database. Using fewer\ntransaction IDs means it runs less often.\n\nOn small databases, this is usually not important. But on a 10TB data\nwarehouse, rewriting a large part of your database totally kills\nperformance for any other processes.\nThis is detailed in the documentation:\nhttp://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n\nRegards,\nMarti\n",
"msg_date": "Sun, 11 Sep 2011 20:02:06 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
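A small monitoring query to go with Marti's wraparound point above; age() and pg_database are standard catalog facilities, and the 200-million figure mirrors the default autovacuum_freeze_max_age he mentions. This is a sketch for keeping an eye on transaction ID consumption, not something from the original thread.

-- How many transaction IDs old is each database's oldest unfrozen row?
SELECT datname,
       age(datfrozenxid) AS xid_age,
       age(datfrozenxid) > 200000000 AS past_default_freeze_max_age
  FROM pg_database
 ORDER BY age(datfrozenxid) DESC;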
{
"msg_contents": "On Sep 11, 2011, at 9:21 AM, Igor Chudov wrote:\n\n> \n> \n> On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire <[email protected]> wrote:\n> On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected]> wrote:\n> > Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\n> > configuration.\n> > They are managed by a 3WARE 9750 RAID CARD.\n> >\n> > I would say that I am not very concerned with linear relationship of read\n> > speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n> \n> With Raid 6 you'll have abysmal performance on write operations.\n> In data warehousing, there's lots of writes to temporary files, for\n> sorting and stuff like that.\n> \n> You should either migrate to raid 10, or set up a separate array for\n> temporary files, perhaps raid 0.\n> \n> Thanks. I will rebuild the RAID array early next week and I will see if I have a Raid 10 option with that card.\n> \n> Quantitatively, what would you say is the write speed difference between RAID 10 and RAID 6?\n> \n\nAs someone who migrated a RAID 5 installation to RAID 10, I am getting far better read and write performance on heavy calculation queries. Writing on the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is the best. It should also be noted that I changed my filesystem from ext3 to XFS - this is something you can look into as well. \n\nOgden\n\n\nOn Sep 11, 2011, at 9:21 AM, Igor Chudov wrote:On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire <[email protected]> wrote:\nOn Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov <[email protected]> wrote:\n> Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6\n> configuration.\n> They are managed by a 3WARE 9750 RAID CARD.\n>\n> I would say that I am not very concerned with linear relationship of read\n> speed to disk speed. If that stuff is somewhat slow, it is OK with me.\n\nWith Raid 6 you'll have abysmal performance on write operations.\nIn data warehousing, there's lots of writes to temporary files, for\nsorting and stuff like that.\n\nYou should either migrate to raid 10, or set up a separate array for\ntemporary files, perhaps raid 0.\nThanks. I will rebuild the RAID array early next week and I will see if I have a Raid 10 option with that card.Quantitatively, what would you say is the write speed difference between RAID 10 and RAID 6?\n\nAs someone who migrated a RAID 5 installation to RAID 10, I am getting far better read and write performance on heavy calculation queries. Writing on the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is the best. It should also be noted that I changed my filesystem from ext3 to XFS - this is something you can look into as well. Ogden",
"msg_date": "Sun, 11 Sep 2011 13:36:56 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 1:36 PM, Ogden <[email protected]> wrote:\n\n> As someone who migrated a RAID 5 installation to RAID 10, I am getting far\n> better read and write performance on heavy calculation queries. Writing on\n> the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is\n> the best. It should also be noted that I changed my filesystem from ext3 to\n> XFS - this is something you can look into as well.\n>\n> Ogden\n>\n> RAID 10 on XFS here, too, both in OLTP and Data-warehousing scenarios. Our\nlargest OLTP is ~375 GB, and PostgreSQL performs admirably (we converted\nfrom MSSQL to PostgreSQL, and we've had more issues with network bottlenecks\nsince converting (where MSSQL was always the bottleneck before)). Now that\nwe have fiber interconnects between our two main datacenters, I'm actually\nhaving to work again haha.\n\nBut yeah, we tried quite a few file systems, and XFS **for our workloads**\nperformed better than everything else we tested, and RAID 10 is a given if\nyou do any significant writing.\n\nOn Sun, Sep 11, 2011 at 1:36 PM, Ogden <[email protected]> wrote:\nAs someone who migrated a RAID 5 installation to RAID 10, I am getting far better read and write performance on heavy calculation queries. Writing on the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is the best. It should also be noted that I changed my filesystem from ext3 to XFS - this is something you can look into as well. \nOgdenRAID 10 on XFS here, too, both in OLTP and Data-warehousing scenarios. Our largest OLTP is ~375 GB, and PostgreSQL performs admirably (we converted from MSSQL to PostgreSQL, and we've had more issues with network bottlenecks since converting (where MSSQL was always the bottleneck before)). Now that we have fiber interconnects between our two main datacenters, I'm actually having to work again haha.\nBut yeah, we tried quite a few file systems, and XFS **for our workloads** performed better than everything else we tested, and RAID 10 is a given if you do any significant writing.",
"msg_date": "Sun, 11 Sep 2011 14:08:36 -0500",
"msg_from": "J Sisson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "Sorry, meant to send this to the list.\n\nFor really big data-warehousing, this document really helped us:\n\nhttp://pgexperts.com/document.html?id=49\n\nSorry, meant to send this to the list.For really big data-warehousing, this document really helped us:http://pgexperts.com/document.html?id=49",
"msg_date": "Sun, 11 Sep 2011 14:30:23 -0500",
"msg_from": "J Sisson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "2011/9/11 pasman pasmański <[email protected]>:\n> For 10 TB table and 3hours, disks should have a transfer about 1GB/s (seqscan).\n\nRandom data point. Our primary servers were built for OLTP with 48\ncores and 32 15kSAS drives. We started out on Arecas but the\nSupermicro 1Us we were using didn't provide enough cooling and the\nArecas were burning out after 2 to 4 months, so on those machines, we\npulled the Arecas and replaced them with simple LSI SAS non-RAID\ncards. Both were RAID-10, the latter with linux software RAID.\n\nWith the Arecas the OLTP performance is outstanding, garnering us\n~8500tps at 40 to 50 threads. However, sequentual performance was\njust so so at around read / write speeds of 500/350MB/s. The SW\nRAID-10 can read AND write at right around 1GB/s. what it lacks in\ntransactional throughput it more than makes up for in sequential read\n/ write performance.\n\nAnother data point. We had a big Oracle installation at my last job,\nand OLAP queries were killing it midday, so I built a simple\nreplication system to grab rows from the big iron Oracle SUN box and\nshove into a single core P IV 2.xGHz machine with 4 120G SATA drives\nin SW RAID-10.\n\nThat machine handily beat the big iron Oracle machine at OLAP queries,\nrunning in 20 minutes what was taking well over an hour for the big\nOracle machine to do, even during its (Oracle machine) off peak load\ntimes.\n",
"msg_date": "Sun, 11 Sep 2011 14:10:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "* Igor Chudov ([email protected]) wrote:\n> Right now I have a personal (one user) project to create a 5-10\n> Terabyte data warehouse. The largest table will consume the most space\n> and will take, perhaps, 200,000,000 rows.\n\nI run data-warehouse databases on that order (current largest single\ninstance is ~4TB running under 9.0.4). If the largest table is only\n200M rows, PG should handle that quite well. Our data is partitioned by\nmonth and each month is about 200M records and simple queries can run in\n15-20 minutes (with a single thread), with complex windowing queries\n(split up and run in parallel) finishing in a couple of hours. \n\n> However, while an hour is fine, two weeks per query is NOT fine.\n\nWhat's really, really, really useful are two things: EXPLAIN, and this\nmailing list. :) Seriously, run EXPLAIN on your queries before you run\nthem and see if how the query is going to be executed makes sense.\nHere's a real easy hint: if it says \"External Sort\" and has big numbers,\ncome talk to us here- that's about one of the worst things you can\npossibly do. Of course, PG's going to avoid doing that, but you may\nhave written a query (unintentionally) which forces PG to do a sort, or\nsomething else.\n\n> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> CPU cores.\n\nIf you partition up your data and don't mind things running in different\ntransactions, you can definitely get a speed boost with PG by running\nthings in parallel. PG will handle that very well, in fact, if two\nqueries are running against the same table, PG will actually combine\nthem and only actually read the data from disk once.\n\n> I cannot shell out $47,000 per CPU for Oracle for this project.\n\nThe above data warehouse was migrated from an Oracle-based system. :)\n\n> To be more specific, the batch queries that I would do, I hope,\n> would either use small JOINS of a small dataset to a large dataset, or\n> just SELECTS from one big table.\n\nMake sure that you set your 'work_mem' correctly- PG will use that to\nfigure out if it can hash the small table (you want that to happen,\ntrust me..). If you do end up having sorts, it'll also use the work_mem\nvalue to figure out how much memory to use for sorting.\n\n> So... Can Postgres support a 5-10 TB database with the use pattern\n> stated above?\n\nYes, certainly.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 11 Sep 2011 19:01:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
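To make Stephen's "External Sort" hint concrete: EXPLAIN ANALYZE reports the sort method and whether it spilled to disk, and work_mem can be raised per session to see the effect. The table and column names below are hypothetical, and the sample output in the comments is only approximate.

-- Hypothetical query; watch the "Sort Method" line in the output.
EXPLAIN ANALYZE
SELECT customer_id, sum(amount) AS total
  FROM facts
 GROUP BY customer_id
 ORDER BY total DESC;
-- e.g.  Sort Method:  external merge  Disk: 512000kB   (spilled to disk)
-- vs.   Sort Method:  quicksort  Memory: 45000kB       (fit within work_mem)

SET work_mem = '256MB';   -- per-session only, as discussed later in the thread
EXPLAIN ANALYZE
SELECT customer_id, sum(amount) AS total
  FROM facts
 GROUP BY customer_id
 ORDER BY total DESC;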
{
"msg_contents": "On Sun, Sep 11, 2011 at 6:01 PM, Stephen Frost <[email protected]> wrote:\n\n> * Igor Chudov ([email protected]) wrote:\n> > Right now I have a personal (one user) project to create a 5-10\n> > Terabyte data warehouse. The largest table will consume the most space\n> > and will take, perhaps, 200,000,000 rows.\n>\n> I run data-warehouse databases on that order (current largest single\n> instance is ~4TB running under 9.0.4). If the largest table is only\n> 200M rows, PG should handle that quite well. Our data is partitioned by\n> month and each month is about 200M records and simple queries can run in\n> 15-20 minutes (with a single thread), with complex windowing queries\n> (split up and run in parallel) finishing in a couple of hours.\n>\n>\n\nWhich brings up a question.\n\nCan I partition data by month (or quarter), without that month being part of\nPRIMARY KEY?\n\nIf this question sounds weird, I am asking because MySQL enforces this,\nwhich does not fit my data.\n\nIf I can keep my primary key to be the ID that I want (which comes with\ndata), but still partition it by month, I will be EXTREMELY happy.\n\n> However, while an hour is fine, two weeks per query is NOT fine.\n>\n> What's really, really, really useful are two things: EXPLAIN, and this\n> mailing list. :) Seriously, run EXPLAIN on your queries before you run\n> them and see if how the query is going to be executed makes sense.\n> Here's a real easy hint: if it says \"External Sort\" and has big numbers,\n> come talk to us here- that's about one of the worst things you can\n> possibly do. Of course, PG's going to avoid doing that, but you may\n> have written a query (unintentionally) which forces PG to do a sort, or\n> something else.\n>\n>\nVery good, thanks\n\n\n> > I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> > CPU cores.\n>\n> If you partition up your data and don't mind things running in different\n> transactions, you can definitely get a speed boost with PG by running\n> things in parallel. PG will handle that very well, in fact, if two\n> queries are running against the same table, PG will actually combine\n> them and only actually read the data from disk once.\n>\n> > I cannot shell out $47,000 per CPU for Oracle for this project.\n>\n> The above data warehouse was migrated from an Oracle-based system. :)\n>\n>\nI am wondering, why?\n\n\n> > To be more specific, the batch queries that I would do, I hope,\n> > would either use small JOINS of a small dataset to a large dataset, or\n> > just SELECTS from one big table.\n>\n> Make sure that you set your 'work_mem' correctly- PG will use that to\n> figure out if it can hash the small table (you want that to happen,\n> trust me..). If you do end up having sorts, it'll also use the work_mem\n> value to figure out how much memory to use for sorting.\n>\n>\nI could, say, set work_mem to 30 GB? (64 bit linux)\n\n\n> > So... Can Postgres support a 5-10 TB database with the use pattern\n> > stated above?\n>\n> Yes, certainly.\n>\n>\nthat's great to know.\n\ni\n\n\n> Thanks,\n>\n> Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.10 (GNU/Linux)\n>\n> iEYEARECAAYFAk5tPc8ACgkQrzgMPqB3kigtSgCffwEmi3AD6Ryff7qZyQYieyKQ\n> jhoAoJDFC1snQmwCIBUjwlC6WVRyAOkn\n> =LPtP\n> -----END PGP SIGNATURE-----\n>\n>\n\nOn Sun, Sep 11, 2011 at 6:01 PM, Stephen Frost <[email protected]> wrote:\n* Igor Chudov ([email protected]) wrote:\n> Right now I have a personal (one user) project to create a 5-10\n> Terabyte data warehouse. 
The largest table will consume the most space\n> and will take, perhaps, 200,000,000 rows.\n\nI run data-warehouse databases on that order (current largest single\ninstance is ~4TB running under 9.0.4). If the largest table is only\n200M rows, PG should handle that quite well. Our data is partitioned by\nmonth and each month is about 200M records and simple queries can run in\n15-20 minutes (with a single thread), with complex windowing queries\n(split up and run in parallel) finishing in a couple of hours.\nWhich brings up a question. Can I partition data by month (or quarter), without that month being part of PRIMARY KEY?\nIf this question sounds weird, I am asking because MySQL enforces this, which does not fit my data. If I can keep my primary key to be the ID that I want (which comes with data), but still partition it by month, I will be EXTREMELY happy.\n\n> However, while an hour is fine, two weeks per query is NOT fine.\n\nWhat's really, really, really useful are two things: EXPLAIN, and this\nmailing list. :) Seriously, run EXPLAIN on your queries before you run\nthem and see if how the query is going to be executed makes sense.\nHere's a real easy hint: if it says \"External Sort\" and has big numbers,\ncome talk to us here- that's about one of the worst things you can\npossibly do. Of course, PG's going to avoid doing that, but you may\nhave written a query (unintentionally) which forces PG to do a sort, or\nsomething else.\nVery good, thanks \n> I have a server with about 18 TB of storage and 48 GB of RAM, and 12\n> CPU cores.\n\nIf you partition up your data and don't mind things running in different\ntransactions, you can definitely get a speed boost with PG by running\nthings in parallel. PG will handle that very well, in fact, if two\nqueries are running against the same table, PG will actually combine\nthem and only actually read the data from disk once.\n\n> I cannot shell out $47,000 per CPU for Oracle for this project.\n\nThe above data warehouse was migrated from an Oracle-based system. :)\nI am wondering, why? \n> To be more specific, the batch queries that I would do, I hope,\n> would either use small JOINS of a small dataset to a large dataset, or\n> just SELECTS from one big table.\n\nMake sure that you set your 'work_mem' correctly- PG will use that to\nfigure out if it can hash the small table (you want that to happen,\ntrust me..). If you do end up having sorts, it'll also use the work_mem\nvalue to figure out how much memory to use for sorting.\nI could, say, set work_mem to 30 GB? (64 bit linux) \n\n> So... Can Postgres support a 5-10 TB database with the use pattern\n> stated above?\n\nYes, certainly.\nthat's great to know.i \n Thanks,\n\n Stephen\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\n\niEYEARECAAYFAk5tPc8ACgkQrzgMPqB3kigtSgCffwEmi3AD6Ryff7qZyQYieyKQ\njhoAoJDFC1snQmwCIBUjwlC6WVRyAOkn\n=LPtP\n-----END PGP SIGNATURE-----",
"msg_date": "Sun, 11 Sep 2011 18:16:36 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 1:16 AM, Igor Chudov <[email protected]> wrote:\n> I could, say, set work_mem to 30 GB? (64 bit linux)\n\nI don't think you'd want that. Remember, work_mem is the amount of\nmemory *per sort*.\nQueries can request several times that much memory, once per sort they\nneed to perform.\n\nYou can set it really high, but not 60% of your RAM - that wouldn't be wise.\n",
"msg_date": "Mon, 12 Sep 2011 01:50:09 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "* Igor Chudov ([email protected]) wrote:\n> Can I partition data by month (or quarter), without that month being part of\n> PRIMARY KEY?\n\nThe way partitioning works in PG is by using CHECK constraints. Not\nsure if you're familiar with those (not sure if MySQL has them), so\nhere's a quick example:\n\nCreate a parent table. Then create two tables which inherit from that\nparent table (this is more of an implementation detail than anything\nelse, the parent table is always empty, it's just there to be the\nsingle, combined, table that you run your select queries against). On\neach of the two 'child' tables, create a CHECK constraint. On table1,\nyou do: \n alter table table1 add check (date < '2000-01-01');\nOn table2, you do:\n alter table table2 add check (date >= '2000-01-01');\n\nOnce those are done, you can query against the 'parent' table with\nsomething like:\nselect * from parent where date = '2010-01-01';\n\nAnd PG will realize it only has to look at table2 to get the results for\nthat query. This means the partitioning can be more-or-less any check\nconstraint that will be satisfied by the data in the table (and PG will\ncheck/enforce this) and that PG can figure out will eliminate a partition\nfrom possibly having the data that matches the request.\n\nTechnically, this means that you could have all kinds of different ways\nyour data is split across the partitions, but remember that all the\nconstraints have to actually be TRUE. :) Typically, people do split\nbased on the PK, but it's not required (realize that PG doesn't support\ncross-table PKs, so if you don't have CHECK constraints which make sure\nthat the tables don't cover the same PK value, you could end up with\nduplicate values across the tables...).\n\n> If this question sounds weird, I am asking because MySQL enforces this,\n> which does not fit my data.\n\nThat part is a little strange..\n\n> If I can keep my primary key to be the ID that I want (which comes with\n> data), but still partition it by month, I will be EXTREMELY happy.\n\nAs I said above, the actual PK is going to be independent and in the\nbase/child tables. That said, yes, you could have the PK in each table\nbe whatever you want and you use month to partition the 'main' table.\nYou then have to come up with some other way to make sure your PK is\nenforced, however, or figure out a way to deal with things if it's not.\nBased on what you've been describing, I'm afraid you'd have to actually\nsearch all the partitions for a given ID on an update, to figure out if\nyou're doing an UPDATE or an INSERT... Unless, of course, the month is\nincluded in the PK somewhere, or is in the incoming data and you can be\n100% confident that the incoming data is never wrong.. :)\n\n> I am wondering, why?\n\nCost, and we had a real hard time (this was a while ago..) getting\nOracle to run decently on Linux, and the Sun gear was just too damn\nexpensive. Also, ease of maintenance- it takes a LOT less effort to\nkeep a PG database set up and running smoothly than an Oracle one, imv.\n\n> I could, say, set work_mem to 30 GB? (64 bit linux)\n\nYou can, but you have to be careful with it, because PG will think it\ncan use 30GB for EACH sort in a given query, and in EACH hash in a given\nquery. What I would recommend is setting the default to something like\n256MB and then looking at specific queries and bumping it up for those\nqueries when it's clear that it'll help the query and won't cause the\nsystem to go into swap. 
Note that you can set work_mem for a given\nsession after you connect to the database, just do:\n\nset work_mem = '1GB';\n\nin your session before running other queries. Doing that won't impact\nother sessions.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 11 Sep 2011 22:28:06 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
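A minimal, self-contained version of the layout Stephen describes, with hypothetical table and column names (measurements, mdate); the original message only shows the CHECK constraints. constraint_exclusion defaults to 'partition' in 8.4 and later, which is what lets the planner skip child tables.

-- The parent stays empty; the children carry the data and the CHECK constraints.
CREATE TABLE measurements (
    id     bigint  NOT NULL,
    mdate  date    NOT NULL,
    value  numeric
);

CREATE TABLE measurements_old (
    CHECK (mdate <  DATE '2000-01-01')
) INHERITS (measurements);

CREATE TABLE measurements_new (
    CHECK (mdate >= DATE '2000-01-01')
) INHERITS (measurements);

-- Primary keys are per child table; PG has no cross-partition primary key.
ALTER TABLE measurements_old ADD PRIMARY KEY (id);
ALTER TABLE measurements_new ADD PRIMARY KEY (id);

-- With constraint_exclusion = partition, this touches only measurements_new:
SELECT * FROM measurements WHERE mdate = DATE '2010-01-01';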
{
"msg_contents": "* Claudio Freire ([email protected]) wrote:\n> I don't think you'd want that. Remember, work_mem is the amount of\n> memory *per sort*.\n> Queries can request several times that much memory, once per sort they\n> need to perform.\n> \n> You can set it really high, but not 60% of your RAM - that wouldn't be wise.\n\nOh, I dunno.. It's only used by the planner, so sometimes you have to\nbump it up, especially when PG thinks the number of rows returned from\nsomething will be a lot more than it really will be. :)\n\n/me has certain queries where it's been set to 100GB... ;)\n\nI agree that it shouldn't be the default, however. That's asking for\ntrouble. Do it for the specific queries that need it.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 11 Sep 2011 22:29:26 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "Hi,\n\nOn 12 September 2011 12:28, Stephen Frost <[email protected]> wrote:\n> Once those are done, you can query against the 'parent' table with\n> something like:\n> select * from parent where date = '2010-01-01';\n>\n> And PG will realize it only has to look at table2 to get the results for\n> that query. This means the partitioning can be more-or-less any check\n> constraint that will be satisfied by the data in the table (and PG will\n> check/enforce this) and that PG can figure out will eliminate a partition\n> from possibly having the data that matches the request.\n\nTheory is nice but there are few gotchas (in 8.4) :\n\n- planner can use constant expressions only. You will get scans across\nall partitions when you use function (like now(), immutable function\nwith constant arguments), sub query (like part_col = (select x from\n...) .. ) or anything which can't be evaluated to constat during query\nplanning.\n\n- partitions constraints are not \"pushed to joins\" (assuming tables\npartitioned by primary key):\nselect ... from X left join Y on X.primary_key = Y.primary_key where\npart_col >= ... and X.primary_key >= .,, and X.primary_key < ...\nmust be rewritten like\nselect ... from X\nleft join Y on X.primary_key = Y.primary_key and X.primary_key >= .,,\nand Y.primary_key < ...\nwhere X.primary_key >= .,, and X.primary_key < ...\nin order to avoid scan entire Y table (not only relevant partitions)\n\n- ORDER BY / LIMIT X issue fixed in 9.1 (Allow inheritance table scans\nto return meaningfully-sorted results.\n\nMoreover all queries should have 'WHERE' on column which is used for\npartitioning otherwise partitioning is not very useful (yes, it could\nsimplify data management -- drop partition vs delete from X where\npart_col between A and B)\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Mon, 12 Sep 2011 12:56:37 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
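Spelling out the second gotcha above against the hypothetical measurements layout sketched after Stephen's partitioning note: the range predicate has to be repeated inside the join condition, otherwise only the outer table benefits from partition exclusion. The joined table (readings) is an assumption for illustration and is taken to be partitioned on mdate the same way.

-- Without the extra ON conditions, every readings partition would be scanned.
SELECT m.id, m.value, r.value
  FROM measurements m
  LEFT JOIN readings r
    ON r.id = m.id
   AND r.mdate >= DATE '2000-01-01' AND r.mdate < DATE '2001-01-01'
 WHERE m.mdate >= DATE '2000-01-01' AND m.mdate < DATE '2001-01-01';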
{
"msg_contents": "On 09/11/2011 09:44 AM, Claudio Freire wrote:\n\n> And Andy is right, you'll have a lot less space. If raid 10 doesn't\n> give you enough room, just leave two spare drives for a raid 0\n> temporary partition. That will be at least twice as fast as doing\n> temporary tables on the raid 6.\n\nAlternatively, throw a lot of memory at the system and point the temp \nspace at /dev/shm. We've had really good luck doing that here, to avoid \nexcessive writes to our NVRAM PCIe cards. Make sure the transaction logs \n(and any archives) get written to a separate LUN (ideally on a separate \ncontroller) for even more win.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 11:09:44 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 09/11/2011 12:02 PM, Marti Raudsepp wrote:\n\n> Which brings me to another important point: don't do lots of small\n> write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides\n> being inefficient, they introduce a big maintenance burden.\n\nI'd like to second this. Before a notable application overhaul, we were \nhandling about 300-million transactions per day (250M of that was over a \n6-hour period). To avoid the risk of mid-day vacuum-freeze, we disabled \nautovacuum and run a nightly vacuum over the entire database. And that \nwas *after* bumping autovacuum_freeze_max_age to 600-million.\n\nYou do *not* want to screw with that if you don't have to, and a setting \nof 600M is about 1/3 of the reasonable boundary there. If not for the \nforced autovacuums, a database with this much traffic would be corrupt \nin less than a week. We've managed to cut that transaction traffic by \n60%, and it greatly improved the database's overall health.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 11:22:55 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
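The maintenance approach Shaun describes, shown compactly; the values come from his message and are workload-specific rather than general recommendations, and the nightly statement is an assumption about how such a job is usually scheduled.

-- postgresql.conf settings described above (illustrative only):
--   autovacuum = off
--   autovacuum_freeze_max_age = 600000000
-- Nightly, database-wide vacuum (run from cron or similar); adding FREEZE
-- would freeze old tuples more aggressively and push wraparound further out.
VACUUM ANALYZE;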
{
"msg_contents": "On 09/11/2011 09:27 AM, Claudio Freire wrote:\n\n> I have used slony to do database migration. It is a pain to set up,\n> but it saves you hours of downtime.\n\nI've had to shoot this option down in two separate upgrade scenarios in \ntwo different companies. Theoretically it's possible, but slony is based \non triggers. If you have an OLTP database with frequent writes, that \noverhead (firing the trigger, storing the replicated data, reading the \nreplication log, traffic to the upgrade node) can literally kill your \napplication. Downtime is one thing, but maintenance windows can be \nplanned. Ruining application performance for an undetermined length of \ntime is probably worse.\n\nThankfully 8.4 added pg_migrator/pg_upgrade, so that kind of pain is \nprobably over for the most part. But even without it, 8.4 and above have \nparallel restore, which can drop upgrade times down to a fraction of \ntheir former length.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 11:36:22 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 11.09.2011 22:10, Scott Marlowe wrote:\n\n> Another data point. We had a big Oracle installation at my last job,\n> and OLAP queries were killing it midday, so I built a simple\n> replication system to grab rows from the big iron Oracle SUN box and\n> shove into a single core P IV 2.xGHz machine with 4 120G SATA drives\n> in SW RAID-10.\n>\n> That machine handily beat the big iron Oracle machine at OLAP queries,\n> running in 20 minutes what was taking well over an hour for the big\n> Oracle machine to do, even during its (Oracle machine) off peak load\n> times.\n\nUm, that sounds as if the SUN setup was really bad. Do you remember any \ndetails about the drive configuration there?\n\nKind regards\n\n\trobert\n\n\n\n",
"msg_date": "Mon, 12 Sep 2011 19:04:22 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 11.09.2011 19:02, Marti Raudsepp wrote:\n> On Sun, Sep 11, 2011 at 17:23, Andy Colson<[email protected]> wrote:\n>> On 09/11/2011 08:59 AM, Igor Chudov wrote:\n>>> By the way, does that INSERT UPDATE functionality or something like this exist in Postgres?\n>> You have two options:\n>> 1) write a function like:\n>> create function doinsert(_id integer, _value text) returns void as\n>> 2) use two sql statements:\n>\n> Unfortunately both of these options have caveats. Depending on your\n> I/O speed, you might need to use multiple loader threads to saturate\n> the write bandwidth.\n>\n> However, neither option is safe from race conditions. If you need to\n> load data from multiple threads at the same time, they won't see each\n> other's inserts (until commit) and thus cause unique violations. If\n> you could somehow partition their operation by some key, so threads\n> are guaranteed not to conflict each other, then that would be perfect.\n> The 2nd option given by Andy is probably faster.\n>\n> You *could* code a race-condition-safe function, but that would be a\n> no-go on a data warehouse, since each call needs a separate\n> subtransaction which involves allocating a transaction ID.\n\nWouldn't it be sufficient to reverse order for race condition safety? \nPseudo code:\n\nbegin\n insert ...\ncatch\n update ...\n if not found error\nend\n\nSpeed is another matter though...\n\nKind regards\n\n\trobert\n\n\n",
"msg_date": "Mon, 12 Sep 2011 19:15:35 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 9/12/2011 12:15 PM, Robert Klemme wrote:\n> On 11.09.2011 19:02, Marti Raudsepp wrote:\n>> On Sun, Sep 11, 2011 at 17:23, Andy Colson<[email protected]> wrote:\n>>> On 09/11/2011 08:59 AM, Igor Chudov wrote:\n>>>> By the way, does that INSERT UPDATE functionality or something like\n>>>> this exist in Postgres?\n>>> You have two options:\n>>> 1) write a function like:\n>>> create function doinsert(_id integer, _value text) returns void as\n>>> 2) use two sql statements:\n>>\n>> Unfortunately both of these options have caveats. Depending on your\n>> I/O speed, you might need to use multiple loader threads to saturate\n>> the write bandwidth.\n>>\n>> However, neither option is safe from race conditions. If you need to\n>> load data from multiple threads at the same time, they won't see each\n>> other's inserts (until commit) and thus cause unique violations. If\n>> you could somehow partition their operation by some key, so threads\n>> are guaranteed not to conflict each other, then that would be perfect.\n>> The 2nd option given by Andy is probably faster.\n>>\n>> You *could* code a race-condition-safe function, but that would be a\n>> no-go on a data warehouse, since each call needs a separate\n>> subtransaction which involves allocating a transaction ID.\n>\n> Wouldn't it be sufficient to reverse order for race condition safety?\n> Pseudo code:\n>\n> begin\n> insert ...\n> catch\n> update ...\n> if not found error\n> end\n>\n> Speed is another matter though...\n>\n> Kind regards\n>\n> robert\n>\n>\n>\n\nNo, I dont think so, if you had two loaders, both would start a \ntransaction, then neither could see what the other was doing. There are \ntransaction isolation levels, but they are like playing with fire. (in \nmy opinion).\n\n-Andy\n",
"msg_date": "Mon, 12 Sep 2011 12:22:48 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 11:04 AM, Robert Klemme\n<[email protected]> wrote:\n> On 11.09.2011 22:10, Scott Marlowe wrote:\n>\n>> Another data point. We had a big Oracle installation at my last job,\n>> and OLAP queries were killing it midday, so I built a simple\n>> replication system to grab rows from the big iron Oracle SUN box and\n>> shove into a single core P IV 2.xGHz machine with 4 120G SATA drives\n>> in SW RAID-10.\n>>\n>> That machine handily beat the big iron Oracle machine at OLAP queries,\n>> running in 20 minutes what was taking well over an hour for the big\n>> Oracle machine to do, even during its (Oracle machine) off peak load\n>> times.\n>\n> Um, that sounds as if the SUN setup was really bad. Do you remember any\n> details about the drive configuration there?\n\nIt was actually setup quite well. A very fast SAN with individual\ndrive arrays etc. It was VERY fast at transactional throughput. BUT\nit was not setup for massive OLAP work. The drives that housed the\nstatistical data we were running OLAP against were the slowest in the\nset, since they were made to mostly just take in a small amount of\ndata each minute from the java servers. Originally the stats had been\non a pg server in production and very fast, but some political\ndecision moved it onto the Oracle server. The Oracle DBA wasn't any\nhappier with this move than me, btw.\n",
"msg_date": "Mon, 12 Sep 2011 13:08:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 10:22 AM, Shaun Thomas <[email protected]> wrote:\n> On 09/11/2011 12:02 PM, Marti Raudsepp wrote:\n>\n>> Which brings me to another important point: don't do lots of small\n>> write transactions, SAVEPOINTs or PL/pgSQL subtransactions. Besides\n>> being inefficient, they introduce a big maintenance burden.\n>\n> I'd like to second this. Before a notable application overhaul, we were\n> handling about 300-million transactions per day (250M of that was over a\n> 6-hour period). To avoid the risk of mid-day vacuum-freeze, we disabled\n> autovacuum and run a nightly vacuum over the entire database. And that was\n> *after* bumping autovacuum_freeze_max_age to 600-million.\n>\n> You do *not* want to screw with that if you don't have to, and a setting of\n> 600M is about 1/3 of the reasonable boundary there. If not for the forced\n> autovacuums, a database with this much traffic would be corrupt in less than\n> a week. We've managed to cut that transaction traffic by 60%, and it greatly\n> improved the database's overall health.\n\nI put it to you that your hardware has problems if you have a pg db\nthat's corrupting from having too much vacuum activity. I've had\nexactly one pg corruption problem in the past, and it was a bad SATA\nhard drive on a stats server. I have four 48 core opterons running\nquite hard during the day, have autovacuum on and VERY aggresively\ntuned and have had zero corruption issues in over 3 years of hard\nrunning.\n",
"msg_date": "Mon, 12 Sep 2011 13:48:57 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 09/12/2011 02:48 PM, Scott Marlowe wrote:\n\n> I put it to you that your hardware has problems if you have a pg db\n> that's corrupting from having too much vacuum activity.\n\nWhat? No. We optimized by basically forcing autovacuum to never run \nduring our active periods. We never actually encountered wrap-around \ncorruption. I was just saying that 600M is a relatively high setting for \nautovacuum_freeze_max_age. :)\n\nI was alluding to the fact that if a DBA had his system running for a \nweek at our transaction level, and PG didn't have forced auto vacuum, \nand their maintenance lapsed even slightly, they could end up with a \ncorrupt database. Not too far-fetched for someone coming from MySQL, really.\n\nOur problem is we run a financial site, and the front-end very \naggressively monitors network and database timeouts. The limit is \nsufficiently low that a vacuum would cause enough IO to trigger \napplication timeouts, even with vacuum_cost_delay. And of course, \nsetting vacuum_cost_delay too high quickly triples or quadruples vacuum \ntimes. Now that we're using FusionIO cards, I've been thinking about \nturning autovacuum back on, but I want to run some tests first.\n\nMy point stands, though. Don't go crazy with transactions until you know \nyour config can stand up to it, and reduce if possible. We found some \ntweak points that drastically reduced transaction count with no \ndetrimental effect on the app itself, so we jumped on them.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 15:04:40 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 23:04, Shaun Thomas <[email protected]> wrote:\n> I was alluding to the fact that if a DBA had his system running for a week\n> at our transaction level, and PG didn't have forced auto vacuum, and their\n> maintenance lapsed even slightly, they could end up with a corrupt database.\n\nIt doesn't actually corrupt your database. If you manage to hit the\nwraparound age, PostgreSQL disallows new connections and tells you to\nrun a VACUUM from a standalone backend. (But that should never happen\ndue to the forced vacuum freeze processes)\n\nRegards,\nMarti\n",
"msg_date": "Mon, 12 Sep 2011 23:19:29 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 2:04 PM, Shaun Thomas <[email protected]> wrote:\n> On 09/12/2011 02:48 PM, Scott Marlowe wrote:\n>\n>> I put it to you that your hardware has problems if you have a pg db\n>> that's corrupting from having too much vacuum activity.\n>\n> What? No. We optimized by basically forcing autovacuum to never run during\n> our active periods. We never actually encountered wrap-around corruption. I\n> was just saying that 600M is a relatively high setting for\n> autovacuum_freeze_max_age. :)\n\nYou don't get corruption from wrap around, you get a database that\nstops and tells you to run a vacuum by hand on a single user backend\nand won't come up until you do. You throw around the word corruption\na lot. The PostgreSQL team works REALLY hard to prevent any kind of\ncorruption scenario from rearing its ugly head, so when the word\ncorruption pops up I start to wonder about the system (hardware wise)\nsomeone is using, since only killing the postmaster by hand, then\ndeleting the interlock file and starting a new postmaster while old\npostgres children are still active is just about the only way to\ncorrupt pgsql, short of using vi on one of the files in\n/data/base/xxx/yyy etc.\n\n>\n> I was alluding to the fact that if a DBA had his system running for a week\n> at our transaction level, and PG didn't have forced auto vacuum, and their\n> maintenance lapsed even slightly, they could end up with a corrupt database.\n> Not too far-fetched for someone coming from MySQL, really.\n>\n> Our problem is we run a financial site, and the front-end very aggressively\n> monitors network and database timeouts. The limit is sufficiently low that a\n> vacuum would cause enough IO to trigger application timeouts, even with\n> vacuum_cost_delay. And of course, setting vacuum_cost_delay too high quickly\n> triples or quadruples vacuum times. Now that we're using FusionIO cards,\n> I've been thinking about turning autovacuum back on, but I want to run some\n> tests first.\n>\n> My point stands, though. Don't go crazy with transactions until you know\n> your config can stand up to it, and reduce if possible. We found some tweak\n> points that drastically reduced transaction count with no detrimental effect\n> on the app itself, so we jumped on them.\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email-disclaimer/ for terms and conditions related\n> to this email\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 12 Sep 2011 14:44:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 09/12/2011 03:44 PM, Scott Marlowe wrote:\n\n> The PostgreSQL team works REALLY hard to prevent any kind of\n> corruption scenario from rearing its ugly head, so when the word\n> corruption pops up I start to wonder about the system (hardware\n> wise) someone is using,\n\n\nYou've apparently never used early versions of EnterpriseDB. ;)\n\nKidding aside, it's apparently been a while since I read that particular \npart of the manual. The error I *was* familiar with was from the 8.0 manual:\n\n\"WARNING: some databases have not been vacuumed in 1613770184 transactions\nHINT: Better vacuum them within 533713463 transactions, or you may have \na wraparound failure.\"\n\nEver since the early days, I've been so paranoid about regular \nvacuuming, I'm probably still a little overcautious.\n\nSo, my bad. Having a database down for a few hours isn't exactly \ndesirable, but it's certainly not corruption. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 15:55:43 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Mon, Sep 12, 2011 at 2:55 PM, Shaun Thomas <[email protected]> wrote:\n> On 09/12/2011 03:44 PM, Scott Marlowe wrote:\n>\n>> The PostgreSQL team works REALLY hard to prevent any kind of\n>> corruption scenario from rearing its ugly head, so when the word\n>> corruption pops up I start to wonder about the system (hardware\n>> wise) someone is using,\n>\n>\n> You've apparently never used early versions of EnterpriseDB. ;)\n>\n> Kidding aside, it's apparently been a while since I read that particular\n> part of the manual. The error I *was* familiar with was from the 8.0 manual:\n>\n> \"WARNING: some databases have not been vacuumed in 1613770184 transactions\n> HINT: Better vacuum them within 533713463 transactions, or you may have a\n> wraparound failure.\"\n>\n> Ever since the early days, I've been so paranoid about regular vacuuming,\n> I'm probably still a little overcautious.\n>\n> So, my bad. Having a database down for a few hours isn't exactly desirable,\n> but it's certainly not corruption. :)\n\nNo biggie, more a question of semantics. Just a trigger word for me.\nI started with pgsql 6.5.2 so I know ALL ABOUT corruption. hehe.\n",
"msg_date": "Mon, 12 Sep 2011 15:00:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 12.09.2011 19:22, Andy Colson wrote:\n> On 9/12/2011 12:15 PM, Robert Klemme wrote:\n>> On 11.09.2011 19:02, Marti Raudsepp wrote:\n>>> On Sun, Sep 11, 2011 at 17:23, Andy Colson<[email protected]> wrote:\n>>>> On 09/11/2011 08:59 AM, Igor Chudov wrote:\n>>>>> By the way, does that INSERT UPDATE functionality or something like\n>>>>> this exist in Postgres?\n>>>> You have two options:\n>>>> 1) write a function like:\n>>>> create function doinsert(_id integer, _value text) returns void as\n>>>> 2) use two sql statements:\n>>>\n>>> Unfortunately both of these options have caveats. Depending on your\n>>> I/O speed, you might need to use multiple loader threads to saturate\n>>> the write bandwidth.\n>>>\n>>> However, neither option is safe from race conditions. If you need to\n>>> load data from multiple threads at the same time, they won't see each\n>>> other's inserts (until commit) and thus cause unique violations. If\n>>> you could somehow partition their operation by some key, so threads\n>>> are guaranteed not to conflict each other, then that would be perfect.\n>>> The 2nd option given by Andy is probably faster.\n>>>\n>>> You *could* code a race-condition-safe function, but that would be a\n>>> no-go on a data warehouse, since each call needs a separate\n>>> subtransaction which involves allocating a transaction ID.\n>>\n>> Wouldn't it be sufficient to reverse order for race condition safety?\n>> Pseudo code:\n>>\n>> begin\n>> insert ...\n>> catch\n>> update ...\n>> if not found error\n>> end\n>>\n>> Speed is another matter though...\n\n> No, I dont think so, if you had two loaders, both would start a\n> transaction, then neither could see what the other was doing.\n\nIt depends. But the point is that not both INSERTS can succeed. The \none which fails will attempt the UPDATE and - depending on isolation \nlevel and DB implementation - will be blocked or fail.\n\nIn the case of PG this particular example will work:\n\n1. TX inserts new PK row\n2. TX tries to insert same PK row => blocks\n1. TX commits\n2. TX fails with PK violation\n2. TX does the update (if the error is caught)\n\n> There are\n> transaction isolation levels, but they are like playing with fire. (in\n> my opinion).\n\nYou make them sound like witchcraft. But they are clearly defined - \neven standardized. Granted, different RDBMS might implement them in \ndifferent ways - here's PG's view of TX isolation:\n\nhttp://www.postgresql.org/docs/8.4/interactive/transaction-iso.html\n\nIn my opinion anybody working with RDBMS should make himself familiar \nwith this concept - at least know about it - because it is one of the \nfundamental features of RDBMS and certainly needs consideration in \napplications with highly concurrent DB activity.\n\nKind regards\n\n\trobert\n\n",
"msg_date": "Mon, 12 Sep 2011 23:26:10 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
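A minimal PL/pgSQL sketch of the insert-first, update-on-conflict sequence Robert describes, reusing the table name from Andy's earlier example; the function name is hypothetical. Note that a BEGIN ... EXCEPTION block establishes an internal savepoint, which is exactly the per-row subtransaction cost Marti warns about below.

-- Sketch only; each call consumes a subtransaction because of the EXCEPTION block.
CREATE FUNCTION upsert_row(_id integer, _value text) RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO thetable (id, value) VALUES (_id, _value);
    EXCEPTION WHEN unique_violation THEN
        UPDATE thetable SET value = _value WHERE id = _id;
    END;
END;
$$ LANGUAGE plpgsql;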
{
"msg_contents": "On Tue, Sep 13, 2011 at 00:26, Robert Klemme <[email protected]> wrote:\n> In the case of PG this particular example will work:\n> 1. TX inserts new PK row\n> 2. TX tries to insert same PK row => blocks\n> 1. TX commits\n> 2. TX fails with PK violation\n> 2. TX does the update (if the error is caught)\n\nThat goes against the point I was making in my earlier comment. In\norder to implement this error-catching logic, you'll have to allocate\na new subtransaction (transaction ID) for EVERY ROW you insert. If\nyou're going to be loading billions of rows this way, you will invoke\nthe wrath of the \"vacuum freeze\" process, which will seq-scan all\nolder tables and re-write every row that it hasn't touched yet. You'll\nsurvive it if your database is a few GB in size, but in the terabyte\nland that's unacceptable. Transaction IDs are a scarce resource there.\n\nIn addition, such blocking will limit the parallelism you will get\nfrom multiple inserters.\n\nRegards,\nMarti\n",
"msg_date": "Tue, 13 Sep 2011 18:13:53 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Tue, Sep 13, 2011 at 5:13 PM, Marti Raudsepp <[email protected]> wrote:\n> On Tue, Sep 13, 2011 at 00:26, Robert Klemme <[email protected]> wrote:\n>> In the case of PG this particular example will work:\n>> 1. TX inserts new PK row\n>> 2. TX tries to insert same PK row => blocks\n>> 1. TX commits\n>> 2. TX fails with PK violation\n>> 2. TX does the update (if the error is caught)\n>\n> That goes against the point I was making in my earlier comment. In\n> order to implement this error-catching logic, you'll have to allocate\n> a new subtransaction (transaction ID) for EVERY ROW you insert.\n\nI don't think so. You only need to catch the error (see attachment).\nOr does this create a sub transaction?\n\n> If\n> you're going to be loading billions of rows this way, you will invoke\n> the wrath of the \"vacuum freeze\" process, which will seq-scan all\n> older tables and re-write every row that it hasn't touched yet. You'll\n> survive it if your database is a few GB in size, but in the terabyte\n> land that's unacceptable. Transaction IDs are a scarce resource there.\n\nCertainly. But it's not needed as far as I can see.\n\n> In addition, such blocking will limit the parallelism you will get\n> from multiple inserters.\n\nYes, I mentioned the speed issue. But regardless of the solution for\nMySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\nwill have the locking problem anyhow if you plan to insert\nconcurrently into the same table and be robust.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/",
"msg_date": "Tue, 13 Sep 2011 18:34:09 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "Hi,\n\n> (see attachment)\n\nunder high concurency you may expect that your data is already in.\nIn such a case you better do nothing at all:\n\nbegin\n \n select dat=a_dat from t where id=a_id into test:\n \n if test is null then\n \n begin\n \n insert into t (id, dat) values (a_id, a_dat);\n exception\n when unique_violation then\n update t set dat = a_dat where id = a_id and dat <> a_dat;\n return 0;\n \n end;\n \n elsif not test then\n \n update t set dat = a_dat where id = a_id;\n return 0;\n \n end if;\n\n return 1;\n\n\nbest regards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Robert Klemme\nGesendet: Di 9/13/2011 6:34\nAn: Marti Raudsepp\nCc: [email protected]\nBetreff: Re: [PERFORM] Postgres for a \"data warehouse\", 5-10 TB\n \nOn Tue, Sep 13, 2011 at 5:13 PM, Marti Raudsepp <[email protected]> wrote:\n> On Tue, Sep 13, 2011 at 00:26, Robert Klemme <[email protected]> wrote:\n>> In the case of PG this particular example will work:\n>> 1. TX inserts new PK row\n>> 2. TX tries to insert same PK row => blocks\n>> 1. TX commits\n>> 2. TX fails with PK violation\n>> 2. TX does the update (if the error is caught)\n>\n> That goes against the point I was making in my earlier comment. In\n> order to implement this error-catching logic, you'll have to allocate\n> a new subtransaction (transaction ID) for EVERY ROW you insert.\n\nI don't think so. You only need to catch the error (see attachment).\nOr does this create a sub transaction?\n\n> If\n> you're going to be loading billions of rows this way, you will invoke\n> the wrath of the \"vacuum freeze\" process, which will seq-scan all\n> older tables and re-write every row that it hasn't touched yet. You'll\n> survive it if your database is a few GB in size, but in the terabyte\n> land that's unacceptable. Transaction IDs are a scarce resource there.\n\nCertainly. But it's not needed as far as I can see.\n\n> In addition, such blocking will limit the parallelism you will get\n> from multiple inserters.\n\nYes, I mentioned the speed issue. But regardless of the solution for\nMySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\nwill have the locking problem anyhow if you plan to insert\nconcurrently into the same table and be robust.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\n\n\n\n\n\nAW: [PERFORM] Postgres for a \"data warehouse\", 5-10 TB\n\n\n\nHi,\n\n> (see attachment)\n\nunder high concurency you may expect that your data is already in.\nIn such a case you better do nothing at all:\n\nbegin\n \n select dat=a_dat from t where id=a_id into test:\n \n if test is null then\n \n begin\n \n insert into t (id, dat) values (a_id, a_dat);\n exception\n when unique_violation then\n update t set dat = a_dat where id = a_id and dat <> a_dat;\n return 0;\n \n end;\n \n elsif not test then\n \n update t set dat = a_dat where id = a_id;\n return 0;\n \n end if;\n\n return 1;\n\n\nbest regards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Robert Klemme\nGesendet: Di 9/13/2011 6:34\nAn: Marti Raudsepp\nCc: [email protected]\nBetreff: Re: [PERFORM] Postgres for a \"data warehouse\", 5-10 TB\n\nOn Tue, Sep 13, 2011 at 5:13 PM, Marti Raudsepp <[email protected]> wrote:\n> On Tue, Sep 13, 2011 at 00:26, Robert Klemme <[email protected]> wrote:\n>> In the case of PG this particular example will work:\n>> 1. TX inserts new PK row\n>> 2. 
TX tries to insert same PK row => blocks\n>> 1. TX commits\n>> 2. TX fails with PK violation\n>> 2. TX does the update (if the error is caught)\n>\n> That goes against the point I was making in my earlier comment. In\n> order to implement this error-catching logic, you'll have to allocate\n> a new subtransaction (transaction ID) for EVERY ROW you insert.\n\nI don't think so. You only need to catch the error (see attachment).\nOr does this create a sub transaction?\n\n> If\n> you're going to be loading billions of rows this way, you will invoke\n> the wrath of the \"vacuum freeze\" process, which will seq-scan all\n> older tables and re-write every row that it hasn't touched yet. You'll\n> survive it if your database is a few GB in size, but in the terabyte\n> land that's unacceptable. Transaction IDs are a scarce resource there.\n\nCertainly. But it's not needed as far as I can see.\n\n> In addition, such blocking will limit the parallelism you will get\n> from multiple inserters.\n\nYes, I mentioned the speed issue. But regardless of the solution for\nMySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\nwill have the locking problem anyhow if you plan to insert\nconcurrently into the same table and be robust.\n\nKind regards\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/",
"msg_date": "Tue, 13 Sep 2011 19:54:19 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Tue, Sep 13, 2011 at 19:34, Robert Klemme <[email protected]> wrote:\n> I don't think so. You only need to catch the error (see attachment).\n> Or does this create a sub transaction?\n\nYes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\nSAVEPOINT it can roll back to in case of an error.\n\n> Yes, I mentioned the speed issue. But regardless of the solution for\n> MySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\n> will have the locking problem anyhow if you plan to insert\n> concurrently into the same table and be robust.\n\nIn a mass-loading application you can often divide the work between\nthreads in a manner that doesn't cause conflicts.\n\nFor example, if the unique key is foobar_id and you have 4 threads,\nthread 0 will handle rows where (foobar_id%4)=0, thread 1 takes\n(foobar_id%4)=1 etc. Or potentially hash foobar_id before dividing the\nwork.\n\nI already suggested this in my original post.\n\nRegards,\nMarti\n",
"msg_date": "Tue, 13 Sep 2011 21:11:31 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
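A sketch of the work-partitioning idea Marti describes, with assumed names throughout (staging, foobar and val are placeholders, not anything from this thread's schemas). Each loader process touches a disjoint slice of the key space, so their INSERTs cannot collide on foobar_id:

-- run by loader process 0; process 1 uses "= 1", process 2 "= 2", and so on
INSERT INTO foobar (foobar_id, val)
SELECT s.foobar_id, s.val
FROM staging s
WHERE s.foobar_id % 4 = 0;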
{
"msg_contents": "I do not need to do insert updates from many threads. I want to do it from\none thread.\n\nMy current MySQL architecture is that I have a table with same layout as the\nmain one, to hold new and updated objects.\n\nWhen there is enough objects, I begin a big INSERT SELECT ... ON DUPLICATE\nKEY UPDATE and stuff that into the master table.\n\ni\n\nOn Tue, Sep 13, 2011 at 1:11 PM, Marti Raudsepp <[email protected]> wrote:\n\n> On Tue, Sep 13, 2011 at 19:34, Robert Klemme <[email protected]>\n> wrote:\n> > I don't think so. You only need to catch the error (see attachment).\n> > Or does this create a sub transaction?\n>\n> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\n> SAVEPOINT it can roll back to in case of an error.\n>\n> > Yes, I mentioned the speed issue. But regardless of the solution for\n> > MySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\n> > will have the locking problem anyhow if you plan to insert\n> > concurrently into the same table and be robust.\n>\n> In a mass-loading application you can often divide the work between\n> threads in a manner that doesn't cause conflicts.\n>\n> For example, if the unique key is foobar_id and you have 4 threads,\n> thread 0 will handle rows where (foobar_id%4)=0, thread 1 takes\n> (foobar_id%4)=1 etc. Or potentially hash foobar_id before dividing the\n> work.\n>\n> I already suggested this in my original post.\n>\n> Regards,\n> Marti\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI do not need to do insert updates from many threads. I want to do it from one thread. My current MySQL architecture is that I have a table with same layout as the main one, to hold new and updated objects. \nWhen there is enough objects, I begin a big INSERT SELECT ... ON DUPLICATE KEY UPDATE and stuff that into the master table.iOn Tue, Sep 13, 2011 at 1:11 PM, Marti Raudsepp <[email protected]> wrote:\nOn Tue, Sep 13, 2011 at 19:34, Robert Klemme <[email protected]> wrote:\n\n\n\n> I don't think so. You only need to catch the error (see attachment).\n> Or does this create a sub transaction?\n\nYes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\nSAVEPOINT it can roll back to in case of an error.\n\n> Yes, I mentioned the speed issue. But regardless of the solution for\n> MySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\n> will have the locking problem anyhow if you plan to insert\n> concurrently into the same table and be robust.\n\nIn a mass-loading application you can often divide the work between\nthreads in a manner that doesn't cause conflicts.\n\nFor example, if the unique key is foobar_id and you have 4 threads,\nthread 0 will handle rows where (foobar_id%4)=0, thread 1 takes\n(foobar_id%4)=1 etc. Or potentially hash foobar_id before dividing the\nwork.\n\nI already suggested this in my original post.\n\nRegards,\nMarti\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 13 Sep 2011 13:38:32 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "Interesting debate.\n\n2011/9/13 Marti Raudsepp <[email protected]>:\n> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\n> SAVEPOINT it can roll back to in case of an error.\n\nAre you sure? In theory I always understood that there are no\n\"subtransactions\".\n\nIn fact when looking at the docs there is chapter 39.6.6. saying \"By\ndefault, any error occurring in a PL/pgSQL function aborts execution\nof the function, and indeed of the surrounding transaction as well.\nYou can trap errors and recover from them by using a BEGIN block with\nan EXCEPTION clause.\"\n(http://www.postgresql.org/docs/current/interactive/plpgsql-control-structures.html\n)\n\nSo the doc isn't totally explicit about this. But whatever: What would\nbe the the function of a subtransaction? To give the possibility to\nrecover and continue within the surrounding transaction?\n\nStefan\n\n2011/9/13 Marti Raudsepp <[email protected]>:\n> On Tue, Sep 13, 2011 at 19:34, Robert Klemme <[email protected]> wrote:\n>> I don't think so. You only need to catch the error (see attachment).\n>> Or does this create a sub transaction?\n>\n> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\n> SAVEPOINT it can roll back to in case of an error.\n>\n>> Yes, I mentioned the speed issue. But regardless of the solution for\n>> MySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\n>> will have the locking problem anyhow if you plan to insert\n>> concurrently into the same table and be robust.\n>\n> In a mass-loading application you can often divide the work between\n> threads in a manner that doesn't cause conflicts.\n>\n> For example, if the unique key is foobar_id and you have 4 threads,\n> thread 0 will handle rows where (foobar_id%4)=0, thread 1 takes\n> (foobar_id%4)=1 etc. Or potentially hash foobar_id before dividing the\n> work.\n>\n> I already suggested this in my original post.\n>\n> Regards,\n> Marti\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Tue, 13 Sep 2011 20:57:30 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On Tue, Sep 13, 2011 at 12:57 PM, Stefan Keller <[email protected]> wrote:\n> Are you sure? In theory I always understood that there are no\n> \"subtransactions\".\n\n\"subtransaction\" is just another way of saying save points / rollback.\n",
"msg_date": "Tue, 13 Sep 2011 13:27:11 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
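The same idea spelled out with an explicit savepoint, which is roughly what the PL/pgSQL EXCEPTION block amounts to. This is only an illustration against the assumed table t(id, dat); the "on error" branch has to be driven by the client, since SQL alone will not branch:

BEGIN;
SAVEPOINT before_insert;
INSERT INTO t (id, dat) VALUES (1, 'x');
-- if the INSERT raises unique_violation, the client issues:
ROLLBACK TO SAVEPOINT before_insert;
UPDATE t SET dat = 'x' WHERE id = 1;
COMMIT;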
{
"msg_contents": "On Mon, Sep 12, 2011 at 11:26:10PM +0200, Robert Klemme wrote:\n> You make them sound like witchcraft. But they are clearly defined -\n> even standardized. Granted, different RDBMS might implement them in\n> different ways - here's PG's view of TX isolation:\n> \n> http://www.postgresql.org/docs/8.4/interactive/transaction-iso.html\n\nEven better: PostgreSQL 9.1 (Released yesterday! Fresher than milk...)\nships an improved algorithm for serializable transaction isolation\nlevel:\n\n http://www.postgresql.org/docs/9.1/interactive/transaction-iso.html\n\nMore info:\n\n http://wiki.postgresql.org/wiki/Serializable\n\nCheers,\nDr. Gianni Ciolli - 2ndQuadrant Italia\nPostgreSQL Training, Services and Support\[email protected] | www.2ndquadrant.it\n",
"msg_date": "Tue, 13 Sep 2011 20:48:19 +0100",
"msg_from": "Gianni Ciolli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
{
"msg_contents": "On 13.09.2011 20:11, Marti Raudsepp wrote:\n> On Tue, Sep 13, 2011 at 19:34, Robert Klemme<[email protected]> wrote:\n>> I don't think so. You only need to catch the error (see attachment).\n>> Or does this create a sub transaction?\n>\n> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\n> SAVEPOINT it can roll back to in case of an error.\n\nOuch! Learn something new every day. Thanks for the update!\n\nhttp://www.postgresql.org/docs/8.4/interactive/plpgsql-structure.html\n\nSide note: it seems that Oracle handles this differently (i.e. no \nsubtransaction but the INSERT would be rolled back) making the pattern \npretty usable for this particular situation. Also, I have never heard \nthat TX ids are such a scarse resource over there.\n\nWould anybody think it a good idea to optionally have a BEGIN EXCEPTION \nblock without the current TX semantics? In absence of that what would \nbe a better pattern to do it (other than UPDATE and INSERT if not found)?\n\n>> Yes, I mentioned the speed issue. But regardless of the solution for\n>> MySQL's \"INSERT..ON DUPLICATE KEY UPDATE\" which Igor mentioned you\n>> will have the locking problem anyhow if you plan to insert\n>> concurrently into the same table and be robust.\n>\n> In a mass-loading application you can often divide the work between\n> threads in a manner that doesn't cause conflicts.\n\nYeah, but concurrency might not the only reason to optionally update. \nIf the data is there you might rather want to overwrite it instead of \nfailure.\n\nKind regards\n\n\trobert\n\n",
"msg_date": "Tue, 13 Sep 2011 22:22:47 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
},
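The "UPDATE first, INSERT if nothing was updated" alternative Robert mentions, again only as a sketch against the assumed table t(id, dat). It avoids the subtransaction, but the client has to check the UPDATE's row count, and two concurrent sessions can still race into a unique_violation on the INSERT:

UPDATE t SET dat = 'x' WHERE id = 1;
-- if the UPDATE reported 0 rows affected:
INSERT INTO t (id, dat) VALUES (1, 'x');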
{
"msg_contents": "On 13.09.2011 20:57, Stefan Keller wrote:\n> Interesting debate.\n\nIndeed.\n\n> 2011/9/13 Marti Raudsepp<[email protected]>:\n>> Yes, every BEGIN/EXCEPTION block creates a subtransaction -- like a\n>> SAVEPOINT it can roll back to in case of an error.\n>\n> Are you sure? In theory I always understood that there are no\n> \"subtransactions\".\n\nWhat theory are you referring to?\n\n> In fact when looking at the docs there is chapter 39.6.6. saying \"By\n> default, any error occurring in a PL/pgSQL function aborts execution\n> of the function, and indeed of the surrounding transaction as well.\n> You can trap errors and recover from them by using a BEGIN block with\n> an EXCEPTION clause.\"\n> (http://www.postgresql.org/docs/current/interactive/plpgsql-control-structures.html\n> )\n>\n> So the doc isn't totally explicit about this. But whatever: What would\n> be the the function of a subtransaction? To give the possibility to\n> recover and continue within the surrounding transaction?\n\nI find this pretty explicit:\n\nIt is important not to confuse the use of BEGIN/END for grouping \nstatements in PL/pgSQL with the similarly-named SQL commands for \ntransaction control. PL/pgSQL's BEGIN/END are only for grouping; they do \nnot start or end a transaction. Functions and trigger procedures are \nalways executed within a transaction established by an outer query — \nthey cannot start or commit that transaction, since there would be no \ncontext for them to execute in. However, a block containing an EXCEPTION \nclause effectively forms a subtransaction that can be rolled back \nwithout affecting the outer transaction. For more about that see Section \n38.6.5.\n\nhttp://www.postgresql.org/docs/8.4/interactive/plpgsql-structure.html\n\nCheers\n\n\trobert\n\n\n\n",
"msg_date": "Tue, 13 Sep 2011 22:26:04 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
}
] |
[
{
"msg_contents": "I want to thank members on this list which helped me benchmark and conclude that RAID 10 on a XFS filesystem was the way to go over what we had prior. PostgreSQL we have been using with Perl for the last 8 years and it has been nothing but outstanding for us. Things have definitely worked out much better and the writes are much much faster. \n\nSince I want the maximum performance from our new servers, I want to make sure the configuration is what is recommended. Things are running fine and queries that would take seconds prior now only take one or two. I have read a lot of guides on tweaking PostgreSQL as well as a book, however, I would like someone to just review the settings I have and let me know if it's too crazy. It's for a considerably heavy write database with a lot of calculation queries (percentages, averages, sums, etc). \n\nThis is my setup:\n\n2 x Intel E5645 (12 Core CPU total)\n64 GB Ram\nRAID 10 (/var/lib/pgsql lives on it's own RAID controller) on XFS\nPostgreSQL 9.0.4 on Debian Squeeze \nDatabase size about 200Gb. \n\nAnd in postgresql.conf:\n\nmax_connections = 200 \nshared_buffers = 8GB \ntemp_buffers = 128MB \nwork_mem = 40MB\nmaintenance_work_mem = 1GB\n\nwal_buffers = 16MB \n\neffective_cache_size = 48GB\n\nseq_page_cost = 1.0\nrandom_page_cost = 1.1\ncpu_tuple_cost = 0.1\ncpu_index_tuple_cost = 0.05\ncpu_operator_cost = 0.01\ndefault_statistics_target = 1000\n\nWith these settings, output from free -m (Megabytes):\n\n total used free shared buffers cached\nMem: 64550 56605 7945 0 0 55907\n-/+ buffers/cache: 697 63852\nSwap: 7628 6 7622\n\ntop shows:\nSwap: 7812088k total, 6788k used, 7805300k free, 57343264k cached\n\n\nAny suggestions would be awesome. \n\nThank you\n\nOgden",
"msg_date": "Sun, 11 Sep 2011 14:50:06 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL performance tweaking on new hardware"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 1:50 PM, Ogden <[email protected]> wrote:\n> I want to thank members on this list which helped me benchmark and conclude that RAID 10 on a XFS filesystem was the way to go over what we had prior. PostgreSQL we have been using with Perl for the last 8 years and it has been nothing but outstanding for us. Things have definitely worked out much better and the writes are much much faster.\n>\n> Since I want the maximum performance from our new servers, I want to make sure the configuration is what is recommended. Things are running fine and queries that would take seconds prior now only take one or two. I have read a lot of guides on tweaking PostgreSQL as well as a book, however, I would like someone to just review the settings I have and let me know if it's too crazy. It's for a considerably heavy write database with a lot of calculation queries (percentages, averages, sums, etc).\n>\n> This is my setup:\n>\n> 2 x Intel E5645 (12 Core CPU total)\n> 64 GB Ram\n> RAID 10 (/var/lib/pgsql lives on it's own RAID controller) on XFS\n> PostgreSQL 9.0.4 on Debian Squeeze\n> Database size about 200Gb.\n>\n> And in postgresql.conf:\n>\n> max_connections = 200\n> shared_buffers = 8GB\n> temp_buffers = 128MB\n> work_mem = 40MB\n> maintenance_work_mem = 1GB\n>\n> wal_buffers = 16MB\n>\n> effective_cache_size = 48GB\n>\n> seq_page_cost = 1.0\n> random_page_cost = 1.1\n> cpu_tuple_cost = 0.1\n> cpu_index_tuple_cost = 0.05\n> cpu_operator_cost = 0.01\n> default_statistics_target = 1000\n>\n> With these settings, output from free -m (Megabytes):\n>\n> total used free shared buffers cached\n> Mem: 64550 56605 7945 0 0 55907\n> -/+ buffers/cache: 697 63852\n> Swap: 7628 6 7622\n>\n> top shows:\n> Swap: 7812088k total, 6788k used, 7805300k free, 57343264k cached\n>\n>\n> Any suggestions would be awesome.\n\nWell, what's your workload like? If you'd like to smooth out lots of\nheavy writing, then look at cranking up checkpoint_segments, increase\ncheckpoint timeout to 2h, and play with checkpoint completion target.\nIf you write a lot of the same rows over and over, then keep it down\nin the 0.5 range. If you tend to write all unique rows, then closer\nto 1.0 is better. We run at 0.8. As you increase checkpoint\ncompletion target, you'll increase the amount of writes that have to\nhappen twice to the storage array, so unless you're 100% sure you\ndon't write to the same blocks / tuples a lot, keep it below 1.0.\n\nAlso if you're NOT using a battery backed caching RAID controller look\ninto upgrading to one.\n",
"msg_date": "Sun, 11 Sep 2011 14:16:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance tweaking on new hardware"
}
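An illustrative postgresql.conf excerpt for the checkpoint smoothing Scott describes; the values are examples only, not a recommendation for this particular box:

checkpoint_segments = 64             # more WAL between checkpoints than the default
checkpoint_timeout = 1h              # longer interval (9.0 caps this setting at 1h)
checkpoint_completion_target = 0.8   # spread checkpoint writes out; keep it nearer
                                     # 0.5 if the same rows are rewritten frequently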
] |
[
{
"msg_contents": "We've currently got PG 8.4.4 running on a whitebox hardware set up, with (2)\n5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives, using\nthe onboard IDE controller and ext3.\n\nA few weeks back, we purchased two refurb'd HP DL360's G5's, and were hoping\nto set them up with PG 9.0.2, running replicated. These machines have (2)\n5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP SA P400i\nwith 512MB of BBWC. PG is running on an ext4 (noatime) partition, and they\ndrives configured as RAID 1+0 (seems with this controller, I cannot do\nJBOD). I've spent a few hours going back and forth benchmarking the new\nsystems, and have set up the DWC, and the accelerator cache using hpacucli.\n I've tried accelerator caches of 25/75, 50/50, and 75/25.\n\nTo start with, I've set the \"relevant\" parameters in postgresql.conf the\nsame on the new config as the old:\n\n max_connections = 150\n shared_buffers = 6400MB (have tried as high as 20GB)\n work_mem = 20MB (have tried as high as 100MB)\n effective_io_concurrency = 6\n fsync = on\n synchronous_commit = off\n wal_buffers = 16MB\n checkpoint_segments = 30 (have tried 200 when I was loading the db)\n random_page_cost = 2.5\n effective_cache_size = 10240MB (have tried as high as 16GB)\n\nFirst thing I noticed is that it takes the same amount of time to load the\ndb (about 40 minutes) on the new hardware as the old hardware. I was really\nhoping with the faster, additional drives and a hardware RAID controller,\nthat this would be faster. The database is only about 9GB with pg_dump\n(about 28GB with indexes).\n\nUsing pgfouine I've identified about 10 \"problematic\" SELECT queries that\ntake anywhere from .1 seconds to 30 seconds on the old hardware. Running\nthese same queries on the new hardware is giving me results in the .2 to 66\nseconds. IE, it's twice as slow.\n\nI've tried increasing the shared_buffers, and some other parameters\n(work_mem), but haven't yet seen the new hardware perform even at the same\nspeed as the old hardware.\n\nI was really hoping that with hardware RAID that something would be faster\n(loading times, queries, etc...). What am I doing wrong?\n\nAbout the only thing left that I know to try is to drop the RAID1+0 and go\nto RAID0 in hardware, and do RAID1 in software. Any other thoughts?\n\nThanks!\n\n\n--\nAnthony\n\nWe've currently got PG 8.4.4 running on a whitebox hardware set up, with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives, using the onboard IDE controller and ext3.\nA few weeks back, we purchased two refurb'd HP DL360's G5's, and were hoping to set them up with PG 9.0.2, running replicated. These machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition, and they drives configured as RAID 1+0 (seems with this controller, I cannot do JBOD). I've spent a few hours going back and forth benchmarking the new systems, and have set up the DWC, and the accelerator cache using hpacucli. 
I've tried accelerator caches of 25/75, 50/50, and 75/25.\nTo start with, I've set the \"relevant\" parameters in postgresql.conf the same on the new config as the old: max_connections = 150 shared_buffers = 6400MB (have tried as high as 20GB)\n work_mem = 20MB (have tried as high as 100MB) effective_io_concurrency = 6 fsync = on synchronous_commit = off wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I was loading the db)\n random_page_cost = 2.5 effective_cache_size = 10240MB (have tried as high as 16GB)First thing I noticed is that it takes the same amount of time to load the db (about 40 minutes) on the new hardware as the old hardware. I was really hoping with the faster, additional drives and a hardware RAID controller, that this would be faster. The database is only about 9GB with pg_dump (about 28GB with indexes).\nUsing pgfouine I've identified about 10 \"problematic\" SELECT queries that take anywhere from .1 seconds to 30 seconds on the old hardware. Running these same queries on the new hardware is giving me results in the .2 to 66 seconds. IE, it's twice as slow.\nI've tried increasing the shared_buffers, and some other parameters (work_mem), but haven't yet seen the new hardware perform even at the same speed as the old hardware.\nI was really hoping that with hardware RAID that something would be faster (loading times, queries, etc...). What am I doing wrong?About the only thing left that I know to try is to drop the RAID1+0 and go to RAID0 in hardware, and do RAID1 in software. Any other thoughts?\nThanks!--Anthony",
"msg_date": "Sun, 11 Sep 2011 17:44:34 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "* Anthony Presley ([email protected]) wrote:\n> I was really hoping that with hardware RAID that something would be faster\n> (loading times, queries, etc...). What am I doing wrong?\n\next3 and ext4 do NOT perform identically out of the box.. You might be\nrunning into the write barriers problem here with ext4 forcing the RAID\ncontrollers to push commits all the way to the hard drive before\nreturning (thus making the BBWC next to useless).\n\nYou might try w/ ext3 on the new system instead.\n\nAlso, the p800's are definitely better than the p400's, but I don't know\nthat it's the controller that's really the issue here..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 11 Sep 2011 18:51:21 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "Dne 12.9.2011 00:44, Anthony Presley napsal(a):\n> We've currently got PG 8.4.4 running on a whitebox hardware set up,\n> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM\n> SATA drives, using the onboard IDE controller and ext3.\n> \n> A few weeks back, we purchased two refurb'd HP DL360's G5's, and\n> were hoping to set them up with PG 9.0.2, running replicated. These\n> machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and\n> are using the HP SA P400i with 512MB of BBWC. PG is running on an\n> ext4 (noatime) partition, and they drives configured as RAID 1+0\n> (seems with this controller, I cannot do JBOD). I've spent a few\n> hours going back and forth benchmarking the new systems, and have set\n> up the DWC, and the accelerator cache using hpacucli. I've tried\n> accelerator caches of 25/75, 50/50, and 75/25.\n\nWhas is an 'accelerator cache'? Is that the cache on the controller?\nThen give 100% to the write cache - the read cache does not need to be\nprotected by the battery, the page cache at the OS level can do the same\nservice.\n\nProvide more details about the ext3/ext4 - there are various data modes\n(writeback, ordered, journal), various other settings (barriers, stripe\nsize, ...) that matter.\n\nAccording to the benchmark I've done a few days back, the performance\ndifference between ext3 and ext4 is rather small, when comparing equally\nconfigured file systems (i.e. data=journal vs. data=journal) etc.\n\nWith read-only workload (e.g. just SELECT statements), the config does\nnot matter (e.g. journal is just as fast as writeback).\n\nSee for example these comparisons\n\n read-only workload: http://bit.ly/q04Tpg\n read-write workload: http://bit.ly/qKgWgn\n\nThe ext4 is usually a bit faster than equally configured ext3, but the\ndifference should not be 100%.\n\n> To start with, I've set the \"relevant\" parameters in postgresql.conf\n> the same on the new config as the old:\n> \n> max_connections = 150 shared_buffers = 6400MB (have tried as high as\n> 20GB) work_mem = 20MB (have tried as high as 100MB) \n> effective_io_concurrency = 6 fsync = on synchronous_commit = off \n> wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I\n> was loading the db) random_page_cost = 2.5 effective_cache_size =\n> 10240MB (have tried as high as 16GB)\n> \n> First thing I noticed is that it takes the same amount of time to\n> load the db (about 40 minutes) on the new hardware as the old\n> hardware. I was really hoping with the faster, additional drives and\n> a hardware RAID controller, that this would be faster. The database\n> is only about 9GB with pg_dump (about 28GB with indexes).\n> \n> Using pgfouine I've identified about 10 \"problematic\" SELECT queries \n> that take anywhere from .1 seconds to 30 seconds on the old\n> hardware. Running these same queries on the new hardware is giving me\n> results in the .2 to 66 seconds. IE, it's twice as slow.\n> \n> I've tried increasing the shared_buffers, and some other parameters \n> (work_mem), but haven't yet seen the new hardware perform even at\n> the same speed as the old hardware.\n\nIn that case some of the assumptions is wrong. For example the new RAID\nis slow for some reason. Bad stripe size, slow controller, ...\n\nDo the basic hw benchmarking, i.e. use bonnie++ to benchmark the disk,\netc. Only if this provides expected results (i.e. the new hw performs\nbetter) it makes sense to mess with the database.\n\nTomas\n",
"msg_date": "Mon, 12 Sep 2011 01:17:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "On September 11, 2011 03:44:34 PM Anthony Presley wrote:\n> First thing I noticed is that it takes the same amount of time to load the\n> db (about 40 minutes) on the new hardware as the old hardware. I was\n> really hoping with the faster, additional drives and a hardware RAID\n> controller, that this would be faster. The database is only about 9GB\n> with pg_dump (about 28GB with indexes).\n\nLoading the DB is going to be CPU-bound (on a single) core, unless your disks \nreally suck, which they don't. Most of the time will be spent building \nindexes.\n\nI don't know offhand why the queries are slower, though, unless you're not \ngetting as much cached before testing as on the older box.\n",
"msg_date": "Sun, 11 Sep 2011 19:12:16 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 4:44 PM, Anthony Presley <[email protected]> wrote:\n> We've currently got PG 8.4.4 running on a whitebox hardware set up, with (2)\n> 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives, using\n> the onboard IDE controller and ext3.\n> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were hoping\n> to set them up with PG 9.0.2, running replicated. These machines have (2)\n> 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP SA P400i\n> with 512MB of BBWC. PG is running on an ext4 (noatime) partition, and they\n\nTwo issues here. One is that the onboard controller and disks on the\nold machine might not be obeying fsync properly, giving a speed boost\nat the expense of crash safeness. Two is that the P400 has gotten\npretty horrible performance reviews on this list in the past.\n",
"msg_date": "Sun, 11 Sep 2011 20:25:22 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "\n\n>From: [email protected]\n[mailto:[email protected]] On Behalf Of Anthony Presley\n>Sent: Sunday, September 11, 2011 4:45 PM\n>To: [email protected]\n>Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?\n\n>We've currently got PG 8.4.4 running on a whitebox hardware set up, with\n(2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,\nusing the onboard IDE controller and ext3. \n\n>A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\nhoping to set them up with PG 9.0.2, running replicated. These machines\nhave (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP\nSA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition,\nand they drives configured as RAID 1+0 (seems with this controller, I cannot\ndo JBOD). I've spent a few hours going back and forth benchmarking the new\nsystems, and have set up the DWC, and the accelerator cache using hpacucli.\n I've tried accelerator caches of 25/75, 50/50, and 75/25.\n>\n\n\nI would start of by recommending a more current version of 9.0...like 9.0.4\nsince you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might\nbe out soon (days?). but that is just rumor mill. Don't bank on it. \n\n\nWhat kernel are you on ?\n\nLong time HP user here, for better and worse... so here are a few other\nlittle things I recommend. \n\nCheck the bios power management. Make sure it is set where you want it.\n(IIRC the G5s have this, I know G6s and G7s do). This can help with nasty\nlatency problems if the box has been idle for a while then needs to start\ndoing work.\n\nThe p400i is not a great card, compared to more modern one, but you should\nbe able to beat the old setup with what you have. Faster clocked cpu's more\nspindles, faster RPM spindles. \n\nAssuming the battery is working, with XFS or ext4 you can use nobarrier\nmount option and you should see some improvement. \n\n\nMake sure the raid card's firmware is current. I can't stress this enough.\nHP fixed a nasty bug with Raid 1+0 a few months ago where you could eat your\ndata... They also seem to be fixing a lot of other bugs along the way as\nwell. So do yourself a big favor and make sure that firmware is current. It\nmight just head off headache down the road.\n\nAlso make sure you have a 8.10.? (IIRC the version number right) or better\nversion of hpacucli... there have been some fixes to that utility as well.\nIIRC most of the fixes in this have been around recognizing newere cards\n(812s and 410s) but some interface bugs have been fixed as well. You may\nneed new packages for HP health. (I don't recall the official name, but new\nversions if hpacucli might not play well with old versions of hp health. \n\nIts HP so they have a new version about every month for firmware and their\ncli utility... thats HP for us. \n\nAnyways that is my fast input.\n\nBest of luck,\n\n\n-Mark\n\n",
"msg_date": "Sun, 11 Sep 2011 21:10:11 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "On 12/09/11 15:10, mark wrote:\n>\n>> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Anthony Presley\n>> Sent: Sunday, September 11, 2011 4:45 PM\n>> To: [email protected]\n>> Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?\n>> We've currently got PG 8.4.4 running on a whitebox hardware set up, with\n> (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,\n> using the onboard IDE controller and ext3.\n>\n>> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\n> hoping to set them up with PG 9.0.2, running replicated. These machines\n> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP\n> SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition,\n> and they drives configured as RAID 1+0 (seems with this controller, I cannot\n> do JBOD). I've spent a few hours going back and forth benchmarking the new\n> systems, and have set up the DWC, and the accelerator cache using hpacucli.\n> I've tried accelerator caches of 25/75, 50/50, and 75/25.\n>\n> I would start of by recommending a more current version of 9.0...like 9.0.4\n> since you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might\n> be out soon (days?). but that is just rumor mill. Don't bank on it.\n>\n>\n> What kernel are you on ?\n>\n> Long time HP user here, for better and worse... so here are a few other\n> little things I recommend.\n>\n> Check the bios power management. Make sure it is set where you want it.\n> (IIRC the G5s have this, I know G6s and G7s do). This can help with nasty\n> latency problems if the box has been idle for a while then needs to start\n> doing work.\n>\n> The p400i is not a great card, compared to more modern one, but you should\n> be able to beat the old setup with what you have. Faster clocked cpu's more\n> spindles, faster RPM spindles.\n>\n> Assuming the battery is working, with XFS or ext4 you can use nobarrier\n> mount option and you should see some improvement.\n>\n>\n> Make sure the raid card's firmware is current. I can't stress this enough.\n> HP fixed a nasty bug with Raid 1+0 a few months ago where you could eat your\n> data... They also seem to be fixing a lot of other bugs along the way as\n> well. So do yourself a big favor and make sure that firmware is current. It\n> might just head off headache down the road.\n>\n> Also make sure you have a 8.10.? (IIRC the version number right) or better\n> version of hpacucli... there have been some fixes to that utility as well.\n> IIRC most of the fixes in this have been around recognizing newere cards\n> (812s and 410s) but some interface bugs have been fixed as well. You may\n> need new packages for HP health. (I don't recall the official name, but new\n> versions if hpacucli might not play well with old versions of hp health.\n>\n> Its HP so they have a new version about every month for firmware and their\n> cli utility... that�s HP for us.\n>\n> Anyways that is my fast input.\n>\n> Best of luck,\n>\n>\n> -Mark\n>\n>\npg 9.1.0 has already been released!\n\nI have had it installed and running for just under 24 hours...\n\nthough http://www.postgresql.org/ is still not showing it,\nsee:\nhttp://www.postgresql.org/ftp/source/\nand\nhttp://jdbc.postgresql.org/download.html\n\n\nCheers,\nGavin\n\n",
"msg_date": "Mon, 12 Sep 2011 15:31:01 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "On Sun, Sep 11, 2011 at 6:17 PM, Tomas Vondra <[email protected]> wrote:\n\n> Dne 12.9.2011 00:44, Anthony Presley napsal(a):\n> > We've currently got PG 8.4.4 running on a whitebox hardware set up,\n> > with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM\n> > SATA drives, using the onboard IDE controller and ext3.\n> >\n> > A few weeks back, we purchased two refurb'd HP DL360's G5's, and\n> > were hoping to set them up with PG 9.0.2, running replicated. These\n> > machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and\n> > are using the HP SA P400i with 512MB of BBWC. PG is running on an\n> > ext4 (noatime) partition, and they drives configured as RAID 1+0\n> > (seems with this controller, I cannot do JBOD). I've spent a few\n> > hours going back and forth benchmarking the new systems, and have set\n> > up the DWC, and the accelerator cache using hpacucli. I've tried\n> > accelerator caches of 25/75, 50/50, and 75/25.\n>\n> Whas is an 'accelerator cache'? Is that the cache on the controller?\n> Then give 100% to the write cache - the read cache does not need to be\n> protected by the battery, the page cache at the OS level can do the same\n> service.\n>\n\nIt is the cache on the controller. I've tried giving 100% to that cache.\n\n\n> Provide more details about the ext3/ext4 - there are various data modes\n> (writeback, ordered, journal), various other settings (barriers, stripe\n> size, ...) that matter.\n>\n\next3 (on the old server) is using CentOS 5.2 defaults for mounting.\n\next4 (on the new server) is using noatime,barrier=0\n\n\n\n> According to the benchmark I've done a few days back, the performance\n> difference between ext3 and ext4 is rather small, when comparing equally\n> configured file systems (i.e. data=journal vs. data=journal) etc.\n>\n> With read-only workload (e.g. just SELECT statements), the config does\n> not matter (e.g. journal is just as fast as writeback).\n>\n> See for example these comparisons\n>\n> read-only workload: http://bit.ly/q04Tpg\n> read-write workload: http://bit.ly/qKgWgn\n>\n> The ext4 is usually a bit faster than equally configured ext3, but the\n> difference should not be 100%.\n>\n\nYes - it's very strange.\n\n\n> > To start with, I've set the \"relevant\" parameters in postgresql.conf\n> > the same on the new config as the old:\n> >\n> > max_connections = 150 shared_buffers = 6400MB (have tried as high as\n> > 20GB) work_mem = 20MB (have tried as high as 100MB)\n> > effective_io_concurrency = 6 fsync = on synchronous_commit = off\n> > wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I\n> > was loading the db) random_page_cost = 2.5 effective_cache_size =\n> > 10240MB (have tried as high as 16GB)\n> >\n> > First thing I noticed is that it takes the same amount of time to\n> > load the db (about 40 minutes) on the new hardware as the old\n> > hardware. I was really hoping with the faster, additional drives and\n> > a hardware RAID controller, that this would be faster. The database\n> > is only about 9GB with pg_dump (about 28GB with indexes).\n> >\n> > Using pgfouine I've identified about 10 \"problematic\" SELECT queries\n> > that take anywhere from .1 seconds to 30 seconds on the old\n> > hardware. Running these same queries on the new hardware is giving me\n> > results in the .2 to 66 seconds. 
IE, it's twice as slow.\n> >\n> > I've tried increasing the shared_buffers, and some other parameters\n> > (work_mem), but haven't yet seen the new hardware perform even at\n> > the same speed as the old hardware.\n>\n> In that case some of the assumptions is wrong. For example the new RAID\n> is slow for some reason. Bad stripe size, slow controller, ...\n>\n> Do the basic hw benchmarking, i.e. use bonnie++ to benchmark the disk,\n> etc. Only if this provides expected results (i.e. the new hw performs\n> better) it makes sense to mess with the database.\n>\n> Tomas\n>\n\n\n\n-- \nAnthony Presley\n\nOn Sun, Sep 11, 2011 at 6:17 PM, Tomas Vondra <[email protected]> wrote:\nDne 12.9.2011 00:44, Anthony Presley napsal(a):\n> We've currently got PG 8.4.4 running on a whitebox hardware set up,\n> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM\n> SATA drives, using the onboard IDE controller and ext3.\n>\n> A few weeks back, we purchased two refurb'd HP DL360's G5's, and\n> were hoping to set them up with PG 9.0.2, running replicated. These\n> machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and\n> are using the HP SA P400i with 512MB of BBWC. PG is running on an\n> ext4 (noatime) partition, and they drives configured as RAID 1+0\n> (seems with this controller, I cannot do JBOD). I've spent a few\n> hours going back and forth benchmarking the new systems, and have set\n> up the DWC, and the accelerator cache using hpacucli. I've tried\n> accelerator caches of 25/75, 50/50, and 75/25.\n\nWhas is an 'accelerator cache'? Is that the cache on the controller?\nThen give 100% to the write cache - the read cache does not need to be\nprotected by the battery, the page cache at the OS level can do the same\nservice.It is the cache on the controller. I've tried giving 100% to that cache. \n\nProvide more details about the ext3/ext4 - there are various data modes\n(writeback, ordered, journal), various other settings (barriers, stripe\nsize, ...) that matter.ext3 (on the old server) is using CentOS 5.2 defaults for mounting.ext4 (on the new server) is using noatime,barrier=0\n \nAccording to the benchmark I've done a few days back, the performance\ndifference between ext3 and ext4 is rather small, when comparing equally\nconfigured file systems (i.e. data=journal vs. data=journal) etc.\n\nWith read-only workload (e.g. just SELECT statements), the config does\nnot matter (e.g. journal is just as fast as writeback).\n\nSee for example these comparisons\n\n read-only workload: http://bit.ly/q04Tpg\n read-write workload: http://bit.ly/qKgWgn\n\nThe ext4 is usually a bit faster than equally configured ext3, but the\ndifference should not be 100%.Yes - it's very strange. \n\n> To start with, I've set the \"relevant\" parameters in postgresql.conf\n> the same on the new config as the old:\n>\n> max_connections = 150 shared_buffers = 6400MB (have tried as high as\n> 20GB) work_mem = 20MB (have tried as high as 100MB)\n> effective_io_concurrency = 6 fsync = on synchronous_commit = off\n> wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I\n> was loading the db) random_page_cost = 2.5 effective_cache_size =\n> 10240MB (have tried as high as 16GB)\n>\n> First thing I noticed is that it takes the same amount of time to\n> load the db (about 40 minutes) on the new hardware as the old\n> hardware. I was really hoping with the faster, additional drives and\n> a hardware RAID controller, that this would be faster. 
The database\n> is only about 9GB with pg_dump (about 28GB with indexes).\n>\n> Using pgfouine I've identified about 10 \"problematic\" SELECT queries\n> that take anywhere from .1 seconds to 30 seconds on the old\n> hardware. Running these same queries on the new hardware is giving me\n> results in the .2 to 66 seconds. IE, it's twice as slow.\n>\n> I've tried increasing the shared_buffers, and some other parameters\n> (work_mem), but haven't yet seen the new hardware perform even at\n> the same speed as the old hardware.\n\nIn that case some of the assumptions is wrong. For example the new RAID\nis slow for some reason. Bad stripe size, slow controller, ...\n\nDo the basic hw benchmarking, i.e. use bonnie++ to benchmark the disk,\netc. Only if this provides expected results (i.e. the new hw performs\nbetter) it makes sense to mess with the database.\n\nTomas\n-- Anthony Presley",
"msg_date": "Mon, 12 Sep 2011 11:42:11 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "Mark,\n\nOn Sun, Sep 11, 2011 at 10:10 PM, mark <[email protected]> wrote:\n\n>\n>\n> >From: [email protected]\n> [mailto:[email protected]] On Behalf Of Anthony\n> Presley\n> >Sent: Sunday, September 11, 2011 4:45 PM\n> >To: [email protected]\n> >Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?\n>\n> >We've currently got PG 8.4.4 running on a whitebox hardware set up, with\n> (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,\n> using the onboard IDE controller and ext3.\n>\n> >A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\n> hoping to set them up with PG 9.0.2, running replicated. These machines\n> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP\n> SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition,\n> and they drives configured as RAID 1+0 (seems with this controller, I\n> cannot\n> do JBOD). I've spent a few hours going back and forth benchmarking the new\n> systems, and have set up the DWC, and the accelerator cache using hpacucli.\n> I've tried accelerator caches of 25/75, 50/50, and 75/25.\n> >\n>\n>\n> I would start of by recommending a more current version of 9.0...like 9.0.4\n> since you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might\n> be out soon (days?). but that is just rumor mill. Don't bank on it.\n>\n\nLooks like 9.1 was released today - I may upgrade to that for our testing.\n I was just using whatever is in the repo.\n\n\n> What kernel are you on ?\n>\n\n2.6.18-238.19.1.el5\n\n\n> Long time HP user here, for better and worse... so here are a few other\n> little things I recommend.\n>\n\nThanks!\n\n\n> Check the bios power management. Make sure it is set where you want it.\n> (IIRC the G5s have this, I know G6s and G7s do). This can help with nasty\n> latency problems if the box has been idle for a while then needs to start\n> doing work.\n>\n\nI've checked those, they look ok.\n\n\n> The p400i is not a great card, compared to more modern one, but you should\n> be able to beat the old setup with what you have. Faster clocked cpu's more\n> spindles, faster RPM spindles.\n>\n\nI've upgraded the CPU's to be X5470 today, to see if that helps with the\nspeed of\n\n\n> Assuming the battery is working, with XFS or ext4 you can use nobarrier\n> mount option and you should see some improvement.\n>\n\nI've been using:\n noatime,data=writeback,defaults\n\nI will try:\n noatime,data=writeback,barrier=0,defaults\n\n\n> Make sure the raid card's firmware is current. I can't stress this enough.\n> HP fixed a nasty bug with Raid 1+0 a few months ago where you could eat\n> your\n> data... They also seem to be fixing a lot of other bugs along the way as\n> well. So do yourself a big favor and make sure that firmware is current. It\n> might just head off headache down the road.\n>\n\nI downloaded the latest firmware DVD on Thursday and ran that - everything\nis up to date.\n\n\n> Also make sure you have a 8.10.? (IIRC the version number right) or better\n> version of hpacucli... there have been some fixes to that utility as well.\n> IIRC most of the fixes in this have been around recognizing newere cards\n> (812s and 410s) but some interface bugs have been fixed as well. You may\n> need new packages for HP health. 
(I don't recall the official name, but new\n> versions if hpacucli might not play well with old versions of hp health.\n>\n\nI got that as well - thanks!\n\n\n> Its HP so they have a new version about every month for firmware and their\n> cli utility... that’s HP for us.\n>\n> Anyways that is my fast input.\n>\n> Best of luck,\n\n\nThanks!\n\n\n-- \nAnthony Presley\n\nMark,On Sun, Sep 11, 2011 at 10:10 PM, mark <[email protected]> wrote:\n\n\n>From: [email protected]\n[mailto:[email protected]] On Behalf Of Anthony Presley\n>Sent: Sunday, September 11, 2011 4:45 PM\n>To: [email protected]\n>Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?\n\n>We've currently got PG 8.4.4 running on a whitebox hardware set up, with\n(2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,\nusing the onboard IDE controller and ext3.\n\n>A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\nhoping to set them up with PG 9.0.2, running replicated. These machines\nhave (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP\nSA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition,\nand they drives configured as RAID 1+0 (seems with this controller, I cannot\ndo JBOD). I've spent a few hours going back and forth benchmarking the new\nsystems, and have set up the DWC, and the accelerator cache using hpacucli.\n I've tried accelerator caches of 25/75, 50/50, and 75/25.\n>\n\n\nI would start of by recommending a more current version of 9.0...like 9.0.4\nsince you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might\nbe out soon (days?). but that is just rumor mill. Don't bank on it.Looks like 9.1 was released today - I may upgrade to that for our testing. I was just using whatever is in the repo.\n \nWhat kernel are you on ?2.6.18-238.19.1.el5 \nLong time HP user here, for better and worse... so here are a few other\nlittle things I recommend.Thanks! \nCheck the bios power management. Make sure it is set where you want it.\n(IIRC the G5s have this, I know G6s and G7s do). This can help with nasty\nlatency problems if the box has been idle for a while then needs to start\ndoing work.I've checked those, they look ok. \nThe p400i is not a great card, compared to more modern one, but you should\nbe able to beat the old setup with what you have. Faster clocked cpu's more\nspindles, faster RPM spindles.I've upgraded the CPU's to be X5470 today, to see if that helps with the speed of \n\nAssuming the battery is working, with XFS or ext4 you can use nobarrier\nmount option and you should see some improvement.I've been using: noatime,data=writeback,defaultsI will try: noatime,data=writeback,barrier=0,defaults\n \nMake sure the raid card's firmware is current. I can't stress this enough.\nHP fixed a nasty bug with Raid 1+0 a few months ago where you could eat your\ndata... They also seem to be fixing a lot of other bugs along the way as\nwell. So do yourself a big favor and make sure that firmware is current. It\nmight just head off headache down the road.I downloaded the latest firmware DVD on Thursday and ran that - everything is up to date. \n\nAlso make sure you have a 8.10.? (IIRC the version number right) or better\nversion of hpacucli... there have been some fixes to that utility as well.\nIIRC most of the fixes in this have been around recognizing newere cards\n(812s and 410s) but some interface bugs have been fixed as well. You may\nneed new packages for HP health. 
(I don't recall the official name, but new\nversions if hpacucli might not play well with old versions of hp health.I got that as well - thanks! \n\nIts HP so they have a new version about every month for firmware and their\ncli utility... that’s HP for us.\n\nAnyways that is my fast input.\n\nBest of luck,Thanks!-- Anthony Presley",
"msg_date": "Mon, 12 Sep 2011 11:56:27 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "\nOn 12-9-2011 0:44 Anthony Presley wrote:\n> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\n> hoping to set them up with PG 9.0.2, running replicated. These machines\n> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the\n> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)\n> partition, and they drives configured as RAID 1+0 (seems with this\n> controller, I cannot do JBOD).\n\nIf you really want a JBOD-setup, you can try a RAID0 for each available \ndisk, i.e. in your case 6 separate RAID0's. That's how we configured our \nDell H700 - which doesn't offer JBOD as well - for ZFS.\n\nBest regards,\n\nArjen\n",
"msg_date": "Tue, 13 Sep 2011 08:22:03 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
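To make Arjen's per-disk RAID0 suggestion concrete on an HP Smart Array, a rough hpacucli sketch follows. This is untested; the controller slot and the drive addresses are placeholders (assumptions), so list the real ones first and adjust.

    # list controllers, arrays and physical drive addresses first
    hpacucli ctrl all show config
    # then create one single-drive RAID0 logical drive per disk
    # (slot=0 and the 1I:1:x addresses below are placeholders)
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
    hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
    # ...repeat for each remaining disk...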
{
"msg_contents": "On Tue, Sep 13, 2011 at 1:22 AM, Arjen van der Meijden <\[email protected]> wrote:\n\n>\n> On 12-9-2011 0:44 Anthony Presley wrote:\n>\n>> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were\n>> hoping to set them up with PG 9.0.2, running replicated. These machines\n>> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the\n>> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)\n>> partition, and they drives configured as RAID 1+0 (seems with this\n>> controller, I cannot do JBOD).\n>>\n>\n> If you really want a JBOD-setup, you can try a RAID0 for each available\n> disk, i.e. in your case 6 separate RAID0's. That's how we configured our\n> Dell H700 - which doesn't offer JBOD as well - for ZFS.\n>\n\nThat's a pretty good idea ... I'll try that on our second server today. In\nthe meantime, after tweaking it a bit, we were able to get (with iozone):\n\n\n\nOld New Initial write\n75.85 220.68 Rewrite\n63.95 253.07 Read\n45.04 171.35 Re-read\n45 2405.23 Random read\n27.56 1733.46 Random write\n50.7 239.47\n\nNot as fas as I'd like, but faster than the old disks, for sure.\n\n--\nAnthony\n\nOn Tue, Sep 13, 2011 at 1:22 AM, Arjen van der Meijden <[email protected]> wrote:\n\nOn 12-9-2011 0:44 Anthony Presley wrote:\n\nA few weeks back, we purchased two refurb'd HP DL360's G5's, and were\nhoping to set them up with PG 9.0.2, running replicated. These machines\nhave (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the\nHP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)\npartition, and they drives configured as RAID 1+0 (seems with this\ncontroller, I cannot do JBOD).\n\n\nIf you really want a JBOD-setup, you can try a RAID0 for each available disk, i.e. in your case 6 separate RAID0's. That's how we configured our Dell H700 - which doesn't offer JBOD as well - for ZFS.\nThat's a pretty good idea ... I'll try that on our second server today. In the meantime, after tweaking it a bit, we were able to get (with iozone):\n\n\n\n\n\n\nOld\nNew\n\n\n Initial write \n\n75.85\n220.68\n\n\n Rewrite \n\n63.95\n253.07\n\n\n Read \n\n45.04\n171.35\n\n\n Re-read \n\n45\n2405.23\n\n\n Random read \n\n27.56\n1733.46\n\n\n Random write \n\n50.7\n239.47\n\n\n\nNot as fas as I'd like, but faster than the old disks, for sure.--Anthony",
"msg_date": "Tue, 13 Sep 2011 06:33:46 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
},
{
"msg_contents": "On 09/11/2011 06:44 PM, Anthony Presley wrote:\n> We've currently got PG 8.4.4 running on a whitebox hardware set up, \n> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA \n> drives, using the onboard IDE controller and ext3.\n>\n> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were \n> hoping to set them up with PG 9.0.2, running replicated. These \n> machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and \n> are using the HP SA P400i with 512MB of BBWC. PG is running on an \n> ext4 (noatime) partition, and they drives configured as RAID 1+0 \n> (seems with this controller, I cannot do JBOD). .\n> To start with, I've set the \"relevant\" parameters in postgresql.conf \n> the same on the new config as the old:\n>\n> fsync = on\n> synchronous_commit = off\n\nThe main thing that a hardware RAID controller improves on is being able \nto write synchronous commits much faster than you can do without one. \nIf you've turned that off, you've essentially neutralized its primary \nvalue. In every other respect, software RAID is faster: the CPUs in \nyour server are much faster than the IO processor on the card, and Linux \nhas a lot more memory for caching than it does too. Turning off sync \ncommit may be fine for loading, but you'll be facing data loss at every \nserver interruption if you roll things out like that. It's not \nrealistic production performance for most places running like that.\n\nA lot of your test results seem like they may be using different levels \nof write reliability, which makes things less fair than they should be \ntoo--in favor of the cheap IDE drives normally. Check out \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information \nabout that topic.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 14 Sep 2011 03:22:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID Controller (HP P400) beat by SW-RAID?"
}
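If the goal is to keep commits durable while still getting a bulk-load speedup, synchronous_commit does not have to be turned off cluster-wide; it can be relaxed per role, per session or per transaction. A minimal sketch (the loader role and staging table are made-up names):

    -- leave synchronous_commit = on in postgresql.conf, then relax it only
    -- where losing the last few commits after a crash would be acceptable:
    ALTER ROLE bulk_loader SET synchronous_commit = off;   -- hypothetical loader role
    -- or scope it to a single transaction:
    BEGIN;
    SET LOCAL synchronous_commit = off;
    COPY staging_table FROM '/tmp/load.csv' WITH CSV;      -- hypothetical load step
    COMMIT;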
] |
[
{
"msg_contents": "The recent \"data warehouse\" thread made me think about how I use \nwork_mem for some of my big queries. So I tried SET work_mem = '4GB' \nfor a session and got\n\nERROR: 4194304 is outside the valid range for parameter \"work_mem\" (64 \n.. 2097151)\n\nA bit of searching turned up the \"Allow sorts to use more available \nmemory\" section of the to-do list. Am I correct in reading that the \nmax_val is 2GB and regardless of how much RAM I have in the box I'm \nstuck with only using 2GB? Am I missing something?\n\nI'm using: PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit \nWindows 2008 Server Enterprise\n\nThanks,\nBob\n\n",
"msg_date": "Mon, 12 Sep 2011 12:33:33 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allow sorts to use more available memory"
},
{
"msg_contents": "On 9/12/2011 12:33 PM, Robert Schnabel wrote:\n> The recent \"data warehouse\" thread made me think about how I use\n> work_mem for some of my big queries. So I tried SET work_mem = '4GB' for\n> a session and got\n>\n> ERROR: 4194304 is outside the valid range for parameter \"work_mem\" (64\n> .. 2097151)\n>\n> A bit of searching turned up the \"Allow sorts to use more available\n> memory\" section of the to-do list. Am I correct in reading that the\n> max_val is 2GB and regardless of how much RAM I have in the box I'm\n> stuck with only using 2GB? Am I missing something?\n>\n> I'm using: PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\n> Windows 2008 Server Enterprise\n>\n> Thanks,\n> Bob\n>\n>\nwork_mem is not the total a query can use. I believe each step can use \nthat much, and each backend can use it for multiple bits. So if you had \ntwo backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB = 8GB.\n\n-Andy\n",
"msg_date": "Mon, 12 Sep 2011 12:47:54 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "On 09/12/2011 12:47 PM, Andy Colson wrote:\n\n> work_mem is not the total a query can use. I believe each step can\n> use that much, and each backend can use it for multiple bits. So if\n> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n> 8GB.\n\nExactly. Find a big query somewhere in your system. Use EXPLAIN to \nexamine it. Chances are, that one query has one or more sorts. Each one \nof those gets its own work_mem. Each sort. The query have four sorts? It \nmay use 4*work_mem. On a whim a while back, I doubled our 8MB setting to \n16MB on a test system. During a load test, the machine ran out of \nmemory, swapped out, and finally crashed after the OOM killer went nuts.\n\nSet this value *at your own risk* and only after *significant* testing. \nHaving it too high can have rather unexpected consequences. Setting it \nto 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a \nvery, very bad idea.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 12 Sep 2011 12:57:58 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "On 9/12/2011 12:57 PM, Shaun Thomas wrote:\n> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>\n>> work_mem is not the total a query can use. I believe each step can\n>> use that much, and each backend can use it for multiple bits. So if\n>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>> 8GB.\n>\n> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n> examine it.\n\nYeah, and even better, on PG 9, if you EXPLAIN ANALYZE it'll show you \njust how much memory is actually being used.\n\n-Andy\n",
"msg_date": "Mon, 12 Sep 2011 13:02:44 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
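To make that concrete, the sort lines in EXPLAIN ANALYZE output say whether a given sort fit inside work_mem or spilled to disk. A sketch against a hypothetical table (names are made up, and the exact output wording varies slightly by version):

    SET work_mem = '64MB';
    EXPLAIN ANALYZE SELECT * FROM big_fact ORDER BY some_col;   -- hypothetical table/column
    -- a sort that spilled reports something like:
    --   Sort Method:  external merge  Disk: 912360kB
    -- while one that fit in work_mem reports something like:
    --   Sort Method:  quicksort  Memory: 48230kB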
{
"msg_contents": "\nOn 9/12/2011 12:57 PM, Shaun Thomas wrote:\n> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>\n>> work_mem is not the total a query can use. I believe each step can\n>> use that much, and each backend can use it for multiple bits. So if\n>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>> 8GB.\n> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n> examine it. Chances are, that one query has one or more sorts. Each one\n> of those gets its own work_mem. Each sort. The query have four sorts? It\n> may use 4*work_mem. On a whim a while back, I doubled our 8MB setting to\n> 16MB on a test system. During a load test, the machine ran out of\n> memory, swapped out, and finally crashed after the OOM killer went nuts.\n>\n> Set this value *at your own risk* and only after *significant* testing.\n> Having it too high can have rather unexpected consequences. Setting it\n> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a\n> very, very bad idea.\n>\nYep, I know. But in the context of the data warehouse where *I'm the \nonly user* and I have a query that does, say 4 large sorts like \nhttp://explain.depesz.com/s/BrAO and I have 32GB RAM I'm not worried \nabout using 8GB or 16GB in the case of work_mem = 4GB. I realize the \nquery above only used 1.9GB for the largest sort but I know I have other \nqueries with 1 or 2 sorts that I've watched go to disk.\n\nBob\n\n\n\n",
"msg_date": "Mon, 12 Sep 2011 13:22:51 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "On 9/12/2011 1:22 PM, Robert Schnabel wrote:\n>\n> On 9/12/2011 12:57 PM, Shaun Thomas wrote:\n>> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>>\n>>> work_mem is not the total a query can use. I believe each step can\n>>> use that much, and each backend can use it for multiple bits. So if\n>>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>>> 8GB.\n>> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n>> examine it. Chances are, that one query has one or more sorts. Each one\n>> of those gets its own work_mem. Each sort. The query have four sorts? It\n>> may use 4*work_mem. On a whim a while back, I doubled our 8MB setting to\n>> 16MB on a test system. During a load test, the machine ran out of\n>> memory, swapped out, and finally crashed after the OOM killer went nuts.\n>>\n>> Set this value *at your own risk* and only after *significant* testing.\n>> Having it too high can have rather unexpected consequences. Setting it\n>> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a\n>> very, very bad idea.\n>>\n> Yep, I know. But in the context of the data warehouse where *I'm the\n> only user* and I have a query that does, say 4 large sorts like\n> http://explain.depesz.com/s/BrAO and I have 32GB RAM I'm not worried\n> about using 8GB or 16GB in the case of work_mem = 4GB. I realize the\n> query above only used 1.9GB for the largest sort but I know I have other\n> queries with 1 or 2 sorts that I've watched go to disk.\n>\n> Bob\n>\n>\n>\n>\n\nWow, you are getting close to the limits there. Another thing you can \ndo is mount tmpfs in ram and then just let it spill.\n\n-Andy\n\n",
"msg_date": "Mon, 12 Sep 2011 13:38:19 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "On 9/12/2011 1:22 PM, Robert Schnabel wrote:\n>\n> On 9/12/2011 12:57 PM, Shaun Thomas wrote:\n>> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>>\n>>> work_mem is not the total a query can use. I believe each step can\n>>> use that much, and each backend can use it for multiple bits. So if\n>>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>>> 8GB.\n>> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n>> examine it. Chances are, that one query has one or more sorts. Each one\n>> of those gets its own work_mem. Each sort. The query have four sorts? It\n>> may use 4*work_mem. On a whim a while back, I doubled our 8MB setting to\n>> 16MB on a test system. During a load test, the machine ran out of\n>> memory, swapped out, and finally crashed after the OOM killer went nuts.\n>>\n>> Set this value *at your own risk* and only after *significant* testing.\n>> Having it too high can have rather unexpected consequences. Setting it\n>> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a\n>> very, very bad idea.\n>>\n> Yep, I know. But in the context of the data warehouse where *I'm the\n> only user* and I have a query that does, say 4 large sorts like\n> http://explain.depesz.com/s/BrAO and I have 32GB RAM I'm not worried\n> about using 8GB or 16GB in the case of work_mem = 4GB. I realize the\n> query above only used 1.9GB for the largest sort but I know I have other\n> queries with 1 or 2 sorts that I've watched go to disk.\n>\n> Bob\n>\n>\n>\n>\n\nHuge guess here, cant see select or ddl, but looks like all the tables \nare sequential scans. It might help to add an index or two, then the \ntable joins could be done much more efficiently with with a lot less \nmemory.\n\n-Andy\n",
"msg_date": "Mon, 12 Sep 2011 13:57:09 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "I think , you may add a ramdisk as tablespace for temporary tables.\nThis should work similar to bigger work_mem.\n\n2011/9/12, Robert Schnabel <[email protected]>:\n>\n> On 9/12/2011 12:57 PM, Shaun Thomas wrote:\n>> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>>\n>>> work_mem is not the total a query can use. I believe each step can\n>>> use that much, and each backend can use it for multiple bits. So if\n>>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>>> 8GB.\n>> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n>> examine it. Chances are, that one query has one or more sorts. Each one\n>> of those gets its own work_mem. Each sort. The query have four sorts? It\n>> may use 4*work_mem. On a whim a while back, I doubled our 8MB setting to\n>> 16MB on a test system. During a load test, the machine ran out of\n>> memory, swapped out, and finally crashed after the OOM killer went nuts.\n>>\n>> Set this value *at your own risk* and only after *significant* testing.\n>> Having it too high can have rather unexpected consequences. Setting it\n>> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a\n>> very, very bad idea.\n>>\n> Yep, I know. But in the context of the data warehouse where *I'm the\n> only user* and I have a query that does, say 4 large sorts like\n> http://explain.depesz.com/s/BrAO and I have 32GB RAM I'm not worried\n> about using 8GB or 16GB in the case of work_mem = 4GB. I realize the\n> query above only used 1.9GB for the largest sort but I know I have other\n> queries with 1 or 2 sorts that I've watched go to disk.\n>\n> Bob\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Mon, 12 Sep 2011 21:12:02 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
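A sketch of the ramdisk idea, assuming a tmpfs has already been mounted somewhere the postgres user can write (for example mount -t tmpfs -o size=16g tmpfs /mnt/pg_ramtemp; the path and size are assumptions):

    CREATE TABLESPACE ram_temp LOCATION '/mnt/pg_ramtemp';  -- path is an assumption
    SET temp_tablespaces = 'ram_temp';   -- temp tables and sort spill files now land in RAM

The usual caveat applies: if the tmpfs fills up, queries that spill there will fail instead of falling back to disk.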
{
"msg_contents": "On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel\n<[email protected]> wrote:\n> The recent \"data warehouse\" thread made me think about how I use work_mem\n> for some of my big queries. So I tried SET work_mem = '4GB' for a session\n> and got\n>\n> ERROR: 4194304 is outside the valid range for parameter \"work_mem\" (64 ..\n> 2097151)\n\nUbuntu 10.10, pgsql 8.4.8:\n\nsmarlowe=# set work_mem='1000GB';\nSET\n",
"msg_date": "Mon, 12 Sep 2011 14:58:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "\nOn 9/12/2011 3:58 PM, Scott Marlowe wrote:\n> On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel\n> <[email protected]> wrote:\n>> The recent \"data warehouse\" thread made me think about how I use work_mem\n>> for some of my big queries. So I tried SET work_mem = '4GB' for a session\n>> and got\n>>\n>> ERROR: 4194304 is outside the valid range for parameter \"work_mem\" (64 ..\n>> 2097151)\n> Ubuntu 10.10, pgsql 8.4.8:\n>\n> smarlowe=# set work_mem='1000GB';\n> SET\n\nOk, so is this a limitation related to the Windows implementation?\n\nAnd getting back to the to-do list entry and reading the related posts, \nit appears that even if you could set work_mem that high it would only \nuse 2GB anyway. I guess that was the second part of my question. Is \nthat true?\n\n",
"msg_date": "Mon, 12 Sep 2011 17:09:18 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "\nOn 9/12/2011 1:57 PM, Andy Colson wrote:\n> On 9/12/2011 1:22 PM, Robert Schnabel wrote:\n>> On 9/12/2011 12:57 PM, Shaun Thomas wrote:\n>>> On 09/12/2011 12:47 PM, Andy Colson wrote:\n>>>\n>>>> work_mem is not the total a query can use. I believe each step can\n>>>> use that much, and each backend can use it for multiple bits. So if\n>>>> you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =\n>>>> 8GB.\n>>> Exactly. Find a big query somewhere in your system. Use EXPLAIN to\n>>> examine it. Chances are, that one query has one or more sorts. Each one\n>>> of those gets its own work_mem. Each sort. The query have four sorts? It\n>>> may use 4*work_mem. On a whim a while back, I doubled our 8MB setting to\n>>> 16MB on a test system. During a load test, the machine ran out of\n>>> memory, swapped out, and finally crashed after the OOM killer went nuts.\n>>>\n>>> Set this value *at your own risk* and only after *significant* testing.\n>>> Having it too high can have rather unexpected consequences. Setting it\n>>> to 1 or 2GB, unless you have VERY few threads, or a TON of memory, is a\n>>> very, very bad idea.\n>>>\n>> Yep, I know. But in the context of the data warehouse where *I'm the\n>> only user* and I have a query that does, say 4 large sorts like\n>> http://explain.depesz.com/s/BrAO and I have 32GB RAM I'm not worried\n>> about using 8GB or 16GB in the case of work_mem = 4GB. I realize the\n>> query above only used 1.9GB for the largest sort but I know I have other\n>> queries with 1 or 2 sorts that I've watched go to disk.\n>>\n>> Bob\n>>\n>>\n>>\n>>\n> Huge guess here, cant see select or ddl, but looks like all the tables\n> are sequential scans. It might help to add an index or two, then the\n> table joins could be done much more efficiently with with a lot less\n> memory.\n>\n> -Andy\nIn this case I doubt it. Basically what these queries are doing is \ntaking table1 (~30M rows) and finding all the rows with a certain \ncondition. This produces ~15M rows. Then I have to find all of those \n15M rows that are present in table2. In the case of the query above \nthis results in 1.79M rows. Basically, the 15M rows that meet the \ncondition for table1 have matching rows spread out over 10 different \ntables (table2's).\n\nActually, you just gave me an idea. When I generate the \"table1\" I can \nprobably add a field that tells me which \"table2\" it came from for each \nrow that satisfies my criteria. Sometimes just having someone else make \nyou think is very productive. :-)\n\nThanks\nBob\n\n",
"msg_date": "Mon, 12 Sep 2011 17:21:58 -0500",
"msg_from": "Robert Schnabel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "Robert Schnabel <[email protected]> writes:\n> On 9/12/2011 3:58 PM, Scott Marlowe wrote:\n>> On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel\n>> <[email protected]> wrote:\n>>> The recent \"data warehouse\" thread made me think about how I use work_mem\n>>> for some of my big queries. So I tried SET work_mem = '4GB' for a session\n>>> and got\n>>> ERROR: 4194304 is outside the valid range for parameter \"work_mem\" (64 ..\n>>> 2097151)\n\n>> Ubuntu 10.10, pgsql 8.4.8:\n>> smarlowe=# set work_mem='1000GB';\n>> SET\n\n> Ok, so is this a limitation related to the Windows implementation?\n\nYeah. If you look into guc.c you'll find this:\n\n/* upper limit for GUC variables measured in kilobytes of memory */\n/* note that various places assume the byte size fits in a \"long\" variable */\n#if SIZEOF_SIZE_T > 4 && SIZEOF_LONG > 4\n#define MAX_KILOBYTES\tINT_MAX\n#else\n#define MAX_KILOBYTES\t(INT_MAX / 1024)\n#endif\n\nSince Windows, more or less alone among known 64-bit operating systems,\nchose not to make \"long\" the same width as pointers, these values get\nrestricted just as if you were on a 32-bit machine. Few Postgres\ndevelopers use Windows enough to get excited about doing all the tedious\n(and bug-prone) gruntwork that would be required to fix this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Sep 2011 19:20:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory "
},
{
"msg_contents": "* Robert Schnabel ([email protected]) wrote:\n> And getting back to the to-do list entry and reading the related\n> posts, it appears that even if you could set work_mem that high it\n> would only use 2GB anyway. I guess that was the second part of my\n> question. Is that true?\n\nYes and no. work_mem is used by the planner to figure out what kind of\nplan to use. The planner plans things based off of statistics, but it's\nnot perfect, especially on large tables with lots of data which have\ndependent data between columns.\n\nWhere the 2GB limit comes into play is when you end up with a plan that\ndoes, say, a large sort. PG will use memory for the sort up to\nwork_mem, or 2GB, whichever is lower, and spill to disk after that. I\ndon't believe it has such a limit for a hash table, due to how the data\nstructures for the hash table are allocated (and I recall seeing single\nPG queries that use hash tables getting into the 30+GB range, of course,\nI had work_mem set upwards of 100GB on a 32GB box... :).\n\nSo, if you're doing data warehousing, and you're pretty much the only\nuser (or there's only one at a time), setting it up pretty high is\nacceptable, but you do need to watch the box and make sure you don't run\nit out of memory. Also, make sure you have things configured correctly,\nif you're using Linux, to prevent the OOM killer from kicking in. Also,\nas I suggested before, set it to a reasonable level for the 'default'\nand just up it for specific queries that may benefit from it.\n\n\t\tThanks,\n\n\t\t\tStephen",
"msg_date": "Mon, 12 Sep 2011 21:13:27 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
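A sketch of the "keep the default low, raise it only for the query that needs it" approach. SET LOCAL scopes the bump to one transaction, and the value here deliberately stays under the 2GB-per-setting ceiling discussed above:

    BEGIN;
    SET LOCAL work_mem = '1GB';   -- only this transaction sees the higher limit
    -- run the one big warehouse query here
    COMMIT;                       -- work_mem reverts to the postgresql.conf default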
{
"msg_contents": "* Robert Schnabel ([email protected]) wrote:\n> And getting back to the to-do list entry and reading the related\n> posts, it appears that even if you could set work_mem that high it\n> would only use 2GB anyway. I guess that was the second part of my\n> question. Is that true?\n\nErrr, and to get back to the to-do (which I've been considering doing\nsomething about...), it's to allow the *actual* memory usage for things\nlike sorts to use more than 2GB, but as others have pointed out, you can\ndo that by putting pgsql_tmp on a memory filesystem and letting the\nsorts spill to the memory-based FS.\n\t\n\tThanks,\n\n\t\tStephen",
"msg_date": "Mon, 12 Sep 2011 21:15:16 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
},
{
"msg_contents": "Stephen Frost wrote:\n-- Start of PGP signed section.\n> * Robert Schnabel ([email protected]) wrote:\n> > And getting back to the to-do list entry and reading the related\n> > posts, it appears that even if you could set work_mem that high it\n> > would only use 2GB anyway. I guess that was the second part of my\n> > question. Is that true?\n> \n> Errr, and to get back to the to-do (which I've been considering doing\n> something about...), it's to allow the *actual* memory usage for things\n> like sorts to use more than 2GB, but as others have pointed out, you can\n> do that by putting pgsql_tmp on a memory filesystem and letting the\n> sorts spill to the memory-based FS.\n\nIt would be nice if the tempfs would allow us to control total temp\nmemory usage, except it causes a failure rather than splilling to real\ndisk.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 5 Oct 2011 17:54:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow sorts to use more available memory"
}
] |
[
{
"msg_contents": "In relation to my previous thread (about SW RAID vs. HW RAID on a P400), I\nwas able to narrow down the filesystem speed and in general, our new system\n(running PG 9.1) is about 3x - 5x faster on the IO.\n\nIn looking at the query plans in more depth, it appears that PG 9.0 and 9.1\nare both preferring to do hash joins, which seem to have a \"linear\" time and\nare slower than PG 8.4 doing an index scan.\n\nFor example, on PG 9.x:\n http://explain.depesz.com/s/qji - This takes 307ms, all the time. Doesn't\nmatter if it's \"cached\", or fresh from a reboot.\n\nSame query on PG 8.4:\n http://explain.depesz.com/s/8Pd - This can take 2-3s the first time, but\nthen takes 42ms once it's cached.\n\nBoth of these servers have the same indexes, similar postgresql.conf, and\nalmost identical data. However, the old server is doing some different\nplanning than the new server. I've run analyze on both of these databases.\n Some relevant PG parameters:\n\n max_connections = 150\n shared_buffers = 6400MB (have tried as high as 20GB)\n work_mem = 20MB (have tried as high as 100MB)\n effective_io_concurrency = 6\n fsync = on\n synchronous_commit = off\n wal_buffers = 16MB\n checkpoint_segments = 30 (have tried 200 when I was loading the db)\n random_page_cost = 2.5\n effective_cache_size = 10240MB (have tried as high as 16GB)\n\nIf I disable the hashjoin, I get massive improvements on PG 9.x ... as fast\n(or faster) than our PG 8.4 instance.\n\n\n-- \nAnthony Presley\n\nIn relation to my previous thread (about SW RAID vs. HW RAID on a P400), I was able to narrow down the filesystem speed and in general, our new system (running PG 9.1) is about 3x - 5x faster on the IO.\nIn looking at the query plans in more depth, it appears that PG 9.0 and 9.1 are both preferring to do hash joins, which seem to have a \"linear\" time and are slower than PG 8.4 doing an index scan.\nFor example, on PG 9.x: http://explain.depesz.com/s/qji - This takes 307ms, all the time. Doesn't matter if it's \"cached\", or fresh from a reboot.\nSame query on PG 8.4: http://explain.depesz.com/s/8Pd - This can take 2-3s the first time, but then takes 42ms once it's cached.\nBoth of these servers have the same indexes, similar postgresql.conf, and almost identical data. However, the old server is doing some different planning than the new server. I've run analyze on both of these databases. Some relevant PG parameters:\n max_connections = 150 shared_buffers = 6400MB (have tried as high as 20GB) work_mem = 20MB (have tried as high as 100MB)\n effective_io_concurrency = 6 fsync = on synchronous_commit = off wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I was loading the db) random_page_cost = 2.5\n effective_cache_size = 10240MB (have tried as high as 16GB)If I disable the hashjoin, I get massive improvements on PG 9.x ... as fast (or faster) than our PG 8.4 instance.\n-- Anthony Presley",
"msg_date": "Tue, 13 Sep 2011 06:56:19 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG 9.x prefers slower Hash Joins?"
},
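For experiments like the "disable the hashjoin" test above, the planner toggles can be flipped per session, so nothing needs to change in postgresql.conf; a sketch:

    SET enable_hashjoin = off;   -- diagnostic only; don't leave this off in production
    -- re-run the slow query under EXPLAIN ANALYZE and compare plan and timing
    RESET enable_hashjoin;       -- back to the normal planner behaviour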
{
"msg_contents": "On Tue, Sep 13, 2011 at 4:56 AM, Anthony Presley <[email protected]> wrote:\n> In relation to my previous thread (about SW RAID vs. HW RAID on a P400), I\n> was able to narrow down the filesystem speed and in general, our new system\n> (running PG 9.1) is about 3x - 5x faster on the IO.\n> In looking at the query plans in more depth, it appears that PG 9.0 and 9.1\n> are both preferring to do hash joins, which seem to have a \"linear\" time and\n> are slower than PG 8.4 doing an index scan.\n>\n> For example, on PG 9.x:\n> http://explain.depesz.com/s/qji - This takes 307ms, all the time. Doesn't\n> matter if it's \"cached\", or fresh from a reboot.\n> Same query on PG 8.4:\n> http://explain.depesz.com/s/8Pd - This can take 2-3s the first time, but\n> then takes 42ms once it's cached.\n\nDoes executing this same query repeatedly with the same parameters\nreflect real production use patterns of your system?\n\n\n> Both of these servers have the same indexes, similar postgresql.conf, and\n> almost identical data. However, the old server is doing some different\n> planning than the new server. I've run analyze on both of these databases.\n\n...\n> If I disable the hashjoin, I get massive improvements on PG 9.x ... as fast\n> (or faster) than our PG 8.4 instance.\n\nCan you include buffers in your explain analyze? Also, what is the\nplan for the new server when hashjoin is disabled?\n\nWhat if you lower random_page_cost to 1 (or to whatever value seq_page_cost is)?\n\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 17 Sep 2011 13:21:50 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.x prefers slower Hash Joins?"
}
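Both of Jeff's suggestions can be tried from one psql session; a sketch, where the stand-in query below should be replaced by the actual query behind the plans above:

    EXPLAIN (ANALYZE, BUFFERS)            -- BUFFERS output needs 9.0 or later
    SELECT count(*) FROM some_table;      -- stand-in; substitute the real slow query
    SET random_page_cost = 1;             -- treat random I/O as cheap, matching seq_page_cost
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM some_table;      -- does the planner now prefer the index plan?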
] |
[
{
"msg_contents": "I'm just beginning the process of benchmarking and tuning a new server.\n Something I really haven't done before. I'm using Greg's book as a guide.\n I started with bonnie++ (1.96) and immediately got anomalous results (I\nthink).\n\nHardware is as follows:\n\n2x quad core xeon 5504 2.0Ghz, 2x4MB cache\n192GB DDR3 1066 RAM\n24x600GB 15K rpm SAS drives\nadaptec 52445 controller\n\nThe default config, being tested at the moment, has 2 volumes, one 100GB and\none 3.2TB, both are built from a stripe across all 24 disks, rather than\nsplitting some spindles out for one volume and another set for the other\nvolume. At the moment, I'm only testing against the single 3.2TB volume.\n\nThe smaller volume is partitioned into /boot (ext2 and tiny) and / (ext4 and\n91GB). The larger volume is mounted as xfs with the following options\n(cribbed from an email to the list earlier this week, I\nthink): logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\n\nBonnie++ delivered the expected huge throughput for sequential read and\nwrite. It seems in line with other benchmarks I found online. However, we\nare only seeing 180 seeks/sec, but seems quite low. I'm hoping someone\nmight be able to confirm that and. hopefully, make some suggestions for\ntracking down the problem if there is one.\n\nResults are as follows:\n\n1.96,1.96,newbox,1,1315935572,379G,,1561,99,552277,46,363872,34,3005,90,981924,49,179.1,56,16,,,,,19107,69,+++++,+++,20006,69,19571,72,+++++,+++,20336,63,7111us,10666ms,14067ms,65528us,592ms,170ms,949us,107us,160us,383us,31us,130us\n\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nnewzonedb.z1.p 379G 1561 99 552277 46 363872 34 3005 90 981924 49\n179.1 56\nLatency 7111us 10666ms 14067ms 65528us 592ms\n170ms\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\nnewbox 16 19107 69 +++++ +++ 20006 69 19571 72 +++++ +++ 20336\n 63\nLatency 949us 107us 160us 383us 31us\n130us\n\nAlso, my inclination is to default to the following volume layout:\n\n2 disks in RAID 1 for system\n4 disks in RAID 10 for WAL (xfs)\n18 disks in RAID 10 for data (xfs)\n\nUse case is minimal OLTP traffic, plus a fair amount of data warehouse style\ntraffic - low connection count, queries over sizeable fact tables (100s of\nmillions of rows) partitioned over time, insert-only data loading, via COPY,\nplus some tables are populated via aggregation queries over other tables.\n Basically, based on performance of our current hardware, I'm not concerned\nabout being able to handle the data-loading load, with the 4 drive raid 10\nvolume, so emphasis is on warehouse query speed. I'm not best pleased by\nthe 2 Ghz CPUs, in that context, but I wasn't given a choice on the\nhardware.\n\nAny comments on that proposal are welcome. I've got only a week to settle\non a config and ready the box for production, so the number of iterations I\ncan go through is limited.\n\nI'm just beginning the process of benchmarking and tuning a new server. Something I really haven't done before. I'm using Greg's book as a guide. 
I started with bonnie++ (1.96) and immediately got anomalous results (I think).\nHardware is as follows:2x quad core xeon 5504 2.0Ghz, 2x4MB cache192GB DDR3 1066 RAM24x600GB 15K rpm SAS drivesadaptec 52445 controller\nThe default config, being tested at the moment, has 2 volumes, one 100GB and one 3.2TB, both are built from a stripe across all 24 disks, rather than splitting some spindles out for one volume and another set for the other volume. At the moment, I'm only testing against the single 3.2TB volume.\nThe smaller volume is partitioned into /boot (ext2 and tiny) and / (ext4 and 91GB). The larger volume is mounted as xfs with the following options (cribbed from an email to the list earlier this week, I think): logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\nBonnie++ delivered the expected huge throughput for sequential read and write. It seems in line with other benchmarks I found online. However, we are only seeing 180 seeks/sec, but seems quite low. I'm hoping someone might be able to confirm that and. hopefully, make some suggestions for tracking down the problem if there is one.\nResults are as follows:1.96,1.96,newbox,1,1315935572,379G,,1561,99,552277,46,363872,34,3005,90,981924,49,179.1,56,16,,,,,19107,69,+++++,+++,20006,69,19571,72,+++++,+++,20336,63,7111us,10666ms,14067ms,65528us,592ms,170ms,949us,107us,160us,383us,31us,130us\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nnewzonedb.z1.p 379G 1561 99 552277 46 363872 34 3005 90 981924 49 179.1 56Latency 7111us 10666ms 14067ms 65528us 592ms 170ms ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CPnewbox 16 19107 69 +++++ +++ 20006 69 19571 72 +++++ +++ 20336 63\nLatency 949us 107us 160us 383us 31us 130usAlso, my inclination is to default to the following volume layout:2 disks in RAID 1 for system\n4 disks in RAID 10 for WAL (xfs)18 disks in RAID 10 for data (xfs)Use case is minimal OLTP traffic, plus a fair amount of data warehouse style traffic - low connection count, queries over sizeable fact tables (100s of millions of rows) partitioned over time, insert-only data loading, via COPY, plus some tables are populated via aggregation queries over other tables. Basically, based on performance of our current hardware, I'm not concerned about being able to handle the data-loading load, with the 4 drive raid 10 volume, so emphasis is on warehouse query speed. I'm not best pleased by the 2 Ghz CPUs, in that context, but I wasn't given a choice on the hardware.\nAny comments on that proposal are welcome. I've got only a week to settle on a config and ready the box for production, so the number of iterations I can go through is limited.",
"msg_date": "Tue, 13 Sep 2011 12:13:57 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "raid array seek performance"
},
{
"msg_contents": "On Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler\n<[email protected]>wrote:\n\n> I'm just beginning the process of benchmarking and tuning a new server.\n> Something I really haven't done before. I'm using Greg's book as a guide.\n> I started with bonnie++ (1.96) and immediately got anomalous results (I\n> think).\n>\n> Hardware is as follows:\n>\n> 2x quad core xeon 5504 2.0Ghz, 2x4MB cache\n> 192GB DDR3 1066 RAM\n> 24x600GB 15K rpm SAS drives\n> adaptec 52445 controller\n>\n> The default config, being tested at the moment, has 2 volumes, one 100GB\n> and one 3.2TB, both are built from a stripe across all 24 disks, rather than\n> splitting some spindles out for one volume and another set for the other\n> volume. At the moment, I'm only testing against the single 3.2TB volume.\n>\n> The smaller volume is partitioned into /boot (ext2 and tiny) and / (ext4\n> and 91GB). The larger volume is mounted as xfs with the following options\n> (cribbed from an email to the list earlier this week, I\n> think): logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\n>\n> Bonnie++ delivered the expected huge throughput for sequential read and\n> write. It seems in line with other benchmarks I found online. However, we\n> are only seeing 180 seeks/sec, but seems quite low. I'm hoping someone\n> might be able to confirm that and. hopefully, make some suggestions for\n> tracking down the problem if there is one.\n>\n> Results are as follows:\n>\n>\n> 1.96,1.96,newbox,1,1315935572,379G,,1561,99,552277,46,363872,34,3005,90,981924,49,179.1,56,16,,,,,19107,69,+++++,+++,20006,69,19571,72,+++++,+++,20336,63,7111us,10666ms,14067ms,65528us,592ms,170ms,949us,107us,160us,383us,31us,130us\n>\n>\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n> %CP\n> newzonedb.z1.p 379G 1561 99 552277 46 363872 34 3005 90 981924 49\n> 179.1 56\n> Latency 7111us 10666ms 14067ms 65528us 592ms\n> 170ms\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> %CP\n> newbox 16 19107 69 +++++ +++ 20006 69 19571 72 +++++ +++\n> 20336 63\n> Latency 949us 107us 160us 383us 31us\n> 130us\n>\n>\nMy seek times increase when I reduce the size of the file, which isn't\nsurprising, since once everything fits into cache, seeks aren't dependent on\nmechanical movement. However, I am seeing lots of bonnie++ results in\ngoogle which appear to be for a file size that is 2x RAM which show numbers\ncloser to 1000 seeks/sec (compared to my 180). Usually, I am seeing 16GB\nfile for 8GB hosts. So what is an acceptable random seeks/sec number for a\nfile that is 2x memory? And does file size make a difference independent of\navailable RAM such that the enormous 379GB file that is created on my host\nis skewing the results to the low end?\n\nOn Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler <[email protected]> wrote:\nI'm just beginning the process of benchmarking and tuning a new server. Something I really haven't done before. I'm using Greg's book as a guide. 
I started with bonnie++ (1.96) and immediately got anomalous results (I think).\nHardware is as follows:2x quad core xeon 5504 2.0Ghz, 2x4MB cache192GB DDR3 1066 RAM24x600GB 15K rpm SAS drivesadaptec 52445 controller\n\nThe default config, being tested at the moment, has 2 volumes, one 100GB and one 3.2TB, both are built from a stripe across all 24 disks, rather than splitting some spindles out for one volume and another set for the other volume. At the moment, I'm only testing against the single 3.2TB volume.\nThe smaller volume is partitioned into /boot (ext2 and tiny) and / (ext4 and 91GB). The larger volume is mounted as xfs with the following options (cribbed from an email to the list earlier this week, I think): logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m\nBonnie++ delivered the expected huge throughput for sequential read and write. It seems in line with other benchmarks I found online. However, we are only seeing 180 seeks/sec, but seems quite low. I'm hoping someone might be able to confirm that and. hopefully, make some suggestions for tracking down the problem if there is one.\nResults are as follows:1.96,1.96,newbox,1,1315935572,379G,,1561,99,552277,46,363872,34,3005,90,981924,49,179.1,56,16,,,,,19107,69,+++++,+++,20006,69,19571,72,+++++,+++,20336,63,7111us,10666ms,14067ms,65528us,592ms,170ms,949us,107us,160us,383us,31us,130us\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nnewzonedb.z1.p 379G 1561 99 552277 46 363872 34 3005 90 981924 49 179.1 56Latency 7111us 10666ms 14067ms 65528us 592ms 170ms ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CPnewbox 16 19107 69 +++++ +++ 20006 69 19571 72 +++++ +++ 20336 63\nLatency 949us 107us 160us 383us 31us 130usMy seek times increase when I reduce the size of the file, which isn't surprising, since once everything fits into cache, seeks aren't dependent on mechanical movement. However, I am seeing lots of bonnie++ results in google which appear to be for a file size that is 2x RAM which show numbers closer to 1000 seeks/sec (compared to my 180). Usually, I am seeing 16GB file for 8GB hosts. So what is an acceptable random seeks/sec number for a file that is 2x memory? And does file size make a difference independent of available RAM such that the enormous 379GB file that is created on my host is skewing the results to the low end?",
"msg_date": "Tue, 13 Sep 2011 16:27:26 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: raid array seek performance"
},
{
"msg_contents": "On 09/13/2011 03:13 PM, Samuel Gendler wrote:\n> Bonnie++ delivered the expected huge throughput for sequential read \n> and write. It seems in line with other benchmarks I found online. \n> However, we are only seeing 180 seeks/sec, but seems quite low.\n\nI wouldn't worry about that if the sequential rates are good. The \nbonnie++ seeks test has been giving me increasingly useless results \nrecently on modern hardware. And bonnie++ 1.96 continues to give me \nenough weird values that I'm still using 1.03e as my standard version.\n\nIf you want to get a useful measurement of seeks/second, setup \npgbench-tools with a SELECT-only test, and create a database that's 2 to \n4X as big as RAM. The TPS result you get from that is a much more \nuseful number for real-world seeks than this.\n\nI'm working on a tool to directly benchmark seek performance in a way \nthat's useful for what people really want to know nowadays. That's \ngoing live to the world at the end of the month, at #PgWest: \nhttp://pgwest2011.sched.org/event/875b87d8d237bef3a53ab27ac9c8057c\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 14 Sep 2011 03:44:53 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: raid array seek performance"
},
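A plain-pgbench version of that idea (pgbench-tools wraps essentially the same thing). The scale factor below is a rough assumption based on very roughly 15MB of data per scale unit, aiming at about 2x the 192GB of RAM; check the resulting database size rather than trusting the arithmetic:

    pgbench -i -s 26000 seektest       # initialize; target ~2x RAM on this box (rough estimate)
    pgbench -S -c 16 -T 300 seektest   # SELECT-only run; TPS here is dominated by random seeks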
{
"msg_contents": "On Wed, Sep 14, 2011 at 2:44 AM, Greg Smith <[email protected]> wrote:\n> If you want to get a useful measurement of seeks/second, setup pgbench-tools\n> with a SELECT-only test, and create a database that's 2 to 4X as big as RAM.\n> The TPS result you get from that is a much more useful number for\n> real-world seeks than this.\n\nA thought on that note: it sure would be nice if you could define\nscaling factor in terms of data size instead of linear multiples of\n100000, something like:\n\npgbench -i -x 64gb\n\nmerlin\n",
"msg_date": "Wed, 14 Sep 2011 13:18:49 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: raid array seek performance"
}
] |
[
{
"msg_contents": "Robert Klemme wrote:\n> On 12.09.2011 19:22, Andy Colson wrote:\n \n>> There are transaction isolation levels, but they are like playing\n>> with fire. (in my opinion).\n \n> You make them sound like witchcraft. But they are clearly defined\n> - even standardized.\n \nYeah, for decades. Developing concurrency control from scratch at\nthe application level over and over again is more like playing with\nfire, in my book.\n \n> Granted, different RDBMS might implement them in different ways -\n> here's PG's view of TX isolation:\n\n\n> http://www.postgresql.org/docs/8.4/interactive/transaction-iso.html\n \nOh, that link is *so* day-before-yesterday! Try this one:\n \nhttp://www.postgresql.org/docs/9.1/interactive/transaction-iso.html\n\n\n> In my opinion anybody working with RDBMS should make himself\n> familiar with this concept - at least know about it - because it\n> is one of the fundamental features of RDBMS and certainly needs\n> consideration in applications with highly concurrent DB activity.\n \n+1\n \nUnderstanding what levels of transaction isolation are available,\nand what the implications of each are, is fundamental. Just as\nthere are cases where a foreign key constraint doesn't exactly work\nfor what you need to enforce, there are cases where serializable\ntransactions don't fit. But where they do fit, developing the\nequivalent from scratch all over again is not as safe or productive\nas using the built-in feature.\n \n-Kevin\n\n\n",
"msg_date": "Tue, 13 Sep 2011 15:34:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres for a \"data warehouse\", 5-10 TB"
}
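For reference, leaning on the built-in isolation level instead of hand-rolled locking looks roughly like this (table and column names are made up); under SERIALIZABLE a conflicting transaction is rolled back with SQLSTATE 40001 and simply needs to be retried:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT qty FROM stock WHERE item_id = 42;            -- hypothetical read
    UPDATE stock SET qty = qty - 1 WHERE item_id = 42;   -- dependent write
    COMMIT;  -- may fail with serialization_failure (40001); retry the whole transaction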
] |
[
{
"msg_contents": "Carlo Stonebanks wrote:\n \n>> max_connections = 300\n> Too high. Both throughput and latency should improve with correct\n> use of a connection pooler.\n \n> Even for 300 stateful applications that can remain connected for\n> up to a week, continuously distilling data (imports)?\n \nAbsolutely.\n \nA good connection pooler will be able to hold those 300 *client*\nconnections, and maintain a much smaller set of connections to the\ndatabase. It will notice when a client connection is requesting the\nstart of a database transaction. If there is an idle database\nconnection it will route the requests there; otherwise it will put\nthat client connection in a queue. When a database transaction is\ncommitted, a waiting client connection (if any) will be assigned to\nits database connection.\n \nEvery benchmark I've seen shows that this will improve both\nthroughput and latency over the approach of releasing a \"thundering\nherd\" of requests against the server. Picture a meat counter with\nfour butchers behind it, and few spinning devices to slice meat.\nIf customers queue up, and the butchers call on people as they are\nready, things go better than if each butcher tries to take on one-\nfourth of the customers at a time and constantly switch between one\norder and another to try to make incremental progress on all of\nthem.\n \n> a sys admin raised it from 100 when multiple large projects were\n> loaded and the server refused the additional connections.\n \nWhoever is making these decisions needs more training. I suggest\nGreg Smith's book:\n \nhttp://www.postgresql.org/docs/books/\n \n(Full disclosure, I was a technical reviewer of the book and got a\nfree copy.)\n \n> you want the controller configured for write-back (with automatic\n> switch to write-through on low or failed battery, if possible).\n \nFor performance or safety reasons?\n \nYou get better performance with write-back. If you can't rely on\nthe battery, then write-back is not safe and you need to use write-\nthrough.\n \n> Since the sys admin thinks there's no performance benefit from\n> this, I would like to be clear on why we should do this.\n \nIf you can get him to change it back and forth for performance\ntesting, it is easy enough to prove. Write a client application\nwhich inserts on row per database transaction. A nice, simple,\nshort row -- like containing one integer column with no indexes.\nHave the external application create the table and do a million\ninserts. Try this with both cache settings. It's best not to\nissue a BEGIN and COMMIT at all. Don't loop in a function or a DO\nblock, because that creates an implicit transaction.\n \n> Every now and then the imports behave as if they are suddenly\n> taking a deep breath, slowing down. Sometimes, so much we cancel\n> the import and restart (the imports pick up where they left off).\n> \n> What would the bg_writer settings be in this case?\n \nI'm not sure what that is based on information so far, so it's\nunclear whether background writer settings would help; but on the\nface of it my bet would be that it's a context switching storm or\nswapping, and the connection pool would be the better solution.\nThose poor butchers are just overwhelmed....\n \n-Kevin\n\n\n",
"msg_date": "Tue, 13 Sep 2011 16:13:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n\t (re-post)"
},
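One low-effort way to run Kevin's write-back vs. write-through test without writing a client from scratch is a single-statement pgbench custom script: with no BEGIN/COMMIT in the file, every execution is one INSERT and one commit. The invocation would be something like pgbench -n -f insert_test.sql -c 1 -t 100000 dbname; compare the reported TPS with the controller cache in each mode. The file name and table are assumptions:

    -- run once via psql:
    CREATE TABLE commit_test (n integer);

    -- insert_test.sql, the entire custom script pgbench replays;
    -- no explicit BEGIN/COMMIT, so each execution is its own transaction:
    INSERT INTO commit_test VALUES (1);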
{
"msg_contents": "Ok, connection pooler it is. As I understand it, even if there are no idle connections available we'll get the benefit of putting a turnstile on the butcher's door.\n\nI also ordered the book as soon as you mentioned - the title alone was enough to sell me on it! The book won't be for the errant sys admin who increased the connections, it's for me - I'll use it to whack the sys admin on the head. Thanks fo rthe tip, the author owes you a beer - as do I. Will the book recommend any particular connection pooler product, or is it inappropriate to ask for a recommendation on the forum? Carlo > Date: Tue, 13 Sep 2011 16:13:00 -0500\n> From: [email protected]\n> To: [email protected]; [email protected]\n> Subject: RE: [PERFORM] Migrated from 8.3 to 9.0 - need to update config\t (re-post)\n> \n> Carlo Stonebanks wrote:\n> \n> >> max_connections = 300\n> > Too high. Both throughput and latency should improve with correct\n> > use of a connection pooler.\n> \n> > Even for 300 stateful applications that can remain connected for\n> > up to a week, continuously distilling data (imports)?\n> \n> Absolutely.\n> \n> A good connection pooler will be able to hold those 300 *client*\n> connections, and maintain a much smaller set of connections to the\n> database. It will notice when a client connection is requesting the\n> start of a database transaction. If there is an idle database\n> connection it will route the requests there; otherwise it will put\n> that client connection in a queue. When a database transaction is\n> committed, a waiting client connection (if any) will be assigned to\n> its database connection.\n> \n> Every benchmark I've seen shows that this will improve both\n> throughput and latency over the approach of releasing a \"thundering\n> herd\" of requests against the server. Picture a meat counter with\n> four butchers behind it, and few spinning devices to slice meat.\n> If customers queue up, and the butchers call on people as they are\n> ready, things go better than if each butcher tries to take on one-\n> fourth of the customers at a time and constantly switch between one\n> order and another to try to make incremental progress on all of\n> them.\n> \n> > a sys admin raised it from 100 when multiple large projects were\n> > loaded and the server refused the additional connections.\n> \n> Whoever is making these decisions needs more training. I suggest\n> Greg Smith's book:\n> \n> http://www.postgresql.org/docs/books/\n> \n> (Full disclosure, I was a technical reviewer of the book and got a\n> free copy.)\n> \n> > you want the controller configured for write-back (with automatic\n> > switch to write-through on low or failed battery, if possible).\n> \n> For performance or safety reasons?\n> \n> You get better performance with write-back. If you can't rely on\n> the battery, then write-back is not safe and you need to use write-\n> through.\n> \n> > Since the sys admin thinks there's no performance benefit from\n> > this, I would like to be clear on why we should do this.\n> \n> If you can get him to change it back and forth for performance\n> testing, it is easy enough to prove. Write a client application\n> which inserts on row per database transaction. A nice, simple,\n> short row -- like containing one integer column with no indexes.\n> Have the external application create the table and do a million\n> inserts. Try this with both cache settings. It's best not to\n> issue a BEGIN and COMMIT at all. 
Don't loop in a function or a DO\n> block, because that creates an implicit transaction.\n> \n> > Every now and then the imports behave as if they are suddenly\n> > taking a deep breath, slowing down. Sometimes, so much we cancel\n> > the import and restart (the imports pick up where they left off).\n> > \n> > What would the bg_writer settings be in this case?\n> \n> I'm not sure what that is based on information so far, so it's\n> unclear whether background writer settings would help; but on the\n> face of it my bet would be that it's a context switching storm or\n> swapping, and the connection pool would be the better solution.\n> Those poor butchers are just overwhelmed....\n> \n> -Kevin\n> \n> ",
"msg_date": "Wed, 14 Sep 2011 01:27:06 +0000",
"msg_from": "Carlo Stonebanks <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n (re-post)"
},
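A minimal sketch of the write-back vs. write-through test Kevin outlines above, using a custom pgbench script so that every INSERT runs in its own implicit transaction (the table, file and database names are illustrative, not from the thread):

    CREATE TABLE cache_test (n integer);   -- one short integer row, no indexes

    -- cache_test.sql, the pgbench script file: a single statement, no BEGIN/COMMIT
    INSERT INTO cache_test (n) VALUES (1);

Run something like "pgbench -n -f cache_test.sql -c 1 -t 1000000 testdb" once with the controller in write-back and once in write-through, and compare the wall-clock times.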
{
"msg_contents": "\n\nFrom: Carlo Stonebanks [mailto:[email protected]] \nSent: Tuesday, September 13, 2011 9:27 PM\nTo: Performance support Postgresql\nSubject: Re: Migrated from 8.3 to 9.0 - need to update config (re-post)\n\n\n \n________________________________________\nOk, connection pooler it is. As I understand it, even if there are no idle connections available we'll get the benefit of putting a turnstile on the butcher's door.\nI also ordered the book as soon as you mentioned - the title alone was enough to sell me on it! The book won't be for the errant sys admin who increased the connections, it's for me - I'll use it to whack the sys admin on the head. Thanks fo rthe tip, the author owes you a beer - as do I.\n \nWill the book recommend any particular connection pooler product, or is it inappropriate to ask for a recommendation on the forum?\n \nCarlo\n \n\nI'd start with the pg_bouncer: very simple to setup, reliable, no \"extra\" functionality, which seems by your message you don't need.\n\nIgor Neyman\n",
"msg_date": "Wed, 14 Sep 2011 15:40:07 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config (re-post)"
},
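Since PgBouncer comes up here, a minimal transaction-pooling configuration along the lines discussed in this thread; the paths, database name and pool size are illustrative assumptions, not taken from the thread:

    ; pgbouncer.ini (sketch)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; give a client a server connection only for the duration of a transaction
    pool_mode = transaction
    ; the 300 client connections discussed above
    max_client_conn = 300
    ; actual database connections per database/user pair
    default_pool_size = 20

Applications then connect to port 6432 instead of 5432 and keep their long-lived client connections; PgBouncer maintains the much smaller set of real database connections.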
{
"msg_contents": "On Wed, Sep 14, 2011 at 03:40:07PM -0400, Igor Neyman wrote:\n> \n> \n> From: Carlo Stonebanks [mailto:[email protected]] \n> Sent: Tuesday, September 13, 2011 9:27 PM\n> To: Performance support Postgresql\n> Subject: Re: Migrated from 8.3 to 9.0 - need to update config (re-post)\n> \n> \n> �\n> ________________________________________\n> Ok, connection pooler it is. As I understand it, even if there are no idle connections available�we'll�get the benefit of putting a turnstile on the�butcher's�door.\n> I also ordered the book as soon as you mentioned - the title alone was enough to sell me on it! The book won't be for the errant sys admin who increased the connections, it's for me -�I'll use it to whack the sys admin on the head. Thanks fo rthe tip, the author owes you a beer - as do I.\n> �\n> Will the book recommend any particular connection pooler product, or is it inappropriate to ask for a recommendation on the forum?\n> �\n> Carlo\n> �\n> \n> I'd start with the pg_bouncer: very simple to setup, reliable, no \"extra\" functionality, which seems by your message you don't need.\n> \n> Igor Neyman\n\n+1 for pg_bouncer being easy to setup and use and being robust. We also use pgpool here\nbut its is a much bigger beast and I suspect that you do not need its bells and whistles.\n\nRegards,\nKen\n",
"msg_date": "Wed, 14 Sep 2011 14:43:43 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n (re-post)"
}
] |
[
{
"msg_contents": "The doc at http://www.postgresql.org/docs/current/interactive/indexes-types.html\nsays: \"Caution: Hash index operations are not presently WAL-logged, so\nhash indexes might need to be rebuilt with REINDEX after a database\ncrash. They are also not replicated over streaming or file-based\nreplication. For these reasons, hash index use is presently\ndiscouraged.\"\n\nI found a thread here\nhttp://archives.postgresql.org/pgsql-general/2005-05/msg00370.php\nabout <<\"Hash index\" vs. \"b-tree index\" (PostgreSQL 8.0)>> mentioning\nsome issues, like they\n* are not faster than B-trees even for = comparisons\n* aren't WAL safe\n* have poor concurrency (require coarser locks),\n* are significantly slower than creating a b+-tree index.\n\nIn fact these statements seem to rely on the docs back in version 7.2\n(see http://www.postgresql.org/docs/7.2/static/indexes-types.html )\n\nHas this been verified on a recent release? I can't believe that hash\nperforms so bad over all these points. Theory tells me otherwise and\nhttp://en.wikipedia.org/wiki/Hash_table seems to be a success.\n\nAre there any plans to give hash index another chance (or to bury it\nwith a reason)?\n\nStefan\n",
"msg_date": "Wed, 14 Sep 2011 01:04:27 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash index use presently(?) discouraged since 2005: revive or bury\n it?"
},
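For readers who have not used them: the index type under discussion is created with USING hash and only helps simple equality lookups (a generic sketch, not code from the thread):

    CREATE TABLE t (k text);
    CREATE INDEX t_k_hash ON t USING hash (k);

    SELECT * FROM t WHERE k = 'abc';   -- can use the hash index
    SELECT * FROM t WHERE k > 'abc';   -- cannot: no ordering or range support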
{
"msg_contents": "On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n> Has this been verified on a recent release? I can't believe that hash\n> performs so bad over all these points. Theory tells me otherwise and\n> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n\nHash indexes have been improved since 2005 - their performance was\nimproved quite a bit in 9.0. Here's a more recent analysis:\n\nhttp://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Wed, 14 Sep 2011 01:04:36 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>> Has this been verified on a recent release? I can't believe that hash\n>> performs so bad over all these points. Theory tells me otherwise and\n>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n\n> Hash indexes have been improved since 2005 - their performance was\n> improved quite a bit in 9.0. Here's a more recent analysis:\n\n> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n\nYeah, looking into the git logs shows several separate major changes\ncommitted during 2008, including storing only the hash code not the\nwhole indexed value (big win on wide values, and lets you index values\nlarger than one index page, which doesn't work in btree). I think that\nthe current state of affairs is still what depesz said, namely that\nthere might be cases where they'd be a win to use, except the lack of\nWAL support is a killer. I imagine somebody will step up and do that\neventually.\n\nThe big picture though is that we're not going to remove hash indexes,\neven if they're nearly useless in themselves, because hash index\nopclasses provide the foundation for the system's knowledge of how to\ndo the datatype-specific hashing needed for hash joins and hash\naggregation. And those things *are* big wins, even if hash indexes\nthemselves never become so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Sep 2011 20:24:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
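A quick illustration of Tom's distinction between hash opclasses and hash indexes: hash joins and hashed aggregation show up in plans on tables that carry no hash index at all (table names are invented; the exact plan chosen depends on statistics and settings):

    CREATE TABLE a (id int, v text);
    CREATE TABLE b (id int, w text);
    -- no indexes of any kind created

    EXPLAIN SELECT * FROM a JOIN b USING (id);      -- typically a Hash Join
    EXPLAIN SELECT v, count(*) FROM a GROUP BY v;   -- typically a HashAggregate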
{
"msg_contents": "2011/9/14 Tom Lane <[email protected]>:\n> (...) I think that\n> the current state of affairs is still what depesz said, namely that\n> there might be cases where they'd be a win to use, except the lack of\n> WAL support is a killer. I imagine somebody will step up and do that\n> eventually.\n\nShould I open a ticket?\n\nStefan\n\n2011/9/14 Tom Lane <[email protected]>:\n> Peter Geoghegan <[email protected]> writes:\n>> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>>> Has this been verified on a recent release? I can't believe that hash\n>>> performs so bad over all these points. Theory tells me otherwise and\n>>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n>\n>> Hash indexes have been improved since 2005 - their performance was\n>> improved quite a bit in 9.0. Here's a more recent analysis:\n>\n>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n>\n> Yeah, looking into the git logs shows several separate major changes\n> committed during 2008, including storing only the hash code not the\n> whole indexed value (big win on wide values, and lets you index values\n> larger than one index page, which doesn't work in btree). I think that\n> the current state of affairs is still what depesz said, namely that\n> there might be cases where they'd be a win to use, except the lack of\n> WAL support is a killer. I imagine somebody will step up and do that\n> eventually.\n>\n> The big picture though is that we're not going to remove hash indexes,\n> even if they're nearly useless in themselves, because hash index\n> opclasses provide the foundation for the system's knowledge of how to\n> do the datatype-specific hashing needed for hash joins and hash\n> aggregation. And those things *are* big wins, even if hash indexes\n> themselves never become so.\n>\n> regards, tom lane\n>\n",
"msg_date": "Wed, 14 Sep 2011 08:39:50 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On 14.09.2011 09:39, Stefan Keller wrote:\n> Should I open a ticket?\n\nWhat ticket? With whom?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 14 Sep 2011 11:58:59 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On 14.09.2011 03:24, Tom Lane wrote:\n> The big picture though is that we're not going to remove hash indexes,\n> even if they're nearly useless in themselves, because hash index\n> opclasses provide the foundation for the system's knowledge of how to\n> do the datatype-specific hashing needed for hash joins and hash\n> aggregation. And those things *are* big wins, even if hash indexes\n> themselves never become so.\n\nWe could drop the hash indexam code but keep the opclasses etc. I'm not \nsure that would gain us, though.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 14 Sep 2011 12:03:58 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": ">> Hash indexes have been improved since 2005 - their performance was\n\n>> improved quite a bit in 9.0. Here's a more recent analysis:\n> \n>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n> \n> The big picture though is that we're not going to remove hash indexes,\n> even if they're nearly useless in themselves\n\nWell, if they provide 3x the performance of btree indexes on index creation,\nI wouldn't call them \"useless\" just because they're not logged or they can't\nbe unique. In fact, I think the docs should specify that in index creation\nthey're actually better than btree (if, in fact, they are and the \"depesz\" test\nis not a corner case).\n",
"msg_date": "Wed, 14 Sep 2011 10:43:27 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
{
"msg_contents": "2011/9/14 Tom Lane <[email protected]> writes:\n> (...) I think that\n> the current state of affairs is still what depesz said, namely that\n> there might be cases where they'd be a win to use, except the lack of\n> WAL support is a killer. I imagine somebody will step up and do that\n> eventually.\n\nHow much of work (in man days) do you estimate would this mean for\nsomeone who can program but has to learn PG internals first?\n\nStefan\n\n2011/9/14 Tom Lane <[email protected]>:\n> Peter Geoghegan <[email protected]> writes:\n>> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>>> Has this been verified on a recent release? I can't believe that hash\n>>> performs so bad over all these points. Theory tells me otherwise and\n>>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n>\n>> Hash indexes have been improved since 2005 - their performance was\n>> improved quite a bit in 9.0. Here's a more recent analysis:\n>\n>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n>\n> Yeah, looking into the git logs shows several separate major changes\n> committed during 2008, including storing only the hash code not the\n> whole indexed value (big win on wide values, and lets you index values\n> larger than one index page, which doesn't work in btree). I think that\n> the current state of affairs is still what depesz said, namely that\n> there might be cases where they'd be a win to use, except the lack of\n> WAL support is a killer. I imagine somebody will step up and do that\n> eventually.\n>\n> The big picture though is that we're not going to remove hash indexes,\n> even if they're nearly useless in themselves, because hash index\n> opclasses provide the foundation for the system's knowledge of how to\n> do the datatype-specific hashing needed for hash joins and hash\n> aggregation. And those things *are* big wins, even if hash indexes\n> themselves never become so.\n>\n> regards, tom lane\n>\n",
"msg_date": "Thu, 15 Sep 2011 01:03:46 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Stefan Keller <[email protected]> writes:\n> 2011/9/14 Tom Lane <[email protected]> writes:\n>> (...) I think that\n>> the current state of affairs is still what depesz said, namely that\n>> there might be cases where they'd be a win to use, except the lack of\n>> WAL support is a killer. I imagine somebody will step up and do that\n>> eventually.\n\n> How much of work (in man days) do you estimate would this mean for\n> someone who can program but has to learn PG internals first?\n\nNo idea ... I'm probably not the best person to estimate how long it\nwould take someone to get up to speed on the relevant internals,\nbut I'm sure that would take longer than actually doing the work.\nWhile it's not a trivial task, I think it fits the definition of\n\"a small matter of programming\": a piece of code whose anticipated\nlength is significantly greater than its complexity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Sep 2011 19:40:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
{
"msg_contents": "On Wed, Sep 14, 2011 at 4:03 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 14.09.2011 03:24, Tom Lane wrote:\n>>\n>> The big picture though is that we're not going to remove hash indexes,\n>> even if they're nearly useless in themselves, because hash index\n>> opclasses provide the foundation for the system's knowledge of how to\n>> do the datatype-specific hashing needed for hash joins and hash\n>> aggregation. And those things *are* big wins, even if hash indexes\n>> themselves never become so.\n>\n> We could drop the hash indexam code but keep the opclasses etc. I'm not sure\n> that would gain us, though.\n\nHM, what if you junked the current hash indexam, and just implemented\na wrapper over btree so that the 'hash index' was just short hand for\nhashing the value into a standard index?\n\nmerlin\n",
"msg_date": "Thu, 15 Sep 2011 15:00:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Thu, Sep 15, 2011 at 5:00 PM, Merlin Moncure <[email protected]> wrote:\n>\n> HM, what if you junked the current hash indexam, and just implemented\n> a wrapper over btree so that the 'hash index' was just short hand for\n> hashing the value into a standard index?\n\nI'm doing this (only by hand, indexing on hash(blah)) on an\napplication, and it works wonders.\nBut... it's kinda not a hash table. It's still O(log N).\n\nHowever, it would be a *very* useful feature if it can be made\ntransparent for applications.\nAnd I would prefer it over a true hashtable, in the end. Hashes are,\nin fact, O(N) worst case.\n",
"msg_date": "Thu, 15 Sep 2011 17:28:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
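Spelled out, the "by hand" scheme Claudio describes is an expression btree over a hash of the column plus a re-check on the real value; md5() is used below only because it is a documented immutable function (the messages write a generic hash()):

    CREATE TABLE foo (a_long_field text);
    CREATE INDEX foo_md5_idx ON foo (md5(a_long_field));

    -- the second condition guards against hash collisions
    SELECT *
      FROM foo
     WHERE md5(a_long_field) = md5('some value')
       AND a_long_field = 'some value';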
{
"msg_contents": "On Thu, Sep 15, 2011 at 3:28 PM, Claudio Freire <[email protected]> wrote:\n> On Thu, Sep 15, 2011 at 5:00 PM, Merlin Moncure <[email protected]> wrote:\n>>\n>> HM, what if you junked the current hash indexam, and just implemented\n>> a wrapper over btree so that the 'hash index' was just short hand for\n>> hashing the value into a standard index?\n>\n> I'm doing this (only by hand, indexing on hash(blah)) on an\n> application, and it works wonders.\n> But... it's kinda not a hash table. It's still O(log N).\n>\n> However, it would be a *very* useful feature if it can be made\n> transparent for applications.\n> And I would prefer it over a true hashtable, in the end. Hashes are,\n> in fact, O(N) worst case.\n\nyeah -- in my (limited) testing, with int4 or int8, btree handily\nmeets or beats hash on creation, access time, and index size. this\nsuggests to me that a separate index implementation for hash isn't\nbuying us much -- the integer btree code is highly optimized.\n\nmerlin\n",
"msg_date": "Thu, 15 Sep 2011 17:14:42 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> HM, what if you junked the current hash indexam, and just implemented\n> a wrapper over btree so that the 'hash index' was just short hand for\n> hashing the value into a standard index?\n\nSurely creating such a wrapper would be *more* work than adding WAL\nsupport to the hash AM.\n\nI'm not entirely following this eagerness to junk that AM, anyway.\nWe've put a lot of sweat into it over the years, in the hopes that\nit would eventually be good for something. It's on the edge of\nbeing good for something now, and there's doubtless room for more\nimprovements, so why are the knives out?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Sep 2011 18:38:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
{
"msg_contents": "On Fri, Sep 16, 2011 at 12:38 AM, Tom Lane <[email protected]> wrote:\n> I'm not entirely following this eagerness to junk that AM, anyway.\n> We've put a lot of sweat into it over the years, in the hopes that\n> it would eventually be good for something. It's on the edge of\n> being good for something now, and there's doubtless room for more\n> improvements, so why are the knives out?\n\nThere are lots of issues with hash tables. I'm not going to enumerate\nthem, you probably know them better than I.\n\nBut the reality of it is that btree on hash values is a very useful\nindex kind. It has stable performance, is very compact, and supports\nany type, even user defined, if the hashing function can be\ncustomized. They're better than hashes in all my tests for my use case\n(which is indexing over a column with long strings), and the only\ndrawback is that they have to be supported by application code.\n\nIf PG could have a native implementation, I'm sure lots of people\nwould find it useful.\n\nMaybe scrapping the hash index is too much, but support for indexing\nwith btree with hashing would be very neat.\n\nI read recently hash removed the need to store the value in the index,\nso I don't expect such a wrapper to be difficult to write.\n",
"msg_date": "Fri, 16 Sep 2011 02:34:04 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Thu, Sep 15, 2011 at 5:38 PM, Tom Lane <[email protected]> wrote:\n> Merlin Moncure <[email protected]> writes:\n>> HM, what if you junked the current hash indexam, and just implemented\n>> a wrapper over btree so that the 'hash index' was just short hand for\n>> hashing the value into a standard index?\n>\n> Surely creating such a wrapper would be *more* work than adding WAL\n> support to the hash AM.\n>\n> I'm not entirely following this eagerness to junk that AM, anyway.\n> We've put a lot of sweat into it over the years, in the hopes that\n> it would eventually be good for something. It's on the edge of\n> being good for something now, and there's doubtless room for more\n> improvements, so why are the knives out?\n\nJust making an observation. Some quick tests follow the sig. I think\nthe point here is that something has to be done -- now that the\nreplication train has left the station, not having WAL has gone from\nquirky annoyance to major functionality failure. The recent hash work\nhas brought down index build times to a reasonable level, but they are\nstill getting beat by btree. Of course, it's not quite apples to\napples (I figure the timings will even out to an extent once you add\nin the hashing wrapper), but I can't help but wonder if the btree code\nis a better driver and consolidating code is a good thing.\n\nmerlin\n\npostgres=# create table v as select generate_series(1,10000000) as x;\nSELECT 10000000\nTime: 16750.961 ms\npostgres=# create index on v(x);\nCREATE INDEX\nTime: 15158.637 ms\npostgres=# create index on v using hash(x);\nCREATE INDEX\nTime: 22505.468 ms\n\npostgres=# \\d v\n Table \"public.v\"\n Column | Type | Modifiers\n--------+---------+-----------\n x | integer |\nIndexes:\n \"v_x_idx\" btree (x)\n \"v_x_idx1\" hash (x)\n\npostgres=# select relname, relfilenode from pg_class where relname like 'v_x%';\n relname | relfilenode\n----------+-------------\n v_x_idx | 16525\n v_x_idx1 | 16526\n(2 rows)\n\nc:\\Program Files\\PostgreSQL\\9.0\\data>dir/s | grep 16525\n09/15/2011 07:46 PM 224,641,024 16525\n\nc:\\Program Files\\PostgreSQL\\9.0\\data>dir/s | grep 16526\n09/15/2011 07:49 PM 268,451,840 16526\n",
"msg_date": "Thu, 15 Sep 2011 20:00:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Fri, Sep 16, 2011 at 3:00 AM, Merlin Moncure <[email protected]> wrote:\n>\n> c:\\Program Files\\PostgreSQL\\9.0\\data>dir/s | grep 16525\n> 09/15/2011 07:46 PM 224,641,024 16525\n>\n> c:\\Program Files\\PostgreSQL\\9.0\\data>dir/s | grep 16526\n> 09/15/2011 07:49 PM 268,451,840 16526\n\nThat's not surprising at all.\nHashes need to be bigger to avoid collisions.\n\nWhat's more interesting than index creation, is index maintainance and\naccess costs.\nIn my experience, btree beats hash.\nI haven't tried with 9.1, though.\n",
"msg_date": "Fri, 16 Sep 2011 03:04:58 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Thu, Sep 15, 2011 at 8:00 PM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Sep 15, 2011 at 5:38 PM, Tom Lane <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> writes:\n>>> HM, what if you junked the current hash indexam, and just implemented\n>>> a wrapper over btree so that the 'hash index' was just short hand for\n>>> hashing the value into a standard index?\n>>\n>> Surely creating such a wrapper would be *more* work than adding WAL\n>> support to the hash AM.\n>>\n>> I'm not entirely following this eagerness to junk that AM, anyway.\n>> We've put a lot of sweat into it over the years, in the hopes that\n>> it would eventually be good for something. It's on the edge of\n>> being good for something now, and there's doubtless room for more\n>> improvements, so why are the knives out?\n>\n> Just making an observation. Some quick tests follow the sig. I think\n> the point here is that something has to be done -- now that the\n> replication train has left the station, not having WAL has gone from\n> quirky annoyance to major functionality failure. The recent hash work\n> has brought down index build times to a reasonable level, but they are\n> still getting beat by btree. Of course, it's not quite apples to\n> apples (I figure the timings will even out to an extent once you add\n> in the hashing wrapper), but I can't help but wonder if the btree code\n> is a better driver and consolidating code is a good thing.\n\nodd: I was pondering Claudio's point about maintenance of hash indexes\nvs btree and decided to do some more tests. Something very strange is\nhappening: I decided to compare 'update v set x=x+1', historically\none of postgres's weaker points, on the 10M table indexed hash vs\nbtree. The btree typically muddled through in about 5 minutes:\n\npostgres=# update v set x=x+1;\nUPDATE 10000000\nTime: 302341.466 ms\n\nrecreating the table and hash index, I ran it again. 47 minutes into\nthe query, I started to get curious and noticed that cpu time disk\nusage are hovering near zero but nothing is blocked. disk space on the\nindex is *slowly* increasing, now at:\n09/15/2011 11:08 PM 541,024,256 16531\n\nthis is obviously, uh, windows, and I don't have good tools set up.\nI'll repeat the test when i get into the office this morning. thought\nI'd point it out. hm, cancelled the query, dropped the index, and\nre-ran the update without any indexes, everything is normal -- the\nupdate whistled through the table in about 35 seconds.\n\nhm, recreated the hash index and looking more carefully now. still\nseeing the lousy behavior. this definitely bears more investigation\n(this is 9.0.4)...\n\nmerlin\n",
"msg_date": "Thu, 15 Sep 2011 23:20:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
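When a backend stalls like that, the catalog views usually show whether it is blocked on a lock or just grinding; on 9.0 a first look could be the following (these are the 9.0 column names):

    SELECT procpid, waiting, current_query FROM pg_stat_activity;

    SELECT locktype, relation::regclass, mode, granted
      FROM pg_locks
     WHERE NOT granted;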
{
"msg_contents": "2011/9/16 Tom Lane <[email protected]>:\n> I'm not entirely following this eagerness to junk that AM, anyway.\n> We've put a lot of sweat into it over the years, in the hopes that\n> it would eventually be good for something. It's on the edge of\n> being good for something now, and there's doubtless room for more\n> improvements, so why are the knives out?\n\nNo knives from my side. Sorry for the exaggerated subject title.\nI'm also in favor for an enhanced hash index for cases where only \"=\"\ntests are processed and where only few inserts/deletes will occur.\n\nStefan\n",
"msg_date": "Sat, 17 Sep 2011 01:15:26 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Dne 15.9.2011 01:40, Tom Lane napsal(a):\n> Stefan Keller <[email protected]> writes:\n>> 2011/9/14 Tom Lane <[email protected]> writes:\n>>> (...) I think that\n>>> the current state of affairs is still what depesz said, namely that\n>>> there might be cases where they'd be a win to use, except the lack of\n>>> WAL support is a killer. I imagine somebody will step up and do that\n>>> eventually.\n> \n>> How much of work (in man days) do you estimate would this mean for\n>> someone who can program but has to learn PG internals first?\n> \n> No idea ... I'm probably not the best person to estimate how long it\n> would take someone to get up to speed on the relevant internals,\n> but I'm sure that would take longer than actually doing the work.\n> While it's not a trivial task, I think it fits the definition of\n> \"a small matter of programming\": a piece of code whose anticipated\n> length is significantly greater than its complexity.\n\nWe've been asked by a local university for PostgreSQL-related topics of\ntheses and seminary works, so I'm wondering if adding WAL support to\nhash indexes would be a good fit ...\n\nCan anyone estimate if a student with reasonable C-knowledge a can\nimplement this in about 4 months? It seems like a reasonable amount of\nresearch and work to me.\n\nTomas\n\nPS: All interesting thesis ideas are welcome.\n",
"msg_date": "Sat, 17 Sep 2011 03:04:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan <[email protected]> wrote:\n> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>> Has this been verified on a recent release? I can't believe that hash\n>> performs so bad over all these points. Theory tells me otherwise and\n>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n\nMy understanding is that a huge amount of work has gone into making\nbtree what it is in\nPG, and not nearly as much work has gone into making hash indexes what\nthey could be.\n\n\n> Hash indexes have been improved since 2005 - their performance was\n> improved quite a bit in 9.0. Here's a more recent analysis:\n>\n> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n\nThey are 3 time faster to build. But if you rip the WAL logging out\nof btree, how much faster would those get?\n\nAlso, that link doesn't address concurrency of selects at all, only of inserts.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 17 Sep 2011 14:48:29 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Wed, Sep 14, 2011 at 4:03 PM, Stefan Keller <[email protected]> wrote:\n> 2011/9/14 Tom Lane <[email protected]> writes:\n>> (...) I think that\n>> the current state of affairs is still what depesz said, namely that\n>> there might be cases where they'd be a win to use, except the lack of\n>> WAL support is a killer. I imagine somebody will step up and do that\n>> eventually.\n\n\nI think that adding WAL to hash indexes without first\naddressing the heavy-weight locking issue would be a mistake.\nEven if the WAL was fixed, the bad performance under\nconcurrent selects would still make it at best a narrow\nniche thing. And fixing the locking *after* WAL is in place would\nprobably be very much harder than the other order.\n\n> How much of work (in man days) do you estimate would this mean for\n> someone who can program but has to learn PG internals first?\n\nAre these 8 hour days? :)\n\nI think it could be several months at least and a high likelihood of not\ngetting done at all. (depending on how good the person is, of course).\n\nThey would first have to become familiar with the WAL log and replay system.\nThis is quite hairy.\n\nAlso, I think that adding WAL to hash indexes would be even harder than for\nother indexes, because of bucket-splits, which can touch an arbitrarily high\nnumber of pages. At least, that is what lead me to give up on this last time\nI looked into it seriously.\n\nI think that if it were not for those bucket-splits, it would be\nrelatively easy\nto get rid of both the heavy-weight locks, and to add WAL logging. I had\nconsidered proposing making hash indexes have a fixed number of buckets\nspecified at creation time. That would be an unfortunate limitation, but I\nthink it would be a net win over non-WAL, non-highly-concurrent hash indexes\nthat currently exist. Especially if the number of buckets could be enlarged\nby concurrently making a new, larger, index and then dropping the old one.\nI've only thought about proposing it, because currently I don't have time\nto do anything on it if the proposal was well received.\n\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 17 Sep 2011 15:11:33 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
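The "enlarge by rebuilding" idea Jeff floats at the end maps onto the usual rebuild-and-swap pattern, at the price of double writes while both indexes exist. A sketch of the swap mechanics only: the fixed bucket count he proposes would need a new index option that does not exist today, the index names are invented, and the final DROP INDEX still takes a brief exclusive lock on the table:

    CREATE INDEX CONCURRENTLY foo_k_hash_new ON foo USING hash (k);
    DROP INDEX foo_k_hash_old;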
{
"msg_contents": "On Sat, Sep 17, 2011 at 4:48 PM, Jeff Janes <[email protected]> wrote:\n> On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan <[email protected]> wrote:\n>> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>>> Has this been verified on a recent release? I can't believe that hash\n>>> performs so bad over all these points. Theory tells me otherwise and\n>>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n>\n> My understanding is that a huge amount of work has gone into making\n> btree what it is in\n> PG, and not nearly as much work has gone into making hash indexes what\n> they could be.\n>\n>\n>> Hash indexes have been improved since 2005 - their performance was\n>> improved quite a bit in 9.0. Here's a more recent analysis:\n>>\n>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n>\n> They are 3 time faster to build. But if you rip the WAL logging out\n> of btree, how much faster would those get?\n>\n> Also, that link doesn't address concurrency of selects at all, only of inserts.\n\nOf course hash indexes are faster to build than varlen string indexes\n:-). I use natural keys 50-80% of the time and hash indexing would\nremove some of the pain in cases where I don't need ordering and range\noperations. In fact, if they are made to properly support wal logging\nand uniqueness, I imagine they should supplant btree in a broad range\nof cases, so much so that it would be awful nice to be able to have\nsyntax to choose hash for primary keys and unique constraints.\n\n@ Jeff:\n>I think that adding WAL to hash indexes without first\naddressing the heavy-weight locking issue would be a mistake.\nEven if the WAL was fixed, the bad performance under\nconcurrent selects would still make it at best a narrow\nniche thing. And fixing the locking *after* WAL is in place would\nprobably be very much harder than the other order.\n\nHere again, I think that any proposed improvement in the current hash\nindex code should be measured against wrapping a btree index. You\nget wal logging and high concurrency for free if you decide to do\nthat.\n\nmerlin\n",
"msg_date": "Sat, 17 Sep 2011 17:14:55 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Thu, Sep 15, 2011 at 9:20 PM, Merlin Moncure <[email protected]> wrote:\n>\n> odd: I was pondering Claudio's point about maintenance of hash indexes\n> vs btree and decided to do some more tests. Something very strange is\n> happening: I decided to compare 'update v set x=x+1', historically\n> one of postgres's weaker points, on the 10M table indexed hash vs\n> btree. The btree typically muddled through in about 5 minutes:\n>\n> postgres=# update v set x=x+1;\n> UPDATE 10000000\n> Time: 302341.466 ms\n>\n> recreating the table and hash index, I ran it again. 47 minutes into\n> the query, I started to get curious and noticed that cpu time disk\n> usage are hovering near zero but nothing is blocked. disk space on the\n> index is *slowly* increasing, now at:\n> 09/15/2011 11:08 PM 541,024,256 16531\n\nThe way you created the table, I think the rows are basically going to be\nin order in the table, which means the btree index accesses are going to\nvisit the same block over and over again before going to the next block.\n\nWith hash indexes, it will jump all over the place.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 17 Sep 2011 15:29:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Merlin and Jeff,\n\nGeneral remark again:It's hard for me to imagine that btree is\nsuperior for all the issues mentioned before. I still believe in hash\nindex for primary keys and certain unique constraints where you need\nequality search and don't need ordering or range search.\n\n2011/9/17 Jeff Janes <[email protected]>:\n(...)\n> Also, that link doesn't address concurrency of selects at all, only of inserts.\n\nHow would (or did) you test and benchmark concurrency of inserts and selects?\nUse pgbench with own config for a blackbox test?\n\n2011/9/18 Merlin Moncure <[email protected]>:\n> Here again, I think that any proposed improvement in the current hash\n> index code should be measured against wrapping a btree index. You\n> get wal logging and high concurrency for free if you decide to do\n> that.\n\nAs I understand, this would be an enhancement of btree. That's ok for\nbtree but not really exploiting all advantages of a separate hash\nindex, would'nt it?\n\nStefan\n\n2011/9/18 Merlin Moncure <[email protected]>:\n> On Sat, Sep 17, 2011 at 4:48 PM, Jeff Janes <[email protected]> wrote:\n>> On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan <[email protected]> wrote:\n>>> On 14 September 2011 00:04, Stefan Keller <[email protected]> wrote:\n>>>> Has this been verified on a recent release? I can't believe that hash\n>>>> performs so bad over all these points. Theory tells me otherwise and\n>>>> http://en.wikipedia.org/wiki/Hash_table seems to be a success.\n>>\n>> My understanding is that a huge amount of work has gone into making\n>> btree what it is in\n>> PG, and not nearly as much work has gone into making hash indexes what\n>> they could be.\n>>\n>>\n>>> Hash indexes have been improved since 2005 - their performance was\n>>> improved quite a bit in 9.0. Here's a more recent analysis:\n>>>\n>>> http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/\n>>\n>> They are 3 time faster to build. But if you rip the WAL logging out\n>> of btree, how much faster would those get?\n>>\n>> Also, that link doesn't address concurrency of selects at all, only of inserts.\n>\n> Of course hash indexes are faster to build than varlen string indexes\n> :-). I use natural keys 50-80% of the time and hash indexing would\n> remove some of the pain in cases where I don't need ordering and range\n> operations. In fact, if they are made to properly support wal logging\n> and uniqueness, I imagine they should supplant btree in a broad range\n> of cases, so much so that it would be awful nice to be able to have\n> syntax to choose hash for primary keys and unique constraints.\n>\n> @ Jeff:\n>>I think that adding WAL to hash indexes without first\n> addressing the heavy-weight locking issue would be a mistake.\n> Even if the WAL was fixed, the bad performance under\n> concurrent selects would still make it at best a narrow\n> niche thing. And fixing the locking *after* WAL is in place would\n> probably be very much harder than the other order.\n>\n> Here again, I think that any proposed improvement in the current hash\n> index code should be measured against wrapping a btree index. You\n> get wal logging and high concurrency for free if you decide to do\n> that.\n>\n> merlin\n>\n",
"msg_date": "Sun, 18 Sep 2011 16:59:10 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Sun, Sep 18, 2011 at 7:59 AM, Stefan Keller <[email protected]> wrote:\n> Merlin and Jeff,\n>\n> General remark again:It's hard for me to imagine that btree is\n> superior for all the issues mentioned before. I still believe in hash\n> index for primary keys and certain unique constraints where you need\n> equality search and don't need ordering or range search.\n\nI certainly agree that hash indexes as implemented in PG\ncould be improved on.\n\n>\n> 2011/9/17 Jeff Janes <[email protected]>:\n> (...)\n>> Also, that link doesn't address concurrency of selects at all, only of inserts.\n>\n> How would (or did) you test and benchmark concurrency of inserts and selects?\n> Use pgbench with own config for a blackbox test?\n\nI used pgbench -S -M prepared with a scale that fits in\nshared_buffers, at various concurrencies. drop the pgbench_accounts\nprimary key and build alternatingly a regular index and a hash index\nbetween runs. (If the scale doesn't fit in memory, that should\nadvantage the hash, but I haven't seen a large one--I've never tested\na size at which the branch blocks don't fit in memory)\n\nIt is hard to see real differences here because the index is not the\nmain bottleneck, regardless of which index is in use (at least on only\n8 CPUs, with enough CPUs you might be able to drive the hash index\nover the edge)\n\nI also used a custom pgbench option -P, (a patch adding which feature\nI was supposed to submit to this commitfest, but missed). Cuts down\non a lot of the network chatter, locking, and other overhead and so\nsimulates an index look up occurring on the inside of a nested loop.\n\nThe performance at -c 1 was roughly equal, but at -c 8 the hash was\nthree times slower.\n\nI don't recall concurrent testing inserts (not for speed, anyway).\n\nCheers,\n\nJeff\n",
"msg_date": "Sun, 18 Sep 2011 19:14:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
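For anyone wanting to repeat Jeff's measurement, the sequence is roughly the following (scale, run length, concurrency and database name are illustrative; his private -P patch is left out):

    pgbench -i -s 100 bench
    psql bench -c "ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;"
    psql bench -c "CREATE INDEX pgbench_accounts_aid_hash ON pgbench_accounts USING hash (aid);"
    pgbench -S -M prepared -c 8 -j 8 -T 60 bench

Swap the CREATE INDEX line for a plain btree (CREATE INDEX ... ON pgbench_accounts (aid)) between runs to compare the two.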
{
"msg_contents": "On Sun, Sep 18, 2011 at 9:31 PM, Stefan Keller <[email protected]> wrote:\n> I'm simply referring to literature (like the intro Ramakrishnan & Gehrke).\n> I just know that Oracle an Mysql actually do have them too and use it\n> without those current implementation specific restrictions in\n> Postgres.\n\nWhere exactly do you take that from that Oracle has hash indexes? I\ncan't seem to find them:\nhttp://download.oracle.com/docs/cd/E11882_01/server.112/e16508/indexiot.htm#sthref293\n\nAre you mixing this up with hash partitioning?\nhttp://download.oracle.com/docs/cd/E11882_01/server.112/e16508/schemaob.htm#sthref443\n\nOr am I missing something?\n\n> IMHO by design Hash Index (e.g. linear hashing) work best when:\n> 1. only equal (=) tests are used (on whole values)\n> 2. columns (key values) have very-high cardinality\n>\n> And ideally but not necessarily when index values do not change and\n> number of rows are known ahead of time (avoiding O(N) worst case - but\n> there are approaches to chaining with dynamic resizing).\n>\n> I just collected this to encourage ourselves that enhancing hash\n> indexes could be worthwhile.\n\nThere's still the locking issue Jeff mentioned. At least every time a\ntable resize occurs the whole index must be locked. Or is there a\nmore fine granular locking strategy which I am overlooking?\n\nKind regards\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 19 Sep 2011 13:13:37 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Robert Klemme, 19.09.2011 13:13:\n> On Sun, Sep 18, 2011 at 9:31 PM, Stefan Keller<[email protected]> wrote:\n>> I'm simply referring to literature (like the intro Ramakrishnan& Gehrke).\n>> I just know that Oracle an Mysql actually do have them too and use it\n>> without those current implementation specific restrictions in\n>> Postgres.\n>\n> Where exactly do you take that from that Oracle has hash indexes? I\n> can't seem to find them:\n> http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/indexiot.htm#sthref293\n>\n> Are you mixing this up with hash partitioning?\n> http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/schemaob.htm#sthref443\n>\n> Or am I missing something?\n\nMaybe he was referring to a hash cluster:\nhttp://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_5001.htm\n\nThis is a storage option where you can store related rows (e.g. in a parent/child relationship) in the same phyiscal database block based on a hash value. That enables the databse to read parent and child rows with just a single IO.\n\nIn the background Oracle probably has something like a hash index to support that.\n\nThomas\n\n",
"msg_date": "Mon, 19 Sep 2011 14:28:31 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
{
"msg_contents": "On Sun, Sep 18, 2011 at 9:59 AM, Stefan Keller <[email protected]> wrote:\n> Merlin and Jeff,\n>\n> General remark again:It's hard for me to imagine that btree is\n> superior for all the issues mentioned before. I still believe in hash\n> index for primary keys and certain unique constraints where you need\n> equality search and don't need ordering or range search.\n\nIt is -- but please understand I'm talking about int32 tree vs hash.\nHashing as a technique of course is absolutely going to cream btree\nfor all kinds of data because of the advantages of working with\ndecomposed data -- we are all taught that in comp-sci 101 :-). The\ndebate here is not about the advantages of hashing, but the specific\nimplementation of the hash index used.\n\nPostgres's hash index implementation used to be pretty horrible -- it\nstored the pre-hashed datum in the index which, while making it easier\nto do certain things, made it horribly slow, and, for all intents and\npurposes, useless. Somewhat recently,a lot of work was put in to fix\nthat -- the index now packs the hash code only which made it\ncompetitive with btree and superior for larger keys. However, certain\ntechnical limitations like lack of WAL logging and uniqueness hold\nhash indexing back from being used like it really should be. In cases\nwhere I really *do* need hash indexing, I do it in userland.\n\ncreate table foo\n(\n a_long_field text;\n);\ncreate index on foo(hash(a_long_field));\n\nselect * from foo where hash(a_long_field) = hash(some_value) and\na_long_field = some_value;\n\nThis technique works fine -- the main disadvantage is that enforcing\nuniqueness is a PITA but since the standard index doesn't support it\neither it's no great loss. I also have the option of getting\n'uniqueness' and being able to skip the equality operation if I\nsacrifice some performance and choose a strong digest. Until the hash\nindex issues are worked out, I submit that this remains the go-to\nmethod to do this.\n\nNow, my point here is that I've noticed that even with the latest\noptimizations btree seems to still be superior to the hash indexing by\nmost metrics, so that:\ncreate table foo\n(\n an_int_field int;\n a_long_field text;\n);\n\ncreate index on foo(an_int_field);\ncreate index on foo using hash(a_long_field);\n\nOn performance grounds alone, the btree index seems to be (from my\nvery limited testing) a better bet. So I'm conjecturing that the\ncurrent hash implementation should be replaced with a formalization of\nthe userland technique shown above -- when you request a hash index,\nthe database will silently hash your field and weave it into a btree.\nIt's a hybrid: a hashed btree. To truly demonstrate if the technique\nwas effective though, it would have to be coded up -- it's only fair\nto compare if the btree based hash is also double checking the value\nin the heap which the standard hash index must do.\n\nThe other way to go of course is to try and fix up the existing hash\nindex code -- add wal logging, etc. In theory, a customized hash\nstructure should be able to beat btree all day long which argues to\ncontinue in this direction.\n\n@ jeff:\n>The way you created the table, I think the rows are basically going to be\nin order in the table, which means the btree index accesses are going to\nvisit the same block over and over again before going to the next block.\n\nThis does not explain the behavior. Yeah -- it may take longer but\nyour computer should not be sitting idle during create index\noperations :-). 
Unfortunately, I was not able to reproduce it on\nlinux. I have to bite the bullet and get the mingw up if I want to\ntry and diagnose -- perhaps it is stalling in the semop calls.\n\nmerlin\n",
"msg_date": "Mon, 19 Sep 2011 09:04:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
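One point in Merlin's post worth making concrete is the "uniqueness via a strong digest" option: an expression index can be UNIQUE, so a digest-based variant enforces uniqueness on arbitrarily long text, at the (tiny, for a good digest) risk of rejecting a row on a false collision. A sketch following the naming in his example; md5() stands in for whatever digest is chosen:

    CREATE TABLE foo (a_long_field text);
    CREATE UNIQUE INDEX foo_digest_uidx ON foo (md5(a_long_field));

    -- lookups keep the same shape as before
    SELECT *
      FROM foo
     WHERE md5(a_long_field) = md5('some value')
       AND a_long_field = 'some value';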
{
"msg_contents": "On Mon, Sep 19, 2011 at 4:04 PM, Merlin Moncure <[email protected]> wrote:\n> On Sun, Sep 18, 2011 at 9:59 AM, Stefan Keller <[email protected]> wrote:\n>> Merlin and Jeff,\n>>\n>> General remark again:It's hard for me to imagine that btree is\n>> superior for all the issues mentioned before. I still believe in hash\n>> index for primary keys and certain unique constraints where you need\n>> equality search and don't need ordering or range search.\n>\n> It is -- but please understand I'm talking about int32 tree vs hash.\n> Hashing as a technique of course is absolutely going to cream btree\n> for all kinds of data because of the advantages of working with\n> decomposed data -- we are all taught that in comp-sci 101 :-). The\n> debate here is not about the advantages of hashing, but the specific\n> implementation of the hash index used.\n>\n> Postgres's hash index implementation used to be pretty horrible -- it\n> stored the pre-hashed datum in the index which, while making it easier\n> to do certain things, made it horribly slow, and, for all intents and\n> purposes, useless. Somewhat recently,a lot of work was put in to fix\n> that -- the index now packs the hash code only which made it\n> competitive with btree and superior for larger keys. However, certain\n> technical limitations like lack of WAL logging and uniqueness hold\n> hash indexing back from being used like it really should be. In cases\n> where I really *do* need hash indexing, I do it in userland.\n>\n> create table foo\n> (\n> a_long_field text;\n> );\n> create index on foo(hash(a_long_field));\n>\n> select * from foo where hash(a_long_field) = hash(some_value) and\n> a_long_field = some_value;\n>\n> This technique works fine -- the main disadvantage is that enforcing\n> uniqueness is a PITA but since the standard index doesn't support it\n> either it's no great loss. I also have the option of getting\n> 'uniqueness' and being able to skip the equality operation if I\n> sacrifice some performance and choose a strong digest. Until the hash\n> index issues are worked out, I submit that this remains the go-to\n> method to do this.\n\nIs this approach (storing the hash code in a btree) really faster than\na regular btree index on \"a_long_field\"? And if so, for which kind of\ndata and load?\n\n> Now, my point here is that I've noticed that even with the latest\n> optimizations btree seems to still be superior to the hash indexing by\n> most metrics, so that:\n> create table foo\n> (\n> an_int_field int;\n> a_long_field text;\n> );\n>\n> create index on foo(an_int_field);\n> create index on foo using hash(a_long_field);\n>\n> On performance grounds alone, the btree index seems to be (from my\n> very limited testing) a better bet. So I'm conjecturing that the\n> current hash implementation should be replaced with a formalization of\n> the userland technique shown above -- when you request a hash index,\n> the database will silently hash your field and weave it into a btree.\n> It's a hybrid: a hashed btree.\n\nI'd rather call it a \"btreefied hash\" because you are storing a hash\nbut in a btree structure. :-) But that's a detail. What I find\nworrying is that then there is a certain level of obscuring the real\nnature since \"create index ... 
using hash\" is not exactly creating a\nhash table.\n\n> To truly demonstrate if the technique\n> was effective though, it would have to be coded up -- it's only fair\n> to compare if the btree based hash is also double checking the value\n> in the heap which the standard hash index must do.\n\nRight.\n\n> The other way to go of course is to try and fix up the existing hash\n> index code -- add wal logging, etc. In theory, a customized hash\n> structure should be able to beat btree all day long which argues to\n> continue in this direction.\n\nI still haven't seen a solution to locking when a hash table needs\nresizing. All hashing algorithms I can think of at the moment would\nrequire a lock on the whole beast during the resize which makes this\ntype of index impractical for certain loads (heavy updating).\n\nOne solution would be to apply partitioning to the hash table itself\n(e.g. have four partitions for the two least significant bits or 16\nfor the four lest significant bits) and treat them independently. How\nthat would interact with WAL I have no idea though.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 19 Sep 2011 17:19:14 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "19.09.11 18:19, Robert Klemme написав(ла):\n> On Mon, Sep 19, 2011 at 4:04 PM, Merlin Moncure<[email protected]> wrote:\n>>\n>> Postgres's hash index implementation used to be pretty horrible -- it\n>> stored the pre-hashed datum in the index which, while making it easier\n>> to do certain things, made it horribly slow, and, for all intents and\n>> purposes, useless. Somewhat recently,a lot of work was put in to fix\n>> that -- the index now packs the hash code only which made it\n>> competitive with btree and superior for larger keys. However, certain\n>> technical limitations like lack of WAL logging and uniqueness hold\n>> hash indexing back from being used like it really should be. In cases\n>> where I really *do* need hash indexing, I do it in userland.\n>>\n>> create table foo\n>> (\n>> a_long_field text;\n>> );\n>> create index on foo(hash(a_long_field));\n>>\n>> select * from foo where hash(a_long_field) = hash(some_value) and\n>> a_long_field = some_value;\n>>\n>> This technique works fine -- the main disadvantage is that enforcing\n>> uniqueness is a PITA but since the standard index doesn't support it\n>> either it's no great loss. I also have the option of getting\n>> 'uniqueness' and being able to skip the equality operation if I\n>> sacrifice some performance and choose a strong digest. Until the hash\n>> index issues are worked out, I submit that this remains the go-to\n>> method to do this.\n> Is this approach (storing the hash code in a btree) really faster than\n> a regular btree index on \"a_long_field\"? And if so, for which kind of\n> data and load?\n\nActually sometimes the field in [potentially] so long, you can't use \nregular b-tree because it won't fit in the page. Say, it is \"text\" type. \nIf you will create regular index, you will actually limit column value \nsize to few KB. I am using md5(text) indexes in this case coupled with \nrather ugly queries (see above). Native support would be nice.\n\nBest regards, Vitalii Tymchyshyn.\n",
"msg_date": "Mon, 19 Sep 2011 18:28:57 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Robert Klemme <[email protected]> writes:\n> I still haven't seen a solution to locking when a hash table needs\n> resizing. All hashing algorithms I can think of at the moment would\n> require a lock on the whole beast during the resize which makes this\n> type of index impractical for certain loads (heavy updating).\n\nThat seems rather drastically overstated. The existing hash index code\nonly needs to hold an index-scope lock for a short interval while it\nupdates the bucket mapping information after a bucket split. All other\nlocks are per-bucket or per-page. The conflicting share-lockers of the\nindex-wide lock also only need to hold it for a short time, not for\ntheir whole indexscans. So that doesn't seem to me to be materially\nworse than the locking situation for a btree, where we also sometimes\nneed exclusive lock on the btree root page, thus blocking incoming\nindexscans for a short time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Sep 2011 11:35:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
},
{
"msg_contents": "19.09.11 18:19, Robert Klemme написав(ла):\n>\n> I still haven't seen a solution to locking when a hash table needs\n> resizing. All hashing algorithms I can think of at the moment would\n> require a lock on the whole beast during the resize which makes this\n> type of index impractical for certain loads (heavy updating).\nSorry for the second reply, I should have not start writing until I've \nread all your post. Anyway.\nDo you need read lock? I'd say readers could use \"old\" copy of hash \ntable up until the moment new bigger copy is ready. This will simply \nlook like the update is not started yet, which AFAIK is OK for MVCC.\nYep, all the writers will wait.\n\nAnother option could be to start background build of larger hash - for \nsome time your performance will be degraded since you are writing to two \nindexes instead of one plus second one is rebuilding, but I'd say low \nlatency solution is possible here.\n\nOne more: I don't see actually why can't you have a \"rolling\" expand of \nhash table. I will try to describe it correct me if I am wrong:\n1) The algorithm I am talking about will take \"n\" bits from hash code to \nfor hash table. So, during expansion it will double number of baskets.\n2) Say, we are going from 2^n = n1 to 2^(n+1) = n2 = n1 * 2 baskets. \nEach new pair of baskets will take data from single source basket \ndepending on the value of new hash bit used. E.g. if n were 2, we've had \n4 baskets and new table will have 8 baskets. Everything from old basket \n#1 will go into new baskets #2 and #3 depending on hash value.\n3) So, we can have a counter on number of baskets processed. Any \noperation on any lower numbered basket will go to \"new set\". Any \noperation on any higher numbered basket will go to \"old set\". Any \noperation on currently converting basket will block until conversion is \ndone.\n\nP.S. Sorry for a lot of possibly dumb thoughts, I don't know why I've \ngot such a though stream on this topic :)\n\nBest regards, Vitalii Tymchyshyn.\n",
"msg_date": "Mon, 19 Sep 2011 18:54:24 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 12:54 PM, Vitalii Tymchyshyn <[email protected]> wrote:\n> 19.09.11 18:19, Robert Klemme написав(ла):\n>>\n>> I still haven't seen a solution to locking when a hash table needs\n>> resizing. All hashing algorithms I can think of at the moment would\n>> require a lock on the whole beast during the resize which makes this\n>> type of index impractical for certain loads (heavy updating).\n>\n> Sorry for the second reply, I should have not start writing until I've read\n> all your post. Anyway.\n> Do you need read lock? I'd say readers could use \"old\" copy of hash table up\n> until the moment new bigger copy is ready. This will simply look like the\n> update is not started yet, which AFAIK is OK for MVCC.\n> Yep, all the writers will wait.\n\nAll this would get solved if there's no automatic hash index resizing.\n\nDBAs would have to recreate (possibly concurrently) the hash to make it bigger.\n\nStill, hash has lots of issues. I'm not sure how the hash is\nimplemented in PG, but usually, for low collision rates pseudorandom\nwalks are used to traverse collision chains. But pseudorandom\ncollision chains mean random I/O which is awful for a DB. Those\ntechniques have not been designed to work with secondary memory.\n\nSo, they would have to be adapted to working with secondary memory,\nand that means a lot of R&D. It's not impossible, it's just a lot of\nwork.\n\nI subscribe to the idea that, *in the meanwhile*, without scrapping\nthe hash index and in parallel to improving it, an option for\ntransparently-hashed btrees would be valuable.\n",
"msg_date": "Mon, 19 Sep 2011 13:14:38 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
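A sketch of the manual "DBA resizes the hash" idea Claudio floats, assuming an existing hash index foo_b_idx on foo(b): build a larger replacement without blocking writers, then swap the names.

create index concurrently foo_b_idx_new on foo using hash (b);
drop index foo_b_idx;
alter index foo_b_idx_new rename to foo_b_idx;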
{
"msg_contents": "On Mon, Sep 19, 2011 at 8:19 AM, Robert Klemme\n<[email protected]> wrote:\n> On Mon, Sep 19, 2011 at 4:04 PM, Merlin Moncure <[email protected]> wrote:\n>\n>> The other way to go of course is to try and fix up the existing hash\n>> index code -- add wal logging, etc. In theory, a customized hash\n>> structure should be able to beat btree all day long which argues to\n>> continue in this direction.\n>\n> I still haven't seen a solution to locking when a hash table needs\n> resizing. All hashing algorithms I can think of at the moment would\n> require a lock on the whole beast during the resize which makes this\n> type of index impractical for certain loads (heavy updating).\n\nThe current implementation doesn't EX lock the whole beast during\nresizing, except a brief one at the beginning (and maybe at the end?)\nof the split. It does EX lock the entire bucket being split for the\nduration of the split, though. The main problem that I see is that\ndue to the potential for deadlocks, the locks involved have to be\nheavy-weight. Which means the shared locks which non-splitting\nprocesses have to use to block against those EX locks have to be\nheavy-weight too, and getting even shared heavy-weight locks means\nexclusive light-weight locks, which means contention.\n\nOne way out would be to have a special process (probably vacuum) do\nall the resizing/splitting, rather than having regular backends doing\nit. It should be possible to make this dead-lock free.\n\nAnother would be to make hash indexes only support bit-map scans.\nThen the scanning hash-code would never have a need to pass control\nback to the executor while still holding a bucket lock.\n\nAnother would be to make the current position of the hash index scan\nbe \"refindable\" if a split should occur during the scan, so that you\ndon't need to inhibit splits during a scan of the same bucket. This\nwould probably be easy if there were no overflow pages. But the\noverflow pages get shuffled in with each other and with the main\nbucket page during a split. It would take quite some gymnastics to\nget around that.\n\n\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 19 Sep 2011 09:15:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 10:19 AM, Robert Klemme\n<[email protected]> wrote:\n> On Mon, Sep 19, 2011 at 4:04 PM, Merlin Moncure <[email protected]> wrote:\n>> On Sun, Sep 18, 2011 at 9:59 AM, Stefan Keller <[email protected]> wrote:\n>>> Merlin and Jeff,\n>>>\n>>> General remark again:It's hard for me to imagine that btree is\n>>> superior for all the issues mentioned before. I still believe in hash\n>>> index for primary keys and certain unique constraints where you need\n>>> equality search and don't need ordering or range search.\n>>\n>> It is -- but please understand I'm talking about int32 tree vs hash.\n>> Hashing as a technique of course is absolutely going to cream btree\n>> for all kinds of data because of the advantages of working with\n>> decomposed data -- we are all taught that in comp-sci 101 :-). The\n>> debate here is not about the advantages of hashing, but the specific\n>> implementation of the hash index used.\n>>\n>> Postgres's hash index implementation used to be pretty horrible -- it\n>> stored the pre-hashed datum in the index which, while making it easier\n>> to do certain things, made it horribly slow, and, for all intents and\n>> purposes, useless. Somewhat recently,a lot of work was put in to fix\n>> that -- the index now packs the hash code only which made it\n>> competitive with btree and superior for larger keys. However, certain\n>> technical limitations like lack of WAL logging and uniqueness hold\n>> hash indexing back from being used like it really should be. In cases\n>> where I really *do* need hash indexing, I do it in userland.\n>>\n>> create table foo\n>> (\n>> a_long_field text;\n>> );\n>> create index on foo(hash(a_long_field));\n>>\n>> select * from foo where hash(a_long_field) = hash(some_value) and\n>> a_long_field = some_value;\n>>\n>> This technique works fine -- the main disadvantage is that enforcing\n>> uniqueness is a PITA but since the standard index doesn't support it\n>> either it's no great loss. I also have the option of getting\n>> 'uniqueness' and being able to skip the equality operation if I\n>> sacrifice some performance and choose a strong digest. Until the hash\n>> index issues are worked out, I submit that this remains the go-to\n>> method to do this.\n>\n> Is this approach (storing the hash code in a btree) really faster than\n> a regular btree index on \"a_long_field\"? And if so, for which kind of\n> data and load?\n>\n>> Now, my point here is that I've noticed that even with the latest\n>> optimizations btree seems to still be superior to the hash indexing by\n>> most metrics, so that:\n>> create table foo\n>> (\n>> an_int_field int;\n>> a_long_field text;\n>> );\n>>\n>> create index on foo(an_int_field);\n>> create index on foo using hash(a_long_field);\n>>\n>> On performance grounds alone, the btree index seems to be (from my\n>> very limited testing) a better bet. So I'm conjecturing that the\n>> current hash implementation should be replaced with a formalization of\n>> the userland technique shown above -- when you request a hash index,\n>> the database will silently hash your field and weave it into a btree.\n>> It's a hybrid: a hashed btree.\n>\n> I'd rather call it a \"btreefied hash\" because you are storing a hash\n> but in a btree structure. :-) But that's a detail. What I find\n> worrying is that then there is a certain level of obscuring the real\n> nature since \"create index ... 
using hash\" is not exactly creating a\n> hash table.\n>\n>> To truly demonstrate if the technique\n>> was effective though, it would have to be coded up -- it's only fair\n>> to compare if the btree based hash is also double checking the value\n>> in the heap which the standard hash index must do.\n>\n> Right.\n\nso, i was curious, and decided to do some more performance testing. I\ncreated a table like this:\n\ncreate table foo as select r, r::text || 'acbdefghijklmnop' as b from\ngenerate_series(1,10000000) r;\ncreate index on foo(r);\ncreate index on foo using hash(b);\n\nto simulate the btree/hash hybrid, I cut a pgbench file like so (btree.sql):\n\\setrandom x 1 100000\nselect * from foo where r = :x and b=:x::text || 'acbdefghijklmnop'\n\nand this: for the standard hash (hash.sql):\n\\setrandom x 1 100000\nselect * from foo where b=:x::text || 'acbdefghijklmnop'\n\npgbench -n -c2 -T 10 -f hash.sql etc\n\nOn my test machine, hybrid hash eeks out a slight win on in-cache tests:\nmerlin@mmoncure-ubuntu:~$ pgbench -n -c2 -T 100 -f btree.sql -p 5490\ntps = 3250.793656 (excluding connections establishing)\n\nvs\nmerlin@mmoncure-ubuntu:~$ pgbench -n -c2 -T 100 -f hash.sql -p 5490\ntps = 3081.730400 (excluding connections establishing)\n\n\nTo make the test into i/o bound, I change the setrandom from 100000 to\n10000000; this produced some unexpected results. The hash index is\npulling about double the tps (~80 vs ~ 40) over the hybrid version.\nWell, unless my methodology is wrong, it's unfair to claim btree is\nbeating hash in 'all cases'. hm.\n\nmerlin\n",
"msg_date": "Mon, 19 Sep 2011 13:43:06 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 3:43 PM, Merlin Moncure <[email protected]> wrote:\n> To make the test into i/o bound, I change the setrandom from 100000 to\n> 10000000; this produced some unexpected results. The hash index is\n> pulling about double the tps (~80 vs ~ 40) over the hybrid version.\n> Well, unless my methodology is wrong, it's unfair to claim btree is\n> beating hash in 'all cases'. hm.\n\nIs this only selects?\nHash performs badly with updates, IIRC.\nI haven't tried in a long while, though.\n",
"msg_date": "Mon, 19 Sep 2011 15:53:07 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 1:53 PM, Claudio Freire <[email protected]> wrote:\n> On Mon, Sep 19, 2011 at 3:43 PM, Merlin Moncure <[email protected]> wrote:\n>> To make the test into i/o bound, I change the setrandom from 100000 to\n>> 10000000; this produced some unexpected results. The hash index is\n>> pulling about double the tps (~80 vs ~ 40) over the hybrid version.\n>> Well, unless my methodology is wrong, it's unfair to claim btree is\n>> beating hash in 'all cases'. hm.\n>\n> Is this only selects?\n> Hash performs badly with updates, IIRC.\n> I haven't tried in a long while, though.\n\njust selects. update test is also very interesting -- the only test I\ndid for for updates is 'update foo set x=x+1' which was a win for\nbtree (20-30% faster typically). perhaps this isn't algorithm induced\nthough -- lack of wal logging could actually hurt time to commit\nbecause it deserializes i/o.\n\nmerlin\n",
"msg_date": "Tue, 20 Sep 2011 11:11:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> just selects. update test is also very interesting -- the only test I\n> did for for updates is 'update foo set x=x+1' which was a win for\n> btree (20-30% faster typically). perhaps this isn't algorithm induced\n> though -- lack of wal logging could actually hurt time to commit\n> because it deserializes i/o.\n\nIn 9.1+, you could remove WAL from the comparison by doing the tests on\nan unlogged table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Sep 2011 12:25:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005: revive or\n\tbury it?"
}
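A sketch of Tom's suggestion applied to Merlin's test setup above (9.1+ only, assuming the same generated data):

create unlogged table foo_unlogged as
  select r, r::text || 'acbdefghijklmnop' as b
    from generate_series(1, 10000000) r;
create index on foo_unlogged (r);
create index on foo_unlogged using hash (b);
-- rerun the same pgbench scripts against foo_unlogged; WAL is out of the
-- picture for both index types, so the update comparison isolates the
-- index code paths.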
] |
[
{
"msg_contents": "Craig Ringer wrote:\n \nI agreed with almost your entire post, but there is one sentence\nwith which I take issue.\n \n> However, it will also increase latency for service for those\n> workers because they may have to wait a while before their\n> transaction runs, even though their transaction will complete much\n> faster.\n \nMy benchmarks have shown that latency also improves. See these\nposts for my reasoning on why that is:\n \nhttp://archives.postgresql.org/pgsql-performance/2009-03/msg00138.php\nhttp://archives.postgresql.org/pgsql-performance/2010-01/msg00107.php\n \nSo even though there is greater latency from the attempt to *start*\nthe transaction until it is underway, the total latency from the\nattempt to start the transaction until *completion* is less on\naverage, in spite of the time in queue. Perhaps that's what you\nwere getting at, but it sounded to me like you're saying you\nsacrifice latency to achieve the throughput, and that isn't what\nI've seen.\n \n-Kevin\n\n\n\n\n",
"msg_date": "Tue, 13 Sep 2011 21:22:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migrated from 8.3 to 9.0 - need to update config\n\t (re-post)"
}
] |
[
{
"msg_contents": "dear all,\n\ni have a table with (approx) 500.000.000 rows. it has several indexes, one\nof which is a multicolumn index\nfor a column that has an id (integer) and a column that has a timestamp. i\nhave read in the manual that the multicolumn index can be used only if the\nclauses of the query are in the same order as the columns of the index. so i\nam trying the following simple query ->\n\nserver=# explain select count(*) from temp_by_hour where xid > 100 and xdate\n> now() - interval '1 week';\n* QUERY PLAN \n------------------------------------------------------------------------------------------------\n Aggregate (cost=29356607.86..29356607.87 rows=1 width=0)\n -> Seq Scan on temp_by_hour i (cost=0.00..29342531.72 rows=5630456\nwidth=0)\n Filter: ((xid > 100) AND (xdate > (now() - '7 days'::interval)))*\n\nand the index is this ->\n*\"temp_by_hour_idx\" btree (xid, xdate)*\n\nthe size of the index is this -> \n* public | temp_by_hour_idx | index | mydb| temp_by_hour\n| 35 GB *\n\nand the size of the table is this ->\n* public |temp_by_hour | table | mydb| 115 GB\n*\n\n\nany ideas on how i should write to query to use this index? thx in advance\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/cannot-use-multicolumn-index-tp4802634p4802634.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 14 Sep 2011 05:50:07 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "cannot use multicolumn index"
},
{
"msg_contents": "here is the explain analyze output->\nserver=# explain analyze select count(*) from temp_by_hour where xid > 100\nand xdate > now() - interval '1 week';\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=29359311.58..29359311.59 rows=1 width=0) (actual\ntime=2728061.589..2728061.590 rows=1 loops=1)\n -> Seq Scan on temp_by_hour (cost=0.00..29345234.14 rows=5630975\nwidth=0) (actual time=560446.661..2726838.501 rows=5760724 loops=1)\n Filter: ((xid > 100) AND (xdate > (now() - '7 days'::interval)))\n Total runtime: 2728063.170 ms\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/cannot-use-multicolumn-index-tp4802634p4802699.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 14 Sep 2011 06:09:13 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n",
"msg_date": "Wed, 14 Sep 2011 14:18:09 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "-postgres version -> 8.4.4\n-os -> redhat 5.6\n-specs ->24 cores, 96GB ram, shared_buffers=32 GB\n-postgresql.conf -> i havent made any changes as far as the query tuning\nparameters are concerned.\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 50GB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 100 # range 1-10000\n#constraint_exclusion = partition # on, off, or partition\n#cursor_tuple_fraction = 0.1 # range 0.0-1.0\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit \n # JOIN clauses\n--------------------------------------------------------------------------------------------------------------\n\n\nif any other parameters are relative to my question pls tell me which you\nwant and i can post them (i can post the whole postgresql.conf if it's\nhelpful). my shared buffers ar\n\nmy question apart from the specific example, is a little more general. so,\nis it normal to expect such an index to be used? can i write the query in\nanother form so as to use this index? is it for example that the conditions\nare '>' and not '=' a factor why the index is not used?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/cannot-use-multicolumn-index-tp4802634p4802871.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 14 Sep 2011 07:02:16 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "On 14 Září 2011, 15:09, MirrorX wrote:\n> here is the explain analyze output->\n> server=# explain analyze select count(*) from temp_by_hour where xid > 100\n> and xdate > now() - interval '1 week';\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=29359311.58..29359311.59 rows=1 width=0) (actual\n> time=2728061.589..2728061.590 rows=1 loops=1)\n> -> Seq Scan on temp_by_hour (cost=0.00..29345234.14 rows=5630975\n> width=0) (actual time=560446.661..2726838.501 rows=5760724 loops=1)\n> Filter: ((xid > 100) AND (xdate > (now() - '7 days'::interval)))\n> Total runtime: 2728063.170 ms\n\nSorry, but with this amount of information, no one can actually help.\n\n- What is the problem, i.e. what behaviour you expect?\n- How much data is the table?\n- What portion of it matches the conditions?\n- What is the index definition?\n\nMy bet is the conditions are not selective enough and the index scan would\nbe less effective than reading the whole table. Try to disable seqscan or\nmodify the cost variables so that the index scan is used and see if it's\nfaster or not.\n\nTomas\n\n",
"msg_date": "Wed, 14 Sep 2011 16:14:55 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
},
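A quick way to try Tomas's suggestion in a session, reusing the query from the original post (the setting is reset afterwards):

set enable_seqscan = off;
explain analyze
  select count(*) from temp_by_hour
   where xid > 100 and xdate > now() - interval '1 week';
reset enable_seqscan;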
{
"msg_contents": "thx for the answer.\n\n- What is the problem, i.e. what behaviour you expect?\n- How much data is the table?\n- What portion of it matches the conditions?\n- What is the index definition? \n\ni think in my first post i provided most of these details but ->\n1) what i expect is to be able to understand why the index is not used and\nif possibly to use it somehow, or recreate it in a better way\n2) the table has 115 GB and about 700 milion rows\n3) the result should be less than 10 millions rows\n4) the index is a btree \n\ni tried to disable seq_scan and the query plan was changed and used another\nindex and not the one i wanted. \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/cannot-use-multicolumn-index-tp4802634p4803198.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 14 Sep 2011 08:14:00 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "14.09.11 18:14, MirrorX написав(ла):\n> i think in my first post i provided most of these details but ->\n> 1) what i expect is to be able to understand why the index is not used and\n> if possibly to use it somehow, or recreate it in a better way\n> 2) the table has 115 GB and about 700 milion rows\n> 3) the result should be less than 10 millions rows\n> 4) the index is a btree\n>\n> i tried to disable seq_scan and the query plan was changed and used another\n> index and not the one i wanted.\nYou has \">\" check on both columns, this means that it has to scan each \nsubtree that satisfy one criteria to check against the other. Here index \ncolumn order is significant. E.g. if you have a lot of xid > 100 and xid \nis first index column, it must check all (a lot) the index subtrees for \nxid>100.\nMulticolumn indexes work best when first columns are checked with \"=\" \nand only last column with range criteria.\nYou may still try to change order of columns in your index if this will \ngive best selectivity on first column.\nAnother option is multiple single column indexes - postgres may merge \nsuch an indexes at runtime (don't remember since which version this \nfeature is available).\n\nBest regards, Vitalii Tymchyshyn.\n\n",
"msg_date": "Wed, 14 Sep 2011 18:28:38 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
},
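A sketch of the two alternatives Vitalii outlines, using the table and column names from this thread (the index names are made up):

-- option 1: reorder the multicolumn index so the more selective column
-- (here assumed to be xdate) comes first
create index temp_by_hour_xdate_xid_idx on temp_by_hour (xdate, xid);

-- option 2: independent single-column indexes; the planner can combine
-- them at runtime with a BitmapAnd
create index temp_by_hour_xdate_idx on temp_by_hour (xdate);
create index temp_by_hour_xid_idx on temp_by_hour (xid);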
{
"msg_contents": "On 14 Září 2011, 17:14, MirrorX wrote:\n> thx for the answer.\n>\n> - What is the problem, i.e. what behaviour you expect?\n> - How much data is the table?\n> - What portion of it matches the conditions?\n> - What is the index definition?\n>\n> i think in my first post i provided most of these details but ->\n\nHmmm, I haven't received that post and I don't see that in the archives:\n\nhttp://archives.postgresql.org/pgsql-performance/2011-09/msg00210.php\n\nIt's displayed on nabble.com, but it's marked as 'not yet accepted'.\nThat's strange.\n\nAnyway there's still a lot of missing info - what version of PostgreSQL is\nthis? What is the table structure, what indexes are there?\n\n> 1) what i expect is to be able to understand why the index is not used and\n> if possibly to use it somehow, or recreate it in a better way\n> 2) the table has 115 GB and about 700 milion rows\n\nReally? Because the explain analyze output you posted states there are\njust 5.760.724 rows, not 700.000.000.\n\n> 3) the result should be less than 10 millions rows\n\nThat's about 1.5% of the rows, but it may be much larger portion of the\ntable. The table is stored by blocks - whenever you need to read a row,\nyou need to read the whole block.\n\n115GB is about 15.073.280 blocks (8kB). If each row happens to be stored\nin a different block, you'll have to read about 66% of blocks (although\nyou need just 1.4% of rows).\n\nSure, in reality the assumption 'a different block for each row' is not\ntrue, but with a table this large the block probably won't stay in the\ncache (and thus will be read repeatedly from the device).\n\nAnd that's just the table - you have to read the index too (which is 35GB\nin this case).\n\nSo it's not just about the 'row selectivity', it's about 'block\nselectivity' too.\n\nIn short - my guess is the seq scan will be more efficient in this case,\nbut it's hard to prove without the necessary info.\n\n> 4) the index is a btree\n\nGreat, but what are the columns? What data types are used?\n\nBTW I've noticed you stated this in the first post \"i have read in the\nmanual that the multicolumn index can be used only if the clauses of the\nquery are in the same order as the columns of the index\".\n\nThat's not true since 8.1, so unless you're using a very old version of\nPostgreSQL (8.0 or older), you may use whatever columns you want although\nit's not as efficient.\n\nDo you need both columns (xid, xdate) in the WHERE condition, or have you\nused one of them just to fulfill the 'leftmost columns' rule by adding a\ncondition that matches everything? If that's the case, it's hardly going\nto improve the effectivity.\n\nI see two possible solutions:\n\n1) partition the table and use constraint_exclusion so that just a small\nportion of the table is scanned - there are pros/cons of this solution\n\n2) cluster the table by one of the columns, so that an index scan may be\nmore effective (but this might hurt other queries and you'll have to do\nthat repeatedly)\n\nTomas\n\n",
"msg_date": "Wed, 14 Sep 2011 18:05:23 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
},
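A rough sketch of the two options for this thread's table; the child table name and date bounds are made up, and on 8.4 partitioning is inheritance-based:

-- option 1: partition by xdate so only recent children are scanned
create table temp_by_hour_2011_09 (
  check (xdate >= date '2011-09-01' and xdate < date '2011-10-01')
) inherits (temp_by_hour);
create index temp_by_hour_2011_09_xid_xdate_idx
  on temp_by_hour_2011_09 (xid, xdate);
-- with constraint_exclusion = partition (the default), queries carrying
-- literal xdate bounds only scan the children whose CHECK constraint can match

-- option 2: cluster the table on the index to make index scans cheaper
cluster temp_by_hour using temp_by_hour_idx;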
{
"msg_contents": "thank you all for your advice. i will try the table partitioning approach to\nreduce the size of the tables and to be able to handle them more efficiently\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/cannot-use-multicolumn-index-tp4802634p4806239.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 15 Sep 2011 02:42:58 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "On Wed, Sep 14, 2011 at 6:50 AM, MirrorX <[email protected]> wrote:\n> dear all,\n>\n> i have a table with (approx) 500.000.000 rows. it has several indexes, one\n> of which is a multicolumn index\n> for a column that has an id (integer) and a column that has a timestamp. i\n> have read in the manual that the multicolumn index can be used only if the\n> clauses of the query are in the same order as the columns of the index. so i\n> am trying the following simple query ->\n\nthis is incorrect. Where did you read this? The order in the where\nclause doesn't matter. Older versions of pg cannot use a muilticolumn\nindex unless you use the first column in the where clause / group by,\nbut newer versions can use that index, but since it's much less\nefficient that way they will usually pick another index with the other\ncolumn in it first.\n",
"msg_date": "Sun, 18 Sep 2011 20:37:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
},
{
"msg_contents": "On Wed, Sep 14, 2011 at 6:50 AM, MirrorX <[email protected]> wrote:\n> any ideas on how i should write to query to use this index? thx in advance\n\nYou can do something like:\n\nset enable_seqscan=off;\nexplain select yourqueryhere;\n\nand see if the plan it comes up with is any better. Use explain\nanalyze to see how long it really takes. Basically if the index isn't\nselective enough a seq scan will be a win, especially in pgsql where\nit has to hit the table anyway, whether it uses the index or not.\n",
"msg_date": "Sun, 18 Sep 2011 20:39:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot use multicolumn index"
}
] |
[
{
"msg_contents": "Hi,\n\nI did this:\n\nCREATE VIEW unionview AS\n SELECT col, otherstuff FROM (heavy subquery)\n WHERE col BETWEEN 1 AND 3\n UNION ALL\n SELECT col, otherstuff FROM (another heavy subquery)\n WHERE col BETWEEN 4 AND 6;\n\nhoping that the planner could use the WHERE conditions (like it would use check constraints on tables) to exclude one of the subqueries, for a query like:\n\nSELECT * FROM unionview WHERE col=2;\n\nBut it doesn't. (In PostgreSQL 8.4.5, at least.)\n\nIs there a way (currently) to get the planner to use these conditions to exclude subqueries in the UNION ALL? Or is this a case of “sounds nice, but too rare to merit implementing”?\n\nThanks,\n\n- Gulli\n",
"msg_date": "Wed, 14 Sep 2011 10:53:22 -0700 (PDT)",
"msg_from": "=?ISO-8859-1?Q?Gunnlaugur_=DE=F3r_Briem?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Constraint exclusion on UNION ALL subqueries with WHERE conditions"
},
{
"msg_contents": "=?ISO-8859-1?Q?Gunnlaugur_=DE=F3r_Briem?= <[email protected]> writes:\n> I did this:\n\n> CREATE VIEW unionview AS\n> SELECT col, otherstuff FROM (heavy subquery)\n> WHERE col BETWEEN 1 AND 3\n> UNION ALL\n> SELECT col, otherstuff FROM (another heavy subquery)\n> WHERE col BETWEEN 4 AND 6;\n\n> hoping that the planner could use the WHERE conditions (like it would use check constraints on tables) to exclude one of the subqueries, for a query like:\n\n> SELECT * FROM unionview WHERE col=2;\n\n> But it doesn't. (In PostgreSQL 8.4.5, at least.)\n\nWorks for me in 8.4.8. Do you have constraint_exclusion set to ON?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Sep 2011 23:59:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constraint exclusion on UNION ALL subqueries with WHERE\n\tconditions"
},
{
"msg_contents": "On Monday, September 19, 2011 3:59:30 AM UTC, Tom Lane wrote:\n> Works for me in 8.4.8. Do you have constraint_exclusion set to ON?\n\nI did try with constraint_exclusion set to on, though the docs suggest partition should be enough (\"examine constraints only for ... UNION ALL subqueries\")\n\nHere's a minimal test case (which I should have supplied in the original post, sorry), tried just now in 8.4.8:\n\nCREATE OR REPLACE VIEW v_heavy_view\nAS SELECT (random()*1e5)::integer col\nFROM generate_series(1, 1e6::integer);\n\nCREATE OR REPLACE VIEW v_test_constraint_exclusion AS\nSELECT col FROM v_heavy_view WHERE col < 3\nUNION ALL SELECT col FROM v_heavy_view WHERE col >= 3;\n\nEXPLAIN SELECT * FROM v_test_constraint_exclusion WHERE col=2;\n\n QUERY PLAN \n--------------------------------------------------------------------------\n Result (cost=0.00..70.04 rows=4 width=4)\n -> Append (cost=0.00..70.04 rows=4 width=4)\n -> Subquery Scan v_heavy_view (cost=0.00..35.00 rows=2 width=4)\n Filter: ((v_heavy_view.col < 3) AND (v_heavy_view.col = 2))\n -> Function Scan on generate_series (cost=0.00..20.00 rows=1000 width=0)\n -> Subquery Scan v_heavy_view (cost=0.00..35.00 rows=2 width=4)\n Filter: ((v_heavy_view.col >= 3) AND (v_heavy_view.col = 2))\n -> Function Scan on generate_series (cost=0.00..20.00 rows=1000 width=0)\n\nI want the planner to notice that (v_heavy_view.col >= 3) AND (v_heavy_view.col = 2) can never be satisfied, and skip that subquery.\n\nRegards,\n\n- Gulli\n",
"msg_date": "Tue, 20 Sep 2011 02:15:30 -0700 (PDT)",
"msg_from": "=?ISO-8859-1?Q?Gunnlaugur_=DE=F3r_Briem?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constraint exclusion on UNION ALL subqueries with WHERE\n conditions"
},
{
"msg_contents": "=?ISO-8859-1?Q?Gunnlaugur_=DE=F3r_Briem?= <[email protected]> writes:\n> On Monday, September 19, 2011 3:59:30 AM UTC, Tom Lane wrote:\n>> Works for me in 8.4.8. Do you have constraint_exclusion set to ON?\n\n> I did try with constraint_exclusion set to on, though the docs suggest partition should be enough (\"examine constraints only for ... UNION ALL subqueries\")\n\n> Here's a minimal test case (which I should have supplied in the original post, sorry), tried just now in 8.4.8:\n\n> CREATE OR REPLACE VIEW v_heavy_view\n> AS SELECT (random()*1e5)::integer col\n> FROM generate_series(1, 1e6::integer);\n\n> CREATE OR REPLACE VIEW v_test_constraint_exclusion AS\n> SELECT col FROM v_heavy_view WHERE col < 3\n> UNION ALL SELECT col FROM v_heavy_view WHERE col >= 3;\n\n> EXPLAIN SELECT * FROM v_test_constraint_exclusion WHERE col=2;\n\nHmm. The reason this particular case doesn't work is that we don't\napply relation_excluded_by_constraints() to functions-in-FROM.\nIt's only used for plain-table RTEs, not subqueries, functions,\netc. I suspect the complainant's real case involved an unflattenable\nsubquery.\n\nProbably the rationale for that coding was that only plain tables\ncould have CHECK constraints; but the portion of the logic that looks\nfor mutually contradictory scan constraints could apply to non-table\nrelations.\n\nShould we change the code to make such checks in these cases?\nThe default behavior (with constraint_exclusion = partition) would\nstill be to do nothing extra, but it would add planning expense when\nconstraint_exclusion = on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Sep 2011 13:11:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Constraint exclusion on UNION ALL subqueries with WHERE\n\tconditions"
},
{
"msg_contents": "Right, the view that prompted this involved subqueries; the function was just an artificial test case.\n\nThat change seems like a good one for sure.\n\nIdeally I'd like to enable it for a particular view rather than incur the planning expense for the whole DB (something like ALTER VIEW foo WITH CONSTRAINT EXCLUSION), but I guess there's no support currently (and not easily added) for such per-object planner settings? The application can just issue SET constraint_exclusion=on; as needed; for my case that's fine, but for DBAs maybe a bit limiting.\n\nRegards,\n\n- Gulli\n",
"msg_date": "Thu, 22 Sep 2011 02:43:25 -0700 (PDT)",
"msg_from": "=?ISO-8859-1?Q?Gunnlaugur_=DE=F3r_Briem?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constraint exclusion on UNION ALL subqueries with WHERE\n conditions"
}
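One way to scope the setting to a single statement instead of the whole session, so the extra planning cost is only paid where the UNION ALL view is involved (sketch, reusing the earlier test view):

begin;
set local constraint_exclusion = on;
select * from v_test_constraint_exclusion where col = 2;
commit;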
] |
[
{
"msg_contents": "It's not an issue for me (it's not really impacting performance), but\nsince it was odd I thought I might ask.\n\nI have this supermegaquery:\n\nSELECT\n t.date AS status_date, lu.id AS memberid, lu.username AS\nusername, u.url AS url, ub.url_pattern AS urlpattern, lu.email AS\nemail,\n lu.birth_date AS birthdate, lu.creation_date AS creationdate,\ns.name AS state, co.name AS country,\n opd.survey_id AS originalSurvey, c.name AS city , lu.confirmed\nAS confirmed , pd.name AS action , sd.duration AS loi\nFROM tracks t\n LEFT JOIN surveyduration_v sd\n ON sd.member_id = t.member_id\n AND sd.survey_id = 5936\n INNER JOIN all_users_v lu\n ON lu.id = t.member_id\n AND lu.panel_source_id = 1\n LEFT JOIN track_status ts\n ON ts.id = t.track_status_id\n LEFT JOIN partners p\n ON p.id = t.partner_id\n LEFT JOIN urls u\n ON u.id = t.url_id\n AND u.survey_id = 5936\n LEFT JOIN url_batchs ub\n ON u.url_batch_id = ub.id\n LEFT JOIN states s\n ON lu.state_id = s.id\n LEFT JOIN cities c\n ON lu.city_id = c.id\n LEFT JOIN countries co\n ON lu.country_id = co.id\n LEFT JOIN partner_deliveries pd\n ON pd.id = t.partner_delivery_id\n AND t.partner_id IS NOT NULL\n LEFT JOIN partner_deliveries opd\n ON opd.id = pd.originator_id\nWHERE t.survey_id = 5936\nAND t.track_status_id IN (5)\n\nWith the views\n\nCREATE OR REPLACE VIEW surveyduration_v AS\n SELECT date_part('epoch'::text, t.date - tl2.date) / 60::double\nprecision AS duration, t.member_id, t.survey_id\n FROM tracks t\n JOIN track_logs tl2 ON t.id = tl2.track_id\n WHERE tl2.track_status_id = 8 AND t.track_status_id = 7;\n\nCREATE OR REPLACE VIEW all_users_v AS\n SELECT 1 AS panel_source_id, livra_users.id,\nlivra_users.birth_date, livra_users.creation_date, livra_users.email,\nlivra_users.first_name, livra_users.last_name, livra_users.username,\nlivra_users.image_link, livra_users.confirmed,\nlivra_users.is_panelist, livra_users.unregistered, livra_users.reason,\nlivra_users.privacy, livra_users.sex, livra_users.site,\nlivra_users.country_id, livra_users.state_id, livra_users.city_id,\nlivra_users.last_activity_date, livra_users.partner_id,\nlivra_users.survey_id, livra_users.panelist_update,\nlivra_users.panelist_percentage\n FROM livra_users\nUNION ALL\n SELECT 2 AS panel_source_id, - external_users.id AS id,\nNULL::timestamp without time zone AS birth_date,\nexternal_users.creation_date, external_users.email, NULL::character\nvarying AS first_name, NULL::character varying AS last_name,\nexternal_users.username, NULL::character varying AS image_link, true\nAS confirmed, external_users.is_panelist, false AS unregistered,\nNULL::integer AS reason, 0 AS privacy, NULL::integer AS sex,\nexternal_users.site, external_users.country_id, NULL::integer AS\nstate_id, NULL::integer AS city_id, NULL::timestamp without time zone\nAS last_activity_date, NULL::integer AS partner_id,\nexternal_users.survey_id, NULL::bigint AS panelist_update,\nNULL::smallint AS panelist_percentage\n FROM external_users;\n\nServer is 9.0.3 running on linux\n\nThe BIG tables are tracks, track_logs and urls, all > 30M rows.\n\nOne detail that could be related is that tracks.member_id is an\nundeclared (denoramlized) foreign key to livra_users.\n\nThe resulting plan is:\n\n\"Hash Left Join (cost=51417.93..974563.27 rows=2241518 width=1276)\"\n\" Hash Cond: (\"*SELECT* 1\".country_id = co.id)\"\n\" -> Hash Left Join (cost=51415.40..941722.50 rows=2241518 width=1271)\"\n\" Hash Cond: (\"*SELECT* 1\".state_id = s.id)\"\n\" -> Hash Left Join (cost=51373.45..910859.68 
rows=2241518 width=1263)\"\n\" Hash Cond: (t.partner_delivery_id = pd.id)\"\n\" Join Filter: (t.partner_id IS NOT NULL)\"\n\" -> Hash Left Join (cost=32280.78..854175.26\nrows=2241518 width=1256)\"\n\" Hash Cond: (\"*SELECT* 1\".city_id = c.id)\"\n\" -> Hash Join (cost=24183.20..792841.63\nrows=2241518 width=1249)\"\n\" Hash Cond: (\"*SELECT* 1\".id = t.member_id)\"\n\" -> Append (cost=0.00..148254.38\nrows=3008749 width=168)\"\n\" -> Subquery Scan on \"*SELECT* 1\"\n(cost=0.00..140223.96 rows=3008748 width=168)\"\n\" -> Seq Scan on livra_users\n(cost=0.00..110136.48 rows=3008748 width=168)\"\n\" -> Subquery Scan on \"*SELECT* 2\"\n(cost=0.00..8030.42 rows=1 width=60)\"\n\" -> Result (cost=0.00..8030.41\nrows=1 width=60)\"\n\" One-Time Filter: false\"\n\" -> Seq Scan on\nexternal_users (cost=0.00..8030.41 rows=1 width=60)\"\n\" -> Hash (cost=24181.34..24181.34 rows=149\nwidth=188)\"\n\" -> Hash Left Join\n(cost=21650.42..24181.34 rows=149 width=188)\"\n\" Hash Cond: (u.url_batch_id = ub.id)\"\n\" -> Nested Loop Left Join\n(cost=20828.08..23355.84 rows=149 width=115)\"\n\" -> Merge Left Join\n(cost=20828.08..20841.04 rows=149 width=44)\"\n\" Merge Cond:\n(t.member_id = t.member_id)\"\n\" -> Sort\n(cost=435.90..436.27 rows=149 width=32)\"\n\" Sort Key: t.member_id\"\n\" -> Index\nScan using idx_tracks_survey_id_track_status_id on tracks t\n(cost=0.00..430.52 rows=149 width=32)\"\n\" Index\nCond: ((survey_id = 5936) AND (track_status_id = 5))\"\n\" -> Sort\n(cost=20392.18..20398.28 rows=2440 width=20)\"\n\" Sort Key: t.member_id\"\n\" -> Nested\nLoop (cost=0.00..20254.90 rows=2440 width=20)\"\n\" ->\nIndex Scan using idx_tracks_survey_id_track_status_id on tracks t\n(cost=0.00..2010.03 rows=712 width=20)\"\n\"\nIndex Cond: ((survey_id = 5936) AND (track_status_id = 7))\"\n\" ->\nIndex Scan using idx_track_logs_track_id on track_logs tl2\n(cost=0.00..25.59 rows=3 width=16)\"\n\"\nIndex Cond: (tl2.track_id = t.id)\"\n\"\nFilter: (tl2.track_status_id = 8)\"\n\" -> Index Scan using\nurls_pkey on urls u (cost=0.00..16.87 rows=1 width=87)\"\n\" Index Cond: (u.id =\nt.url_id)\"\n\" Filter: (u.survey_id = 5936)\"\n\" -> Hash (cost=637.15..637.15\nrows=14815 width=81)\"\n\" -> Seq Scan on\nurl_batchs ub (cost=0.00..637.15 rows=14815 width=81)\"\n\" -> Hash (cost=4578.37..4578.37 rows=281537 width=15)\"\n\" -> Seq Scan on cities c\n(cost=0.00..4578.37 rows=281537 width=15)\"\n\" -> Hash (cost=18270.17..18270.17 rows=65799 width=19)\"\n\" -> Hash Left Join (cost=8842.48..18270.17\nrows=65799 width=19)\"\n\" Hash Cond: (pd.originator_id = opd.id)\"\n\" -> Seq Scan on partner_deliveries pd\n(cost=0.00..8019.99 rows=65799 width=19)\"\n\" -> Hash (cost=8019.99..8019.99 rows=65799 width=8)\"\n\" -> Seq Scan on partner_deliveries\nopd (cost=0.00..8019.99 rows=65799 width=8)\"\n\" -> Hash (cost=24.20..24.20 rows=1420 width=16)\"\n\" -> Seq Scan on states s (cost=0.00..24.20 rows=1420 width=16)\"\n\" -> Hash (cost=1.68..1.68 rows=68 width=13)\"\n\" -> Seq Scan on countries co (cost=0.00..1.68 rows=68 width=13)\"\n\nThe curious bit is the rowcount (2241518) which is grossly\nmisestimated. It gets to that rowcount when joining the all_users_v\nview with tracks, both partial results are estimated at ~150 rows\n(more or less on target), it's a join of int PK to int column, so I\ncannot imagine how that join could result in 2M rows, what is pg\nthinking to get to that number?\n\nEven a full cross product couldn't get that high.\n\nPerformance isn't impacted, the plan, even with the misestimation, is\nnear optimal. 
But I can imagine this kind of misestimation wreaking\nhavoc in other situations.\n",
"msg_date": "Fri, 16 Sep 2011 11:50:40 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd misprediction"
},
{
"msg_contents": "On Fri, Sep 16, 2011 at 17:50, Claudio Freire <[email protected]> wrote:\n> It's not an issue for me (it's not really impacting performance), but\n> since it was odd I thought I might ask.\n>\n> I have this supermegaquery:\n>\n> SELECT\n> t.date AS status_date, lu.id AS memberid, lu.username AS\n> username, u.url AS url, ub.url_pattern AS urlpattern, lu.email AS\n> email,\n> lu.birth_date AS birthdate, lu.creation_date AS creationdate,\n> s.name AS state, co.name AS country,\n> opd.survey_id AS originalSurvey, c.name AS city , lu.confirmed\n> AS confirmed , pd.name AS action , sd.duration AS loi\n> FROM tracks t\n> LEFT JOIN surveyduration_v sd\n> ON sd.member_id = t.member_id\n> AND sd.survey_id = 5936\n> INNER JOIN all_users_v lu\n> ON lu.id = t.member_id\n> AND lu.panel_source_id = 1\n> LEFT JOIN track_status ts\n> ON ts.id = t.track_status_id\n> LEFT JOIN partners p\n> ON p.id = t.partner_id\n> LEFT JOIN urls u\n> ON u.id = t.url_id\n> AND u.survey_id = 5936\n> LEFT JOIN url_batchs ub\n> ON u.url_batch_id = ub.id\n> LEFT JOIN states s\n> ON lu.state_id = s.id\n> LEFT JOIN cities c\n> ON lu.city_id = c.id\n> LEFT JOIN countries co\n> ON lu.country_id = co.id\n> LEFT JOIN partner_deliveries pd\n> ON pd.id = t.partner_delivery_id\n> AND t.partner_id IS NOT NULL\n> LEFT JOIN partner_deliveries opd\n> ON opd.id = pd.originator_id\n> WHERE t.survey_id = 5936\n> AND t.track_status_id IN (5)\n>\n> With the views\n>\n> CREATE OR REPLACE VIEW surveyduration_v AS\n> SELECT date_part('epoch'::text, t.date - tl2.date) / 60::double\n> precision AS duration, t.member_id, t.survey_id\n> FROM tracks t\n> JOIN track_logs tl2 ON t.id = tl2.track_id\n> WHERE tl2.track_status_id = 8 AND t.track_status_id = 7;\n>\n> CREATE OR REPLACE VIEW all_users_v AS\n> SELECT 1 AS panel_source_id, livra_users.id,\n> livra_users.birth_date, livra_users.creation_date, livra_users.email,\n> livra_users.first_name, livra_users.last_name, livra_users.username,\n> livra_users.image_link, livra_users.confirmed,\n> livra_users.is_panelist, livra_users.unregistered, livra_users.reason,\n> livra_users.privacy, livra_users.sex, livra_users.site,\n> livra_users.country_id, livra_users.state_id, livra_users.city_id,\n> livra_users.last_activity_date, livra_users.partner_id,\n> livra_users.survey_id, livra_users.panelist_update,\n> livra_users.panelist_percentage\n> FROM livra_users\n> UNION ALL\n> SELECT 2 AS panel_source_id, - external_users.id AS id,\n> NULL::timestamp without time zone AS birth_date,\n> external_users.creation_date, external_users.email, NULL::character\n> varying AS first_name, NULL::character varying AS last_name,\n> external_users.username, NULL::character varying AS image_link, true\n> AS confirmed, external_users.is_panelist, false AS unregistered,\n> NULL::integer AS reason, 0 AS privacy, NULL::integer AS sex,\n> external_users.site, external_users.country_id, NULL::integer AS\n> state_id, NULL::integer AS city_id, NULL::timestamp without time zone\n> AS last_activity_date, NULL::integer AS partner_id,\n> external_users.survey_id, NULL::bigint AS panelist_update,\n> NULL::smallint AS panelist_percentage\n> FROM external_users;\n>\n> Server is 9.0.3 running on linux\n>\n> The BIG tables are tracks, track_logs and urls, all > 30M rows.\n>\n> One detail that could be related is that tracks.member_id is an\n> undeclared (denoramlized) foreign key to livra_users.\n>\n> The resulting plan is:\n>\n> \"Hash Left Join (cost=51417.93..974563.27 rows=2241518 width=1276)\"\n> \" Hash Cond: (\"*SELECT* 
1\".country_id = co.id)\"\n> \" -> Hash Left Join (cost=51415.40..941722.50 rows=2241518 width=1271)\"\n> \" Hash Cond: (\"*SELECT* 1\".state_id = s.id)\"\n> \" -> Hash Left Join (cost=51373.45..910859.68 rows=2241518 width=1263)\"\n> \" Hash Cond: (t.partner_delivery_id = pd.id)\"\n> \" Join Filter: (t.partner_id IS NOT NULL)\"\n> \" -> Hash Left Join (cost=32280.78..854175.26\n> rows=2241518 width=1256)\"\n> \" Hash Cond: (\"*SELECT* 1\".city_id = c.id)\"\n> \" -> Hash Join (cost=24183.20..792841.63\n> rows=2241518 width=1249)\"\n> \" Hash Cond: (\"*SELECT* 1\".id = t.member_id)\"\n> \" -> Append (cost=0.00..148254.38\n> rows=3008749 width=168)\"\n> \" -> Subquery Scan on \"*SELECT* 1\"\n> (cost=0.00..140223.96 rows=3008748 width=168)\"\n> \" -> Seq Scan on livra_users\n> (cost=0.00..110136.48 rows=3008748 width=168)\"\n> \" -> Subquery Scan on \"*SELECT* 2\"\n> (cost=0.00..8030.42 rows=1 width=60)\"\n> \" -> Result (cost=0.00..8030.41\n> rows=1 width=60)\"\n> \" One-Time Filter: false\"\n> \" -> Seq Scan on\n> external_users (cost=0.00..8030.41 rows=1 width=60)\"\n> \" -> Hash (cost=24181.34..24181.34 rows=149\n> width=188)\"\n> \" -> Hash Left Join\n> (cost=21650.42..24181.34 rows=149 width=188)\"\n> \" Hash Cond: (u.url_batch_id = ub.id)\"\n> \" -> Nested Loop Left Join\n> (cost=20828.08..23355.84 rows=149 width=115)\"\n> \" -> Merge Left Join\n> (cost=20828.08..20841.04 rows=149 width=44)\"\n> \" Merge Cond:\n> (t.member_id = t.member_id)\"\n> \" -> Sort\n> (cost=435.90..436.27 rows=149 width=32)\"\n> \" Sort Key: t.member_id\"\n> \" -> Index\n> Scan using idx_tracks_survey_id_track_status_id on tracks t\n> (cost=0.00..430.52 rows=149 width=32)\"\n> \" Index\n> Cond: ((survey_id = 5936) AND (track_status_id = 5))\"\n> \" -> Sort\n> (cost=20392.18..20398.28 rows=2440 width=20)\"\n> \" Sort Key: t.member_id\"\n> \" -> Nested\n> Loop (cost=0.00..20254.90 rows=2440 width=20)\"\n> \" ->\n> Index Scan using idx_tracks_survey_id_track_status_id on tracks t\n> (cost=0.00..2010.03 rows=712 width=20)\"\n> \"\n> Index Cond: ((survey_id = 5936) AND (track_status_id = 7))\"\n> \" ->\n> Index Scan using idx_track_logs_track_id on track_logs tl2\n> (cost=0.00..25.59 rows=3 width=16)\"\n> \"\n> Index Cond: (tl2.track_id = t.id)\"\n> \"\n> Filter: (tl2.track_status_id = 8)\"\n> \" -> Index Scan using\n> urls_pkey on urls u (cost=0.00..16.87 rows=1 width=87)\"\n> \" Index Cond: (u.id =\n> t.url_id)\"\n> \" Filter: (u.survey_id = 5936)\"\n> \" -> Hash (cost=637.15..637.15\n> rows=14815 width=81)\"\n> \" -> Seq Scan on\n> url_batchs ub (cost=0.00..637.15 rows=14815 width=81)\"\n> \" -> Hash (cost=4578.37..4578.37 rows=281537 width=15)\"\n> \" -> Seq Scan on cities c\n> (cost=0.00..4578.37 rows=281537 width=15)\"\n> \" -> Hash (cost=18270.17..18270.17 rows=65799 width=19)\"\n> \" -> Hash Left Join (cost=8842.48..18270.17\n> rows=65799 width=19)\"\n> \" Hash Cond: (pd.originator_id = opd.id)\"\n> \" -> Seq Scan on partner_deliveries pd\n> (cost=0.00..8019.99 rows=65799 width=19)\"\n> \" -> Hash (cost=8019.99..8019.99 rows=65799 width=8)\"\n> \" -> Seq Scan on partner_deliveries\n> opd (cost=0.00..8019.99 rows=65799 width=8)\"\n> \" -> Hash (cost=24.20..24.20 rows=1420 width=16)\"\n> \" -> Seq Scan on states s (cost=0.00..24.20 rows=1420 width=16)\"\n> \" -> Hash (cost=1.68..1.68 rows=68 width=13)\"\n> \" -> Seq Scan on countries co (cost=0.00..1.68 rows=68 width=13)\"\n>\n> The curious bit is the rowcount (2241518) which is grossly\n> misestimated. 
It gets to that rowcount when joining the all_users_v\n> view with tracks, both partial results are estimated at ~150 rows\n> (more or less on target), it's a join of int PK to int column, so I\n> cannot imagine how that join could result in 2M rows, what is pg\n> thinking to get to that number?\n>\n> Even a full cross product couldn't get that high.\n>\n> Performance isn't impacted, the plan, even with the misestimation, is\n> near optimal. But I can imagine this kind of misestimation wreaking\n> havoc in other situations.\n\nLooks like these reports could be related:\n\nhttp://archives.postgresql.org/pgsql-performance/2011-08/msg00248.php\nhttp://archives.postgresql.org/pgsql-hackers/2011-08/msg01388.php\n\nTom Lane tracked these down to a likely cause, but AFAICT this has not\nbeen fixed yet.\n\nRegards,\nMarti\n",
"msg_date": "Sat, 17 Sep 2011 05:40:39 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd misprediction"
}
] |
[
{
"msg_contents": "2011/9/17 Tomas Vondra <[email protected]> wrote:\n(...)\n> We've been asked by a local university for PostgreSQL-related topics of\n> theses and seminary works\n\nI'm also interested in such proposals or ideas!\n\nHere's some list of topics:\n* Adding WAL-support to hash indexes in PostgreSQL (see ex-topic)\n* Time in PostgreSQL\n* Storing (Weather) Sensor Data in PostgreSQL\n* Fast Bulk Data Inserting in PostgreSQL with Unlogged tables (incl.\nadding GiST support)\n* Performance Tuning of Read-Only a PostgreSQL Database\n* Materialized Views in PostgreSQL: Experiments around Jonathan\nGardner's Proposal\n* more... ?\n\nYours, Stefan\n",
"msg_date": "Sat, 17 Sep 2011 22:01:30 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL-related topics of theses and seminary works sought (Was:\n\tHash index use presently(?) discouraged...)"
},
{
"msg_contents": "17.09.11 23:01, Stefan Keller написав(ла):\n> * more... ?\nWhat I miss from my DB2 UDB days are buffer pools. In PostgreSQL terms \nthis would be part of shared buffers dedicated to a relation or a set of \nrelations. When you have a big DB (not fitting in memory) you also \nusually want some small tables/indexes be in memory, no matter what \nother load DB has.\nComplimentary features are:\n1) Relations preloading at startup - ensure this relation are in memory.\n2) Per buffer pool (or relation) page costs - tell it that this \nindexes/tables ARE in memory\n\nBest regards, Vitalii Tymchyshyn.\n",
"msg_date": "Mon, 19 Sep 2011 12:37:29 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-related topics of theses and seminary works\n\tsought (Was: Hash index use presently(?) discouraged...)"
},
{
"msg_contents": "Stefan Keller, 17.09.2011 22:01:\n> I'm also interested in such proposals or ideas!\n>\n> Here's some list of topics:\n> * Time in PostgreSQL\n> * Fast Bulk Data Inserting in PostgreSQL with Unlogged tables\n\nI don't understand these two items. Postgres does have a time data type and it has unlogged tables since 9.1\n\nRegards\nThomas\n\n",
"msg_date": "Mon, 19 Sep 2011 12:33:41 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-related topics of theses and seminary works sought\n\t(Was: Hash index use presently(?) discouraged...)"
},
{
"msg_contents": "2011/9/19 Vitalii Tymchyshyn <[email protected]>:\n> 17.09.11 23:01, Stefan Keller написав(ла):\n>>\n>> * more... ?\n>\n> What I miss from my DB2 UDB days are buffer pools. In PostgreSQL terms this\n> would be part of shared buffers dedicated to a relation or a set of\n> relations. When you have a big DB (not fitting in memory) you also usually\n> want some small tables/indexes be in memory, no matter what other load DB\n> has.\n> Complimentary features are:\n> 1) Relations preloading at startup - ensure this relation are in memory.\n\nyou can use pgfincore extension to achieve that, for the OS cache. It\ndoes not look interesting to do that for shared_buffers of postgresql\n(the subject has been discussed and can be discussed again, please\ncheck mailling list archieve first)\n\n> 2) Per buffer pool (or relation) page costs - tell it that this\n> indexes/tables ARE in memory\n\nyou can use tablespace parameters (*_cost) for that, it has been\nrejected for tables in the past.\nI did propose something to start to work in this direction.\nSee \"[WIP] cache estimates, cache access cost\" in postgresql-hackers\nmailling list.\n\nThis proposal let inform the planner of the table memory usage and\ntake that into account.\n\n\n>\n> Best regards, Vitalii Tymchyshyn.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Mon, 19 Sep 2011 13:57:28 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-related topics of theses and seminary works\n\tsought (Was: Hash index use presently(?) discouraged...)"
},
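A sketch of the per-tablespace cost overrides Cédric refers to (9.0+); the tablespace name fast_ssd is hypothetical:

alter tablespace fast_ssd
  set (seq_page_cost = 1.0, random_page_cost = 1.0);
-- tells the planner that relations stored on this tablespace are cheap to
-- read, roughly "this data is effectively in memory / on fast storage"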
{
"msg_contents": "Hello.\n\nI did read and AFAIR sometimes responded on this long discussions. The \nmain point for me is that many DBAs dont want to have even more random \nplans with postgresql knowing what's in memory now and using this \ninformation directly in runtime. I also think this point is valid.\nWhat I would like to have is to force some relations to be in memory by \ngiving them fixed part of shared buffers and to tell postgresql they are \nin memory (lowering page costs) to have fixed optimal plans.\n\nBest regards, Vitalii Tymchyshyn.\n\n19.09.11 14:57, Cédric Villemain написав(ла):\n> 2011/9/19 Vitalii Tymchyshyn<[email protected]>:\n>> 17.09.11 23:01, Stefan Keller написав(ла):\n>>> * more... ?\n>> What I miss from my DB2 UDB days are buffer pools. In PostgreSQL terms this\n>> would be part of shared buffers dedicated to a relation or a set of\n>> relations. When you have a big DB (not fitting in memory) you also usually\n>> want some small tables/indexes be in memory, no matter what other load DB\n>> has.\n>> Complimentary features are:\n>> 1) Relations preloading at startup - ensure this relation are in memory.\n> you can use pgfincore extension to achieve that, for the OS cache. It\n> does not look interesting to do that for shared_buffers of postgresql\n> (the subject has been discussed and can be discussed again, please\n> check mailling list archieve first)\n>\n>> 2) Per buffer pool (or relation) page costs - tell it that this\n>> indexes/tables ARE in memory\n> you can use tablespace parameters (*_cost) for that, it has been\n> rejected for tables in the past.\n> I did propose something to start to work in this direction.\n> See \"[WIP] cache estimates, cache access cost\" in postgresql-hackers\n> mailling list.\n>\n> This proposal let inform the planner of the table memory usage and\n> take that into account.\n\n",
"msg_date": "Mon, 19 Sep 2011 16:06:10 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-related topics of theses and seminary works\n\tsought (Was: Hash index use presently(?) discouraged...)"
},
{
"msg_contents": "On 17/09/2011 22:01, Stefan Keller wrote:\n> 2011/9/17 Tomas Vondra <[email protected]> wrote:\n> (...)\n>> We've been asked by a local university for PostgreSQL-related topics of\n>> theses and seminary works\n> \n> I'm also interested in such proposals or ideas!\n> \n> Here's some list of topics:\n> * Adding WAL-support to hash indexes in PostgreSQL (see ex-topic)\n> * Time in PostgreSQL\n> * Storing (Weather) Sensor Data in PostgreSQL\n> * Fast Bulk Data Inserting in PostgreSQL with Unlogged tables (incl.\n> adding GiST support)\n> * Performance Tuning of Read-Only a PostgreSQL Database\n> * Materialized Views in PostgreSQL: Experiments around Jonathan\n> Gardner's Proposal\n> * more... ?\n\n * Covering indexes\n * Controllable record compression\n * Memory tables\n\n\n",
"msg_date": "Mon, 19 Sep 2011 15:24:55 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-related topics of theses and seminary works sought\n\t(Was: Hash index use presently(?) discouraged...)"
}
] |
[
{
"msg_contents": "Stefan Keller wrote:\n \n> It's hard for me to imagine that btree is superior for all the\n> issues mentioned before.\n \nIt would be great if you could show a benchmark technique which shows\notherwise.\n \n-Kevin\n",
"msg_date": "Sun, 18 Sep 2011 10:17:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index use presently(?) discouraged since\n\t2005: revive or bury it?"
},
{
"msg_contents": "I'm simply referring to literature (like the intro Ramakrishnan & Gehrke).\nI just know that Oracle an Mysql actually do have them too and use it\nwithout those current implementation specific restrictions in\nPostgres.\n\nIMHO by design Hash Index (e.g. linear hashing) work best when:\n1. only equal (=) tests are used (on whole values)\n2. columns (key values) have very-high cardinality\n\nAnd ideally but not necessarily when index values do not change and\nnumber of rows are known ahead of time (avoiding O(N) worst case - but\nthere are approaches to chaining with dynamic resizing).\n\nI just collected this to encourage ourselves that enhancing hash\nindexes could be worthwhile.\n\nStefan\n\n2011/9/18 Kevin Grittner <[email protected]>:\n> Stefan Keller wrote:\n>\n>> It's hard for me to imagine that btree is superior for all the\n>> issues mentioned before.\n>\n> It would be great if you could show a benchmark technique which shows\n> otherwise.\n>\n> -Kevin\n>\n",
"msg_date": "Sun, 18 Sep 2011 21:31:55 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index use presently(?) discouraged since 2005:\n\trevive or bury it?"
},
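For readers who want to try the trade-off Stefan describes, a minimal libpq sketch like the following builds a btree and a hash index on the same high-cardinality column and probes it with an equality predicate, the only kind of comparison a hash index can serve. Table and column names (lookup, key) are invented, and note that in the 8.4/9.x releases discussed in this thread hash indexes are not WAL-logged.

/*
 * Sketch only: compare btree and hash indexes on a high-cardinality text
 * column for equality probes.  Table/column names are invented; a range
 * query (key > ...) could only use the btree.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* adjust connection string */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "CREATE INDEX lookup_key_btree ON lookup USING btree (key)"));
    PQclear(PQexec(conn, "CREATE INDEX lookup_key_hash ON lookup USING hash (key)"));

    /* Equality probe: print the plan to see which index the planner picks. */
    PGresult *res = PQexec(conn,
        "EXPLAIN ANALYZE SELECT * FROM lookup WHERE key = 'abc123'");
    for (int i = 0; i < PQntuples(res); i++)
        printf("%s\n", PQgetvalue(res, i, 0));
    PQclear(res);

    PQfinish(conn);
    return 0;
}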
{
"msg_contents": "Regarding the recent discussion about hash versus B-trees: Here is a trick I invented years ago to make hash indexes REALLY fast. It eliminates the need for a second disk access to check the data in almost all cases, at the cost of an additional 32-bit integer in the hash-table data structure.\n\nUsing this technique, we were able to load a hash-indexed database with data transfer rates that matched a cp (copy) command of the same data on Solaris, HP-UX and IBM AIX systems.\n\nYou build a normal hash table with hash-collision chains. But you add another 32-bit integer \"signature\" field to the hash-collision record (call it \"DBsig\"). You also create a function:\n\n signature = sig(key)\n\nthat produces digital signature. The critical factor in the sig() function is that there is an average of 9 bits set (i.e. it is somewhat \"sparse\" on bits).\n\nDBsig for a hash-collision chain is always the bitwise OR of every record in that hash-collision chain. When you add a record to the hash table, you do a bitwise OR of its signature into the existing DBsig. If you delete a record, you erase DBsig and rebuild it by recomputing the signatures of each record in the hash-collision chain and ORing them together again.\n\nThat means that for any key K, if K is actually on the disk, then all of the bits of sig(K) are always set in the hash-table record's \"DBsig\". If any one bit in sig(K) isn't set in \"DBsig\", then K is not in the database and you don't have to do a disk access to verify it. More formally, if\n\n sig(K) AND DBsig != sig(K)\n\nthen K is definitely not in the database.\n\nA typical hash table implementation might operate with a hash table that's 50-90% full, which means that the majority of accesses to a hash index will return a record and require a disk access to check whether the key K is actually in the database. With the signature method, you can eliminate over 99.9% of these disk accesses -- you only have to access the data when you actually want to read or update it. The hash table can usually fit easily in memory even for large tables, so it is blazingly fast.\n\nFurthermore, performance degrades gracefully as the hash table becomes overloaded. Since each signature has 9 bits set, you can typically have 5-10 hash collisions (a lot of signatures ORed together in each record's DBsig) before the false-positive rate of the signature test gets too high. As the hash table gets overloaded and needs to be resized, the false positives increase gradually and performance decreases due to the resulting unnecessary disk fetches to check the key. In the worst case (e.g. a hash table that's overloaded by a factor of 10 or more), performance degrades to what it would be without the signatures.\n\nFor much higher selectivity, you could use a 64-bit signatures and make the sig() set an average of 13 bits. You'd get very good selectivity even in a badly overloaded hash table (long hash-collision chains).\n\nThis technique was never patented, and it was disclosed at several user-group meetings back in the early 1990's, so there are no restrictions on its use. If this is of any use to anyone, maybe you could stick my name in the code somewhere.\n\nCraig James (the other Craig)\n",
"msg_date": "Sun, 18 Sep 2011 18:14:01 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "How to make hash indexes fast"
},
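To make the description above concrete, here is a small self-contained C sketch of the signature filter as described: sig() sets roughly 9 of 32 bits derived from the key, each bucket keeps the bitwise OR of the signatures of its chain, and a lookup only walks the chain (the part that would hit disk in a real index) when every bit of sig(K) is present in the bucket's DBsig. The hash and signature functions are simple stand-ins, not the ones used in the original system, and deletion (erase and rebuild DBsig from the remaining chain) is omitted.

/*
 * Sketch of the "DBsig" signature filter described above.  The table,
 * hash() and sig() are simplified stand-ins; the point is the bitwise
 * test (DBsig & sig(K)) == sig(K), which rules out most misses without
 * touching the record chain (which in a real index lives on disk).
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 1024                 /* toy table size */

typedef struct Rec {
    char key[64];
    struct Rec *next;                 /* hash-collision chain */
} Rec;

typedef struct {
    uint32_t dbsig;                   /* OR of sig(key) over the whole chain */
    Rec     *chain;
} Bucket;

static Bucket table[NBUCKETS];

/* Simple FNV-1a string hash; any decent hash would do here. */
static uint32_t hash(const char *s)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h;
}

/* Derive a 32-bit signature with roughly 9 bits set by picking 9 bit
 * positions from successive re-mixes of the key's hash (positions may
 * occasionally repeat, so the average is a little under 9). */
static uint32_t sig(const char *s)
{
    uint32_t h = hash(s), result = 0;
    for (int i = 0; i < 9; i++) {
        h = h * 1103515245u + 12345u;     /* cheap re-mix */
        result |= 1u << (h >> 27);        /* top 5 bits -> position 0..31 */
    }
    return result;
}

static void insert(const char *key)
{
    Bucket *b = &table[hash(key) % NBUCKETS];
    Rec *r = malloc(sizeof(Rec));
    if (r == NULL) { perror("malloc"); exit(1); }
    snprintf(r->key, sizeof(r->key), "%s", key);
    r->next = b->chain;
    b->chain = r;
    b->dbsig |= sig(key);                 /* accumulate the chain's signature */
}

/* Returns 1 if key is present.  The signature test skips the chain walk
 * (the expensive disk access in a real index) for almost all misses. */
static int lookup(const char *key)
{
    Bucket *b = &table[hash(key) % NBUCKETS];
    uint32_t s = sig(key);
    if ((b->dbsig & s) != s)
        return 0;                         /* definitely absent: no chain walk */
    for (Rec *r = b->chain; r; r = r->next)
        if (strcmp(r->key, key) == 0)
            return 1;
    return 0;                             /* false positive of the filter */
}

int main(void)
{
    insert("alpha"); insert("beta"); insert("gamma");
    printf("beta:  %d\n", lookup("beta"));    /* 1 */
    printf("delta: %d\n", lookup("delta"));   /* 0, usually without a chain walk */
    return 0;
}

As the replies below note, DBsig is essentially a per-chain Bloom filter; the open question for PostgreSQL is how such a filter would interact with an index that stores no keys and still has to visit the heap for visibility checks.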
{
"msg_contents": "Hi,\n\nOn 19 September 2011 11:14, Craig James <[email protected]> wrote:\n> DBsig for a hash-collision chain is always the bitwise OR of every record in\n> that hash-collision chain. When you add a record to the hash table, you do\n> a bitwise OR of its signature into the existing DBsig. If you delete a\n> record, you erase DBsig and rebuild it by recomputing the signatures of each\n> record in the hash-collision chain and ORing them together again.\n\nSound like a Bloom filter [1] to me.\n\nBTW, Does Postgres use Bloom filter anywhere?\n\n[1] http://en.wikipedia.org/wiki/Bloom_filter\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Mon, 19 Sep 2011 11:25:16 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to make hash indexes fast"
},
{
"msg_contents": "2011/9/19 Ondrej Ivanič <[email protected]>:\n> BTW, Does Postgres use Bloom filter anywhere?\n\nI saw patches for at least in-memory bloom filters (for hash joins)\nNot sure they're committed. I think so.\n",
"msg_date": "Mon, 19 Sep 2011 03:27:22 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to make hash indexes fast"
},
{
"msg_contents": "On Sun, Sep 18, 2011 at 6:14 PM, Craig James <[email protected]> wrote:\n> Regarding the recent discussion about hash versus B-trees: Here is a trick I\n> invented years ago to make hash indexes REALLY fast. It eliminates the need\n> for a second disk access to check the data in almost all cases, at the cost\n> of an additional 32-bit integer in the hash-table data structure.\n\nI don't see how that can work unless almost all of your queries return\nzero rows. If it returns rows, you have to go get those rows in order\nto return them. And you also have to go get them to determine if they\nare currently visible to the current transaction. This sounds like a\nhighly specialized data structure for a highly specialized situation.\n\n> Using this technique, we were able to load a hash-indexed database with data\n> transfer rates that matched a cp (copy) command of the same data on Solaris,\n> HP-UX and IBM AIX systems.\n>\n> You build a normal hash table with hash-collision chains. But you add\n> another 32-bit integer \"signature\" field to the hash-collision record (call\n> it \"DBsig\"). You also create a function:\n>\n> signature = sig(key)\n>\n> that produces digital signature. The critical factor in the sig() function\n> is that there is an average of 9 bits set (i.e. it is somewhat \"sparse\" on\n> bits).\n>\n> DBsig for a hash-collision chain is always the bitwise OR of every record in\n> that hash-collision chain. When you add a record to the hash table, you do\n> a bitwise OR of its signature into the existing DBsig. If you delete a\n> record, you erase DBsig and rebuild it by recomputing the signatures of each\n> record in the hash-collision chain and ORing them together again.\n\nSince PG doesn't store the keys in the hash index, this would mean\nvisiting the table for every entry with the same hash code.\n\n>\n> That means that for any key K, if K is actually on the disk, then all of the\n> bits of sig(K) are always set in the hash-table record's \"DBsig\". If any\n> one bit in sig(K) isn't set in \"DBsig\", then K is not in the database and\n> you don't have to do a disk access to verify it. More formally, if\n>\n> sig(K) AND DBsig != sig(K)\n>\n> then K is definitely not in the database.\n>\n> A typical hash table implementation might operate with a hash table that's\n> 50-90% full, which means that the majority of accesses to a hash index will\n> return a record and require a disk access to check whether the key K is\n> actually in the database.\n\nPG hash indexes do not typically operate at anywhere near that full,\nbecause PG stores the entire 32 bit hash value. Even if there are\nonly 8 buckets, and so only the bottom 3 bits are used to identify the\nbucket, once in the bucket all 32 bits are inspected for collisions.\nSo on tables with less than several hundred million, collisions are\nrare except for identical keys or malicious keys. And if you want\ntables much larger than that, you should probably go whole hog and\nswitch over to 64 bit.\n\nSo the fullness of the hash-value space and the fullness of the actual\ntable are two different things in PG.\n\n\n> With the signature method, you can eliminate over\n> 99.9% of these disk accesses -- you only have to access the data when you\n> actually want to read or update it. The hash table can usually fit easily\n> in memory even for large tables, so it is blazingly fast.\n>\n> Furthermore, performance degrades gracefully as the hash table becomes\n> overloaded. 
Since each signature has 9 bits set, you can typically have\n> 5-10 hash collisions (a lot of signatures ORed together in each record's\n> DBsig) before the false-positive rate of the signature test gets too high.\n\nBut why not just distribute those 32/(5 to 10) bits to the ordinary\nhash space, increasing them from 32 to 35 bits, rather than creating a\nseparate hash space? Doesn't that get you the same resolving power?\n\nCheers,\n\nJeff\n",
"msg_date": "Sun, 18 Sep 2011 19:58:02 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to make hash indexes fast"
}
] |