[ { "msg_contents": "Hi,\n\nI'm wondering if the plpgsql code:\n\nPERFORM 1 FROM foo;\nIF FOUND THEN ...\n\nis any slower than:\n\nPERFORM 1 FROM foo LIMIT 1;\nIF FOUND THEN ...\n\nSeems like it _could_ be smart enough to know that\n\n1) It's selecting from a real table and not a function\n\n2) GET DIAGNOSTICS is not used\n\nand therefore it does not have to do more than set\nFOUND, and need find only one row/plan the query\nto find only one row. I'm particularly interested\nin the query plan optimization aspect.\n\nWould it be considered poor practice to rely on\nsuch an optimization?\n\nThanks.\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Sat, 22 Oct 2005 05:05:21 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using LIMIT 1 in plpgsql PERFORM statements" }, { "msg_contents": "Karl,\n\n> PERFORM 1 FROM foo;\n> IF FOUND THEN ...\n>\n> is any slower than:\n>\n> PERFORM 1 FROM foo LIMIT 1;\n> IF FOUND THEN ...\n\nI'm wondering in what context it makes sense to call PERFORM on a constant.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Oct 2005 14:02:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT 1 in plpgsql PERFORM statements" }, { "msg_contents": "\nOn 10/23/2005 04:02:35 PM, Josh Berkus wrote:\n> Karl,\n> \n> > PERFORM 1 FROM foo;\n> > IF FOUND THEN ...\n> >\n> > is any slower than:\n> >\n> > PERFORM 1 FROM foo LIMIT 1;\n> > IF FOUND THEN ...\n> \n> I'm wondering in what context it makes sense to call PERFORM on a\n> constant.\n\nIf you want to find out if the table has any rows.\nI'm really interested in what happens when\nthere's a WHERE qualification. I want to find\nout if there's any of some particular sort of row.\nBut I figured it wasn't worth putting that into\nthe example because I didn't have anything\nspecific to put in the WHERE clause. I suppose\nI should have put it in anyway and followed with ....\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Mon, 24 Oct 2005 00:25:09 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LIMIT 1 in plpgsql PERFORM statements" }, { "msg_contents": "\nOn 10/23/2005 04:02:35 PM, Josh Berkus wrote:\n\n> I'm wondering in what context it makes sense to call PERFORM on a\n> constant.\n\nI like to write PERFORMs that return a constant when\nselecting from a table. It emphasizes that the\nselection is being done for its side effects.\n\n(Programs should be written for people to read\nand only incidentally for computers to execute.\nPrograms that people can't read quickly\nbecome useless whereas programs that can't run\nquickly can be fixed. Computers are easy.\nPeople are difficult.)\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Mon, 24 Oct 2005 00:40:18 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LIMIT 1 in plpgsql PERFORM statements" }, { "msg_contents": "Karl,\n\n> I like to write PERFORMs that return a constant when\n> selecting from a table. 
It emphasizes that the\n> selection is being done for its side effects.\n\nWell, there's always the destruction test: run each version of the function \n10,000 times and see if there's an execution time difference.\n\n> (Programs should be written for people to read\n> and only incidentally for computers to execute.\n> Programs that people can't read quickly\n> become useless whereas programs that can't run\n> quickly can be fixed. Computers are easy.\n> People are difficult.)\n\nThat's a nice sentiment, but I don't see how it applies. For example, if I \ndo:\n\nSELECT id INTO v_check\nFROM some_table ORDER BY id LIMIT 1;\n\nIF id > 0 THEN ....\n\n... that says pretty clearly to code maintainers that I'm only interested in \nfinding out whether there's any rows in the table, while making sure I use \nthe index on ID. If I want to make it more clear, I do:\n\n-- check whether the table is populated\n\nNot that there's anything wrong with your IF FOUND approach, but let's not mix \nup optimizations and making your code pretty ... especially for a SQL \nscripting language.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Oct 2005 21:36:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT 1 in plpgsql PERFORM statements" }, { "msg_contents": "On Sun, 2005-23-10 at 21:36 -0700, Josh Berkus wrote:\n> SELECT id INTO v_check\n> FROM some_table ORDER BY id LIMIT 1;\n> \n> IF id > 0 THEN ....\n> \n> ... that says pretty clearly to code maintainers that I'm only interested in \n> finding out whether there's any rows in the table, while making sure I use \n> the index on ID.\n\nWhy would you want to use the index on ID?\n\n-Neil\n\n\n", "msg_date": "Mon, 24 Oct 2005 00:40:10 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT 1 in plpgsql PERFORM statements" } ]
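A minimal plpgsql sketch of the existence test this thread circles around, assuming a hypothetical table foo(bar integer) and 8.0-style dollar quoting; EXISTS is satisfied by the first matching row, so neither LIMIT 1 nor a FOUND check is needed:

CREATE OR REPLACE FUNCTION foo_has_bar(p_bar integer) RETURNS boolean AS $$
BEGIN
    -- EXISTS stops at the first matching row, so the planner only has to
    -- fetch one row from foo; no LIMIT clause or GET DIAGNOSTICS involved.
    RETURN EXISTS (SELECT 1 FROM foo WHERE bar = p_bar);
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT foo_has_bar(42);

Whether this actually beats PERFORM ... LIMIT 1 on a given table is best settled with the repeated-execution timing test suggested above.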
[ { "msg_contents": "Hi All,\n\n \n\nI am Kishore doing freelance development of J2EE applications. \n\n \n\nWe switched to use Postgresql recently because of the advantages it has over other commercial databases. All went well untill recently, untill we began working on an application that needs to maintain a huge database. \n\n \n\nI am describing the problem we are facing below. Can you please take a look at the case, and help me in configuring the PostgreSQL.\n\n \n\nWe have only two tables, one of which contains 97% of the data and the other table which contains 2.8% of the data. All other contain only the remaining 0.2% of data and are designed to support these two big tables. Currently we have 9 million of records in the first table and 0.2 million of records in the second table.\n\nWe need to insert into the bigger table almost for every second , through out the life time. In addition, we receive at least 200,000 records a day at a fixed time.\n\nWe are facing a critical situation because of the performance of the database. Even a basic query like select count(*) from bigger_table is taking about 4 minutes to return.\n\nThe following is the system configuration.\n\nDatabase : Postgresql 7.3\nOS : Redhat Linux\nProcessor : Athlon,\nMemory : 2 GB\n\nWe are expecting that at least 200 active connections need to be maintained through out the day.\n\n\n \n\nI am also attaching the configuration file that we are currently using.\n\n\n\nCan any you please suggest the best configuration to satisfy the above requirements?\n\nThanks in advance. \n\n \n\nThank you,\n\nKishore.\n\n \n\n\n\t\t\n---------------------------------\n Yahoo! FareChase - Search multiple travel sites in one click.", "msg_date": "Sat, 22 Oct 2005 14:12:05 -0700 (PDT)", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Need help in setting optimal configuration for a huge database." } ]
[ { "msg_contents": "Hi All,\n\n I am Kishore doing freelance development of J2EE applications.\n\n We switched to use Postgresql recently because of the advantages it has\nover other commercial databases. All went well untill recently, untill we\nbegan working on an application that needs to maintain a huge database.\n\n I am describing the problem we are facing below. Can you please take a look\nat the case, and help me in configuring the PostgreSQL.\n\n We have only two tables, one of which contains 97% of the data and the\nother table which contains 2.8% of the data. All other contain only the\nremaining 0.2% of data and are designed to support these two big tables.\nCurrently we have 9 million of records in the first table and 0.2 million of\nrecords in the second table.\n\nWe need to insert into the bigger table almost for every second , through\nout the life time. In addition, we receive at least 200,000 records a day at\na fixed time.\n\nWe are facing a* critical situation because of the performance of the **\ndatabase**.* Even a basic query like select count(*) from bigger_table is\ntaking about 4 minutes to return.\n\nThe following is the system configuration.\n\nDatabase *:* Postgresql 7.3\nOS : Redhat Linux\nProcessor : Athlon,\nMemory : 2 GB\n\nWe are expecting that at least 200 active connections need to be maintained\nthrough out the day.\n\nCan any you please suggest the best configuration to satisfy the above\nrequirements?\n\nThanks in advance.\n\n Thank you,\n\nKishore.", "msg_date": "Sun, 23 Oct 2005 02:45:25 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Need help in setting optimal configuration for a huge database." }, { "msg_contents": "On Sun, Oct 23, 2005 at 02:45:25AM +0530, Kishore B wrote:\n> Database *:* Postgresql 7.3\n\nYou definitely want to upgrade this if you can.\n\n> Memory : 2 GB\n\nFor 2GB of RAM, your effective_cache_size (100000) is a bit low (try doubling\nit), and sort_mem (2048) is probably a bit too low as well.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n", "msg_date": "Sat, 22 Oct 2005 23:42:59 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge database." }, { "msg_contents": "[please send replies to the list, not to me directly]\n\nOn Sun, Oct 23, 2005 at 03:19:39AM +0530, Kishore B wrote:\n> *You definitely want to upgrade this if you can.\n> \n> > Memory : 2 GB\n> *\n> We can move upto 12 GB if need to be.\n\nI was referring to your PostgreSQL version, not your RAM. More RAM is almost\nalways an improvement, but for your data set, 2GB sounds quite good. (700k\nrows is not really a “huge database”, BTW -- I've seen people here have\nseveral billion rows a _day_.)\n\n> For now, let us set the configuraiton parameters for 2GB.\n> I failed to mention earlier, that we have a dedicated server for database.\n> Can I set the effective_cache_size to 200000?\n\nYes, that should work fine.\n\n> Can I set the sort_mem size to 4096?\n\nThis depends a bit on the queries you're running. Remember that for each and\nevery sort you do, one of these (measured in 8kB buffers) will get allocated.\nSome tuning of your queries against this would probably be useful.\n\n> Will the performance suffer, if I set these parameters too high?\n\nYes, you can easily run into allocating too much RAM with too high sort_mem,\nwhich could kill your performance. 
Overestimating effective_cache_size is\nAFAIK not half as bad, though -- it is merely a hint to the planner, it does\nnot actually allocate memory.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n", "msg_date": "Sat, 22 Oct 2005 23:57:49 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge database." }, { "msg_contents": "\nOn 10/22/2005 04:15:25 PM, Kishore B wrote:\n\n> Can any you please suggest the best configuration to satisfy the above\n> requirements?\n\nYou've increased shared memory buffers, told the kernel\nto allow more shared memory (lots), and otherwise increased\nthe parameters associated with memory?\n\nIf so you might want to post the relevant configs\nhere.\n\nIf the basic tuning does not help enough you may\nwant to upgrade to 8.0 as it has significant\nperformance improvements.\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Sat, 22 Oct 2005 22:03:03 +0000", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge" }, { "msg_contents": "Kishore B <[email protected]> writes:\n> Even a basic query like select count(*) from bigger_table is\n> taking about 4 minutes to return.\n\nYou do realize that \"select count(*)\" requires a full table scan in\nPostgres? It's never going to be fast.\n\nIf that's not where your performance problem really is, you need to\nshow us some of the actual problem queries. If it is, you should\nrethink why your application needs an exact row count.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Oct 2005 18:15:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge database. " }, { "msg_contents": "Hi Gunderson,\n * Can I set the effective_cache_size to 200000?*\n* Yes, that should work fine.\n\n* Do you mean that I can set the effective_cache_size to 1.5 GB out of 2GB\nMemory that I have in the current system?\n Can I set the sort_memory to 3072? We need to generate reports which make\nheavy use of group by and order by clauses.\n Based on the 2GB available memory, how do you want me to Please note\nfurther that we need to execute upto 10 data centric queries at any\ninstance. Based on these specifications, how do you want me to allocate\nmemory to the following configuration parameters?\n shared_buffers, (Current Setting : 48000 (375MB))\nsort_memory, (Current setting 2048 kb (2MB))\neffective_cache_size (Current setting: 100000 (1GB))\n\n\n On 10/23/05, Steinar H. Gunderson <[email protected]> wrote:\n>\n> [please send replies to the list, not to me directly]\n>\n> On Sun, Oct 23, 2005 at 03:19:39AM +0530, Kishore B wrote:\n> > *You definitely want to upgrade this if you can.\n> >\n> > > Memory : 2 GB\n> > *\n> > We can move upto 12 GB if need to be.\n>\n> I was referring to your PostgreSQL version, not your RAM. More RAM is\n> almost\n> always an improvement, but for your data set, 2GB sounds quite good. 
(700k\n> rows is not really a \"huge database\", BTW -- I've seen people here have\n> several billion rows a _day_.)\n>\n> > For now, let us set the configuraiton parameters for 2GB.\n> > I failed to mention earlier, that we have a dedicated server for\n> database.\n> > Can I set the effective_cache_size to 200000?\n>\n> Yes, that should work fine.\n>\n> > Can I set the sort_mem size to 4096?\n>\n> This depends a bit on the queries you're running. Remember that for each\n> and\n> every sort you do, one of these (measured in 8kB buffers) will get\n> allocated.\n> Some tuning of your queries against this would probably be useful.\n>\n> > Will the performance suffer, if I set these parameters too high?\n>\n> Yes, you can easily run into allocating too much RAM with too high\n> sort_mem,\n> which could kill your performance. Overestimating effective_cache_size is\n> AFAIK not half as bad, though -- it is merely a hint to the planner, it\n> does\n> not actually allocate memory.\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n>\n\nHi Gunderson, \n \n Can I set the effective_cache_size to 200000?\n Yes, that should work fine. \nDo you mean that I can set the effective_cache_size to 1.5 GB out of 2GB Memory that I have in the current system?\n \nCan I set the sort_memory to 3072? We need to generate reports which make heavy use of group by and order by clauses.\n \nBased on the 2GB available memory, how do you want me to Please note further that we need to execute upto 10 data centric queries at any instance. Based on these specifications, how do you  want me to allocate memory to the following configuration parameters?\n\n \nshared_buffers, (Current Setting : 48000 (375MB))\nsort_memory,    (Current setting 2048 kb (2MB))\neffective_cache_size (Current setting: 100000 (1GB))\n \n \nOn 10/23/05, Steinar H. Gunderson <[email protected]> wrote:\n[please send replies to the list, not to me directly]On Sun, Oct 23, 2005 at 03:19:39AM +0530, Kishore B wrote:\n>  *You definitely want to upgrade this if you can.>> > Memory : 2 GB> *> We can move upto 12 GB if need to be.I was referring to your PostgreSQL version, not your RAM. More RAM is almost\nalways an improvement, but for your data set, 2GB sounds quite good. (700krows is not really a \"huge database\", BTW -- I've seen people here haveseveral billion rows a _day_.)>  For now, let us set the configuraiton parameters for 2GB.\n> I failed to mention earlier, that we have a dedicated server for database.>  Can I set the effective_cache_size to 200000?Yes, that should work fine.> Can I set the sort_mem size to 4096?\nThis depends a bit on the queries you're running. Remember that for each andevery sort you do, one of these (measured in 8kB buffers) will get allocated.Some tuning of your queries against this would probably be useful.\n>   Will the performance suffer, if I set these parameters too high?Yes, you can easily run into allocating too much RAM with too high sort_mem,which could kill your performance. Overestimating effective_cache_size is\nAFAIK not half as bad, though -- it is merely a hint to the planner, it doesnot actually allocate memory./* Steinar */--Homepage: http://www.sesse.net/", "msg_date": "Sun, 23 Oct 2005 07:35:50 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help in setting optimal configuration for a huge database." 
}, { "msg_contents": "On Sun, 23 Oct 2005, Kishore B wrote:\n\n> We need to insert into the bigger table almost for every second , through\n> out the life time. In addition, we receive at least 200,000 records a day at\n> a fixed time.\n> \n> We are facing a* critical situation because of the performance of the **\n> database**.* Even a basic query like select count(*) from bigger_table is\n> taking about 4 minutes to return.\n\nCount(*) like that always scans the full table, but 4 minutes still sound\nlike a lot. How often do you vacuum? Could it be that the disk is full of\ngarbage due to not enough vacuum?\n\nA query like this can help find bloat:\n\n SELECT oid::regclass, reltuples, relpages FROM pg_class ORDER BY 3 DESC;\n\nI assume to do updates and deletes as well, and not just inserts?\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Sun, 23 Oct 2005 12:04:07 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge" }, { "msg_contents": "\n> We are facing a* critical situation because of the performance of the \n> **database** .* Even a basic query like select count(*) from \n> bigger_table is taking about 4 minutes to return.\n\nSeveral other replies have mentioned that COUNT() requires a full table scan, but this point can't be emphasized enough: Don't do it! People who are migrating from other environments (Oracle or MySQL) are used to COUNT(), MIN() and MAX() returning almost instantaneously, certainly on indexed columns. But for reasons that have something to do with transactions, these operations are unbelievably slow in PostgreSQL. \n\nHere are the alternatives that I've learned.\n\nCOUNT() -- There is no good substitute. What I do is create a new column, \"ROW_NUM\" with an auto-incrementing sequence. Every time I insert a row, it gets a new value. Unfortunately, this doesn't work if you ever delete a row. The alternative is a more complex pair of triggers, one for insert and one for delete, that maintains the count in a separate one-row table. It's a nuisance, but it's a lot faster than doing a full table scan for every COUNT().\n\nMIN() and MAX() -- These are surprisingly slow, because they seem to do a full table scan EVEN ON AN INDEXED COLUMN! I don't understand why, but happily there is an effective substitute:\n\n select mycolumn from mytable order by mycolumn limit 1; -- same as MIN()\n\n select mycolumn from mytable order by mycolumn desc limit 1; -- same as MAX()\n\nFor a large table, MIN or MAX can take 5-10 minutes, where the above \"select...\" replacements can return in one millisecond.\n\nYou should carefully examine your entire application for COUNT, MIN, and MAX, and get rid of them EVERYWHERE. This may be the entire source of your problem. It was in my case. \n\nThis is, in my humble opinion, the only serious flaw in PostgreSQL. I've been totally happy with it in every other way, and once I understood these shortcomings, my application is runs faster than ever on PostgreSQL.\n\nCraig\n", "msg_date": "Sun, 23 Oct 2005 09:31:44 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge" }, { "msg_contents": "On Sun, Oct 23, 2005 at 09:31:44AM -0700, Craig A. James wrote:\n> COUNT() -- There is no good substitute. What I do is create a new column, \n> \"ROW_NUM\" with an auto-incrementing sequence. Every time I insert a row, \n> it gets a new value. 
Unfortunately, this doesn't work if you ever delete a \n> row. The alternative is a more complex pair of triggers, one for insert \n> and one for delete, that maintains the count in a separate one-row table. \n> It's a nuisance, but it's a lot faster than doing a full table scan for \n> every COUNT().\n\nThis will sometimes give you wrong results if your transactions ever roll\nback, for instance. The correct way to do it is to maintain a table of\ndeltas, and insert a new positive record every time you insert rows, and a\nnegative one every time you delete them (using a trigger, of course). Then\nyou can query it for SUM(). (To keep the table small, run a SUM() in a cron\njob or such to combine the deltas.)\n\nThere has, IIRC, been talks of supporting fast (index-only) scans on\nread-only (ie. archived) partitions of tables, but it doesn't look like this\nis coming in the immediate future. I guess others know more than me here :-)\n\n> MIN() and MAX() -- These are surprisingly slow, because they seem to do a \n> full table scan EVEN ON AN INDEXED COLUMN! I don't understand why, but \n> happily there is an effective substitute:\n\nThey are slow because PostgreSQL has generalized aggregates, ie. MAX() gets\nfed exactly the same data as SUM() would. PostgreSQL 8.1 (soon-to-be\nreleased) can rewrite a MAX() or MIN() to an appropriate LIMIT form, though,\nwhich solves the problem.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n", "msg_date": "Sun, 23 Oct 2005 18:55:00 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> MIN() and MAX() -- These are surprisingly slow, because they seem to do a full table scan EVEN ON AN INDEXED COLUMN! I don't understand why, but happily there is an effective substitute:\n\n> select mycolumn from mytable order by mycolumn limit 1; -- same as MIN()\n\n> select mycolumn from mytable order by mycolumn desc limit 1; -- same as MAX()\n\nBTW, Postgres does know to do that for itself as of 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Oct 2005 13:06:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge " }, { "msg_contents": "Dnia 23-10-2005, nie o godzinie 09:31 -0700, Craig A. James napisaďż˝(a):\n<cut>\n> MIN() and MAX() -- These are surprisingly slow, because they seem to do a full table scan EVEN ON AN INDEXED COLUMN!\nIn 8.1 this is no true, see the changelog.\n\n> I don't understand why, but happily there is an effective substitute:\n> \n> select mycolumn from mytable order by mycolumn limit 1; -- same as MIN()\n> \n> select mycolumn from mytable order by mycolumn desc limit 1; -- same as MAX()\n\nIn 8.1 these queries are equivalent:\n\nselect mycolumn from mytable order by mycolumn limit 1;\nselect min(mycolumn) from mytable;\n\n-- \nTomasz Rybak <[email protected]>\n\n", "msg_date": "Sun, 23 Oct 2005 19:17:56 +0200", "msg_from": "Tomasz Rybak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in setting optimal configuration for a huge" }, { "msg_contents": "Hi Craig,\n Thank you very much for your response.\n It really covered a great point.\n Thank you,\nKishore.\n\n On 10/23/05, Craig A. 
James <[email protected]> wrote:\n>\n>\n> > We are facing a* critical situation because of the performance of the\n> > **database** .* Even a basic query like select count(*) from\n> > bigger_table is taking about 4 minutes to return.\n>\n> Several other replies have mentioned that COUNT() requires a full table\n> scan, but this point can't be emphasized enough: Don't do it! People who are\n> migrating from other environments (Oracle or MySQL) are used to COUNT(),\n> MIN() and MAX() returning almost instantaneously, certainly on indexed\n> columns. But for reasons that have something to do with transactions, these\n> operations are unbelievably slow in PostgreSQL.\n>\n> Here are the alternatives that I've learned.\n>\n> COUNT() -- There is no good substitute. What I do is create a new column,\n> \"ROW_NUM\" with an auto-incrementing sequence. Every time I insert a row, it\n> gets a new value. Unfortunately, this doesn't work if you ever delete a row.\n> The alternative is a more complex pair of triggers, one for insert and one\n> for delete, that maintains the count in a separate one-row table. It's a\n> nuisance, but it's a lot faster than doing a full table scan for every\n> COUNT().\n>\n> MIN() and MAX() -- These are surprisingly slow, because they seem to do a\n> full table scan EVEN ON AN INDEXED COLUMN! I don't understand why, but\n> happily there is an effective substitute:\n>\n> select mycolumn from mytable order by mycolumn limit 1; -- same as MIN()\n>\n> select mycolumn from mytable order by mycolumn desc limit 1; -- same as\n> MAX()\n>\n> For a large table, MIN or MAX can take 5-10 minutes, where the above\n> \"select...\" replacements can return in one millisecond.\n>\n> You should carefully examine your entire application for COUNT, MIN, and\n> MAX, and get rid of them EVERYWHERE. This may be the entire source of your\n> problem. It was in my case.\n>\n> This is, in my humble opinion, the only serious flaw in PostgreSQL. I've\n> been totally happy with it in every other way, and once I understood these\n> shortcomings, my application is runs faster than ever on PostgreSQL.\n>\n> Craig\n>\n\nHi Craig,\n \nThank you very much for your response. \n \nIt really covered a great point. \n \nThank you,\nKishore. \nOn 10/23/05, Craig A. James <[email protected]> wrote:\n> We are facing a* critical situation because of the performance of the> **database** .* Even a basic query like select count(*) from\n> bigger_table is taking about 4 minutes to return.Several other replies have mentioned that COUNT() requires a full table scan, but this point can't be emphasized enough: Don't do it!  People who are migrating from other environments (Oracle or MySQL) are used to COUNT(), MIN() and MAX() returning almost instantaneously, certainly on indexed columns.  But for reasons that have something to do with transactions, these operations are unbelievably slow in PostgreSQL.\nHere are the alternatives that I've learned.COUNT() -- There is no good substitute.  What I do is create a new column, \"ROW_NUM\" with an auto-incrementing sequence.  Every time I insert a row, it gets a new value.  Unfortunately, this doesn't work if you ever delete a row.  The alternative is a more complex pair of triggers, one for insert and one for delete, that maintains the count in a separate one-row table.  It's a nuisance, but it's a lot faster than doing a full table scan for every COUNT().\nMIN() and MAX() -- These are surprisingly slow, because they seem to do a full table scan EVEN ON AN INDEXED COLUMN!  
I don't understand why, but happily there is an effective substitute:  select mycolumn from mytable order by mycolumn limit 1;  -- same as MIN()\n  select mycolumn from mytable order by mycolumn desc limit 1;  -- same as MAX()For a large table, MIN or MAX can take 5-10 minutes, where the above \"select...\" replacements can return in one millisecond.\nYou should carefully examine your entire application for COUNT, MIN, and MAX, and get rid of them EVERYWHERE.  This may be the entire source of your problem.  It was in my case.This is, in my humble opinion, the only serious flaw in PostgreSQL.  I've been totally happy with it in every other way, and once I understood these shortcomings, my application is runs faster than ever on PostgreSQL.\nCraig", "msg_date": "Mon, 24 Oct 2005 03:05:38 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help in setting optimal configuration for a huge database." }, { "msg_contents": "Hi Craig,\n Does the no of tables and the size of each table affect the performance of\na join operation?\n When we are trying to join the two big tables that I described above, pg is\ntaking so long to execute?\n Thank you,\nKishore.\n\n On 10/23/05, Craig A. James <[email protected]> wrote:\n>\n>\n> > We are facing a* critical situation because of the performance of the\n> > **database** .* Even a basic query like select count(*) from\n> > bigger_table is taking about 4 minutes to return.\n>\n> Several other replies have mentioned that COUNT() requires a full table\n> scan, but this point can't be emphasized enough: Don't do it! People who are\n> migrating from other environments (Oracle or MySQL) are used to COUNT(),\n> MIN() and MAX() returning almost instantaneously, certainly on indexed\n> columns. But for reasons that have something to do with transactions, these\n> operations are unbelievably slow in PostgreSQL.\n>\n> Here are the alternatives that I've learned.\n>\n> COUNT() -- There is no good substitute. What I do is create a new column,\n> \"ROW_NUM\" with an auto-incrementing sequence. Every time I insert a row, it\n> gets a new value. Unfortunately, this doesn't work if you ever delete a row.\n> The alternative is a more complex pair of triggers, one for insert and one\n> for delete, that maintains the count in a separate one-row table. It's a\n> nuisance, but it's a lot faster than doing a full table scan for every\n> COUNT().\n>\n> MIN() and MAX() -- These are surprisingly slow, because they seem to do a\n> full table scan EVEN ON AN INDEXED COLUMN! I don't understand why, but\n> happily there is an effective substitute:\n>\n> select mycolumn from mytable order by mycolumn limit 1; -- same as MIN()\n>\n> select mycolumn from mytable order by mycolumn desc limit 1; -- same as\n> MAX()\n>\n> For a large table, MIN or MAX can take 5-10 minutes, where the above\n> \"select...\" replacements can return in one millisecond.\n>\n> You should carefully examine your entire application for COUNT, MIN, and\n> MAX, and get rid of them EVERYWHERE. This may be the entire source of your\n> problem. It was in my case.\n>\n> This is, in my humble opinion, the only serious flaw in PostgreSQL. I've\n> been totally happy with it in every other way, and once I understood these\n> shortcomings, my application is runs faster than ever on PostgreSQL.\n>\n> Craig\n>\n\nHi Craig, \n \nDoes the no of tables and the size of each table affect the performance of a join operation? 
\n \nWhen we are trying to join the two big tables that I described above, pg is taking so long to execute?\n \nThank you,\nKishore. \nOn 10/23/05, Craig A. James <[email protected]> wrote:\n> We are facing a* critical situation because of the performance of the> **database** .* Even a basic query like select count(*) from\n> bigger_table is taking about 4 minutes to return.Several other replies have mentioned that COUNT() requires a full table scan, but this point can't be emphasized enough: Don't do it!  People who are migrating from other environments (Oracle or MySQL) are used to COUNT(), MIN() and MAX() returning almost instantaneously, certainly on indexed columns.  But for reasons that have something to do with transactions, these operations are unbelievably slow in PostgreSQL.\nHere are the alternatives that I've learned.COUNT() -- There is no good substitute.  What I do is create a new column, \"ROW_NUM\" with an auto-incrementing sequence.  Every time I insert a row, it gets a new value.  Unfortunately, this doesn't work if you ever delete a row.  The alternative is a more complex pair of triggers, one for insert and one for delete, that maintains the count in a separate one-row table.  It's a nuisance, but it's a lot faster than doing a full table scan for every COUNT().\nMIN() and MAX() -- These are surprisingly slow, because they seem to do a full table scan EVEN ON AN INDEXED COLUMN!  I don't understand why, but happily there is an effective substitute:  select mycolumn from mytable order by mycolumn limit 1;  -- same as MIN()\n  select mycolumn from mytable order by mycolumn desc limit 1;  -- same as MAX()For a large table, MIN or MAX can take 5-10 minutes, where the above \"select...\" replacements can return in one millisecond.\nYou should carefully examine your entire application for COUNT, MIN, and MAX, and get rid of them EVERYWHERE.  This may be the entire source of your problem.  It was in my case.This is, in my humble opinion, the only serious flaw in PostgreSQL.  I've been totally happy with it in every other way, and once I understood these shortcomings, my application is runs faster than ever on PostgreSQL.\nCraig", "msg_date": "Mon, 24 Oct 2005 03:25:50 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Need help in setting optimal configuration for a huge\n\tdatabase." }, { "msg_contents": "(Please don't top reply... your response has been moved to the bottom)\n\nOn Sun, 2005-10-23 at 16:55, Kishore B wrote:\n> \n> On 10/23/05, Craig A. James <[email protected]> wrote: \n> > We are facing a* critical situation because of the\n> performance of the\n> > **database** .* Even a basic query like select count(*) from\n> > bigger_table is taking about 4 minutes to return.\n> \n> Several other replies have mentioned that COUNT() requires a\n> full table scan,\n\nThis isn't wholly correct. A query like this:\n\nselect count(*) from locatorcodes where locatorcode like 'ZZZ%';\n\ncan use an index. However, since tuple visibility info is NOT stored in\nindexes, ALL these tuples must be looked up in the actual table.\n\n> but this point can't be emphasized enough: Don't do it! \n> People who are migrating from other environments (Oracle or\n> MySQL) are used to COUNT(), MIN() and MAX() returning almost\n> instantaneously, certainly on indexed columns.\n\nWhile I'll admit that min and max are a bit faster in Oracle than in\npostgresql, count doesn't seem much faster in my testing. 
Of course, on\na wider table Oracle probably is faster, but I'm used to normalizing out\nmy tables so that there's no advantage for Oracle there.\n\n> But for reasons that have something to do with transactions,\n> these operations are unbelievably slow in PostgreSQL. \n\nIt's because of visibility in the MVCC system PostgreSQL uses.\n\n> MIN() and MAX() -- These are surprisingly slow, because they\n> seem to do a full table scan EVEN ON AN INDEXED COLUMN! I\n> don't understand why, but happily there is an effective\n> substitute:\n\nIt's because aggregate in PostgreSQL are abstract things. To make these\ntwo faster would require short circuiting the query planner to use\nsomething other than the abstracted methods PostgreSQL was built around.\n\nOn the other hand, select with limit and order by can use the indexes\nbecause they are not aggregates.\n\n> You should carefully examine your entire application for\n> COUNT, MIN, and MAX, and get rid of them EVERYWHERE. This may\n> be the entire source of your problem. It was in my case.\n\nYou're right on here. The problem is that people often use aggregates\nwhere they shouldn't. Aggregates really are meant to operate across a\nwhole set of data. An aggregate like sum() or avg() seems obviously\ndesigned to hit every tuple. Well, while min, max, and count may not\nlook like they should, they, in fact, do hit every table covered by the\nwhere clause.\n\n> This is, in my humble opinion, the only serious flaw in\n> PostgreSQL. I've been totally happy with it in every other\n> way, and once I understood these shortcomings, my application\n> is runs faster than ever on PostgreSQL. \n\nI wouldn't fully qualify it as a flaw. It's a design quirk, caused by\nthe extensible model PostgreSQL is built under. While it costs you in\none way, like slower min / max / count in some circumstances, it\nbenefits you others, like the ability make your own aggregate functions.\n\n> Hi Craig, \n> \n> Does the no of tables and the size of each table affect the\n> performance of a join operation? \n\nOf course they do. The more information your query has to process, the\nslower it will run. It's usually a pretty much a linear increase in\ntime required, unless you go from everything fitting into buffers to\nspilling to disk. Then things will slow down noticeably.\n\n> \n> When we are trying to join the two big tables that I described above,\n> pg is taking so long to execute?\n\nHard to say. There are many ways to tune PostgreSQL. I strongly\nsuggest you take this thread to the performance list, and post your\npostgresql.conf file, and the output of \"explain analyze <your query\nhere>\" and ask for help. That list is much better equipped to help with\nthese things.\n", "msg_date": "Mon, 24 Oct 2005 10:28:02 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Need help in setting optimal configuration" } ]
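Steinar's delta-table suggestion above can be sketched concretely. The table, function and trigger names here are invented for illustration, and the dollar quoting assumes 8.0 or later; treat it as an outline of the technique rather than drop-in code:

-- Every insert/delete on bigger_table records a +1/-1 delta in the same
-- transaction, so a rollback discards the delta along with the data change.
CREATE TABLE row_count_deltas (delta bigint NOT NULL);

CREATE OR REPLACE FUNCTION track_row_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO row_count_deltas VALUES (1);
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO row_count_deltas VALUES (-1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bigger_table_count
    AFTER INSERT OR DELETE ON bigger_table
    FOR EACH ROW EXECUTE PROCEDURE track_row_count();

-- Replaces the slow count(*):
SELECT sum(delta) AS row_count FROM row_count_deltas;

A periodic job should collapse the accumulated deltas into a single row so the SUM() itself stays cheap.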
[ { "msg_contents": "Hey all.\n\nPlease point me to a place I should be looking if this is a common\nquestion that has been debated periodically and at great length\nalready. :-)\n\nI have a complex query. It's a few Kbytes large, and yes, I've already\nworked on reducing it to be efficient in terms of database design, and\nminimizing the expressions used to join the tables. Running some timing\ntests, I've finding that the query itself, when issued in full, takes\naround 60 milliseconds to complete on modest hardware. If prepared, and\nthen executed, however, it appears to take around 60 milliseconds to\nprepare, and 20 milliseconds to execute. I'm not surprised. PostgreSQL\nis very likely calculating the costs of many, many query plans.\n\nThis is telling me that the quickest method of me to accelerate these\nqueries, is to have them pre-select a query plan, and to use it.\nUnfortunately, I'll only be executing this query once per session,\nso \"PREPARE\" seems to be out of the question.\n\nI am using PHP's PDO PGSQL interface - I haven't read up enough on it\nto determine whether a persistent connection can re-use server-side\nprepared queries as an option. Anybody know?\n\nMy read of the PLPGSQL documentation seems to suggest that it will do\nsome sort of query plan caching. Is there better documentation on this\nthat would explain exactly how it works? What is the best way to define\na PLPGSQL function that will return a set of records? Is RETURNS SETOF\nthe only option in this regard? It seems inefficient to me. Am I doing\nit wrong? Not understanding it? For very simple queries, it seems that\nusing PLPGSQL and SELECT INTO, RETURN, and then SELECT * FROM F(arg)\"\nactually slows down the query slightly. It wasn't giving me much faith,\nand I wanted to pick up some people's opinions befor egoing further.\n\nWhat is the reason that SQL and/or PostgreSQL have not added\nserver-defined prepared statements? As in, one defines a\nserver-defined prepared statement, and all sessions that have\npermission can execute the prepared statement. Is this just an issue\nof nobody implementing it? Or was there some deeper explanation as\nto why this would be a bad thing?\n\nMy reading of views, are that views would not accelerate the queries.\nPerhaps the bytes sent to the server would reduce, however, the cost\nto prepare, and execute the statement would be similar, or possibly\neven longer?\n\nI'm thinking I need some way of defined a server side query, that\ntakes arguments, that will infrequently prepare the query, such that\nthe majority of the time that it is executed, it will not have to\nchoose a query plan.\n\nAm I missing something obvious? :-)\n\nThanks,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Sun, 23 Oct 2005 00:14:23 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "prepared transactions that persist across sessions?" 
}, { "msg_contents": "On Sun, Oct 23, 2005 at 00:14:23 -0400,\n [email protected] wrote:\n> Hey all.\n> \n> Please point me to a place I should be looking if this is a common\n> question that has been debated periodically and at great length\n> already. :-)\n\nYou probably want to read:\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/runtime-config-query.html\n\nConnection pooling might be another approach, since it should be possible\nto reuse prepared statements when reusing a connection.\n\n> I have a complex query. It's a few Kbytes large, and yes, I've already\n> worked on reducing it to be efficient in terms of database design, and\n> minimizing the expressions used to join the tables. Running some timing\n> tests, I've finding that the query itself, when issued in full, takes\n> around 60 milliseconds to complete on modest hardware. If prepared, and\n> then executed, however, it appears to take around 60 milliseconds to\n> prepare, and 20 milliseconds to execute. I'm not surprised. PostgreSQL\n> is very likely calculating the costs of many, many query plans.\n", "msg_date": "Sun, 23 Oct 2005 01:51:36 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared transactions that persist across sessions?" }, { "msg_contents": "> I am using PHP's PDO PGSQL interface - I haven't read up enough on it\n> to determine whether a persistent connection can re-use server-side\n> prepared queries as an option. Anybody know?\n\nIt re-uses server-side prepared queries by default, if you are using the \n PDOPrepare/PDOExecute stuff.\n\nChris\n\n", "msg_date": "Mon, 24 Oct 2005 13:03:03 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared transactions that persist across sessions?" } ]
[ { "msg_contents": "Hi all,\n\nI'm currently testing PostgreSQL 8.1 beta3 and I have a problem with a \nEXPLAIN ANALYZE output. You can find it attached.\n\nI don't understand why I have the Nested Loop at line 19 with an actual \ntime of 254.292..257.328 because I can't find anywhere the line taking \nthis 254 ms. Is it a problem with 8.1b3 or can anyone explain me where I \ncan find the part of the query taking this time? I'm not sure to \nunderstand the new bitmap scan stuff.\n\nThanks for your help\n\nRegards,\n\n--\nGuillaume", "msg_date": "Mon, 24 Oct 2005 01:53:59 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Problem analyzing explain analyze output" }, { "msg_contents": "On Mon, Oct 24, 2005 at 01:53:59AM +0200, Guillaume Smet wrote:\n> I don't understand why I have the Nested Loop at line 19 with an actual \n> time of 254.292..257.328 because I can't find anywhere the line taking \n> this 254 ms.\n\nYou don't have a nested loop with that time; however, you have\n\n> -> Nested Loop (cost=887.45..4031.09 rows=587 width=20) (actual time=254.424..280.794 rows=514 loops=1)\n> -> Bitmap Heap Scan on contcrilieu ccl (cost=887.45..1668.96 rows=587 width=8) (actual time=254.292..257.328 rows=514 loops=1)\n> Recheck Cond: ((dcrilieu >= (now() - '60 days'::interval)) AND ((flagcriaccepteelieu)::text = 'O'::text))\n> -> Bitmap Index Scan on idx_contcrilieu_4 (cost=0.00..887.45 rows=587 width=0) (actual time=254.143..254.143 rows=514 loops=1)\n> Index Cond: ((dcrilieu >= (now() - '60 days'::interval)) AND ((flagcriaccepteelieu)::text = 'O'::text))\n> -> Index Scan using pk_lieu on lieu l (cost=0.00..4.01 rows=1 width=12) (actual time=0.034..0.036 rows=1 loops=514)\n> Index Cond: (\"outer\".numlieu = l.numlieu)\n\n\nwhich seems to make sense; you have one run of about 257ms, plus 514 runs\ntaking about 0.035ms each (ie. about 18ms), which should add up to become\nabout 275ms (which is close enough to the reality of 281ms).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n", "msg_date": "Mon, 24 Oct 2005 02:08:24 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem analyzing explain analyze output" }, { "msg_contents": "Steinar,\n\n> which seems to make sense; you have one run of about 257ms, plus 514 runs\n> taking about 0.035ms each (ie. about 18ms), which should add up to become\n> about 275ms (which is close enough to the reality of 281ms).\n\nYep. The line that disturbed me was the bitmap index scan with a cost of \n\"actual time=254.143..254.143\". I was more looking for something like \n\"actual time=0..254.143\" which is what I usually have for an index scan. \nSo I suppose that the bitmap index scan returns rows only when it's \ntotally computed.\n\nThanks for your help.\n\nRegards.\n\n--\nGuillaume\n", "msg_date": "Mon, 24 Oct 2005 08:33:03 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem analyzing explain analyze output" } ]
[ { "msg_contents": " The preliminaries:\n\n - PostgreSQL 8.1 beta 3, Debian experimental\n - database has been VACUUMed FULL ANALYZE.\n - a pg_dump -Fc exists at http://199.77.129.48/inet_test.db\n - ia32 hardware with 2 GB physical memory and the following settings:\n\nshared_buffers = 40960\ntemp_buffers = 16384\nwork_mem = 131072\nmaintenance_work_mem = 262144\neffective_cache_size = 65536\n\n\n I've populated a table evenly with about 2 million rows of RFC 1918\n addresses:\n\n\nTable \"public.inet_addresses\"\n Column | Type | Modifiers\n--------+------+-----------\n addr | inet | not null\nIndexes:\n \"inet_addresses_pkey\" PRIMARY KEY, btree (addr)\n\n\n The following query is very fast:\n\n\nEXPLAIN ANALYZE\nSELECT *\n FROM inet_addresses\n WHERE addr << inet('10.2.0.0/24') \n OR addr << inet('10.4.0.0/24')\n OR addr << inet('10.8.0.0/24');\n\n Bitmap Heap Scan on inet_addresses (cost=6.51..324.48 rows=1792335 width=11) (actual time=0.350..1.104 rows=381 loops=1)\n Recheck Cond: ((addr << '10.2.0.0/24'::inet) OR (addr << '10.4.0.0/24'::inet) OR (addr << '10.8.0.0/24'::inet)) \n Filter: ((addr << '10.2.0.0/24'::inet) OR (addr << '10.4.0.0/24'::inet) OR (addr << '10.8.0.0/24'::inet))\n -> BitmapOr (cost=6.51..6.51 rows=85 width=0) (actual time=0.336..0.336 rows=0 loops=1)\n -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 rows=28 width=0) (actual time=0.127..0.127 rows=127 loops=1)\n Index Cond: ((addr > '10.2.0.0/24'::inet) AND (addr <= '10.2.0.255'::inet))\n -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 rows=28 width=0) (actual time=0.109..0.109 rows=127 loops=1)\n Index Cond: ((addr > '10.4.0.0/24'::inet) AND (addr <= '10.4.0.255'::inet))\n -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 rows=28 width=0) (actual time=0.096..0.096 rows=127 loops=1)\n Index Cond: ((addr > '10.8.0.0/24'::inet) AND (addr <= '10.8.0.255'::inet))\n Total runtime: 1.613 ms\n\n\n Instead of specifying explicit address ranges in the query, I'd like\n to store the ranges in a table:\n\n\ninet_test_db=# \\d inet_ranges\n Table \"public.inet_ranges\"\n Column | Type | Modifiers\n----------+---------+-----------\n range | inet | not null\n range_id | integer | \nIndexes:\n \"inet_ranges_pkey\" PRIMARY KEY, btree (range)\n \"inet_ranges_range_id_idx\" btree (range_id)\n\ninet_test_db=# SELECT * FROM inet_ranges;\n range | range_id\n--------------+----------\n 10.2.0.0/24 | 1\n 10.4.0.0/24 | 1\n 10.8.0.0/24 | 1\n 10.16.0.0/24 | 2\n 10.32.0.0/24 | 2\n 10.64.0.0/24 | 2\n(6 rows)\n\n\n This query is far slower, even though it generates the same result:\n\n\nEXPLAIN ANALYZE \nSELECT * \n FROM inet_addresses as ia, inet_ranges as ir\n WHERE ia.addr << ir.range\n AND ir.range_id=1;\n \n Nested Loop (cost=0.00..171485.93 rows=3072574 width=26) (actual time=1465.803..16922.979 rows=381 loops=1)\n Join Filter: (\"inner\".addr << \"outer\".range)\n -> Seq Scan on inet_ranges ir (cost=0.00..1.07 rows=3 width=15) (actual time=0.008..0.021 rows=3 loops=1)\n Filter: (range_id = 1)\n -> Seq Scan on inet_addresses ia (cost=0.00..31556.83 rows=2048383 width=11) (actual time=0.003..2919.405 rows=2048383 loops=3)\n Total runtime: 16923.457 ms\n\n\n Even when disabling sequential scans, the query planner is unable to\n make use of the inet_addresses_pkey index:\n\n\n Nested Loop (cost=100033605.21..100171874.11 rows=3072574 width=26) (actual time=2796.928..23453.585 rows=381 loops=1)\n Join Filter: (\"inner\".addr << \"outer\".range)\n -> Index Scan using inet_ranges_range_id_idx on 
inet_ranges ir (cost=0.00..3.04 rows=3 width=15) (actual time=0.069..0.095 rows=3 loops=1)\n Index Cond: (range_id = 1)\n -> Materialize (cost=100033605.21..100054089.04 rows=2048383 width=11) (actual time=0.016..5133.349 rows=2048383 loops=3)\n -> Seq Scan on inet_addresses ia (cost=100000000.00..100031556.83 rows=2048383 width=11) (actual time=0.005..2938.012 rows=2048383 loops=1)\n Total runtime: 23521.418 ms\n\n\n Is it possible to attain the speed of the first query and the\n flexibility of the second? Or will I have to resort to generating\n queries of the first form with the range table in the application\n layer?\n\n-- \nRobert Edmonds\[email protected]\n", "msg_date": "Sun, 23 Oct 2005 23:58:09 -0400", "msg_from": "Robert Edmonds <[email protected]>", "msg_from_op": true, "msg_subject": "performance of implicit join vs. explicit conditions on inet queries?" }, { "msg_contents": "\"Robert Edmonds\" <[email protected]> wrote\n>\n> EXPLAIN ANALYZE\n> SELECT *\n> FROM inet_addresses\n> WHERE addr << inet('10.2.0.0/24')\n> OR addr << inet('10.4.0.0/24')\n> OR addr << inet('10.8.0.0/24');\n>\n> Bitmap Heap Scan on inet_addresses (cost=6.51..324.48 rows=1792335 \n> width=11) (actual time=0.350..1.104 rows=381 loops=1)\n> Recheck Cond: ((addr << '10.2.0.0/24'::inet) OR (addr << \n> '10.4.0.0/24'::inet) OR (addr << '10.8.0.0/24'::inet))\n> Filter: ((addr << '10.2.0.0/24'::inet) OR (addr << '10.4.0.0/24'::inet) \n> OR (addr << '10.8.0.0/24'::inet))\n> -> BitmapOr (cost=6.51..6.51 rows=85 width=0) (actual \n> time=0.336..0.336 rows=0 loops=1)\n> -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 \n> rows=28 width=0) (actual time=0.127..0.127 rows=127 loops=1)\n> Index Cond: ((addr > '10.2.0.0/24'::inet) AND (addr <= \n> '10.2.0.255'::inet))\n> -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 \n> rows=28 width=0) (actual time=0.109..0.109 rows=127 loops=1)\n> Index Cond: ((addr > '10.4.0.0/24'::inet) AND (addr <= \n> '10.4.0.255'::inet))\n> -> Bitmap Index Scan on inet_addresses_pkey (cost=0.00..2.17 \n> rows=28 width=0) (actual time=0.096..0.096 rows=127 loops=1)\n> Index Cond: ((addr > '10.8.0.0/24'::inet) AND (addr <= \n> '10.8.0.255'::inet))\n> Total runtime: 1.613 ms\n>\n>\n> Instead of specifying explicit address ranges in the query, I'd like\n> to store the ranges in a table:\n>\n>\n> inet_test_db=# \\d inet_ranges\n> Table \"public.inet_ranges\"\n> Column | Type | Modifiers\n> ----------+---------+-----------\n> range | inet | not null\n> range_id | integer |\n> Indexes:\n> \"inet_ranges_pkey\" PRIMARY KEY, btree (range)\n> \"inet_ranges_range_id_idx\" btree (range_id)\n>\n> inet_test_db=# SELECT * FROM inet_ranges;\n> range | range_id\n> --------------+----------\n> 10.2.0.0/24 | 1\n> 10.4.0.0/24 | 1\n> 10.8.0.0/24 | 1\n> 10.16.0.0/24 | 2\n> 10.32.0.0/24 | 2\n> 10.64.0.0/24 | 2\n> (6 rows)\n>\n>\n>\n> EXPLAIN ANALYZE\n> SELECT *\n> FROM inet_addresses as ia, inet_ranges as ir\n> WHERE ia.addr << ir.range\n> AND ir.range_id=1;\n>\n> Nested Loop (cost=0.00..171485.93 rows=3072574 width=26) (actual \n> time=1465.803..16922.979 rows=381 loops=1)\n> Join Filter: (\"inner\".addr << \"outer\".range)\n> -> Seq Scan on inet_ranges ir (cost=0.00..1.07 rows=3 width=15) \n> (actual time=0.008..0.021 rows=3 loops=1)\n> Filter: (range_id = 1)\n> -> Seq Scan on inet_addresses ia (cost=0.00..31556.83 rows=2048383 \n> width=11) (actual time=0.003..2919.405 rows=2048383 loops=3)\n> Total runtime: 16923.457 ms\n>\n\nGood illustration. 
I guess we have a problem of the historgram statistical \ninformation. That is, the historgrams we used can effectively record the \nlinear space ranges(like ordinary <, >, =), but failed to do it for \nnonlinear ranges like inet data type. So the Nested Loop node make an error \nin estmating number of rows (est: 3072574, real: 381), thus a sequential \nscan is obviously better under this estimation.\n\nI am thinking the historgram problem is not easy to fix, but is there a way \nto change Inet type a little bit to make it linear for your range operators? \n(for example, align the length to 000.000.000.000/00?)\n\nRegards,\nQingqing\n\n\n\n", "msg_date": "Mon, 31 Oct 2005 04:48:41 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of implicit join vs. explicit conditions on inet\n\tqueries?" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> \"Robert Edmonds\" <[email protected]> wrote\n>> Instead of specifying explicit address ranges in the query, I'd like\n>> to store the ranges in a table:\n\n> Good illustration. I guess we have a problem of the historgram statistical \n> information.\n\nNo, that's completely irrelevant to his problem. The reason we can't do\nthis is that the transformation from \"x << const\" to a range check on x\nis a plan-time transformation; there's no mechanism in place to do it\nat runtime. This is not easy to fix, because the mechanism that's doing\nit is primarily intended for LIKE/regex index optimization, and in that\ncase a runtime pattern might well not be optimizable at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Oct 2005 09:24:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of implicit join vs. explicit conditions on inet\n\tqueries?" }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> wrote\n>\n> No, that's completely irrelevant to his problem. The reason we can't do\n> this is that the transformation from \"x << const\" to a range check on x\n> is a plan-time transformation; there's no mechanism in place to do it\n> at runtime. This is not easy to fix, because the mechanism that's doing\n> it is primarily intended for LIKE/regex index optimization, and in that\n> case a runtime pattern might well not be optimizable at all.\n>\n\nNot quite understand, sorry ...\n\n(1) For this query (in an as-is PG syntax, which find out all rectangles lie \nin a given rectangle) :\n\nSELECT r FROM all_rectangles\n WHERE r << rectangle('(1,9),(9,1)');\n\nIf there is a GiST/Rtree index associated with all_rectangles.r, how do \noptimizer estimate the cost to decide that we should use this index or \nnot(then by a seqscan)?\n\n(2) Does your above explaination mean that we can't use GiST for a spatial \njoin operation?\n\nRegards,\nQingqing \n\n\n", "msg_date": "Tue, 1 Nov 2005 20:19:37 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of implicit join vs. explicit conditions on inet\n\tqueries?" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> \"Tom Lane\" <[email protected]> wrote\n>> No, that's completely irrelevant to his problem. 
The reason we can't do\n>> this is that the transformation from \"x << const\" to a range check on x\n>> is a plan-time transformation; there's no mechanism in place to do it\n>> at runtime.\n\n> Not quite understand, sorry ...\n\n> (1) For this query (in an as-is PG syntax, which find out all rectangles lie \n> in a given rectangle) :\n\n> SELECT r FROM all_rectangles\n> WHERE r << rectangle('(1,9),(9,1)');\n\nNo, you're thinking of the wrong << operator. The question was about\nthe inet network inclusion operator. We have a special case in\nindxpath.c to transform \"inetcol << inetconstant\" into a range check\non the inet variable, much like we can transform a left-anchored LIKE\npattern into a range check on the text variable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Nov 2005 23:20:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of implicit join vs. explicit conditions on inet\n\tqueries?" } ]
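Until the planner can derive that range form at run time, one possible workaround is to spell the btree-friendly bounds out by hand while keeping << as the exact filter. broadcast() and set_masklen() are standard inet functions, but whether the primary-key index actually gets used this way should be confirmed with EXPLAIN ANALYZE; this is a sketch, not a guaranteed win:

-- The redundant range conditions give the planner something it can match
-- against the btree index on inet_addresses.addr; << keeps the semantics exact.
SELECT ia.*
  FROM inet_addresses AS ia
  JOIN inet_ranges    AS ir ON ia.addr << ir.range
 WHERE ir.range_id = 1
   AND ia.addr >= ir.range
   AND ia.addr <= set_masklen(broadcast(ir.range), 32);

If the planner still refuses a nested-loop index scan, generating the explicit OR form of the first, fast query in the application layer remains the reliable fallback.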
[ { "msg_contents": "In addition to what Mark pointed out, there is the possibility that a\nquery\nis running which is scanning a large table or otherwise bringing in a\nlarge number of pages from disk. That would first use up all available\nunused cache space, and then may start replacing some of your\nfrequently used data. This can cause slowness for some time after the\nprocess which flushed the cache, as pages are reread and recached.\n\nKeep in mind that the cache could be flushed by some external process,\nsuch as copying disk files.\n\nThe use of free memory for caching is not slowing you down; but if it\ncoincides with slowness, it could be a useful clue.\n\n-Kevin\n\n", "msg_date": "Mon, 24 Oct 2005 10:50:57 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Kevin Grittner wrote:\n> In addition to what Mark pointed out, there is the possibility that a\n> query\n> is running which is scanning a large table or otherwise bringing in a\n> large number of pages from disk. That would first use up all available\n> unused cache space, and then may start replacing some of your\n> frequently used data. \n\nAn LRU cache is often a bad strategy for database applications. There are two illustrations that show why.\n\n1. You have an index that's used for EVERY search, but each search returns a large and unique set of rows. If it happens that the rows returned exceed the systems cache size, the part or all of your index will be flushed with EVERY query.\n\n2. You do a sequential scan of a table that's one block bigger than the file system cache, then you do it again. At the beginning of the second scan, the first block of the table will have just been swapped out because it was the oldest, so the file system brings it back in, replacing the second block, which is now the oldest. As you scan the table, each block of the table is swapped out JUST BEFORE you get to it. At the start of your query, the file system might have had 99.9% of the relevant data in memory, but it swaps out 100% of it as your query progresses.\n\nScenario 2 above is interesting because a system that is performing very well can suddenly experience a catastrophic performance decline when the size of the data exceeds a critical limit - the file system's avaliable cache.\n\nLRU works well if your frequently-used data is used often enough to keep it in memory. But many applications don't have that luxury. It's often the case that a single query will exceed the file system's cache size. The file system cache is \"dumb\" -- it's strategy is too simple for a relational database.\n\nWhat's needed is a way for the application developer to explicitely say, \"This object is frequenly used, and I want it kept in memory.\"\n\nCraig\n", "msg_date": "Mon, 24 Oct 2005 10:00:04 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "On Mon, 2005-10-24 at 12:00, Craig A. James wrote:\n> Kevin Grittner wrote:\n> > In addition to what Mark pointed out, there is the possibility that a\n> > query\n> > is running which is scanning a large table or otherwise bringing in a\n> > large number of pages from disk. That would first use up all available\n> > unused cache space, and then may start replacing some of your\n> > frequently used data. \n> \n> An LRU cache is often a bad strategy for database applications. There are two illustrations that show why.\n> \n> 1. 
You have an index that's used for EVERY search, but each search returns a large and unique set of rows. If it happens that the rows returned exceed the systems cache size, the part or all of your index will be flushed with EVERY query.\n> \n> 2. You do a sequential scan of a table that's one block bigger than the file system cache, then you do it again. At the beginning of the second scan, the first block of the table will have just been swapped out because it was the oldest, so the file system brings it back in, replacing the second block, which is now the oldest. As you scan the table, each block of the table is swapped out JUST BEFORE you get to it. At the start of your query, the file system might have had 99.9% of the relevant data in memory, but it swaps out 100% of it as your query progresses.\n> \n> Scenario 2 above is interesting because a system that is performing very well can suddenly experience a catastrophic performance decline when the size of the data exceeds a critical limit - the file system's avaliable cache.\n> \n> LRU works well if your frequently-used data is used often enough to keep it in memory. But many applications don't have that luxury. It's often the case that a single query will exceed the file system's cache size. The file system cache is \"dumb\" -- it's strategy is too simple for a relational database.\n> \n> What's needed is a way for the application developer to explicitely say, \"This object is frequenly used, and I want it kept in memory.\"\n\nThere's an interesting conversation happening on the linux kernel\nhackers mailing list right about now that applies:\n\nhttp://www.gossamer-threads.com/lists/linux/kernel/580789\n", "msg_date": "Mon, 24 Oct 2005 15:24:59 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Scott Marlowe wrote:\n>>What's needed is a way for the application developer to explicitely say,\n>> \"This object is frequenly used, and I want it kept in memory.\"\n> \n> There's an interesting conversation happening on the linux kernel\n> hackers mailing list right about now that applies:\n> \n> http://www.gossamer-threads.com/lists/linux/kernel/580789\n\nThanks for the pointer. If you're a participant in that mailing list, maybe you could forward this comment...\n\nA fundamental flaw in the kernel, which goes WAY back to early UNIX implementations, is that the nice(1) setting of a program only applies to CPU usage, not to other resources. In this case, the file-system cache has no priority, so even if I set postmaster's nice(1) value to a very high priority, any pissant process with the lowest priority possible can come along with a \"cat some-big-file >/dev/null\" and trash my cached file-system pages. It's essentially a denial-of-service mechanism that's built in to the kernel.\n\nThe kernel group's discussion on the heuristics of how and when to toss stale cache pages should have a strong nice(1) component to it. A process with a low priority should not be allowed to toss memory from a higher-priority process unless there is no other source of memory.\n\nGetting back to Postgres, the same points that the linux kernel group are discussing apply to Postgres. There is simply no way to devise a heuristic that comes even close to what the app developer can tell you. A mechanism that allowed an application to say, \"Keep this table in memory\" is the only way. 
App developers should be advised to use it sparingly, because most of the time the system is pretty good at memory management, and such a mechanism hobbles the system's ability to manage. But when it's needed, there is no substitute.\n\nCraig\n\n", "msg_date": "Mon, 24 Oct 2005 14:47:00 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Hi To all those who replied. Thank You.\n\nI monitor my database server a while ago and found out that memory is used\nextensively when I am fetching records from the database. I use the command\n\"fetch all\" in my VB Code and put it in a recordset.Also in this command the\nCPU utilization is used extensively.\n\nIs there something wrong with my code or is it just the way postgresql is\nbehaving which I cannot do something about it?\n\nI just monitor one workstation connecting to the database server and it is\nalready eating up about 20 % of the CPU of database server.\n\nWhich I think will not be applicable to our system since we have a target of\n25 PC connecting to the database server most of the time.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Craig A. James\nSent: Monday, October 24, 2005 9:47 PM\nTo: Scott Marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] Used Memory\n\nScott Marlowe wrote:\n>>What's needed is a way for the application developer to explicitely \n>>say, \"This object is frequenly used, and I want it kept in memory.\"\n> \n> There's an interesting conversation happening on the linux kernel \n> hackers mailing list right about now that applies:\n> \n> http://www.gossamer-threads.com/lists/linux/kernel/580789\n\nThanks for the pointer. If you're a participant in that mailing list, maybe\nyou could forward this comment...\n\nA fundamental flaw in the kernel, which goes WAY back to early UNIX\nimplementations, is that the nice(1) setting of a program only applies to\nCPU usage, not to other resources. In this case, the file-system cache has\nno priority, so even if I set postmaster's nice(1) value to a very high\npriority, any pissant process with the lowest priority possible can come\nalong with a \"cat some-big-file >/dev/null\" and trash my cached file-system\npages. It's essentially a denial-of-service mechanism that's built in to\nthe kernel.\n\nThe kernel group's discussion on the heuristics of how and when to toss\nstale cache pages should have a strong nice(1) component to it. A process\nwith a low priority should not be allowed to toss memory from a\nhigher-priority process unless there is no other source of memory.\n\nGetting back to Postgres, the same points that the linux kernel group are\ndiscussing apply to Postgres. There is simply no way to devise a heuristic\nthat comes even close to what the app developer can tell you. A mechanism\nthat allowed an application to say, \"Keep this table in memory\" is the only\nway. App developers should be advised to use it sparingly, because most of\nthe time the system is pretty good at memory management, and such a\nmechanism hobbles the system's ability to manage. 
But when it's needed,\nthere is no substitute.\n\nCraig\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Tue, 25 Oct 2005 02:39:51 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Christian Paul B. Cosinas wrote:\n> Hi To all those who replied. Thank You.\n> \n> I monitor my database server a while ago and found out that memory is used\n> extensively when I am fetching records from the database. I use the command\n> \"fetch all\" in my VB Code and put it in a recordset.Also in this command the\n> CPU utilization is used extensively.\n> \n> Is there something wrong with my code or is it just the way postgresql is\n> behaving which I cannot do something about it?\n> \n> I just monitor one workstation connecting to the database server and it is\n> already eating up about 20 % of the CPU of database server.\n> \n> Which I think will not be applicable to our system since we have a target of\n> 25 PC connecting to the database server most of the time.\n> \n\nCould you post the query and the output of EXPLAIN ANALYZE?\n\nIn addition, have you run ANALYZE on all the tables in that database ? \n(sorry, have to ask :-) ....).\n\ncheers\n\nMark\n", "msg_date": "Tue, 25 Oct 2005 16:06:43 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Hi mark\n\nI have so many functions, more than 100 functions in the database :) And I\nam dealing about 3 million of records in one database.\nAnd about 100 databases :)\n\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]]\nSent: Tuesday, October 25, 2005 3:07 AM\nTo: Christian Paul B. Cosinas\nCc: [email protected]\nSubject: Re: [PERFORM] Used Memory\n\nChristian Paul B. Cosinas wrote:\n> Hi To all those who replied. Thank You.\n> \n> I monitor my database server a while ago and found out that memory is \n> used extensively when I am fetching records from the database. I use \n> the command \"fetch all\" in my VB Code and put it in a recordset.Also \n> in this command the CPU utilization is used extensively.\n> \n> Is there something wrong with my code or is it just the way postgresql \n> is behaving which I cannot do something about it?\n> \n> I just monitor one workstation connecting to the database server and \n> it is already eating up about 20 % of the CPU of database server.\n> \n> Which I think will not be applicable to our system since we have a \n> target of\n> 25 PC connecting to the database server most of the time.\n> \n\nCould you post the query and the output of EXPLAIN ANALYZE?\n\nIn addition, have you run ANALYZE on all the tables in that database ? \n(sorry, have to ask :-) ....).\n\ncheers\n\nMark\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Tue, 25 Oct 2005 03:20:09 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" }, { "msg_contents": "Christian Paul B. 
Cosinas wrote:\n> Hi mark\n> \n> I have so many functions, more than 100 functions in the database :) And I\n> am dealing about 3 million of records in one database.\n> And about 100 databases :)\n> \n\nLOL - sorry, mis-understood your previous message to mean you had \nidentified *one* query where 'fetch all' was causing the problem!\n\nHaving said that, to make much more progress, you probably want to \nidentify those queries that are consuming your resource, pick one of two \nof the particularly bad ones and post 'em.\n\nThere are a number of ways to perform said identification, enabling \nstats collection might be worth a try.\n\nregards\n\nMark\n\n", "msg_date": "Tue, 25 Oct 2005 17:14:35 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used Memory" } ]
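Following up on the suggestion above about enabling stats collection: with stats_start_collector, stats_block_level and stats_row_level turned on in postgresql.conf, the statistics views give a first cut at which tables are being read hardest and whether those reads are satisfied from shared buffers or handed down to the kernel. A sketch only; nothing here is tied to the poster's schema:

    -- tables with the most sequentially-read tuples
    SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
      FROM pg_stat_user_tables
     ORDER BY seq_tup_read DESC
     LIMIT 10;

    -- tables requesting the most blocks from outside shared buffers
    SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read DESC
     LIMIT 10;

Setting log_min_duration_statement in postgresql.conf is another cheap way to catch the individual statements, such as the "fetch all" above, that take longest.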
[ { "msg_contents": "Now this interests me a lot.\n\nPlease clarify this:\n\nI have 5000 tables, one for each city:\n\nCity1_Photos, City2_Photos, ... City5000_Photos.\n\nEach of these tables are: CREATE TABLE CityN_Photos (location text, lo_id\nlargeobectypeiforgot)\n\nSo, what's the limit for these large objects? I heard I could only have 4\nbillion records for the whole database (not for each table). Is this true?\nIf this isn't true, then would postgres manage to create all the large\nobjects I ask him to?\n\nAlso, this would be a performance penalty, wouldn't it?\n\nMuch thanks for the knowledge shared,\nRodrigo", "msg_date": "Mon, 24 Oct 2005 19:11:53 +0000", "msg_from": "Rodrigo Madera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient escape codes." } ]
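The 4-billion figure the poster has heard about comes from large objects being addressed by 32-bit OIDs: every large object in a database is stored in that database's pg_largeobject catalog, no matter how many user tables reference it, so the ceiling is per database rather than per table. A quick way to see how many a database currently holds (this assumes nothing about the poster's schema):

    SELECT count(DISTINCT loid) AS large_objects,
           count(*)             AS lo_pages
      FROM pg_largeobject;

Each large object occupies several rows (pages) in that catalog, which is why the two counts differ.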
[ { "msg_contents": "Jim C. Nasby\" <jnasby ( at ) pervasive ( dot ) com> wrote:\n> > Stefan Weiss wrote:\n> > ... IMO it would be useful to have a way to tell\n> > PG that some tables were needed frequently, and should be cached if\n> > possible. This would allow application developers to consider joins with\n> > these tables as \"cheap\", even when querying on columns that are \n> > not indexed.\n>\n> Why do you think you'll know better than the database how frequently\n> something is used? At best, your guess will be correct and PostgreSQL\n> (or the kernel) will keep the table in memory. Or, your guess is wrong\n> and you end up wasting memory that could have been used for something\n> else.\n> \n> It would probably be better if you describe why you want to force this\n> table (or tables) into memory, so we can point you at more appropriate\n> solutions.\n\nOr perhaps we could explain why we NEED to force these tables into memory, so we can point you at a more appropriate implementation. ;-)\n\nOk, wittiness aside, here's a concrete example. I have an application with one critical index that MUST remain in memory at all times. The index's tablespace is about 2 GB. As long as it's in memory, performance is excellent - a user's query takes a fraction of a second. But if it gets swapped out, the user's query might take up to five minutes as the index is re-read from memory.\n\nNow here's the rub. The only performance I care about is response to queries from the web application. Everything else is low priority. But there is other activity going on. Suppose, for example, that I'm updating tables, performing queries, doing administration, etc., etc., for a period of an hour, during which no customer visits the site. The another customer comes along and performs a query.\n\nAt this point, no heuristic in the world could have guessed that I DON'T CARE ABOUT PERFORMANCE for anything except my web application. The performance of all the other stuff, the administration, the updates, etc., is utterly irrelevant compared to the performance of the customer's query.\n\nWhat actually happens is that the other activities have swapped out the critical index, and my customer waits, and waits, and waits... and goes away after a minute or two. To solve this, we've been forced to purchase two computers, and mirror the database on both. All administration and modification happens on the \"offline\" database, and the web application only uses the \"online\" database. At some point, we swap the two servers, sync the two databases, and carry on. It's a very unsatisfactory solution.\n\nThere is ONLY one way to convey this sort of information to Postgres, which is to provide the application developer a mechanism to explicitely name the tables that should be locked in memory.\n\nLook at tsearchd that Oleg is working on. It's a direct response to this problem.\n\nIt's been recognized for decades that, as kernel developers (whether a Linux kernel or a database kernel), our ability to predict the behavior of an application is woefully inadequate compared with the application developer's knowledge of the application. Computer Science simply isn't a match for the human brain yet, not even close.\n\nTo give you perspective, since I posted a question about this problem (regarding tsearch2/GIST indexes), half of the responses I received told me that they encountered this problem, and their solution was to use an external full-text engine. 
They all confirmed that Postgres can't deal with this problem yet, primarily for the reasons outlined above.\n\nCraig\n", "msg_date": "Mon, 24 Oct 2005 15:18:14 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is There Any Way ...." }, { "msg_contents": "This is possible with Oracle utilizing the keep pool\n\nalter table t_name storage ( buffer_pool keep);\n\nIf Postgres were to implement it's own caching system, this seems like\nit would be easily to implement (beyond the initial caching effort).\n\nAlex\n\n\nOn 10/24/05, Craig A. James <[email protected]> wrote:\n> Jim C. Nasby\" <jnasby ( at ) pervasive ( dot ) com> wrote:\n> > > Stefan Weiss wrote:\n> > > ... IMO it would be useful to have a way to tell\n> > > PG that some tables were needed frequently, and should be cached if\n> > > possible. This would allow application developers to consider joins with\n> > > these tables as \"cheap\", even when querying on columns that are\n> > > not indexed.\n> >\n> > Why do you think you'll know better than the database how frequently\n> > something is used? At best, your guess will be correct and PostgreSQL\n> > (or the kernel) will keep the table in memory. Or, your guess is wrong\n> > and you end up wasting memory that could have been used for something\n> > else.\n> >\n> > It would probably be better if you describe why you want to force this\n> > table (or tables) into memory, so we can point you at more appropriate\n> > solutions.\n>\n> Or perhaps we could explain why we NEED to force these tables into memory, so we can point you at a more appropriate implementation. ;-)\n>\n> Ok, wittiness aside, here's a concrete example. I have an application with one critical index that MUST remain in memory at all times. The index's tablespace is about 2 GB. As long as it's in memory, performance is excellent - a user's query takes a fraction of a second. But if it gets swapped out, the user's query might take up to five minutes as the index is re-read from memory.\n>\n> Now here's the rub. The only performance I care about is response to queries from the web application. Everything else is low priority. But there is other activity going on. Suppose, for example, that I'm updating tables, performing queries, doing administration, etc., etc., for a period of an hour, during which no customer visits the site. The another customer comes along and performs a query.\n>\n> At this point, no heuristic in the world could have guessed that I DON'T CARE ABOUT PERFORMANCE for anything except my web application. The performance of all the other stuff, the administration, the updates, etc., is utterly irrelevant compared to the performance of the customer's query.\n>\n> What actually happens is that the other activities have swapped out the critical index, and my customer waits, and waits, and waits... and goes away after a minute or two. To solve this, we've been forced to purchase two computers, and mirror the database on both. All administration and modification happens on the \"offline\" database, and the web application only uses the \"online\" database. At some point, we swap the two servers, sync the two databases, and carry on. It's a very unsatisfactory solution.\n>\n> There is ONLY one way to convey this sort of information to Postgres, which is to provide the application developer a mechanism to explicitely name the tables that should be locked in memory.\n>\n> Look at tsearchd that Oleg is working on. 
It's a direct response to this problem.\n>\n> It's been recognized for decades that, as kernel developers (whether a Linux kernel or a database kernel), our ability to predict the behavior of an application is woefully inadequate compared with the application developer's knowledge of the application. Computer Science simply isn't a match for the human brain yet, not even close.\n>\n> To give you perspective, since I posted a question about this problem (regarding tsearch2/GIST indexes), half of the responses I received told me that they encountered this problem, and their solution was to use an external full-text engine. They all confirmed that Postgres can't deal with this problem yet, primarily for the reasons outlined above.\n>\n> Craig\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n", "msg_date": "Mon, 24 Oct 2005 19:11:15 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is There Any Way ...." }, { "msg_contents": "Alex Turner wrote:\n> This is possible with Oracle utilizing the keep pool\n>\n> alter table t_name storage ( buffer_pool keep);\n>\n> If Postgres were to implement it's own caching system, this seems like\n> it would be easily to implement (beyond the initial caching effort).\n>\n> Alex\n>\n>\n> On 10/24/05, Craig A. James <[email protected]> wrote:\n> \n>> Jim C. Nasby\" <jnasby ( at ) pervasive ( dot ) com> wrote:\n>> \n>>>> Stefan Weiss wrote:\n>>>> ... IMO it would be useful to have a way to tell\n>>>> PG that some tables were needed frequently, and should be cached if\n>>>> possible. This would allow application developers to consider joins with\n>>>> these tables as \"cheap\", even when querying on columns that are\n>>>> not indexed.\n>>>> \n>>> Why do you think you'll know better than the database how frequently\n>>> something is used? At best, your guess will be correct and PostgreSQL\n>>> (or the kernel) will keep the table in memory. Or, your guess is wrong\n>>> and you end up wasting memory that could have been used for something\n>>> else.\n>>>\n>>> It would probably be better if you describe why you want to force this\n>>> table (or tables) into memory, so we can point you at more appropriate\n>>> solutions.\n>>> \n>> Or perhaps we could explain why we NEED to force these tables into memory, so we can point you at a more appropriate implementation. ;-)\n>>\n>> Ok, wittiness aside, here's a concrete example. I have an application with one critical index that MUST remain in memory at all times. The index's tablespace is about 2 GB. As long as it's in memory, performance is excellent - a user's query takes a fraction of a second. But if it gets swapped out, the user's query might take up to five minutes as the index is re-read from memory.\n>>\n>> Now here's the rub. The only performance I care about is response to queries from the web application. Everything else is low priority. But there is other activity going on. Suppose, for example, that I'm updating tables, performing queries, doing administration, etc., etc., for a period of an hour, during which no customer visits the site. The another customer comes along and performs a query.\n>>\n>> At this point, no heuristic in the world could have guessed that I DON'T CARE ABOUT PERFORMANCE for anything except my web application. 
The performance of all the other stuff, the administration, the updates, etc., is utterly irrelevant compared to the performance of the customer's query.\n>>\n>> What actually happens is that the other activities have swapped out the critical index, and my customer waits, and waits, and waits... and goes away after a minute or two. To solve this, we've been forced to purchase two computers, and mirror the database on both. All administration and modification happens on the \"offline\" database, and the web application only uses the \"online\" database. At some point, we swap the two servers, sync the two databases, and carry on. It's a very unsatisfactory solution.\nWe have a similar problem with vacuum being the equivalent of \n\"continuously flush all system caches for a long time\". Our database is \nabout 200GB in size and vacuums take hours and hours. The performance \nis acceptable still, but only because we've hidden the latency in our \napplication.\n\nI've occasionally thought it would be good to have the backend doing a \nvacuum or analyze also call priocntl() prior to doing any real work to \nlower its priority. We'll be switching to the 8.1 release ASAP just \nbecause the direct IO capabilities are appearing to be a win on our \ndevelopment system.\n\n-- Alan\n\n", "msg_date": "Mon, 24 Oct 2005 20:50:25 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is There Any Way ...." } ]
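Nobody in this exchange mentions it, but a crude stopgap some sites use for the "hot index keeps getting evicted" problem is to re-read the index's underlying file from cron so the kernel never considers those pages stale. Strictly a sketch; the index name, database OID and paths are placeholders:

    -- find the file node of the index to keep warm:
    SELECT relfilenode FROM pg_class WHERE relname = 'my_hot_index';
    -- then, every few minutes from cron, something like
    --   cat $PGDATA/base/<database-oid>/<relfilenode>* > /dev/null
    -- simply pulls those blocks back into the OS page cache.

It is a blunt instrument compared with the "pin this relation in memory" knob being asked for, but it needs no server changes.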
[ { "msg_contents": "Does creating a temporary table in a function and NOT dropping it affect\nthe performance of the database?\n\n \n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html", "msg_date": "Tue, 25 Oct 2005 07:28:15 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Temporary Table" } ]
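For reference, the construct being asked about can at least be written so that it cleans up after itself; a sketch with invented names:

    -- inside a PL/pgSQL function, a temp table can be dropped automatically
    -- at transaction end instead of lingering for the whole session:
    CREATE TEMPORARY TABLE scratch (id integer, val text) ON COMMIT DROP;

One caveat for this era of PL/pgSQL: plans are cached, so statements touching a temp table that gets dropped and re-created between calls usually have to go through EXECUTE, or they end up pointing at the old, dropped table.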
[ { "msg_contents": "\nHere are the configuration of our database server:\n\tport = 5432\n\tmax_connections = 300\n\tsuperuser_reserved_connections = 10\n\tauthentication_timeout = 60\t\n\tshared_buffers = 48000 \n\tsort_mem = 32168\n\tsync = false\n\nDo you think this is enough? Or can you recommend a better configuration for\nmy server?\n\nThe server is also running PHP and Apache but wer'e not using it\nextensively. For development purpose only. \n\nThe database slow down is occurring most of the time (when the memory free\nis low) I don't think it has something to do with vacuum. We only have a\nfull server vacuum once a day.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Tue, 25 Oct 2005 07:28:51 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Used Memory" } ]
[ { "msg_contents": "Hi All,\n Thank you very much for your help in configuring the database.\n Can you guys please take a look at the following query and let me know why\nthe index is not considered in the plan?\n Here is the extract of the condition string of the query that is taking the\ntransaction_date in index condition:\n *where (account.id <http://account.id> = search_engine.account_fk) and (\naccount.status = 't' and account.id <http://account.id> =\n'40288a820726362f0107263c55d00003') and ( search_engine.id =\nconversion.search_engine_fk and conversion.event_type ='daily_spend' and\nconversion.tactic = 'PPC' and conversion.transaction_date between\n'2005-01-01 00:00:00' and '2005-10-31 23:59:59') group by\naccount.id<http://account.id>\n;\n*\nPlan:\n*\" -> Index Scan using conversion_all on \"conversion\"\n(cost=0.00..6.02rows=1 width=98)\"\n\" Index Cond: (((tactic)::text = 'PPC'::text) AND ((event_type)::text =\n'daily_spend'::text) AND (transaction_date >= '2005-01-01\n00:00:00'::timestamp without time zone) AND (transaction_date <= '2005-10-31\n23:59:59'::timestamp without time zon (..)\"\n*\n Here is the extract of the condition string of the query that is not taking\nthe transaction_date in index condition:\n**\nwhere ( account.status = 't' and account.id <http://account.id> =\nsearch_engine.account_fk and account.id <http://account.id> =\n'40288a820726362f0107263c55d00003' ) and ( search_engine.id =\nconversion.search_engine_fk and conversion.tactic = 'PPC' and\nconversion.transaction_date >= '2005-01-01 00:00:00' and\nconversion.transaction_date <= '2005-10-31 23:59:59' ) group by\naccount.id<http://account.id>\n;\n *Plan:*\n*\" -> Index Scan using conv_evnt_tact_trans_date_sefk on \"conversion\" (cost=\n0.00..6.02 rows=1 width=132)\"\n\" Index Cond: (((\"outer\".id)::text = (\"conversion\".search_engine_fk)::text)\nAND ((\"conversion\".tactic)::text = 'PPC'::text))\"\n\" Filter: ((transaction_date >= '2005-01-01 00:00:00'::timestamp without\ntime zone) AND (transaction_date <= '2005-10-31 23:59:59'::timestamp without\ntime zone))\"\n*\n**\nI have the following indexes defined on the columns.\n*conv_evnt_tact_trans_date_sefk : (search_engine_fk, tactic, event_type,\ntransaction_date);*\n*conv_all : (tactic, event_type, transaction_date);*\n**\nI am really confused when I saw this plan. 
In both queries, I am using the\nsame columns in the where condition, but the optimizer is taking different\nindexes in these two cases.\nSecond, even though, I have the transaction_date column specified in the\nsecond instance, why is it not taking the constraint as index condition?\n Thanks in advance.\n Thank you,\nKishore.\n\nHi All, \n \nThank you very much for your help in configuring the database.\n \nCan you guys please take a look at the following query and let me know why the index is not considered in the plan?\n \nHere is the extract of the condition string of the query that is taking the transaction_date in index condition:\n \nwhere (account.id = search_engine.account_fk) and ( account.status = 't' and account.id = '40288a820726362f0107263c55d00003') and (  search_engine.id = \nconversion.search_engine_fk   and conversion.event_type ='daily_spend' and conversion.tactic = 'PPC' and conversion.transaction_date between '2005-01-01 00:00:00' and '2005-10-31 23:59:59') group by \naccount.id;\nPlan:\n\"        ->  Index Scan using conversion_all on \"conversion\"  (cost=0.00..6.02 rows=1 width=98)\"\"              Index Cond: (((tactic)::text = 'PPC'::text) AND ((event_type)::text = 'daily_spend'::text) AND (transaction_date >= '2005-01-01 00:00:00'::timestamp without time zone) AND (transaction_date <= '2005-10-31 23:59:59'::timestamp without time zon (..)\"\n\n\nHere is the extract of the condition string of the query that is not  taking the transaction_date in index condition:\n \nwhere ( account.status = 't' and account.id  = search_engine.account_fk and account.id = '40288a820726362f0107263c55d00003' ) and (  search_engine.id = \nconversion.search_engine_fk   and conversion.tactic = 'PPC' and conversion.transaction_date >= '2005-01-01 00:00:00' and conversion.transaction_date <= '2005-10-31 23:59:59'  ) group by account.id\n; \nPlan:\n\"        ->  Index Scan using conv_evnt_tact_trans_date_sefk on \"conversion\"  (cost=0.00..6.02 rows=1 width=132)\"\"              Index Cond: (((\"outer\".id)::text = (\"conversion\".search_engine_fk)::text) AND ((\"conversion\".tactic)::text = 'PPC'::text))\"\n\"              Filter: ((transaction_date >= '2005-01-01 00:00:00'::timestamp without time zone) AND (transaction_date <= '2005-10-31 23:59:59'::timestamp without time zone))\"\n \nI have the following indexes defined on the columns.\nconv_evnt_tact_trans_date_sefk : (search_engine_fk, tactic, event_type, transaction_date);\nconv_all : (tactic, event_type, transaction_date);\n \nI am really confused when I saw this plan. In both queries, I am using the same columns in the where condition, but the optimizer is taking different indexes in these two cases.\nSecond, even though, I have the transaction_date column specified in the second instance, why is it not taking the constraint as index condition?\n \nThanks in advance.\n \nThank you,\nKishore.", "msg_date": "Tue, 25 Oct 2005 14:17:57 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Why Index is not working on date columns." 
}, { "msg_contents": "Kishore B <[email protected]> writes:\n> Can you guys please take a look at the following query and let me know why\n> the index is not considered in the plan?\n\n\"Considered\" and \"used\" are two different things.\n\nThe two examples you give have the same estimated cost (within two\ndecimal places) so the planner sees no particular reason to choose one\nover the other.\n\nI surmise that you are testing on toy tables and extrapolating to what\nwill happen on larger tables. This is an unjustified assumption.\nCreate a realistic test data set, ANALYZE it, and then see if the\nplanner chooses indexes you like.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 09:38:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not working on date columns. " }, { "msg_contents": "Hi Tom,\n Thank you for your response.\n\n> I surmise that you are testing on toy tables and extrapolating to what\n> will happen on larger tables.\n>\nThese tables participating here contain more than 8 million records as of\nnow, and on every day, 200K records, will add to them.\n Thank you,\nKishore.\n\n\n On 10/25/05, Tom Lane <[email protected]> wrote:\n>\n> Kishore B <[email protected]> writes:\n> > Can you guys please take a look at the following query and let me know\n> why\n> > the index is not considered in the plan?\n>\n> \"Considered\" and \"used\" are two different things.\n>\n> The two examples you give have the same estimated cost (within two\n> decimal places) so the planner sees no particular reason to choose one\n> over the other.\n>\n> I surmise that you are testing on toy tables and extrapolating to what\n> will happen on larger tables. This is an unjustified assumption.\n> Create a realistic test data set, ANALYZE it, and then see if the\n> planner chooses indexes you like.\n>\n> regards, tom lane\n>\n\nHi Tom,\n \nThank you for your response.\n \nI surmise that you are testing on toy tables and extrapolating to whatwill happen on larger tables. \n\nThese tables participating here contain more than 8 million records as of now, and on every day, 200K records, will add to them.\n \n \nThank you,\nKishore.\n \n \nOn 10/25/05, Tom Lane <[email protected]> wrote:\nKishore B <[email protected]> writes:>  Can you guys please take a look at the following query and let me know why\n> the index is not considered in the plan?\"Considered\" and \"used\" are two different things.The two examples you give have the same estimated cost (within twodecimal places) so the planner sees no particular reason to choose one\nover the other.I surmise that you are testing on toy tables and extrapolating to whatwill happen on larger tables.  This is an unjustified assumption.Create a realistic test data set, ANALYZE it, and then see if the\nplanner chooses indexes you like.                       regards, tom lane", "msg_date": "Tue, 25 Oct 2005 20:34:21 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Index is not working on date columns." }, { "msg_contents": "Kishore B <[email protected]> writes:\n>> I surmise that you are testing on toy tables and extrapolating to what\n>> will happen on larger tables.\n>> \n> These tables participating here contain more than 8 million records as of\n> now, and on every day, 200K records, will add to them.\n\nIn that case, have you ANALYZEd the tables lately? 
The planner's cost\nestimates correspond to awfully small tables ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 14:01:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not working on date columns. " } ]
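A quick way to check the point Tom is raising, namely whether the statistics are stale, is to compare what pg_class believes about the table with reality (the table name here is the poster's "conversion"):

    ANALYZE conversion;
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'conversion';
    -- reltuples should now land near SELECT count(*) FROM conversion;
    -- (around 8 million, per the poster)

If reltuples still reads like a tiny table, that would explain plans that estimate rows=1 at a cost of 6.02 against an 8-million-row table.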
[ { "msg_contents": "Hello!\n\nI've got a table BOARD_MESSAGE (message_id int8, thread_id int8, ...)\nwith pk on message_id and and a non_unique not_null index on thread_id.\nA count(*) on BOARD_MESSAGE currently yields a total of 1231171 rows,\nthe planner estimated a total of 1232530 rows in this table. I've got\npg_autovacuum running on the database and run an additional nightly\nVACUUM ANALYZE over it every night.\n\nI've got a few queries of the following type:\n\nselect * \n from PUBLIC.BOARD_MESSAGE \n where THREAD_ID = 3354253 \n order by MESSAGE_ID asc \n limit 20 \n offset 0; \n\n\nThere are currently roughly 4500 rows with this thread_id in\nBOARD_MESSAGE. Explain-output is like so:\n\n QUERY PLAN \n\n------------------------------------------------------------------------\n---------------------------------------------- \n Limit (cost=0.00..3927.22 rows=20 width=1148) \n -> Index Scan using pk_board_message on board_message\n(cost=0.00..1100800.55 rows=5606 width=1148) \n Filter: (thread_id = 3354253) \n(3 rows) \n\nI didn't have the patience to actually complete an explain analyze on\nthat one - I cancelled the query on several attempts after more than 40\nminutes runtime. Now I fiddled a little with this statement and tried\nnudging the planner in the right direction like so:\n\nexplain analyze select * from (select * \n from PUBLIC.BOARD_MESSAGE \n where THREAD_ID = 3354253 \n order by MESSAGE_ID asc ) as foo \n limit 20 \n offset 0; \n \nQUERY PLAN\n\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------\n\n Limit (cost=8083.59..8083.84 rows=20 width=464) (actual\ntime=1497.455..1498.466 rows=20 loops=1) \n -> Subquery Scan foo (cost=8083.59..8153.67 rows=5606 width=464)\n(actual time=1497.447..1498.408 rows=20 loops=1) \n -> Sort (cost=8083.59..8097.61 rows=5606 width=1148) (actual\ntime=1497.326..1497.353 rows=20 loops=1) \n Sort Key: message_id \n -> Index Scan using nidx_bm_thread_id on board_message\n(cost=0.00..7734.54 rows=5606 width=1148) (actual time=0.283..1431.752\nrows=4215 loops=1)\n\n Index Cond: (thread_id = 3354253) \n Total runtime: 1502.138 ms \n\nNow this is much more like it. As far as I interpret the explain output,\nin the former case the planner decides to just sort the whole table with\nit's 1.2m rows by it's primary key on message_id and then filters out\nthe few thousand rows matching the requested thread_id. In the latter\ncase, it selects the few thousand rows with the matching thread_id\n_first_ and _then_ sorts them according to their message_id. The former\nattempt involves sorting of more than a million rows and then filtering\nthrough the result, the latter just uses the index to retrieve a few\nthousand rows and sorts those - which is much more efficient.\n\nWhat's more puzzling is that the results vary somewhat depending on the\noverall load situation. When using the first approach without the\nsubselect, sometimes the planner chooses exactly the same plan as it\ndoes with the second approach - with equally satisfying results in\nregard to total execution time; sometimes it does use the first plan and\ndoes complete with a very acceptable execution time, too. 
But sometimes\n(when overall load is sufficiently high, I presume) it just runs and\nruns for minutes on end - I've had this thing running for more than one\nhour on several occasions until I made some changes to my app which\nlimits the maximum execution time for a query to no more than 55\nseconds.\n\nWith this IMHO quite ugly subselect-workaround, performance is\nreproducably stable and sufficiently good under either load, so I chose\nto stick with it for the time being - but I'd still like to know if I\ncould have done anything to have the planner choose the evidently better\nplan for the first query without such a workaround?\n\nKind regards\n\n Markus\n", "msg_date": "Tue, 25 Oct 2005 11:47:57 +0200", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Strange planner decision on quite simple select" }, { "msg_contents": "Markus Wollny wrote:\n> Hello!\n> \n> I've got a table BOARD_MESSAGE (message_id int8, thread_id int8, ...)\n> with pk on message_id and and a non_unique not_null index on thread_id.\n> A count(*) on BOARD_MESSAGE currently yields a total of 1231171 rows,\n> the planner estimated a total of 1232530 rows in this table. I've got\n> pg_autovacuum running on the database and run an additional nightly\n> VACUUM ANALYZE over it every night.\n> \n> I've got a few queries of the following type:\n> \n> select * \n> from PUBLIC.BOARD_MESSAGE \n> where THREAD_ID = 3354253 \n> order by MESSAGE_ID asc \n> limit 20 \n> offset 0; \n> \n> \n> There are currently roughly 4500 rows with this thread_id in\n> BOARD_MESSAGE. Explain-output is like so:\n> \n> QUERY PLAN \n> \n> ------------------------------------------------------------------------\n> ---------------------------------------------- \n> Limit (cost=0.00..3927.22 rows=20 width=1148) \n> -> Index Scan using pk_board_message on board_message\n> (cost=0.00..1100800.55 rows=5606 width=1148) \n> Filter: (thread_id = 3354253) \n> (3 rows) \n> \n> I didn't have the patience to actually complete an explain analyze on\n> that one - I cancelled the query on several attempts after more than 40\n> minutes runtime. Now I fiddled a little with this statement and tried\n> nudging the planner in the right direction like so:\n\nHmm - it shouldn't take that long. If I'm reading this right, it's \nexpecting to have to fetch 5606 rows to match thread_id=3354253 the 20 \ntimes you've asked for. Now, what it probably doesn't know is that \nthread_id is correlated with message_id quite highly (actually, I don't \nknow that, I'm guessing). So - it starts at message_id=1 and works \nalong, but I'm figuring that it needs to reach message_id's in the 3-4 \nmillion range to see any of the required thread.\n\nSuggestions:\n1. Try \"ORDER BY thread_id,message_id\" and see if that nudges things \nyour way.\n2. Keep #1 and try replacing the index on (thread_id) with \n(thread_id,message_id)\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 25 Oct 2005 11:07:28 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange planner decision on quite simple select" } ]
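Spelled out, Richard's two suggestions amount to this (table, column and existing index names are the poster's; the new index name is made up):

    CREATE INDEX idx_bm_thread_message
        ON board_message (thread_id, message_id);

    SELECT *
      FROM board_message
     WHERE thread_id = 3354253
     ORDER BY thread_id, message_id
     LIMIT 20 OFFSET 0;

With the two-column index, the entries for a given thread_id are already in message_id order, so the planner can walk that index and stop after 20 rows instead of filtering a scan of pk_board_message or sorting a few thousand rows afterwards.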
[ { "msg_contents": "I tried on pgsql-general but got no reply. re-posting here as it's\nprobably the best place to ask\n\nI'm having some significant performance problems with left join. Can\nanyone give me any pointers as to why the following 2 query plans are so\ndifferent?\n\n\nEXPLAIN SELECT *\nFROM\n tokens.ta_tokens t LEFT JOIN\n tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\nWHERE\n h1.histdate = 'now';\n\n\n Nested Loop Left Join (cost=0.00..68778.43 rows=2215 width=1402)\n -> Nested Loop (cost=0.00..55505.62 rows=2215 width=714)\n -> Index Scan using idx_tokenhist__histdate on ta_tokenhist h1 (cost=0.00..22970.70 rows=5752 width=688)\n Index Cond: (histdate = '2005-10-24 13:28:38.411844'::timestamp without time zone)\n -> Index Scan using ta_tokens_pkey on ta_tokens t (cost=0.00..5.64 rows=1 width=26)\n Index Cond: ((t.token_id)::integer = (\"outer\".token_id)::integer)\n -> Index Scan using fkx_tokenhist__tokens on ta_tokenhist h2 (cost=0.00..5.98 rows=1 width=688)\n Index Cond: ((\"outer\".token_id)::integer = (h2.token_id)::integer)\n\n\nPerformance is fine for this one and the plan is pretty much as i'd\nexpect.\n\nThis is where i hit a problem.\n\n\nEXPLAIN SELECT *\nFROM\n tokens.ta_tokens t LEFT JOIN\n tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\nWHERE\n h2.histdate = 'now';\n\n\n Hash Join (cost=1249148.59..9000709.22 rows=2215 width=1402)\n Hash Cond: ((\"outer\".token_id)::integer = (\"inner\".token_id)::integer)\n -> Hash Left Join (cost=1225660.51..8181263.40 rows=4045106 width=714)\n Hash Cond: ((\"outer\".token_id)::integer = (\"inner\".token_id)::integer)\n -> Seq Scan on ta_tokens t (cost=0.00..71828.06 rows=4045106 width=26)\n -> Hash (cost=281243.21..281243.21 rows=10504921 width=688)\n -> Seq Scan on ta_tokenhist h1 (cost=0.00..281243.21 rows=10504921 width=688)\n -> Hash (cost=22970.70..22970.70 rows=5752 width=688)\n -> Index Scan using idx_tokenhist__histdate on ta_tokenhist h2 (cost=0.00..22970.70 rows=5752 width=688)\n Index Cond: (histdate = '2005-10-24 13:34:51.371905'::timestamp without time zone)\n\n\nI would understand if h2 was joined on h1, but it isn't. It only joins\non t. can anyone give any tips on improving the performance of the second\nquery (aside from changing the join order manually)?\n\n\nselect version();\n version\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.0.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)\n\n\nThanks\n\n-- \n\n - Rich Doughty\n", "msg_date": "Tue, 25 Oct 2005 12:35:06 +0100", "msg_from": "Rich Doughty <[email protected]>", "msg_from_op": true, "msg_subject": "Outer join query plans and performance" }, { "msg_contents": "Rich Doughty <[email protected]> writes:\n> EXPLAIN SELECT *\n> FROM\n> tokens.ta_tokens t LEFT JOIN\n> tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n> tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\n> WHERE\n> h1.histdate = 'now';\n\n> EXPLAIN SELECT *\n> FROM\n> tokens.ta_tokens t LEFT JOIN\n> tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n> tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\n> WHERE\n> h2.histdate = 'now';\n\nThe reason these are different is that the second case constrains only\nthe last-to-be-joined table, so the full cartesian product of t and h1\nhas to be formed. 
If this wasn't what you had in mind, you might be\nable to rearrange the order of the LEFT JOINs, but bear in mind that\nin general, changing outer-join ordering changes the results. (This\nis why the planner won't fix it for you.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 10:12:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer join query plans and performance " }, { "msg_contents": "Tom Lane wrote:\n> Rich Doughty <[email protected]> writes:\n> \n>>EXPLAIN SELECT *\n>>FROM\n>> tokens.ta_tokens t LEFT JOIN\n>> tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n>> tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\n>>WHERE\n>> h1.histdate = 'now';\n> \n> \n>>EXPLAIN SELECT *\n>>FROM\n>> tokens.ta_tokens t LEFT JOIN\n>> tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN\n>> tokens.ta_tokenhist h2 ON t.token_id = h2.token_id\n>>WHERE\n>> h2.histdate = 'now';\n> \n> \n> The reason these are different is that the second case constrains only\n> the last-to-be-joined table, so the full cartesian product of t and h1\n> has to be formed. If this wasn't what you had in mind, you might be\n> able to rearrange the order of the LEFT JOINs, but bear in mind that\n> in general, changing outer-join ordering changes the results. (This\n> is why the planner won't fix it for you.)\n\nFWIW mysql 4.1 (and i'm no fan at all of mysql) completes both these queries\nin approximately 3 seconds. postgres does the first in 6 seconds and the\nsecond in a lot longer (eventually abandoned).\n\n\n-- \n\n - Rich Doughty\n", "msg_date": "Tue, 25 Oct 2005 19:28:34 +0100", "msg_from": "Rich Doughty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer join query plans and performance" }, { "msg_contents": "Rich Doughty <[email protected]> writes:\n> Tom Lane wrote:\n>> The reason these are different is that the second case constrains only\n>> the last-to-be-joined table, so the full cartesian product of t and h1\n>> has to be formed. If this wasn't what you had in mind, you might be\n>> able to rearrange the order of the LEFT JOINs, but bear in mind that\n>> in general, changing outer-join ordering changes the results. (This\n>> is why the planner won't fix it for you.)\n\n> FWIW mysql 4.1 (and i'm no fan at all of mysql) completes both these queries\n> in approximately 3 seconds.\n\nDoes mysql get the correct answer, though? It's hard to see how they do\nthis fast unless they (a) are playing fast and loose with the semantics,\nor (b) have very substantially more analysis logic for OUTER JOIN semantics\nthan we do. Perhaps mysql 5.x is better about this sort of thing, but\nfor 4.x I'd definitely find theory (a) more plausible than (b).\n\nThe cases that would be interesting are those where rearranging the\nouter join order actually does change the correct answer --- it may not\nin this particular case, I haven't thought hard about it. 
It seems\nfairly likely to me that they are rearranging the join order here, and\nI'm just wondering whether they have the logic needed to verify that\nsuch a transformation is correct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 18:03:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer join query plans and performance " }, { "msg_contents": "Tom Lane wrote:\n> Rich Doughty <[email protected]> writes:\n> \n>>Tom Lane wrote:\n>>\n>>>The reason these are different is that the second case constrains only\n>>>the last-to-be-joined table, so the full cartesian product of t and h1\n>>>has to be formed. If this wasn't what you had in mind, you might be\n>>>able to rearrange the order of the LEFT JOINs, but bear in mind that\n>>>in general, changing outer-join ordering changes the results. (This\n>>>is why the planner won't fix it for you.)\n> \n> \n>>FWIW mysql 4.1 (and i'm no fan at all of mysql) completes both these queries\n>>in approximately 3 seconds.\n> \n> \n> Does mysql get the correct answer, though? It's hard to see how they do\n> this fast unless they (a) are playing fast and loose with the semantics,\n> or (b) have very substantially more analysis logic for OUTER JOIN semantics\n> than we do. Perhaps mysql 5.x is better about this sort of thing, but\n> for 4.x I'd definitely find theory (a) more plausible than (b).\n\ni would assume so. i'll re-run my testcase later and verify the results of the\ntwo side-by-side.\n\n> The cases that would be interesting are those where rearranging the\n> outer join order actually does change the correct answer --- it may not\n> in this particular case, I haven't thought hard about it. It seems\n> fairly likely to me that they are rearranging the join order here, and\n> I'm just wondering whether they have the logic needed to verify that\n> such a transformation is correct.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n\n - Rich Doughty\n", "msg_date": "Wed, 26 Oct 2005 09:33:48 +0100", "msg_from": "Rich Doughty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer join query plans and performance" } ]
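For completeness, the rearrangement Tom alludes to is just the poster's first (fast) query shape with the roles swapped: join the table the WHERE clause constrains (h2) before the unconstrained one, so the cartesian product of t and h1 never has to be formed.

    SELECT *
    FROM
        tokens.ta_tokens     t LEFT JOIN
        tokens.ta_tokenhist h2 ON t.token_id = h2.token_id LEFT JOIN
        tokens.ta_tokenhist h1 ON t.token_id = h1.token_id
    WHERE
        h2.histdate = 'now';

Since h1 and h2 each join only to t, as the poster points out, swapping them should not change the result in this particular case, but as Tom says that is not a transformation the planner proves and applies for you in this release.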
[ { "msg_contents": " \nHi!\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf \n> Of Richard Huxton\n> Sent: Tuesday, 25 October 2005 12:07\n> To: Markus Wollny\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Strange planner decision on quite simple select\n> \n> Hmm - it shouldn't take that long. If I'm reading this right, \n> it's expecting to have to fetch 5606 rows to match \n> thread_id=3354253 the 20 times you've asked for. Now, what it \n> probably doesn't know is that thread_id is correlated with \n> message_id quite highly (actually, I don't know that, I'm \n> guessing). So - it starts at message_id=1 and works along, \n> but I'm figuring that it needs to reach message_id's in the \n> 3-4 million range to see any of the required thread.\n\nReading this I tried adding an \"AND MESSAGE_ID >= THREAD_ID\" to the WHERE-clause; as you've guessed quite correctly, both message_id and thread_id are derived from the same sequence and thread_id equals the lowest message_id in a thread. This alone did quite a lot to improve things - I got stable execution times down from an average 12 seconds to a mere 2 seconds - just about the same as with the subselect.\n\n> Suggestions:\n> 1. Try \"ORDER BY thread_id,message_id\" and see if that nudges \n> things your way.\n> 2. Keep #1 and try replacing the index on (thread_id) with\n> (thread_id,message_id)\n\nDid both (though adding such an index during ordinary workload took some time, as did the VACUUM ANALYZE afterwards) and that worked like a charm - I've got execution times down to as little as a few milliseconds - wow! Thank you very much for providing such insightful hints!\n\nKind regards\n\n Markus\n", "msg_date": "Tue, 25 Oct 2005 15:39:42 +0200", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange planner decision on quite simple select" } ]
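Reconstructed as a full statement (the exact SQL isn't quoted in the message), the first workaround described above would be:

    SELECT *
      FROM public.board_message
     WHERE thread_id = 3354253
       AND message_id >= thread_id   -- thread_id equals the thread's lowest message_id
     ORDER BY message_id ASC
     LIMIT 20 OFFSET 0;

with the composite (thread_id, message_id) index plus the "ORDER BY thread_id, message_id" form from the previous suggestion being what finally brought it down to a few milliseconds.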
[ { "msg_contents": "Hi,\n\nI have the following test setup:\n\n* PG 8.0.4 on Linux (Centos 4) compiled from source.\n\n* DB schema: essentially one table with a few int columns and\n one bytea column that stores blobs of 52000 bytes each, a\n primary key on one of the int columns.\n\n* A test client was written in C using libpq to see what rate\n can be reached (inserting records). The client uses a\n prepared tatement and bundles n inserts into a single\n transaction (n is variable for testing).\n\n* Hardware: different setups tested, in particular a\n single-opteron box with a built in SATA disk and also an\n array of SATA disks connected via FC.\n\n From the test run it appears that the insert rate here is\nessentially CPU bound. I'm getting about 11 MB/s net transfer,\nregardless if I use the built in disk or the much faster\narray and regardless various settings (like n, shared_mem).\n\nvmstat says that disk bo is about 30MB/s (the array can do much\nbetter, I tried with dd and sync!) while the CPU is maxed out\nat about 90% us and 10% sy. The client accounts for just 2% CPU,\nmost goes into the postmaster.\n\nThe client inserts random data. I found out that I can improve\nthings by 35% if I use random sequences of bytes that are\nin the printable range vs. full range.\n\n\nQuestion 1:\nAm I correct in assuming that even though I'm passing my 52000\nbytes as a (char *) to PQexecPrepared(), encoding/decoding is\nhappening (think 0 -> \\000) somewhere in the transfer?\n\n\nQuestion 2:\nIs there a better, faster way to do these inserts?\nI'm unsure about large objects. I'm planning to use some\ncustom server side functions to do computations on the bytes\nin these records and the large objects API doesn't appear\nto be well suited for this.\n\n\nSidequestion:\nI've tried to profile the server using CFLAGS=\"-p -DLINUX_PROFILE\".\nI'm getting profiling output but when I look at it using\n\"gprof bin-somewhere/postgres $PGDATA/gmon.out\" I'm only seeing\nwhat I think are the calls for the server startup. How can I profile\nthe (forked) process that actually performs all the work on\nmy connection?\n\n\nSorry for the long post :)\nBye,\nChris.\n\n\n\n", "msg_date": "Tue, 25 Oct 2005 15:44:36 +0200 (CEST)", "msg_from": "\"Chris Mair\" <[email protected]>", "msg_from_op": true, "msg_subject": "insertion of bytea" }, { "msg_contents": "On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:\n>Is there a better, faster way to do these inserts?\n\nCOPY is generally the fastest way to do bulk inserts (see\nPQputCopyData). \n\nMike Stone\n", "msg_date": "Tue, 25 Oct 2005 10:05:49 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insertion of bytea" }, { "msg_contents": "\"Chris Mair\" <[email protected]> writes:\n> Am I correct in assuming that even though I'm passing my 52000\n> bytes as a (char *) to PQexecPrepared(), encoding/decoding is\n> happening (think 0 -> \\000) somewhere in the transfer?\n\nAre you specifying it as a text or binary parameter? 
Have you looked to\nsee if the stored data is what you expect?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 10:23:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insertion of bytea " }, { "msg_contents": ">>Is there a better, faster way to do these inserts?\n>\n> COPY is generally the fastest way to do bulk inserts (see\n> PQputCopyData).\n\nThanks :)\nI'll give that I try and report the results here later.\n\nBye, Chris.\n\n\n\n", "msg_date": "Tue, 25 Oct 2005 23:36:02 +0200 (CEST)", "msg_from": "\"Chris Mair\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insertion of bytea" }, { "msg_contents": ">> Am I correct in assuming that even though I'm passing my 52000\n>> bytes as a (char *) to PQexecPrepared(), encoding/decoding is\n>> happening (think 0 -> \\000) somewhere in the transfer?\n>\n> Are you specifying it as a text or binary parameter? Have you looked to\n> see if the stored data is what you expect?\n\nI'm specifying it as binary (i.e. one's in PQexecPrepared's\nformat parameter). The stored data is correct.\n\nI'll try \"copy from stdin with binary\" tomorrow and see what\nI get...\n\nThanks & Bye, Chris.\n\n\n", "msg_date": "Tue, 25 Oct 2005 23:41:47 +0200 (CEST)", "msg_from": "\"Chris Mair\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insertion of bytea" }, { "msg_contents": "> On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:\n>>Is there a better, faster way to do these inserts?\n>\n> COPY is generally the fastest way to do bulk inserts (see\n> PQputCopyData).\n\n\nHi,\n\nI've rewritten the testclient now to use COPY, but I'm getting\nthe exact same results as when doing bundled, prepared inserts.\n\nI'm CPU-bound with an I/O well below what my disks could do :(\n\n\nBye, Chris.\n\n\nPS1: someone off-list suggested using oprofile, which I will do.\n\nPS2: in case somebody is iterested, the test client is here:\n http://www.1006.org/tmp/20051027/\n\n pgclient-1.1.c is prepared inserts, 2.0 is binary copy.\n\n\n\n\n", "msg_date": "Thu, 27 Oct 2005 15:40:04 +0200 (CEST)", "msg_from": "\"Chris Mair\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insertion of bytea" }, { "msg_contents": "\n> I'm CPU-bound with an I/O well below what my disks could do :(\n> [...]\n> \n> PS1: someone off-list suggested using oprofile, which I will do.\n\nI've used oprofile and found out that with my test client (lots of\nbytea inserts) the server burns a lot of CPU time in pglz_compress.\n\nI'm using random data and my production data will be closed to random\n(due to noise!), so compression is of course pointless.\n\nBy using\nalter table dbtest alter img set storage external;\nI can tell the server not to compress.\n\nOn a test box this brought net insert rate up by 50%,\nwhich is enough to meet the requirements.\n\nThanks again :)\n\nBye, Chris.\n\n\n\n\n", "msg_date": "Mon, 31 Oct 2005 15:29:17 +0100", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED: insertion of bytea" } ]
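
The server-side pieces of the final solution in the thread above, collected in one place; dbtest and img are the names used in the thread, and the COPY form shown is the binary variant the test client drives through PQputCopyData:

-- Random (noisy) data gains nothing from TOAST compression, so store it
-- uncompressed and save the pglz_compress CPU time.
ALTER TABLE dbtest ALTER COLUMN img SET STORAGE EXTERNAL;

-- Bulk load path: binary COPY avoids the text escaping of bytea literals;
-- the client streams the binary rows on STDIN.
COPY dbtest FROM STDIN WITH BINARY;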
[ { "msg_contents": "We are reindexing frequently, and I'm wondering if\nthis is really necessary, given that it appears to\ntake an exclusive lock on the table.\n\nOur table in question is vacuumed every 4 minutes, and\nwe are reindexing after each one.\n\nI'm not a fan of locking this table that frequently,\neven if it is only for 5 - 10 seconds depending on\nload.\n\nThe vacuum is a standard vacuum. Nightly we do a\nvacuum analyze.\n\nThanks for any tips,\n...Markus\n", "msg_date": "Tue, 25 Oct 2005 06:56:07 -0700 (PDT)", "msg_from": "Markus Benne <[email protected]>", "msg_from_op": true, "msg_subject": "Reindex - Is this necessary after a vacuum?" }, { "msg_contents": "Markus Benne wrote:\n> We are reindexing frequently, and I'm wondering if\n> this is really necessary, given that it appears to\n> take an exclusive lock on the table.\n> \n> Our table in question is vacuumed every 4 minutes, and\n> we are reindexing after each one.\n> \n> I'm not a fan of locking this table that frequently,\n> even if it is only for 5 - 10 seconds depending on\n> load.\n> \n> The vacuum is a standard vacuum. Nightly we do a\n> vacuum analyze.\n\nAt most I'd do a nightly reindex. And in fact, I'd probably drop the \nindex, full vacuum, recreate index.\n\nBut you only need to reindex at all if you have a specific problem with \nthe index bloating. Are you seeing this?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 25 Oct 2005 15:16:27 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reindex - Is this necessary after a vacuum?" }, { "msg_contents": "Markus Benne <[email protected]> writes:\n> Our table in question is vacuumed every 4 minutes, and\n> we are reindexing after each one.\n\nThat's pretty silly. You might need a reindex once in awhile, but\nnot every time you vacuum.\n\nThe draft 8.1 docs contain some discussion of possible reasons for\nperiodic reindexing:\nhttp://developer.postgresql.org/docs/postgres/routine-reindex.html\nbut none of these reasons justify once-per-vacuum reindexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Oct 2005 10:27:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reindex - Is this necessary after a vacuum? " } ]
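
A sketch of the schedule the replies above suggest, with a placeholder table name: frequent plain vacuums, and a reindex only as an occasional off-peak job if index bloat is actually observed.

-- Safe to run every few minutes; takes no exclusive lock.
VACUUM ANALYZE my_busy_table;

-- Only occasionally (nightly or less), and only if the indexes are shown to
-- bloat: REINDEX locks the table exclusively while it rebuilds.
REINDEX TABLE my_busy_table;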
[ { "msg_contents": "Ok, thanks for the limits info, but I have that in the manual. Thanks.\n\nBut what I really want to know is this:\n\n1) All large objects of all tables inside one DATABASE is kept on only one\ntable. True or false?\n\nThanks =o)\nRodrigo\n\nOn 10/25/05, Nörder-Tuitje, Marcus <[email protected]> wrote:\n>\n> oh, btw, no harm, but :\n> having 5000 tables only to gain access via city name is a major design\n> flaw.\n> you might consider putting all into one table working with a distributed\n> index over yer table (city, loc_texdt, blobfield); creating a partitioned\n> index over city.\n> best regards\n>\n> -----Ursprüngliche Nachricht-----\n> *Von:* [email protected] [mailto:\n> [email protected]]*Im Auftrag von *Rodrigo Madera\n> *Gesendet:* Montag, 24. Oktober 2005 21:12\n> *An:* [email protected]\n> *Betreff:* Re: [PERFORM] Inefficient escape codes.\n>\n> Now this interests me a lot.\n>\n> Please clarify this:\n>\n> I have 5000 tables, one for each city:\n>\n> City1_Photos, City2_Photos, ... City5000_Photos.\n>\n> Each of these tables are: CREATE TABLE CityN_Photos (location text, lo_id\n> largeobectypeiforgot)\n>\n> So, what's the limit for these large objects? I heard I could only have 4\n> billion records for the whole database (not for each table). Is this true?\n> If this isn't true, then would postgres manage to create all the large\n> objects I ask him to?\n>\n> Also, this would be a performance penalty, wouldn't it?\n>\n> Much thanks for the knowledge shared,\n> Rodrigo\n>\n>\n>\n", "msg_date": "Tue, 25 Oct 2005 16:03:19 +0000", "msg_from": "Rodrigo Madera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient escape codes." } ]
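
A sketch of the single-table layout suggested in the quoted reply above, instead of one table per city; all names here are placeholders, and the large-object column simply stores the OID returned by lo_import() or lo_creat():

-- One table for all cities; the city becomes a column instead of a table name.
CREATE TABLE city_photos (
    city      text NOT NULL,   -- was encoded in the table name before
    location  text NOT NULL,
    photo_oid oid              -- OID of the large object holding the image
);

-- Per-city lookups stay indexed even with all cities in one table.
CREATE INDEX city_photos_city_idx ON city_photos (city, location);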
[ { "msg_contents": "> If I turn on stats_command_string, how much impact would it have on\n> PostgreSQL server's performance during a period of massive data\n> INSERTs? I know that the answer to the question I'm asking will\n> largely depend upon different factors so I would like to know in which\n> situations it would be negligible or would have a signifcant impact.\n\nFirst of all, we have to assume your writes are buffered in some way or\nyou are using transactions, or you will likely be i/o bound (or you have\na super fast disk setup).\n\nAssuming that, I can tell you from experience on win32 that\nstats_command_string can be fairly expensive for certain types of access\npatterns. What patterns?\n\n1. If your ratio of queries to records involved is low.\n2. If you are accessing data in a very quick way, for example via\nprepared statements over a LAN\n3. Your volume of queries is very high.\n\nIn these cases, the cost is high. stats_command_string can add a\nfractional millisecond ( ~.2 in my setup ) to statement latency and as\nmuch as double cpu time in extreme cases...you are warned. You may want\nto turn it off before doing bulk loads or lengthy record iterations.\n\nMerlin\n", "msg_date": "Tue, 25 Oct 2005 13:02:20 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: impact of stats_command_string" } ]
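
A sketch of checking, and temporarily disabling, the setting before a bulk load as suggested above; whether SET is accepted per-session (and for which users) depends on the server version, so treat the second statement as an assumption to verify:

-- See whether per-statement command strings are being collected.
SHOW stats_command_string;

-- Superuser-only and version-dependent: skip the overhead during the load.
SET stats_command_string = off;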
[ { "msg_contents": "Hi All,\n We are executing a single query that returned very fast on the first\ninstance. But when I executed the same query for multiple times, it is\ngiving strange results. It is not coming back.\n When I checked with the processes running in the system, I observed that\nmultiple instances of postmaster are running and all of them are consuming\nvery high amounts of memory. I could also observe that they are sharing the\nmemory in a uniform distribution across them.\n Please let me know if any body has experienced the same and how do they\nresolved it.\n Thank you,\nKishore.\n", "msg_date": "Wed, 26 Oct 2005 01:16:52 +0530", "msg_from": "Kishore B <[email protected]>", "msg_from_op": true, "msg_subject": "Why different execution times for different instances for the same\n\tquery?" }, { "msg_contents": "On Tue, 2005-10-25 at 14:46, Kishore B wrote:\n> Hi All, \n> \n> We are executing a single query that returned very fast on the first\n> instance. But when I executed the same query for multiple times, it is\n> giving strange results. It is not coming back. \n> \n> When I checked with the processes running in the system, I observed\n> that multiple instances of postmaster are running and all of them are\n> consuming very high amounts of memory. I could also observe that they\n> are sharing the memory in a uniform distribution across them. \n> \n> Please let me know if any body has experienced the same and how do\n> they resolved it.\n\nYou may or may not have an actual problem.\n\nFor one, if they're each using 128 megs, but sharing 120 megs of that\nthen that's not too bad. If they're each using 512 meg and sharing 100\nmeg of that, then you've got a problem.\n\nWhat is your sort mem set to? Going too high can cause memory\nstarvation and other problems.\n\nAlso, when you run top, how much memory is being used for cache and\nbuffer. If you've still got a fair amount used for cache then you're\nprobably ok there.\n\nWhat are your settings in postgresql.conf that aren't default? How's\nthe behaviour as you run 1, then 2, then 3, then 4 and so on? Where's\nthe \"knee\" with this behaviour and what are you running out of, disk IO\nor memory or memory bandwidth.\n\nAre you using iostat/vmstat/top/free/ipcs to check resource usage under\nload?\n", "msg_date": "Tue, 25 Oct 2005 15:24:32 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why different execution times for different" } ]
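
A quick way to answer the "what are your settings that aren't default" question above from inside the database; a sketch only (the sort parameter is sort_mem on 7.x and work_mem from 8.0 on, so both names are listed and the one that does not exist simply returns no row):

-- Current values of the settings that most affect per-backend memory use.
SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'sort_mem', 'work_mem',
                'effective_cache_size', 'max_connections');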
[ { "msg_contents": "In this particular case both outer joins are to the same table, and\nthe where clause is applied to one or the other, so it's pretty easy\nto prove that they should generate identical results. I'll grant that\nthis is not generally very useful; but then, simple test cases often\ndon't look very useful.\n\nWe've had mixed results with PostgreSQL and queries with\nmultiple outer joins when the WHERE clause limits the results\nbased on columns from the optional tables. In at least one case\nwhich performs very well, we have enough tables to cause the\n\"genetic\" optimizer to kick in. (So I suppose there is a chance\nthat sometimes it won't perform well, although we haven't seen\nthat happen yet.)\n\nI can't speak to MySQL, but both Sybase and MaxDB handled\nsuch cases accurately, and chose a plan with very fast\nexecution. Sybase, however, spent 5 to 10 seconds in the\noptimizer finding the sub-second plan.\n\n-Kevin\n\n\n>>> Tom Lane <[email protected]> >>>\nRich Doughty <[email protected]> writes:\n> Tom Lane wrote:\n>> The reason these are different is that the second case constrains\nonly\n>> the last-to-be-joined table, so the full cartesian product of t and\nh1\n>> has to be formed. If this wasn't what you had in mind, you might be\n>> able to rearrange the order of the LEFT JOINs, but bear in mind that\n>> in general, changing outer-join ordering changes the results. (This\n>> is why the planner won't fix it for you.)\n\n> FWIW mysql 4.1 (and i'm no fan at all of mysql) completes both these\nqueries\n> in approximately 3 seconds.\n\nDoes mysql get the correct answer, though? It's hard to see how they do\nthis fast unless they (a) are playing fast and loose with the semantics,\nor (b) have very substantially more analysis logic for OUTER JOIN\nsemantics\nthan we do. Perhaps mysql 5.x is better about this sort of thing, but\nfor 4.x I'd definitely find theory (a) more plausible than (b).\n\nThe cases that would be interesting are those where rearranging the\nouter join order actually does change the correct answer --- it may not\nin this particular case, I haven't thought hard about it. It seems\nfairly likely to me that they are rearranging the join order here, and\nI'm just wondering whether they have the logic needed to verify that\nsuch a transformation is correct.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 25 Oct 2005 17:24:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer join query plans and performance" } ]
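
An illustration of the point being made above, using made-up tables t, h1 and h2 in place of the ones from the original query (which is not reproduced here): the WHERE clause touches only the last-joined table, so every row of t joined with h1 has to be built before anything can be filtered out.

-- Constrains only the last-to-be-joined table: the t/h1 product is formed first.
SELECT t.id
  FROM t
  LEFT JOIN h1 ON h1.t_id = t.id
  LEFT JOIN h2 ON h2.t_id = t.id
 WHERE h2.status = 'open';

-- Manually joining the constrained table first lets the filter cut rows early.
-- Here the results are identical because the WHERE clause rejects the
-- NULL-extended h2 rows anyway, but the planner does not prove that for you.
SELECT t.id
  FROM t
  LEFT JOIN h2 ON h2.t_id = t.id
  LEFT JOIN h1 ON h1.t_id = t.id
 WHERE h2.status = 'open';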
[ { "msg_contents": "I am creating a temporary table in every function that I execute.\nWhich I think is bout 100,000 temporary tables a day.\n\nWhat is the command for vacuuming these 3 tables?\n\nAlso I read about the auto vacuum of postgresql.\nHow can I execute this auto vacuum or the settings in the configuration?\n\n-----Original Message-----\nFrom: Alvaro Nunes Melo [mailto:[email protected]]\nSent: Tuesday, October 25, 2005 10:58 AM\nTo: Christian Paul B. Cosinas\nSubject: Re: [PERFORM] Temporary Table\n\nHi Christian,\n\nChristian Paul B. Cosinas wrote:\n\n> Does Creating Temporary table in a function and NOT dropping them \n> affects the performance of the database?\n>\nI believe it will depend on how many temporary tables you will create in a\ndaily basis. We had a performance problem caused by them, and by not\nmonitoring properly the database size. The pg_attribite, pg_class and\npg_depend tables grow a lot. When I found out that this was the problem I\nsaw some messages in the list archieve, and now the overall performance is\ngreat.\n\nWhat I do is daily run VACUUM FULL and REINDEX in this three tables.\n\nAlvaro\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Wed, 26 Oct 2005 02:15:37 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Temporary Table" }, { "msg_contents": "Hi! Here is the Specifications of My Server.\nI would really appreciate the best configuration of postgresql.conf for my\nsevrer.\n\nI have tried so many value in the parameters but It seems that I cannot get\nthe speed I want.\n\nOS: Redhat Linux\nCPU: Dual Xeon\nMemory: 6 gigabyte\nPostgreSQL Version 8.0\n\nMost of my queries are having Order by Clause, and group by clause.\nCreation of temporary table.\n\nThe biggest rows is about 3-5 million which I query almost every 5 seconds.\n\nI'm just wondering is it normal to have this result in my memory usage:\n total used free shared buffers cached\nMem: 6192460 6172488 19972 0 39904 5890824\n-/+ buffers/cache: 241760 5950700\nSwap: 2096472 0 2096472\n\nWhat does this mean?\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Wed, 26 Oct 2005 06:05:55 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Configuration Suggestion" } ]
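
The concrete commands for the two questions asked above, as a sketch: the three catalog tables that heavy temporary-table churn bloats, maintained the way the quoted reply describes, plus where autovacuum is switched on (contrib/pg_autovacuum runs as a separate daemon on 8.0; on 8.1 it is built in and enabled in postgresql.conf).

-- Reclaim catalog bloat left by ~100,000 temporary tables per day
-- (run as a superuser in a quiet period; VACUUM FULL takes exclusive locks).
VACUUM FULL ANALYZE pg_catalog.pg_class;
VACUUM FULL ANALYZE pg_catalog.pg_attribute;
VACUUM FULL ANALYZE pg_catalog.pg_depend;

-- The reply also rebuilds their indexes afterwards.
REINDEX TABLE pg_catalog.pg_class;
REINDEX TABLE pg_catalog.pg_attribute;
REINDEX TABLE pg_catalog.pg_depend;

-- 8.1 integrated autovacuum is enabled in postgresql.conf, not in SQL:
--   stats_start_collector = on
--   stats_row_level       = on
--   autovacuum            = on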
[ { "msg_contents": "where can i find bests practices for tunning postgresql?\n\n_________________________________________________________________\nConsigue aquí las mejores y mas recientes ofertas de trabajo en América \nLatina y USA: http://latam.msn.com/empleos/\n\n", "msg_date": "Tue, 25 Oct 2005 22:24:06 -0600", "msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "blue prints please" }, { "msg_contents": "2005/10/26, Sidar López Cruz <[email protected]>:\n> where can i find bests practices for tunning postgresql?\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n--\nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Wed, 26 Oct 2005 08:59:22 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blue prints please" }, { "msg_contents": "On Tue, Oct 25, 2005 at 22:24:06 -0600,\n Sidar López Cruz <[email protected]> wrote:\n> where can i find bests practices for tunning postgresql?\n\nYou should first read the documentation. For 8.1, that would be here:\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html\n\nThere is also good information on techdocs at:\nhttp://techdocs.postgresql.org/#techguides\n(Look under the subcategory \"optimising\".)\n", "msg_date": "Wed, 26 Oct 2005 13:08:46 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blue prints please" } ]
[ { "msg_contents": "what happend with postgresql 8.1b4 performance on query?\nplease help me !!!\n\nlook at this:\nselect count(*) from fotos where archivo not in (select archivo from \narchivos)\nAggregate (cost=4899037992.36..4899037992.37 rows=1 width=0)\n-> Seq Scan on fotos (cost=22598.78..4899037338.07 rows=261716 width=0)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=22598.78..39304.22 rows=805344 width=58)\n -> Seq Scan on archivos (cost=0.00..13141.44 rows=805344 \nwidth=58)\n\nI WILL DIE WAITING FOR QUERY RESPONSE !!!\n--\nCREATE TABLE archivos ( archivo varchar(20)) WITHOUT OIDS;\nCREATE INDEX archivos_archivo_idx ON archivos USING btree(archivo);\n~800000 rows\n--\nCREATE TABLE fotos\n(\ncedula varchar(20),\nnombre varchar(100),\napellido1 varchar(100),\napellido2 varchar(100),\narchivo varchar(20)\n) WITHOUT OIDS;\nCREATE INDEX fotos_archivo_idx ON fotos USING btree (archivo);\nCREATE INDEX fotos_cedula_idx ON fotos USING btree (cedula);\n~500000 rows\n\n_________________________________________________________________\nConsigue aquí las mejores y mas recientes ofertas de trabajo en América \nLatina y USA: http://latam.msn.com/empleos/\n\n", "msg_date": "Tue, 25 Oct 2005 22:26:43 -0600", "msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "zero performance on query" }, { "msg_contents": "On Tue, Oct 25, 2005 at 10:26:43PM -0600, Sidar López Cruz wrote:\n> look at this:\n> select count(*) from fotos where archivo not in (select archivo from \n> archivos)\n> Aggregate (cost=4899037992.36..4899037992.37 rows=1 width=0)\n> -> Seq Scan on fotos (cost=22598.78..4899037338.07 rows=261716 width=0)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=22598.78..39304.22 rows=805344 width=58)\n> -> Seq Scan on archivos (cost=0.00..13141.44 rows=805344 \n> width=58)\n\nNow, this is interesting; it seems to trigger exactly the same oddity as my\nquery did (at least one of them; the materialized sequential scan).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 26 Oct 2005 12:30:51 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zero performance on query" } ]
[ { "msg_contents": "That seems like a pretty horrible way to do that query, given the table sizes.\n\nWhat about something like:\n\nSELECT count(*)\nFROM fotos f\nLEFT JOIN archivo a USING(archivo)\nWHERE a.archivo IS NULL\n\nIncidentally, can someone explain what the \"Materialize\" subplan does? Is this new in 8.1?\n\nDmitri\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Sidar López Cruz\n> Sent: Wednesday, October 26, 2005 12:27 AM\n> To: [email protected]\n> Subject: [PERFORM] zero performance on query\n> \n> \n> what happend with postgresql 8.1b4 performance on query?\n> please help me !!!\n> \n> look at this:\n> select count(*) from fotos where archivo not in (select archivo from \n> archivos)\n> Aggregate (cost=4899037992.36..4899037992.37 rows=1 width=0)\n> -> Seq Scan on fotos (cost=22598.78..4899037338.07 rows=261716 \n> -> width=0)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=22598.78..39304.22 \n> rows=805344 width=58)\n> -> Seq Scan on archivos (cost=0.00..13141.44 \n> rows=805344 \n> width=58)\n> \n> I WILL DIE WAITING FOR QUERY RESPONSE !!!\n> --\n> CREATE TABLE archivos ( archivo varchar(20)) WITHOUT OIDS; \n> CREATE INDEX archivos_archivo_idx ON archivos USING \n> btree(archivo); ~800000 rows\n> --\n> CREATE TABLE fotos\n> (\n> cedula varchar(20),\n> nombre varchar(100),\n> apellido1 varchar(100),\n> apellido2 varchar(100),\n> archivo varchar(20)\n> ) WITHOUT OIDS;\n> CREATE INDEX fotos_archivo_idx ON fotos USING btree (archivo);\n> CREATE INDEX fotos_cedula_idx ON fotos USING btree (cedula);\n> ~500000 rows\n> \n> _________________________________________________________________\n> Consigue aquí las mejores y mas recientes ofertas de trabajo \n> en América \n> Latina y USA: http://latam.msn.com/empleos/\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \nThe information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer\n", "msg_date": "Wed, 26 Oct 2005 01:25:45 -0400", "msg_from": "\"Dmitri Bichko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zero performance on query" } ]
[ { "msg_contents": "> look at this:\n> select count(*) from fotos where archivo not in (select archivo from\n> archivos)\n> Aggregate (cost=4899037992.36..4899037992.37 rows=1 width=0)\n> -> Seq Scan on fotos (cost=22598.78..4899037338.07 rows=261716 width=0)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=22598.78..39304.22 rows=805344 width=58)\n> -> Seq Scan on archivos (cost=0.00..13141.44 rows=805344\n> width=58)\n> \n> I WILL DIE WAITING FOR QUERY RESPONSE !!!\n\nTry:\nselect count(*) from fotos f where not exists (select archivo from archivos a where a.archivo = f.archivo) \n\nselect count(*) from \n(\n\tselect archivo from fotos\n\t except\n\tselect archivo from archivos\t\n);\n", "msg_date": "Wed, 26 Oct 2005 08:05:21 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zero performance on query" }, { "msg_contents": "On Wed, Oct 26, 2005 at 08:05:21AM -0400, Merlin Moncure wrote:\n> select count(*) from fotos f where not exists (select archivo from archivos a where a.archivo = f.archivo) \n\nThis was an optimization before 7.4, but probably isn't anymore.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 26 Oct 2005 17:20:47 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zero performance on query" } ]
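
Spelling out the left-join form of the same query (the approach suggested a couple of messages back), which lets the planner scan each table once instead of re-checking the materialized subplan for every row; fotos and archivos are the tables defined earlier in the thread:

-- Count fotos whose archivo has no matching row in archivos.
SELECT count(*)
  FROM fotos f
  LEFT JOIN archivos a ON a.archivo = f.archivo
 WHERE a.archivo IS NULL;

On 8.1 this typically plans as a hash left join with a single sequential scan of each table.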
[ { "msg_contents": "Hi there.\n\nI am currently building a system, where it would be nice to use multiple \nlevels of views upon each other (it is a staticstics system, where \ntraceability is important).\n\nIs there any significant performance reduction in say 10 levels of views \ninstead of one giant, nested sql-statement ? I especially think exection \nplanner-wise.\n\nThe data mainly comes from one small to medium sized tabel (< 5 million \nrows) and a handfull small (< 5000 rows) support tables.\nThe hardware will be okay for the job, but nothing really fancy (specs \nare Xeon, 2G of memory, 6 SCSI-disks in a RAID1+0) . The base will be \nversion 8.1 provided that it gets out of beta around end-of-year.\n\nSvenne\n", "msg_date": "Wed, 26 Oct 2005 17:25:54 +0200", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "Perfomance of views" }, { "msg_contents": "Svenne Krap wrote:\n> Hi there.\n> \n> I am currently building a system, where it would be nice to use multiple \n> levels of views upon each other (it is a staticstics system, where \n> traceability is important).\n> \n> Is there any significant performance reduction in say 10 levels of views \n> instead of one giant, nested sql-statement ? I especially think exection \n> planner-wise.\n\nThe planner tries to push conditions \"inside\" views where it can. It's \nnot perfect though, and if you're writing a big query by hand you might \nbe able to do better than it.\n\nIn short, I'd test if you can.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 27 Oct 2005 10:33:40 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance of views" }, { "msg_contents": "What do you mean exactly but \"pushing conditions inside\" ?\n\nI don't think I will have the option of testing on the full queries, as \nthese take many days to write (the current ones, they are replacing on a \nmssql takes up more that 5kb of query). The current ones are nightmares \nfrom a maintaince standpoint.\n\nBasicly what the application is doing is selecting some base data from \nthe \"large\" table for a point in time (usually a quarter) and selects \nall matching auxilliare data from the other tables. They are made in a \ntime-travel like manner with a first and last useable date.\n\nThe ways I have considered was :\n1) write a big query in hand (not preferred as it gets hard to manage)\n2) write layers of views (still not prefered as I still have to remember \nto put on the right conditions everywhere)\n3) write layers of sql-functions (returning the right sets of rows from \nthe underlying tables) - which I prefer from a development angel .. it \ngets very clean and I cant forget a parameter anywhere.\n\nBut I seem to remember (and I have used PGSQL in production since 7.0) \nthat the planner has some problems with solution 3 (i.e. estimating the \ncost and rearranging the query), but frankly that would be the way I \nwould like to go.\n\nBased on the current (non-optimal) design and hardware constraints, I \nstill have to make sure, the query runs fairly optimal - that means the \nplanner must use indexes intelligently and other stuff as if it was \n(well-)written using solution 1.\n\nWhat do you think of the three solutions ? 
And is there some ressource \nabout the planners capabilites for someone like me (that is very used to \nwrite reasonably fast and complex sql, can read c-code, but does not \nreally want to dig into the source code)\n\nRegards\n\nSvenne\n\nRichard Huxton wrote:\n\n> Svenne Krap wrote:\n>\n>> Hi there.\n>>\n>> I am currently building a system, where it would be nice to use \n>> multiple levels of views upon each other (it is a staticstics system, \n>> where traceability is important).\n>>\n>> Is there any significant performance reduction in say 10 levels of \n>> views instead of one giant, nested sql-statement ? I especially think \n>> exection planner-wise.\n>\n>\n> The planner tries to push conditions \"inside\" views where it can. It's \n> not perfect though, and if you're writing a big query by hand you \n> might be able to do better than it.\n>\n> In short, I'd test if you can.\n\n\n", "msg_date": "Thu, 27 Oct 2005 13:01:09 +0200", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perfomance of views" }, { "msg_contents": "\nDon't forget to CC the list\n\n\nSvenne Krap wrote:\n> What do you mean exactly but \"pushing conditions inside\" ?\n\nIf I have something like \"SELECT * FROM complicated_view WHERE foo = 7\" \nthen the planner can look \"inside\" complicated_view and see where it can \nattach the condition \"foo=7\", rather than running the query and applying \nthe condition at the end.\n\nThere are cases where it is safe for the planner to do this, but it \nisn't smart enough to do so.\n\n> I don't think I will have the option of testing on the full queries, as \n> these take many days to write (the current ones, they are replacing on a \n> mssql takes up more that 5kb of query). The current ones are nightmares \n> from a maintaince standpoint.\n\nHmm - it sounds like they would be.\n\n> Basicly what the application is doing is selecting some base data from \n> the \"large\" table for a point in time (usually a quarter) and selects \n> all matching auxilliare data from the other tables. They are made in a \n> time-travel like manner with a first and last useable date.\n> \n> The ways I have considered was :\n> 1) write a big query in hand (not preferred as it gets hard to manage)\n\nAgreed.\n\n> 2) write layers of views (still not prefered as I still have to remember \n> to put on the right conditions everywhere)\n\nThis is what I'd probably do, but of course I don't have full \ninformation about your situation.\n\n> 3) write layers of sql-functions (returning the right sets of rows from \n> the underlying tables) - which I prefer from a development angel .. it \n> gets very clean and I cant forget a parameter anywhere.\n> \n> But I seem to remember (and I have used PGSQL in production since 7.0) \n> that the planner has some problems with solution 3 (i.e. estimating the \n> cost and rearranging the query), but frankly that would be the way I \n> would like to go.\n\nWell, 8.x can \"inline\" a simple sql function into a larger query, but it \ndoesn't sound like that will be enough in your case. 
Once a function \nbecomes a \"black box\" then there's not much the planner can do to figure \nout what to do.\n\n> Based on the current (non-optimal) design and hardware constraints, I \n> still have to make sure, the query runs fairly optimal - that means the \n> planner must use indexes intelligently and other stuff as if it was \n> (well-)written using solution 1.\n\nWell, #1,#2 are likely to be the most efficient, but you won't know for \nsure about #2 until you test it.\n\nThere are a couple of other options though:\n\n#4 - Write a set-returning function that breaks the query into steps and \nexecutes each in turn. So - fetch IDs from the main table in step 1 and \nstore them in a temporary table, join other tables in later steps.\n\n#5 - Write a function that writes your big query for you and either \nreturns the SQL to your application, or runs it and returns the results.\n\n> What do you think of the three solutions ? And is there some ressource \n> about the planners capabilites for someone like me (that is very used to \n> write reasonably fast and complex sql, can read c-code, but does not \n> really want to dig into the source code)\n\nThere is some stuff in the \"Internals\" section of the manuals and it \nmight be worth rummaging around on http://techdocs.postgresql.org\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 27 Oct 2005 12:29:38 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance of views" }, { "msg_contents": "On 10/27/2005 7:29 AM, Richard Huxton wrote:\n\n> Don't forget to CC the list\n> \n> \n> Svenne Krap wrote:\n>> What do you mean exactly but \"pushing conditions inside\" ?\n> \n> If I have something like \"SELECT * FROM complicated_view WHERE foo = 7\" \n> then the planner can look \"inside\" complicated_view and see where it can \n> attach the condition \"foo=7\", rather than running the query and applying \n> the condition at the end.\n\nSorry, but the planner doesn't attach the condition anywhere. It is the \nrewriter that takes the actual query, replaces the views rangetable and \nexpression entries with the actual underlying objects and adds the views \ncondition with an AND to the queries condition. Simply example:\n\nGiven a view\n\n create view v1 as select a1, b1, c2 from t1, t2 where a1 = a2;\n\nThe statement\n\n select * from v1 where b1 = 'foo';\n\nwill result in a parsetree equivalent to what you would get if the \noriginal query was\n\n select a1, b1, c2 from t1, t2 where (b1 = 'foo') and (a1 = a2);\n\nIt is the planners and optimizers job to recognize where in the \nexecution plan it can push qualifications down into filters or even \nscankeys. The planner should be able to realize that\n\n select * from v1 where a1 = 42;\n\nis in fact equivalent to\n\n select a1, b1, c2 from t1, t2 where a1 = 42 and a1 = a2;\n\nas well as\n\n select a1, b1, c2 from t1, t2 where a1 = 42 and a1 = a2 and a2 = 42;\n\nThis very last addition of \"a2 = 42\" because of \"a2 = a1 = 42\" allows it \nto put a constant scankey onto the scan of t2. The 8.0 planner does \nthat, so the resulting query plan for the last three selects above is \nabsolutely identical.\n\n> \n> There are cases where it is safe for the planner to do this, but it \n> isn't smart enough to do so.\n\nExample?\n\n\nJan\n\n> \n>> I don't think I will have the option of testing on the full queries, as \n>> these take many days to write (the current ones, they are replacing on a \n>> mssql takes up more that 5kb of query). 
The current ones are nightmares \n>> from a maintaince standpoint.\n> \n> Hmm - it sounds like they would be.\n> \n>> Basicly what the application is doing is selecting some base data from \n>> the \"large\" table for a point in time (usually a quarter) and selects \n>> all matching auxilliare data from the other tables. They are made in a \n>> time-travel like manner with a first and last useable date.\n>> \n>> The ways I have considered was :\n>> 1) write a big query in hand (not preferred as it gets hard to manage)\n> \n> Agreed.\n> \n>> 2) write layers of views (still not prefered as I still have to remember \n>> to put on the right conditions everywhere)\n> \n> This is what I'd probably do, but of course I don't have full \n> information about your situation.\n> \n>> 3) write layers of sql-functions (returning the right sets of rows from \n>> the underlying tables) - which I prefer from a development angel .. it \n>> gets very clean and I cant forget a parameter anywhere.\n>> \n>> But I seem to remember (and I have used PGSQL in production since 7.0) \n>> that the planner has some problems with solution 3 (i.e. estimating the \n>> cost and rearranging the query), but frankly that would be the way I \n>> would like to go.\n> \n> Well, 8.x can \"inline\" a simple sql function into a larger query, but it \n> doesn't sound like that will be enough in your case. Once a function \n> becomes a \"black box\" then there's not much the planner can do to figure \n> out what to do.\n> \n>> Based on the current (non-optimal) design and hardware constraints, I \n>> still have to make sure, the query runs fairly optimal - that means the \n>> planner must use indexes intelligently and other stuff as if it was \n>> (well-)written using solution 1.\n> \n> Well, #1,#2 are likely to be the most efficient, but you won't know for \n> sure about #2 until you test it.\n> \n> There are a couple of other options though:\n> \n> #4 - Write a set-returning function that breaks the query into steps and \n> executes each in turn. So - fetch IDs from the main table in step 1 and \n> store them in a temporary table, join other tables in later steps.\n> \n> #5 - Write a function that writes your big query for you and either \n> returns the SQL to your application, or runs it and returns the results.\n> \n>> What do you think of the three solutions ? And is there some ressource \n>> about the planners capabilites for someone like me (that is very used to \n>> write reasonably fast and complex sql, can read c-code, but does not \n>> really want to dig into the source code)\n> \n> There is some stuff in the \"Internals\" section of the manuals and it \n> might be worth rummaging around on http://techdocs.postgresql.org\n> \n> --\n> Richard Huxton\n> Archonet Ltd\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n", "msg_date": "Thu, 27 Oct 2005 09:41:26 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance of views" }, { "msg_contents": "Svenne Krap <[email protected]> writes:\n> The ways I have considered was :\n> 1) write a big query in hand (not preferred as it gets hard to manage)\n> 2) write layers of views (still not prefered as I still have to remember \n> to put on the right conditions everywhere)\n> 3) write layers of sql-functions (returning the right sets of rows from \n> the underlying tables) - which I prefer from a development angel .. it \n> gets very clean and I cant forget a parameter anywhere.\n\n#1 and #2 should behave pretty similarly, assuming that the \"one big\nquery\" would have been structured the same way as the nest of views is.\n#3 unfortunately will pretty much suck, because there's no chance for\ncross-level optimization.\n\nThere's been some discussion of inline-expanding SQL functions that\nreturn sets when they are called in FROM, which would make a SQL\nfunction that contains just a SELECT effectively equivalent to a view\nas far as the planner's powers of optimization go. No one's tried to\nmake it happen yet though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Oct 2005 10:38:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance of views " }, { "msg_contents": "Tom Lane wrote:\n\n>There's been some discussion of inline-expanding SQL functions that\n>return sets when they are called in FROM, which would make a SQL\n>function that contains just a SELECT effectively equivalent to a view\n>as far as the planner's powers of optimization go. No one's tried to\n>make it happen yet though.\n> \n>\n\nThis is exactly what would be brilliant in my case. Use the functions as \na kind of strict, parameterized views, that in the planner (or wherever) \ngets replaced down to a simple (?!?!) sql-statement.\nThis would imho be highly valuable for almost any kind of complex \ntime-travel application (and surely dozens of other applications).\n\nAnd before anyone suggests it, I don't code C well enough (*cough* \nrusty) to try to do it myself. I would apriciate if it went on the todo \nfor 8.2 though. (I might even be willing to sponsor some money (a single \nor perhpas two thousands of US dollars) for getting it done and release \nit immediately under postgresql standard license (BSD)).\n\nI by the way also support the idea of a way to force a table into a \nPgSQL managed cache like discussed a while ago. Sometimes overall speed \nfor the system is less important than speed of a single query.\n\nI must also say, that I am very impressed with the performance \nenhancements of 8.1 beta, the bitmap index scans are amazing ! 
Good job, \nguys - PgSQL has come a far way from 7.0 (where I started) and the \nfuture looks bright ;)\n\nSvenne\n", "msg_date": "Thu, 27 Oct 2005 17:10:29 +0200", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perfomance of views" }, { "msg_contents": "Jan Wieck wrote:\n> On 10/27/2005 7:29 AM, Richard Huxton wrote:\n>> Svenne Krap wrote:\n>>\n>>> What do you mean exactly but \"pushing conditions inside\" ?\n>>\n>> If I have something like \"SELECT * FROM complicated_view WHERE foo = \n>> 7\" then the planner can look \"inside\" complicated_view and see where \n>> it can attach the condition \"foo=7\", rather than running the query and \n>> applying the condition at the end.\n> \n> Sorry, but the planner doesn't attach the condition anywhere. It is the \n> rewriter that takes the actual query, replaces the views rangetable and \n> expression entries with the actual underlying objects and adds the views \n> condition with an AND to the queries condition. Simply example:\n\nThanks for the correction Jan.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 27 Oct 2005 16:40:39 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance of views" } ]
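
Jan's example from the thread above, spelled out so it can be created and EXPLAINed directly; only the names from his sketch are used, plus tiny throwaway tables:

CREATE TABLE t1 (a1 integer, b1 text);
CREATE TABLE t2 (a2 integer, c2 text);

CREATE VIEW v1 AS
    SELECT a1, b1, c2
      FROM t1, t2
     WHERE a1 = a2;

-- The rewriter folds the view into the query and ANDs the qualification in,
-- so this plans the same as querying t1/t2 directly with a1 = 42 (and a2 = 42).
EXPLAIN SELECT * FROM v1 WHERE a1 = 42;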
[ { "msg_contents": "I currently have an infrastructure that's based around SQL Server 2000. \nI'm trying to move some of the data over to Postgres, partly to reduce \nthe load on the SQL Server machine and partly because some queries I'd \nlike to run are too slow to be usuable on SQL Server. Mostly likely over \ntime more and more data will move to Postgres. To help with this \ntransition, I created a Postgres plugin which queries the contents of \nSQL Server tables via ODBC and returns a recordset. I then create views \naround the function and I can then read from the SQL Server tables as if \nthey were local to Postgres.\n\nI have four tables involved in this query. The major one is \nprovider_location, which has about 2 million rows and is stored in \nPostgres. The other three are stored in SQL Server and accessed via \nviews like I mentioned above. They are network, network_state, and \nxlat_tbl, and contain about 40, 250, and 500 rows. A simple select * \nfrom any of the views takes somewhere around 50ms.\n\nThis query in question was written for SQL Server. I have no idea why it \nwas written in the form it was, but it ran at a reasonable speed when \nall the tables were on one machine. Running the original query (after \nadjusting for syntax differences) on Postgres resulted in a query that \nwould run for hours, continually allocating more RAM. I eventually had \nto kill the process as it was devouring swap space. My assumption is \nthat Postgres is doing the ODBC query for each row of a join somewhere, \neven though the function is marked stable (immutable didn't make a \ndifference).\n\nFlattening the query made it run in a few minutes. I think the flattened \nquery is easier to read, and it runs faster, so I'm not complaining that \nI can't use the original query. 
But I'd like to know exactly what causes \nthe bottleneck in the original query, and if there are other approaches \nto solving the issue in case I need them in future queries.\n\nBelow is the original query, the explain output, the modified query, the \nexplain output, and the explain analyze output.\n\nEd\n\nselect\n pl.network_id,n.name as \nnetwork_name,pl.state_cd,count(pl.state_cd) as provider_count\n from development.provider_location pl,development.network n\n where pl.network_id in (select ns.network_id\n from development.network_state ns\n where ns.from_date < current_time\n and (ns.thru_date > current_time or \nns.thru_date is null)\n and (ns.state_cd = pl.state_cd or ns.state_cd='')\n )\n and pl.network_id = n.network_id\n and pl.state_cd is not null\n and pl.state_cd in (select field_value from development.xlat_tbl \nwhere field_name ='State_CD')\n group by pl.state_cd,n.name,pl.network_id\n order by pl.state_cd,network_name;\n\nExplain:\n\nGroupAggregate (cost=80548547.83..80549256.80 rows=47265 width=52)\n -> Sort (cost=80548547.83..80548665.99 rows=47265 width=52)\n Sort Key: pl.state_cd, odbc_select.name, pl.network_id\n -> Hash Join (cost=30.01..80543806.14 rows=47265 width=52)\n Hash Cond: ((\"outer\".network_id)::text = \n(\"inner\".network_id)::text)\n -> Hash IN Join (cost=15.01..80540931.61 rows=9453 width=20)\n Hash Cond: ((\"outer\".state_cd)::text = \n(\"inner\".field_value)::text)\n -> Seq Scan on provider_location pl \n(cost=0.00..80535150.29 rows=1890593 width=20)\n Filter: ((state_cd IS NOT NULL) AND (subplan))\n SubPlan\n -> Function Scan on odbc_select \n(cost=0.00..42.50 rows=2 width=32)\n Filter: (((from_date)::text < \n(('now'::text)::time(6) with time zone)::text) AND (((thru_date)::text > \n(('now'::text)::time(6) with time zone)::text) OR (thru_date IS NULL)) \nAND (((state_cd)::text = ($0)::text) OR ((state_cd)::text = ''::text)))\n -> Hash (cost=15.00..15.00 rows=5 width=32)\n -> Function Scan on odbc_select \n(cost=0.00..15.00 rows=5 width=32)\n Filter: ((field_name)::text = \n'State_CD'::text)\n -> Hash (cost=12.50..12.50 rows=1000 width=64)\n -> Function Scan on odbc_select (cost=0.00..12.50 \nrows=1000 width=64)\n\n\nFlattened query:\n\nselect\n pl.network_id,\n n.name as network_name,\n pl.state_cd,\n count(pl.state_cd) as provider_count\nfrom\n development.network n,\n development.network_state ns,\n development.xlat_tbl xt,\n development.provider_location pl\nwhere\n xt.field_name = 'State_CD'\n and n.network_id = ns.network_id\n and ns.from_date < current_timestamp\n and (ns.thru_date > current_timestamp or ns.thru_date is null)\n and (ns.state_cd = pl.state_cd or ns.state_cd='')\n and pl.network_id = n.network_id\n and pl.state_cd is not null\n and pl.state_cd = xt.field_value\ngroup by\n pl.state_cd,\n n.name,\n pl.network_id\norder by\n pl.state_cd,\n network_name;\n\nExplain:\n\nGroupAggregate (cost=190089.94..190129.90 rows=2664 width=52)\n -> Sort (cost=190089.94..190096.60 rows=2664 width=52)\n Sort Key: pl.state_cd, odbc_select.name, pl.network_id\n -> Merge Join (cost=189895.73..189938.37 rows=2664 width=52)\n Merge Cond: (\"outer\".\"?column4?\" = \"inner\".\"?column3?\")\n -> Sort (cost=189833.40..189834.73 rows=533 width=52)\n Sort Key: (pl.network_id)::text\n -> Hash Join (cost=42.80..189809.26 rows=533 width=52)\n Hash Cond: ((\"outer\".network_id)::text = \n(\"inner\".network_id)::text)\n Join Filter: (((\"inner\".state_cd)::text = \n(\"outer\".state_cd)::text) OR ((\"inner\".state_cd)::text = ''::text))\n -> Hash Join 
(cost=15.01..185908.10 \nrows=94530 width=20)\n Hash Cond: ((\"outer\".state_cd)::text = \n(\"inner\".field_value)::text)\n -> Seq Scan on provider_location pl \n(cost=0.00..166041.86 rows=3781186 width=20)\n Filter: (state_cd IS NOT NULL)\n -> Hash (cost=15.00..15.00 rows=5 \nwidth=32)\n -> Function Scan on odbc_select \n(cost=0.00..15.00 rows=5 width=32)\n Filter: ((field_name)::text \n= 'State_CD'::text)\n -> Hash (cost=27.50..27.50 rows=113 width=64)\n -> Function Scan on odbc_select \n(cost=0.00..27.50 rows=113 width=64)\n Filter: ((from_date < \n('now'::text)::timestamp(6) with time zone) AND ((thru_date > \n('now'::text)::timestamp(6) with time zone) OR (thru_date IS NULL)))\n -> Sort (cost=62.33..64.83 rows=1000 width=64)\n Sort Key: (odbc_select.network_id)::text\n -> Function Scan on odbc_select (cost=0.00..12.50 \nrows=1000 width=64)\n\nExplain Analyze:\n\n\"GroupAggregate (cost=190089.94..190129.90 rows=2664 width=52) (actual \ntime=254757.742..261725.786 rows=350 loops=1)\"\n\" -> Sort (cost=190089.94..190096.60 rows=2664 width=52) (actual \ntime=254757.438..257267.224 rows=1316774 loops=1)\"\n\" Sort Key: pl.state_cd, odbc_select.name, pl.network_id\"\n\" -> Merge Join (cost=189895.73..189938.37 rows=2664 width=52) \n(actual time=189325.877..203579.050 rows=1316774 loops=1)\"\n\" Merge Cond: (\"outer\".\"?column4?\" = \"inner\".\"?column3?\")\"\n\" -> Sort (cost=189833.40..189834.73 rows=533 width=52) \n(actual time=189282.504..192284.766 rows=1316774 loops=1)\"\n\" Sort Key: (pl.network_id)::text\"\n\" -> Hash Join (cost=42.80..189809.26 rows=533 \nwidth=52) (actual time=1177.758..151180.472 rows=1316774 loops=1)\"\n\" Hash Cond: ((\"outer\".network_id)::text = \n(\"inner\".network_id)::text)\"\n\" Join Filter: (((\"inner\".state_cd)::text = \n(\"outer\".state_cd)::text) OR ((\"inner\".state_cd)::text = ''::text))\"\n\" -> Hash Join (cost=15.01..185908.10 \nrows=94530 width=20) (actual time=1095.949..50495.766 rows=1890457 loops=1)\"\n\" Hash Cond: ((\"outer\".state_cd)::text = \n(\"inner\".field_value)::text)\"\n\" -> Seq Scan on provider_location pl \n(cost=0.00..166041.86 rows=3781186 width=20) (actual \ntime=1071.011..36224.961 rows=1891183 loops=1)\"\n\" Filter: (state_cd IS NOT NULL)\"\n\" -> Hash (cost=15.00..15.00 rows=5 \nwidth=32) (actual time=24.832..24.832 rows=0 loops=1)\"\n\" -> Function Scan on odbc_select \n(cost=0.00..15.00 rows=5 width=32) (actual time=24.469..24.724 rows=51 \nloops=1)\"\n\" Filter: ((field_name)::text \n= 'State_CD'::text)\"\n\" -> Hash (cost=27.50..27.50 rows=113 \nwidth=64) (actual time=81.684..81.684 rows=0 loops=1)\"\n\" -> Function Scan on odbc_select \n(cost=0.00..27.50 rows=113 width=64) (actual time=75.288..81.200 \nrows=211 loops=1)\"\n\" Filter: ((from_date < \n('now'::text)::timestamp(6) with time zone) AND ((thru_date > \n('now'::text)::timestamp(6) with time zone) OR (thru_date IS NULL)))\"\n\" -> Sort (cost=62.33..64.83 rows=1000 width=64) (actual \ntime=43.301..1258.901 rows=1289952 loops=1)\"\n\" Sort Key: (odbc_select.network_id)::text\"\n\" -> Function Scan on odbc_select (cost=0.00..12.50 \nrows=1000 width=64) (actual time=43.010..43.109 rows=34 loops=1)\"\n\"Total runtime: 261902.966 ms\"\n\n", "msg_date": "Wed, 26 Oct 2005 16:33:36 -0400", "msg_from": "\"Edward Di Geronimo Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues with custom functions" }, { "msg_contents": "\"Edward Di Geronimo Jr.\" <[email protected]> writes:\n> ... 
I'd like to know exactly what causes \n> the bottleneck in the original query, and if there are other approaches \n> to solving the issue in case I need them in future queries.\n\nThis is fairly hard to read ... it would help a lot if you had shown the\nview definitions that the query relies on, so that we could match up the\nplan elements with the query a bit better.\n\nHowever, I'm thinking the problem is with this IN clause:\n\n> where pl.network_id in (select ns.network_id\n> from development.network_state ns\n> where ns.from_date < current_time\n> and (ns.thru_date > current_time or \n> ns.thru_date is null)\n> and (ns.state_cd = pl.state_cd or ns.state_cd='')\n> )\n\nBecause the sub-SELECT references pl.state_cd (an outer variable\nreference), there's no chance of optimizing this into a join-style IN.\nSo the sub-SELECT has to be re-executed for each row of the outer query.\n\nBTW, it's not apparent to me that your \"flattened\" query gives the same\nanswers as the original. What if a pl row can join to more than one\nrow of the ns output?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Oct 2005 19:18:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with custom functions " }, { "msg_contents": "Tom Lane wrote:\n\n>This is fairly hard to read ... it would help a lot if you had shown the\n>view definitions that the query relies on, so that we could match up the\n>plan elements with the query a bit better.\n> \n>\nI wasn't sure how helpful it would be. Here they are:\n\ncreate view development.network as\nselect * from odbc_select('amsterdam', 'bob.dbo.network') as (\n network_id varchar ,\n status_cd varchar ,\n name varchar ,\n network_action varchar ,\n physical_type_cd varchar ,\n service_type_cd varchar ,\n parent_network_id varchar ,\n commission_network_id varchar ,\n rep_id varchar ,\n tax_id varchar ,\n url varchar ,\n entry_method_cd varchar ,\n entry_individual_type_cd varchar ,\n entry_individual_id varchar ,\n service varchar (30),\n cost_routine varchar (150),\n commission_rate numeric(5, 5) ,\n directory_number varchar (11),\n search_url varchar (200),\n member_rate numeric(15, 2) ,\n free_months numeric(18, 0) ,\n eligibility_hound varchar (60)\n)\n\ncreate view development.network_state as\nselect * from odbc_select('amsterdam', 'bob.dbo.network_state') as (\n network_id varchar,\n state_cd varchar,\n product varchar (100) ,\n status_cd varchar,\n entry_method_cd varchar,\n entry_individual_type_cd varchar,\n entry_individual_id varchar,\n logo_id int ,\n from_date timestamp ,\n thru_date timestamp\n)\n\ncreate view development.xlat_tbl as\nselect * from odbc_select('amsterdam', 'xlat_tbl') as (\n field_name varchar ,\n field_value varchar ,\n status_cd varchar ,\n descr varchar ,\n descrshort varchar ,\n entry_method_cd varchar ,\n entry_individual_type_cd varchar ,\n entry_individual_id varchar\n)\n\n>However, I'm thinking the problem is with this IN clause:\n>\n> \n>\n>> where pl.network_id in (select ns.network_id\n>> from development.network_state ns\n>> where ns.from_date < current_time\n>> and (ns.thru_date > current_time or \n>>ns.thru_date is null)\n>> and (ns.state_cd = pl.state_cd or ns.state_cd='')\n>> )\n>> \n>>\n>\n>Because the sub-SELECT references pl.state_cd (an outer variable\n>reference), there's no chance of optimizing this into a join-style IN.\n>So the sub-SELECT has to be re-executed for each row of the outer query.\n>\n>BTW, it's not apparent to me that your \"flattened\" 
query gives the same\n>answers as the original. What if a pl row can join to more than one\n>row of the ns output?\n> \n>\nWell, I guess you are right. As far as the database can tell, the \nqueries aren't the same. In practice, they are. network_state is \nessentially tracking our contract dates with different discount \nhealthcare networks. from_date and thru_date track the timeframe we use \nthat network, with thru_date being null for the current networks. Some \nnetworks cover all states, in which case state_cd is an empty string. \nOtherwise, there will be a row per state covered. I can't think of any \nway to enforce data integrity on this other than maybe via triggers. Is \nthere any way to make things more clear to the database (both in general \nand on the postgres end of this) ? At the moment, the SQL Server table \nhas the primary key defined as (network_id, state_cd, product), which is \nok for now, but I'm realizing going forward could be an issue if we ever \nstopped using a network in a state and then went back to it.\n\nI guess the next question is, is there any way I can give postgres hints \nabout what constraints exist on the data in these views?\n\nEd\n\n\n\n\n\n\nTom Lane wrote:\n\nThis is fairly hard to read ... it would help a lot if you had shown the\nview definitions that the query relies on, so that we could match up the\nplan elements with the query a bit better.\n \n\nI wasn't sure how helpful it would be. Here they are:\n\ncreate view development.network as \nselect * from odbc_select('amsterdam', 'bob.dbo.network') as (\n    network_id varchar ,\n    status_cd varchar  ,\n    name varchar  ,\n    network_action varchar  ,\n    physical_type_cd varchar  ,\n    service_type_cd varchar  ,\n    parent_network_id varchar  ,\n    commission_network_id varchar  ,\n    rep_id varchar  ,\n    tax_id varchar  ,\n    url varchar  ,\n    entry_method_cd varchar  ,\n    entry_individual_type_cd varchar  ,\n    entry_individual_id varchar  ,\n    service varchar (30),\n    cost_routine varchar (150),\n    commission_rate numeric(5, 5)  ,\n    directory_number varchar (11),\n    search_url varchar (200),\n    member_rate numeric(15, 2)  ,\n    free_months numeric(18, 0)  ,\n    eligibility_hound varchar (60)\n)\n\ncreate view development.network_state as \nselect * from odbc_select('amsterdam', 'bob.dbo.network_state') as (\n    network_id varchar,\n    state_cd varchar,\n    product varchar (100) ,\n    status_cd varchar,\n    entry_method_cd varchar,\n    entry_individual_type_cd varchar,\n    entry_individual_id varchar,\n    logo_id int ,\n    from_date timestamp ,\n    thru_date timestamp \n)\n\ncreate view development.xlat_tbl as\nselect * from odbc_select('amsterdam', 'xlat_tbl') as (\n    field_name varchar  ,\n    field_value varchar  ,\n    status_cd varchar  ,\n    descr varchar  ,\n    descrshort varchar  ,\n    entry_method_cd varchar  ,\n    entry_individual_type_cd varchar  ,\n    entry_individual_id varchar \n)\n\n\nHowever, I'm thinking the problem is with this IN clause:\n\n \n\n where pl.network_id in (select ns.network_id\n from development.network_state ns\n where ns.from_date < current_time\n and (ns.thru_date > current_time or \nns.thru_date is null)\n and (ns.state_cd = pl.state_cd or ns.state_cd='')\n )\n \n\n\nBecause the sub-SELECT references pl.state_cd (an outer variable\nreference), there's no chance of optimizing this into a join-style IN.\nSo the sub-SELECT has to be re-executed for each row of the outer query.\n\nBTW, it's not apparent to me 
that your \"flattened\" query gives the same\nanswers as the original. What if a pl row can join to more than one\nrow of the ns output?\n \n\nWell, I guess you are right. As far as the database can tell, the\nqueries aren't the same. In practice, they are. network_state is\nessentially tracking our contract dates with different discount\nhealthcare networks. from_date and thru_date track the timeframe we use\nthat network, with thru_date being null for the current networks. Some\nnetworks cover all states, in which case state_cd is an empty string.\nOtherwise, there will be a row per state covered. I can't think of any\nway to enforce data integrity on this other than maybe via triggers. Is\nthere any way to make things more clear to the database (both in\ngeneral and on the postgres end of this) ? At the moment, the SQL\nServer table has the primary key defined as (network_id, state_cd,\nproduct), which is ok for now, but I'm realizing going forward could be\nan issue if we ever stopped using a network in a state and then went\nback to it.\n\nI guess the next question is, is there any way I can give postgres\nhints about what constraints exist on the data in these views?\n\nEd", "msg_date": "Thu, 27 Oct 2005 10:35:41 -0400", "msg_from": "\"Edward Di Geronimo Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues with custom functions" } ]
[ { "msg_contents": "I am running Postgre 7.4 on FreeBSD. The main table have 2 million record\n(we would like to do at least 10 mil or more). It is mainly a FIFO structure\nwith maybe 200,000 new records coming in each day that displace the older\nrecords.\n\nWe have a GUI that let user browser through the record page by page at about\n25 records a time. (Don't ask me why but we have to have this GUI). This\ntranslates to something like\n\nselect count(*) from table <-- to give feedback about the DB size\nselect * from table order by date limit 25 offset 0\n\nTables seems properly indexed, with vacuum and analyze ran regularly. Still\nthis very basic SQLs takes up to a minute run.\n\nI read some recent messages that select count(*) would need a table scan for\nPostgre. That's disappointing. But I can accept an approximation if there\nare some way to do so. But how can I optimize select * from table order by\ndate limit x offset y? One minute response time is not acceptable.\n\nAny help would be appriciated.\n\nWy\n\nI am running Postgre 7.4 on FreeBSD. The main table have 2 million\nrecord (we would like to do at least 10 mil or more). It is mainly a\nFIFO structure with maybe 200,000 new records coming in each day that\ndisplace the older records.\n\nWe have a GUI that let user browser through the record page by page at\nabout 25 records a time. (Don't ask me why but we have to have this\nGUI). This translates to something like\n\n  select count(*) from table   <-- to give feedback about the DB size\n  select * from table order by date limit 25 offset 0\n\nTables seems properly indexed, with vacuum and analyze ran regularly. Still this very basic SQLs takes up to a minute run.\n\nI read some recent messages that select count(*) would need a table\nscan for Postgre. That's disappointing. But I can accept an\napproximation if there are some way to do so. But how can I optimize\nselect * from table order by date limit x offset y? One minute response\ntime is not acceptable.\n\nAny help would be appriciated.\n\nWy", "msg_date": "Wed, 26 Oct 2005 13:41:17 -0700", "msg_from": "aurora <[email protected]>", "msg_from_op": true, "msg_subject": "browsing table with 2 million records" }, { "msg_contents": "Do you have an index on the date column? Can you post an EXPLAIN\nANALYZE for the slow query?\n\n-- Mark Lewis\n\nOn Wed, 2005-10-26 at 13:41 -0700, aurora wrote:\n> I am running Postgre 7.4 on FreeBSD. The main table have 2 million\n> record (we would like to do at least 10 mil or more). It is mainly a\n> FIFO structure with maybe 200,000 new records coming in each day that\n> displace the older records.\n> \n> We have a GUI that let user browser through the record page by page at\n> about 25 records a time. (Don't ask me why but we have to have this\n> GUI). This translates to something like\n> \n> select count(*) from table <-- to give feedback about the DB size\n> select * from table order by date limit 25 offset 0\n> \n> Tables seems properly indexed, with vacuum and analyze ran regularly.\n> Still this very basic SQLs takes up to a minute run.\n> \n> I read some recent messages that select count(*) would need a table\n> scan for Postgre. That's disappointing. But I can accept an\n> approximation if there are some way to do so. But how can I optimize\n> select * from table order by date limit x offset y? 
One minute\n> response time is not acceptable.\n> \n> Any help would be appriciated.\n> \n> Wy\n> \n> \n\n", "msg_date": "Wed, 26 Oct 2005 13:59:44 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "On Wed, 2005-10-26 at 15:41, aurora wrote:\n> I am running Postgre 7.4 on FreeBSD. The main table have 2 million\n> record (we would like to do at least 10 mil or more). It is mainly a\n> FIFO structure with maybe 200,000 new records coming in each day that\n> displace the older records.\n> \n> We have a GUI that let user browser through the record page by page at\n> about 25 records a time. (Don't ask me why but we have to have this\n> GUI). This translates to something like\n> \n> select count(*) from table <-- to give feedback about the DB size\n> select * from table order by date limit 25 offset 0\n> \n> Tables seems properly indexed, with vacuum and analyze ran regularly.\n> Still this very basic SQLs takes up to a minute run.\n> \n> I read some recent messages that select count(*) would need a table\n> scan for Postgre. That's disappointing. But I can accept an\n> approximation if there are some way to do so. But how can I optimize\n> select * from table order by date limit x offset y? One minute\n> response time is not acceptable.\n\nHave you run your script without the select count(*) part and timed it?\n\nWhat does\n\nexplain analyze select * from table order by date limit 25 offset 0\n\nsay? \n\nIs date indexed?\n", "msg_date": "Wed, 26 Oct 2005 16:06:38 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "\n> We have a GUI that let user browser through the record page by page at\n> about 25 records a time. (Don't ask me why but we have to have this\n> GUI). This translates to something like\n> \n> select count(*) from table <-- to give feedback about the DB size\n\nDo you have a integer field that is an ID that increments? E.g; serial?\n\n> select * from table order by date limit 25 offset 0\n\nYou could use a cursor.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Tables seems properly indexed, with vacuum and analyze ran regularly.\n> Still this very basic SQLs takes up to a minute run.\n> \n> I read some recent messages that select count(*) would need a table\n> scan for Postgre. That's disappointing. But I can accept an\n> approximation if there are some way to do so. But how can I optimize\n> select * from table order by date limit x offset y? One minute\n> response time is not acceptable.\n> \n> Any help would be appriciated.\n> \n> Wy\n> \n> \n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n", "msg_date": "Wed, 26 Oct 2005 14:09:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": ">> select * from table order by date limit 25 offset 0\n\n> Do you have an index on the date column? Can you post an EXPLAIN\n> ANALYZE for the slow query?\n\nWow! Now that I look again there are actually 2 date fields. One is indexed\nand one is not. Order by was done on the column without index. 
Using the\nindexed column turn a seq scan into index scan and the query performance is\ntotally fine now.\n\nIt would still be helpful if select count(*) can perform well.\n\nThanks!\n\nWy\n\n>>\n  select * from table order by date limit 25 offset 0\n\n> Do you have an index on the date column?  Can you post an EXPLAIN\n> ANALYZE for the slow query?\n\nWow! Now that I look again there are actually 2 date fields. One is\nindexed and one is not. Order by was done on the column without index.\nUsing the indexed column turn a seq scan into index scan and the query\nperformance is totally fine now.\n\nIt would still be helpful if select count(*) can perform well.\n\nThanks!\n\nWy", "msg_date": "Wed, 26 Oct 2005 14:22:58 -0700", "msg_from": "aurora <[email protected]>", "msg_from_op": true, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "You could also create your own index so to speak as a table that\nsimply contains a list of primary keys and an order value field that\nyou can use as your offset. This can be kept in sync with the master\ntable using triggers pretty easily. 2 million is not very much if you\nonly have a integer pkey, and an integer order value, then you can\njoin it against the main table.\n\ncreate table my_index_table (\nprimary_key_value int,\norder_val int,\nprimary key (primary_key_value));\n\ncreate index my_index_table_order_val_i on index_table (order_val);\n\nselect * from main_table a, my_index_table b where b.order_val>=25 and\nb.order_val<50 and a.primary_key_id=b.primary_key_id\n\nIf the data updates alot then this won't work as well though as the\nindex table will require frequent updates to potentialy large number\nof records (although a small number of pages so it still won't be\nhorrible).\n\nAlex Turner\nNetEconomist\n\nOn 10/26/05, Joshua D. Drake <[email protected]> wrote:\n>\n> > We have a GUI that let user browser through the record page by page at\n> > about 25 records a time. (Don't ask me why but we have to have this\n> > GUI). This translates to something like\n> >\n> > select count(*) from table <-- to give feedback about the DB size\n>\n> Do you have a integer field that is an ID that increments? E.g; serial?\n>\n> > select * from table order by date limit 25 offset 0\n>\n> You could use a cursor.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n> >\n> > Tables seems properly indexed, with vacuum and analyze ran regularly.\n> > Still this very basic SQLs takes up to a minute run.\n> >\n> > I read some recent messages that select count(*) would need a table\n> > scan for Postgre. That's disappointing. But I can accept an\n> > approximation if there are some way to do so. But how can I optimize\n> > select * from table order by date limit x offset y? One minute\n> > response time is not acceptable.\n> >\n> > Any help would be appriciated.\n> >\n> > Wy\n> >\n> >\n> --\n> The PostgreSQL Company - Command Prompt, Inc. 
1.503.667.4564\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n", "msg_date": "Wed, 26 Oct 2005 17:28:36 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "aurora <[email protected]> writes:\n> It would still be helpful if select count(*) can perform well.\n\nIf you can settle for an approximate count, pg_class.reltuples might\nhelp you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Oct 2005 17:31:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records " }, { "msg_contents": "\n> I am running Postgre 7.4 on FreeBSD. The main table have 2 million record\n> (we would like to do at least 10 mil or more). It is mainly a FIFO \n> structure\n> with maybe 200,000 new records coming in each day that displace the older\n> records.\n\n\tI'm so sorry, but I have to rant XDDD\n\n\tPeople who present a list of 100 items, paginated with 10 items per page \nso that it fits on half a 800x600 screen should be shot.\n\tI can scroll with my mousewheel and use text search in my browser...\n\n\tPeople who present a paginated view with 100.000 pages where you have to \napply bisection search by hand to find records starting with \"F\" are on \npage 38651 should be forced to use a keyboard with just 1 key and type in \nmorse code.\n\n\tProblem of pagination is that the page number is meaningless and rather \nuseless to the user. It is also meaningless to the database, which means \nyou have to use slow kludges like count() and limit/offset. And as people \ninsert stuff in the table while you browse, when you hit next page you \nwill see on top, half of what was on the previous page, because it was \npushed down by new records. Or you might miss records.\n\n\tSo, rather than using a meaningless \"record offset\" as a page number, you \ncan use something meaningful, like a date, first letter of a name, region, \netc.\n\n\tOf course, MySQL, always eager to encourage sucky-sucky practices, \nprovides a neat CALC_FOUND_ROWS hack, which, while not being super SQL \nstandard compliant, allows you to retrieve the number of rows the query \nwould have returned if you wouldn't have used limit, so you can compute \nthe number of pages and grab one page with only one query.\n\n\tSo people use paginators instead of intelligent solutions, like \nxmlhttp+javascript enabled autocompletion in forms, etc. And you have to \nscroll to page 38651 to find letter \"F\".\n\n\tSo if you need to paginate on your site :\n\n\tCHEAT !!!!\n\n\tWho needs a paginated view with 100.000 pages ?\n\n\t- Select min(date) and max(date) from your table\n\t- Present a nifty date selector to choose the records from any day, hour, \nminute, second\n\t- show them, with \"next day\" and \"previous day\" buttons\n\n\t- It's more useful to the user (most likely he wants to know what \nhappened on 01/05/2005 rather than view page 2857)\n\t- It's faster (no more limit/offset ! 
just \"date BETWEEN a AND b\", \nindexed of course)\n\t- no more new items pushing old ones to the next page while you browse\n\t- you can pretend to your boss it's just like a paginated list\n\n\t\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 26 Oct 2005 23:49:54 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "> We have a GUI that let user browser through the record page by page at \n> about 25 records a time. (Don't ask me why but we have to have this \n> GUI). This translates to something like\n> \n> select count(*) from table <-- to give feedback about the DB size\n> select * from table order by date limit 25 offset 0\n\nHeh, sounds like phpPgAdmin...I really should do something about that.\n\n> Tables seems properly indexed, with vacuum and analyze ran regularly. \n> Still this very basic SQLs takes up to a minute run.\n\nYes, COUNT(*) on a large table is always slow in PostgreSQL. Search the \nmailing lists for countless discussions about it.\n\nChris\n\n", "msg_date": "Thu, 27 Oct 2005 09:43:24 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "> Who needs a paginated view with 100.000 pages ?\n> \n> - Select min(date) and max(date) from your table\n> - Present a nifty date selector to choose the records from any day, \n> hour, minute, second\n> - show them, with \"next day\" and \"previous day\" buttons\n> \n> - It's more useful to the user (most likely he wants to know what \n> happened on 01/05/2005 rather than view page 2857)\n> - It's faster (no more limit/offset ! just \"date BETWEEN a AND b\", \n> indexed of course)\n> - no more new items pushing old ones to the next page while you browse\n> - you can pretend to your boss it's just like a paginated list\n\nAll very well and good, but now do it generically...\n\n", "msg_date": "Thu, 27 Oct 2005 09:46:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" } ]
[ { "msg_contents": "I DON'T KNOW WHAT TO DO WITH THIS QUERYS...\nComparation with sql server, sql server wins !!!\n\n\nTable sizes:\narchivos: 40MB\nfotos: 55MB\n\nselect count(1) from fotos f where not exists (select a.archivo from \narchivos a where a.archivo=f.archivo)\n173713 ms.\n110217 ms.\n83122 ms.\n\nselect count(*) from\n(\n\tselect archivo from fotos\n\t except\n\tselect archivo from archivos\n) x;\n201479 ms.\n\nSELECT count(*)\nFROM fotos f\nLEFT JOIN archivos a USING(archivo)\nWHERE a.archivo IS NULL\n199523 ms.\n\n_________________________________________________________________\nMSN Amor: busca tu � naranja http://latam.msn.com/amor/\n\n", "msg_date": "Wed, 26 Oct 2005 17:47:31 -0600", "msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "performance on query" }, { "msg_contents": "So the issue is that instead of taking 174 seconds the query now takes\n201?\n\nI'm guessing that SQL server might be using index covering, but that's\njust a guess. Posting query plans (prefferably with actual timing info;\nEXPLAIN ANALYZE on PostgreSQL and whatever the equivalent would be for\nMSSQL) might give us some idea.\n\nOn Wed, Oct 26, 2005 at 05:47:31PM -0600, Sidar L?pez Cruz wrote:\n> I DON'T KNOW WHAT TO DO WITH THIS QUERYS...\n> Comparation with sql server, sql server wins !!!\n> \n> \n> Table sizes:\n> archivos: 40MB\n> fotos: 55MB\n> \n> select count(1) from fotos f where not exists (select a.archivo from \n> archivos a where a.archivo=f.archivo)\n> 173713 ms.\n> 110217 ms.\n> 83122 ms.\n> \n> select count(*) from\n> (\n> \tselect archivo from fotos\n> \t except\n> \tselect archivo from archivos\n> ) x;\n> 201479 ms.\n> \n> SELECT count(*)\n> FROM fotos f\n> LEFT JOIN archivos a USING(archivo)\n> WHERE a.archivo IS NULL\n> 199523 ms.\n> \n> _________________________________________________________________\n> MSN Amor: busca tu ? naranja http://latam.msn.com/amor/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 26 Oct 2005 22:00:13 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on query" } ]
[ { "msg_contents": "Christopher \n> > - Present a nifty date selector to choose the records from any\nday,\n> > hour, minute, second\n> > - show them, with \"next day\" and \"previous day\" buttons\n> >\n> > - It's more useful to the user (most likely he wants to know\nwhat\n> > happened on 01/05/2005 rather than view page 2857)\n> > - It's faster (no more limit/offset ! just \"date BETWEEN a AND\nb\",\n> > indexed of course)\n> > - no more new items pushing old ones to the next page while you\n> browse\n> > - you can pretend to your boss it's just like a paginated list\n> \n> All very well and good, but now do it generically...\n\nI've done it... \nFirst of all I totally agree with PFC's rant regarding absolute\npositioning while browsing datasets. Among other things, it has serious\nproblems if you have multiple updating your table. Also it's kind of\nsilly to be doing this in a set based data paradigm.\n\nThe 'SQL' way to browse a dataset is by key. If your key has multiple\nparts or you are trying to sort on two or more fields, you are supposed\nto use the row constructor:\n\nselect * from t where (x, y) > (xc, yc) order by x,y;\n\nUnfortunately, this gives the wrong answer in postgresql :(.\n\nThe alternative is to use boolean logic. Here is a log snippit from my\nISAM driver (in ISAM, you are *always* browsing datasets):\n\nprepare system_read_next_menu_item_favorite_file_0 (character varying,\nint4, int4, int4)\n\tas select from system.menu_item_favorite_file\n\twhere mif_user_id >= $1 and \n\t\t(mif_user_id > $1 or mif_menu_item_id >= $2) and \n\t\t(mif_user_id > $1 or mif_menu_item_id > $2 or\nmif_sequence_no > $3) \n\torder by mif_user_id, mif_menu_item_id, mif_sequence_no\n\tlimit $4\n\nThis is a Boolean based 'get next record' in a 3 part key plus a\nparameterized limit. You can do this without using prepared statements\nof course but with the prepared version you can at least do \n\nexecute system_read_next_menu_item_favorite_file_0('abc', 1, 2, 1);\n\nMerlin\n\n", "msg_date": "Thu, 27 Oct 2005 08:59:51 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: browsing table with 2 million records" }, { "msg_contents": "\n\n> I've done it...\n> First of all I totally agree with PFC's rant regarding absolute\n> positioning while browsing datasets. Among other things, it has serious\n> problems if you have multiple updating your table. Also it's kind of\n> silly to be doing this in a set based data paradigm.\n\n\tRecently I've been browsing some site and it had this problem : as users \nkept adding new entries as I was browsing the list page after page, when I \nhit \"next page\" I got on the next page half of what I already saw on the \nprevious page. Of course the webmaster has set the \"visited links\" color \nthe same as \"unvisited links\", so I couldn't tell, and had to use my \nbrain, which was quite upsetting XDDD\n\n\tAnd bookmarking a page to resume browsing at some later time does not \nwork either, because if I bookmark page 15, then when I come back, users \nhave added 10 pages of content and what I bookmarked is now on page 25...\n\n>> All very well and good, but now do it generically...\n\n\tHehe. I like ranting...\n\tIt is not possible to do it in a generic way that works in all cases. 
For \ninstance :\n\n\tForum topic case :\n\t- posts are added at the bottom and not at the top\n\t- page number is relevant and meaningful\n\n\tHowever, in most cases, you can use a multipart key and get it right.\n\tSuppose that, for instance, you have a base of several million records, \norganized according to :\n\n\t- date (like the original poster)\n\tor :\n\t- country, region, city, customer last name, first name.\n\n\tYou could ask for the first three, but then you'll get 50000 Smiths in \nNew York and 1 Van Bliezinsky.\n\n\tOr you could precalculate, once a week, a key interval distribution that \ncreates reasonable sized intervals (for instance, 100 values in each), \nmaybe asking that each interval should only contain only one city. So, you \nwould get :\n\n\tCountry Region City\tLastName\tFirstName\n\tUSA\tNYC\tNY\tSmith,\t''\n\tUSA\tNYC\tNY\tSmith,\tAlbert\n\tUSA\tNYC\tNY\tSmith,\tBernard\n\t.....\n\tUSA\tNYC\tNY\tSmith,\tWilliam\n\t...\n\tUSA\tNYC\tNY\tVon Braun\n\t...\n\n\tSo you'd predetermine your \"page breaks\" ahead of time, and recompute \nthem once in a while. You won't get identically sized pages, but if the \nstatistical distribution of the data plays nice, you should get evenly \nsized pages.\n\n\tThe interesting part is that you can present the user with a selector \nwhich presents meaningful and useful data, AND is fast to compute, AND is \nfast to use.\n\tIn this case, it would amount to \"Select country, region, city\", then, \ndisplay a list like this :\n\tSmith, ...Albert\n\tSmith, Albus...Bernard\n\t...\n\tSmith, William...\n\t...\n\tVon Braun...Von Schwarts\n\t...\n\n\tSo Jeannette Smith would be easy to find, being in the link \"Smith, \nJean...John\" for instance.\n\n\tIf the aim is to quickly locate a particular record, I like \njavascript-powered autocompletion better ; but for browsing, this \npagination method is cool.\n\n\tRegards !\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 27 Oct 2005 16:33:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: browsing table with 2 million records" } ]
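A minimal sketch of the boolean-logic "next page" pattern Merlin describes, on a two-part key. The customers table and its columns are hypothetical, and the expanded predicate stands in for the SQL-spec row comparison that, as noted above, could not be relied on at the time.

CREATE INDEX customers_name_idx ON customers (last_name, first_name);

-- Fetch the page that follows ('Smith', 'John'), the last row already shown:
SELECT last_name, first_name, city
FROM customers
WHERE last_name >= 'Smith'
  AND (last_name > 'Smith' OR first_name > 'John')
ORDER BY last_name, first_name
LIMIT 25;

Because the predicate matches the index order, each page is a short index range scan no matter how far into the data the user has browsed.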
[ { "msg_contents": "Is there something that tells postgres to take the resorces from computer \n(RAM, HDD, SWAP on linux) as it need, not modifying variables on \npostgresql.conf and other operating system things?\n\nA days ago i am trying to show that postgres is better than mssql but when \nexecute a simple query like:\n\n(1)\nselect count(*) from\n(\n\tselect archivo from fotos\n\t except\n\tselect archivo from archivos\n) x;\nAggregate (cost=182162.83..182162.84 rows=1 width=0) (actual \ntime=133974.495..133974.498 rows=1 loops=1)\n -> Subquery Scan x (cost=173857.98..181830.63 rows=132878 width=0) \n(actual time=109148.158..133335.279 rows=169672 loops=1)\n -> SetOp Except (cost=173857.98..180501.86 rows=132878 width=58) \n(actual time=109148.144..132094.382 rows=169672 loops=1)\n -> Sort (cost=173857.98..177179.92 rows=1328775 width=58) \n(actual time=109147.656..113870.975 rows=1328775 loops=1)\n Sort Key: archivo\n -> Append (cost=0.00..38710.50 rows=1328775 width=58) \n(actual time=27.062..29891.075 rows=1328775 loops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=0.00..17515.62 rows=523431 width=58) (actual time=27.052..9560.719 \nrows=523431 loops=1)\n -> Seq Scan on fotos (cost=0.00..12281.31 \nrows=523431 width=58) (actual time=27.038..5390.238 rows=523431 loops=1)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..21194.88 rows=805344 width=58) (actual time=10.803..12117.788 \nrows=805344 loops=1)\n -> Seq Scan on archivos \n(cost=0.00..13141.44 rows=805344 width=58) (actual time=10.784..5420.164 \nrows=805344 loops=1)\nTotal runtime: 134552.325 ms\n\n\n(2)\nselect count(*) from fotos where archivo not in (select archivo from \narchivos)\nAggregate (cost=29398.98..29398.99 rows=1 width=0) (actual \ntime=26660.565..26660.569 rows=1 loops=1)\n -> Seq Scan on fotos (cost=15154.80..28744.69 rows=261716 width=0) \n(actual time=13930.060..25859.340 rows=169799 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on archivos (cost=0.00..13141.44 rows=805344 \nwidth=58) (actual time=0.319..5647.043 rows=805344 loops=1)\nTotal runtime: 26747.236 ms\n\n\n(3)\nselect count(1) from fotos f where not exists (select a.archivo from \narchivos a where a.archivo=f.archivo)\nAggregate (cost=1761354.08..1761354.09 rows=1 width=0) (actual \ntime=89765.384..89765.387 rows=1 loops=1)\n -> Seq Scan on fotos f (cost=0.00..1760699.79 rows=261716 width=0) \n(actual time=75.556..88880.234 rows=169799 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using archivos_archivo_idx on archivos a \n(cost=0.00..13451.40 rows=4027 width=58) (actual time=0.147..0.147 rows=1 \nloops=523431)\n Index Cond: ((archivo)::text = ($0)::text)\nTotal runtime: 89765.714 ms\n\n\n\n(4)\nSELECT count(*)\nFROM fotos f\nLEFT JOIN archivos a USING(archivo)\nWHERE a.archivo IS NULL\nAggregate (cost=31798758.40..31798758.41 rows=1 width=0) (actual \ntime=114267.337..114267.341 rows=1 loops=1)\n -> Merge Left Join (cost=154143.73..31772412.02 rows=10538550 width=0) \n(actual time=85472.696..113392.399 rows=169799 loops=1)\n Merge Cond: (\"outer\".\"?column2?\" = \"inner\".\"?column2?\")\n Filter: (\"inner\".archivo IS NULL)\n -> Sort (cost=62001.08..63309.66 rows=523431 width=58) (actual \ntime=38018.343..39998.201 rows=523431 loops=1)\n Sort Key: (f.archivo)::text\n -> Seq Scan on fotos f (cost=0.00..12281.31 rows=523431 \nwidth=58) (actual time=0.158..4904.410 rows=523431 loops=1)\n -> Sort (cost=92142.65..94156.01 rows=805344 width=58) (actual \ntime=47453.790..50811.216 rows=805701 loops=1)\n Sort Key: 
(a.archivo)::text\n -> Seq Scan on archivos a (cost=0.00..13141.44 rows=805344 \nwidth=58) (actual time=0.206..7160.148 rows=805344 loops=1)\nTotal runtime: 114893.116 ms\n\n\n\n\nWITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS....\n\n\nPLEASE HELP ME\n\n_________________________________________________________________\nConsigue aqu� las mejores y mas recientes ofertas de trabajo en Am�rica \nLatina y USA: http://latam.msn.com/empleos/\n\n", "msg_date": "Thu, 27 Oct 2005 08:25:23 -0600", "msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "how postgresql request the computer resources" }, { "msg_contents": "Sidar López Cruz wrote:\n> Is there something that tells postgres to take the resorces from \n> computer (RAM, HDD, SWAP on linux) as it need, not modifying variables \n> on postgresql.conf and other operating system things?\n\nAh, and how is it to know what to share with other processes?\n\n> A days ago i am trying to show that postgres is better than mssql but \n> when execute a simple query like:\n> \n> (1)\n> select count(*) from\n> Total runtime: 134552.325 ms\n> \n> (2)\n> select count(*) from fotos where archivo not in (select archivo from \n> Total runtime: 26747.236 ms\n> \n> (3)\n> select count(1) from fotos f where not exists (select a.archivo from \n> Total runtime: 89765.714 ms\n> \n> (4)\n> SELECT count(*)\n> Total runtime: 114893.116 ms\n\n> WITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS....\n\nIn which case they make a bad choice for showing PostgreSQL is faster \nthan MSSQL. Is this the only query you have, or are others giving you \nproblems too?\n\nI think count(*) is about the weakest point in PG, but I don't think \nthere'll be a general solution available soon. As I'm sure someone has \nmentioned, whatever else, PG needs to check the row for its visibility \ninformation.\n\n From the start of your email, you seem to suspect your configuration \nneeds some work. Once you are happy that your settings in general are \ngood, you can override some by issuing set statements before your query. \nFor example:\n\tSET work_mem = 10000;\nmight well improve example #2 where you had a hash.\n\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Thu, 27 Oct 2005 16:56:49 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how postgresql request the computer resources" }, { "msg_contents": "Richard Huxton wrote:\n>> WITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS....\n> \n> \n> In which case they make a bad choice for showing PostgreSQL is faster \n> than MSSQL. Is this the only query you have, or are others giving you \n> problems too?\n> \n> I think count(*) is about the weakest point in PG, but I don't think \n> there'll be a general solution available soon. As I'm sure someone has \n> mentioned, whatever else, PG needs to check the row for its visibility \n> information.\n> \n> From the start of your email, you seem to suspect your configuration \n> needs some work. Once you are happy that your settings in general are \n> good, you can override some by issuing set statements before your query. 
\n> For example:\n> SET work_mem = 10000;\n> might well improve example #2 where you had a hash.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n\nSomeone had suggested keeping a vector table with +1 and -1 for row \ninsertion and deletion and then running a cron to sum the vectors and \nupdate a table so that you could select from that table to get the row \ncount. Perhaps some sort of SUM() on a column function.\n\nSince this seems like a reasonable approach (or perhaps there may be yet \nanother better mechanism), cannot someone add this sort of functionality \nto Postgresql to do behind the scenes?\n\n-Mike\n", "msg_date": "Thu, 27 Oct 2005 15:58:55 -0600", "msg_from": "Michael Best <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how postgresql request the computer resources" }, { "msg_contents": "On Thu, Oct 27, 2005 at 03:58:55PM -0600, Michael Best wrote:\n> Richard Huxton wrote:\n> >>WITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS....\n> >\n> >\n> >In which case they make a bad choice for showing PostgreSQL is faster \n> >than MSSQL. Is this the only query you have, or are others giving you \n> >problems too?\n> >\n> >I think count(*) is about the weakest point in PG, but I don't think \n> >there'll be a general solution available soon. As I'm sure someone has \n> >mentioned, whatever else, PG needs to check the row for its visibility \n> >information.\n> >\n> > From the start of your email, you seem to suspect your configuration \n> >needs some work. Once you are happy that your settings in general are \n> >good, you can override some by issuing set statements before your query. \n> >For example:\n> > SET work_mem = 10000;\n> >might well improve example #2 where you had a hash.\n> >\n> >-- \n> > Richard Huxton\n> > Archonet Ltd\n> \n> Someone had suggested keeping a vector table with +1 and -1 for row \n> insertion and deletion and then running a cron to sum the vectors and \n> update a table so that you could select from that table to get the row \n> count. Perhaps some sort of SUM() on a column function.\n> \n> Since this seems like a reasonable approach (or perhaps there may be yet \n> another better mechanism), cannot someone add this sort of functionality \n> to Postgresql to do behind the scenes?\n\nThere's all kinds of things that could be added; the issue is\nascertaining what the performance trade-offs are (there's no such thing\nas a free lunch) and if the additional code complexity is worth it.\n\nNote that your suggestion probably wouldn't work in this case because\nthe user isn't doing a simple SELECT count(*) FROM table;. I'd bet that\nMSSQL is using index covering to answer his queries so quickly,\nsomething that currently just isn't possible with PostgreSQL. But if you\nsearch the -hackers archives, you'll find a discussion on adding limited\nheap tuple visibility information to indexes. That would allow for\npartial index covering in many cases, which would probably be a huge win\nfor the queries the user was asking about.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 27 Oct 2005 17:52:27 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how postgresql request the computer resources" } ]
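A sketch of the trigger-maintained counter described above. It assumes a hypothetical counted table named foo and that plpgsql is installed; a periodic job can collapse the accumulated deltas into a single row so the summary table stays small.

CREATE TABLE foo_count_delta (delta integer NOT NULL);

CREATE OR REPLACE FUNCTION foo_count_delta_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        INSERT INTO foo_count_delta VALUES (1);
    ELSIF TG_OP = ''DELETE'' THEN
        INSERT INTO foo_count_delta VALUES (-1);
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER foo_count_trig AFTER INSERT OR DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_count_delta_trig();

-- The table count becomes a small aggregate instead of a scan of foo:
SELECT sum(delta) AS approx_count FROM foo_count_delta;

As Jim points out, this only helps the unqualified count; it does nothing for counts with a WHERE clause like the ones earlier in the thread.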
[ { "msg_contents": "Is there a rule-of-thumb for determining the amount of system memory a \ndatabase requres (other than \"all you can afford\")? \n\n\n", "msg_date": "Thu, 27 Oct 2005 14:31:43 -0500", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "How much memory?" } ]
[ { "msg_contents": "Databases basically come in 4 sizes:\n\n1= The entire DB fits into memory.\n2= The performance critical table(s) fit(s) into memory\n3= The indexes of the performance critical table(s) fit into memory.\n4= Neither the performance critical tables nor their indexes fit into memory.\n\nPerformance decreases (exponentially), and development + maintenance cost/difficulty/pain increases (exponentially), as you go down the list.\n\nWhile it is often not possible to be in class \"1\" above, do everything you can to be in at least class \"3\" and do everything you can to avoid class \"4\".\n\nAt ~$75-$150 per GB as of this post, RAM is the cheapest investment you can make in a high perfomance, low hassle DBMS. IWill's and Tyan's 16 DIMM slot mainboards are worth every penny.\n\nron\n\n\n-----Original Message-----\nFrom: PostgreSQL <[email protected]>\nSent: Oct 27, 2005 3:31 PM\nTo: [email protected]\nSubject: [PERFORM] How much memory?\n\nIs there a rule-of-thumb for determining the amount of system memory a \ndatabase requres (other than \"all you can afford\")? \n", "msg_date": "Thu, 27 Oct 2005 18:39:33 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much memory?" }, { "msg_contents": "On Thu, Oct 27, 2005 at 06:39:33PM -0400, Ron Peacetree wrote:\n> Databases basically come in 4 sizes:\n> \n> 1= The entire DB fits into memory.\n> 2= The performance critical table(s) fit(s) into memory\n> 3= The indexes of the performance critical table(s) fit into memory.\n> 4= Neither the performance critical tables nor their indexes fit into memory.\n> \n> Performance decreases (exponentially), and development + maintenance cost/difficulty/pain increases (exponentially), as you go down the list.\n> \n> While it is often not possible to be in class \"1\" above, do everything you can to be in at least class \"3\" and do everything you can to avoid class \"4\".\n> \n> At ~$75-$150 per GB as of this post, RAM is the cheapest investment you can make in a high perfomance, low hassle DBMS. IWill's and Tyan's 16 DIMM slot mainboards are worth every penny.\n\nAnd note that your next investment after RAM should be better disk IO.\nMore CPUs *generally* don't buy you much (if anything). My rule of\nthumb: the only time your database should be CPU-bound is if you've got\na bad design*.\n\n*NOTE: before everyone goes off about query parallelism and big\nin-memory sorts and what-not, keep in mind I said \"rule of thumb\". :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 27 Oct 2005 17:55:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much memory?" }, { "msg_contents": "Reasons not to buy from Sun or Compaq - why get Opteron 252 when a 240\nwill do just fine for a fraction of the cost, which of course they\ndon't stock, white box all the way baby ;). My box from Sun or Compaq\nor IBM is 2x the whitebox cost because you can't buy apples to apples.\n We have a bitchin' DB server for $7.5k\n\nAlex\n\nOn 10/27/05, Jim C. 
Nasby <[email protected]> wrote:\n> On Thu, Oct 27, 2005 at 06:39:33PM -0400, Ron Peacetree wrote:\n> > Databases basically come in 4 sizes:\n> >\n> > 1= The entire DB fits into memory.\n> > 2= The performance critical table(s) fit(s) into memory\n> > 3= The indexes of the performance critical table(s) fit into memory.\n> > 4= Neither the performance critical tables nor their indexes fit into memory.\n> >\n> > Performance decreases (exponentially), and development + maintenance cost/difficulty/pain increases (exponentially), as you go down the list.\n> >\n> > While it is often not possible to be in class \"1\" above, do everything you can to be in at least class \"3\" and do everything you can to avoid class \"4\".\n> >\n> > At ~$75-$150 per GB as of this post, RAM is the cheapest investment you can make in a high perfomance, low hassle DBMS. IWill's and Tyan's 16 DIMM slot mainboards are worth every penny.\n>\n> And note that your next investment after RAM should be better disk IO.\n> More CPUs *generally* don't buy you much (if anything). My rule of\n> thumb: the only time your database should be CPU-bound is if you've got\n> a bad design*.\n>\n> *NOTE: before everyone goes off about query parallelism and big\n> in-memory sorts and what-not, keep in mind I said \"rule of thumb\". :)\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Fri, 28 Oct 2005 10:02:41 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much memory?" } ]
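A rough way to check which of the four classes above a database falls into is to look at relpages, which counts 8 kB blocks and is only as current as the last VACUUM or ANALYZE; the relation names below are hypothetical.

SELECT relname, relkind, relpages, relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relname IN ('orders', 'orders_pkey', 'orders_date_idx')
ORDER BY relpages DESC;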
[ { "msg_contents": "Hi all,\n\nI wonder what is the main driving factor for vacuum's duration: the size\nof the table, or the number of dead tuples it has to clean ?\n\nWe have a few big tables which are also heavily updated, and I couldn't\nfigure out a way to properly vacuum them. Vacuuming any of those took\nvery long amounts of time (I started one this morning and after ~5h30min\nit's still running - and it's not even the biggest or most updated\ntable), which I can't really afford because it prevents other vacuum\nprocesses on smaller tables to do their job due to the transaction open\nfor the long-running vacuum. \n\nBTW, is it in any way feasible to implement to make one vacuum not\nblocking other vacuums from cleaning dead tuples after the first one\nstarted ? I know it's the transaction not the vacuum which blocks, but\nthen wouldn't be a way to run vacuum somehow in \"out of transaction\ncontext\" mode ?\n\nAnother issue: vacuum is not responding to cancel requests, at least not\nin a reasonable amount of time...\n\nThanks in advance,\nCsaba.\n \n\n\n", "msg_date": "Fri, 28 Oct 2005 17:14:21 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "How long it takes to vacuum a big table" }, { "msg_contents": "We've also experienced problems with VACUUM running for a long time. \nA VACUUM on our pg_largeobject table, for example, can take over 24\nhours to complete (pg_largeobject in our database has over 45million\nrows). With our other tables, we've been able to partition them\n(using inheritance) to keep any single table from getting \"too large\",\nbut we've been unable to do that with pg_largeobject. Currently,\nwe're experimenting with moving some of our bulk (large object) data\noutside of the database and storing it in the filesystem directly.\n\nI know that Hannu Krosing has developed some patches that allow\nconcurrent VACUUMs to run more effectively. Unfortunately, these\npatches didn't get into 8.1 so far as I know. You can search the\nperformance mailing list for more information.\n\n -jan-\n--\nJan L. Peterson\n<[email protected]>\n", "msg_date": "Fri, 28 Oct 2005 10:48:51 -0600", "msg_from": "Jan Peterson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How long it takes to vacuum a big table" } ]
[ { "msg_contents": "I have a table that holds entries as in a ficticious table Log(id integer,\nmsg text).\n Lets say then that I have the program log_tail that has as it´s sole\npurpose to print newly added data elements.\n What is the best solution in terms of performace?\n Thank you for your time,\nRodrigo\n\nI have a table that holds entries as in a ficticious table Log(id integer, msg text).\n \nLets say then that I have the program log_tail that has as it´s sole purpose to print newly added data elements.\n \nWhat is the best solution in terms of performace?\n \nThank you for your time,\nRodrigo", "msg_date": "Fri, 28 Oct 2005 18:39:10 -0300", "msg_from": "Rodrigo Madera <[email protected]>", "msg_from_op": true, "msg_subject": "Best way to check for new data." }, { "msg_contents": "Rodrigo,\n\nYou could use LISTEN + NOTIFY with triggers.\nIn after_insert_statement trigger you could notify a listener, the client could query it immediately.\n\nBest Regards,\nOtto\n\n ----- Original Message ----- \n From: Rodrigo Madera \n To: [email protected] \n Sent: Friday, October 28, 2005 11:39 PM\n Subject: [PERFORM] Best way to check for new data.\n\n\n I have a table that holds entries as in a ficticious table Log(id integer, msg text).\n\n Lets say then that I have the program log_tail that has as it´s sole purpose to print newly added data elements.\n\n What is the best solution in terms of performace?\n\n Thank you for your time,\n Rodrigo\n\n\n\n\n\n\n\nRodrigo,\n \nYou could use LISTEN + NOTIFY with \ntriggers.\nIn after_insert_statement trigger you could notify \na listener, the client could query it immediately.\n \nBest Regards,\nOtto\n \n\n----- Original Message ----- \nFrom:\nRodrigo Madera \nTo: [email protected]\n\nSent: Friday, October 28, 2005 11:39 \n PM\nSubject: [PERFORM] Best way to check for \n new data.\n\nI have a table that holds entries as in a ficticious table Log(id \n integer, msg text).\n \nLets say then that I have the program log_tail that has as it´s sole \n purpose to print newly added data elements.\n \nWhat is the best solution in terms of performace?\n \nThank you for your time,\nRodrigo", "msg_date": "Sat, 29 Oct 2005 00:13:18 +0200", "msg_from": "=?iso-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to check for new data." }, { "msg_contents": "Rodrigo Madera wrote:\n\n> I have a table that holds entries as in a ficticious table Log(id \n> integer, msg text).\n> \n> Lets say then that I have the program log_tail that has as it�s sole \n> purpose to print newly added data elements.\n> \n> What is the best solution in terms of performace?\n\nI have a system that does this. We do it by PK, the PK is bigint, and \nalways increases, the client remembers the last key seen as queries \nbased on that key...\n\nselect ... where events.event_id > ?::bigint order by events.event_id \nlimit 2000\n\nit works, but when alot of data is added, it can become sensative to the \nindex statistics getting out of sync with the data. Best to insert, \nthen update the statistics, then read the data. 
For us these three \nactivities are independent, but it still seems to work.\n\nI'd investigate the notify mechanism suggested by Otto if you can afford \nto use a postgres specific mechanism like that.\n\nDavid\n\n\n\n\n\n\n\n\nRodrigo Madera wrote:\n\n\nI have a table that holds entries as in a ficticious table\nLog(id integer, msg text).\n \nLets say then that I have the program log_tail that has as it´s\nsole purpose to print newly added data elements.\n \nWhat is the best solution in terms of performace?\n\nI have a system that does this.  We do it by PK, the PK is bigint, and\nalways increases, the client remembers the last key seen as queries\nbased on that key...\n\nselect ... where events.event_id > ?::bigint order by\nevents.event_id limit 2000\n\nit works, but when alot of data is added, it can become sensative to\nthe index statistics getting out of sync with the data.  Best to\ninsert, then update the statistics, then read the data.  For us these\nthree activities are independent, but it still seems to work.\n\nI'd investigate the notify mechanism suggested by Otto if you can\nafford to use a postgres specific mechanism like that.\n\nDavid", "msg_date": "Mon, 31 Oct 2005 13:43:12 +0000", "msg_from": "David Roussel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to check for new data." } ]
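A sketch combining the two suggestions, using the fictitious log(id, msg) table from the original post; it assumes plpgsql is installed and that id always increases.

CREATE OR REPLACE FUNCTION log_notify() RETURNS trigger AS '
BEGIN
    NOTIFY log_new_rows;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER log_notify_trig AFTER INSERT ON log
    FOR EACH STATEMENT EXECUTE PROCEDURE log_notify();

-- The tailing client listens and, when woken, fetches everything past the
-- last id it has already printed (41234 here is just an example value):
LISTEN log_new_rows;
SELECT id, msg FROM log WHERE id > 41234 ORDER BY id;

The client still has to remember the last id it printed; the NOTIFY only tells it when polling is worthwhile again.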
[ { "msg_contents": "I have two tables, one is called 'users' the other is 'user_activity'.\n The 'users' table simply contains the users in the system there is\nabout 30,000 rows. The 'user_activity' table stores the activities\nthe user has taken. This table has about 430,000 rows and also\n(notably) has a column which tracks the type of activity. 90% of the\ntable is type 7 which indicates the user logged into the system.\n\nI am trying to write a simple query that returns the last time each\nuser logged into the system. This is how the query looks at the\nmoment:\n\nSELECT u.user_id, MAX(ua.activity_date)\nFROM pp_users u\nLEFT OUTER JOIN user_activity ua ON (u.user_id = ua.user_id AND\nua.user_activity_type_id = 7)\nWHERE u.userstatus_id <> 4\nAND age(u.joined_date) < interval '30 days'\nGROUP BY u.user_id\n\nThe above query takes about 5 seconds but I'm wondering how it can be\noptimized. When the query is formatted as above it does use an index\non the user_id column of the user_activity table... but the cost is\nhuge (cost=0.00..1396700.80).\n\nI have tried formatting it another way with a sub-query but it takes\nabout the same amount to completed:\n\nSELECT u.user_id, ua.last\nFROM pp_users u\nLEFT OUTER JOIN (SELECT max(activity_date) as last, user_id FROM\nuser_activity WHERE user_activity_type_id = 7 GROUP BY user_id) as ua\nON (u.user_id = ua.user_id)\nWHERE u.userstatus_id <> 4\nAND age(u.joined_date) < interval '30 days'\n\nCan anybody offer any pointers on this scenario?\n\nRegards,\nCollin\n", "msg_date": "Fri, 28 Oct 2005 14:53:32 -0700", "msg_from": "Collin Peters <[email protected]>", "msg_from_op": true, "msg_subject": "Simple query: how to optimize" }, { "msg_contents": "Postgres is somewhat speed-challenged on aggregate functions.\nThe most-repeated work-around would be something like:\n\nSELECT u.user_id,\n(SELECT activity_date\n FROM user_activity\n WHERE user_activity.user_id = pp_users.user_id\n AND user_activity_type_id = 7\n ORDER BY activity_date DESC\n LIMIT 1)\nFROM pp_users u\nWHERE u.userstatus_id <> 4\nAND age(u.joined_date) < interval '30 days'\n\n(code above is untested) I've read that aggregate functions are\nimproved in the 8.1 code. I'm running 8.1beta3 on one machine\nbut haven't experimented to verify the claimed improvements.\n\nMartin Nickel\n\n\"Collin Peters\" <[email protected]> wrote in message \nnews:[email protected]...\n>I have two tables, one is called 'users' the other is 'user_activity'.\n> The 'users' table simply contains the users in the system there is\n> about 30,000 rows. The 'user_activity' table stores the activities\n> the user has taken. This table has about 430,000 rows and also\n> (notably) has a column which tracks the type of activity. 90% of the\n> table is type 7 which indicates the user logged into the system.\n>\n> I am trying to write a simple query that returns the last time each\n> user logged into the system. This is how the query looks at the\n> moment:\n>\n> SELECT u.user_id, MAX(ua.activity_date)\n> FROM pp_users u\n> LEFT OUTER JOIN user_activity ua ON (u.user_id = ua.user_id AND\n> ua.user_activity_type_id = 7)\n> WHERE u.userstatus_id <> 4\n> AND age(u.joined_date) < interval '30 days'\n> GROUP BY u.user_id\n>\n> The above query takes about 5 seconds but I'm wondering how it can be\n> optimized. When the query is formatted as above it does use an index\n> on the user_id column of the user_activity table... 
but the cost is\n> huge (cost=0.00..1396700.80).\n>\n> I have tried formatting it another way with a sub-query but it takes\n> about the same amount to completed:\n>\n> SELECT u.user_id, ua.last\n> FROM pp_users u\n> LEFT OUTER JOIN (SELECT max(activity_date) as last, user_id FROM\n> user_activity WHERE user_activity_type_id = 7 GROUP BY user_id) as ua\n> ON (u.user_id = ua.user_id)\n> WHERE u.userstatus_id <> 4\n> AND age(u.joined_date) < interval '30 days'\n>\n> Can anybody offer any pointers on this scenario?\n>\n> Regards,\n> Collin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n", "msg_date": "Fri, 28 Oct 2005 19:37:11 -0500", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query: how to optimize" } ]
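Another formulation worth timing against the two above is PostgreSQL's DISTINCT ON, paired with a partial index on the type-7 rows. It uses the table and column names from the original post, but note that it only returns users who have at least one login, so the outer join to pp_users is still needed for the rest.

CREATE INDEX user_activity_login_idx
    ON user_activity (user_id, activity_date)
    WHERE user_activity_type_id = 7;

SELECT DISTINCT ON (ua.user_id)
       ua.user_id, ua.activity_date AS last_login
FROM user_activity ua
WHERE ua.user_activity_type_id = 7
ORDER BY ua.user_id, ua.activity_date DESC;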
[ { "msg_contents": "On October 28, 2005 2:54 PM\nCollin Peters wrote:\n> I have two tables, one is called 'users' the other is 'user_activity'.\n...\n> I am trying to write a simple query that returns the last time each\n> user logged into the system. This is how the query looks at the\n> moment:\n> \n> SELECT u.user_id, MAX(ua.activity_date)\n> FROM pp_users u\n> LEFT OUTER JOIN user_activity ua ON (u.user_id = ua.user_id AND\n> ua.user_activity_type_id = 7)\n> WHERE u.userstatus_id <> 4\n> AND age(u.joined_date) < interval '30 days'\n> GROUP BY u.user_id\n\nYou're first joining against the entire user table, then filtering out the users\nyou don't need.\n\nInstead, filter out the users you don't need first, then do the join:\n\nSELECT users.user_id, MAX(ua.activity_date)\nFROM \n(SELECT u.user_id \nFROM pp_users u\nWHERE u.userstatus_id <> 4\nAND age(u.joined_date) < interval '30 days'\n) users\nLEFT OUTER JOIN user_activity ua \n ON (users.user_id = ua.user_id \n AND ua.user_activity_type_id = 7)\nGROUP BY users.user_id\n\n(disclaimer: I haven't actually tried this sql)\n", "msg_date": "Fri, 28 Oct 2005 15:40:40 -0700", "msg_from": "\"Roger Hand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple query: how to optimize" }, { "msg_contents": "These two queries execute at exactly the same speed. When I run run\nEXPLAIN on them both they return the *exact* same query plan as well. \nI find this strange... but it is also kind of what I expected from\nreading up on various things. I am under the impression the\npostgresql will break up your query and run it as it sees best. So\nin the case of these two queries... it seems it is actually almost\nconverting one into the other. Maybe I am wrong.\n\nIs there a good resource list somewhere for postgresql query\noptimization? There are entire books devoted to the subject for\noracle but I can't find more than a few small articles on postgresql\nquery optimizations on the web.\n\nRegards,\nCollin\n\nOn 10/28/05, Roger Hand <[email protected]> wrote:\n> > SELECT u.user_id, MAX(ua.activity_date)\n> > FROM pp_users u\n> > LEFT OUTER JOIN user_activity ua ON (u.user_id = ua.user_id AND\n> > ua.user_activity_type_id = 7)\n> > WHERE u.userstatus_id <> 4\n> > AND age(u.joined_date) < interval '30 days'\n> > GROUP BY u.user_id\n>\n> You're first joining against the entire user table, then filtering out the users\n> you don't need.\n>\n> Instead, filter out the users you don't need first, then do the join:\n>\n> SELECT users.user_id, MAX(ua.activity_date)\n> FROM\n> (SELECT u.user_id\n> FROM pp_users u\n> WHERE u.userstatus_id <> 4\n> AND age(u.joined_date) < interval '30 days'\n> ) users\n> LEFT OUTER JOIN user_activity ua\n> ON (users.user_id = ua.user_id\n> AND ua.user_activity_type_id = 7)\n> GROUP BY users.user_id\n>\n> (disclaimer: I haven't actually tried this sql)\n>\n", "msg_date": "Fri, 28 Oct 2005 16:56:40 -0700", "msg_from": "Collin Peters <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query: how to optimize" }, { "msg_contents": "A little bit more on my last post that I forget to mention. The two\nqueries run at the same speed and have the same plan only if I have an\nindex on the user_activity.user_id column. Otherwise they run at\ndifferent speeds. The query you gave me actually runs slower without\nthe index. All this is making my head spin!! :O\n\nOn 10/28/05, Collin Peters <[email protected]> wrote:\n> These two queries execute at exactly the same speed. 
When I run run\n> EXPLAIN on them both they return the *exact* same query plan as well.\n> I find this strange... but it is also kind of what I expected from\n> reading up on various things. I am under the impression the\n> postgresql will break up your query and run it as it sees best. So\n> in the case of these two queries... it seems it is actually almost\n> converting one into the other. Maybe I am wrong.\n>\n> Is there a good resource list somewhere for postgresql query\n> optimization? There are entire books devoted to the subject for\n> oracle but I can't find more than a few small articles on postgresql\n> query optimizations on the web.\n>\n> Regards,\n> Collin\n>\n> On 10/28/05, Roger Hand <[email protected]> wrote:\n> > > SELECT u.user_id, MAX(ua.activity_date)\n> > > FROM pp_users u\n> > > LEFT OUTER JOIN user_activity ua ON (u.user_id = ua.user_id AND\n> > > ua.user_activity_type_id = 7)\n> > > WHERE u.userstatus_id <> 4\n> > > AND age(u.joined_date) < interval '30 days'\n> > > GROUP BY u.user_id\n> >\n> > You're first joining against the entire user table, then filtering out the users\n> > you don't need.\n> >\n> > Instead, filter out the users you don't need first, then do the join:\n> >\n> > SELECT users.user_id, MAX(ua.activity_date)\n> > FROM\n> > (SELECT u.user_id\n> > FROM pp_users u\n> > WHERE u.userstatus_id <> 4\n> > AND age(u.joined_date) < interval '30 days'\n> > ) users\n> > LEFT OUTER JOIN user_activity ua\n> > ON (users.user_id = ua.user_id\n> > AND ua.user_activity_type_id = 7)\n> > GROUP BY users.user_id\n> >\n> > (disclaimer: I haven't actually tried this sql)\n> >\n>\n", "msg_date": "Fri, 28 Oct 2005 17:04:32 -0700", "msg_from": "Collin Peters <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query: how to optimize" }, { "msg_contents": "On Fri, Oct 28, 2005 at 03:40:40PM -0700, Roger Hand wrote:\n> You're first joining against the entire user table, then filtering out the users\n> you don't need.\n\nThat's just wrong, sorry -- the planner is perfectly able to push the WHERE\ndown before the join.\n\nI'd guess the problem is the age() query; age() doesn't really return what\nyou'd expect, and I don't think it can use an index easily (I might be wrong\nhere, though). Instead, try something like\n\n WHERE u.joined_date >= current_date - interval '30 days'\n\nexcept that if you're running pre-8.0, you might want to precalculate the\nright-hand side on the client.\n\nI couldn't see EXPLAIN ANALYZE of your query, BTW -- having it would be\nuseful.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 29 Oct 2005 02:12:24 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple query: how to optimize" } ]
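Steinar's point about age() in concrete form: compare the column itself so that an index on joined_date can be used, since wrapping the column in age() prevents that. The index is an assumption, and, as he notes, on pre-8.0 servers the right-hand side may need to be computed on the client.

CREATE INDEX pp_users_joined_date_idx ON pp_users (joined_date);

SELECT u.user_id
FROM pp_users u
WHERE u.userstatus_id <> 4
  AND u.joined_date >= current_date - interval '30 days';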
[ { "msg_contents": "Which effects have UPDATEs on REFERENCEd TABLEs when only columns in the\nreferenced table are updated which are not part of the FOREIGN KEY\nconstraint?\n\nI have one \"master\"-table like\n\n create table t_master (\n m_id serial primary key,\n m_fld1 ...,\n m_fld2 ...,\n ...\n )\n\nThe table above is referenced from several (~30) other tables, i.e. like\n\n create table t_detail (\n d_ebid int REFERENCES t_master (m_id) ON UPDATE CASCADE ON DELETE CASCADE,\n d_fld1 ...,\n d_fld2 ...,\n ...\n )\n\nAll tables which reference t_master have appropriate indexes on the\nreferencing columns, vacuum/analyze is done regularly (daily).\n\nDoes an UPDATE of e.g. m_fld1 in t_master cause a 'lookup' in all tables\nwhich have a cascading update-rule or is this 'lookup' only triggered if\nthe referenced column in t_master is explicitly updated? After removing\nsome detail tables which are not longer needed we see an improvemed\nperformance so at the moment it _looks_ like each update in t_master\ntriggers a 'lookup' in each referencing table also if the referenced\ncolumn (m_id) is not changed.\n\nI've read \"If the row is updated, but the referenced column is not\nactually changed, no action is done.\" in the docs but it is not clear\nfor me whether this \"no action\" really means \"null action\" and so the\nimproved performance has other reasons.\n\nTIA, Martin\n", "msg_date": "Sat, 29 Oct 2005 13:10:31 +0200", "msg_from": "Martin Lesser <[email protected]>", "msg_from_op": true, "msg_subject": "Effects of cascading references in foreign keys" }, { "msg_contents": "> Does an UPDATE of e.g. m_fld1 in t_master cause a 'lookup' in all tables\n> which have a cascading update-rule or is this 'lookup' only triggered if\n> the referenced column in t_master is explicitly updated?\n\nMy tests suggest that a lookup on the referring key is done only\nif the referenced key is changed. Here's an example from 8.1beta4;\nI used this version because EXPLAIN ANALYZE shows triggers and the\ntime spent in them, but I see similar performance characteristics\nin earlier versions. 
I've intentionally not put an index on the\nreferring column to make lookups on it slow.\n\nCREATE TABLE foo (id serial PRIMARY KEY, x integer NOT NULL);\nCREATE TABLE bar (fooid integer NOT NULL REFERENCES foo ON UPDATE CASCADE);\n\nINSERT INTO foo (x) SELECT * FROM generate_series(1, 100000);\nINSERT INTO bar (fooid) SELECT * FROM generate_series(1, 100000);\n\nANALYZE foo;\nANALYZE bar;\n\nEXPLAIN ANALYZE UPDATE foo SET x = 1 WHERE id = 100000;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.01 rows=1 width=10) (actual time=0.059..0.070 rows=1 loops=1)\n Index Cond: (id = 100000)\n Total runtime: 0.633 ms\n(3 rows)\n\nEXPLAIN ANALYZE UPDATE foo SET x = 1, id = 200000 WHERE id = 100000;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.01 rows=1 width=6) (actual time=0.082..0.092 rows=1 loops=1)\n Index Cond: (id = 100000)\n Trigger for constraint bar_fooid_fkey: time=232.612 calls=1\n Total runtime: 233.073 ms\n(4 rows)\n\nI'm not sure if this is the right place to look, but I see several\nplaces in src/backend/utils/adt/ri_triggers.c with code that looks\nlike this:\n\n /*\n * No need to do anything if old and new keys are equal\n */\n if (ri_KeysEqual(pk_rel, old_row, new_row, &qkey,\n RI_KEYPAIR_PK_IDX))\n {\n heap_close(fk_rel, RowExclusiveLock);\n return PointerGetDatum(NULL);\n }\n\n> After removing some detail tables which are not longer needed we\n> see an improvemed performance so at the moment it _looks_ like each\n> update in t_master triggers a 'lookup' in each referencing table\n> also if the referenced column (m_id) is not changed.\n\nDo you have statistics enabled? You might be able to infer what\nhappens by looking at pg_stat_user_tables or pg_statio_user_tables\nbefore and after an update, assuming that no concurrent activity\nis also affecting the statistics.\n\nI suppose there's overhead just from having a foreign key constraint,\nand possibly additional overhead for each constraint. If so then\nthat might explain at least some of the performance improvement.\nMaybe one of the developers will comment.\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 29 Oct 2005 08:24:32 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "On Sat, Oct 29, 2005 at 13:10:31 +0200,\n Martin Lesser <[email protected]> wrote:\n> Which effects have UPDATEs on REFERENCEd TABLEs when only columns in the\n> referenced table are updated which are not part of the FOREIGN KEY\n> constraint?\n\nIn 8.1 there is a check to see if the foreign key value has changed and if\nnot a trigger isn't queued. 
In the currently released versions any update\nwill fire triggers.\nThe check in comment for trigger.c didn't say if this optimization applied\nto both referencing and referenced keys or just one of those.\nIf you need to know more you can look at the code at:\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/\nfor trigger.c.\n", "msg_date": "Sat, 29 Oct 2005 09:48:35 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "On Sat, Oct 29, 2005 at 08:24:32 -0600,\n Michael Fuhr <[email protected]> wrote:\n> > Does an UPDATE of e.g. m_fld1 in t_master cause a 'lookup' in all tables\n> > which have a cascading update-rule or is this 'lookup' only triggered if\n> > the referenced column in t_master is explicitly updated?\n> \n> My tests suggest that a lookup on the referring key is done only\n> if the referenced key is changed. Here's an example from 8.1beta4;\n> I used this version because EXPLAIN ANALYZE shows triggers and the\n> time spent in them, but I see similar performance characteristics\n> in earlier versions. I've intentionally not put an index on the\n> referring column to make lookups on it slow.\n\nIt looks like this feature was added last May, so I think it only applies\nto 8.1.\n", "msg_date": "Sat, 29 Oct 2005 09:49:47 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:\n> On Sat, Oct 29, 2005 at 08:24:32 -0600, Michael Fuhr <[email protected]> wrote:\n> > My tests suggest that a lookup on the referring key is done only\n> > if the referenced key is changed. Here's an example from 8.1beta4;\n> > I used this version because EXPLAIN ANALYZE shows triggers and the\n> > time spent in them, but I see similar performance characteristics\n> > in earlier versions. I've intentionally not put an index on the\n> > referring column to make lookups on it slow.\n> \n> It looks like this feature was added last May, so I think it only applies\n> to 8.1.\n\nEarlier versions appear to have at least some kind of optimization.\nHere's a test in 7.3.11 using the same tables I used in 8.1beta4,\nalthough on a slower box.\n\ntest=> UPDATE foo SET x = 1 WHERE id = 100000;\nUPDATE 1\nTime: 32.18 ms\n\ntest=> UPDATE foo SET x = 1, id = 200000 WHERE id = 100000;\nUPDATE 1\nTime: 4144.95 ms\n\ntest=> DROP TABLE bar;\nDROP TABLE\nTime: 240.87 ms\n\ntest=> UPDATE foo SET x = 1, id = 100000 WHERE id = 200000;\nUPDATE 1\nTime: 63.52 ms\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 29 Oct 2005 10:05:27 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Sat, Oct 29, 2005 at 13:10:31 +0200,\n> Martin Lesser <[email protected]> wrote:\n> > Which effects have UPDATEs on REFERENCEd TABLEs when only columns in the\n> > referenced table are updated which are not part of the FOREIGN KEY\n> > constraint?\n> \n> In 8.1 there is a check to see if the foreign key value has changed and if\n> not a trigger isn't queued. 
In the currently released versions any update\n> will fire triggers.\n> The check in comment for trigger.c didn't say if this optimization applied\n> to both referencing and referenced keys or just one of those.\n> If you need to know more you can look at the code at:\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/\n> for trigger.c.\n\nIt applies to both. See\nsrc/backend/utils/adt/ri_triggers.c::RI_FKey_keyequal_upd_pk() and\nRI_FKey_keyequal_upd_fk(). The first is for primary keys (pk), the\nsecond for foreign keys (fk). These are called by\nsrc/backend/command/triggers.c::AfterTriggerSaveEvent(). The checks\nprevent the trigger from being registered at all if there is no change\nin the primary/foreign key relationship.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 29 Oct 2005 12:05:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Michael Fuhr wrote:\n> On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:\n> > On Sat, Oct 29, 2005 at 08:24:32 -0600, Michael Fuhr <[email protected]> wrote:\n> > > My tests suggest that a lookup on the referring key is done only\n> > > if the referenced key is changed. Here's an example from 8.1beta4;\n> > > I used this version because EXPLAIN ANALYZE shows triggers and the\n> > > time spent in them, but I see similar performance characteristics\n> > > in earlier versions. I've intentionally not put an index on the\n> > > referring column to make lookups on it slow.\n> > \n> > It looks like this feature was added last May, so I think it only applies\n> > to 8.1.\n> \n> Earlier versions appear to have at least some kind of optimization.\n> Here's a test in 7.3.11 using the same tables I used in 8.1beta4,\n> although on a slower box.\n> \n> test=> UPDATE foo SET x = 1 WHERE id = 100000;\n> UPDATE 1\n> Time: 32.18 ms\n> \n> test=> UPDATE foo SET x = 1, id = 200000 WHERE id = 100000;\n> UPDATE 1\n> Time: 4144.95 ms\n> \n> test=> DROP TABLE bar;\n> DROP TABLE\n> Time: 240.87 ms\n> \n> test=> UPDATE foo SET x = 1, id = 100000 WHERE id = 200000;\n> UPDATE 1\n> Time: 63.52 ms\n\nYes, I think in 8.0.X those triggers were queued on firing did nothing\nwhile in 8.1 the triggers are not even fired.\n\nThe 8.1 commit to ri_triggers.c has:\n\n\trevision 1.79\n\tdate: 2005/05/30 07:20:58; author: neilc; state: Exp; lines: +131 -65\n\tWhen enqueueing after-row triggers for updates of a table with a foreign\n\tkey, compare the new and old row versions. If the foreign key column has\n\tnot changed, we needn't enqueue the trigger, since the update cannot\n\tviolate the foreign key. This optimization was previously applied in the\n\tRI trigger function, but it is more efficient to avoid firing the\n\ttrigger altogether. Per recent discussion on pgsql-hackers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 29 Oct 2005 12:19:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:\n>> It looks like this feature was added last May, so I think it only applies\n>> to 8.1.\n\n> Earlier versions appear to have at least some kind of optimization.\n\nYeah. IIRC, for quite some time we've had tests inside the FK update\ntriggers to not bother to search the other table if the key value hasn't\nchanged. What we did in 8.1 was to push that test further upstream, so\nthat the trigger event isn't even queued if the key value hasn't\nchanged. (This is why you don't see the trigger shown as being called\neven once.)\n\nLooking at this, I wonder if there isn't a bug or at least an\ninefficiency in 8.1. The KeysEqual short circuit tests are still there\nin ri_triggers.c; aren't they now redundant with the test in triggers.c?\nAnd don't they need to account for the special case mentioned in the\ncomment in triggers.c, that the RI check must still be done if we are\nlooking at a row updated by the same transaction that created it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Oct 2005 14:01:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys " }, { "msg_contents": "I wrote:\n> Looking at this, I wonder if there isn't a bug or at least an\n> inefficiency in 8.1. The KeysEqual short circuit tests are still there\n> in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> And don't they need to account for the special case mentioned in the\n> comment in triggers.c, that the RI check must still be done if we are\n> looking at a row updated by the same transaction that created it?\n\nOK, I take back the possible-bug comment: the special case only applies\nto the FK-side triggers, which is to say RI_FKey_check, and that routine\ndoesn't attempt to skip the check on equal old/new keys. I'm still\nwondering though if the KeysEqual tests in the other RI triggers aren't\nnow a waste of cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Oct 2005 14:35:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys " }, { "msg_contents": "\nOn Oct 29, 2005, at 9:48 AM, Bruno Wolff III wrote:\n\n> On Sat, Oct 29, 2005 at 13:10:31 +0200,\n> Martin Lesser <[email protected]> wrote:\n>\n>> Which effects have UPDATEs on REFERENCEd TABLEs when only columns \n>> in the\n>> referenced table are updated which are not part of the FOREIGN KEY\n>> constraint?\n>\n> In 8.1 there is a check to see if the foreign key value has changed \n> and if\n> not a trigger isn't queued. In the currently released versions any \n> update\n> will fire triggers.\n> The check in comment for trigger.c didn't say if this optimization \n> applied\n> to both referencing and referenced keys or just one of those.\n> If you need to know more you can look at the code at:\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/\n> for trigger.c.\n\nIt seems like this warrants an item somewhere in the release notes, \nand I'm not currently seeing it (or a related item) anywhere. Perhaps \nE.1.3.1 (Performance Improvements)? 
For some of the more extreme \nUPDATE scenarios I've seen, this could be a big win.\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nOpen Source Solutions. Optimized Web Development.\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n", "msg_date": "Sat, 29 Oct 2005 23:32:21 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Thomas F. O'Connell wrote:\n> > In 8.1 there is a check to see if the foreign key value has changed \n> > and if\n> > not a trigger isn't queued. In the currently released versions any \n> > update\n> > will fire triggers.\n> > The check in comment for trigger.c didn't say if this optimization \n> > applied\n> > to both referencing and referenced keys or just one of those.\n> > If you need to know more you can look at the code at:\n> > http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/\n> > for trigger.c.\n> \n> It seems like this warrants an item somewhere in the release notes, \n> and I'm not currently seeing it (or a related item) anywhere. Perhaps \n> E.1.3.1 (Performance Improvements)? For some of the more extreme \n> UPDATE scenarios I've seen, this could be a big win.\n\nHard to say, perhaps:\n\n\tPrevent referential integrity triggers from firing if referenced\n\tcolumns are not changed by an UPDATE\n\n\tPreviously, triggers would fire but do nothing.\n\nHowever, the description seems more complex than it is worth.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 30 Oct 2005 09:10:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Thomas F. O'Connell wrote:\n>> It seems like this warrants an item somewhere in the release notes, \n>> and I'm not currently seeing it (or a related item) anywhere. Perhaps \n>> E.1.3.1 (Performance Improvements)? For some of the more extreme \n>> UPDATE scenarios I've seen, this could be a big win.\n> Hard to say, perhaps:\n>\n> \tPrevent referential integrity triggers from firing if referenced\n> \tcolumns are not changed by an UPDATE\n>\n> \tPreviously, triggers would fire but do nothing.\n\nAnd this \"firing\" has negative effects for the performance at least in\nversions before 8.1 (we use 8.0.3 in our production).\n\nOne really dirty hack that comes in mind is to put an additional\npk_table (with only one field, the pk from the master) between the\n\"master\"-table and the ~30 detail-tables so each update in the \"master\"\nwould in most cases only trigger a lookup in one table. Only if a pk was\nreally changed the CASCADEd trigger would force a triggered UPDATE in\nthe detail-tables.\n\nAfter denormalization of two of the largest detail-tables into one table\nthe performance improvement was about 10% due to the fact that up to 1\nmio. of rows (of about 30 mio) in the \"master\"-table are updated daily\nand triggered a lookup in 190 mio. rows (before denormalization)\nresp. 
115 rows (after denormalization).\n", "msg_date": "Sun, 30 Oct 2005 21:16:20 +0100", "msg_from": "Martin Lesser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "\nWould someone add a comment in the code about this, or research it?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I wrote:\n> > Looking at this, I wonder if there isn't a bug or at least an\n> > inefficiency in 8.1. The KeysEqual short circuit tests are still there\n> > in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> > And don't they need to account for the special case mentioned in the\n> > comment in triggers.c, that the RI check must still be done if we are\n> > looking at a row updated by the same transaction that created it?\n> \n> OK, I take back the possible-bug comment: the special case only applies\n> to the FK-side triggers, which is to say RI_FKey_check, and that routine\n> doesn't attempt to skip the check on equal old/new keys. I'm still\n> wondering though if the KeysEqual tests in the other RI triggers aren't\n> now a waste of cycles.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 6 Dec 2005 23:41:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "\nWould someone please find the answer to Tom's last question?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I wrote:\n> > Looking at this, I wonder if there isn't a bug or at least an\n> > inefficiency in 8.1. The KeysEqual short circuit tests are still there\n> > in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> > And don't they need to account for the special case mentioned in the\n> > comment in triggers.c, that the RI check must still be done if we are\n> > looking at a row updated by the same transaction that created it?\n> \n> OK, I take back the possible-bug comment: the special case only applies\n> to the FK-side triggers, which is to say RI_FKey_check, and that routine\n> doesn't attempt to skip the check on equal old/new keys. I'm still\n> wondering though if the KeysEqual tests in the other RI triggers aren't\n> now a waste of cycles.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 14 Jun 2006 18:12:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "Tom Lane wrote:\n> I wrote:\n> > Looking at this, I wonder if there isn't a bug or at least an\n> > inefficiency in 8.1. 
The KeysEqual short circuit tests are still there\n> > in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> > And don't they need to account for the special case mentioned in the\n> > comment in triggers.c, that the RI check must still be done if we are\n> > looking at a row updated by the same transaction that created it?\n> \n> OK, I take back the possible-bug comment: the special case only applies\n> to the FK-side triggers, which is to say RI_FKey_check, and that routine\n> doesn't attempt to skip the check on equal old/new keys. I'm still\n> wondering though if the KeysEqual tests in the other RI triggers aren't\n> now a waste of cycles.\n\nWould someone please research this? Thanks.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 21 Aug 2006 23:50:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "\nThis has been saved for the 8.4 release:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches_hold\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:\n> >> It looks like this feature was added last May, so I think it only applies\n> >> to 8.1.\n> \n> > Earlier versions appear to have at least some kind of optimization.\n> \n> Yeah. IIRC, for quite some time we've had tests inside the FK update\n> triggers to not bother to search the other table if the key value hasn't\n> changed. What we did in 8.1 was to push that test further upstream, so\n> that the trigger event isn't even queued if the key value hasn't\n> changed. (This is why you don't see the trigger shown as being called\n> even once.)\n> \n> Looking at this, I wonder if there isn't a bug or at least an\n> inefficiency in 8.1. The KeysEqual short circuit tests are still there\n> in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> And don't they need to account for the special case mentioned in the\n> comment in triggers.c, that the RI check must still be done if we are\n> looking at a row updated by the same transaction that created it?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 26 Sep 2007 04:21:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" }, { "msg_contents": "\nAdded to TODO:\n\n* Improve referential integrity checks\n\n http://archives.postgresql.org/pgsql-performance/2005-10/msg00458.php\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:\n> >> It looks like this feature was added last May, so I think it only applies\n> >> to 8.1.\n> \n> > Earlier versions appear to have at least some kind of optimization.\n> \n> Yeah. 
IIRC, for quite some time we've had tests inside the FK update\n> triggers to not bother to search the other table if the key value hasn't\n> changed. What we did in 8.1 was to push that test further upstream, so\n> that the trigger event isn't even queued if the key value hasn't\n> changed. (This is why you don't see the trigger shown as being called\n> even once.)\n> \n> Looking at this, I wonder if there isn't a bug or at least an\n> inefficiency in 8.1. The KeysEqual short circuit tests are still there\n> in ri_triggers.c; aren't they now redundant with the test in triggers.c?\n> And don't they need to account for the special case mentioned in the\n> comment in triggers.c, that the RI check must still be done if we are\n> looking at a row updated by the same transaction that created it?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 7 Mar 2008 14:19:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of cascading references in foreign keys" } ]
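On the pre-8.1 releases discussed above, a rough way to confirm what an UPDATE actually touches is the statistics views Michael mentions. A sketch, assuming the stats collector (with row-level stats) is enabled and nothing else is hitting these tables; counters are reported with a short delay, so pause briefly before taking the second snapshot:

SELECT relname, seq_scan, idx_scan, n_tup_upd
  FROM pg_stat_user_tables
 WHERE relname IN ('t_master', 't_detail');

UPDATE t_master SET m_fld1 = m_fld1 WHERE m_id = 1;  -- any existing row; referenced key m_id unchanged

SELECT relname, seq_scan, idx_scan, n_tup_upd
  FROM pg_stat_user_tables
 WHERE relname IN ('t_master', 't_detail');

If t_detail's scan counters stay put between the snapshots, the fired trigger really did skip the lookup in the referencing table, as the KeysEqual short circuit suggests.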
[ { "msg_contents": "Hi there.\n\nI have tried to implement the layered views as suggested earlier on one \nof the simplest queries (just to get a feel for it). And there seems to \nbe something odd going on.\n\nAttached are all the statemens needed to see, how the database is made \nand the contents of postgresql.conf and two explain analyzes:\n\nThe machine is a single cpu Xeon, with 2G of memory and 2 scsi-drives in \na mirror (is going to be extended to 6 within a few weeks) running \n8.1beta3. The whole database has been vacuum analyzed just before the \nexplain analyzes.\n\nI have spend a few hours fiddling around with the performance of it, but \nseems to go nowhere - I might have become snowblind and missed something \nobvious though.\n\nThere are a few things, that strikes me:\n- the base view (ord_result_pct) is reasonable fast (41 ms) - it does a \nlot of seq scans, but right now there are not enough data there to do \notherwise\n- the pretty version (for output) is 17,5 times slower (722ms) even \nthough it just joins against three tiny tables ( < 100 rows each) and \nthe plan seems very different\n- the slow query (the _pretty) has lower expected costs as the other ( \n338 vs 487 \"performance units\") , this looks like some cost parameters \nneed tweaking. I cannot figure out which though.\n- the top nested loop seems to eat most of the time, I have a little \ntrouble seeing what this nested loop is doing there anyways\n\nThanks in advance\n\nSvenne", "msg_date": "Sun, 30 Oct 2005 18:16:04 +0100", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "multi-layered view join performance oddities" }, { "msg_contents": "On Sun, Oct 30, 2005 at 06:16:04PM +0100, Svenne Krap wrote:\n> Nested Loop (cost=223.09..338.61 rows=1 width=174) (actual time=20.213..721.361 rows=2250 loops=1)\n> Join Filter: ((\"outer\".dataset_id = \"inner\".dataset_id) AND (\"outer\".nb_property_type_id = \"inner\".nb_property_type_id))\n> -> Hash Join (cost=58.04..164.26 rows=1 width=150) (actual time=5.510..22.088 rows=2250 loops=1)\n\nThere's horrible misestimation here. It expects one row and thus starts a\nnested loop, but gets 2250. No wonder it's slow :-)\n\nThe misestimation can be traced all the way down here:\n\n> Hash Cond: (\"outer\".institut = \"inner\".id)\n> -> Hash Join (cost=56.88..163.00 rows=16 width=137) (actual time=5.473..19.165 rows=2250 loops=1)\n> Hash Cond: (\"outer\".dataset_id = \"inner\".id)\n> -> Hash Join (cost=55.48..160.95 rows=99 width=101) (actual time=5.412..16.264 rows=2250 loops=1)\n\nwhere the planner misestimates the selectivity of your join (it estimates 99\nrows, and there are 2250).\n\nI've had problems joining with Append nodes in the past, and solved the\nproblem by moving the UNION ALL a bit out, but I'm not sure if it's a very\ngood general solution, or a solution to your problems here.\n\nIf all else fails, you could \"set enable_nestloop=false\", but that is not a\ngood idea in the long run, I'd guess -- it's much better to make sure the\nplanner has good estimates and let it do the correct decisions from there.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 30 Oct 2005 18:44:50 +0100", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-layered view join performance oddities" }, { "msg_contents": "Svenne Krap <[email protected]> writes:\n> create view ord_institutes_sum as\n> SELECT ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id, 0 AS institut, sum(ord_property_type_all.amount) AS amount\n> FROM ord_property_type_all\n> GROUP BY ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id;\n\n> create view ord_result_pct as\n> SELECT t1.dataset_id, t1.nb_property_type_id, t1.institut, t1.amount / t2.amount * 100::numeric AS pct\n> FROM ord_property_type_all t1, ord_institutes_sum t2\n> WHERE t1.dataset_id = t2.dataset_id AND t1.nb_property_type_id = t2.nb_property_type_id;\n\nThis is really pretty horrid code: you're requesting double evaluation\nof the ord_property_type_all view, and then joining the two calculations\nto each other. No, the planner will not detect how silly this is :-(,\nnor will it realize that there's guaranteed to be a match for every row\n--- I believe the latter is the reason for the serious misestimation\nthat Steinar noted. The misestimation doesn't hurt particularly when\nevaluating ord_result_pct by itself, because there are no higher-level\ndecisions to make ... but it hurts a lot when you join ord_result_pct to\nsome other stuff.\n\nIt seems like there must be a way to get the percentage amounts with\nonly one evaluation of ord_property_type_all, but I'm not seeing it\nright offhand.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Oct 2005 13:27:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-layered view join performance oddities " }, { "msg_contents": "Hi.\n\nYour suggestion with disableing the nested loop really worked well:\n\nrkr=# set enable_nestloop=false;\nSET\nrkr=# explain analyze select * from ord_result_pct_pretty ;\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=230.06..337.49 rows=1 width=174) (actual \ntime=21.893..42.356 rows=2250 loops=1)\n Hash Cond: ((\"outer\".dataset_id = \"inner\".dataset_id) AND \n(\"outer\".nb_property_type_id = \"inner\".nb_property_type_id))\n -> Hash Join (cost=56.94..164.10 rows=26 width=93) (actual \ntime=5.073..17.906 rows=2532 loops=1)\n Hash Cond: (\"outer\".dataset_id = \"inner\".id)\n -> Hash Join (cost=55.54..161.63 rows=161 width=57) (actual \ntime=4.996..14.775 rows=2532 loops=1)\n Hash Cond: (\"outer\".institut = \"inner\".id)\n -> Append (cost=54.38..121.72 rows=2476 width=44) \n(actual time=4.964..11.827 rows=2532 loops=1)\n -> HashAggregate (cost=54.38..57.20 rows=226 \nwidth=16) (actual time=4.964..5.174 rows=282 loops=1)\n -> Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=16) (actual time=0.002..1.305 \nrows=2250 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..60.00 \nrows=2250 width=20) (actual time=0.009..4.948 rows=2250 loops=1)\n -> Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=20) (actual time=0.003..2.098 \nrows=2250 loops=1)\n -> Hash (cost=1.13..1.13 rows=13 width=17) (actual \ntime=0.022..0.022 rows=13 loops=1)\n -> Seq Scan on groups g (cost=0.00..1.13 rows=13 \nwidth=17) (actual time=0.003..0.013 rows=13 loops=1)\n -> Hash (cost=1.32..1.32 rows=32 width=36) (actual \ntime=0.070..0.070 rows=32 loops=1)\n -> Seq Scan on ord_dataset od (cost=0.00..1.32 
rows=32 \nwidth=36) (actual time=0.009..0.043 rows=32 loops=1)\n Filter: is_visible\n -> Hash (cost=173.07..173.07 rows=10 width=97) (actual \ntime=15.472..15.472 rows=256 loops=1)\n -> Hash Join (cost=166.15..173.07 rows=10 width=97) (actual \ntime=14.666..15.203 rows=256 loops=1)\n Hash Cond: (\"outer\".nb_property_type_id = \"inner\".id)\n -> HashAggregate (cost=165.05..168.15 rows=248 \nwidth=40) (actual time=14.619..14.849 rows=288 loops=1)\n -> Append (cost=54.38..121.72 rows=2476 width=44) \n(actual time=5.012..11.130 rows=2532 loops=1)\n -> HashAggregate (cost=54.38..57.20 \nrows=226 width=16) (actual time=5.011..5.222 rows=282 loops=1)\n -> Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=16) (actual time=0.001..1.261 \nrows=2250 loops=1)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..60.00 rows=2250 width=20) (actual time=0.010..4.308 \nrows=2250 loops=1)\n -> Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=20) (actual time=0.002..1.694 \nrows=2250 loops=1)\n -> Hash (cost=1.08..1.08 rows=8 width=57) (actual \ntime=0.026..0.026 rows=8 loops=1)\n -> Seq Scan on nb_property_type npt \n(cost=0.00..1.08 rows=8 width=57) (actual time=0.004..0.019 rows=8 loops=1)\n Total runtime: 43.297 ms\n(28 rows)\n\nNow, the whole question becomes, how do I get the planner to make a \nbetter estimation of the returned rows.\n\nI am not sure, I can follow your moving-the-union-all-further-out \nadvice, as I see no different place for the unioning of the two datasets.\n\nMaybe one of the core devs know, where to fiddle :)\n\nSvenne\n\nSteinar H. Gunderson wrote:\n\n>On Sun, Oct 30, 2005 at 06:16:04PM +0100, Svenne Krap wrote:\n> \n>\n>> Nested Loop (cost=223.09..338.61 rows=1 width=174) (actual time=20.213..721.361 rows=2250 loops=1)\n>> Join Filter: ((\"outer\".dataset_id = \"inner\".dataset_id) AND (\"outer\".nb_property_type_id = \"inner\".nb_property_type_id))\n>> -> Hash Join (cost=58.04..164.26 rows=1 width=150) (actual time=5.510..22.088 rows=2250 loops=1)\n>> \n>>\n>\n>There's horrible misestimation here. It expects one row and thus starts a\n>nested loop, but gets 2250. 
No wonder it's slow :-)\n>\n>The misestimation can be traced all the way down here:\n>\n> \n>\n>> Hash Cond: (\"outer\".institut = \"inner\".id)\n>> -> Hash Join (cost=56.88..163.00 rows=16 width=137) (actual time=5.473..19.165 rows=2250 loops=1)\n>> Hash Cond: (\"outer\".dataset_id = \"inner\".id)\n>> -> Hash Join (cost=55.48..160.95 rows=99 width=101) (actual time=5.412..16.264 rows=2250 loops=1)\n>> \n>>\n>\n>where the planner misestimates the selectivity of your join (it estimates 99\n>rows, and there are 2250).\n>\n>I've had problems joining with Append nodes in the past, and solved the\n>problem by moving the UNION ALL a bit out, but I'm not sure if it's a very\n>good general solution, or a solution to your problems here.\n>\n>If all else fails, you could \"set enable_nestloop=false\", but that is not a\n>good idea in the long run, I'd guess -- it's much better to make sure the\n>planner has good estimates and let it do the correct decisions from there.\n>\n>/* Steinar */\n> \n>\n\n\n\n\n\n\n\nHi.\n\nYour suggestion with disableing the nested loop really worked well: \n\nrkr=# set enable_nestloop=false;\nSET\nrkr=# explain analyze select * from ord_result_pct_pretty ;\n                                                                       \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=230.06..337.49 rows=1 width=174) (actual\ntime=21.893..42.356 rows=2250 loops=1)\n   Hash Cond: ((\"outer\".dataset_id = \"inner\".dataset_id) AND\n(\"outer\".nb_property_type_id = \"inner\".nb_property_type_id))\n   ->  Hash Join  (cost=56.94..164.10 rows=26 width=93) (actual\ntime=5.073..17.906 rows=2532 loops=1)\n         Hash Cond: (\"outer\".dataset_id = \"inner\".id)\n         ->  Hash Join  (cost=55.54..161.63 rows=161 width=57)\n(actual time=4.996..14.775 rows=2532 loops=1)\n               Hash Cond: (\"outer\".institut = \"inner\".id)\n               ->  Append  (cost=54.38..121.72 rows=2476 width=44)\n(actual time=4.964..11.827 rows=2532 loops=1)\n                     ->  HashAggregate  (cost=54.38..57.20 rows=226\nwidth=16) (actual time=4.964..5.174 rows=282 loops=1)\n                           ->  Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=16) (actual time=0.002..1.305\nrows=2250 loops=1)\n                     ->  Subquery Scan \"*SELECT* 2\" \n(cost=0.00..60.00 rows=2250 width=20) (actual time=0.009..4.948\nrows=2250 loops=1)\n                           ->  Seq Scan on ord_entrydata_current \n(cost=0.00..37.50 rows=2250 width=20) (actual time=0.003..2.098\nrows=2250 loops=1)\n               ->  Hash  (cost=1.13..1.13 rows=13 width=17) (actual\ntime=0.022..0.022 rows=13 loops=1)\n                     ->  Seq Scan on groups g  (cost=0.00..1.13\nrows=13 width=17) (actual time=0.003..0.013 rows=13 loops=1)\n         ->  Hash  (cost=1.32..1.32 rows=32 width=36) (actual\ntime=0.070..0.070 rows=32 loops=1)\n               ->  Seq Scan on ord_dataset od  (cost=0.00..1.32\nrows=32 width=36) (actual time=0.009..0.043 rows=32 loops=1)\n                     Filter: is_visible\n   ->  Hash  (cost=173.07..173.07 rows=10 width=97) (actual\ntime=15.472..15.472 rows=256 loops=1)\n         ->  Hash Join  (cost=166.15..173.07 rows=10 width=97)\n(actual time=14.666..15.203 rows=256 loops=1)\n               Hash Cond: (\"outer\".nb_property_type_id = \"inner\".id)\n               ->  HashAggregate  (cost=165.05..168.15 rows=248\nwidth=40) 
(actual time=14.619..14.849 rows=288 loops=1)\n                     ->  Append  (cost=54.38..121.72 rows=2476\nwidth=44) (actual time=5.012..11.130 rows=2532 loops=1)\n                           ->  HashAggregate  (cost=54.38..57.20\nrows=226 width=16) (actual time=5.011..5.222 rows=282 loops=1)\n                                 ->  Seq Scan on\nord_entrydata_current  (cost=0.00..37.50 rows=2250 width=16) (actual\ntime=0.001..1.261 rows=2250 loops=1)\n                           ->  Subquery Scan \"*SELECT* 2\" \n(cost=0.00..60.00 rows=2250 width=20) (actual time=0.010..4.308\nrows=2250 loops=1)\n                                 ->  Seq Scan on\nord_entrydata_current  (cost=0.00..37.50 rows=2250 width=20) (actual\ntime=0.002..1.694 rows=2250 loops=1)\n               ->  Hash  (cost=1.08..1.08 rows=8 width=57) (actual\ntime=0.026..0.026 rows=8 loops=1)\n                     ->  Seq Scan on nb_property_type npt \n(cost=0.00..1.08 rows=8 width=57) (actual time=0.004..0.019 rows=8\nloops=1)\n Total runtime: 43.297 ms\n(28 rows)\n\nNow, the whole question becomes, how do I get the planner to make a\nbetter estimation of the returned rows. \n\nI am not sure, I can follow your moving-the-union-all-further-out\nadvice, as I see no different place for the unioning of the two\ndatasets.\n\nMaybe one of the core devs know, where to fiddle :)\n\nSvenne\n\nSteinar H. Gunderson wrote:\n\nOn Sun, Oct 30, 2005 at 06:16:04PM +0100, Svenne Krap wrote:\n \n\n Nested Loop (cost=223.09..338.61 rows=1 width=174) (actual time=20.213..721.361 rows=2250 loops=1)\n Join Filter: ((\"outer\".dataset_id = \"inner\".dataset_id) AND (\"outer\".nb_property_type_id = \"inner\".nb_property_type_id))\n -> Hash Join (cost=58.04..164.26 rows=1 width=150) (actual time=5.510..22.088 rows=2250 loops=1)\n \n\n\nThere's horrible misestimation here. It expects one row and thus starts a\nnested loop, but gets 2250. 
No wonder it's slow :-)\n\nThe misestimation can be traced all the way down here:\n\n \n\n Hash Cond: (\"outer\".institut = \"inner\".id)\n -> Hash Join (cost=56.88..163.00 rows=16 width=137) (actual time=5.473..19.165 rows=2250 loops=1)\n Hash Cond: (\"outer\".dataset_id = \"inner\".id)\n -> Hash Join (cost=55.48..160.95 rows=99 width=101) (actual time=5.412..16.264 rows=2250 loops=1)\n \n\n\nwhere the planner misestimates the selectivity of your join (it estimates 99\nrows, and there are 2250).\n\nI've had problems joining with Append nodes in the past, and solved the\nproblem by moving the UNION ALL a bit out, but I'm not sure if it's a very\ngood general solution, or a solution to your problems here.\n\nIf all else fails, you could \"set enable_nestloop=false\", but that is not a\ngood idea in the long run, I'd guess -- it's much better to make sure the\nplanner has good estimates and let it do the correct decisions from there.\n\n/* Steinar */", "msg_date": "Sun, 30 Oct 2005 19:33:03 +0100", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multi-layered view join performance oddities" }, { "msg_contents": "Tom Lane wrote:\n\n>Svenne Krap <[email protected]> writes:\n> \n>\n>>create view ord_institutes_sum as\n>> SELECT ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id, 0 AS institut, sum(ord_property_type_all.amount) AS amount\n>> FROM ord_property_type_all\n>> GROUP BY ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id;\n>> \n>>\n>\n> \n>\n>>create view ord_result_pct as\n>> SELECT t1.dataset_id, t1.nb_property_type_id, t1.institut, t1.amount / t2.amount * 100::numeric AS pct\n>> FROM ord_property_type_all t1, ord_institutes_sum t2\n>> WHERE t1.dataset_id = t2.dataset_id AND t1.nb_property_type_id = t2.nb_property_type_id;\n>> \n>>\n>\n>This is really pretty horrid code: you're requesting double evaluation\n>of the ord_property_type_all view, and then joining the two calculations\n>to each other. No, the planner will not detect how silly this is :-(,\n>nor will it realize that there's guaranteed to be a match for every row\n>--- I believe the latter is the reason for the serious misestimation\n>that Steinar noted. The misestimation doesn't hurt particularly when\n>evaluating ord_result_pct by itself, because there are no higher-level\n>decisions to make ... 
but it hurts a lot when you join ord_result_pct to\n>some other stuff.\n> \n>\nI don't really see, how this query is horrid from a user perspective, \nthis is exactly the way, the percentage has to be calculated from a \n\"philosophical\" standpoint (performance considerations left out).\nThis is very bad news for me, as most of the other (much larger) queries \nhave the same issue, that the views will be used multiple times got get \nslightly different data, that has to be joined (also more than 2 times \nas in this case)\n\nI think, it has to run multiple times as it returns two different types \nof data.\n\n>It seems like there must be a way to get the percentage amounts with\n>only one evaluation of ord_property_type_all, but I'm not seeing it\n>right offhand.\n> \n>\n\nI will think about how to remove the second evaluation of the view in \nquestion, if anyone knows how, a hint is very appriciated :)\n\nI could of course go the \"materialized view\" way, but would really \nprefer not to.\n\nSvenne\n\n\n\n\n\n\nTom Lane wrote:\n\nSvenne Krap <[email protected]> writes:\n \n\ncreate view ord_institutes_sum as\n SELECT ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id, 0 AS institut, sum(ord_property_type_all.amount) AS amount\n FROM ord_property_type_all\n GROUP BY ord_property_type_all.dataset_id, ord_property_type_all.nb_property_type_id;\n \n\n\n \n\ncreate view ord_result_pct as\n SELECT t1.dataset_id, t1.nb_property_type_id, t1.institut, t1.amount / t2.amount * 100::numeric AS pct\n FROM ord_property_type_all t1, ord_institutes_sum t2\n WHERE t1.dataset_id = t2.dataset_id AND t1.nb_property_type_id = t2.nb_property_type_id;\n \n\n\nThis is really pretty horrid code: you're requesting double evaluation\nof the ord_property_type_all view, and then joining the two calculations\nto each other. No, the planner will not detect how silly this is :-(,\nnor will it realize that there's guaranteed to be a match for every row\n--- I believe the latter is the reason for the serious misestimation\nthat Steinar noted. The misestimation doesn't hurt particularly when\nevaluating ord_result_pct by itself, because there are no higher-level\ndecisions to make ... but it hurts a lot when you join ord_result_pct to\nsome other stuff.\n \n\nI don't really see, how this query is horrid from a user perspective,\nthis is exactly the way, the percentage has to be calculated from a\n\"philosophical\" standpoint (performance considerations left out). \nThis is very bad news for me, as most of the other (much larger)\nqueries have the same issue, that the views will be used multiple times\ngot get slightly different data, that has to be joined (also more than\n2 times as in this case)\n\nI think, it has to run multiple times as it returns two different types\nof data. \n\n\nIt seems like there must be a way to get the percentage amounts with\nonly one evaluation of ord_property_type_all, but I'm not seeing it\nright offhand.\n \n\n\nI will think about how to remove the second evaluation of the view in\nquestion, if anyone knows how, a hint is very appriciated :)\n\nI could of course go the \"materialized view\" way, but would really\nprefer not to.\n\nSvenne", "msg_date": "Sun, 30 Oct 2005 19:49:10 +0100", "msg_from": "Svenne Krap <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multi-layered view join performance oddities" } ]
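The double evaluation Tom points out is hard to express away in the SQL available on the 8.1 beta discussed here, but from PostgreSQL 8.4 onwards a window function computes the group total in the same pass over ord_property_type_all, so that view is evaluated only once. A sketch that should match the original pct definition:

CREATE VIEW ord_result_pct AS
SELECT dataset_id,
       nb_property_type_id,
       institut,
       amount
         / sum(amount) OVER (PARTITION BY dataset_id, nb_property_type_id)
         * 100::numeric AS pct
  FROM ord_property_type_all;

Until then, materializing ord_institutes_sum into a temp table before the join, or the enable_nestloop workaround shown above, are the practical options.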
[ { "msg_contents": "There are a few ways to do this...thinking about it a bit, I would add a timestamp column to your log table (indexed) and keep a control table which keeps track of the last log print sweep operation.\n\nThe print operation would just do \nselect * from log where logtime > (select lastlogtime());\n\nThe idea here is not to have to keep track of anything on the log table like a flag indicating print status, which will cause some bloat issues. All you have to do is reindex once in a while.\n\nlastlogtime() is a function which returns the last log time sweep from the control table. we use a function declared immutable to force planner to treat as a constant (others might tell you to do different here).\n\nMerlin\n\n________________________________________\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Rodrigo Madera\nSent: Friday, October 28, 2005 5:39 PM\nTo: [email protected]\nSubject: [PERFORM] Best way to check for new data.\n\nI have a table that holds entries as in a ficticious table Log(id integer, msg text).\n \nLets say then that I have the program log_tail that has as it´s sole purpose to print newly added data elements.\n \nWhat is the best solution in terms of performace?\n \nThank you for your time,\nRodrigo\n \n", "msg_date": "Mon, 31 Oct 2005 08:47:29 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to check for new data." } ]
[ { "msg_contents": "Greetings,\n\nWe are running some performance tests in which we are attempting to\ninsert about 100,000,000 rows in a database at a sustained rate. About\n50M rows in, our performance drops dramatically.\n\nThis test is with data that we believe to be close to what we will\nencounter in production. However in tests with purely generated,\nsequential data, we did not notice this slowdown. I'm trying to figure\nout what patterns in the \"real\" data may be causing us problems.\n\nI have log,data and indexes on separate LUNs on an EMC SAN. Prior to\nslowdown, each partition is writing at a consistent rate. Index\npartition is reading at a much lower rate. At the time of slowdown,\nindex partition read rate increases, all write rates decrease. CPU\nutilization drops.\n\nThe server is doing nothing aside from running the DB. It is a dual\nopteron (dual core, looks like 4 cpus) with 4GB RAM. shared_buffers =\n32768. fsync = off. Postgres version is 8.1.b4. OS is SuSE Enterprise\nserver 9.\n\nMy leading hypothesis is that one indexed column may be leading to our\nissue. The column in question is a varchar(12) column which is non-null\nin about 2% of the rows. The value of this column is 5 characters which\nare the same for every row, followed by a 7 character zero filled base\n36 integer. Thus, every value of this field will be exactly 12 bytes\nlong, and will be substantially the same down to the last bytes.\n\nCould this pattern be pessimal for a postgresql btree index? I'm\nrunning a test now to see if I can verify, but my runs take quite a long\ntime...\n\nIf this sounds like an unlikely culprit how can I go about tracking down\nthe issue?\n\nThanks,\n\n-K\n", "msg_date": "Mon, 31 Oct 2005 11:12:05 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": true, "msg_subject": "8.x index insert performance" }, { "msg_contents": "> We are running some performance tests in which we are attempting to\n> insert about 100,000,000 rows in a database at a sustained rate. About\n> 50M rows in, our performance drops dramatically.\n>\n> This test is with data that we believe to be close to what we will\n> encounter in production. However in tests with purely generated,\n> sequential data, we did not notice this slowdown. I'm trying to figure\n> out what patterns in the \"real\" data may be causing us problems.\n>\n> I have log,data and indexes on separate LUNs on an EMC SAN. Prior to\n> slowdown, each partition is writing at a consistent rate. Index\n> partition is reading at a much lower rate. At the time of slowdown,\n> index partition read rate increases, all write rates decrease. CPU\n> utilization drops.\n\nI'm doing some test-inserts (albeit with much fewer records) into\n8.0.4 (on FreeBSD 6.0 RC1) and the import-time decreased three-fold\nwhen I increased the below mentioned values:\n\nshared_buffers = 8192\ncommit_delay = 100000\ncommit_siblings = 1000\n\nWhen I increased shared_buffers the kernel needed minor tweaking.\n\nregards\nClaus\n", "msg_date": "Mon, 31 Oct 2005 20:02:26 +0100", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
[ { "msg_contents": "Kelly wrote:\n> We are running some performance tests in which we are attempting to\n> insert about 100,000,000 rows in a database at a sustained rate.\nAbout\n> 50M rows in, our performance drops dramatically.\n> \n> This test is with data that we believe to be close to what we will\n> encounter in production. However in tests with purely generated,\n> sequential data, we did not notice this slowdown. I'm trying to\nfigure\n> out what patterns in the \"real\" data may be causing us problems.\n> \n> I have log,data and indexes on separate LUNs on an EMC SAN. Prior to\n> slowdown, each partition is writing at a consistent rate. Index\n> partition is reading at a much lower rate. At the time of slowdown,\n> index partition read rate increases, all write rates decrease. CPU\n> utilization drops.\n> \n> The server is doing nothing aside from running the DB. It is a dual\n> opteron (dual core, looks like 4 cpus) with 4GB RAM. shared_buffers =\n> 32768. fsync = off. Postgres version is 8.1.b4. OS is SuSE\nEnterprise\n> server 9.\n> \n> My leading hypothesis is that one indexed column may be leading to our\n> issue. The column in question is a varchar(12) column which is\nnon-null\n> in about 2% of the rows. The value of this column is 5 characters\nwhich\n> are the same for every row, followed by a 7 character zero filled base\n> 36 integer. Thus, every value of this field will be exactly 12 bytes\n> long, and will be substantially the same down to the last bytes.\n> \n> Could this pattern be pessimal for a postgresql btree index? I'm\n> running a test now to see if I can verify, but my runs take quite a\nlong\n> time...\n> \n> If this sounds like an unlikely culprit how can I go about tracking\ndown\n> the issue?\n\nwell, can you defer index generation until after loading the set (or use\nCOPY?)\n\nif that index is causing the problem, you may want to consider setting\nup partial index to exclude null values.\n\nOne interesting thing to do would be to run your inserting process until\nslowdown happens, stop the process, and reindex the table and then\nresume it, and see if this helps.\n\nMerlin\n\n\n\n", "msg_date": "Mon, 31 Oct 2005 12:32:03 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "On Mon, 2005-10-31 at 12:32 -0500, Merlin Moncure wrote:\n> well, can you defer index generation until after loading the set (or use\n> COPY?)\n\nI cannot defer index generation.\n\nWe are using the copy API. Copying 10000 rows in a batch.\n\n> \n> if that index is causing the problem, you may want to consider setting\n> up partial index to exclude null values.\n\nThis is a single column index. I assumed that null column values were\nnot indexed. Is my assumption incorrect?\n\n-K\n", "msg_date": "Mon, 31 Oct 2005 11:51:46 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "On Mon, Oct 31, 2005 at 12:32:03PM -0500, Merlin Moncure wrote:\n> if that index is causing the problem, you may want to consider setting\n> up partial index to exclude null values.\n\nHey all.\n\nPardon my ignorance. :-)\n\nI've been trying to figure out whether null values are indexed or not from\nthe documentation. I was under the impression, that null values are not\nstored in the index. 
Occassionally, though, I then see a suggestion such\nas the above, that seems to indicate to me that null values *are* stored\nin the index, allowing for the 'exclude null values' to have effect?\n\nWhich is it? :-)\n\nThanks,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Mon, 31 Oct 2005 14:35:47 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "[email protected] writes:\n> I've been trying to figure out whether null values are indexed or not from\n> the documentation. I was under the impression, that null values are not\n> stored in the index.\n\nYou're mistaken, at least with regard to btree indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Oct 2005 15:30:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "On Mon, 2005-10-31 at 15:30 -0500, Tom Lane wrote:\n> [email protected] writes:\n> > I've been trying to figure out whether null values are indexed or not from\n> > the documentation. I was under the impression, that null values are not\n> > stored in the index.\n> \n> You're mistaken, at least with regard to btree indexes.\n\nHa! So I'm creating an index 98% full of nulls! Looks like this is\neasily fixed with partial indexes.\n\n-K\n", "msg_date": "Mon, 31 Oct 2005 14:59:51 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> Ha! So I'm creating an index 98% full of nulls! Looks like this is\n> easily fixed with partial indexes.\n\nStill, though, it's not immediately clear why you'd be seeing a severe\ndropoff in insert performance after 50M rows. Even though there are\nlots of nulls, I don't see why they'd behave any worse for insert speed\nthan real data. One would like to think that the insert speed would\nfollow a nice O(log N) rule.\n\nAre you doing the inserts all in one transaction, or several? If\nseveral, could you get a gprof profile of inserting the same number of\nrows (say a million or so) both before and after the unexpected dropoff\noccurs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Oct 2005 16:18:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "On Mon, 2005-10-31 at 16:18 -0500, Tom Lane wrote:\n> Kelly Burkhart <[email protected]> writes:\n> > Ha! So I'm creating an index 98% full of nulls! Looks like this is\n> > easily fixed with partial indexes.\n> \n> Still, though, it's not immediately clear why you'd be seeing a severe\n> dropoff in insert performance after 50M rows. Even though there are\n> lots of nulls, I don't see why they'd behave any worse for insert speed\n> than real data. One would like to think that the insert speed would\n> follow a nice O(log N) rule.\n> \n> Are you doing the inserts all in one transaction, or several? 
If\n> several, could you get a gprof profile of inserting the same number of\n> rows (say a million or so) both before and after the unexpected dropoff\n> occurs?\n\nI'm doing the inserts via libpq copy. Commits are in batches of approx\n15000 rows. I did a run last night after modifying the indexes and saw\nthe same pattern. I'm dumping the database now and will modify my test\nprogram to copy data from the dump rather than purely generated data.\nHopefully, this will allow me to reproduce the problem in a way that\ntakes less time to set up and run.\n\nTom, I'd be happy to profile the backend at several points in the run if\nyou think that would be helpful. What compiler flags should I use?\nCurrent settings in Makefile.global are:\n\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline\n-Wendif-labels -fno-strict-aliasing\n\nShould I change this to:\n\nCFLAGS = -g -pg -Wall ...\n\nOr should I leave the -O2 in?\n\nIt may be weekend by the time I get this done.\n\n-K\n", "msg_date": "Tue, 01 Nov 2005 07:33:49 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> Tom, I'd be happy to profile the backend at several points in the run if\n> you think that would be helpful. What compiler flags should I use?\n\nAdd -g -pg and leave the rest alone. Also, if you're on Linux note that\nyou need -DLINUX_PROFILE.\n\n> It may be weekend by the time I get this done.\n\nWell, it's probably too late to think of tweaking 8.1 anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Nov 2005 08:45:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "Second try... no attachment this time.\n\nI've finally gotten around to profiling the back end. Here is a more\nprecise description of what I'm doing:\n\nI am copying data into two tables, order_main and order_transition\n(table defs at the end of this post). The order_transition table has\nroughly double the number of rows as the order_main table.\n\nMy program is a C program using the libpq copy api which effectively\nsimulates our real application. It reads data from two data files, and\nappends copy-formatted data into two in-memory buffers. After 10,000\norder_transitions, it copies the order_main data, then the\norder_transition data, then commits. The test program is running on a\ndifferent machine than the DB.\n\nAfter each batch it writes a record to stdout with the amount of time it\ntook to copy and commit the data (time only includes pg time, not the\ntime it took to build the buffers). A graph showing the performance\ncharacteristics is here:\n\n<http://kkcsm.net/pgcpy.jpg>\n\nThe horizontal axis is number of transitions * 10000 that have been\nwritten. The vertical axis is time in milliseconds to copy and commit\nthe data. The commit time is very consistent up until about 60,000,000\nrows, then performance drops and times become much less consistent.\n\nI profiled the backend at three points, on batches 4, 6042 and 6067.\nThe first is right after start, the second is right before we hit the\nwall, and the third is one of the initial slow batches.\n\nI'm including inline the first 20 lines of gprof output for each batch.\nPlease let me know if this is insufficient. 
I'll supply any necessary\nfurther info.\n\nSince this thread is stale, I'll repeat relevant hardware/software\nstats: server is a dual, dual-core opteron with 4GB RAM. Disk is an\nEMC Symmetrix connected via FC. Data, index, logs on three separate\nLUNS. OS is SuSE Enterprise 9. Postgres version is 8.1.b4.\nshared_buffers=32768, fsync=off.\n\nThanks in advance for your help.\n\n-K\n\n---------------------------\n> head -n 20 gprof.txt.4.777.47\nFlat profile:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 10.92 0.38 0.38 55027 0.00 0.00 XLogInsert\n 6.90 0.62 0.24 702994 0.00 0.00 _bt_compare\n 5.46 0.81 0.19 2 0.10 1.64 DoCopy\n 4.60 0.97 0.16 16077 0.00 0.00 CopyReadLine\n 3.74 1.10 0.13 484243 0.00 0.00 bttextcmp\n 2.87 1.20 0.10 93640 0.00 0.00 _bt_binsrch\n 2.59 1.29 0.09 484243 0.00 0.00 varstr_cmp\n 2.59 1.38 0.09 364292 0.00 0.00 LWLockRelease\n 2.30 1.46 0.08 703394 0.00 0.00 FunctionCall2\n 2.01 1.53 0.07 138025 0.00 0.00 hash_any\n 2.01 1.60 0.07 133176 0.00 0.00 ReadBuffer\n 2.01 1.67 0.07 364110 0.00 0.00 LWLockAcquire\n 2.01 1.74 0.07 132563 0.00 0.00 PinBuffer\n 1.72 1.80 0.06 38950 0.00 0.00 _bt_insertonpg\n 1.72 1.86 0.06 38767 0.00 0.00 _bt_mkscankey\n\n---------------------------\n> head -n 20 gprof.txt.6042.1344.84 \nFlat profile:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 9.67 0.52 0.52 50431 0.00 0.00 XLogInsert\n 7.71 0.94 0.42 1045427 0.00 0.00 _bt_compare\n 5.95 1.26 0.32 713392 0.00 0.00 bttextcmp\n 4.28 1.49 0.23 1045814 0.00 0.00 FunctionCall2\n 3.35 1.67 0.18 155756 0.00 0.00 _bt_binsrch\n 2.60 1.81 0.14 713392 0.00 0.00 varstr_cmp\n 2.60 1.95 0.14 475524 0.00 0.00 LWLockAcquire\n 2.60 2.09 0.14 191837 0.00 0.00 ReadBuffer\n 2.60 2.23 0.14 2 0.07 2.52 DoCopy\n 2.60 2.37 0.14 197393 0.00 0.00 hash_search\n 2.60 2.51 0.14 197205 0.00 0.00 hash_any\n 2.23 2.63 0.12 190481 0.00 0.00 PinBuffer\n 2.04 2.74 0.11 345866 0.00 0.00 AllocSetAlloc\n 1.86 2.84 0.10 475788 0.00 0.00 LWLockRelease\n 1.86 2.94 0.10 29620 0.00 0.00 pg_localtime\n\n---------------------------\n> head -n 20 gprof.txt.6067.9883.31 \nFlat profile:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 17.17 1.14 1.14 51231 0.00 0.00 XLogInsert\n 10.82 1.85 0.72 1065556 0.00 0.00 _bt_compare\n 4.77 2.17 0.32 158378 0.00 0.00 _bt_binsrch\n 3.18 2.38 0.21 202921 0.00 0.00 hash_search\n 3.18 2.59 0.21 742891 0.00 0.00 bttextcmp\n 2.87 2.78 0.19 1485787 0.00 0.00 pg_detoast_datum\n 2.87 2.97 0.19 1065325 0.00 0.00 FunctionCall2\n 2.65 3.14 0.18 490373 0.00 0.00 LWLockAcquire\n 2.27 3.29 0.15 2 0.08 3.08 DoCopy\n 2.27 3.44 0.15 490908 0.00 0.00 LWLockRelease\n 1.97 3.57 0.13 195049 0.00 0.00 ReadBuffer\n 1.97 3.70 0.13 742891 0.00 0.00 varstr_cmp\n 1.66 3.81 0.11 462134 0.00 0.00 LockBuffer\n 1.51 3.91 0.10 191345 0.00 0.00 PinBuffer\n 1.51 4.01 0.10 195049 0.00 0.00 UnpinBuffer\n\n---------------------------\ncreate table order_main (\n ord_id varchar(12) not null,\n firm_id varchar not null,\n firm_sub_id varchar not null,\n cl_ord_id varchar not null,\n clearing_firm varchar not null,\n clearing_account varchar not null,\n symbol varchar not null,\n side varchar(1) not null,\n size integer not null,\n price numeric(10,4) not null,\n expire_time timestamp with time zone,\n flags varchar(7) not null\n);\n\ncreate unique index order_main_pk on order_main (\n ord_id\n) tablespace 
idx_space;\n\ncreate index order_main_ak1 on order_main (\n cl_ord_id\n) tablespace idx_space;\n\n\ncreate table order_transition (\n collating_seq bigint not null,\n ord_id varchar(12) not null,\n cl_ord_id varchar,\n sending_time timestamp with time zone not null,\n transact_time timestamp with time zone not null,\n flags varchar(6) not null,\n exec_id varchar(12),\n size integer,\n price numeric(10,4),\n remainder integer,\n contra varchar\n);\n\ncreate unique index order_transition_pk on order_transition (\n collating_seq\n) tablespace idx_space;\n\ncreate index order_transition_ak1 on order_transition (\n ord_id\n) tablespace idx_space;\n\ncreate index order_transition_ak2 on order_transition (\n cl_ord_id\n)\ntablespace idx_space\nwhere cl_ord_id is not null;\n\ncreate index order_transition_ak3 on order_transition (\n exec_id\n)\ntablespace idx_space\nwhere exec_id is not null;\n\n\n", "msg_date": "Thu, 10 Nov 2005 16:01:57 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> I've finally gotten around to profiling the back end.\n\nThanks for following up.\n\nThe sudden appearance of pg_detoast_datum() in the top ten in the third\nprofile is suspicious. I wouldn't expect that to get called at all,\nreally, during a normal COPY IN process. The only way I can imagine it\ngetting called is if you have index entries that require toasting, which\nseems a bit unlikely to start happening only after 60 million rows.\nIs it possible that the index keys are getting longer and longer as your\ntest run proceeds?\n\nCould you send me (off list) the complete gprof output files?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Nov 2005 17:18:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> On Thu, 2005-11-10 at 17:18 -0500, Tom Lane wrote:\n>> Could you send me (off list) the complete gprof output files?\n\n> Sure,\n\nThanks. Right offhand I can see no smoking gun here. The\npg_detoast_datum entry I was worried about seems to be just measurement\nnoise --- the gprof trace shows that it's called a proportional number\nof times in both cases, and it falls through without actually doing\nanything in all cases.\n\nThe later trace involves a slightly larger amount of time spent\ninserting into the indexes, which is what you'd expect as the indexes\nget bigger, but it doesn't seem that CPU time per se is the issue.\nThe just-before-the-cliff trace shows total CPU of 5.38 sec and the\nafter-the-cliff one shows 6.61 sec.\n\nWhat I now suspect is happening is that you \"hit the wall\" at the point\nwhere the indexes no longer fit into main memory and it starts taking\nsignificant I/O to search and update them. Have you tried watching\niostat or vmstat output to see if there's a noticeable increase in I/O\nat the point where things slow down? Can you check the physical size of\nthe indexes at that point, and see if it seems related to your available\nRAM?\n\nIf that is the correct explanation, then the only solutions I can see\nare (1) buy more RAM or (2) avoid doing incremental index updates;\nthat is, drop the indexes before bulk load and rebuild them afterwards.\n\nOne point to consider is that an index will be randomly accessed only\nif its data is being loaded in random order. 
If you're loading keys in\nsequential order then only the \"right-hand edge\" of the index would get\ntouched, and it wouldn't need much RAM. So, depending on what order\nyou're loading data in, the primary key index may not be contributing\nto the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Nov 2005 18:01:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> ... A graph showing the performance\n> characteristics is here:\n\n> <http://kkcsm.net/pgcpy.jpg>\n\nI hadn't looked at this chart till just now, but it sure seems to put a\ncrimp in my theory that you are running out of room to hold the indexes\nin RAM. That theory would predict that once you fall over the knee of\nthe curve, performance would get steadily worse; instead it gets\nmarkedly worse and then improves a bit. And there's another cycle of\nworse-and-better around 80M rows. I have *no* idea what's up with that.\nAnyone? Kelly, could there be any patterns in the data that might be\nrelated?\n\nThe narrow spikes look like they are probably induced by checkpoints.\nYou could check that by seeing if their spacing changes when you alter\ncheckpoint_segments and checkpoint_timeout. It might also be\nentertaining to make the bgwriter parameters more aggressive to see\nif you can ameliorate the spikes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Nov 2005 19:13:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "On Thu, 2005-11-10 at 19:13 -0500, Tom Lane wrote:\n> Kelly Burkhart <[email protected]> writes:\n> > ... A graph showing the performance\n> > characteristics is here:\n> \n> > <http://kkcsm.net/pgcpy.jpg>\n> \n> I hadn't looked at this chart till just now, but it sure seems to put a\n> crimp in my theory that you are running out of room to hold the indexes\n> in RAM. That theory would predict that once you fall over the knee of\n> the curve, performance would get steadily worse; instead it gets\n> markedly worse and then improves a bit. And there's another cycle of\n> worse-and-better around 80M rows. I have *no* idea what's up with that.\n> Anyone? Kelly, could there be any patterns in the data that might be\n> related?\n\nI modified my original program to insert generated, sequential data.\nThe following graph shows the results to be flat:\n\n<http://kkcsm.net/pgcpy_20051111_1.jpg>\n\nThus, hardware is sufficient to handle predictably sequential data.\nThere very well could be a pattern in the data which could affect\nthings, however, I'm not sure how to identify it in 100K rows out of\n100M.\n\nIf I could identify a pattern, what could I do about it? Could I do\nsome kind of a reversible transform on the data? Is it better to insert\nnearly random values? Or nearly sequential?\n\n\nI now have an 8G and a 16G machine I'm loading the data into. I'll\nreport back after that's done.\n\nI also want to try eliminating the order_main table, moving fields to\nthe transition table. 
This will reduce the number of index updates\nsignificantly at the cost of some wasted space in the table...\n\n-K\n", "msg_date": "Fri, 11 Nov 2005 16:48:25 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "Kelly Burkhart <[email protected]> writes:\n> On Thu, 2005-11-10 at 19:13 -0500, Tom Lane wrote:\n>> Kelly, could there be any patterns in the data that might be\n>> related?\n\n> I modified my original program to insert generated, sequential data.\n> The following graph shows the results to be flat:\n> <http://kkcsm.net/pgcpy_20051111_1.jpg>\n> Thus, hardware is sufficient to handle predictably sequential data.\n\nYeah, inserting sequentially increasing data would only ever touch the\nright-hand edge of the btree, so memory requirements would be pretty low\nand constant.\n\n> There very well could be a pattern in the data which could affect\n> things, however, I'm not sure how to identify it in 100K rows out of\n> 100M.\n\nI conjecture that the problem areas represent places where the key\nsequence is significantly \"more random\" than it is elsewhere. Hard\nto be more specific than that though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Nov 2005 18:02:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "On Fri, 2005-11-11 at 18:02 -0500, Tom Lane wrote:\n> > There very well could be a pattern in the data which could affect\n> > things, however, I'm not sure how to identify it in 100K rows out of\n> > 100M.\n> \n> I conjecture that the problem areas represent places where the key\n> sequence is significantly \"more random\" than it is elsewhere. Hard\n> to be more specific than that though.\n> \n\nOK, I understand the pattern now.\n\nMy two tables hold orders, and order state transitions. Most orders\nhave two transitions: creation and termination. The problem happens\nwhen there is a significant number of orders where termination is\nhappening a long time after creation, causing order_transition rows with\nold ord_id values to be inserted.\n\nThis is valid, so I have to figure out a way to accomodate it.\n\nYou mentioned playing with checkpointing and bgwriter earlier in this\nthread. I experimented with the bgwriter through the weekend, but I\ndon't have a good idea what sensible parameter changes are...\n\nRe: checkpointing, currently my checkpoints are happening every 5\nminutes (if I turn on fsync, the graph shows checkpoints dramatically).\nIf I increase the checkpoint_timeout, could that be beneficial? Or\nwould I just have more time between larger spikes? \n\n-K\n", "msg_date": "Mon, 14 Nov 2005 08:43:30 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "At 09:43 AM 11/14/2005, Kelly Burkhart wrote:\n>On Fri, 2005-11-11 at 18:02 -0500, Tom Lane wrote:\n> > > There very well could be a pattern in the data which could affect\n> > > things, however, I'm not sure how to identify it in 100K rows out of\n> > > 100M.\n> >\n> > I conjecture that the problem areas represent places where the key\n> > sequence is significantly \"more random\" than it is elsewhere. Hard\n> > to be more specific than that though.\n> >\n>\n>OK, I understand the pattern now.\n>\n>My two tables hold orders, and order state transitions. Most orders\n>have two transitions: creation and termination. 
The problem happens\n>when there is a significant number of orders where termination is\n>happening a long time after creation, causing order_transition rows with\n>old ord_id values to be inserted.\n>\n>This is valid, so I have to figure out a way to accomodate it.\nPerhaps a small schema change would help? Instead of having the \norder state transitions explicitly listed in the table, why not \ncreate two new tables; 1 for created orders and 1 for terminated \norders. When an order is created, its ord_id goes into the \nCreatedOrders table. When an order is terminated, its ord_id is \nadded to the TerminatedOrders table and then deleted from the \nCreatedOrders table.\n\nDownsides to this approach are some extra complexity and that you \nwill have to make sure that system disaster recovery includes making \nsure that no ord_id appears in both the CreatedOrders and \nTerminatedOrdes tables. Upsides are that the insert problem goes \naway and certain kinds of accounting and inventory reports are now \neasier to create.\n\nRon\n\n\n", "msg_date": "Mon, 14 Nov 2005 10:57:42 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
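A minimal sketch of the two-table layout Ron suggests, with illustrative column names (the real order attributes would come from order_main). The point, as described above, is that terminating an order becomes a short transaction that moves one row, instead of a late insert keyed by an old ord_id landing in the middle of a large index:

    create table created_orders (
        ord_id     varchar(12) primary key,
        created_at timestamp with time zone not null
    );

    create table terminated_orders (
        ord_id        varchar(12) primary key,
        terminated_at timestamp with time zone not null
    );

    insert into created_orders values ('ABCDE0000001', now());

    begin;
    insert into terminated_orders
        select ord_id, now() from created_orders where ord_id = 'ABCDE0000001';
    delete from created_orders where ord_id = 'ABCDE0000001';
    commit;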
[ { "msg_contents": "> > if that index is causing the problem, you may want to consider\nsetting\n> > up partial index to exclude null values.\n> \n> This is a single column index. I assumed that null column values were\n> not indexed. Is my assumption incorrect?\n> \n> -K\nIt turns out it is, or it certainly seems to be. I didn't know that :).\nSo partial index will probably not help for null exclusion...\n\nwould be interesting to see if you are getting swaps (check pg_tmp) when\nperformance breaks down. That is an easy fix, bump work_mem.\n\nMerlin\n", "msg_date": "Mon, 31 Oct 2005 14:13:29 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "On Mon, 2005-10-31 at 13:13, Merlin Moncure wrote:\n> > > if that index is causing the problem, you may want to consider\n> setting\n> > > up partial index to exclude null values.\n> > \n> > This is a single column index. I assumed that null column values were\n> > not indexed. Is my assumption incorrect?\n> > \n> > -K\n> It turns out it is, or it certainly seems to be. I didn't know that :).\n> So partial index will probably not help for null exclusion...\n> \n> would be interesting to see if you are getting swaps (check pg_tmp) when\n> performance breaks down. That is an easy fix, bump work_mem.\n\nOK, here's the issue in a nutshell.\n\nNULLS, like everything else, are indexed. HOWEVER, there's no way for\nthem to be used by a normal query, since =NULL is not a legal\nconstruct. So, you can't do something like:\n\nselect * from sometable where somefield = NULL\n\nbecause you won't get any answers, since nothing can equal NULL and\n\nselect * from sometable where somefield IS NULL won't work because IS is\nnot a nomally indexible operator.\n\nWhich is why you can create two indexes on a table to get around this\nlike so:\n\ncreate index iname1 on table (field) where field IS NULL\n\nand\n\ncreate index iname2 on table (field) where field IS NOT NULL\n\nAnd then the nulls are indexable by IS / IS NOT NULL.\n", "msg_date": "Mon, 31 Oct 2005 15:03:53 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
[ { "msg_contents": "> On Mon, Oct 31, 2005 at 12:32:03PM -0500, Merlin Moncure wrote:\n> > if that index is causing the problem, you may want to consider\nsetting\n> > up partial index to exclude null values.\n> \n> Hey all.\n> \n> Pardon my ignorance. :-)\n> \n> I've been trying to figure out whether null values are indexed or not\nfrom\n> the documentation. I was under the impression, that null values are\nnot\n> stored in the index. Occassionally, though, I then see a suggestion\nsuch\n> as the above, that seems to indicate to me that null values *are*\nstored\n> in the index, allowing for the 'exclude null values' to have effect?\n> \n> Which is it? :-)\n\nI think I'm the ignorant one...do explain on any lookup on an indexed\nfield where the field value is null and you get a seqscan.\n\nMerlin\n", "msg_date": "Mon, 31 Oct 2005 15:27:31 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "On Mon, Oct 31, 2005 at 03:27:31PM -0500, Merlin Moncure wrote:\n> > On Mon, Oct 31, 2005 at 12:32:03PM -0500, Merlin Moncure wrote:\n> > > if that index is causing the problem, you may want to consider setting\n> > > up partial index to exclude null values.\n> > Hey all.\n> > Pardon my ignorance. :-)\n> > I've been trying to figure out whether null values are indexed or not from\n> > the documentation. I was under the impression, that null values are not\n> > stored in the index. Occassionally, though, I then see a suggestion such\n> > as the above, that seems to indicate to me that null values *are* stored\n> > in the index, allowing for the 'exclude null values' to have effect?\n> > Which is it? :-)\n> I think I'm the ignorant one...do explain on any lookup on an indexed\n> field where the field value is null and you get a seqscan.\n\nNahhh... I think the documentation could use more explicit or obvious\nexplanation. Or, I could have checked the source code to see. In any case,\nI expect we aren't the only ones that lacked confidence.\n\nTom was kind enough to point out that null values are stored. I expect\nthat the seqscan is used if the null values are not selective enough,\nthe same as any other value that isn't selective enough.\n\nNow we can both have a little more confidence! :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Mon, 31 Oct 2005 18:57:09 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
[ { "msg_contents": "> [email protected] writes:\n> > I've been trying to figure out whether null values are indexed or\nnot\n> from\n> > the documentation. I was under the impression, that null values are\nnot\n> > stored in the index.\n> \n> You're mistaken, at least with regard to btree indexes.\n\nhmm. I tried several different ways to filter/extract null values from\nan indexed key and got a seq scan every time. The only way I could\nquery for/against null values was to convert to bool via function.\n\nHowever I did a partial exclusion on a 1% non null value really big\ntable and index size dropped as expected.\n\nMerlin\n", "msg_date": "Mon, 31 Oct 2005 16:01:34 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> You're mistaken, at least with regard to btree indexes.\n\n> hmm. I tried several different ways to filter/extract null values from\n> an indexed key and got a seq scan every time.\n\nI said they were stored, not that you could query against them ;-)\nIS NULL isn't considered an indexable operator, mainly because it's\nnot an operator at all in the strict sense of the word; and our index\naccess APIs only support querying on indexable operators.\n\nThe reason they're stored is that they have to be in order to make\nmulti-column indexes work right. I suppose we could special-case\nsingle-column indexes, but we don't. In any case, it's more likely\nthat someone would one day get around to making IS NULL an indexable\noperator than that we'd insert a special case like that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Oct 2005 16:08:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance " } ]
[ { "msg_contents": "> select * from sometable where somefield IS NULL won't work because IS\nis\n> not a nomally indexible operator.\n\nAh, I didn't know that. So there is no real reason not to exclude null\nvalues from all your indexes :). Reading Tom's recent comments\neverything is clear now.\n\nInstead of using your two index approach I prefer to:\ncreate function nullidx(anyelement) returns boolean as $$ select $1 is\nnull; $$ language\nsql immutable;\n\ncreate index on t(nullidx(f)); -- etc\n\nMerlin\n", "msg_date": "Mon, 31 Oct 2005 16:10:57 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n\n> > select * from sometable where somefield IS NULL won't work because IS\n> is\n> > not a nomally indexible operator.\n> \n> Ah, I didn't know that. So there is no real reason not to exclude null\n> values from all your indexes :). Reading Tom's recent comments\n> everything is clear now.\n\nThere are other reasons. If you want a query like \n\n SELECT * FROM tab ORDER BY col LIMIT 10\n\nto use an index on col then it can't exclude NULLs or else it wouldn't be\nuseful. (Oracle actually has this problem, you frequently have to add WHERE\ncol IS NOT NULL\" in order to let it use an index.)\n\n\n-- \ngreg\n\n", "msg_date": "02 Nov 2005 12:26:25 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
[ { "msg_contents": "We're running 8.1beta3 on one server and are having ridiculous performance \nissues. This is a 2 cpu Opteron box and both processors are staying at 98 \nor 99% utilization processing not-that-complex queries. Prior to the \nupgrade, our I/O wait time was about 60% and cpu utilization rarely got very \nhigh, now I/O wait time is at or near zero.\n\nI'm planning to go back to 8.0 tonight or tomorrow night but thought I'd \ncheck the pqsql-performance prophets before I gave it up. \n\n\n", "msg_date": "Mon, 31 Oct 2005 17:16:46 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.1beta3 performance" }, { "msg_contents": "On Mon, Oct 31, 2005 at 05:16:46PM -0600, PostgreSQL wrote:\n> We're running 8.1beta3 on one server and are having ridiculous performance \n> issues. This is a 2 cpu Opteron box and both processors are staying at 98 \n> or 99% utilization processing not-that-complex queries. Prior to the \n> upgrade, our I/O wait time was about 60% and cpu utilization rarely got very \n> high, now I/O wait time is at or near zero.\n\nIt sounds like some query got planned a different way that happened to be\nreally suboptimal -- I've seen really bad queries be quick on earlier\nversions \"by accident\" and then not have the same luck on later versions.\n\nCould you find out what queries are taking so long (use\nlog_min_duration_statement), and post table definitions and EXPLAIN ANALYZE\noutput here?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 1 Nov 2005 03:00:26 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance" }, { "msg_contents": "On Mon, 2005-31-10 at 17:16 -0600, PostgreSQL wrote:\n> We're running 8.1beta3 on one server and are having ridiculous performance \n> issues. This is a 2 cpu Opteron box and both processors are staying at 98 \n> or 99% utilization processing not-that-complex queries. Prior to the \n> upgrade, our I/O wait time was about 60% and cpu utilization rarely got very \n> high, now I/O wait time is at or near zero.\n\nHave you done anything to verify that this is actually a problem with\n8.1, and not some other change that was made as part of the upgrade\nprocess? For example, if ANALYZE hasn't been re-run, that could cause\nthe plans chosen by the optimizer to be completely different.\n\n-Neil\n\n\n", "msg_date": "Mon, 31 Oct 2005 21:06:15 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance" }, { "msg_contents": "On Mon, 31 Oct 2005 17:16:46 -0600\n\"PostgreSQL\" <[email protected]> wrote:\n\n> We're running 8.1beta3 on one server and are having ridiculous\n> performance issues. This is a 2 cpu Opteron box and both processors\n> are staying at 98 or 99% utilization processing not-that-complex\n> queries. Prior to the upgrade, our I/O wait time was about 60% and\n> cpu utilization rarely got very high, now I/O wait time is at or near\n> zero.\n> \n> I'm planning to go back to 8.0 tonight or tomorrow night but thought\n> I'd check the pqsql-performance prophets before I gave it up. \n\nI have a stock FreeBSD 5.4 box that I put 8.1 on last night. I ran\npgbench against it and my tps dropped from ~300tps in 8.0.3 to 20tps\nin 8.1. That's right. 20. No changes in any system configuration. No\ndata in the new 8.1 database, only the pgbench init'ed stuff. 
25\nclients, 100 and 1000 transactions with a scaling factor of 10, which\ngives me 1,000,000 tuples to shoot through. \n\nI wiped out the 8.1 installation, put 8.0.4 in it's place, and\npgbenched it again. ~300tps again.\n\nIt's not a problem with system configuration if 8.0 works fine, but 8.1\nhas problems, unless there is something that 8.1 needs tweaked that 8.0\ndoesn't. In that case, I just need to know what that is and I can tweak\nit.\n\nDual Xeon 2.6GB HTT PowerEdge, 4GB RAM, RAID 5\nFreeBSD 5.4 RELEASE, custom-compiled kernel\nCFLAGS=-O3 -funroll-loops -pipe (also tried -O2, same difference)\n\nJon Brisbin\nWebmeister\nNPC International, Inc.\n", "msg_date": "Tue, 1 Nov 2005 08:50:37 -0600", "msg_from": "Jon Brisbin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance" }, { "msg_contents": "\n\n\nOn 1/11/05 2:50 pm, \"Jon Brisbin\" <[email protected]> wrote:\n\n> On Mon, 31 Oct 2005 17:16:46 -0600\n> \"PostgreSQL\" <[email protected]> wrote:\n> \n>> We're running 8.1beta3 on one server and are having ridiculous\n>> performance issues. This is a 2 cpu Opteron box and both processors\n>> are staying at 98 or 99% utilization processing not-that-complex\n>> queries. Prior to the upgrade, our I/O wait time was about 60% and\n>> cpu utilization rarely got very high, now I/O wait time is at or near\n>> zero.\n>> \n>> I'm planning to go back to 8.0 tonight or tomorrow night but thought\n>> I'd check the pqsql-performance prophets before I gave it up.\n> \n> I have a stock FreeBSD 5.4 box that I put 8.1 on last night. I ran\n> pgbench against it and my tps dropped from ~300tps in 8.0.3 to 20tps\n> in 8.1. That's right. 20. No changes in any system configuration. No\n> data in the new 8.1 database, only the pgbench init'ed stuff. 25\n> clients, 100 and 1000 transactions with a scaling factor of 10, which\n> gives me 1,000,000 tuples to shoot through.\n> \n> I wiped out the 8.1 installation, put 8.0.4 in it's place, and\n> pgbenched it again. ~300tps again.\n> \n> It's not a problem with system configuration if 8.0 works fine, but 8.1\n> has problems, unless there is something that 8.1 needs tweaked that 8.0\n> doesn't. In that case, I just need to know what that is and I can tweak\n> it.\n\nHi Jon,\n\nDid you run the bundled version of pgbench against it's own installation?\nThere we some changes to pgbench for 8.1, and I have to wonder (bearing in\nmind I haven't really looked at them) whether they could be affecting things\nin any way. Do you get comparable results running the 8.0 pgbench against\nboth server versions?\n\nRegards, Dave \n\n\n", "msg_date": "Tue, 01 Nov 2005 15:49:14 +0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance" }, { "msg_contents": "Jon Brisbin <[email protected]> writes:\n> I have a stock FreeBSD 5.4 box that I put 8.1 on last night. I ran\n> pgbench against it and my tps dropped from ~300tps in 8.0.3 to 20tps\n> in 8.1. That's right. 20. No changes in any system configuration.\n\nYou sure about that last? 
These numbers are kind of consistent with the\nidea that fsync is off in the 8.0 database and on in the 8.1 database.\n\nUsing the same test case you mention (pgbench -s 10, -c 25 -t 1000),\nI find that 8.1 is a bit faster than 8.0, eg\n\n8.1 fsync off:\ntps = 89.831186 (including connections establishing)\ntps = 89.865065 (excluding connections establishing)\n\n8.1 fsync on:\ntps = 74.865078 (including connections establishing)\ntps = 74.889066 (excluding connections establishing)\n\n8.0 fsync off:\ntps = 80.271338 (including connections establishing)\ntps = 80.302054 (excluding connections establishing)\n\n8.0 fsync on:\ntps = 67.405708 (including connections establishing)\ntps = 67.426546 (excluding connections establishing)\n\n(All database parameters are defaults except fsync.)\n\nThese numbers are with assert-enabled builds, on a cheap PC whose drive\nlies about write-complete, so they're not very representative of the\nreal world I suppose. But I'm sure not seeing any 10x degradation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Nov 2005 12:24:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance " }, { "msg_contents": "I'm seeing some other little oddities in the beta as well. I'm watching an \nALTER TABLE ADD COLUMN right now that has been running almost two hours. I \nstopped it the first time at 1 hour; I suppose I'll let it go this time and \nsee if it ever completes. The table is about 150K rows. Top, vmstat, and \niostat show almost no cpu or disk activity (1 to 3%) - it's as if it just \nwent to sleep.\n\n\"Tom Lane\" <[email protected]> wrote in message \nnews:[email protected]...\n> Jon Brisbin <[email protected]> writes:\n>> I have a stock FreeBSD 5.4 box that I put 8.1 on last night. I ran\n>> pgbench against it and my tps dropped from ~300tps in 8.0.3 to 20tps\n>> in 8.1. That's right. 20. No changes in any system configuration.\n>\n> You sure about that last? These numbers are kind of consistent with the\n> idea that fsync is off in the 8.0 database and on in the 8.1 database.\n>\n> Using the same test case you mention (pgbench -s 10, -c 25 -t 1000),\n> I find that 8.1 is a bit faster than 8.0, eg\n>\n> 8.1 fsync off:\n> tps = 89.831186 (including connections establishing)\n> tps = 89.865065 (excluding connections establishing)\n>\n> 8.1 fsync on:\n> tps = 74.865078 (including connections establishing)\n> tps = 74.889066 (excluding connections establishing)\n>\n> 8.0 fsync off:\n> tps = 80.271338 (including connections establishing)\n> tps = 80.302054 (excluding connections establishing)\n>\n> 8.0 fsync on:\n> tps = 67.405708 (including connections establishing)\n> tps = 67.426546 (excluding connections establishing)\n>\n> (All database parameters are defaults except fsync.)\n>\n> These numbers are with assert-enabled builds, on a cheap PC whose drive\n> lies about write-complete, so they're not very representative of the\n> real world I suppose. But I'm sure not seeing any 10x degradation.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Wed, 2 Nov 2005 03:32:45 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.1beta3 performance" }, { "msg_contents": "\"PostgreSQL\" <[email protected]> writes:\n> I'm seeing some other little oddities in the beta as well. 
I'm watching an \n> ALTER TABLE ADD COLUMN right now that has been running almost two hours. I \n> stopped it the first time at 1 hour; I suppose I'll let it go this time and \n> see if it ever completes. The table is about 150K rows. Top, vmstat, and \n> iostat show almost no cpu or disk activity (1 to 3%) - it's as if it just \n> went to sleep.\n\nYou sure it's not blocked on a lock? Check pg_locks ... if that sheds\nno light, try attaching to the backend process with gdb and getting a\nstack trace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Nov 2005 09:41:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1beta3 performance " } ]
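Two quick checks that cover the suggestions in this thread before blaming the new version itself: confirm that the fsync setting really is the same on both installations, and look for ungranted locks when something like the ALTER TABLE appears to hang. A sketch using the standard views (column set as of 8.1):

    show fsync;

    select pid, locktype, relation::regclass as relation, mode, granted
    from pg_locks
    where not granted;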
[ { "msg_contents": "Hi,\n\nI am trying to optimize my Debian Sarge AMD64 PostgreSQL 8.0\ninstallation, based on the recommendations from \"the Annotated\nPOSTGRESQL.CONF Guide for\nPostgreSQL\" (http://www.powerpostgresql.com/Downloads/annotated_conf_80.html). To see the result of the recommendations I use pgbench from postgresql-contrib. \n\nI have 3 questions about pgbench:\n\n1. Is there a repository somewhere that shows results, using and\ndocumenting different kinds of hard- and software setups so that I can\ncompare my results with someone elses?\n\n2. Is there a reason for the difference in values from run-to-run of\npgbench:\n\nThe command I used (nothing else is done on the machine, not even mouse\nmovement):\njkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 1000 test\n\nResults for 4 consecutive runs:\n\ntps = 272.932982 (including connections establishing)\ntps = 273.262622 (excluding connections establishing)\n\ntps = 199.501426 (including connections establishing)\ntps = 199.674937 (excluding connections establishing)\n\ntps = 400.462117 (including connections establishing)\ntps = 401.218291 (excluding connections establishing)\n\ntps = 223.695331 (including connections establishing)\ntps = 223.919031 (excluding connections establishing)\n\n3. It appears that running more transactions with the same amount of\nclients leads to a drop in the transactions per second. I do not\nunderstand why this is (a drop from more clients I do understand). Is\nthis because of the way pgbench works, the way PostgrSQL works or even\nLinux?\n\njkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 10 test\ntps = 379.218809 (including connections establishing)\ntps = 461.968448 (excluding connections establishing)\n\njkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 100 test\ntps = 533.878031 (including connections establishing)\ntps = 546.571141 (excluding connections establishing)\n\njkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 1000 test\ntps = 204.344440 (including connections establishing)\ntps = 204.533627 (excluding connections establishing)\n\njkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 10000 test\ntps = 121.486803 (including connections establishing)\ntps = 121.493681 (excluding connections establishing)\n\n\nTIA\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 01 Nov 2005 09:03:09 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "pgbench results interpretation?" }, { "msg_contents": "On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:\n\n> Hi,\n>\n> I am trying to optimize my Debian Sarge AMD64 PostgreSQL 8.0\n> installation, based on the recommendations from \"the Annotated\n> POSTGRESQL.CONF Guide for\n> PostgreSQL\" (http://www.powerpostgresql.com/Downloads/annotated_conf_80.html). To see the result of the recommendations I use pgbench from postgresql-contrib.\n>\n> I have 3 questions about pgbench:\n>\n> 1. Is there a repository somewhere that shows results, using and\n> documenting different kinds of hard- and software setups so that I can\n> compare my results with someone elses?\n\nOther than the archives of this mailing list, no.\n\n>\n> 2. 
Is there a reason for the difference in values from run-to-run of\n> pgbench:\n>\n> The command I used (nothing else is done on the machine, not even mouse\n> movement):\n> jkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 1000 test\n>\n> Results for 4 consecutive runs:\n>\n> tps = 272.932982 (including connections establishing)\n> tps = 273.262622 (excluding connections establishing)\n>\n> tps = 199.501426 (including connections establishing)\n> tps = 199.674937 (excluding connections establishing)\n>\n> tps = 400.462117 (including connections establishing)\n> tps = 401.218291 (excluding connections establishing)\n>\n> tps = 223.695331 (including connections establishing)\n> tps = 223.919031 (excluding connections establishing)\n\nWell, firstly: pgbench is not a good benchmarking tool. It is mostly used\nto generate load. Secondly, the numbers are suspicious: do you have fsync\nturned off? Do you have write caching enabled? If so, you'd want to make\nsure that cache is battery backed. Thirdly, the effects of caching will be\nseen on subsequent runs.\n\n>\n> 3. It appears that running more transactions with the same amount of\n> clients leads to a drop in the transactions per second. I do not\n> understand why this is (a drop from more clients I do understand). Is\n> this because of the way pgbench works, the way PostgrSQL works or even\n> Linux?\n>\n> jkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 10 test\n> tps = 379.218809 (including connections establishing)\n> tps = 461.968448 (excluding connections establishing)\n>\n> jkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 100 test\n> tps = 533.878031 (including connections establishing)\n> tps = 546.571141 (excluding connections establishing)\n\nWell, at this rate pgbench is only running for 2 seconds!\n\n>\n> jkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 1000 test\n> tps = 204.344440 (including connections establishing)\n> tps = 204.533627 (excluding connections establishing)\n>\n> jkr@Panoramix:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 10000 test\n> tps = 121.486803 (including connections establishing)\n> tps = 121.493681 (excluding connections establishing)\n>\n\n\nThis degradation seems to suggest effects caused by the disk cache filling\nup (assuming write caching is enabled) and checkpointing.\n\nHope this helps.\n\nGavin\n", "msg_date": "Tue, 1 Nov 2005 20:16:58 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench results interpretation?" }, { "msg_contents": "Hi Gavin,\n\nThanks for answering.\n\nOn Tue, 2005-11-01 at 20:16 +1100, Gavin Sherry wrote:\n> On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:\n> > 1. Is there a repository somewhere that shows results, using and\n> > documenting different kinds of hard- and software setups so that I can\n> > compare my results with someone elses?\n> \n> Other than the archives of this mailing list, no.\nOK.\n\n> >\n> > 2. Is there a reason for the difference in values from run-to-run of\n> > pgbench:\n> Well, firstly: pgbench is not a good benchmarking tool. \nIs there a reason why that is the case? I would like to understand why?\nIs it because the transaction is to small/large? Or that the queries are\nto small/large? Or just experience?\n\n> It is mostly used\n> to generate load. Secondly, the numbers are suspicious: do you have fsync\n> turned off? \nIn the first trials I posted yes, in the second no.\n\n> Do you have write caching enabled? 
If so, you'd want to make\n> sure that cache is battery backed. \nI am aware of that, but for now, I am mostly interested in the effects\nof the configuration parameters. I won't do this at home ;-)\n\n\n> Thirdly, the effects of caching will be\n> seen on subsequent runs.\nIn that case I would expect mostly rising values. I only copied and\npasted 4 trials that were available in my xterm at the time of writing\nmy email, but I could expand the list ad infinitum: the variance between\nthe runs is very large. I also expect that if there is no shortage of\nmemory wrt caching that the effect would be negligible, but I may be\nwrong. Part of using pgbench is learning about performance, not\nachieving it.\n\n> > 3. It appears that running more transactions with the same amount of\n> > clients leads to a drop in the transactions per second. I do not\n> > understand why this is (a drop from more clients I do understand). Is\n> > this because of the way pgbench works, the way PostgrSQL works or even\n> > Linux?\n> This degradation seems to suggest effects caused by the disk cache filling\n> up (assuming write caching is enabled) and checkpointing.\nWhich diskcache are your referring to? The onboard harddisk or RAID5\ncontroller caches or the OS cache? The first two I can unstand but if\nyou refer to the OS cache I do not understand what I am seeing. I have\nenough memory giving the size of the database: during these duration (~)\ntests fsync was on, and the files could be loaded into memory easily\n(effective_cache_size = 32768 which is ~ 265 MB, the complete database\ndirectory 228 MB)\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 01 Nov 2005 11:05:42 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgbench results interpretation?" }, { "msg_contents": "On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:\n\n> Hi Gavin,\n>\n> Thanks for answering.\n>\n> On Tue, 2005-11-01 at 20:16 +1100, Gavin Sherry wrote:\n> > On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:\n> > > 1. Is there a repository somewhere that shows results, using and\n> > > documenting different kinds of hard- and software setups so that I can\n> > > compare my results with someone elses?\n> >\n> > Other than the archives of this mailing list, no.\n> OK.\n>\n> > >\n> > > 2. Is there a reason for the difference in values from run-to-run of\n> > > pgbench:\n> > Well, firstly: pgbench is not a good benchmarking tool.\n> Is there a reason why that is the case? I would like to understand why?\n> Is it because the transaction is to small/large? Or that the queries are\n> to small/large? Or just experience?\n>\n> > It is mostly used\n> > to generate load. Secondly, the numbers are suspicious: do you have fsync\n> > turned off?\n> In the first trials I posted yes, in the second no.\n>\n> > Do you have write caching enabled? If so, you'd want to make\n> > sure that cache is battery backed.\n> I am aware of that, but for now, I am mostly interested in the effects\n> of the configuration parameters. I won't do this at home ;-)\n\nWell, pgbench (tpc-b) suffers from inherent concurrency issues because all\nconnections are updating the branches table heavily. 
As an aside, did you\ninitialise with a scaling factor of 10 to match your level of concurrency?\n\n>\n>\n> > Thirdly, the effects of caching will be\n> > seen on subsequent runs.\n> In that case I would expect mostly rising values. I only copied and\n> pasted 4 trials that were available in my xterm at the time of writing\n> my email, but I could expand the list ad infinitum: the variance between\n> the runs is very large. I also expect that if there is no shortage of\n> memory wrt caching that the effect would be negligible, but I may be\n> wrong. Part of using pgbench is learning about performance, not\n> achieving it.\n\nRight. it is well known that performance with pgbench can vary wildly. I\nusually get a lot less variation than you are getting. My point is though,\nit's not a great indication of performance. I generally simulate the\nreal application running in production and test configuration changes with\nthat. The hackers list archive also contains links to the testing Mark\nWong has been doing at OSDL with TPC-C and TPC-H. Taking a look at the\nconfiguration file he is using, along with the annotated postgresql.conf,\nwould be useful, depending on the load you're antipating and your\nhardware.\n\n>\n> > > 3. It appears that running more transactions with the same amount of\n> > > clients leads to a drop in the transactions per second. I do not\n> > > understand why this is (a drop from more clients I do understand). Is\n> > > this because of the way pgbench works, the way PostgrSQL works or even\n> > > Linux?\n> > This degradation seems to suggest effects caused by the disk cache filling\n> > up (assuming write caching is enabled) and checkpointing.\n> Which diskcache are your referring to? The onboard harddisk or RAID5\n> controller caches or the OS cache? The first two I can unstand but if\n> you refer to the OS cache I do not understand what I am seeing. I have\n> enough memory giving the size of the database: during these duration (~)\n> tests fsync was on, and the files could be loaded into memory easily\n> (effective_cache_size = 32768 which is ~ 265 MB, the complete database\n> directory 228 MB)\n\nWell, two things may be at play. 1) if you are using write caching on your\ncontroller/disks then at the point at which that cache fills up\nperformance will degrade to roughly that you can expect if write through\ncache was being used. Secondly, we checkpoint the system periodically to\nensure that recovery wont be too long a job. Running for pgbench for a few\nseconds, you will not see the effect of checkpointing, which usually runs\nonce every 5 minutes.\n\nHope this helps.\n\nThanks,\n\nGavin\n", "msg_date": "Wed, 2 Nov 2005 21:16:27 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench results interpretation?" }, { "msg_contents": "On Wed, 2005-11-02 at 21:16 +1100, Gavin Sherry wrote:\n> connections are updating the branches table heavily. As an aside, did you\n> initialise with a scaling factor of 10 to match your level of concurrency?\nYep, I did.\n\n\n> that. The hackers list archive also contains links to the testing Mark\n> Wong has been doing at OSDL with TPC-C and TPC-H. Taking a look at the\n> configuration file he is using, along with the annotated postgresql.conf,\n> would be useful, depending on the load you're antipating and your\n> hardware.\nI will look into that project.\n\n> Well, two things may be at play. 
1) if you are using write caching on your\n> controller/disks then at the point at which that cache fills up\n> performance will degrade to roughly that you can expect if write through\n> cache was being used. Secondly, we checkpoint the system periodically to\n> ensure that recovery wont be too long a job. Running for pgbench for a few\n> seconds, you will not see the effect of checkpointing, which usually runs\n> once every 5 minutes.\nI still think it is strange. Simple tests with tar suggest that I could\neasily do 600-700 tps at 50.000 KB/second ( as measured by iostat), a\ntest with bonnie++ measured throughputs > 40.000 KB/sec during very long\ntimes, with 1723 - 2121 operations per second. These numbers suggest\nthat PostgreSQL is not using all it could from the hardware. Processor\nload however is negligible during the pgbench tests.\n\nAs written before, I will look into the OSDL benchmarks. Maybe they are\nmore suited for my needs: *understanding* performance determinators.\n\n> \n> Hope this helps.\nYou certainly did, thanks.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Wed, 02 Nov 2005 15:41:31 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgbench results interpretation?" } ]
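Before comparing pgbench runs it helps to capture the handful of settings this thread keeps coming back to. A simple way, assuming nothing beyond the standard pg_settings view:

    select name, setting
    from pg_settings
    where name in ('fsync', 'wal_sync_method', 'shared_buffers',
                   'checkpoint_segments', 'checkpoint_timeout',
                   'bgwriter_delay');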
[ { "msg_contents": "We are going live with a application in a few months that is a complete \nrewrite of an existing application. We are moving from an existing \nproprietary database to Postgresql. We are looking for some \ninsight/suggestions as to how folks test Postgresql in such a situation.\n\nWe really want to run it throught the wringer before going live. I'm \nthrowing together a test suite that consists of mostly perl scripts. \nI'm wondering what other, if any approaches folks have taken in a \nsimilar situation. I know there's nothing like a real live test with \nreal users, and that will happen, but we want to do some semi-automated \nload testing prior.\n\nAnyone ever use any profiling apps (gprof) with any success?\n\nWe've got a failover cluster design and would like any insights here as \nwell.\n\nWe're also trying to decide whether a single database with multiple \nschemas or multiple databases are the best solution. We've done some \nresearch on this through the archives, and the answer seems to depend on \nthe database/application design. Still, we welcome any generic ideas on \nthis issue as well.\n\nI've not provided any specifics on hardware or application as we really \nwant high level stuff at this time.\n\nThanks for any pointers or suggestions.\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Tue, 01 Nov 2005 09:14:50 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": true, "msg_subject": "solutions for new Postgresql application testing" } ]
[ { "msg_contents": "I'm surprised that no one seems to have yet suggested the following\nsimple experiment:\n\nIncrease the RAM 4GB -> 8GB, tune for best performance, and\nrepeat your 100M row insert experiment.\n\nDoes overall insert performance change? Does the performance\ndrop <foo> rows in still occur? Does it occur in ~ the same place?\nEtc.\n \nIf the effect does seem to be sensitive to the amount of RAM in the\nserver, it might be worth redoing the experiment(s) with 2GB and\n16GB as well...\n\nron\n\n-----Original Message-----\nFrom: Kelly Burkhart <[email protected]>\nSent: Oct 31, 2005 12:12 PM\nTo: [email protected]\nSubject: [PERFORM] 8.x index insert performance\n\nGreetings,\n\nWe are running some performance tests in which we are attempting to\ninsert about 100,000,000 rows in a database at a sustained rate. About\n50M rows in, our performance drops dramatically.\n\nThis test is with data that we believe to be close to what we will\nencounter in production. However in tests with purely generated,\nsequential data, we did not notice this slowdown. I'm trying to figure\nout what patterns in the \"real\" data may be causing us problems.\n\nI have log,data and indexes on separate LUNs on an EMC SAN. Prior to\nslowdown, each partition is writing at a consistent rate. Index\npartition is reading at a much lower rate. At the time of slowdown,\nindex partition read rate increases, all write rates decrease. CPU\nutilization drops.\n\nThe server is doing nothing aside from running the DB. It is a dual\nopteron (dual core, looks like 4 cpus) with 4GB RAM. shared_buffers =\n32768. fsync = off. Postgres version is 8.1.b4. OS is SuSE Enterprise\nserver 9.\n\nMy leading hypothesis is that one indexed column may be leading to our\nissue. The column in question is a varchar(12) column which is non-null\nin about 2% of the rows. The value of this column is 5 characters which\nare the same for every row, followed by a 7 character zero filled base\n36 integer. Thus, every value of this field will be exactly 12 bytes\nlong, and will be substantially the same down to the last bytes.\n\nCould this pattern be pessimal for a postgresql btree index? I'm\nrunning a test now to see if I can verify, but my runs take quite a long\ntime...\n\nIf this sounds like an unlikely culprit how can I go about tracking down\nthe issue?\n\nThanks,\n\n-K\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Tue, 1 Nov 2005 10:37:35 -0500 (GMT-05:00)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.x index insert performance" }, { "msg_contents": "On Tue, 2005-11-01 at 10:37 -0500, Ron Peacetree wrote:\n> I'm surprised that no one seems to have yet suggested the following\n> simple experiment:\n> \n> Increase the RAM 4GB -> 8GB, tune for best performance, and\n> repeat your 100M row insert experiment.\n> \n> Does overall insert performance change? Does the performance\n> drop <foo> rows in still occur? Does it occur in ~ the same place?\n> Etc.\n> \n> If the effect does seem to be sensitive to the amount of RAM in the\n> server, it might be worth redoing the experiment(s) with 2GB and\n> 16GB as well...\n\nRon,\n\nI would like to try this, however, since I'm sitting about 1000 miles\naway from the server, tweaking things is not as simple as one might\nhope. I would also like to understand what is going on before I start\nchanging things. 
If I can't get a satisfactory explanation for what I'm\nseeing with current hardware, I'll have memory added and see what\nhappens.\n\n-K\n", "msg_date": "Thu, 10 Nov 2005 14:46:48 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.x index insert performance" } ]
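Not part of the thread, but a small sketch of how the suspected pattern could be reproduced on a test box: a mostly-NULL varchar(12) column whose non-NULL values share a constant 5-character prefix followed by a zero-padded suffix. Table, index, and column names are made up, and the suffix here is zero-padded decimal rather than base 36, which should be close enough to exercise the same index behaviour:

CREATE TABLE ins_test (id bigint, ext_id varchar(12));
CREATE INDEX ins_test_ext_id_idx ON ins_test (ext_id);

-- roughly 2% of rows non-NULL, constant prefix, 7-character padded suffix
INSERT INTO ins_test
SELECT g,
       CASE WHEN g % 50 = 0
            THEN 'ABCDE' || lpad(g::text, 7, '0')
            ELSE NULL
       END
  FROM generate_series(1, 1000000) AS g;

Comparing the insert timing of this against an otherwise identical column filled with random values of the same width would show whether the shared prefix is actually the culprit.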
[ { "msg_contents": "hello performance minded administrators:\n\nWe have recently converted a number of routines that walk a bill of\nmaterials (which is a nested structure) from the application side to the\nserver side via recursive plpgsql functions. The performance is\nabsolutely fantastic but I have to maintain a specialized 'walker' for\neach specific task that I have to do. It would be very nice and elegant\nif I could pass in the function for the walker to execute while it is\niterating through the bill of materials. I have been beating my head\nagainst the wall for the best way to do this so here I am shopping for\nideas.\n\nA simplified idealized version of what I would like to do is \n\tbegin\n select (callback_routine)(record_type)\n\tend;\n\nfrom within a plpgsql function. I am borrowing the C syntax for a\nfunction pointer here. The problem I am running into is the only way to\ndo callbacks is via dynamic sql...however you can use higher level types\nsuch as row/record type in dynamic sql (at least not efficiently). I\ncould of course make a full dynamic sql call by expanding the record\ntype into a large parameter list but this is unwieldy and brittle.\n\nAny thoughts?\n\nMerlin\n", "msg_date": "Tue, 1 Nov 2005 14:00:00 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "improvise callbacks in plpgsql" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> A simplified idealized version of what I would like to do is \n> \tbegin\n> select (callback_routine)(record_type)\n> \tend;\n> from within a plpgsql function. I am borrowing the C syntax for a\n> function pointer here.\n\nWell, there's no function pointer type in SQL :-(. I don't see any way\nto do what you want in pure plpgsql. If you're willing to implement an\nauxiliary C function you could probably make it go:\n\n\tcreate function callit(oid, record) returns void ...\n\nwhere the OID has to be the OID of a function taking a record-type\nargument. The regprocedure pseudotype would allow you not to need\nto write any numeric OIDs in your code:\n\n\tselect callit('myfunc(record)'::regprocedure, recordvar);\n\nThe body of callit() need be little more than OidFunctionCall1()\nplus whatever error checking and security checking you want to\ninclude.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Nov 2005 16:29:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvise callbacks in plpgsql " } ]
[ { "msg_contents": "Geoffrey wrote:\n> We are going live with a application in a few months that is a\ncomplete\n> rewrite of an existing application. We are moving from an existing\n> proprietary database to Postgresql. We are looking for some\n> insight/suggestions as to how folks test Postgresql in such a\nsituation.\n\nShouldn't you run your tests *before* rewriting your application? :).\nYou don't have to answer that.\n\n> We're also trying to decide whether a single database with multiple\n> schemas or multiple databases are the best solution. We've done some\n> research on this through the archives, and the answer seems to depend\non\n> the database/application design. Still, we welcome any generic ideas\non\n> this issue as well.\n\nI can help a little bit here. Yes, this decision will be heavily\ninfluenced by application design. Let's assume you have to keep\nmultiple identical table sets (suppose you have multiple companies on\nthe same server for example). Here are some general stipulations:\n\nReasons to use schemas:\n* If you have a requirement where data must be queried from multiple\ndata stores at the same time, or between various data stores and a\nshared area, this argues for schemas. While it is possible to do this\nwithout schemas via dblink, which is the postgresql inter-database rpc,\nperformance can be an issue and there is some overhead of setting it up.\n\n* If you need to swap out data stores on the fly without reconnecting,\nthen this argues strongly in favor of schemas. With schemas, you can\nmanipulate which datastore you are using by simply manipulating the\nsearch_path. There is one big caveat to this: your non dynamic pl/pgsql\nfunctions will stick to the tables they use following the first time you\nrun them like suction cups. Worse, your sql functions will stick to the\ntables they refer to when compiled, making non-immutable sql functions a\nno-no in a multi-schema environment. However, there is a clever\nworkaround to this by force recompiling you pl/pgsql functions (search\nthe recent archives on this list).\n\n* Finally, since multiple schemas can share a common public area, this\nmeans that if you have to deploy database features that apply to all of\nyour datastores, you can sometimes get away with sticking them in a\npublic area of the databse...server side utility functions are an\nexample of this.\n\nReasons to use databases:\n* Certain third party tools may have trouble with schemas.\n\n* Manipulating the search path can be error prone and relatively\ntedious.\n\n* Database are more fully separate. I run multi schema, and I make\nheavy use of the userlock contrib module. This means I have to take\nspecial care not to have inter-schema overlap of my lock identifier.\nThere are other cases where this might bite you, for example if you\nwanted one data store to respond to notifications but not another.\nThese are solvable problems, but can be a headache.\n\nIn short, there are pros and cons either way. If it's any help, the\nservers I administrate, which have *really complex* data interdependency\nand isolation requirements, use schemas for the extra flexibility.\n\nMerlin\n", "msg_date": "Tue, 1 Nov 2005 14:39:38 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: solutions for new Postgresql application testing" }, { "msg_contents": "Merlin Moncure wrote:\n> Geoffrey wrote:\n> \n>>We are going live with a application in a few months that is a complete\n>>rewrite of an existing application. 
We are moving from an existing\n>>proprietary database to Postgresql. We are looking for some\n>>insight/suggestions as to how folks test Postgresql in such a situation.\n> \n> Shouldn't you run your tests *before* rewriting your application? :).\n> You don't have to answer that.\n\nThe logic has been proven. What we want to really test is loading and \nthe remote possibility that the compiler built code based on what we \nwrote, rather then what we thought. :)\n\n>>We're also trying to decide whether a single database with multiple\n>>schemas or multiple databases are the best solution. We've done some\n>>research on this through the archives, and the answer seems to depend on\n>>the database/application design. Still, we welcome any generic ideas\n>>on this issue as well.\n> \n> I can help a little bit here. Yes, this decision will be heavily\n> influenced by application design. Let's assume you have to keep\n> multiple identical table sets (suppose you have multiple companies on\n> the same server for example). Here are some general stipulations:\n\n<great feedback snipped>\n\nThanks muchly for your insights. Just the kind of info we're looking \nfor. Now if I could only find that mind reading compiler.\n\nWe lean towards multiple databases when thinking about the possible need \nto bring down a single database without affecting the others. We do \nrequire access to multiple datastores, but that is relatively easily \ndone with either schemas or databases with perl and C, which are our \ntools of choice. These databases are pretty much identical in design, \nsimply for different 'parts' of the business.\n\nAny further thoughts are, of course, still welcome.\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Tue, 01 Nov 2005 18:12:36 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: solutions for new Postgresql application testing" } ]
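A tiny illustration of the search_path switching Merlin describes, which is the main flexibility win of the schema approach. Schema and table names here are hypothetical:

CREATE SCHEMA company_a;
CREATE SCHEMA company_b;
CREATE TABLE company_a.invoice (id serial PRIMARY KEY, total numeric);
CREATE TABLE company_b.invoice (id serial PRIMARY KEY, total numeric);

SET search_path TO company_a, public;
SELECT count(*) FROM invoice;      -- resolves to company_a.invoice

SET search_path TO company_b, public;
SELECT count(*) FROM invoice;      -- same query text, now company_b.invoice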
[ { "msg_contents": "> The body of callit() need be little more than OidFunctionCall1()\n> plus whatever error checking and security checking you want to\n> include.\n\nesp=# create table test(f text);\nCREATE TABLE\n\nesp=# create function test() returns void as \n$$ \n\tbegin \n\t\tinsert into test values ('called'); \n\tend; \n$$ language plpgsql;\n\nesp=# create or replace function test2() returns void as\nesp-# $$\nesp$# declare\nesp$# r record;\nesp$# begin\nesp$# select into r 'abc';\nesp$# perform callit('test()'::regprocedure, r);\nesp$# end;\nesp$#\nesp$# $$ language plpgsql;\nCREATE FUNCTION\n\nesp=# select test2();\n\nesp=# select * from test;\n f\n--------\n called\n(1 row)\n\none word...\nw00t\n\nMerlin\n", "msg_date": "Tue, 1 Nov 2005 17:13:48 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improvise callbacks in plpgsql " }, { "msg_contents": "Would you be willing to write up an example of this? We often get asked\nabout support for WITH, so I bet there's other people who would be very\ninterested in what you've got.\n\nOn Tue, Nov 01, 2005 at 05:13:48PM -0500, Merlin Moncure wrote:\n> > The body of callit() need be little more than OidFunctionCall1()\n> > plus whatever error checking and security checking you want to\n> > include.\n> \n> esp=# create table test(f text);\n> CREATE TABLE\n> \n> esp=# create function test() returns void as \n> $$ \n> \tbegin \n> \t\tinsert into test values ('called'); \n> \tend; \n> $$ language plpgsql;\n> \n> esp=# create or replace function test2() returns void as\n> esp-# $$\n> esp$# declare\n> esp$# r record;\n> esp$# begin\n> esp$# select into r 'abc';\n> esp$# perform callit('test()'::regprocedure, r);\n> esp$# end;\n> esp$#\n> esp$# $$ language plpgsql;\n> CREATE FUNCTION\n> \n> esp=# select test2();\n> \n> esp=# select * from test;\n> f\n> --------\n> called\n> (1 row)\n> \n> one word...\n> w00t\n> \n> Merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 1 Nov 2005 17:04:40 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvise callbacks in plpgsql" } ]
[ { "msg_contents": "I have a client that is testing an internal data platform, and they\nwere happy with PostgreSQL until they tried to join views - at that\ntime they discovered PostgreSQL was not using the indexes, and the\nqueries took 24 hours to execute as a result.\n\nIs this a known issue, or is this possibly a site-specific problem?\n\nThey just implemented the exact same datamodel in MySQL 5.0, with\nviews and InnoDB tables, and performance is still subsecond.\n\nWould love to know if this is a known issue.\n\n-- Mitch\n", "msg_date": "Tue, 1 Nov 2005 18:16:59 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": true, "msg_subject": "Joining views disables indexes?" }, { "msg_contents": "On Tue, Nov 01, 2005 at 06:16:59PM -0500, Mitch Pirtle wrote:\n> I have a client that is testing an internal data platform, and they\n> were happy with PostgreSQL until they tried to join views - at that\n> time they discovered PostgreSQL was not using the indexes, and the\n> queries took 24 hours to execute as a result.\n> \n> Is this a known issue, or is this possibly a site-specific problem?\n> \n> They just implemented the exact same datamodel in MySQL 5.0, with\n> views and InnoDB tables, and performance is still subsecond.\n> \n> Would love to know if this is a known issue.\n\nViews simply get expanded to a full query, so the views have nothing to\ndo with it.\n\nMake sure that they've run analyze on the entire database. Upping\ndefault_statistics_target to 100 is probably a good idea as well.\n\nIf that doesn't work, get an explain analyze of the query and post it\nhere. You can try posting just an explain, but that's much less useful.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 1 Nov 2005 17:23:14 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joining views disables indexes?" }, { "msg_contents": "On Tue, Nov 01, 2005 at 06:16:59PM -0500, Mitch Pirtle wrote:\n> I have a client that is testing an internal data platform, and they\n> were happy with PostgreSQL until they tried to join views - at that\n> time they discovered PostgreSQL was not using the indexes, and the\n> queries took 24 hours to execute as a result.\n> \n> Is this a known issue, or is this possibly a site-specific problem?\n\nThis is way too general to give a good solution. In general, PostgreSQL\nshould have no problem using indexes on joins (in versions before 8.0, there\nwas a problem using indexes on joins of differing data types, though).\nThis does of course assume that its statistics are good; I assume you've\ndoing ANALYZE on the database after loading the database?\n\nWhat you want to do is to post your table definitions and EXPLAIN ANALYZE\noutput of a slow query; that could be difficult if it takes 24 hours, though,\nso you might try a slightly quicker query for starters.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 2 Nov 2005 00:28:16 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joining views disables indexes?" 
}, { "msg_contents": "Mitch Pirtle <[email protected]> writes:\n> I have a client that is testing an internal data platform, and they\n> were happy with PostgreSQL until they tried to join views - at that\n> time they discovered PostgreSQL was not using the indexes, and the\n> queries took 24 hours to execute as a result.\n\nYou'll need to provide some actual details if you want useful comments.\nLet's see the table schemas, the view definitions, and the EXPLAIN plan\n(I'll spare you a request for EXPLAIN ANALYZE given that it'd take 24\nhours to get ;-) ... although some estimate of the number of rows\nexpected would be helpful). And I trust they remembered to ANALYZE the\nunderlying tables first? Also, which PG version exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Nov 2005 18:28:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joining views disables indexes? " } ]
[ { "msg_contents": "> I've done the tests with rc1. This is still as slow on windows ...\nabout\n> 6-10\n> times slower thant linux (via Ip socket). (depending on using prepared\n> queries, etc...)\n> \n> By the way, we've tried to insert into the windows database from a\nlinux\n> psql\n> client, via the network. In this configuration, inserting is only\nabout 2\n> times slower than inserting locally (the linux client had a slower CPU\n> 1700Mhz agains 3000).\n> Could it be related to a problem in the windows psql client ?\n\n[OK, I'm bringing this back on-list, and bringing it to QingQing's\nattention, who I secretly hope is the right person to be looking at this\nproblem :)]\n\nJust to recap Marc and I have been looking at the performance disparity\nbetween windows and linux for a single transaction statement by\nstatement insert on a very narrow table with no keys from a remote\nclient. Marc's observations showed (and I verified) that windows is\nmuch slower in this case than it should be. I gprof'ed both the psql\nclient and the server during the insert and didn't see anything\nseriously out of order...unfortunately QQ's latest win32 performance\ntweaks haven't helped.\n\nMarc's observation that by switching to a linux client drops time down\ndrastically is really intersing!\n\nMerlin\n\n\n", "msg_date": "Wed, 2 Nov 2005 08:36:57 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\nOn Wed, 2 Nov 2005, Merlin Moncure wrote:\n\n> >\n> > By the way, we've tried to insert into the windows database from a\n> > linux psql client, via the network. In this configuration, inserting\n> > is only about 2 times slower than inserting locally (the linux client\n> > had a slower CPU 1700Mhz agains 3000). Could it be related to a\n> > problem in the windows psql client ?\n> >\n\nIf you put client/server on the same machine, then we don't know how the\nCPU is splitted. Can you take a look at the approximate number by\nobserving the task manager data while running?\n\nIf communication code is the suspect, can we measure the difference if we\ndisable the redefinition of recv()/send() etc in port/win32.h (may require\nchange related code a little bit as well). In this way, the socket will\nnot be able to pickup signals, but let see if there is any performance\ndifference first.\n\nRegards,\nQingqing\n\n\n>\n> [OK, I'm bringing this back on-list, and bringing it to QingQing's\n> attention, who I secretly hope is the right person to be looking at this\n> problem :)]\n>\nP.s. You scared me ;-)\n", "msg_date": "Thu, 3 Nov 2005 02:11:43 -0500 (EST)", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> Would you be willing to write up an example of this? We often get\nasked\n> about support for WITH, so I bet there's other people who would be\nvery\n> interested in what you've got.\n\nSure. In fact, I had already decided this to be the next topic on my\nblog. I'm assuming you are asking about tools to deal with recursive\nsets in postgresql. A plpgsql solution is extremely fast, tight, and\neasy if you do it right...Tom's latest suggestions (I have to flesh this\nout some more) provide the missing piece puzzle to make it really tight\nfrom a classic programming perspective. I don't miss the recursive\nquery syntax at all...IMO it's pretty much a hack anyways (to SQL). \n\nMerlin\n\n\n\n", "msg_date": "Wed, 2 Nov 2005 08:45:02 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improvise callbacks in plpgsql" }, { "msg_contents": "Can we get a link to this posted somewhere? I guess on techdocs?\n\nOn Wed, Nov 02, 2005 at 08:45:02AM -0500, Merlin Moncure wrote:\n> > Would you be willing to write up an example of this? We often get\n> asked\n> > about support for WITH, so I bet there's other people who would be\n> very\n> > interested in what you've got.\n> \n> Sure. In fact, I had already decided this to be the next topic on my\n> blog. I'm assuming you are asking about tools to deal with recursive\n> sets in postgresql. A plpgsql solution is extremely fast, tight, and\n> easy if you do it right...Tom's latest suggestions (I have to flesh this\n> out some more) provide the missing piece puzzle to make it really tight\n> from a classic programming perspective. I don't miss the recursive\n> query syntax at all...IMO it's pretty much a hack anyways (to SQL). \n> \n> Merlin\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 2 Nov 2005 17:44:30 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] improvise callbacks in plpgsql" }, { "msg_contents": "On Wed, Nov 02, 2005 at 05:44:30PM -0600, Jim C. Nasby wrote:\n> Can we get a link to this posted somewhere? I guess on techdocs?\n> \n> On Wed, Nov 02, 2005 at 08:45:02AM -0500, Merlin Moncure wrote:\n> > > Would you be willing to write up an example of this? We often get\n> > asked\n> > > about support for WITH, so I bet there's other people who would be\n> > very\n> > > interested in what you've got.\n> > \n> > Sure. In fact, I had already decided this to be the next topic on\n> > my blog. I'm assuming you are asking about tools to deal with\n> > recursive sets in postgresql. A plpgsql solution is extremely\n> > fast, tight, and easy if you do it right...Tom's latest\n> > suggestions (I have to flesh this out some more) provide the\n> > missing piece puzzle to make it really tight from a classic\n> > programming perspective. I don't miss the recursive query syntax\n> > at all...IMO it's pretty much a hack anyways (to SQL). \n\nThis might be worth putting in the docs somewhere. 
Tutorial?\n\nCheers,\nD\n-- \nDavid Fetter [email protected] http://fetter.org/\nphone: +1 510 893 6100 mobile: +1 415 235 3778\n\nRemember to vote!\n", "msg_date": "Wed, 2 Nov 2005 18:52:33 -0800", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] improvise callbacks in plpgsql" }, { "msg_contents": "On 11/2/05, David Fetter <[email protected]> wrote:\n> On Wed, Nov 02, 2005 at 05:44:30PM -0600, Jim C. Nasby wrote:\n> > Can we get a link to this posted somewhere? I guess on techdocs?\n> >\n> > On Wed, Nov 02, 2005 at 08:45:02AM -0500, Merlin Moncure wrote:\n> > > > Would you be willing to write up an example of this? We often get\n> > > asked\n> > > > about support for WITH, so I bet there's other people who would be\n> > > very\n> > > > interested in what you've got.\n> > >\n> > > Sure. In fact, I had already decided this to be the next topic on\n> > > my blog. I'm assuming you are asking about tools to deal with\n> > > recursive sets in postgresql. A plpgsql solution is extremely\n> > > fast, tight, and easy if you do it right...Tom's latest\n> > > suggestions (I have to flesh this out some more) provide the\n> > > missing piece puzzle to make it really tight from a classic\n> > > programming perspective. I don't miss the recursive query syntax\n> > > at all...IMO it's pretty much a hack anyways (to SQL).\n>\n> This might be worth putting in the docs somewhere. Tutorial?\n\nyou guys can do anything you like with it...\n\nI'm working on part two which will build on the previous example and\nshow how to pass in a function to use as a callback, kind of like a\nfunctor.\n\nbtw, the blog examples are a reduction of my own personal code which\nwent through a vast simplification process. I need to test it a bt\nbefore it hits doc quality, there might be some errors lurking there.\n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 09:01:52 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] improvise callbacks in plpgsql" } ]
[ { "msg_contents": "> > I've done the tests with rc1. This is still as slow on windows ...\n> about\n> > 6-10\n> > times slower thant linux (via Ip socket). (depending on \n> using prepared \n> > queries, etc...)\n> > \n> > By the way, we've tried to insert into the windows database from a\n> linux\n> > psql\n> > client, via the network. In this configuration, inserting is only\n> about 2\n> > times slower than inserting locally (the linux client had a \n> slower CPU \n> > 1700Mhz agains 3000).\n> > Could it be related to a problem in the windows psql client ?\n> \n> [OK, I'm bringing this back on-list, and bringing it to \n> QingQing's attention, who I secretly hope is the right person \n> to be looking at this problem :)]\n> \n> Just to recap Marc and I have been looking at the performance \n> disparity between windows and linux for a single transaction \n> statement by statement insert on a very narrow table with no \n> keys from a remote client. Marc's observations showed (and I \n> verified) that windows is much slower in this case than it \n> should be. I gprof'ed both the psql client and the server \n> during the insert and didn't see anything seriously out of \n> order...unfortunately QQ's latest win32 performance tweaks \n> haven't helped.\n> \n> Marc's observation that by switching to a linux client drops \n> time down drastically is really intersing!\n\nCould this be a case of the network being slow, as we've seen a couple\nof times before? And if you run psql on the local box, you get it\ndouble.\n\nDo you get a speed difference between the local windows box and a remote\nwnidows box?\n\n//Magnus\n", "msg_date": "Wed, 2 Nov 2005 14:54:14 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "Le Mercredi 02 Novembre 2005 14:54, Magnus Hagander a écrit :\n> > > I've done the tests with rc1. This is still as slow on windows ...\n> >\n> > about\n> >\n> > > 6-10\n> > > times slower thant linux (via Ip socket). (depending on\n> >\n> > using prepared\n> >\n> > > queries, etc...)\n> > >\n> > > By the way, we've tried to insert into the windows database from a\n> >\n> > linux\n> >\n> > > psql\n> > > client, via the network. In this configuration, inserting is only\n> >\n> > about 2\n> >\n> > > times slower than inserting locally (the linux client had a\n> >\n> > slower CPU\n> >\n> > > 1700Mhz agains 3000).\n> > > Could it be related to a problem in the windows psql client ?\n> >\n> > [OK, I'm bringing this back on-list, and bringing it to\n> > QingQing's attention, who I secretly hope is the right person\n> > to be looking at this problem :)]\n> >\n> > Just to recap Marc and I have been looking at the performance\n> > disparity between windows and linux for a single transaction\n> > statement by statement insert on a very narrow table with no\n> > keys from a remote client. Marc's observations showed (and I\n> > verified) that windows is much slower in this case than it\n> > should be. I gprof'ed both the psql client and the server\n> > during the insert and didn't see anything seriously out of\n> > order...unfortunately QQ's latest win32 performance tweaks\n> > haven't helped.\n> >\n> > Marc's observation that by switching to a linux client drops\n> > time down drastically is really intersing!\n>\n> Could this be a case of the network being slow, as we've seen a couple\n> of times before? 
And if you run psql on the local box, you get it\n> double.\n>\n> Do you get a speed difference between the local windows box and a remote\n> wnidows box?\n>\n> //Magnus\nThe Windows-Windows test is local (via loopback interface)\nThe Linux (client) - Windows (server) is via network (100Mbits)\n\nI can't test with 2 windows box ... I haven't got that much (all machines \nlinux, except the test one...)\n", "msg_date": "Wed, 2 Nov 2005 15:47:28 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n>> Marc's observation that by switching to a linux client drops \n>> time down drastically is really intersing!\n\n> Could this be a case of the network being slow,\n\nI'm wondering about nonstandard junk lurking in the TCP stack of the\nWindows client machine. Also, I seem to recall something about a \"QOS\npatch\" that people are supposed to apply, or not apply as the case may\nbe, to get Windows' TCP stack to behave reasonably ... ring a bell?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Nov 2005 10:11:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "There any performance differences between a SQL function written in SQL\nlanguage or PL/psSQL language? For example:\n\nCreate or replace function sp_getfreq(\n\tVar1 integer\n) returns Boolean as\n$$\nDeclare\n\tMyval Boolean;\nBegin\n\tSelect var1 in (select var3 from table1) into myval;\n\tReturn myval;\nEnd;\n$$\nLanguage ‘plpgsql’ stable; \n\nAnd with:\n\nCreate or replace function sp_getfreq(\n\tVar1 integer\n) returns boolean as\n$$\nSelect $1 in (select var3 from table1);\n$$\nLanguage ‘sql’ stable;\n\n\nI know the function is really simple, but in theory which of the three would\nrun faster?\n\n\n", "msg_date": "Wed, 2 Nov 2005 09:32:19 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference between sql and pgsql function..." } ]
[ { "msg_contents": "> Would you be willing to write up an example of this? We often get\nasked\n> about support for WITH, so I bet there's other people who would be\nvery\n> interested in what you've got.\n> \nYou can see my blog on the subject here:\nhttp://www.postgresql.org/docs/8.0/interactive/plpgsql.html#PLPGSQL-ADVA\nNTAGES\n\n\nIt doesn't touch the callback issue. I'm going to hit that at a later\ndate, a review would be helpful!\n\nMerlin\n", "msg_date": "Wed, 2 Nov 2005 11:47:48 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improvise callbacks in plpgsql" } ]
[ { "msg_contents": "oops. my blog is here: :-)\nhttp://people.planetpostgresql.org/merlin/\n\n>\nhttp://www.postgresql.org/docs/8.0/interactive/plpgsql.html#PLPGSQL-ADVA\n> NTAGES\n", "msg_date": "Wed, 2 Nov 2005 12:04:35 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improvise callbacks in plpgsql" } ]
[ { "msg_contents": "I want to do statement level triggers for performance, but it seems \nthere is no 'updated', 'inserted', or 'deleted' tables inside the \ntrigger and nothing I can find in the documentation that offers similar \nfunctionality.\n\nIs there any way that I can access only those rows that were changed?\n\nThanks\nRalph\n\n", "msg_date": "Thu, 03 Nov 2005 12:34:14 +1300", "msg_from": "Ralph Mason <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger Rowsets" }, { "msg_contents": "On Thu, Nov 03, 2005 at 12:34:14PM +1300, Ralph Mason wrote:\n> I want to do statement level triggers for performance, but it seems \n> there is no 'updated', 'inserted', or 'deleted' tables inside the \n> trigger and nothing I can find in the documentation that offers similar \n> functionality.\n> \n> Is there any way that I can access only those rows that were changed?\n\nNo. The only way you can do this is with row-level triggers. There's\nalso not currently any plans to allow statement-level triggers to\ninteract with the data that was modified by the statement.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 2 Nov 2005 17:50:12 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger Rowsets" } ]
[ { "msg_contents": "Using PostgreSQL 8.0.4.\n\nI've got a table with 4.5 million rows that I expect to become huge \n(hundred million? who knows). Each row has a start and end time. I \nwant to retrieve all the events during a timespan in one list; \ntypical timespans will involve up to a couple rows. If the start and \nstop happen at the same time (which is possible), the start must come \nfirst in my sequence. So essentially, I want this:\n\n select when_stopped as when_happened,\n 1 as order_hint\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n union all\n select when_stopped as when_happened,\n 2 as order_hint\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n order by when_happened, order_hint;\n\nI'd really like the first row to be retrieved in O(1) time and the \nlast in O(n) time (n = number of rows in the timespan, not the whole \ntable). I previously was doing things manually with flat files. But \nthere's a sort in PostgreSQL's plan, so I think I'm getting O(n log \nn) time for both. It's frustrating to start using a real database and \nget performance regressions.\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------------------------\nSort (cost=667469.90..676207.19 rows=3494916 width=8) (actual \ntime=28503.612..31377.254 rows=3364006 loops=1)\n Sort Key: when_happened, order_hint\n -> Append (cost=0.00..194524.95 rows=3494916 width=8) (actual \ntime=0.191..14659.712 rows=3364006 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.190..5375.925 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.186..2962.585 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp \nwithout time zone <= when_stopped) AND (when_stopped <= '2005-10-26 \n10:00:00'::timestamp without time zone))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.167..5449.151 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.163..3026.730 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp \nwithout time zone <= when_stopped) AND (when_stopped <= '2005-10-26 \n10:00:00'::timestamp without time zone))\nTotal runtime: 33312.814 ms\n(10 rows)\n\nEach side of the union is retrieved in sorted order, but it doesn't \nseem to realize that. There seem to be two things it's missing:\n\n(1) It doesn't recognize that constant expressions are irrelevant to \nthe sort. 
I.e., the first half of the union:\n\n select when_started as when_happened,\n 1 as order_hint\n from transaction t\n where '2005-10-25 15:00:00' <= when_started\n and when_started <= '2005-10-26 10:00:00'\n order by when_happened, order_hint;\n\ndoes this:\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------------\nSort (cost=291770.42..296139.05 rows=1747453 width=8) (actual \ntime=8462.026..9895.715 rows=1681994 loops=1)\n Sort Key: when_started, 1\n -> Index Scan using transaction_started on \"transaction\" t \n(cost=0.00..79788.21 rows=1747453 width=8) (actual \ntime=0.190..2953.393 rows=1681994 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp without time \nzone <= when_started) AND (when_started <= '2005-10-26 \n10:00:00'::timestamp without time zone))\nTotal runtime: 10835.114 ms\n(5 rows)\n\nThe sort is unnecessary. If I take out the constant order_hint, it \nworks:\n\n select when_started as when_happened\n from transaction t\n where '2005-10-25 15:00:00' <= when_started\n and when_started <= '2005-10-26 10:00:00'\n order by when_happened;\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------\nIndex Scan using transaction_started on \"transaction\" t \n(cost=0.00..79788.21 rows=1747453 width=8) (actual \ntime=0.189..2715.513 rows=1681994 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone \n<= when_started) AND (when_started <= '2005-10-26 \n10:00:00'::timestamp without time zone))\nTotal runtime: 3630.817 ms\n(3 rows)\n\n(2) It doesn't recognize that each half of the union is sorted and \nthus they only need to be merged. 
This is true even without the \norder_hint bits:\n\n select when_stopped as when_happened\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n union all\n select when_stopped as when_happened\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n order by when_happened;\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------------------------\nSort (cost=667469.90..676207.19 rows=3494916 width=8) (actual \ntime=28088.783..30898.854 rows=3364006 loops=1)\n Sort Key: when_happened\n -> Append (cost=0.00..194524.95 rows=3494916 width=8) (actual \ntime=0.153..14410.485 rows=3364006 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.152..5287.092 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.149..2885.905 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp \nwithout time zone <= when_stopped) AND (when_stopped <= '2005-10-26 \n10:00:00'::timestamp without time zone))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.152..5254.425 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.149..2905.861 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 15:00:00'::timestamp \nwithout time zone <= when_stopped) AND (when_stopped <= '2005-10-26 \n10:00:00'::timestamp without time zone))\nTotal runtime: 32766.566 ms\n(10 rows)\n\nIs there some way I can work around this? The best I can think of now \nis to open two connections, one for each half of the union. I can do \nthe merge manually on the client side. It'd work, but I'd really \nprefer the database server take care of this for me.\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n", "msg_date": "Wed, 2 Nov 2005 21:13:11 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": true, "msg_subject": "Sorted union" }, { "msg_contents": "On 2 Nov 2005, at 21:13, Scott Lamb wrote:\n\n> I want to retrieve all the events during a timespan in one list; \n> typical timespans will involve up to a couple rows.\n\nErr, I meant up to a couple million rows. With two rows, I wouldn't \nbe so concerned about performance. ;)\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n", "msg_date": "Wed, 2 Nov 2005 21:27:02 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" } ]
[ { "msg_contents": "> select when_stopped as when_happened,\n> 1 as order_hint\n> from transaction t\n> where '2005-10-25 15:00:00' <= when_stopped\n> and when_stopped <= '2005-10-26 10:00:00'\n> union all\n> select when_stopped as when_happened,\n> 2 as order_hint\n> from transaction t\n> where '2005-10-25 15:00:00' <= when_stopped\n> and when_stopped <= '2005-10-26 10:00:00'\n> order by when_happened, order_hint;\n\nhmm, try pushing the union into a subquery...this is better style\nbecause it's kind of ambiguous if the ordering will apply before/after\nthe union.\n\nselect q.when from\n(\n select 1 as hint, start_time as when [...]\n union all\n select 2 as hint, end_time as when [...]\n) q order by q.seq, when\n\nquestion: why do you want to flatten the table...is it not easier to\nwork with as records?\n\nMerlin\n \n", "msg_date": "Thu, 3 Nov 2005 08:53:00 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" }, { "msg_contents": "Merlin Moncure wrote:\n> hmm, try pushing the union into a subquery...this is better style\n> because it's kind of ambiguous if the ordering will apply before/after\n> the union.\n\nSeems to be a little slower. There's a new \"subquery scan\" step.\n\n explain analyze\n select q.when_happened from (\n select when_stopped as when_happened,\n 1 as order_hint\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n union all\n select when_stopped as when_happened,\n 2 as order_hint\n from transaction t\n where '2005-10-25 15:00:00' <= when_stopped\n and when_stopped <= '2005-10-26 10:00:00'\n ) q order by when_happened, order_hint;\n\n \n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=713013.96..721751.25 rows=3494916 width=12) (actual \ntime=34392.264..37237.148 rows=3364006 loops=1)\n Sort Key: when_happened, order_hint\n -> Subquery Scan q (cost=0.00..229474.11 rows=3494916 width=12) \n(actual time=0.194..20283.452 rows=3364006 loops=1)\n -> Append (cost=0.00..194524.95 rows=3494916 width=8) \n(actual time=0.191..14967.632 rows=3364006 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.189..5535.139 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.186..3097.268 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 \n15:00:00'::timestamp without time zone <= when_stopped) AND \n(when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..97262.48 \nrows=1747458 width=8) (actual time=0.173..5625.155 rows=1682003 loops=1)\n -> Index Scan using transaction_stopped on \n\"transaction\" t (cost=0.00..79787.90 rows=1747458 width=8) (actual \ntime=0.169..3146.714 rows=1682003 loops=1)\n Index Cond: (('2005-10-25 \n15:00:00'::timestamp without time zone <= when_stopped) AND \n(when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))\n Total runtime: 39775.225 ms\n(11 rows)\n\n> question: why do you want to flatten the table...is it not easier to\n> work with as records?\n\nFor most things, yes. But I'm making a bunch of different graphs from \nthese data, and a few of them are much easier with events. The best \nexample is my concurrency graph. 
Whenever there's a start event, it goes \nup one. Whenever there's a stop event, it goes down one. It's completely \ntrivial once you have it separated into events.\n\nThanks,\nScott\n", "msg_date": "Thu, 03 Nov 2005 07:41:14 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted union" } ]
[ { "msg_contents": "Postgresql 8.0.4 using plpgsql\n\nThe basic function is set up as:\nCREATE FUNCTION add_data(t_row mytable) RETURNS VOID AS $func$\nDECLARE\n newtable text;\n thesql text;\nBEGIN\n INSERT INTO newtable thename from mytable where lookup.id =\nt_row.id;\n thesql := 'INSERT INTO ' || newtable || VALUES (' || t_row.* ')';\n EXECUTE thesql;\n RETURN;\nEND;\n$func$ LANGUAGE plpgsql VOLATILE;\n\nSELECT add_data(t.*) FROM mytable t where ....\nERROR: column \"*\" not found in data type mytable\n\nNow I have tried to drop the * but then there is no concatenation\nfunction to join text to a table%ROWTYPE. So my question is how can I\nmake this dynamic insert statement without listing out every\nt_row.colname? Or, alternatively, is there a better way to parse out\neach row of a table into subtables based on a column value?\n\nSven\n\n\n", "msg_date": "Thu, 03 Nov 2005 10:14:15 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Function with table%ROWTYPE globbing" } ]
[ { "msg_contents": "> Postgresql 8.0.4 using plpgsql\n> \n> The basic function is set up as:\n> CREATE FUNCTION add_data(t_row mytable) RETURNS VOID AS $func$\n> DECLARE\n> newtable text;\n> thesql text;\n> BEGIN\n> INSERT INTO newtable thename from mytable where lookup.id =\n> t_row.id;\n> thesql := 'INSERT INTO ' || newtable || VALUES (' || t_row.* ')';\n> EXECUTE thesql;\n> RETURN;\n> END;\n> $func$ LANGUAGE plpgsql VOLATILE;\n> \n> SELECT add_data(t.*) FROM mytable t where ....\n> ERROR: column \"*\" not found in data type mytable\n> \n> Now I have tried to drop the * but then there is no concatenation\n> function to join text to a table%ROWTYPE. So my question is how can I\n> make this dynamic insert statement without listing out every\n> t_row.colname? Or, alternatively, is there a better way to parse out\n> each row of a table into subtables based on a column value?\n\nI don't think it's possible. Rowtypes, etc are not first class yet (on\nto do). What I would do is pass the table name, where clause, etc into\nthe add_data function and rewrite as insert...select and do the whole\nthing in one operation.\n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 10:37:16 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Function with table%ROWTYPE globbing" } ]
[ { "msg_contents": "\n> Merlin Moncure wrote:\n> > hmm, try pushing the union into a subquery...this is better style\n> > because it's kind of ambiguous if the ordering will apply\nbefore/after\n> > the union.\n> \n> Seems to be a little slower. There's a new \"subquery scan\" step.\n\nI figured. However it's more correct, I'm not sure if the original\nquery is necessarily guaranteed to give the right answer (in terms of\nordering). It might though.\n\n> \n> > question: why do you want to flatten the table...is it not easier to\n> > work with as records?\n> \n> For most things, yes. But I'm making a bunch of different graphs from\n> these data, and a few of them are much easier with events. The best\n> example is my concurrency graph. Whenever there's a start event, it\ngoes\n> up one. Whenever there's a stop event, it goes down one. It's\ncompletely\n> trivial once you have it separated into events.\n\nwell, if you don't mind attempting things that are not trivial, how\nabout trying: \n\nselect t, (select count(*) from transaction where t between happened\nand when_stopped) from\n(\n select ((generate_series(1,60) * scale)::text::interval) + '12:00\npm'::time as t\n) q;\nfor example, to check concurrency at every second for a minute (starting\nfrom 1 second) after 12:00 pm, (scale is zero in this case),\n\nselect t, (select count(*) from transaction where t between happened\nand when_stopped) from\n(\n select (generate_series(1,60)::text::interval) + '12:00 pm'::time as\nt\n) q;\n\nthis could be a win depending on how much data you pull into your\nconcurrency graph. maybe not though. \n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 11:20:55 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" }, { "msg_contents": "On Nov 3, 2005, at 8:20 AM, Merlin Moncure wrote:\n> select t, (select count(*) from transaction where t between happened\n> and when_stopped) from\n> (\n> select ((generate_series(1,60) * scale)::text::interval) + '12:00\n> pm'::time as t\n> ) q;\n\nWow. I hadn't known about generate_series, but there are a bunch of \nplaces I've needed it.\n\nAs cool as this is, though, I don't think it helps me. There's \nanother event-driven graph that I need. For lack of a better name, I \ncall it the slot graph. Every single transaction is graphed as a \nhorizontal line from its start time to its end time, with a vertical \nline at the start and stop. Successful, timed out, and failed \ntransactions are green, black, and red, respectively. I use it in a \ncouple different ways:\n\n(1) on short timescales, it's nice to look at individual \ntransactions. My tester will max out at either a rate or a \nconcurrency. If I'm having problems, I'll get bursts of timeouts. \nThis graph is the one that makes it clear why - it shows how things \nalign, etc. Actually, even for longer timespans, this is still \nhelpful - it's nice to see that most of the slots are filled with \ntiming-out transactions when the rate falls.\n\n(2) It can show you if something affects all of the transactions at \nonce. When we did a database failover test, we saw a bunch of \nfailures (as expected; our application isn't responsible for \nretries). This graph is the one that showed us that _all_ \ntransactions that were active at a specific time failed and that no \nother transactions failed. (There was a sharp vertical line of reds \nand blacks in the larger block of greens).\n\nI wish I could just show these to you, rather than describing them. 
\nIt's all proprietary data, though. Maybe soon I'll have similar \ngraphs of my open source SSL proxy.\n\nBut the point is, I don't think I can represent this information \nwithout sending every data point to my application. I assign slots by \nthe start time and free them by the stop time.\n\nBut I think there is something I can do: I can just do a query of the \ntransaction table sorted by start time. My graph tool can keep a \npriority queue of all active transactions, keyed by the stop time. \nWhenever it grabs a new event, it can peek at the next start time but \ncheck if there are any stop times before it. Then at the end, it can \npick up the rest of the stop times. The concurrency will never exceed \na few thousand, so the additional CPU time and memory complexity are \nnot a problem. As a bonus, I will no longer need my index on the stop \ntime. Dropping it will save a lot of disk space.\n\nThanks for getting me off the \"I need a fast query that returns these \nexact results\" mindset. It is good to step back and look at the big \npicture.\n\nMind you, I still think PostgreSQL should be able to perform that \nsorted union fast. Maybe sometime I'll have enough free time to take \nmy first plunge into looking at a database query planner.\n\nRegards,\nScott\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n", "msg_date": "Thu, 3 Nov 2005 10:02:47 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted union" } ]
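For completeness, here is the sampling query sketched earlier in the thread written out against the start/stop column names used above; the sample interval and time range are arbitrary and the table name is taken from the earlier messages:

SELECT s.t,
       (SELECT count(*)
          FROM transaction
         WHERE when_started <= s.t
           AND s.t <= when_stopped) AS concurrency
  FROM (SELECT timestamp '2005-10-25 15:00:00'
               + g * interval '1 minute' AS t
          FROM generate_series(0, 60) AS g) AS s;

As discussed, this samples concurrency at fixed points rather than at every event, so it complements rather than replaces the sorted-by-start-time stream approach.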
[ { "msg_contents": "> On Wed, 2 Nov 2005, Merlin Moncure wrote:\n> If you put client/server on the same machine, then we don't know how\nthe\n> CPU is splitted. Can you take a look at the approximate number by\n> observing the task manager data while running?\n\nok, I generated a test case which was 250k inserts to simple two column\ntable all in single transaction. Every 50k inserts, time is recorded\nvia timeofday(). \n\nRunning from remote, Time progression is:\nFirst 50k: 20 sec\nSecond : 29 sec\n[...]\nfinal: : 66 sec\n\nso, clear upward progression of time/rec. Initial time is 2.5k\ninserts/sec which is decent but not great for such a narrow table. CPU\ntime on server starts around 50% and drops in exact proportion to insert\nperformance. My earlier gprof test also suggest there is no smoking gun\nsucking down all the cpu time.\n\ncpu time on the client is very volatile but with a clear increase over\ntime starting around 20 and ending perhaps 60. My client box is pretty\nquick, 3ghz p4.\n\nRunning the script locally, from the server, cpu time is pegged at 100%\nand stays...first 50k is 23 sec with a much worse decomposition to\nalmost three minutes for final 50k.\n\nMerlin\n\n\n\n \n> If communication code is the suspect, can we measure the difference if\nwe\n> disable the redefinition of recv()/send() etc in port/win32.h (may\nrequire\n> change related code a little bit as well). In this way, the socket\nwill\n> not be able to pickup signals, but let see if there is any performance\n> difference first.\n> \n> Regards,\n> Qingqing\n> \n> \n> >\n> > [OK, I'm bringing this back on-list, and bringing it to QingQing's\n> > attention, who I secretly hope is the right person to be looking at\nthis\n> > problem :)]\n> >\n> P.s. You scared me ;-)\n", "msg_date": "Thu, 3 Nov 2005 13:03:19 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\"\"Merlin Moncure\"\" <[email protected]> wrote\n>\n> Running from remote, Time progression is:\n> First 50k: 20 sec\n> Second : 29 sec\n> [...]\n> final: : 66 sec\n>\nThis may due to the maintainence cost of a big transaction, I am not sure \n... Tom?\n\n> so, clear upward progression of time/rec. Initial time is 2.5k\n> inserts/sec which is decent but not great for such a narrow table. CPU\n> time on server starts around 50% and drops in exact proportion to insert\n> performance. My earlier gprof test also suggest there is no smoking gun\n> sucking down all the cpu time.\n>\n\nNot to 100%, so this means the server is always starving. It is waiting on \nsomething -- of couse not lock. That's why I think there is some problem on \nnetwork communication. Another suspect will be the write - I knwo NTFS \nsystem will issue an internal log when extending a file. To remove the \nsecond suspect, I will try to hack the source to do a \"fake\" write ...\n\nRegards,\nQingqing \n\n\n", "msg_date": "Thu, 3 Nov 2005 16:46:52 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\"Qingqing Zhou\" <[email protected]> wrote\n>\n> Not to 100%, so this means the server is always starving. It is waiting on \n> something -- of couse not lock. That's why I think there is some problem \n> on network communication. Another suspect will be the write - I knwo NTFS \n> system will issue an internal log when extending a file. 
To remove the \n> second suspect, I will try to hack the source to do a \"fake\" write ...\n>\n\nTo patch:\n-------------------------\nHere is a quite straight hack to implement \"fake\" write for both relation \nand xlog. Now the server becomes pure CPU play.\n\n1. RelationGetBufferForTuple()/hio.c: remove line (if you do not enable \ncassert, then doesn't matter):\n- Assert(PageIsNew((PageHeader) pageHeader));\n\n2. ReadBuffer()/bufmgr.c: remove line\n- smgrextend(reln->rd_smgr, blockNum, (char *) bufBlock,\n- reln->rd_istemp);\n\n3. XLogWrite()/xlog.c\n errno = 0;\n+ goto fake;\n if (write(openLogFile, from, nbytes) != nbytes)\n {\n /* if write didn't set errno, assume no disk space */\n ...\n }\n+\n+ fake:\n /* Update state for write */\n\n\nTo use it:\n-------------------------\n1. have several copies of a correct data;\n\n2. patch the server;\n\n3. when you startup postmaster, use the following parameters:\npostmaster -c\"checkpoint_timeout=3600\" -c\"bgwriter_all_percent=0\" -Ddata\n\nNote now the database server is one-shoot usable -- after you shutdown, it \nwon't startup again. Just run\nbegin;\n many inserts;\nend;\n\nTo observe:\n-------------------------\n(1) In this case, what's the remote server CPU usage -- 100%? I don't have \nseveral machines to test it. In my single machine, I run 35000 insert \ncommands from psql by cut and paste into it and could observe that:\n---\n25% kernel time\n75% user time\n\n20% postgresql (--enable-debug --enable-cassert)\n65% psql (as same above)\n10% csrss (system process, manage graphics commands (not sure, just googled \nit), etc)\n5% system (system process)\n---\n\n(2) In this case, Linux still keeps almost 10 times faster?\n\nAfter this, we may need more observations like comparison of simple \"select \n1;\" to reduce the code space we may want to explore ...\n\nRegards,\nQingqing \n\n\n", "msg_date": "Thu, 3 Nov 2005 18:30:12 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> ok, I generated a test case which was 250k inserts to simple two column\n> table all in single transaction. Every 50k inserts, time is recorded\n> via timeofday(). \n\nYou mean something like the attached?\n\n> Running from remote, Time progression is:\n> First 50k: 20 sec\n> Second : 29 sec\n> [...]\n> final: : 66 sec\n\nOn Unix I get a dead flat line (within measurement noise), both local\nloopback and across my LAN.\n\nafter 50000 30.20 sec\nafter 100000 31.67 sec\nafter 150000 30.98 sec\nafter 200000 29.64 sec\nafter 250000 29.83 sec\n\n\"top\" shows nearly constant CPU usage over the run, too. 
With a local\nconnection it's pretty well pegged, with LAN connection the server's\nabout 20% idle and the client about 90% (client machine is much faster\nthan server which may affect this, but I'm too lazy to try it in the\nother direction).\n\nI think it's highly likely that you are looking at some strange behavior\nof the Windows TCP stack.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 03 Nov 2005 18:56:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\n\nOn Thu, 3 Nov 2005, Tom Lane wrote:\n\n>\n> On Unix I get a dead flat line (within measurement noise), both local\n> loopback and across my LAN.\n>\n> after 50000 30.20 sec\n> after 100000 31.67 sec\n> after 150000 30.98 sec\n> after 200000 29.64 sec\n> after 250000 29.83 sec\n>\n\nConfirmed in Linux. And on a winxp machine(sp2) with server, client\ntogether, with (see almost no performance difference) or without my \"fake\"\nwrite, the observation is still hold for both cases:\n\nafter 50000 25.21 sec\nafter 100000 26.26 sec\nafter 150000 25.23 sec\nafter 200000 26.25 sec\nafter 250000 26.58 sec\n\nIn both cases, postgres 67% cpu, psql 15~20%, rest: system process. Kernel\ntime is 40+% -- where from?\n\nRegards,\nQingqing\n", "msg_date": "Fri, 4 Nov 2005 02:29:49 -0500 (EST)", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
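Tom's attached test script is not included above, but the test being described can be approximated by generating a file of individual INSERT statements, with a timeofday() checkpoint every 50k rows, and replaying it through psql from the client machine in question. Table and file names here are invented:

CREATE TABLE ins_perf (a int, b text);

-- generate the replay file (tuples-only, unaligned output, redirected to a file)
\t
\a
\o /tmp/ins_perf.sql
SELECT 'INSERT INTO ins_perf VALUES (' || g || ', ''filler'');'
       || CASE WHEN g % 50000 = 0 THEN ' SELECT timeofday();' ELSE '' END
  FROM generate_series(1, 250000) AS g;
\o
\a
\t

-- then, from the client under test (wrap the file in BEGIN/COMMIT):
--   psql -h <server> -d <db> -f /tmp/ins_perf.sql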
[ { "msg_contents": "> Wow. I hadn't known about generate_series, but there are a bunch of\n> places I've needed it.\n\nIt's a wonder tool :).\n \n> But I think there is something I can do: I can just do a query of the\n> transaction table sorted by start time. My graph tool can keep a\n\nReading the previous paragraphs I was just about to suggest this. This\nis a much more elegant method...you are reaping the benefits of having\nnormalized your working set. You were trying to denormalize it back to\nwhat you were used to. Yes, now you can drop your index and simplify\nyour queries...normalized data is always more 'natural'.\n\n> Mind you, I still think PostgreSQL should be able to perform that\n> sorted union fast. Maybe sometime I'll have enough free time to take\n> my first plunge into looking at a database query planner.\n\nI'm not so sure I agree, by using union you were basically pulling two\nindependent sets (even if they were from the same table) that needed to\nbe ordered. There is zero chance of using the index here for ordering\nbecause you are ordering a different set than the one being indexed.\nHad I not been able to talk you out of de-normalizing your table I was\ngoing to suggest rigging up a materialized view and indexing that:\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 13:21:07 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" }, { "msg_contents": "On Nov 3, 2005, at 10:21 AM, Merlin Moncure wrote:\n> Reading the previous paragraphs I was just about to suggest this. \n> This\n> is a much more elegant method...you are reaping the benefits of having\n> normalized your working set. You were trying to denormalize it \n> back to\n> what you were used to. Yes, now you can drop your index and simplify\n> your queries...normalized data is always more 'natural'.\n\nI'm not sure normalized is the right word. In either case, I'm \nstoring it in the same form. In either case, my ConcurrencyProcessor \nclass gets the same form. The only difference is if the database \nsplits the rows or if my application does so.\n\nBut we're essentially agreed. This is the algorithm I'm going to try \nimplementing, and I think it will work out well. It also means \nsending about half as much data from the database to the application.\n\n>> Mind you, I still think PostgreSQL should be able to perform that\n>> sorted union fast. Maybe sometime I'll have enough free time to take\n>> my first plunge into looking at a database query planner.\n>\n> I'm not so sure I agree, by using union you were basically pulling two\n> independent sets (even if they were from the same table) that \n> needed to\n> be ordered.\n\nYes.\n\n> There is zero chance of using the index here for ordering\n> because you are ordering a different set than the one being indexed.\n\nI don't think that's true. It just needs to look at the idea of \nindependently ordering each element of the union and then merging \nthat, compared to the cost of grabbing the union and then ordering \nit. In this case, the former cost is about 0 - it already has \nindependently ordered them, and the merge algorithm is trivial. \n<http://en.wikipedia.org/wiki/Merge_algorithm>\n\nRegards,\nScott\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n", "msg_date": "Thu, 3 Nov 2005 10:49:50 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted union" } ]
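For reference, the single-pass alternative being agreed on above might look roughly like this; the table and column names are assumptions, since the real schema isn't shown in the thread.

CREATE TABLE transactions (
    transaction_id bigint,
    start_time     timestamptz,
    end_time       timestamptz
);

-- One ordered scan instead of a sorted union of start and end events; the
-- client keeps a small buffer of pending end_time values and merges them in
-- as it walks the result.
SELECT transaction_id, start_time, end_time
  FROM transactions
 ORDER BY start_time;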
[ { "msg_contents": "The ANSI/ISO specs are not at all ambiguous on this. An\nORDER BY is not allowed for the SELECT statements within\na UNION. It must come at the end and applied to the resulting\nUNION.\n\nSimilarly, the column names in the result come from the first\nquery in the UNION. Column names in the query on the right\nside of a UNION are immaterial.\n\nUnless we have reason to believe that PostgreSQL is\nnon-compliant on this point, I don't think it is a good idea to\nslow the query down with the subquery.\n\n-Kevin\n\n\n>>> \"Merlin Moncure\" <[email protected]> >>>\n\n> Merlin Moncure wrote:\n> > hmm, try pushing the union into a subquery...this is better style\n> > because it's kind of ambiguous if the ordering will apply\nbefore/after\n> > the union.\n> \n> Seems to be a little slower. There's a new \"subquery scan\" step.\n\nI figured. However it's more correct, I'm not sure if the original\nquery is necessarily guaranteed to give the right answer (in terms of\nordering). It might though.\n\n", "msg_date": "Thu, 03 Nov 2005 12:32:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" } ]
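A small self-contained illustration of the two rules cited above (the ORDER BY attaches to the whole UNION, and the output column names come from the first SELECT); the tables and data are throwaway examples.

CREATE TEMP TABLE t_start (start_time int);
CREATE TEMP TABLE t_end   (end_time   int);
INSERT INTO t_start VALUES (10);
INSERT INTO t_start VALUES (30);
INSERT INTO t_end   VALUES (20);
INSERT INTO t_end   VALUES (40);

SELECT start_time AS event_time FROM t_start   -- first SELECT names the output column
UNION ALL
SELECT end_time                 FROM t_end
ORDER BY event_time;                           -- sorts the combined result, not one branch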
[ { "msg_contents": "> The ANSI/ISO specs are not at all ambiguous on this. An\n> ORDER BY is not allowed for the SELECT statements within\n> a UNION. It must come at the end and applied to the resulting\n> UNION.\n\nInteresting :/ \n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 13:37:52 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" } ]
[ { "msg_contents": "Both win32 send/recv have pgwin32_poll_signals() in them. This is\nglorified WaitForSingleObjectEx on global pgwin32_signal_event. This is\nprobably part of the problem. Can we work some of the same magic you\nput into check interrupts macro?\n\nISTM everything else in win32 functions is either API call, or marginal\ncase.\n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 13:42:08 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> Both win32 send/recv have pgwin32_poll_signals() in them. This is\n> glorified WaitForSingleObjectEx on global pgwin32_signal_event. This\nis\n> probably part of the problem. Can we work some of the same magic you\nput\n> into check interrupts macro?\n\nWhoop! following a cvs update I see this is already nailed :) Back to\nthe drawing board...\n\nMerlin\n", "msg_date": "Thu, 3 Nov 2005 13:47:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> Both win32 send/recv have pgwin32_poll_signals() in them. \n> This is glorified WaitForSingleObjectEx on global \n> pgwin32_signal_event. This is probably part of the problem. \n> Can we work some of the same magic you put into check \n> interrupts macro?\n> \n> ISTM everything also in win32 functions is either API call, \n> or marginal case.\n\nUh, we already do that, don't we?\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/port/win32/\nsocket.c?rev=1.10\nhas:\n\nstatic int\npgwin32_poll_signals(void)\n{\n\tif (UNBLOCKED_SIGNAL_QUEUE())\n\t{\n\t\tpgwin32_dispatch_queued_signals();\n\t\terrno = EINTR;\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n\n\nAre you testing this on 8.0.x? Or a pre-RC version of 8.1?\n\n//Magnus\n", "msg_date": "Thu, 3 Nov 2005 20:04:01 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\nOn Thu, 3 Nov 2005, Magnus Hagander wrote:\n\n> > Both win32 send/recv have pgwin32_poll_signals() in them.\n> > This is glorified WaitForSingleObjectEx on global\n> > pgwin32_signal_event. This is probably part of the problem.\n> > Can we work some of the same magic you put into check\n> > interrupts macro?\n> >\n>\n> Uh, we already do that, don't we?\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/port/win32/\n> socket.c?rev=1.10\n> has:\n>\n\nYeah, we did this. I am thinking of just use simple mechanism of the win32\nsockets, which could not pick up signals, but I would like to see if there\nis any difference -- do you think there is any point to try this?\n\nRegards,\nQingqing\n", "msg_date": "Thu, 3 Nov 2005 15:25:42 -0500 (EST)", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "Just as an FYI, if you want to reassure yourself that the ORDER BY\nis being applied as intended, you could do the following:\n\n(\n select 1 as hint, start_time as when [...]\n union all\n select 2 as hint, end_time as when [...]\n) order by seq, when\n\nThis is ANSI/ISO standard, and works in PostgreSQL (based on\na quick test).\n\n\n>>> \"Merlin Moncure\" <[email protected]> >>>\n\nhmm, try pushing the union into a subquery...this is better style\nbecause it's kind of ambiguous if the ordering will apply before/after\nthe union.\n\nselect q.when from\n(\n select 1 as hint, start_time as when [...]\n union all\n select 2 as hint, end_time as when [...]\n) q order by q.seq, when\n\n", "msg_date": "Thu, 03 Nov 2005 13:11:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted union" } ]
[ { "msg_contents": "> > > Both win32 send/recv have pgwin32_poll_signals() in them.\n> > > This is glorified WaitForSingleObjectEx on global \n> > > pgwin32_signal_event. This is probably part of the problem.\n> > > Can we work some of the same magic you put into check interrupts \n> > > macro?\n> > >\n> >\n> > Uh, we already do that, don't we?\n> > \n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/port/win3\n> > 2/\n> > socket.c?rev=1.10\n> > has:\n> >\n> \n> Yeah, we did this. I am thinking of just use simple mechanism \n> of the win32 sockets, which could not pick up signals, but I \n> would like to see if there is any difference -- do you think \n> there is any point to try this?\n\nSorry, I don't follow you here - what do you mean to do? Remove the\nevent completely so we can't wait on it?\n\n//Magnus\n", "msg_date": "Thu, 3 Nov 2005 21:30:30 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\nOn Thu, 3 Nov 2005, Magnus Hagander wrote:\n\n>\n> Sorry, I don't follow you here - what do you mean to do? Remove the\n> event completely so we can't wait on it?\n>\n\nI'd like to use the win32 provided recv(), send() functions instead of\nredirect them to pgwin32_recv()/pgwin32_send(), just like libpq does. If\nwe do this, we will lose some functionalities, but I'd like to see the\nperformance difference first. -- do you think that will be any difference?\n\nRegards,\nQingqing\n", "msg_date": "Thu, 3 Nov 2005 15:34:55 -0500 (EST)", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> > Sorry, I don't follow you here - what do you mean to do? Remove the \n> > event completely so we can't wait on it?\n> >\n> \n> I'd like to use the win32 provided recv(), send() functions \n> instead of redirect them to pgwin32_recv()/pgwin32_send(), \n> just like libpq does. If we do this, we will lose some \n> functionalities, but I'd like to see the performance \n> difference first. -- do you think that will be any difference?\n\nDoesn't work, really. It will no longer be possible to send a signal to\nan idle backend. The idle backend will be blocking on recv(), that's how\nit works. So unless we can get around that somehow, it's a non-starter I\nthink.\n\nI doubt there will be much performance difference, as you hav eto hit\nthe kernel anyway (in the recv/send call). But that part is just a guess\n:-)\n\n\n//Magnus\n", "msg_date": "Thu, 3 Nov 2005 21:39:27 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\nOn Thu, 3 Nov 2005, Magnus Hagander wrote:\n\n> > > Sorry, I don't follow you here - what do you mean to do? Remove the\n> > > event completely so we can't wait on it?\n> > >\n> >\n> > I'd like to use the win32 provided recv(), send() functions\n> > instead of redirect them to pgwin32_recv()/pgwin32_send(),\n> > just like libpq does. If we do this, we will lose some\n> > functionalities, but I'd like to see the performance\n> > difference first. -- do you think that will be any difference?\n>\n> Doesn't work, really. It will no longer be possible to send a signal to\n> an idle backend. The idle backend will be blocking on recv(), that's how\n> it works. So unless we can get around that somehow, it's a non-starter I\n> think.\n\nYeah, agreed. An alternative is set tiemout like 100 ms or so. When\ntimeout happens, check the signals. But I guess you will be strongly\nagainst it.\n\n>\n> I doubt there will be much performance difference, as you hav eto hit\n> the kernel anyway (in the recv/send call). But that part is just a guess\n> :-)\n\nI know what you mean ... I will take a look -- if the patch (not\nincluding fix signaling problem), if doesn't change much, I will give it a\ntry.\n\nRegards,\nQingqing\n", "msg_date": "Thu, 3 Nov 2005 16:06:32 -0500 (EST)", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "\n\"\"Magnus Hagander\"\" <[email protected]> wrote\n>>\n>> I'd like to use the win32 provided recv(), send() functions\n>> instead of redirect them to pgwin32_recv()/pgwin32_send(),\n>> just like libpq does. If we do this, we will lose some\n>> functionalities, but I'd like to see the performance\n>> difference first. -- do you think that will be any difference?\n>\n> I doubt there will be much performance difference, as you hav eto hit\n> the kernel anyway (in the recv/send call). But that part is just a guess\n> :-)\n>\n\nOn a separate line -- I verified Magnus's doubt -- revert pgwin32_recv() to \nrecv() does not improve performance visiblly.\n\nRegards,\nQingqing \n\n\n", "msg_date": "Fri, 4 Nov 2005 17:17:21 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> > Sorry, I don't follow you here - what do you mean to do? Remove the\n> > event completely so we can't wait on it?\n> >\n> \n> I'd like to use the win32 provided recv(), send() functions instead of\n> redirect them to pgwin32_recv()/pgwin32_send(), just like libpq does.\nIf\n> we do this, we will lose some functionalities, but I'd like to see the\n> performance difference first. -- do you think that will be any\ndifference?\n\nI personally strongly doubt this will make a diffenrence. Anyways I\nthink we might be looking at the wrong place. Here was my test:\n1. drop/create table two fields (id int, f text) no keys\n2. begin\n3. insert 500k rows. every 50k get time get geometric growth in insert\ntime\n4. commit\n\nI am doing this via \ntype dump.sql | psql -q mydb\n\nI rearrange:\nevery 50k rows get time but also restart transaction. I would ex\n\nGuess what...no change. This was a shocker. So I wrap dump.sql with\nanother file that is just \n\\i dump.sql\n\\i dump.sql\n\nand get time to insert 50k recs resets after first dump...\n\nMerlin \n", "msg_date": "Thu, 3 Nov 2005 16:04:37 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> > Sorry, I don't follow you here - what do you mean to do? Remove the\n> > event completely so we can't wait on it?\n> >\n> \n> I'd like to use the win32 provided recv(), send() functions instead of\n> redirect them to pgwin32_recv()/pgwin32_send(), just like libpq does.\nIf\n> we do this, we will lose some functionalities, but I'd like to see the\n> performance difference first. -- do you think that will be any\ndifference?\n\nI personally strongly doubt this will make a difference. Anyways I\nthink we might be looking at the wrong place. Here was my test:\n1. drop/create table two fields (id int, f text) no keys\n2. begin\n3. insert 500k rows. every 50k get time; see geometric growth in insert\ntime\n4. commit\n\nI am doing this via \ntype dump.sql | psql -q mydb\n\nI rearrange:\nevery 50k rows get time but also restart transaction. I would expect the\nper-50k times to reset.\n\nGuess what...no change. This was a shocker. So I wrap dump.sql with\nanother file that is just \n\\i dump.sql\n\\i dump.sql\n\nand get time to insert 50k recs resets after first dump...\n\nMerlin \n", "msg_date": "Thu, 3 Nov 2005 16:04:37 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "I recently upgraded my DB from 7.4.3 to 8.0.4 and I've noticed the following\nerrors appearing in my serverlog:\n\n\n2005-11-03 05:56:57 CST 127.0.0.1(38858) ERROR: Unicode characters greater\nthan or equal to 0x10000 are not supported\n2005-11-03 06:04:09 CST 127.0.0.1(38954) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xe02d76\n2005-11-03 06:04:21 CST 127.0.0.1(38964) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xe02d76\n2005-11-03 06:11:35 CST 127.0.0.1(39072) ERROR: Unicode characters greater\nthan or equal to 0x10000 are not supported\n2005-11-03 06:23:23 CST 127.0.0.1(39657) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xd40d\n2005-11-03 08:10:02 CST 127.0.0.1(44073) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xe46973\n2005-11-03 08:21:13 CST 127.0.0.1(44711) ERROR: Unicode characters greater\nthan or equal to 0x10000 are not supported\n2005-11-03 08:26:36 CST 127.0.0.1(44745) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xc447\n2005-11-03 08:40:59 CST 127.0.0.1(45087) ERROR: invalid byte sequence for\nencoding \"UNICODE\": 0xdd20\n2005-11-03 09:14:52 CST 127.0.0.1(46009) ERROR: Unicode characters greater\nthan or equal to 0x10000 are not supported\n\nI never received these errors on when running 7.4.3. I used the default\nencodings on 7.4.3 and I tried chaning client_encoding from sql_ascii to\nUNICODE and I'm still seeing this. I'm storing in a text data type email\nthat contains other characterset characters.\n\nAny ideas on how to resolve this?\n\n-Don\n\n--\nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\nhttp://www.MailLaunder.com/\nhttp://www.mobilemeridian.com/\n312-560-1574\n\nI recently upgraded my DB from 7.4.3 to 8.0.4 and I've noticed the following errors appearing in my serverlog:\n\n\n\n2005-11-03 05:56:57 CST 127.0.0.1(38858) ERROR:  Unicode characters greater than or equal to 0x10000 are not supported\n\n2005-11-03 06:04:09 CST 127.0.0.1(38954) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xe02d76\n\n2005-11-03 06:04:21 CST 127.0.0.1(38964) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xe02d76\n\n2005-11-03 06:11:35 CST 127.0.0.1(39072) ERROR:  Unicode characters greater than or equal to 0x10000 are not supported\n\n2005-11-03 06:23:23 CST 127.0.0.1(39657) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xd40d\n\n2005-11-03 08:10:02 CST 127.0.0.1(44073) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xe46973\n\n2005-11-03 08:21:13 CST 127.0.0.1(44711) ERROR:  Unicode characters greater than or equal to 0x10000 are not supported\n\n2005-11-03 08:26:36 CST 127.0.0.1(44745) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xc447\n\n2005-11-03 08:40:59 CST 127.0.0.1(45087) ERROR:  invalid byte sequence for encoding \"UNICODE\": 0xdd20\n\n2005-11-03 09:14:52 CST 127.0.0.1(46009) ERROR:  Unicode characters greater than or equal to 0x10000 are not supported\n\n\nI never received these errors on when running 7.4.3.  I used the\ndefault encodings on 7.4.3 and I tried chaning client_encoding from\nsql_ascii to UNICODE and I'm still seeing this. I'm storing in a text\ndata type email that contains other characterset\ncharacters.   
\n\n\nAny ideas on how to resolve this?\n\n\n-Don\n\n--\nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\nhttp://www.MailLaunder.com/\nhttp://www.mobilemeridian.com/\n312-560-1574", "msg_date": "Thu, 3 Nov 2005 16:35:13 -0600", "msg_from": "Don Drake <[email protected]>", "msg_from_op": true, "msg_subject": "Encoding on 8.0.4" } ]
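No reply to this question appears here; the statements below are only the usual first diagnostics for this class of error, not anything established in the thread, and the Latin-1 guess and database name are assumptions.

SHOW server_encoding;
SHOW client_encoding;
SELECT datname, pg_encoding_to_char(encoding) AS encoding FROM pg_database;

-- If the stored mail text is really Latin-1, declaring that lets the server
-- convert it instead of rejecting the byte sequences:
SET client_encoding = 'LATIN1';

-- If the text is of mixed or unknown encodings, another option is a database
-- using the byte-transparent SQL_ASCII encoding:
--   CREATE DATABASE maildb WITH ENCODING 'SQL_ASCII' TEMPLATE template0;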
[ { "msg_contents": "Hello everyone.\n\nWe are facing a performance problem with views consisting of several \nunioned tables. The simplified schema is as follows:\n\nCREATE TABLE foo (\n\tfoo_object_id\tbigint,\n\tlink_id\t\tbigint,\n\tsomedata\ttext,\n\tPRIMARY KEY (foo_object_id) );\n\nCREATE TABLE bar (\n\tbar_object_id\tbigint,\n\tlink_id\t\tbigint,\n\totherdata\treal,\n\tPRIMARY KEY (bar_object_id) );\n\nThere are actually five of such tables, all having two common attributes \n*_object_id and link_id. All tables have indices on link_id, which is \nvery selective, close to unique. The *_object_id is unique within this \nscope across all tables, but that's not important.\n\nThen we have a view:\n\nCREATE VIEW commonview AS\nSELECT foo_object_id as object_id, link_id, 'It is in foo' as loc\nFROM foo\n\nUNION\n\nSELECT bar_object_id as object_id, link_id, 'It is in bar' as loc\nFROM bar\n\nWe commonly do this:\n\nSELECT object_id FROM commonview WHERE link_id=1234567\n\nThe result is sequential scan on all tables, append, sort and then \nfilter scan on this whole thing. Which of course is slow as hell. We use \nversion 8.0.2.\n\nAnd now the question: Is there a way to force the planner to push the \ncondition lower, so it will use the index? Or do you use some tricks in \nthis scenario? Thanks for your suggestions.\n\nBye.\n\n-- \nMichal Tďż˝borskďż˝\nCTO, Internet Mall, a.s.\n\nInternet Mall - obchody, kterďż˝ si oblďż˝bďż˝te\n<http://www.MALL.cz>\n", "msg_date": "Fri, 04 Nov 2005 12:38:30 +0100", "msg_from": "Michal Taborsky <[email protected]>", "msg_from_op": true, "msg_subject": "Searching union views not using indices" }, { "msg_contents": "Michal Taborsky wrote:\n...\n> UNION\n...\n> The result is sequential scan on all tables, append, sort and then \n> filter scan on this whole thing. Which of course is slow as hell. We use \n> version 8.0.2.\n> \n> And now the question: Is there a way to force the planner to push the \n> condition lower, so it will use the index? Or do you use some tricks in \n> this scenario? Thanks for your suggestions.\n\nTry \"UNION ALL\", since UNION is defined as removing duplicates, which \nprobably accounts for the sort.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 04 Nov 2005 15:01:05 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching union views not using indices" }, { "msg_contents": "On Fri, Nov 04, 2005 at 12:38:30PM +0100, Michal Taborsky wrote:\n> SELECT object_id FROM commonview WHERE link_id=1234567\n> \n> The result is sequential scan on all tables, append, sort and then \n> filter scan on this whole thing. Which of course is slow as hell. We use \n> version 8.0.2.\n\nI couldn't duplicate this in 8.0.4; I don't know if anything's\nchanged since 8.0.2 that would affect the query plan. Could you\npost the EXPLAIN ANALYZE output? It might also be useful to see\nthe output with enable_seqscan disabled.\n\nHave the tables been vacuumed and analyzed recently?\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 4 Nov 2005 08:12:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching union views not using indices" }, { "msg_contents": "Michal Taborsky <[email protected]> writes:\n> We are facing a performance problem with views consisting of several \n> unioned tables. 
The simplified schema is as follows:\n\nPerhaps you should show us the real schema, because I cannot duplicate\nyour complaint on the toy case you show.\n\nregression=# explain SELECT object_id FROM commonview WHERE link_id=1234567;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Subquery Scan commonview (cost=41.40..41.66 rows=13 width=8)\n -> Unique (cost=41.40..41.53 rows=13 width=16)\n -> Sort (cost=41.40..41.43 rows=13 width=16)\n Sort Key: object_id, link_id, loc\n -> Append (cost=0.00..41.16 rows=13 width=16)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..17.12 rows=5 width=16)\n -> Index Scan using fooi on foo (cost=0.00..17.07 rows=5 width=16)\n Index Cond: (link_id = 1234567)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24.04 rows=8 width=16)\n -> Index Scan using bari on bar (cost=0.00..23.96 rows=8 width=16)\n Index Cond: (link_id = 1234567)\n(11 rows)\n\n(I had to add indexes on link_id to the example, of course.)\n\nAs noted by others, you probably want to be using UNION ALL not UNION,\nbut that's not the crux of the issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 10:31:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching union views not using indices " }, { "msg_contents": "Tom Lane napsal(a):\n> Michal Taborsky <[email protected]> writes:\n> \n>>We are facing a performance problem with views consisting of several \n>>unioned tables. The simplified schema is as follows:\n> \n> \n> Perhaps you should show us the real schema, because I cannot duplicate\n> your complaint on the toy case you show.\n> As noted by others, you probably want to be using UNION ALL not UNION,\n> but that's not the crux of the issue.\n\nOK. Mystery (sort of) solved. After you told me it works for you I had \nto assume the problem was somewhere else. And, indeed, it was, though \nit's not too obvious.\n\nThe two attributes are actually not of tybe bigint, but of type \n\"crm_object_id\", which is created as follows:\n\nCREATE DOMAIN \"public\".\"crm_object_id\" AS\n bigint NULL;\n\nEverything started working perfectly after I modified the view like this:\n\nCREATE VIEW commonview AS\nSELECT foo_object_id::bigint as object_id, link_id::bigint, 'It is in \nfoo' as loc FROM foo\nUNION\nSELECT bar_object_id::bigint as object_id, link_id::bigint, 'It is in \nbar' as loc FROM bar\n\nNot even modifying the select as this did not help:\n\nexplain SELECT object_id FROM commonview WHERE \nlink_id=1234567::crm_object_id;\n\nIs this a bug or feature?\n\n-- \nMichal Tďż˝borskďż˝\nCTO, Internet Mall, a.s.\n\nInternet Mall - obchody, kterďż˝ si oblďż˝bďż˝te\n<http://www.MALL.cz>\n", "msg_date": "Fri, 04 Nov 2005 16:55:59 +0100", "msg_from": "Michal Taborsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching union views not using indices" }, { "msg_contents": "Michal Taborsky <[email protected]> writes:\n> OK. Mystery (sort of) solved. After you told me it works for you I had \n> to assume the problem was somewhere else. And, indeed, it was, though \n> it's not too obvious.\n\n> The two attributes are actually not of tybe bigint, but of type \n> \"crm_object_id\", which is created as follows:\n\n> CREATE DOMAIN \"public\".\"crm_object_id\" AS\n> bigint NULL;\n\nAh. 
The problem is that the UNION's output column is bigint, and the\ntype discrepancy (bigint above, domain below) discourages the planner\nfrom pushing down the WHERE condition.\n\nThere's a related complaint here:\nhttp://archives.postgresql.org/pgsql-bugs/2005-10/msg00227.php\n\nIf we were to change things so that the result of the UNION were still\nthe domain, not plain bigint, then your example would be optimized the\nway you want. I'm unsure about what other side-effects that would have\nthough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 12:53:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching union views not using indices " } ]
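A condensed, self-contained version of the problem and the cast workaround described above, with names shortened from the thread's schema:

CREATE DOMAIN crm_object_id AS bigint;

CREATE TABLE foo (foo_object_id crm_object_id PRIMARY KEY, link_id crm_object_id);
CREATE TABLE bar (bar_object_id crm_object_id PRIMARY KEY, link_id crm_object_id);
CREATE INDEX foo_link_id ON foo (link_id);
CREATE INDEX bar_link_id ON bar (link_id);

-- Casting the domain columns to plain bigint keeps the UNION's result type
-- consistent, so the planner is then willing to push the WHERE clause down
-- (as reported above); UNION ALL also avoids the sort/unique step:
CREATE VIEW commonview AS
SELECT foo_object_id::bigint AS object_id, link_id::bigint AS link_id, 'foo' AS loc FROM foo
UNION ALL
SELECT bar_object_id::bigint, link_id::bigint, 'bar' FROM bar;

EXPLAIN SELECT object_id FROM commonview WHERE link_id = 1234567;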
[ { "msg_contents": "> You mean something like the attached?\nnot quite: attached is a file to generate test.\nto do it:\n\npsql yadda\n\\i timeit.sql\n\\t\n\\o dump.sql\nselect make_dump(50000, false);\n\\q\ncat dump.sql | psql -q yadda\n\nand see what pops out. I had to do it that way because redirecting psql\nto dump file caused psql sit forever waiting on more with cpu load...\n\nMerlin", "msg_date": "Fri, 4 Nov 2005 08:49:22 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> You mean something like the attached?\n\n> not quite: attached is a file to generate test.\n\n> cat dump.sql | psql -q yadda\n\nAh. Does your psql have readline support? if so, does adding -n to\nthat command change anything?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 09:59:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> > You mean something like the attached?\n\noh, btw I ran timeit.c and performance is flat and fairly fast. I'm\npretty sure psql is the culprit here.\n\nMerlin\n", "msg_date": "Fri, 4 Nov 2005 09:08:00 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> Hello everyone.\n> \n> We are facing a performance problem with views consisting of several\n> unioned tables. The simplified schema is as follows:\n> \n> CREATE TABLE foo (\n> \tfoo_object_id\tbigint,\n> \tlink_id\t\tbigint,\n> \tsomedata\ttext,\n> \tPRIMARY KEY (foo_object_id) );\n\npoint 1:\nwell, you may want to consider:\n\ncreate table foobar\n( \n\tprefix text, -- foo/bar/etc\n object_id\t bigint,\n\tlink_id\t\tbigint,\n\tprimary key(prefix, object_id)\n); -- add indexes as appropriate\n\nand push foo/bar specific information to satellite table which refer\nback via pkey-key link. Now you get very quick and easy link id query\nand no view is necessary. You also may want to look at table\ninheritance but make sure you read all the disclaimers first.\n\npoint 2: \nwatch out for union, it is implied sort and duplicate filter. union all\nis faster although you may get duplicates.\n\nMerlin\n", "msg_date": "Fri, 4 Nov 2005 10:07:53 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching union views not using indices" } ]
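A sketch of the layout suggested above; only the shared table's columns come from the thread, the satellite-table details are illustrative assumptions.

CREATE TABLE foobar (
    prefix    text,              -- 'foo', 'bar', ...
    object_id bigint,
    link_id   bigint,
    PRIMARY KEY (prefix, object_id)
);
CREATE INDEX foobar_link_id ON foobar (link_id);

-- foo-specific data lives in a satellite table that refers back to the shared
-- table by its primary key:
CREATE TABLE foo_extra (
    prefix    text DEFAULT 'foo',
    object_id bigint,
    somedata  text,
    PRIMARY KEY (prefix, object_id),
    FOREIGN KEY (prefix, object_id) REFERENCES foobar (prefix, object_id)
);

-- The selective lookup then needs neither a view nor a UNION:
SELECT prefix, object_id FROM foobar WHERE link_id = 1234567;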
[ { "msg_contents": "> > not quite: attached is a file to generate test.\n> \n> > cat dump.sql | psql -q yadda\n> \n> Ah. Does your psql have readline support? if so, does adding -n to\n> that command change anything?\n> \n\nIt doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I\nremoved all the newlines from the test output (dump.sql) and got flat\ntimes ;). \n\nMerlin\n\n", "msg_date": "Fri, 4 Nov 2005 10:12:23 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I\n> removed all the newlines from the test output (dump.sql) and got flat\n> times ;). \n\nThat's bizarre ... I'd have thought a very long line would be more\nlikely to trigger internal performance problems than the original.\n\nWhat happens if you read the file with \"psql -f dump.sql\" instead\nof cat/stdin?\n\nBTW, I get flat times for your psql test case on Unix, again both with\nlocal and remote client. So whatever is going on here, it's\nWindows-specific.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 10:21:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> That's bizarre ... I'd have thought a very long line would be more\n> likely to trigger internal performance problems than the original.\n> \n> What happens if you read the file with \"psql -f dump.sql\" instead\n> of cat/stdin?\n\nnon-flat. Also ran via \\i and got non flat times.\n\n> BTW, I get flat times for your psql test case on Unix, again both with\n> local and remote client. So whatever is going on here, it's\n> Windows-specific.\n\nyeah. I'm guessing problem is in the mingw flex/bison (which I really,\nreally hope is not the case) or some other win32 specific block of code.\nI'm snooping around there...\n\nMerlin \n", "msg_date": "Fri, 4 Nov 2005 10:31:11 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> yeah. I'm guessing problem is in the mingw flex/bison (which I really,\n> really hope is not the case) or some other win32 specific block of code.\n> I'm snooping around there...\n\nMaybe I'm confused here, but I thought we had established that the local\nand remote cases behave differently for you? If so I'd suppose that it\nmust be a networking issue, and there's little point in looking inside\npsql.\n\nIf the problem is internal to psql, gprof or similar tool would be\nhelpful ... got anything like that on Windows?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 10:36:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> \n> \"Merlin Moncure\" <[email protected]> writes:\n> > yeah. I'm guessing problem is in the mingw flex/bison (which I\nreally,\n> > really hope is not the case) or some other win32 specific block of\ncode.\n> > I'm snooping around there...\n> \n> Maybe I'm confused here, but I thought we had established that the\nlocal\n> and remote cases behave differently for you? If so I'd suppose that\nit\n> must be a networking issue, and there's little point in looking inside\n> psql.\n> \nThe local case is *worse*...presumably because psql is competing with\nthe server for cpu time...cpu load is pegged at 100%. On the remote\ncase, I'm getting 50-60% cpu load which is way too high. The problem is\ndefinitely in psql.\n\nMerlin\n\n", "msg_date": "Fri, 4 Nov 2005 10:41:23 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "ok, here is gprof output from newlines/no newlines \n[newlines]\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 19.03 0.67 0.67 1 0.67 3.20 MainLoop\n 17.61 1.29 0.62 500031 0.00 0.00 yylex\n 15.63 1.84 0.55 1500094 0.00 0.00 GetVariable\n 11.08 2.23 0.39 250018 0.00 0.00 SendQuery\n 4.26 2.38 0.15 750051 0.00 0.00 GetVariableBool\n 3.41 2.50 0.12 250024 0.00 0.00 SetVariable\n 2.56 2.59 0.09 250015 0.00 0.00 gets_fromFile\n 2.27 2.67 0.08 750044 0.00 0.00\nyy_switch_to_buffer\n 2.27 2.75 0.08 500031 0.00 0.00 psql_scan\n 2.27 2.83 0.08 pg_strcasecmp\n 1.70 2.89 0.06 4250078 0.00 0.00 emit\n 1.70 2.95 0.06 500031 0.00 0.00 VariableEquals\n 1.70 3.01 0.06 250018 0.00 0.00 AcceptResult\n 1.42 3.06 0.05 250018 0.00 0.00 ResetCancelConn\n\n[no newlines]\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 23.01 0.26 0.26 250019 0.00 0.00 yylex\n 19.47 0.48 0.22 250018 0.00 0.00 SendQuery\n 11.50 0.61 0.13 1000070 0.00 0.00 GetVariable\n 9.73 0.72 0.11 250042 0.00 0.00 pg_strdup\n 9.73 0.83 0.11 250024 0.00 0.00 SetVariable\n 6.19 0.90 0.07 500039 0.00 0.00 GetVariableBool\n 5.31 0.96 0.06 pg_strcasecmp\n 4.42 1.01 0.05 4250078 0.00 0.00 emit\n 2.65 1.04 0.03 1 0.03 1.01 MainLoop\n\nok, mingw gprof is claiming MainLoop is a culprit here, along with\ngeneral efficiency penalty otherwise in several things (twice many calls\nto yylex, 33%more to getvariable, etc). Just for fun I double checked\nstring len of query input to SendQuery and everything is the right\nlength.\n\nSame # calls to SendQuery, but 2.5 times call time in newlines\ncase...anything jump out? \n\nMerlin\n", "msg_date": "Fri, 4 Nov 2005 11:16:45 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> ok, mingw gprof is claiming MainLoop is a culprit here,\n\nThe only thing I can see that would be different for Windows is the\nSetConsoleCtrlHandler kernel call ... could that be expensive? Why\ndo we have either sigsetjmp or setup_cancel_handler inside the per-line\nloop, rather than just before it?\n\nThere is a lot of stuff in MainLoop that doesn't seem like it really\nneeds to be done on every single line, particularly not the repeated\nfetching of psql variables that couldn't possibly change except inside\nHandleSlashCmds. 
But that all ought to be the same on Unix or Windows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 11:33:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\n\"\"Merlin Moncure\"\" <[email protected]> wrote\n> ok, here is gprof output from newlines/no newlines\n> [newlines]\n> % cumulative self self total\n> time seconds seconds calls s/call s/call name\n> 19.03 0.67 0.67 1 0.67 3.20 MainLoop\n> 17.61 1.29 0.62 500031 0.00 0.00 yylex\n> 15.63 1.84 0.55 1500094 0.00 0.00 GetVariable\n> 11.08 2.23 0.39 250018 0.00 0.00 SendQuery\n> 4.26 2.38 0.15 750051 0.00 0.00 GetVariableBool\n> 3.41 2.50 0.12 250024 0.00 0.00 SetVariable\n> 2.56 2.59 0.09 250015 0.00 0.00 gets_fromFile\n> 2.27 2.67 0.08 750044 0.00 0.00\n> yy_switch_to_buffer\n> 2.27 2.75 0.08 500031 0.00 0.00 psql_scan\n> 2.27 2.83 0.08 pg_strcasecmp\n> 1.70 2.89 0.06 4250078 0.00 0.00 emit\n> 1.70 2.95 0.06 500031 0.00 0.00 VariableEquals\n> 1.70 3.01 0.06 250018 0.00 0.00 AcceptResult\n> 1.42 3.06 0.05 250018 0.00 0.00 ResetCancelConn\n>\n\nMaybe I missed some threads .... do you think it is interesting to test the \n*absoulte* time difference of the same machine on Windows/Linux by using \ntimeit.c? I wonder if windows is slower than Linux ...\n\nRegards,\nQingqing \n\n\n", "msg_date": "Fri, 4 Nov 2005 13:30:34 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "Nailed it.\n\nproblem is in mainloop.c -> setup_cancel_handler. Apparently you can\nhave multiple handlers and windows keeps track of them all, even if they\ndo the same thing. Keeping track of so many system handles would\nnaturally slow the whole process down. Commenting that line times are\nflat as a pancake. I am thinking keeping track of a global flag would\nbe appropriate. \n\n\nMerlin\n\n", "msg_date": "Fri, 4 Nov 2005 12:56:02 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Nailed it.\n\n> problem is in mainloop.c -> setup_cancel_handler. Apparently you can\n> have multiple handlers and windows keeps track of them all, even if they\n> do the same thing. Keeping track of so many system handles would\n> naturally slow the whole process down.\n\nYipes. So we really want to do that only once.\n\nAFAICS it is appropriate to move the sigsetjmp and setup_cancel_handler\ncalls in front of the per-line loop inside MainLoop --- can anyone see\na reason not to?\n\nI'm inclined to treat this as an outright bug, not just a minor\nperformance issue, because it implies that a sufficiently long psql\nscript would probably crash a Windows machine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 13:01:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > Nailed it.\n> \n> > problem is in mainloop.c -> setup_cancel_handler. Apparently you\n> > can have multiple handlers and windows keeps track of them all,\n> > even if they do the same thing. Keeping track of so many system\n> > handles would naturally slow the whole process down.\n> \n> Yipes. So we really want to do that only once.\n> \n> AFAICS it is appropriate to move the sigsetjmp and\n> setup_cancel_handler calls in front of the per-line loop inside\n> MainLoop --- can anyone see a reason not to?\n> \n> I'm inclined to treat this as an outright bug, not just a minor\n> performance issue, because it implies that a sufficiently long psql\n> script would probably crash a Windows machine.\n\nOuch. In light of this, are we *sure* what we've got a is a candidate\nfor release?\n\nCheers,\nD\n-- \nDavid Fetter [email protected] http://fetter.org/\nphone: +1 510 893 6100 mobile: +1 415 235 3778\n\nRemember to vote!\n", "msg_date": "Fri, 4 Nov 2005 10:14:31 -0800", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] insert performance for win32" }, { "msg_contents": "Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > Nailed it.\n> \n> > problem is in mainloop.c -> setup_cancel_handler. Apparently you can\n> > have multiple handlers and windows keeps track of them all, even if they\n> > do the same thing. Keeping track of so many system handles would\n> > naturally slow the whole process down.\n> \n> Yipes. 
So we really want to do that only once.\n> \n> AFAICS it is appropriate to move the sigsetjmp and setup_cancel_handler\n> calls in front of the per-line loop inside MainLoop --- can anyone see\n> a reason not to?\n\nNope.\n\n> I'm inclined to treat this as an outright bug, not just a minor\n> performance issue, because it implies that a sufficiently long psql\n> script would probably crash a Windows machine.\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Nov 2005 13:15:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "David Fetter <[email protected]> writes:\n> On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:\n>> I'm inclined to treat this as an outright bug, not just a minor\n>> performance issue, because it implies that a sufficiently long psql\n>> script would probably crash a Windows machine.\n\n> Ouch. In light of this, are we *sure* what we've got a is a candidate\n> for release?\n\nSure. This problem exists in 8.0.* too. Pre-existing bugs don't\ndisqualify an RC in my mind --- we fix them and move on, same as we\nwould do at any other time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 13:19:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] insert performance for win32 " }, { "msg_contents": "David Fetter wrote:\n> On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:\n> > \"Merlin Moncure\" <[email protected]> writes:\n> > > Nailed it.\n> > \n> > > problem is in mainloop.c -> setup_cancel_handler. Apparently you\n> > > can have multiple handlers and windows keeps track of them all,\n> > > even if they do the same thing. Keeping track of so many system\n> > > handles would naturally slow the whole process down.\n> > \n> > Yipes. So we really want to do that only once.\n> > \n> > AFAICS it is appropriate to move the sigsetjmp and\n> > setup_cancel_handler calls in front of the per-line loop inside\n> > MainLoop --- can anyone see a reason not to?\n> > \n> > I'm inclined to treat this as an outright bug, not just a minor\n> > performance issue, because it implies that a sufficiently long psql\n> > script would probably crash a Windows machine.\n> \n> Ouch. In light of this, are we *sure* what we've got a is a candidate\n> for release?\n\nGood point. It is something we would fix in a minor release, so it\ndoesn't seem worth doing another RC just for that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Nov 2005 13:21:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] insert performance for win32" }, { "msg_contents": "On Fri, 2005-11-04 at 13:21 -0500, Bruce Momjian wrote:\n> David Fetter wrote:\n> > On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:\n> > > I'm inclined to treat this as an outright bug, not just a minor\n> > > performance issue, because it implies that a sufficiently long psql\n> > > script would probably crash a Windows machine.\n> > \n> > Ouch. 
In light of this, are we *sure* what we've got a is a candidate\n> > for release?\n> \n> Good point. It is something we would fix in a minor release, so it\n> doesn't seem worth doing another RC just for that.\n\nWill this be documented in the release notes? If we put unimplemented\nfeatures in TODO, where do we list things we regard as bugs?\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 06 Nov 2005 09:00:05 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] insert performance for win32" }, { "msg_contents": "\n\nSimon Riggs wrote:\n\n>On Fri, 2005-11-04 at 13:21 -0500, Bruce Momjian wrote:\n> \n>\n>>David Fetter wrote:\n>> \n>>\n>>>On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:\n>>> \n>>>\n>>>>I'm inclined to treat this as an outright bug, not just a minor\n>>>>performance issue, because it implies that a sufficiently long psql\n>>>>script would probably crash a Windows machine.\n>>>> \n>>>>\n>>>Ouch. In light of this, are we *sure* what we've got a is a candidate\n>>>for release?\n>>> \n>>>\n>>Good point. It is something we would fix in a minor release, so it\n>>doesn't seem worth doing another RC just for that.\n>> \n>>\n>\n>Will this be documented in the release notes? If we put unimplemented\n>features in TODO, where do we list things we regard as bugs?\n>\n>\n> \n>\n\nNo need, I think. It was patched 2 days ago.\n\ncheers\n\nandrew\n", "msg_date": "Sun, 06 Nov 2005 12:10:06 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] insert performance for win32" } ]
[ { "msg_contents": "> \"Merlin Moncure\" <[email protected]> writes:\n> > Nailed it.\n> \n> > problem is in mainloop.c -> setup_cancel_handler. Apparently you\ncan\n> > have multiple handlers and windows keeps track of them all, even if\nthey\n> > do the same thing. Keeping track of so many system handles would\n> > naturally slow the whole process down.\n> \n> Yipes. So we really want to do that only once.\n> \n> AFAICS it is appropriate to move the sigsetjmp and\nsetup_cancel_handler\n> calls in front of the per-line loop inside MainLoop --- can anyone see\n> a reason not to?\n\nhm. mainloop is re-entrant, right? That means each \\i would reset the\nhandler...what is downside to keeping global flag?\n\n\n> I'm inclined to treat this as an outright bug, not just a minor\ncertainly...\n\n> performance issue, because it implies that a sufficiently long psql\n> script would probably crash a Windows machine.\n\nactually, it's worse than that, it's more of a dos on the whole system,\nas windows will eventually stop granting handles, but there is a good\nchance of side effects on other applications.\n\nMerlin\n", "msg_date": "Fri, 4 Nov 2005 13:07:24 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> AFAICS it is appropriate to move the sigsetjmp and\n>> setup_cancel_handler\n>> calls in front of the per-line loop inside MainLoop --- can anyone see\n>> a reason not to?\n\n> hm. mainloop is re-entrant, right? That means each \\i would reset the\n> handler...what is downside to keeping global flag?\n\nAh, right, and in fact I'd missed the comment at line 325 pointing out\nthat we're relying on the sigsetjmp to be re-executed every time\nthrough. That could be improved on, likely, but not right before a\nrelease.\n\nDoes the flag need to be global? I'm thinking\n\n void\n setup_cancel_handler(void)\n {\n+\tstatic bool done = false;\n+\n+\tif (!done)\n\t \tSetConsoleCtrlHandler(consoleHandler, TRUE);\n+\tdone = true;\n }\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Nov 2005 13:14:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> > I'm inclined to treat this as an outright bug, not just a minor\n> certainly...\n> \n> > performance issue, because it implies that a sufficiently long psql \n> > script would probably crash a Windows machine.\n> \n> actually, it's worse than that, it's more of a dos on the \n> whole system, as windows will eventually stop granting \n> handles, but there is a good chance of side effects on other \n> applications.\n\nDoes it actually use up *handles* there? I don't see anything in the\ndocs that says it should do that - and they usually do document when\nhandles are used. You should be seeing a *huge* increase in system\nhandles very fast if it does, right? \n\nThat said, I definitly agree with calling it a bug :-)\n\n//Magnus\n", "msg_date": "Fri, 4 Nov 2005 19:21:02 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32 " } ]
[ { "msg_contents": "> >> AFAICS it is appropriate to move the sigsetjmp and \n> >> setup_cancel_handler calls in front of the per-line loop inside \n> >> MainLoop --- can anyone see a reason not to?\n> \n> > hm. mainloop is re-entrant, right? That means each \\i \n> would reset the \n> > handler...what is downside to keeping global flag?\n> \n> Ah, right, and in fact I'd missed the comment at line 325 \n> pointing out that we're relying on the sigsetjmp to be \n> re-executed every time through. That could be improved on, \n> likely, but not right before a release.\n> \n> Does the flag need to be global? I'm thinking\n> \n> void\n> setup_cancel_handler(void)\n> {\n> +\tstatic bool done = false;\n> +\n> +\tif (!done)\n> \t \tSetConsoleCtrlHandler(consoleHandler, TRUE);\n> +\tdone = true;\n> }\n> \n\nSeems like a simple enough solution, don't see why it shouldn't work. As\nlong as psql is single-threaded, which it is...\n(Actually, that code seems to re-set done=true on every call which seems\nunnecessary - but that might be optimised away, I guess)\n\n//Magnus\n\n", "msg_date": "Fri, 4 Nov 2005 19:30:32 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] insert performance for win32 " } ]
[ { "msg_contents": "SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING \ncount(*) > 1;\n\nThis is a pretty good example of the place where 8.1 seems to be quite \nbroken. I understand that this query will want to do a full table scan \n(even through v_barcode is indexed). And the table is largish, at 34 \nmillion rows. In the 8.0 world, this took around 4 minutes. With 8.1beta3, \nthis has run for 30 minutes (as I began to write this) and is still going \nstrong.\n\nAnd it behaves differently than I'd expect. Top shows the postmaster \nprocess running the query as using up 99.9 percent of one CPU, while the i/o \nwait time never gets above 3%. vmstat shows the \"block out\" (bo) number \nquite high, 15 to 20 thousand, which also surprises me. \"block in\" is from \n0 to about 2500. iostat shows 15,000 to 20,000 blocks written every 5 \nseconds, while it shows 0 blocks read. There is no other significant \nprocess running on the box. (Apache is running but is not being used here a \n3:00a.m. on Sunday). This is a dual Opteron box with 16 Gb memory and a \n3ware SATA raid runing 64bit SUSE. Something seems badly wrong.\n\nAs I post this, the query is approaching an hour of run time. I've listed \nan explain of the query and my non-default conf parameters below. Please \nadvise on anything I should change or try, or on any information I can \nprovide that could help diagnose this.\n\n\nGroupAggregate (cost=9899282.83..10285434.26 rows=223858 width=15)\n Filter: (count(*) > 1)\n -> Sort (cost=9899282.83..9994841.31 rows=38223392 width=15)\n Sort Key: v_barcode\n -> Seq Scan on lead (cost=0.00..1950947.92 rows=38223392 width=15)\n\nshared_buffers = 50000\nwork_mem = 16384\nmaintenance_work_mem = 16384\nmax_fsm_pages = 100000\nmax_fsm_relations = 5000\nwal_buffers = 32\ncheckpoint_segments = 32\neffective_cache_size = 50000\ndefault_statistics_target = 50\n\n\n", "msg_date": "Sun, 6 Nov 2005 03:55:18 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.1 iss" }, { "msg_contents": "\"PostgreSQL\" <[email protected]> writes:\n> This is a pretty good example of the place where 8.1 seems to be quite \n> broken.\n\nThat's a bit of a large claim on the basis of one data point.\nDid you remember to re-ANALYZE after loading the table into the\nnew database?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Nov 2005 11:31:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 iss " }, { "msg_contents": "\n\"PostgreSQL\" <[email protected]> writes:\n\n...\n> As I post this, the query is approaching an hour of run time. I've listed \n> an explain of the query and my non-default conf parameters below. Please \n> advise on anything I should change or try, or on any information I can \n> provide that could help diagnose this.\n> \n> \n> GroupAggregate (cost=9899282.83..10285434.26 rows=223858 width=15)\n> Filter: (count(*) > 1)\n> -> Sort (cost=9899282.83..9994841.31 rows=38223392 width=15)\n> Sort Key: v_barcode\n> -> Seq Scan on lead (cost=0.00..1950947.92 rows=38223392 width=15)\n> \n> shared_buffers = 50000\n> work_mem = 16384\n...\n\nIt sounds to me like it's doing a large on-disk sort. Increasing work_mem\nshould improve the efficiency. If you increase it enough it might even be able\nto do it in memory, but probably not.\n\nThe shared_buffers is excessive but if you're using the default 8kB block\nsizes then it 400MB of shared pages on a 16GB machine ought not cause\nproblems. 
It might still be worth trying lowering this to 10,000 or so.\n\nIs this a custom build from postgresql.org sources? RPM build? Or is it a BSD\nports or Gentoo build with unusual options?\n\nPerhaps posting actual vmstat and iostat output might help if someone catches\nsomething you didn't see?\n\n-- \ngreg\n\n", "msg_date": "06 Nov 2005 14:24:00 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 iss" }, { "msg_contents": "On Sun, 6 Nov 2005, PostgreSQL wrote:\n\n> SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING \n> count(*) > 1;\n> \n> This is a dual Opteron box with 16 Gb memory and a 3ware SATA raid\n> runing 64bit SUSE. Something seems badly wrong.\n> \n> GroupAggregate (cost=9899282.83..10285434.26 rows=223858 width=15)\n> Filter: (count(*) > 1)\n> -> Sort (cost=9899282.83..9994841.31 rows=38223392 width=15)\n> Sort Key: v_barcode\n> -> Seq Scan on lead (cost=0.00..1950947.92 rows=38223392 width=15)\n\nWhat do the plan look like in 8.0? Since it's so much faster I assume you \nget a different plan.\n\n> shared_buffers = 50000\n> work_mem = 16384\n> maintenance_work_mem = 16384\n> max_fsm_pages = 100000\n> max_fsm_relations = 5000\n> wal_buffers = 32\n> checkpoint_segments = 32\n> effective_cache_size = 50000\n> default_statistics_target = 50\n\nThe effective_cache_size is way too low, only 390M and you have a machine\nwith 16G. Try bumping it to 1000000 (which means almost 8G, how nice it\nwould be to be able to write 8G instead...). It could be set even higher \nbut it's hard for me to know what else your memory is used for.\n\nI don't know if this setting will affect this very query, but it should \nhave a positive effect on a lot of queries.\n\nwork_mem also seems low, but it's hard to suggest a good value on it\nwithout knowing more about how your database is used.\n \n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Mon, 7 Nov 2005 06:44:23 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 iss" }, { "msg_contents": "My most humble apologies to the pg development team (pg_lets?).\n\nI took Greg Stark's advice and set:\n\nshared_buffers = 10000 # was 50000\nwork_mem = 1048576 # 1Gb - was 16384\n\nAlso, I noticed that the EXPLAIN ANALYZE consistently thought reads would \ntake longer than they actually did, so I decreased random_page_cost down to \n1 (the server has a SATA Raid at level 10).\n\nQueries that previously seemed to stall out are still a little slow but \nnothing like before. And I'm seeing a more normal balance of CPU and disk \ni/o while a query is running instead of the high-cpu-low-disk-read situation \nI was seeing before. Concurrency is way up.\n\nI tried a couple of interim sizes for work_mem and so far, the larger the \nbetter (the server has 16Gb). I'll test a little larger size this evening \nand see what it does. Yes, I've read the warning that this is per process.\n\nKudos to you Greg, thanks Luke for your comment (though it seems to disagree \nwith my experience). Also to Dennis, there were not drastic changes in the \nplan between 8.0 and 8.1, it was just the actual execution times.\n\nMartin\n\n\"PostgreSQL\" <[email protected]> wrote in message \nnews:[email protected]...\n> SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING \n> count(*) > 1;\n>\n> This is a pretty good example of the place where 8.1 seems to be quite \n> broken.\n... 
\n\n\n", "msg_date": "Mon, 7 Nov 2005 11:22:16 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.1 iss" }, { "msg_contents": "Am Montag, 7. November 2005 18:22 schrieb PostgreSQL:\n> My most humble apologies to the pg development team (pg_lets?).\n>\n> I took Greg Stark's advice and set:\n>\n> shared_buffers = 10000 # was 50000\n> work_mem = 1048576 # 1Gb - was 16384\n>\n> Also, I noticed that the EXPLAIN ANALYZE consistently thought reads would\n> take longer than they actually did, so I decreased random_page_cost down to\n> 1 (the server has a SATA Raid at level 10).\n\nDon't do that, use 1.5 or 2, setting it to 1 will only work well if you have \nsmall databases fitting completly in memory.\n\n", "msg_date": "Tue, 8 Nov 2005 11:34:19 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 iss" } ]
[ { "msg_contents": "Hi,\n\nI am experiencing very long update queries and I want to know if it\nreasonable to expect them to perform better. \n\nThe query below is running for more than 1.5 hours (5500 seconds) now,\nwhile the rest of the system does nothing (I don't even type or move a\nmouse...).\n\n- Is that to be expected? \n- Is 180-200 tps with ~ 9000 KB (see output iostat below) not low, given\nthe fact that fsync is off? (Note: with bonnie++ I get write\nperformance > 50 MB/sec and read performace > 70 MB/sec with > 2000\nread/write ops /sec?\n- Does anyone else have any experience with the 3Ware RAID controller\n(which is my suspect)?\n- Any good idea how to determine the real botleneck if this is not the\nperformance I can expect?\n\nMy hard- and software:\n\n- PostgreSQL 8.0.3\n- Debian 3.1 (Sarge) AMD64 \n- Dual Opteron \n- 4GB RAM\n- 3ware Raid5 with 5 disks\n\nPieces of my postgresql.conf (All other is default):\nshared_buffers = 7500\nwork_mem = 260096\nfsync=false\neffective_cache_size = 32768\n\n\n\nThe query with explain (amount and orderbedrag_valuta are float8,\nordernummer and ordernumber int4):\n\nexplain update prototype.orders set amount =\nodbc.orders.orderbedrag_valuta from odbc.orders where ordernumber =\nodbc.orders.ordernummer;\n QUERY PLAN\n-----------------------------------------------------------------------------\nHash Join (cost=50994.74..230038.17 rows=1104379 width=466)\n Hash Cond: (\"outer\".ordernumber = \"inner\".ordernummer)\n -> Seq Scan on orders (cost=0.00..105360.68 rows=3991868 width=455)\n -> Hash (cost=48233.79..48233.79 rows=1104379 width=15)\n -> Seq Scan on orders (cost=0.00..48233.79 rows=1104379\nwidth=15)\n\n\nSample output from iostat during query (about avarage):\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nhdc 0.00 0.00 0.00 0 0\nsda 0.00 0.00 0.00 0 0\nsdb 187.13 23.76 8764.36 24 8852\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Sun, 06 Nov 2005 14:30:54 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Performance PG 8.0 on dual opteron / 4GB / 3ware Raid5 / Debian??" }, { "msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> I am experiencing very long update queries and I want to know if it\n> reasonable to expect them to perform better. \n\nDoes that table have any triggers that would fire on the update?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Nov 2005 12:17:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware Raid5 / Debian??" }, { "msg_contents": "On Sun, 2005-11-06 at 12:17 -0500, Tom Lane wrote:\n> Does that table have any triggers that would fire on the update?\nAlas, no trigger, constrainst, foreign keys, indixes (have I forgotten\nsomething?)\n\nAll queries are slow. 
E.g (after vacuum):\n\nselect objectid from prototype.orders\n\nExplain analyse (with PgAdmin):\n\nSeq Scan on orders (cost=0.00..58211.79 rows=1104379 width=40) (actual\ntime=441.971..3252.698 rows=1104379 loops=1)\nTotal runtime: 5049.467 ms\n\nActual execution time: 82163 MS (without getting the data)\n\n \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Sun, 06 Nov 2005 20:33:53 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> Explain analyse (with PgAdmin):\n> ...\n> Total runtime: 5049.467 ms\n> Actual execution time: 82163 MS (without getting the data)\n\nI'm confused --- where's the 82sec figure coming from, exactly?\n\nWe've heard reports of performance issues in PgAdmin with large\nresult sets ... if you do the same query in psql, what happens?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Nov 2005 15:26:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware " }, { "msg_contents": "Hi Tom,\n\nOn Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote:\n> I'm confused --- where's the 82sec figure coming from, exactly?\n>From actually executing the query.\n\n>From PgAdmin:\n\n-- Executing query:\nselect objectid from prototype.orders\n\nTotal query runtime: 78918 ms.\nData retrieval runtime: 188822 ms.\n1104379 rows retrieved.\n\n\n> We've heard reports of performance issues in PgAdmin with large\n> result sets ... if you do the same query in psql, what happens?\njkr@Panoramix:~/postgresql$ time psql muntdev -c \"select objectid from\nprototype.orders\" > output.txt\n\nreal 0m5.554s\nuser 0m1.121s\nsys 0m0.470s\n\n\nNow *I* am confused. What does PgAdmin do more than giving the query to\nthe database?\n\n(BTW: I have repeated both measurements and the numbers above were all\nfrom the last measurement I did and are about average)\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Mon, 07 Nov 2005 05:25:59 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "> Now *I* am confused. What does PgAdmin do more than giving the query to\n> the database?\n\nIt builds it into the data grid GUI object.\n\nChris\n\n", "msg_date": "Mon, 07 Nov 2005 12:37:31 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "On Mon, 2005-11-07 at 12:37 +0800, Christopher Kings-Lynne wrote:\n> > Now *I* am confused. 
What does PgAdmin do more than giving the query to\n> > the database?\n> \n> It builds it into the data grid GUI object.\n\nIs that not the difference between the total query runtime and the data\nretrieval runtime (see below)?\n\n-- Executing query:\nselect objectid from prototype.orders\n\nTotal query runtime: 78918 ms.\nData retrieval runtime: 188822 ms.\n1104379 rows retrieved.\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Mon, 07 Nov 2005 06:04:57 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Hi Christopher,\n\nOn Mon, 2005-11-07 at 12:37 +0800, Christopher Kings-Lynne wrote:\n> > Now *I* am confused. What does PgAdmin do more than giving the query to\n> > the database?\n> \n> It builds it into the data grid GUI object.\nBut my initial question was about a query that does not produce data at\nall (well, a response from the server saying it is finished). I broke\nthat query off after several hours.\n\nI am now running the query from my initial question with psql (now for\n>1 hour, in a transaction, fsyn off).\n\nSome statistics :\n\nuptime:\n06:35:55 up 9:47, 6 users, load average: 7.08, 7.21, 6.08\n\niostat -x -k 1 (this output appears to be representative):\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.00 0.00 0.50 98.51 0.00\n\nDevice:\t\tsda\t\tsdb\n\nrrqm/s\t\t0.00\t\t0.00\nwrqm/s\t\t14.00\t\t611.00\nr/s\t\t0.00\t\t1.00\nw/s\t\t3.00\t\t201.00\nrsec/s\t\t0.00\t\t32.00\nwsec/s\t\t136.00\t\t6680.00\nrkB/s\t\t0.00\t\t16.00\nwkB/s\t\t68.00\t\t3340.00 \navgrq-sz\t45.33\t\t33.23\navgqu-sz\t0.00\t\t145.67\nawait\t\t0.67\t\t767.19\nsvctm\t\t0.67\t\t4.97\n%util\t\t0.20\t\t100.30\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Mon, 07 Nov 2005 06:46:41 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Where are the pg_xlog and data directories with respect to each other?\n From this IOStat it looks like they might be on the same partition,\nwhich is not ideal, and actualy surprising that throughput is this\ngood. You need to seperate pg_xlog and data directories to get any\nkind of reasonable performance. Also don't use RAID 5 - RAID 5 bites,\nno really - it bites. Use multiple RAID 1s, or RAID 10s, you will get\nbetter performance. 50MB/70MB is about the same as you get from a\nsingle disk or a RAID 1.\n\nWe use 2x9506S8MI controlers, and have maintained excellent\nperformance with 2xRAID 10 and 2xRAID 1. Make sure you get the\nfirmware update if you have these controllers though.\n\nAlex Turner\nNetEconomist\n\nOn 11/6/05, Joost Kraaijeveld <[email protected]> wrote:\n> Hi,\n>\n> I am experiencing very long update queries and I want to know if it\n> reasonable to expect them to perform better.\n>\n> The query below is running for more than 1.5 hours (5500 seconds) now,\n> while the rest of the system does nothing (I don't even type or move a\n> mouse...).\n>\n> - Is that to be expected?\n> - Is 180-200 tps with ~ 9000 KB (see output iostat below) not low, given\n> the fact that fsync is off? 
(Note: with bonnie++ I get write\n> performance > 50 MB/sec and read performace > 70 MB/sec with > 2000\n> read/write ops /sec?\n> - Does anyone else have any experience with the 3Ware RAID controller\n> (which is my suspect)?\n> - Any good idea how to determine the real botleneck if this is not the\n> performance I can expect?\n>\n> My hard- and software:\n>\n> - PostgreSQL 8.0.3\n> - Debian 3.1 (Sarge) AMD64\n> - Dual Opteron\n> - 4GB RAM\n> - 3ware Raid5 with 5 disks\n>\n> Pieces of my postgresql.conf (All other is default):\n> shared_buffers = 7500\n> work_mem = 260096\n> fsync=false\n> effective_cache_size = 32768\n>\n>\n>\n> The query with explain (amount and orderbedrag_valuta are float8,\n> ordernummer and ordernumber int4):\n>\n> explain update prototype.orders set amount =\n> odbc.orders.orderbedrag_valuta from odbc.orders where ordernumber =\n> odbc.orders.ordernummer;\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Hash Join (cost=50994.74..230038.17 rows=1104379 width=466)\n> Hash Cond: (\"outer\".ordernumber = \"inner\".ordernummer)\n> -> Seq Scan on orders (cost=0.00..105360.68 rows=3991868 width=455)\n> -> Hash (cost=48233.79..48233.79 rows=1104379 width=15)\n> -> Seq Scan on orders (cost=0.00..48233.79 rows=1104379\n> width=15)\n>\n>\n> Sample output from iostat during query (about avarage):\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> hdc 0.00 0.00 0.00 0 0\n> sda 0.00 0.00 0.00 0 0\n> sdb 187.13 23.76 8764.36 24 8852\n>\n>\n> --\n> Groeten,\n>\n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> e-mail: [email protected]\n> web: www.askesis.nl\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Mon, 7 Nov 2005 09:47:53 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware Raid5 / Debian??" }, { "msg_contents": "Joost,\n\nI've got experience with these controllers and which version do you \nhave. I'd expect to see higher than 50MB/s although I've never tried \nRAID 5\n\nI routinely see closer to 100MB/s with RAID 1+0 on their 9000 series\n\nI would also suggest that shared buffers should be higher than 7500, \ncloser to 30000, and effective cache should be up around 200k\n\nwork_mem is awfully high, remember that this will be given to each \nand every connection and can be more than 1x this number per \nconnection depending on the number of sorts\ndone in the query.\n\nfsync=false ? I'm not even sure why we have this option, but I'd \nnever set it to false.\n\nDave\n\nOn 6-Nov-05, at 8:30 AM, Joost Kraaijeveld wrote:\n\n> Hi,\n>\n> I am experiencing very long update queries and I want to know if it\n> reasonable to expect them to perform better.\n>\n> The query below is running for more than 1.5 hours (5500 seconds) now,\n> while the rest of the system does nothing (I don't even type or move a\n> mouse...).\n>\n> - Is that to be expected?\n> - Is 180-200 tps with ~ 9000 KB (see output iostat below) not low, \n> given\n> the fact that fsync is off? 
(Note: with bonnie++ I get write\n> performance > 50 MB/sec and read performace > 70 MB/sec with > 2000\n> read/write ops /sec?\n> - Does anyone else have any experience with the 3Ware RAID controller\n> (which is my suspect)?\n> - Any good idea how to determine the real botleneck if this is not the\n> performance I can expect?\n>\n> My hard- and software:\n>\n> - PostgreSQL 8.0.3\n> - Debian 3.1 (Sarge) AMD64\n> - Dual Opteron\n> - 4GB RAM\n> - 3ware Raid5 with 5 disks\n>\n> Pieces of my postgresql.conf (All other is default):\n> shared_buffers = 7500\n> work_mem = 260096\n> fsync=false\n> effective_cache_size = 32768\n>\n>\n>\n> The query with explain (amount and orderbedrag_valuta are float8,\n> ordernummer and ordernumber int4):\n>\n> explain update prototype.orders set amount =\n> odbc.orders.orderbedrag_valuta from odbc.orders where ordernumber =\n> odbc.orders.ordernummer;\n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> -------\n> Hash Join (cost=50994.74..230038.17 rows=1104379 width=466)\n> Hash Cond: (\"outer\".ordernumber = \"inner\".ordernummer)\n> -> Seq Scan on orders (cost=0.00..105360.68 rows=3991868 \n> width=455)\n> -> Hash (cost=48233.79..48233.79 rows=1104379 width=15)\n> -> Seq Scan on orders (cost=0.00..48233.79 rows=1104379\n> width=15)\n>\n>\n> Sample output from iostat during query (about avarage):\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> hdc 0.00 0.00 0.00 0 0\n> sda 0.00 0.00 0.00 0 0\n> sdb 187.13 23.76 8764.36 24 8852\n>\n>\n> -- \n> Groeten,\n>\n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> e-mail: [email protected]\n> web: www.askesis.nl\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 14 Nov 2005 18:51:09 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware Raid5 / Debian??" }, { "msg_contents": "Hi Dave,\n\nOn Mon, 2005-11-14 at 18:51 -0500, Dave Cramer wrote:\n> Joost,\n> \n> I've got experience with these controllers and which version do you \n> have. I'd expect to see higher than 50MB/s although I've never tried \n> RAID 5\n> \n> I routinely see closer to 100MB/s with RAID 1+0 on their 9000 series\nOK, than there must be hope.\n\n> I would also suggest that shared buffers should be higher than 7500, \n> closer to 30000, and effective cache should be up around 200k\nIn my current 8.1 situation I use shared_buffers = 40000, \neffective_cache_size = 131072 .\n\n> work_mem is awfully high, remember that this will be given to each \n> and every connection and can be more than 1x this number per \n> connection depending on the number of sorts\n> done in the query.\nI use such a high number because I am the only user querying and my\nqueries do sorted joins etc. \n\n\n> fsync=false ? I'm not even sure why we have this option, but I'd \n> never set it to false.\nI want as much speed as possible for a database conversion that MUST be\nhandled in 1 weekend (it lasts now, with the current speed almost 7\ncenturies. I may be off a millenium). If it fails because of hardware\nproblem (the only reason we want and need fsync?) we will try next\nweekend until it finally goes right. 
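(As an aside on the mass UPDATE quoted earlier in this thread: one way to keep such a one-shot conversion manageable is to run it in key ranges, so each transaction stays small and progress can be checkpointed. The sketch below is untested; it reuses the table and column names from that query, while the index name, the range size, and the assumption that ordernumber values are roughly contiguous are all made up for illustration.)

-- One-off helper index so each batch only touches its own key range
-- (earlier in the thread the table is said to have no indexes yet).
CREATE INDEX orders_ordernumber_idx ON prototype.orders (ordernumber);

-- Run repeatedly with successive ranges: 1..100000, 100001..200000, and so on.
UPDATE prototype.orders
SET amount = odbc.orders.orderbedrag_valuta
FROM odbc.orders
WHERE ordernumber = odbc.orders.ordernummer
  AND ordernumber BETWEEN 1 AND 100000;

-- A plain VACUUM between batches lets the space left by dead row versions be reused.
VACUUM prototype.orders;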
\n\nWhat I can see is that only the *write* performance of *long updates*\n(and not inserts) are slow and they get slower in time: the first few\nthousand go relatively fast, after that PostgreSQL crawls to a halt\n(other \"benchmarks\" like bonnie++ or just dd'ing a big file don't have\nthis behavior).\n\nI did notice that changing the I/O scheduler's nr_request from the\ndefault 128 to 1024 or even 4096 made a remarkable performance\nimprovement. I suspect that experimenting with other I/O schedululers\ncould improve performance. But it is hard to find any useful\ndocumentation about I/O schedulers.\n\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 15 Nov 2005 17:35:42 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Joost,\n\nOn 11/15/05 8:35 AM, \"Joost Kraaijeveld\" <[email protected]> wrote:\n\n> thousand go relatively fast, after that PostgreSQL crawls to a halt\n> (other \"benchmarks\" like bonnie++ or just dd'ing a big file don't have\n> this behavior).\n\nWith RAID5, it could matter a lot what block size you run your ³dd bigfile²\ntest with. You should run ³dd if=/dev/zero of=bigfile bs=8k count=500000²\nfor a 2GB main memory machine, multiply the count by (<your mem>/2GB).\n\nIt is very important with the 3Ware cards to match the driver to the\nfirmware revision.\n \n> I did notice that changing the I/O scheduler's nr_request from the\n> default 128 to 1024 or even 4096 made a remarkable performance\n> improvement. I suspect that experimenting with other I/O schedululers\n> could improve performance. But it is hard to find any useful\n> documentation about I/O schedulers.\n> \nYou could try deadline, there¹s no harm, but I¹ve found that when you reach\nthe point of experimenting with schedulers, you are probably not addressing\nthe real problem.\n> \nOn a 3Ware 9500 with HW RAID5 and 4 or more disks I think you should get\n100MB/s write rate, which is double what Postgres can use. We find that\nPostgres, even with fsync=false, will only run at a net COPY speed of about\n8-12 MB/s, where 12 is the Bizgres number. 8.1 might do 10. But to get the\n10 or 12, the WAL writing and other writing is about 4-5X more than the net\nwrite speed, or the speed at which the input file is parsed and read into\nthe database.\n\nSo, if you can get your ³dd bigfile² test to write data at 50MB/s+ with a\nblocksize of 8KB, you should be doing well enough.\n\nIncidentally, we also find that using the XFS filesystem and setting the\nreadahead to 8MB or more is extremely beneficial for performance with the\n3Ware cards (and with others, but especially for the older 3Ware cards).\n \nRegards,\n\n- Luke\n\n\n\nRe: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware\n\n\nJoost,\n\nOn 11/15/05 8:35 AM, \"Joost Kraaijeveld\" <[email protected]> wrote:\n\nthousand go relatively fast, after that PostgreSQL crawls to a halt\n(other \"benchmarks\" like bonnie++ or just dd'ing a big file don't have\nthis behavior).\n\nWith RAID5, it could matter a lot what block size you run your “dd bigfile” test with.  
You should run “dd if=/dev/zero of=bigfile bs=8k count=500000” for a 2GB main memory machine, multiply the count by (<your mem>/2GB).\n\nIt is very important with the 3Ware cards to match the driver to the firmware revision.\n   \nI did notice that changing the I/O scheduler's nr_request from the\ndefault 128 to 1024 or even 4096 made a remarkable performance\nimprovement. I suspect that experimenting with other I/O schedululers\ncould improve performance. But it is hard to find any useful\ndocumentation about I/O schedulers.\n\nYou could try deadline, there’s no harm, but I’ve found that when you reach the point of experimenting with schedulers, you are probably not addressing the real problem.\n\nOn a 3Ware 9500 with HW RAID5 and 4 or more disks I think you should get 100MB/s write rate, which is double what Postgres can use.  We find that Postgres, even with fsync=false, will only run at a net COPY speed of about 8-12 MB/s, where 12 is the Bizgres number.  8.1 might do 10.  But to get the 10 or 12, the WAL writing and other writing is about 4-5X more than the net write speed, or the speed at which the input file is parsed and read into the database.\n\nSo, if you can get your “dd bigfile” test to write data at 50MB/s+ with a blocksize of 8KB, you should be doing well enough.\n\nIncidentally, we also find that using the XFS filesystem and setting the readahead to 8MB or more is extremely beneficial for performance with the 3Ware cards (and with others, but especially for the older 3Ware cards).\n \nRegards,\n\n- Luke", "msg_date": "Tue, 15 Nov 2005 10:42:22 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Hi Luke,\n\nOn Tue, 2005-11-15 at 10:42 -0800, Luke Lonergan wrote:\n> With RAID5, it could matter a lot what block size you run your “dd\n> bigfile” test with. You should run “dd if=/dev/zero of=bigfile bs=8k\n> count=500000” for a 2GB main memory machine, multiply the count by\n> (<your mem>/2GB).\nIf I understand correctly (I have 4GB ram):\n\njkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n1000000+0 records in\n1000000+0 records out\n8192000000 bytes transferred in 304.085269 seconds (26939812 bytes/sec)\n\nWhich looks suspicious: 26308 MB/sec???\n\n> It is very important with the 3Ware cards to match the driver to the\n> firmware revision.\nOK, I am running 1 driver behind the firmware.\n \n> I did notice that changing the I/O scheduler's nr_request from\n> the\n> default 128 to 1024 or even 4096 made a remarkable performance\n> improvement. I suspect that experimenting with other I/O\n> schedululers\n> could improve performance. But it is hard to find any useful\n> documentation about I/O schedulers.\n> \n> You could try deadline, there’s no harm, but I’ve found that when you\n> reach the point of experimenting with schedulers, you are probably not\n> addressing the real problem.\nIt depends. I/O Schedulers (I assume) have a purpose: some schedulers\nshould be more appropriate for some circumstances. And maybe my specific\ncircumstances (converting a database with *many updates*) is a specific\ncircumstance. I really don't know....\n\n> On a 3Ware 9500 with HW RAID5 and 4 or more disks I think you should\n> get 100MB/s write rate, which is double what Postgres can use. We\n> find that Postgres, even with fsync=false, will only run at a net COPY\n> speed of about 8-12 MB/s, where 12 is the Bizgres number. 8.1 might\n> do 10. 
But to get the 10 or 12, the WAL writing and other writing is\n> about 4-5X more than the net write speed, or the speed at which the\n> input file is parsed and read into the database.\nAs I have an (almost) seperate WAL disk: iostat does not show any\nsignificant writing on the WAL disk....\n\n> So, if you can get your “dd bigfile” test to write data at 50MB/s+\n> with a blocksize of 8KB, you should be doing well enough.\nSee above.\n\n> Incidentally, we also find that using the XFS filesystem and setting\n> the readahead to 8MB or more is extremely beneficial for performance\n> with the 3Ware cards (and with others, but especially for the older\n> 3Ware cards).\nI don't have problems with my read performance but *only* with my\n*update* performance (and not even insert performance). But than again I\nam not the only one with these problems:\n\nhttp://www.issociate.de/board/goto/894541/3ware_+_RAID5_\n+_xfs_performance.html#msg_894541\nhttp://lkml.org/lkml/2005/4/20/110\nhttp://seclists.org/lists/linux-kernel/2005/Oct/1171.html\n\nI am happy to share the tables against which I am running my checks....\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 15 Nov 2005 20:31:56 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Joost Kraaijeveld wrote:\n> If I understand correctly (I have 4GB ram):\n> \n> jkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n> 1000000+0 records in\n> 1000000+0 records out\n> 8192000000 bytes transferred in 304.085269 seconds (26939812 bytes/sec)\n> \n> Which looks suspicious: 26308 MB/sec???\n\nEh? That looks more like ~25.7 MB/sec, assuming 1MB = 1024*1024 bytes.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Tue, 15 Nov 2005 12:41:23 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "On Tue, 2005-11-15 at 12:41 -0700, Steve Wampler wrote:\n> Joost Kraaijeveld wrote:\n> > If I understand correctly (I have 4GB ram):\n> > \n> > jkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n> > 1000000+0 records in\n> > 1000000+0 records out\n> > 8192000000 bytes transferred in 304.085269 seconds (26939812 bytes/sec)\n> > \n> > Which looks suspicious: 26308 MB/sec???\n> \n> Eh? That looks more like ~25.7 MB/sec, assuming 1MB = 1024*1024 bytes.\nOooops. 
This calculation error is not typical for my testing (I think ;-)).\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 15 Nov 2005 20:51:27 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Joost,\n\nOn 11/15/05 11:51 AM, \"Joost Kraaijeveld\" <[email protected]> wrote:\n\n> On Tue, 2005-11-15 at 12:41 -0700, Steve Wampler wrote:\n>> > Joost Kraaijeveld wrote:\n>>> > > If I understand correctly (I have 4GB ram):\n>>> > >\n>>> > > jkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n>>> > > 1000000+0 records in\n>>> > > 1000000+0 records out\n>>> > > 8192000000 bytes transferred in 304.085269 seconds (26939812 bytes/sec)\n>>> > >\n>>> > > Which looks suspicious: 26308 MB/sec???\n>> >\n>> > Eh? That looks more like ~25.7 MB/sec, assuming 1MB = 1024*1024 bytes.\n> Oooops. This calculation error is not typical for my testing (I think ;-)).\n\nSummarizing the two facts of note: the write result is 1/4 of what you\nshould be getting, and you are running 1 driver behind the firmware.\n\nYou might update your driver, rerun the test, and if you still have the slow\nresult, verify that your filesystem isn¹t fragmented (multiple undisciplined\napps on the same filesystem will do that).\n\nWAL on a separate disk, on a separate controller? What is the write\nperformance there?\n\nRegards,\n\n- Luke\n\n\n\nRe: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware\n\n\nJoost,\n\nOn 11/15/05 11:51 AM, \"Joost Kraaijeveld\" <[email protected]> wrote:\n\nOn Tue, 2005-11-15 at 12:41 -0700, Steve Wampler wrote:\n> Joost Kraaijeveld wrote:\n> > If I understand correctly (I have 4GB ram):\n> >\n> > jkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n> > 1000000+0 records in\n> > 1000000+0 records out\n> > 8192000000 bytes transferred in 304.085269 seconds (26939812 bytes/sec)\n> >\n> > Which looks suspicious: 26308 MB/sec???\n>\n> Eh?  That looks more like ~25.7 MB/sec, assuming 1MB = 1024*1024 bytes.\nOooops. This calculation error is not typical for my testing (I think ;-)).\n\nSummarizing the two facts of note: the write result is 1/4 of what you should be getting, and you are running 1 driver behind the firmware.\n\nYou might update your driver, rerun the test, and if you still have the slow result, verify that your filesystem isn’t fragmented (multiple undisciplined apps on the same filesystem will do that).\n\nWAL on a separate disk, on a separate controller?  What is the write performance there?\n\nRegards,\n\n- Luke", "msg_date": "Tue, 15 Nov 2005 22:07:53 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Hi Luke,\n\nOn Tue, 2005-11-15 at 22:07 -0800, Luke Lonergan wrote:\n\n> You might update your driver, \nI will do that (but I admit that I am not looking forward to it. When I\nwas young and did not make money with my computer, I liked challenges\nlike compiling kernels and not being able to boot the computer. Not any\nmore :-)).\n\n\n> \n> WAL on a separate disk, on a separate controller? 
What is the write\n> performance there?\nWAL is on a separate disk and a separate controller, write performance:\n\njkr@Panoramix:/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n1000000+0 records in\n1000000+0 records out\n8192000000 bytes transferred in 166.499230 seconds (49201429 bytes/sec)\n\nThe quest continues...\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Wed, 16 Nov 2005 07:57:48 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Hi Luke,\n\n\n> It is very important with the 3Ware cards to match the driver to the\n> firmware revision.\n> So, if you can get your “dd bigfile” test to write data at 50MB/s+\n> with a blocksize of 8KB, you should be doing well enough.\n\nI recompiled my kernel, added the driver and:\n\njkr@Panoramix:~$ dmesg | grep 3w\n3ware 9000 Storage Controller device driver for Linux v2.26.03.019fw.\nscsi4 : 3ware 9000 Storage Controller\n3w-9xxx: scsi4: Found a 3ware 9000 Storage Controller at 0xfd8ffc00,\nIRQ: 28.\n3w-9xxx: scsi4: Firmware FE9X 2.08.00.005, BIOS BE9X 2.03.01.052, Ports:\n8.\n\n\njkr@Panoramix:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=1000000\n1000000+0 records in\n1000000+0 records out\n8192000000 bytes transferred in 200.982055 seconds (40759858 bytes/sec)\n\nWhich is an remarkable increase in speed (38.9 MB/sec vs 25.7 MB/sec).\n\nThanks for your suggestions.\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Wed, 16 Nov 2005 11:17:05 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "I am mystified by the behavior of \"alarm\" in conjunction with Postgres/perl/DBD. Here is roughly what I'm doing:\n\n eval {\n local $SIG{ALRM} = sub {die(\"Timeout\");};\n $time = gettimeofday;\n alarm 20;\n $sth = $dbh->prepare(\"a query that may take a long time...\");\n $sth->execute();\n alarm 0;\n };\n if ($@ && $@ =~ /Timeout/) {\n my $elapsed = gettimeofday - $time;\n print \"Timed out after $elapsed seconds\";\n }\n\nNow the mystery: It works, but it hardly matters what time I use for the alarm call, the actual alarm event always happens at 26 seconds. I can set \"alarm 1\" or \"alarm 20\", and it almost always hits right at 26 seconds.\n\nNow if I increase alarm to anything in the range of about 25-60 seconds, the actual alarm arrives somewhere around the 90 second mark. It seems as though there are \"windows of opportunity\" for the alarm, and it is ignored until those \"windows\" arrive.\n\nAnyone have a clue what's going on and/or how I can fix it?\n\nA secondary question: It appears that $sth->cancel() is not implemented in the Pg DBD module. Is that true?\n\nThanks,\nCraig\n", "msg_date": "Wed, 16 Nov 2005 12:59:21 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Perl DBD and an alarming problem" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> I am mystified by the behavior of \"alarm\" in conjunction with\n> Postgres/perl/DBD. 
Here is roughly what I'm doing:\n \n> Anyone have a clue what's going on and/or how I can fix it?\n\nNot really, but alarm has never worked well (or claimed to) with\nDBI. The DBI docs recommend trying out Sys::Sigaction:\n\nhttp://search.cpan.org/~lbaxter/Sys-SigAction/\n\n> A secondary question: It appears that $sth->cancel() is not\n> implemented in the Pg DBD module. Is that true?\n\nCorrect. DBD::Pg does not support asynchronous connections. It's\npossible it may in the future, but there has not been much of a\nperceived need. Feel free to enter a request on CPAN:\n\nhttp://rt.cpan.org/NoAuth/Bugs.html?Dist=DBD-Pg\n \nThere may be another way around it, if you can tell us some more\nabout what exactly it is you are trying to do.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200511161830\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFDe8ErvJuQZxSWSsgRAoZ6AJ9h6gV5U7PyLDJIqXLpSB6r7NWaaQCdESSR\nCdNexfvYvSQjOLkEdPXd26U=\n=/W5F\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 16 Nov 2005 23:30:44 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "On Wed, Nov 16, 2005 at 12:59:21PM -0800, Craig A. James wrote:\n> eval {\n> local $SIG{ALRM} = sub {die(\"Timeout\");};\n> $time = gettimeofday;\n> alarm 20;\n> $sth = $dbh->prepare(\"a query that may take a long time...\");\n> $sth->execute();\n> alarm 0;\n> };\n> if ($@ && $@ =~ /Timeout/) {\n> my $elapsed = gettimeofday - $time;\n> print \"Timed out after $elapsed seconds\";\n> }\n> \n> Now the mystery: It works, but it hardly matters what time I use for the \n> alarm call, the actual alarm event always happens at 26 seconds. I can set \n> \"alarm 1\" or \"alarm 20\", and it almost always hits right at 26 seconds.\n\nHigh-level languages' signal handlers don't always work well with\nlow-level libraries. I haven't dug into the Perl source code but\nI'd guess that since only certain things are safe to do in a signal\nhandler, Perl's handler simply sets some kind of state that the\ninterpreter will examine later during normal execution. If you're\nusing only Perl facilities then that probably happens fairly timely,\nbut if you're stuck in a low-level library (e.g., libpq) then you\nmight have to wait until that library returns control to Perl before\nPerl recognizes that a signal occurred.\n\nAs an example, if I run code such as yours with alarm(2) and a query\nthat takes 5 seconds, I see the following in a process trace (from\nktrace/kdump on FreeBSD):\n\n55395 perl 0.000978 CALL poll(0xbfbfe1b8,0x1,0xffffffff)\n55395 perl 1.996629 RET poll -1 errno 4 Interrupted system call\n55395 perl 0.000013 PSIG SIGALRM caught handler=0x281be22c mask=0x0 code=0x0\n55395 perl 0.000050 CALL sigprocmask(0x1,0,0x805411c)\n55395 perl 0.000005 RET sigprocmask 0\n55395 perl 0.000020 CALL sigreturn(0xbfbfde60)\n55395 perl 0.000007 RET sigreturn JUSTRETURN\n55395 perl 0.000019 CALL poll(0xbfbfe1b8,0x1,0xffffffff)\n55395 perl 3.004065 RET poll 1\n55395 perl 0.000024 CALL recvfrom(0x3,0x81c6000,0x4000,0,0,0)\n55395 perl 0.000016 GIO fd 3 read 60 bytes\n\nThe poll() call is interrupted by SIGALRM after 2 seconds but then\nit starts again and doesn't return until the query completes after\nthe remaining 3 seconds. 
Only sometime later does Perl invoke the\nALRM handler I installed, presumably because it can't do so until\nthe low-level code returns control to Perl.\n\nIs there a reason you're using alarm() in the client instead of\nsetting statement_timeout on the server?\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 16 Nov 2005 17:23:45 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "\nThanks for the info on alarm and timeouts, this was a big help. One further comment:\n\nMichael Fuhr wrote:\n>> eval {\n>> local $SIG{ALRM} = sub {die(\"Timeout\");};\n>> $time = gettimeofday;\n>> alarm 20;\n>> $sth = $dbh->prepare(\"a query that may take a long time...\");\n>> $sth->execute();\n>> alarm 0;\n>> };\n>> if ($@ && $@ =~ /Timeout/) {\n>> my $elapsed = gettimeofday - $time;\n>> print \"Timed out after $elapsed seconds\";\n>> }\n>>\n>>Now the mystery: It works, but it hardly matters what time I use for the \n>>alarm call, the actual alarm event always happens at 26 seconds...\n> \n> \n> High-level languages' signal handlers don't always work well with\n> low-level libraries...\n>\n> Is there a reason you're using alarm() in the client instead of\n> setting statement_timeout on the server?\n\nstatement_timeout solved the problem, thanks VERY much for the pointer. To answer your question, I use alarm() because all the books and web references said that was how to do it. :-). I've used alarm() with Perl (with a 3rd-party C lib that had a potential infinite loop) very successfully.\n\nSo thanks for the pointer to statement_timeout. But...\n\nWhen I set statement_timeout in the config file, it just didn't do anything - it never timed out (PG 8.0.3). I finally found in the documentation that I can do \"set statement_timeout = xxx\" from PerlDBI on a per-client basis, and that works.\n\nCraig\n", "msg_date": "Thu, 17 Nov 2005 13:04:21 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "On Thu, Nov 17, 2005 at 01:04:21PM -0800, Craig A. James wrote:\n> When I set statement_timeout in the config file, it just didn't do anything \n> - it never timed out (PG 8.0.3). I finally found in the documentation that \n> I can do \"set statement_timeout = xxx\" from PerlDBI on a per-client basis, \n> and that works.\n\nYou probably shouldn't set statement_timeout on a global basis\nanyway, but did you reload the server after you made the change?\nSetting statement_timeout in postgresql.conf and then reloading the\nserver works here in 8.0.4.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 17 Nov 2005 16:28:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "[Please copy the mailing list on replies.]\n\nOn Thu, Nov 17, 2005 at 05:38:13PM -0800, Craig A. James wrote:\n> >You probably shouldn't set statement_timeout on a global basis\n> >anyway\n> \n> The server is a \"one trick pony\" so setting a global timeout value is \n> actually appropriate.\n\nBeware that statement_timeout also applies to maintenance commands\nlike VACUUM; it might be more appropriate to set per-user timeouts\nwith ALTER USER. If you do set a global timeout then you might\nwant to set a per-user timeout of 0 for database superusers so\nmaintenance activities don't get timed out.\n\n> >... 
but did you reload the server after you made the change?\n> >Setting statement_timeout in postgresql.conf and then reloading the\n> >server works here in 8.0.4.\n> \n> Yes. By \"reload\" I assume you mean restarting it from scratch.\n\nEither a restart or a \"pg_ctl reload\", which sends a SIGHUP to the\nserver. You can effect some changes by sending a signal to a running\nserver without having to restart it entirely.\n\n> In this case, I use\n> \n> /etc/init.d/postgresql restart\n> \n> It definitely had no effect at all. I tried values clear down to 1 \n> millisecond, but the server never timed out for any query.\n\nDid you use \"SHOW statement_timeout\" to see if the value was set\nto what you wanted? Are you sure you edited the right file? As a\ndatabase superuser execute \"SHOW config_file\" to see what file the\nserver is using. What exactly did the line look like after you\nchanged it?\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 17 Nov 2005 20:46:31 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "In a recent thread, several people pointed out that UPDATE = DELETE+INSERT. This got me to wondering.\n\nI have a table that, roughly, looks like this:\n\n create table doc (\n id integer primary key,\n document text,\n keywords tsvector\n );\n\nwhere \"keywords\" has a GIST index. There are about 10 million rows in the table, and an average of 20 keywords per document. I have two questions.\n\nFirst, I occasionally rebuild the keywords, after which the VACUUM FULL ANALYZE takes a LONG time - like 24 hours. Given the UPDATE = DELETE+INSERT, it sounds like I'd be better off with something like this:\n\n create table doc (\n id integer primary key,\n document text,\n );\n create table keywords (\n id integer primary key,\n keywords tsvector\n );\n\nThen I could just drop the GIST index, truncate the keywords table, rebuild the keywords, and reindex. My suspicion is that VACUUM FULL ANALYZE would be quick -- there would be no garbage to collect, so all it would to do is the ANALYZE part.\n\nMy second question: With the doc and keywords split into two tables, would the tsearch2/GIST performance be faster? The second schema's \"keywords\" table has just pure keywords (no documents); does that translate to fewer blocks being read during a tsearch2/GIST query? Or are the \"document\" and \"keywords\" columns of the first schema already stored separately on disk so that the size of the \"document\" data doesn't affect the \"keywords\" search performance?\n\nThanks,\nCraig\n", "msg_date": "Sat, 19 Nov 2005 09:54:23 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Storage/Performance and splitting a table" }, { "msg_contents": "On Sat, Nov 19, 2005 at 09:54:23AM -0800, Craig A. James wrote:\n>First, I occasionally rebuild the keywords, after which the VACUUM FULL \n>ANALYZE takes a LONG time - like 24 hours.\n\nYou know you just need vacuum, not vacuum full, right? \n\nMike Stone\n", "msg_date": "Sat, 19 Nov 2005 14:03:04 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storage/Performance and splitting a table" }, { "msg_contents": "\n>> When I set statement_timeout in the config file, it just didn't\n>> do anything - it never timed out (PG 8.0.3).\n> \n>... but did you reload the server after you [changed statement_timeout]?\n\nMystery solved. 
I have two servers; I was reconfiguring one and restarting the other. Duh.\n\nThanks,\nCraig\n", "msg_date": "Sat, 19 Nov 2005 12:42:50 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl DBD and an alarming problem" }, { "msg_contents": "This article on ZDNet claims that hyperthreading can *hurt* performance, due to contention in the L1/L2 cache by a second process:\n\n http://news.zdnet.co.uk/0,39020330,39237341,00.htm\n\nHas anyone tested this on Postgres yet? (And based on a recent somewhat caustic thread about performance on this forum, I'm going to avoid speculation, and let those who actually *know* answer! ;-)\n\nCraig\n", "msg_date": "Sun, 20 Nov 2005 13:46:10 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Hyperthreading slows processes?" }, { "msg_contents": "Yeah, it's pretty much a known issue for postgres\n\nDave\nOn 20-Nov-05, at 4:46 PM, Craig A. James wrote:\n\n> This article on ZDNet claims that hyperthreading can *hurt* \n> performance, due to contention in the L1/L2 cache by a second process:\n>\n> http://news.zdnet.co.uk/0,39020330,39237341,00.htm\n>\n> Has anyone tested this on Postgres yet? (And based on a recent \n> somewhat caustic thread about performance on this forum, I'm going \n> to avoid speculation, and let those who actually *know* answer! ;-)\n>\n> Craig\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Sun, 20 Nov 2005 20:24:24 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hyperthreading slows processes?" }, { "msg_contents": "So say I need 10,000 tables, but I can create tablespaces. Wouldn't that solve the performance problem caused by Linux's (or ext2/3's) problems with large directories?\n\nFor example, if each user creates (say) 10 tables, and I have 1000 users, I could create 100 tablespaces, and assign groups of 10 users to each tablespace. This would limit each tablespace to 100 tables, and keep the ext2/3 file-system directories manageable.\n\nWould this work? Would there be other problems?\n\nThanks,\nCraig\n", "msg_date": "Thu, 01 Dec 2005 19:47:30 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables" }, { "msg_contents": "On Thu, 1 Dec 2005, Craig A. James wrote:\n\n> So say I need 10,000 tables, but I can create tablespaces. Wouldn't that \n> solve the performance problem caused by Linux's (or ext2/3's) problems with \n> large directories?\n>\n> For example, if each user creates (say) 10 tables, and I have 1000 users, I \n> could create 100 tablespaces, and assign groups of 10 users to each \n> tablespace. This would limit each tablespace to 100 tables, and keep the \n> ext2/3 file-system directories manageable.\n>\n> Would this work? 
Would there be other problems?\n\nThis would definantly help, however there's still the question of how \nlarge the tables get, and how many total files are needed to hold the 100 \ntables.\n\nyou still have the problem of having to seek around to deal with all these \ndifferent files (and tablespaces just spread them further apart), you \ncan't solve this, but a large write-back journal (as opposed to \nmetadata-only) would mask the problem.\n\nit would be a trade-off, you would end up writing all your data twice, so \nthe throughput would be lower, but since the data is safe as soon as it \nhits the journal the latency for any one request would be lower, which \nwould allow the system to use the CPU more and overlap it with your \nseeking.\n\nDavid Lang\n", "msg_date": "Thu, 1 Dec 2005 23:46:55 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables" } ]
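Tying off the statement_timeout discussion in the Perl/DBD sub-thread above, a small illustrative sketch; the role name "webapp" is invented, the superuser may not be called "postgres" on every installation, and the values are examples only (the unit is milliseconds):

-- Per-session timeout, equivalent to what the Perl client sets with "set statement_timeout = xxx".
SET statement_timeout = 20000;
SHOW statement_timeout;

-- Per-role defaults, so no global postgresql.conf change is needed and
-- superuser maintenance commands (VACUUM and friends) are never cut short.
ALTER USER webapp SET statement_timeout = 20000;
ALTER USER postgres SET statement_timeout = 0;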
[ { "msg_contents": "Hello,\n\nI have some strange performance problems with quering a table.It has \n5282864, rows and contains the following columns : id \n,no,id_words,position,senpos and sentence all are integer non null.\n\nIndex on :\n * no\n * no,id_words\n * id_words\n * senpos, sentence, \"no\")\n * d=primary key\n\n\"select count(1) from words_in_text\" takes 9 seconds to compleet.\nThe query 'select * from words_in_text' takes a verry long time to \nreturn the first record (more that 2 minutes) why?\n\nAlso the following query behaves strange.\nselect * from words_in_text where no <100 order by no; \n\nexplain shows that pg is using sequence scan. When i turn of sequence \nscan, index scan is used and is faster. I have a 'Explain verbose \nanalyze' of this query is at the end of the mail.\nThe number of estimated rows is wrong, so I did 'set statistics 1000' on \ncolumn no. After this the estimated number of rows was ok, but pg still \nwas using seq scan.\n\nCan anyone explain why pg is using sequence and not index scan?\n\n\nThe computer is a dell desktop with 768Mb ram. Database on the same \nmachine. I have analyze and vacuum all tables.\nDatabase is 8.0.\n\nThanks\nJeroen\n\n\n\n\nWith enable_seqscan=true\n\n {SORT\n :startup_cost 138632.19\n :total_cost 139441.07\n :plan_rows 323552\n :plan_width 24\n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1\n :restype 23\n :restypmod -1\n :resname id\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 1\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname no\n :ressortgroupref 1\n :resorigtbl 1677903\n :resorigcol 2\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3\n :restype 23\n :restypmod -1\n :resname id_words\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 3\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4\n :restype 23\n :restypmod -1\n :resname position\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 4\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5\n :restype 23\n :restypmod -1\n :resname senpos\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 5\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6\n :restype 23\n :restypmod -1\n :resname sentence\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 6\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n }\n )\n :qual <>\n :lefttree\n {SEQSCAN\n :startup_cost 0.00\n :total_cost 104880.80\n :plan_rows 323552\n :plan_width 24\n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1\n :restype 23\n :restypmod -1\n :resname id\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 1\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n 
:varoattno 1\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname no\n :ressortgroupref 1\n :resorigtbl 1677903\n :resorigcol 2\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3\n :restype 23\n :restypmod -1\n :resname id_words\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 3\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4\n :restype 23\n :restypmod -1\n :resname position\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 4\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5\n :restype 23\n :restypmod -1\n :resname senpos\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 5\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6\n :restype 23\n :restypmod -1\n :resname sentence\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 6\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n }\n )\n :qual (\n {OPEXPR\n :opno 97\n :opfuncid 66\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 100 0 0 0 ]\n }\n )\n }\n )\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :nParamExec 0\n :scanrelid 1\n }\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :nParamExec 0\n :numCols 1\n :sortColIdx 2\n :sortOperators 97\n }\n\n Sort (cost=138632.19..139441.07 rows=323552 width=24) (actual \ntime=7677.614..8479.980 rows=194141 loops=1)\n Sort Key: \"no\"\n -> Seq Scan on words_in_text (cost=0.00..104880.80 rows=323552 \nwidth=24) (actual time=187.118..5761.991 rows=194141 lo\nops=1)\n Filter: (\"no\" < 100)\n Total runtime: 9225.382 ms\n\n\nWith enable_seqscan=false\n\n {INDEXSCAN\n :startup_cost 0.00\n :total_cost 606313.33\n :plan_rows 323552\n :plan_width 24\n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1\n :restype 23\n :restypmod -1\n :resname id\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 1\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname no\n :ressortgroupref 1\n :resorigtbl 1677903\n :resorigcol 2\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3\n :restype 23\n :restypmod -1\n :resname id_words\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 3\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4\n :restype 23\n :restypmod -1\n :resname position\n 
:ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 4\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5\n :restype 23\n :restypmod -1\n :resname senpos\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 5\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6\n :restype 23\n :restypmod -1\n :resname sentence\n :ressortgroupref 0\n :resorigtbl 1677903\n :resorigcol 6\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n }\n )\n :qual <>\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :nParamExec 0\n :scanrelid 1\n :indxid (o 1677911)\n :indxqual ((\n {OPEXPR\n :opno 97\n :opfuncid 66\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 100 0 0 0 ]\n }\n )\n }\n ))\n :indxqualorig ((\n {OPEXPR\n :opno 97\n :opfuncid 66\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 100 0 0 0 ]\n }\n )\n }\n ))\n :indxstrategy ((i 1))\n :indxsubtype ((o 0))\n :indxlossy ((i 0))\n :indxorderdir 1\n }\n\n Index Scan using ind_words_in_text_1 on words_in_text \n(cost=0.00..606313.33 rows=323552 width=24) (actual time=0.208..100\n0.085 rows=194141 loops=1)\n Index Cond: (\"no\" < 100)\n Total runtime: 1733.601 ms\n\n\n\n", "msg_date": "Sun, 06 Nov 2005 19:02:39 +0100", "msg_from": "Jeroen van Iddekinge <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem with pg8.0" }, { "msg_contents": "Jeroen van Iddekinge wrote:\n> Hello,\n> \n> I have some strange performance problems with quering a table.It has \n> 5282864, rows and contains the following columns : id \n> ,no,id_words,position,senpos and sentence all are integer non null.\n> \n> Index on :\n> * no\n> * no,id_words\n> * id_words\n> * senpos, sentence, \"no\")\n> * d=primary key\n> \n> \"select count(1) from words_in_text\" takes 9 seconds to compleet.\n\nBecause it's reading through the whole table. See mailing list archives \nfor discussion of why it doesn't just use an index.\n\n> The query 'select * from words_in_text' takes a verry long time to \n> return the first record (more that 2 minutes) why?\n\nA long time for the first row, hardly any time for the others. That's \nbecause it assembles all the rows and returns them at the same time. If \nyou don't want all the rows at once use a cursor.\n\n> Also the following query behaves strange.\n> select * from words_in_text where no <100 order by no;\n> explain shows that pg is using sequence scan. When i turn of sequence \n> scan, index scan is used and is faster. I have a 'Explain verbose \n> analyze' of this query is at the end of the mail.\n\nIt's just the \"explain analyze\" that's needed - the \"verbose\" gives far \nmore detail than you'll want at this stage.\n\n> The number of estimated rows is wrong, so I did 'set statistics 1000' on \n> column no. 
After this the estimated number of rows was ok, but pg still \n> was using seq scan.\n\nI don't see the correct row estimate - it looks like it's getting it \nwrong again to me.\n\n> Can anyone explain why pg is using sequence and not index scan?\n\nThere's one of two reasons:\n1. It thinks it's going to fetch more rows than it does.\n2. It has the relative costs of a seq-scan vs index accesses wrong.\n\nCan you try an \"EXPLAIN ANALYZE\" of\n select * from words_in_text where no < 100 AND no >= 0 order by no;\nSubstitute whatever lower bound is sensible for \"no\". Let's see if that \ngives the system a clue.\n\nThen, we'll need to look at your other tuning settings. Have you made \nany changes to your postgresql.conf settings, in particular those \nmentioned here:\n http://www.powerpostgresql.com/PerfList\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 07 Nov 2005 09:46:38 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with pg8.0" } ]
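A minimal SQL sketch of the follow-up steps discussed in this thread, using the words_in_text table and the "no" column it describes. The bounded-range query is the one Richard proposes and the statistics target of 1000 is the value the poster already tried; the random_page_cost hint and the cursor fetch size are illustrative assumptions, not advice taken from the thread.

  -- Re-collect statistics for the filtered column, then compare plans.
  ALTER TABLE words_in_text ALTER COLUMN "no" SET STATISTICS 1000;
  ANALYZE words_in_text;

  -- Plan with sequential scans allowed, on the bounded-range form suggested above.
  EXPLAIN ANALYZE
    SELECT * FROM words_in_text WHERE "no" < 100 AND "no" >= 0 ORDER BY "no";

  -- Same query with seqscan disabled for this session only, to compare costs.
  SET enable_seqscan = off;
  EXPLAIN ANALYZE
    SELECT * FROM words_in_text WHERE "no" < 100 AND "no" >= 0 ORDER BY "no";
  RESET enable_seqscan;

  -- If the index plan keeps winning, lowering random_page_cost (default 4) is a
  -- gentler lever than disabling seqscan globally (illustrative value).
  -- SET random_page_cost = 2;

  -- For the "first row takes minutes" symptom: fetch through a cursor so rows
  -- stream to the client instead of being assembled all at once.
  BEGIN;
  DECLARE wit CURSOR FOR SELECT * FROM words_in_text;
  FETCH 100 FROM wit;
  CLOSE wit;
  COMMIT;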
[ { "msg_contents": "Greg,\r\n\r\nIncreasing memory actually slows down the current sort performance.\r\n\r\nWe're working on a fix for this now in bizgres.\r\n\r\nLuke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] <[email protected]>\r\nTo: PostgreSQL <[email protected]>\r\nCC: [email protected] <[email protected]>\r\nSent: Sun Nov 06 14:24:00 2005\r\nSubject: Re: [PERFORM] 8.1 iss\r\n\r\n\r\n\"PostgreSQL\" <[email protected]> writes:\r\n\r\n...\r\n> As I post this, the query is approaching an hour of run time. I've listed \r\n> an explain of the query and my non-default conf parameters below. Please \r\n> advise on anything I should change or try, or on any information I can \r\n> provide that could help diagnose this.\r\n> \r\n> \r\n> GroupAggregate (cost=9899282.83..10285434.26 rows=223858 width=15)\r\n> Filter: (count(*) > 1)\r\n> -> Sort (cost=9899282.83..9994841.31 rows=38223392 width=15)\r\n> Sort Key: v_barcode\r\n> -> Seq Scan on lead (cost=0.00..1950947.92 rows=38223392 width=15)\r\n> \r\n> shared_buffers = 50000\r\n> work_mem = 16384\r\n...\r\n\r\nIt sounds to me like it's doing a large on-disk sort. Increasing work_mem\r\nshould improve the efficiency. If you increase it enough it might even be able\r\nto do it in memory, but probably not.\r\n\r\nThe shared_buffers is excessive but if you're using the default 8kB block\r\nsizes then it 400MB of shared pages on a 16GB machine ought not cause\r\nproblems. It might still be worth trying lowering this to 10,000 or so.\r\n\r\nIs this a custom build from postgresql.org sources? RPM build? Or is it a BSD\r\nports or Gentoo build with unusual options?\r\n\r\nPerhaps posting actual vmstat and iostat output might help if someone catches\r\nsomething you didn't see?\r\n\r\n-- \r\ngreg\r\n\r\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 2: Don't 'kill -9' the postmaster\r\n\r\n", "msg_date": "Sun, 6 Nov 2005 15:19:02 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.1 iss" } ]
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Joost Kraaijeveld\n> Sent: 07 November 2005 04:26\n> To: Tom Lane\n> Cc: Pgsql-Performance\n> Subject: Re: [PERFORM] Performance PG 8.0 on dual opteron / \n> 4GB / 3ware\n> \n> Hi Tom,\n> \n> On Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote:\n> > I'm confused --- where's the 82sec figure coming from, exactly?\n> >From actually executing the query.\n> \n> >From PgAdmin:\n> \n> -- Executing query:\n> select objectid from prototype.orders\n> \n> Total query runtime: 78918 ms.\n> Data retrieval runtime: 188822 ms.\n> 1104379 rows retrieved.\n> \n> \n> > We've heard reports of performance issues in PgAdmin with large\n> > result sets ... if you do the same query in psql, what happens?\n> jkr@Panoramix:~/postgresql$ time psql muntdev -c \"select objectid from\n> prototype.orders\" > output.txt\n> \n> real 0m5.554s\n> user 0m1.121s\n> sys 0m0.470s\n> \n> \n> Now *I* am confused. What does PgAdmin do more than giving \n> the query to\n> the database?\n\nNothing - it just uses libpq's pqexec function. The speed issue in\npgAdmin is rendering the results in the grid which can be slow on some\nOS's due to inefficiencies in some grid controls with large data sets.\nThat's why we give 2 times - the first is the query runtime on the\nserver, the second is data retrieval and rendering (iirc, it's been a\nwhile).\n\nRegards, Dave\n", "msg_date": "Mon, 7 Nov 2005 08:51:10 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Hi Dave,\n\nOn Mon, 2005-11-07 at 08:51 +0000, Dave Page wrote: \n> > On Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote:\n> > > I'm confused --- where's the 82sec figure coming from, exactly?\n> > >From actually executing the query.\n> > \n> > >From PgAdmin:\n> > \n> > -- Executing query:\n> > select objectid from prototype.orders\n> > \n> > Total query runtime: 78918 ms.\n> > Data retrieval runtime: 188822 ms.\n> > 1104379 rows retrieved.\n> > \n> > \n> > > We've heard reports of performance issues in PgAdmin with large\n> > > result sets ... if you do the same query in psql, what happens?\n> > jkr@Panoramix:~/postgresql$ time psql muntdev -c \"select objectid from\n> > prototype.orders\" > output.txt\n> > \n> > real 0m5.554s\n> > user 0m1.121s\n> > sys 0m0.470s\n> > \n> > \n> > Now *I* am confused. What does PgAdmin do more than giving \n> > the query to\n> > the database?\n> \n> Nothing - it just uses libpq's pqexec function. The speed issue in\n> pgAdmin is rendering the results in the grid which can be slow on some\n> OS's due to inefficiencies in some grid controls with large data sets.\n> That's why we give 2 times - the first is the query runtime on the\n> server, the second is data retrieval and rendering (iirc, it's been a\n> while).\nThat is what I thought, but what could explain the difference in query\nruntime (78 seconds versus 5 seconds) ?\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Mon, 07 Nov 2005 10:02:59 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" }, { "msg_contents": "Dave Page wrote:\n\n>>\n>>\n>>Now *I* am confused. 
What does PgAdmin do more than giving \n>>the query to\n>>the database?\n> \n> \n> Nothing - it just uses libpq's pqexec function. The speed issue in\n> pgAdmin is rendering the results in the grid which can be slow on some\n> OS's due to inefficiencies in some grid controls with large data sets.\n> That's why we give 2 times - the first is the query runtime on the\n> server, the second is data retrieval and rendering (iirc, it's been a\n> while).\n\nyrnc.\nQuery runtime includes data transfer to the client, i.e. until libpq \nreturns the set, second time is retrieving data from libpq and rendering.\n\nRegards,\n", "msg_date": "Mon, 07 Nov 2005 17:45:31 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" } ]
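A small sketch of separating server execution time from transfer and rendering, using the query from this thread. \timing is a psql meta-command, so the first two statements are meant for a psql session.

  -- psql reports elapsed time that includes transferring the result set:
  \timing
  SELECT objectid FROM prototype.orders;

  -- EXPLAIN ANALYZE runs the query on the server and discards the rows, so its
  -- "Total runtime" leaves out client-side transfer and grid rendering:
  EXPLAIN ANALYZE SELECT objectid FROM prototype.orders;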
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: Joost Kraaijeveld [mailto:[email protected]] \n> Sent: 07 November 2005 09:03\n> To: Dave Page\n> Cc: Tom Lane; Pgsql-Performance\n> Subject: RE: [PERFORM] Performance PG 8.0 on dual opteron / \n> 4GB / 3ware\n> \n> > Nothing - it just uses libpq's pqexec function. The speed issue in\n> > pgAdmin is rendering the results in the grid which can be \n> slow on some\n> > OS's due to inefficiencies in some grid controls with large \n> data sets.\n> > That's why we give 2 times - the first is the query runtime on the\n> > server, the second is data retrieval and rendering (iirc, \n> it's been a\n> > while).\n> That is what I thought, but what could explain the difference in query\n> runtime (78 seconds versus 5 seconds) ?\n\nNot in terms of our code - we obviously do a little more than just run\nthe query, but I can't spot anything in there that should be\nnon-constant time.\n\nDon't suppose it's anything as simple as you vacuuming in between is it?\n\nRegards, Dave\n", "msg_date": "Mon, 7 Nov 2005 09:17:12 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance PG 8.0 on dual opteron / 4GB / 3ware" } ]
[ { "msg_contents": "Christian Paul B. Cosinas wrote:\n> Does Creating Temporary table in a function and NOT dropping them affects\n> the performance of the database?\n\nThe system will drop it automatically, so it shouldn't affect.\n\nWhat _could_ be affecting you if you execute that function a lot, is\naccumulated bloat in pg_class, pg_attribute, or other system catalogs.\nYou may want to make sure these are vacuumed often.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 7 Nov 2005 08:07:20 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Temporary Table" }, { "msg_contents": "Does Creating Temporary table in a function and NOT dropping them affects\nthe performance of the database?\n\n \n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\n\n\n\n\n\nDoes Creating Temporary\ntable in a function and NOT dropping them affects the performance of the\ndatabase?\n \n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html", "msg_date": "Mon, 7 Nov 2005 15:35:49 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Temporary Table" }, { "msg_contents": "Alvaro Herrera wrote:\n> Christian Paul B. Cosinas wrote:\n> \n>> Does Creating Temporary table in a function and NOT dropping them affects\n>> the performance of the database?\n>> \n>\n> The system will drop it automatically, so it shouldn't affect.\n>\n> What _could_ be affecting you if you execute that function a lot, is\n> accumulated bloat in pg_class, pg_attribute, or other system catalogs.\n> You may want to make sure these are vacuumed often.\n>\n> \nThe answer in my experience is a very loud YES YES YES\n\nIf you use lots of temporary tables you will grow and dirty your system \ncatalogs, so you need to be vacuuming them regularly also (pg_call, \npg_attribute) Otherwise your db will slow to a crawl after a while.\n\nRalph\n\n", "msg_date": "Tue, 08 Nov 2005 10:40:01 +1300", "msg_from": "Ralph Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Temporary Table" } ]
[ { "msg_contents": "Here are the configuration of our database server:\n\tport = 5432\n\tmax_connections = 300\n\tsuperuser_reserved_connections = 10\n\tauthentication_timeout = 60\t\n\tshared_buffers = 48000 \n\tsort_mem = 32168\n\tsync = false\n\nDo you think this is enough? Or can you recommend a better configuration for\nmy server?\n\nThe server is also running PHP and Apache but wer'e not using it\nextensively. For development purpose only. \n\nThe database slow down is occurring most of the time (when the memory free\nis low) I don't think it has something to do with vacuum. We only have a\nfull server vacuum once a day.\n\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]]\nSent: Monday, October 24, 2005 3:14 AM\nTo: Christian Paul B. Cosinas\nCc: [email protected]\nSubject: Re: [PERFORM] Used Memory\n> \n> \n> I just noticed that as long as the free memory in the first row (which \n> is 55036 as of now) became low, the slower is the response of the \n> database server.\n> \n\nAlso, how about posting your postgresql.conf (or just the non-default\nparameters) to this list?\n\n\n\nSome other stuff that could be relevant:\n\n- Is the machine just a database server, or does it run (say) Apache + Php?\n- When the slowdown is noticed, does this coincide with certain activities -\ne.g, backup , daily maintenance, data load(!) etc.\n\n\nregards\n\nMark\n\n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\n\nNope, not me either.\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n", "msg_date": "Mon, 7 Nov 2005 15:34:41 -0000", "msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Used Memory" } ]